Title: Generalized Kernel Thinning

URL Source: https://arxiv.org/html/2110.01593


License: CC BY 4.0. arXiv:2110.01593v8 [stat.ML] 21 Jan 2025

Generalized Kernel Thinning

Raaz Dwivedi¹, Lester Mackey²
¹ Department of Computer Science, Harvard University, and Department of EECS, MIT
² Microsoft Research New England
raaz@mit.edu, lmackey@microsoft.com

Abstract

The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses a probability distribution more effectively than independent sampling by targeting a reproducing kernel Hilbert space (RKHS) and leveraging a less smooth square-root kernel. Here we provide four improvements. First, we show that KT applied directly to the target RKHS yields tighter, dimension-free guarantees for any kernel, any distribution, and any fixed function in the RKHS. Second, we show that, for analytic kernels like Gaussian, inverse multiquadric, and sinc, target KT admits maximum mean discrepancy (MMD) guarantees comparable to or better than those of square-root KT without making explicit use of a square-root kernel. Third, we prove that KT with a fractional power kernel yields better-than-Monte-Carlo MMD guarantees for non-smooth kernels, like Laplace and Matérn, that do not have square-roots. Fourth, we establish that KT applied to a sum of the target and power kernels (a procedure we call KT+) simultaneously inherits the improved MMD guarantees of power KT and the tighter individual function guarantees of target KT. In our experiments with target KT and KT+, we witness significant improvements in integration error even in 100 dimensions and when compressing challenging differential equation posteriors.

1 Introduction

A core task in probabilistic inference is learning a compact representation of a probability distribution $\mathbb{P}$. This problem is usually solved by sampling points $x_1,\dots,x_n$ independently from $\mathbb{P}$ or, if direct sampling is intractable, generating $n$ points from a Markov chain converging to $\mathbb{P}$. The benefit of these approaches is that they provide asymptotically exact sample estimates $\mathbb{P}_{\mathrm{in}} f \triangleq \frac{1}{n}\sum_{i=1}^{n} f(x_i)$ for intractable expectations $\mathbb{P}f \triangleq \mathbb{E}_{X\sim\mathbb{P}}[f(X)]$. However, they also suffer from a serious drawback: the learned representations are unnecessarily large, requiring $n$ points to achieve $|\mathbb{P}f - \mathbb{P}_{\mathrm{in}} f| = \Theta(n^{-\frac{1}{2}})$ integration error. These inefficient representations quickly become prohibitive for expensive downstream tasks and function evaluations: for example, in computational cardiology, each function evaluation $f(x_i)$ initiates a heart or tissue simulation that consumes thousands of CPU hours (Niederer et al., 2011; Augustin et al., 2016; Strocchi et al., 2020).

To reduce the downstream computational burden, a standard practice is to thin the initial sample by retaining only every $t$-th sample point (Owen, 2017). Unfortunately, standard thinning often results in a substantial loss of accuracy: for example, thinning an i.i.d. or fast-mixing Markov chain sample from $n$ points to $n^{\frac{1}{2}}$ points increases integration error from $\Theta(n^{-\frac{1}{2}})$ to $\Theta(n^{-\frac{1}{4}})$.
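Both rates are easy to reproduce empirically. The following sketch (our illustration, not from the paper) estimates $\mathbb{P}f = 1$ for $f(x) = x^2$ under a standard normal $\mathbb{P}$, comparing the full $n$-point i.i.d. estimate with a standard-thinned subset of about $n^{1/2}$ points:

```python
import numpy as np

def estimation_errors(n, trials=200, seed=0):
    """Mean absolute error of the full n-point estimate of P f vs. a
    sqrt(n)-point standard-thinned estimate, for f(x) = x^2 under
    P = N(0, 1), where the true expectation is P f = 1."""
    rng = np.random.default_rng(seed)
    t = int(np.sqrt(n))
    full_err, thin_err = 0.0, 0.0
    for _ in range(trials):
        x = rng.standard_normal(n)
        f = x ** 2
        full_err += abs(f.mean() - 1.0)           # Theta(n^{-1/2}) error
        thin_err += abs(f[::t][:t].mean() - 1.0)  # ~sqrt(n) points kept: Theta(n^{-1/4})
    return full_err / trials, thin_err / trials

full_err, thin_err = estimation_errors(10_000)
```

With $n = 10{,}000$, the thinned estimate uses only $100$ points and its average error is roughly an order of magnitude larger than the full-sample error, in line with the $n^{-1/4}$ versus $n^{-1/2}$ rates above.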

The recent kernel thinning (KT) algorithm of Dwivedi & Mackey (2021) addresses this issue by producing thinned coresets with better-than-i.i.d. integration error in a reproducing kernel Hilbert space (RKHS, Berlinet & Thomas-Agnan, 2011). Given a target kernel $\mathbf{k}$ and a suitable sequence of input points $\mathcal{S}_{\mathrm{in}} = (x_i)_{i=1}^{n}$ approximating $\mathbb{P}$, KT returns a subsequence $\mathcal{S}_{\mathrm{out}}$ of $n^{\frac{1}{2}}$ points with better-than-i.i.d. maximum mean discrepancy (MMD, Gretton et al., 2012),

$$\operatorname{MMD}_{\mathbf{k}}(\mathbb{P},\mathbb{P}_{\mathrm{out}}) \triangleq \sup_{\|f\|_{\mathbf{k}} \le 1} |\mathbb{P}f - \mathbb{P}_{\mathrm{out}} f| \quad\text{for}\quad \mathbb{P}_{\mathrm{out}} \triangleq \tfrac{1}{n^{1/2}} \sum_{x \in \mathcal{S}_{\mathrm{out}}} \boldsymbol{\delta}_x, \tag{1}$$

where $\|\cdot\|_{\mathbf{k}}$ denotes the norm for the RKHS $\mathcal{H}$ associated with $\mathbf{k}$. That is, the KT output admits $o(n^{-\frac{1}{4}})$ worst-case integration error across the unit ball of $\mathcal{H}$.

KT achieves its improvement with high probability using non-uniform randomness and a less smooth square-root kernel $\mathbf{k}_{\mathrm{rt}}$ satisfying

$$\mathbf{k}(x,y) = \int_{\mathbb{R}^d} \mathbf{k}_{\mathrm{rt}}(x,z)\,\mathbf{k}_{\mathrm{rt}}(z,y)\,dz. \tag{2}$$
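For intuition, identity (2) can be checked numerically in one dimension: up to a constant factor, Gauss$(\sigma/\sqrt{2})$ acts as a square-root kernel for Gauss$(\sigma)$. This is an illustrative check of ours, not a construction from the paper:

```python
import numpy as np

def gauss1d(x, y, sigma):
    """Gauss(sigma) kernel on the real line."""
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def conv_ratio(x, y, sigma=1.0):
    """Approximate the integral over z of k_rt(x, z) k_rt(z, y) dz, divided by
    k(x, y), for k = Gauss(sigma) and k_rt = Gauss(sigma / sqrt(2)).
    An identity of the form (2) predicts the same constant for every (x, y)."""
    z = np.linspace(-20.0, 20.0, 40001)
    rt = sigma / np.sqrt(2)
    integrand = gauss1d(x, z, rt) * gauss1d(z, y, rt)
    integral = integrand.sum() * (z[1] - z[0])  # Riemann sum on a fine grid
    return integral / gauss1d(x, y, sigma)

# The ratio equals sqrt(pi * sigma^2 / 2) regardless of the pair (x, y),
# so Gauss(sigma / sqrt(2)) is a square-root kernel up to constant rescaling.
r1, r2 = conv_ratio(0.0, 0.5), conv_ratio(-1.0, 2.0)
```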

When the input points are sampled i.i.d. or from a fast-mixing Markov chain on $\mathbb{R}^d$, Dwivedi & Mackey prove that the KT output has, with high probability, $\mathcal{O}_d(n^{-\frac{1}{2}}\log n)$ $\operatorname{MMD}_{\mathbf{k}}$ error for $\mathbb{P}$ and $\mathbf{k}_{\mathrm{rt}}$ with bounded support, $\mathcal{O}_d(n^{-\frac{1}{2}}(\log^{d+1} n\,\log\log n)^{\frac{1}{2}})$ $\operatorname{MMD}_{\mathbf{k}}$ error for $\mathbb{P}$ and $\mathbf{k}_{\mathrm{rt}}$ with light tails, and $\mathcal{O}_d(n^{-\frac{1}{2}+\frac{d}{2\rho}}\sqrt{\log n\,\log\log n})$ $\operatorname{MMD}_{\mathbf{k}}$ error for $\mathbb{P}$ and $\mathbf{k}_{\mathrm{rt}}$ with $\rho \ge 2d$ moments. Meanwhile, an i.i.d. coreset of the same size suffers $\Omega(n^{-\frac{1}{4}})$ $\operatorname{MMD}_{\mathbf{k}}$. We refer to the original KT algorithm as root KT hereafter.

Our contributions   In this work, we offer four improvements over the original KT algorithm. First, we show in Sec. 2.1 that a generalization of KT that uses only the target kernel $\mathbf{k}$ provides a tighter $\mathcal{O}(n^{-\frac{1}{2}}\sqrt{\log n})$ integration error guarantee for each function $f$ in the RKHS. This target KT guarantee (a) applies to any kernel $\mathbf{k}$ on any domain (even kernels that do not admit a square-root and kernels defined on non-Euclidean spaces), (b) applies to any target distribution $\mathbb{P}$ (even heavy-tailed $\mathbb{P}$ not covered by root KT guarantees), and (c) is dimension-free, eliminating the exponential dimension dependence and $(\log n)^{d/2}$ factors of prior root KT guarantees.

Second, we prove in Sec. 2.2 that, for analytic kernels, like Gaussian, inverse multiquadric (IMQ), and sinc, target KT admits MMD guarantees comparable to or better than those of Dwivedi & Mackey (2021) without making explicit use of a square-root kernel. Third, we establish in Sec. 3 that generalized KT with a fractional $\alpha$-power kernel $\mathbf{k}_\alpha$ yields improved MMD guarantees for kernels that do not admit a square-root, like Laplace and non-smooth Matérn. Fourth, we show in Sec. 3 that, remarkably, applying generalized KT to a sum of $\mathbf{k}$ and $\mathbf{k}_\alpha$—a procedure we call kernel thinning+ (KT+)—simultaneously inherits the improved MMD of power KT and the dimension-free individual function guarantees of target KT.

In Sec. 4, we use our new tools to generate substantially compressed representations of both i.i.d. samples in dimensions $d = 2$ through $100$ and Markov chain Monte Carlo samples targeting challenging differential equation posteriors. In line with our theory, we find that target KT and KT+ significantly improve both single function integration error and MMD, even for kernels without fast-decaying square-roots.

| Kernel | Parameter range | $\mathbf{k}(x,y)$ with $z = x - y$ |
| --- | --- | --- |
| Gauss$(\sigma)$ | $\sigma > 0$ | $\exp\!\big({-\tfrac{\|z\|_2^2}{2\sigma^2}}\big)$ |
| Laplace$(\sigma)$ | $\sigma > 0$ | $\exp\!\big({-\tfrac{\|z\|_2}{\sigma}}\big)$ |
| Matérn$(\nu,\gamma)$ | $\nu > \tfrac{d}{2}$, $\gamma > 0$ | $c_{\nu-\frac{d}{2}}\,(\gamma\|z\|_2)^{\nu-\frac{d}{2}}\, K_{\nu-\frac{d}{2}}(\gamma\|z\|_2)$ |
| IMQ$(\nu,\gamma)$ | $\nu > 0$, $\gamma > 0$ | $\big(1 + \|z\|_2^2/\gamma^2\big)^{-\nu}$ |
| sinc$(\theta)$ | $\theta \neq 0$ | $\prod_{j=1}^{d} \frac{\sin(\theta z_j)}{\theta z_j}$ |
| B-spline$(2\beta{+}1,\gamma)$ | $\beta \in \mathbb{N}$ | $\mathfrak{B}_{2\beta+2}^{-d} \prod_{j=1}^{d} h_\beta(\gamma z_j)$ |

Table 1: Common kernels $\mathbf{k}(x,y)$ on $\mathbb{R}^d$ with $z = x - y$. In each case, $\|\mathbf{k}\|_\infty = 1$. Here, $c_a \triangleq \frac{2^{1-a}}{\Gamma(a)}$, $K_a$ is the modified Bessel function of the third kind of order $a$ (Wendland, 2004, Def. 5.10), $h_\beta$ is the recursive convolution of $2\beta + 2$ copies of $\mathbf{1}_{[-\frac{1}{2},\frac{1}{2}]}$, and $\mathfrak{B}_\beta \triangleq \frac{1}{(\beta-1)!} \sum_{j=0}^{\lfloor \beta/2 \rfloor} (-1)^j \binom{\beta}{j} \big(\tfrac{\beta}{2} - j\big)^{\beta-1}$.

Related work   For bounded $\mathbf{k}$, both i.i.d. samples (Tolstikhin et al., 2017, Prop. A.1) and thinned geometrically ergodic Markov chains (Dwivedi & Mackey, 2021, Prop. 1) deliver $n^{\frac{1}{2}}$ points with $\mathcal{O}(n^{-\frac{1}{4}})$ MMD with high probability. The online Haar strategy of Dwivedi et al. (2019) and low discrepancy quasi-Monte Carlo methods (see, e.g., Hickernell, 1998; Novak & Wozniakowski, 2010; Dick et al., 2013) provide improved $\mathcal{O}_d(n^{-\frac{1}{2}}\log^d n)$ MMD guarantees but are tailored specifically to the uniform distribution on $[0,1]^d$. Alternative coreset constructions for more general $\mathbb{P}$ include kernel herding (Chen et al., 2010), discrepancy herding (Harvey & Samadi, 2014), super-sampling with a reservoir (Paige et al., 2016), support points convex-concave procedures (Mak & Joseph, 2018), greedy sign selection (Karnin & Liberty, 2019, Sec. 3.1), Stein point MCMC (Chen et al., 2019), and Stein thinning (Riabiz et al., 2020a). While some admit better-than-i.i.d. MMD guarantees for finite-dimensional kernels on $\mathbb{R}^d$ (Chen et al., 2010; Harvey & Samadi, 2014), none apart from KT are known to provide better-than-i.i.d. MMD or integration error for the infinite-dimensional kernels covered in this work. The lower bounds of Phillips & Tai (2020, Thm. 3.1) and Tolstikhin et al. (2017, Thm. 1) respectively establish that any procedure outputting $n^{\frac{1}{2}}$-sized coresets and any procedure estimating $\mathbb{P}$ based only on $n$ i.i.d. sample points must incur $\Omega(n^{-\frac{1}{2}})$ MMD in the worst case. Our guarantees in Sec. 2 match these lower bounds up to logarithmic factors.

Notation   We define the norm $\|\mathbf{k}\|_\infty \triangleq \sup_{x,y} |\mathbf{k}(x,y)|$ and the shorthand $[n] \triangleq \{1,\dots,n\}$, $\mathbb{R}_+ \triangleq \{x \in \mathbb{R} : x \ge 0\}$, $\mathbb{N}_0 \triangleq \mathbb{N} \cup \{0\}$, $\mathcal{B}_{\mathbf{k}} \triangleq \{f \in \mathcal{H} : \|f\|_{\mathbf{k}} \le 1\}$, and $\mathcal{B}_2(r) \triangleq \{y \in \mathbb{R}^d : \|y\|_2 \le r\}$. We write $a \precsim b$ and $a \succsim b$ to mean $a = \mathcal{O}(b)$ and $a = \Omega(b)$, use $\precsim_d$ when masking constants dependent on $d$, and write $a = \mathcal{O}_P(b)$ to mean $a/b$ is bounded in probability. For any distribution $\mathbb{Q}$ and point sequences $\mathcal{S}, \mathcal{S}'$ with empirical distributions $\mathbb{Q}_n, \mathbb{Q}'_n$, we define $\operatorname{MMD}_{\mathbf{k}}(\mathbb{Q},\mathcal{S}) \triangleq \operatorname{MMD}_{\mathbf{k}}(\mathbb{Q},\mathbb{Q}_n)$ and $\operatorname{MMD}_{\mathbf{k}}(\mathcal{S},\mathcal{S}') \triangleq \operatorname{MMD}_{\mathbf{k}}(\mathbb{Q}_n,\mathbb{Q}'_n)$.

2 Generalized Kernel Thinning

Our generalized kernel thinning algorithm (Alg. 1) for compressing an input point sequence $\mathcal{S}_{\mathrm{in}} = (x_i)_{i=1}^{n}$ proceeds in two steps, kt-split and kt-swap, detailed in App. A. First, given a thinning parameter $m$ and an auxiliary kernel $\mathbf{k}_{\mathrm{split}}$, kt-split divides the input sequence into $2^m$ candidate coresets of size $n/2^m$ using non-uniform randomness. Next, given a target kernel $\mathbf{k}$, kt-swap selects the candidate coreset with the smallest $\operatorname{MMD}_{\mathbf{k}}$ to $\mathcal{S}_{\mathrm{in}}$ and iteratively improves that coreset by exchanging coreset points for input points whenever the swap leads to reduced $\operatorname{MMD}_{\mathbf{k}}$. When $\mathbf{k}_{\mathrm{split}}$ is a square-root kernel $\mathbf{k}_{\mathrm{rt}}$ of $\mathbf{k}$, generalized KT recovers the original root KT algorithm of Dwivedi & Mackey. In this section, we establish performance guarantees for more general $\mathbf{k}_{\mathrm{split}}$ with special emphasis on the practical choice $\mathbf{k}_{\mathrm{split}} = \mathbf{k}$. Like root KT, for any $m$, generalized KT has time complexity dominated by $\mathcal{O}(n^2)$ evaluations of $\mathbf{k}_{\mathrm{split}}$ and $\mathbf{k}$ and $\mathcal{O}(n \min(d, n))$ space complexity from storing either $\mathcal{S}_{\mathrm{in}}$ or the kernel matrices $(\mathbf{k}_{\mathrm{split}}(x_i,x_j))_{i,j=1}^{n}$ and $(\mathbf{k}(x_i,x_j))_{i,j=1}^{n}$.

Algorithm 1: Generalized Kernel Thinning – return coreset of size $\lfloor n/2^m \rfloor$ with small $\operatorname{MMD}_{\mathbf{k}}$

Input: split kernel $\mathbf{k}_{\mathrm{split}}$, target kernel $\mathbf{k}$, point sequence $\mathcal{S}_{\mathrm{in}} = (x_i)_{i=1}^{n}$, thinning parameter $m \in \mathbb{N}$, probabilities $(\delta_i)_{i=1}^{\lfloor n/2 \rfloor}$

1. $(\mathcal{S}^{(m,\ell)})_{\ell=1}^{2^m} \leftarrow$ kt-split$\big(\mathbf{k}_{\mathrm{split}}, \mathcal{S}_{\mathrm{in}}, m, (\delta_i)_{i=1}^{\lfloor n/2 \rfloor}\big)$  // Split $\mathcal{S}_{\mathrm{in}}$ into $2^m$ candidate coresets of size $\lfloor \frac{n}{2^m} \rfloor$
2. $\mathcal{S}_{\mathrm{KT}} \leftarrow$ kt-swap$\big(\mathbf{k}, \mathcal{S}_{\mathrm{in}}, (\mathcal{S}^{(m,\ell)})_{\ell=1}^{2^m}\big)$  // Select best coreset and iteratively refine
3. return coreset $\mathcal{S}_{\mathrm{KT}}$ of size $\lfloor n/2^m \rfloor$

2.1 Single function guarantees for kt-split
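To make the two-step structure concrete, here is a heavily simplified sketch in the spirit of Alg. 1. This is our illustration only: the real kt-split uses kernel-dependent non-uniform randomness rather than the uniform subsampling below, and kt-swap is reduced to a single greedy refinement pass.

```python
import numpy as np

def mmd_sq_to_input(K, idx):
    """Squared MMD_k between the full input and the coreset indexed by idx,
    where K is the n x n kernel matrix of the input points."""
    return K[np.ix_(idx, idx)].mean() - 2 * K[:, idx].mean() + K.mean()

def simplified_kt(K, m, rng):
    """Toy generalized KT: random halvings stand in for kt-split, followed
    by one greedy kt-swap-style refinement pass."""
    n = K.shape[0]
    size = n // 2 ** m
    # "kt-split" stand-in: 2^m candidate coresets via uniform subsampling
    candidates = [rng.choice(n, size=size, replace=False) for _ in range(2 ** m)]
    # kt-swap step 1: keep the candidate closest to the input in MMD_k
    best = min(candidates, key=lambda idx: mmd_sq_to_input(K, idx)).copy()
    # kt-swap step 2: swap in any input point that lowers MMD_k
    for j in range(size):
        current = mmd_sq_to_input(K, best)
        for cand in range(n):
            trial = best.copy()
            trial[j] = cand
            if mmd_sq_to_input(K, trial) < current:
                best, current = trial, mmd_sq_to_input(K, trial)
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 2))  # 64 input points in 2 dimensions
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2)  # Gauss(1) matrix
coreset = simplified_kt(K, m=3, rng=rng)  # compress 64 -> 8 points
```

The greedy pass mirrors the swap criterion of kt-swap (accept an exchange only when it reduces $\operatorname{MMD}_{\mathbf{k}}$), so the returned coreset is never worse than the best candidate it starts from.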

We begin by analyzing the quality of the kt-split coresets. Our first main result, proved in App. B, bounds the kt-split integration error for any fixed function in the RKHS $\mathcal{H}_{\mathrm{split}}$ generated by $\mathbf{k}_{\mathrm{split}}$.

Theorem 1 (Single function guarantees for kt-split)

Consider kt-split (Alg. 2) with oblivious $\mathcal{S}_{\mathrm{in}}$ and $(\delta_i)_{i=1}^{n/2}$ and $\delta_\star \triangleq \min_i \delta_i$. If $\frac{n}{2^m} \in \mathbb{N}$, then, for any fixed $f \in \mathcal{H}_{\mathrm{split}}$, index $\ell \in [2^m]$, and scalar $\delta' \in (0,1)$, the output coreset $\mathcal{S}^{(m,\ell)}$ with $\mathbb{P}_{\mathrm{split}}^{(\ell)} \triangleq \frac{1}{n/2^m} \sum_{x \in \mathcal{S}^{(m,\ell)}} \boldsymbol{\delta}_x$ satisfies

$$|\mathbb{P}_{\mathrm{in}} f - \mathbb{P}_{\mathrm{split}}^{(\ell)} f| \le \|f\|_{\mathbf{k}_{\mathrm{split}}} \cdot \sigma_m \sqrt{2 \log\big(\tfrac{2}{\delta'}\big)} \quad\text{for}\quad \sigma_m \triangleq \frac{2^m}{n} \sqrt{\tfrac{4}{3}\, \|\mathbf{k}_{\mathrm{split}}\|_{\infty,\mathrm{in}} \log\big(\tfrac{6 m 2^m}{\delta_\star}\big)} \tag{3}$$

with probability at least $p_{\mathrm{sg}} \triangleq 1 - \delta' - \sum_{j=1}^{m} \frac{2^{j-1}}{m} \sum_{i=1}^{n/2^j} \delta_i$. Here, $\|\mathbf{k}_{\mathrm{split}}\|_{\infty,\mathrm{in}} \triangleq \max_{x \in \mathcal{S}_{\mathrm{in}}} \mathbf{k}_{\mathrm{split}}(x,x)$.

Remark 1 (Guarantees for known and oblivious stopping times)

By Dwivedi & Mackey (2021, App. D), the success probability $p_{\mathrm{sg}}$ is at least $1 - \delta$ if we set $\delta' = \frac{\delta}{2}$ and $\delta_i = \frac{\delta}{n}$ for a stopping time $n$ known a priori, or $\delta_i = \frac{m \delta}{2^{m+2} (i+1) \log_2(i+1)}$ for an arbitrary oblivious stopping time $n$.

When compressing heavily from $n$ to $n^{\frac{1}{2}}$ points, Thm. 1 and Rem. 1 guarantee $\mathcal{O}(n^{-\frac{1}{2}}\sqrt{\log n})$ integration error with high probability for any fixed function $f \in \mathcal{H}_{\mathrm{split}}$. This represents a near-quadratic improvement over the $\Omega(n^{-\frac{1}{4}})$ integration error of $n^{\frac{1}{2}}$ i.i.d. points. Moreover, this guarantee applies to any kernel defined on any space, including unbounded kernels on unbounded domains (e.g., energy distance (Sejdinovic et al., 2013) and Stein kernels (Oates et al., 2017; Chwialkowski et al., 2016; Liu et al., 2016; Gorham & Mackey, 2017)); kernels with slowly decaying square-roots (e.g., sinc kernels); and non-smooth kernels without square-roots (e.g., Laplace, Matérn with $\nu \in (\frac{d}{2}, d]$, and the compactly supported kernels of Wendland (2004) with $s < \frac{d+1}{2}$). In contrast, the MMD guarantees of Dwivedi & Mackey covered only bounded, smooth $\mathbf{k}$ on $\mathbb{R}^d$ with bounded, Lipschitz, and rapidly-decaying square-roots. In addition, for $\|\mathbf{k}\|_\infty = 1$ on $\mathbb{R}^d$, the MMD bounds of Dwivedi & Mackey feature exponential dimension dependence of the form $c^d$ or $(\log n)^{d/2}$, while the Thm. 1 guarantee is dimension-free and hence practically relevant even when $d$ is large relative to $n$.

Thm. 1 also guarantees better-than-i.i.d. integration error for any target distribution with $|\mathbb{P}f - \mathbb{P}_{\mathrm{in}} f| = o(n^{-\frac{1}{4}})$. In contrast, the MMD improvements of Dwivedi & Mackey (2021, cf. Tab. 2) applied only to $\mathbb{P}$ with at least $2d$ moments. Finally, when kt-split is applied with a square-root kernel $\mathbf{k}_{\mathrm{split}} = \mathbf{k}_{\mathrm{rt}}$, Thm. 1 still yields integration error bounds for $f \in \mathcal{H}$, as $\mathcal{H} \subseteq \mathcal{H}_{\mathrm{split}}$. However, relative to target kt-split guarantees with $\mathbf{k}_{\mathrm{split}} = \mathbf{k}$, the error bounds are inflated by a multiplicative factor of $\sqrt{\tfrac{\|\mathbf{k}_{\mathrm{rt}}\|_{\infty,\mathrm{in}}}{\|\mathbf{k}\|_{\infty,\mathrm{in}}}} \cdot \tfrac{\|f\|_{\mathbf{k}_{\mathrm{rt}}}}{\|f\|_{\mathbf{k}}}$. In App. H, we show that this inflation factor is at least $1$ for each kernel explicitly analyzed in Dwivedi & Mackey (2021) and grows exponentially in dimension for Gaussian and Matérn kernels, unlike the dimension-free target kt-split bounds.

Finally, if we run kt-split with the perturbed kernel $\mathbf{k}'_{\mathrm{split}}$ defined in Cor. 1, then we simultaneously obtain $\mathcal{O}(n^{-\frac{1}{2}}\sqrt{\log n})$ integration error for $f \in \mathcal{H}_{\mathrm{split}}$, near-i.i.d. $\mathcal{O}(n^{-\frac{1}{4}}\sqrt{\log n})$ integration error for arbitrary bounded $f$ outside of $\mathcal{H}_{\mathrm{split}}$, and intermediate, better-than-i.i.d. $o(n^{-\frac{1}{4}})$ integration error for smoother $f$ outside of $\mathcal{H}_{\mathrm{split}}$ (by interpolation). We prove this guarantee in App. C.

Corollary 1 (Guarantees for functions outside of $\mathcal{H}_{\mathrm{split}}$)

Consider extending each input point $x_i$ with the standard basis vector $e_i \in \mathbb{R}^n$ and running kt-split (Alg. 2) on $\mathcal{S}'_{\mathrm{in}} = ((x_i, e_i))_{i=1}^{n}$ with $\mathbf{k}'_{\mathrm{split}}((x,w),(y,v)) = \frac{\mathbf{k}_{\mathrm{split}}(x,y)}{\|\mathbf{k}_{\mathrm{split}}\|_\infty} + \langle w, v \rangle$ for $w, v \in \mathbb{R}^n$. Under the notation and assumptions of Thm. 1, for any fixed index $\ell \in [2^m]$, scalar $\delta' \in (0,1)$, and $f$ defined on $\mathcal{S}_{\mathrm{in}}$, we have, with probability at least $p_{\mathrm{sg}}$,

$$|\mathbb{P}_{\mathrm{in}} f - \mathbb{P}_{\mathrm{split}}^{(\ell)} f| \le \min\Big(\sqrt{\tfrac{n}{2^m}}\, \|f\|_{\infty,\mathrm{in}},\ \sqrt{\|\mathbf{k}_{\mathrm{split}}\|_\infty}\, \|f\|_{\mathbf{k}_{\mathrm{split}}}\Big)\, \frac{2^m}{n} \sqrt{8 \log\big(\tfrac{2}{\delta'}\big) \cdot \log\big(\tfrac{8 m 2^m}{\delta_\star}\big)}. \tag{4}$$

2.2 MMD guarantee for target KT

Our second main result bounds the $\operatorname{MMD}_{\mathbf{k}}$ of (1)—the worst-case integration error across the unit ball of $\mathcal{H}$—for generalized KT applied to the target kernel, i.e., $\mathbf{k}_{\mathrm{split}} = \mathbf{k}$. The proof of this result in App. D is based on Thm. 1 and an appropriate covering number for the unit ball $\mathcal{B}_{\mathbf{k}}$ of the $\mathbf{k}$ RKHS.

Definition 1 ($\mathbf{k}$ covering number)

For a set $\mathcal{A}$ and scalar $\varepsilon > 0$, we define the $\mathbf{k}$ covering number $\mathcal{N}_{\mathbf{k}}(\mathcal{A},\varepsilon)$ with $\mathcal{M}_{\mathbf{k}}(\mathcal{A},\varepsilon) \triangleq \log \mathcal{N}_{\mathbf{k}}(\mathcal{A},\varepsilon)$ as the minimum cardinality of a set $\mathcal{C} \subset \mathcal{B}_{\mathbf{k}}$ satisfying

$$\mathcal{B}_{\mathbf{k}} \subseteq \bigcup_{h \in \mathcal{C}} \Big\{ g \in \mathcal{B}_{\mathbf{k}} : \sup_{x \in \mathcal{A}} |h(x) - g(x)| \le \varepsilon \Big\}. \tag{5}$$

Theorem 2 (MMD guarantee for target KT)

Consider generalized KT (Alg. 1) with ๐ค split

๐ค , oblivious ๐’ฎ in and ( ๐›ฟ ๐‘– ) ๐‘–

1 โŒŠ ๐‘› / 2 โŒ‹ , and ๐›ฟ โ‹† โ‰œ min ๐‘– โก ๐›ฟ ๐‘– . If ๐‘› 2 ๐‘š โˆˆ โ„• , then for any ๐›ฟ โ€ฒ โˆˆ ( 0 , 1 ) , the output coreset ๐’ฎ KT is of size ๐‘› 2 ๐‘š and satisfies

MMD ๐ค โก ( ๐’ฎ in , ๐’ฎ KT ) โ‰ค inf ๐œ€ โˆˆ ( 0 , 1 ) , ๐’ฎ in โŠ‚ ๐’œ 2 โข ๐œ€ + 2 ๐‘š ๐‘› โ‹… 8 3 โข โ€– ๐ค โ€– โˆž , in โข log โก ( 6 โข ๐‘š 2 ๐‘š โข ๐›ฟ โ‹† ) โ‹… [ log โก ( 4 ๐›ฟ โ€ฒ ) + โ„ณ ๐ค โข ( ๐’œ , ๐œ€ ) ]

(6)

with probability at least ๐‘ sg , where โ€– ๐ค โ€– โˆž , in and ๐‘ sg were defined in Thm. 1.

When compressing heavily from ๐‘› to ๐‘› points, Thm. 2 and Rem. 1 with ๐œ€

โ€– ๐ค โ€– โˆž , in ๐‘› and ๐’œ

โ„ฌ 2 โข ( โ„œ in ) for โ„œ in โ‰œ max ๐‘ฅ โˆˆ ๐’ฎ in โก โ€– ๐‘ฅ โ€– 2 guarantee

MMD ๐ค โก ( ๐’ฎ in , ๐’ฎ KT ) โ‰พ ๐›ฟ โ€– ๐ค โ€– โˆž , in โข log โก ๐‘› ๐‘› โ‹… โ„ณ ๐ค โข ( โ„ฌ 2 โข ( โ„œ in ) , โ€– ๐ค โ€– โˆž , in ๐‘› )

(7)

with high probability. Thus we immediately obtain an MMD guarantee for any kernel ๐ค with a covering number bound. Furthermore, we readily obtain a comparable guarantee for โ„™ since MMD ๐ค โก ( โ„™ , ๐’ฎ KT ) โ‰ค MMD ๐ค โก ( โ„™ , ๐’ฎ in ) + MMD ๐ค โก ( ๐’ฎ in , ๐’ฎ KT ) . Any of a variety of existing algorithms can be used to generate an input point sequence ๐’ฎ in with MMD ๐ค โก ( โ„™ , ๐’ฎ in ) no larger than the compression bound 7, including i.i.d. sampling (Tolstikhin et al., 2017, Thm. A.1), geometric MCMC (Dwivedi & Mackey, 2021, Prop. 1), kernel herding (Lacoste-Julien et al., 2015, Thm. G.1), Stein points (Chen et al., 2018, Thm. 2), Stein point MCMC (Chen et al., 2019, Thm. 1), greedy sign selection (Karnin & Liberty, 2019, Sec. 3.1), and Stein thinning (Riabiz et al., 2020a, Thm. 1).

2.3 Consequences of Thm. 2

Tab. 2 summarizes the MMD guarantees of Thm. 2 under common growth conditions on the log covering number $\mathcal{M}_{\mathbf{k}}$ and the input point radius $\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}} \triangleq \max_{x \in \mathcal{S}_{\mathrm{in}}} \|x\|_2$. In Prop. 2 and the RKHS covering number bounds of App. J, we show that analytic kernels, like Gaussian, inverse multiquadric (IMQ), and sinc, have LogGrowth $\mathcal{M}_{\mathbf{k}}$ (i.e., $\mathcal{M}_{\mathbf{k}}(\mathcal{B}_2(r),\varepsilon) \precsim_d r^d \log^\omega(\tfrac{1}{\varepsilon})$) while finitely differentiable kernels (like Matérn and B-spline) have PolyGrowth $\mathcal{M}_{\mathbf{k}}$ (i.e., $\mathcal{M}_{\mathbf{k}}(\mathcal{B}_2(r),\varepsilon) \precsim_d r^d \varepsilon^{-\omega}$).

Our conditions on $\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}}$ arise from four forms of target distribution tail decay: (1) Compact ($\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}} \precsim_d 1$), (2) SubGauss ($\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}} \precsim_d \sqrt{\log n}$), (3) SubExp ($\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}} \precsim_d \log n$), and (4) HeavyTail ($\mathfrak{R}_{\mathcal{S}_{\mathrm{in}}} \precsim_d n^{1/\rho}$). The first setting arises with a compactly supported $\mathbb{P}$ (e.g., on the unit cube $[0,1]^d$), and the other three settings arise in expectation and with high probability when $\mathcal{S}_{\mathrm{in}}$ is generated i.i.d. from $\mathbb{P}$ with sub-Gaussian tails, sub-exponential tails, or $\rho$ moments respectively.

Substituting these conditions into (7) yields the eight entries of Tab. 2. We find that, for LogGrowth $\mathcal{M}_{\mathbf{k}}$, target KT MMD is within log factors of the $\Omega(n^{-1/2})$ lower bounds of Sec. 1 for light-tailed $\mathbb{P}$ and is $o(n^{-1/4})$ (i.e., better than i.i.d.) for any distribution with $\rho > 2d$ moments. Meanwhile, for PolyGrowth $\mathcal{M}_{\mathbf{k}}$, target KT MMD is $o(n^{-1/4})$ whenever $\omega < 1$ for light-tailed $\mathbb{P}$ or whenever $\mathbb{P}$ has $\rho > 2d/(1-\omega)$ moments.

| $\mathcal{M}_{\mathbf{k}}$ growth | Compact $\mathbb{P}$ ($\mathfrak{R}_{\mathrm{in}} \precsim_d 1$) | SubGauss $\mathbb{P}$ ($\mathfrak{R}_{\mathrm{in}} \precsim_d \sqrt{\log n}$) | SubExp $\mathbb{P}$ ($\mathfrak{R}_{\mathrm{in}} \precsim_d \log n$) | HeavyTail $\mathbb{P}$ ($\mathfrak{R}_{\mathrm{in}} \precsim_d n^{1/\rho}$) |
| --- | --- | --- | --- | --- |
| LogGrowth: $\mathcal{M}_{\mathbf{k}}(\mathcal{B}_2(r),\varepsilon) \precsim_d r^d \log^\omega(\frac{1}{\varepsilon})$ | $\sqrt{\frac{(\log n)^{\omega+1}}{n}}$ | $\sqrt{\frac{(\log n)^{\frac{d}{2}+\omega+1}}{n}}$ | $\sqrt{\frac{(\log n)^{d+\omega+1}}{n}}$ | $\sqrt{\frac{(\log n)^{\omega+1}}{n^{1-d/\rho}}}$ |
| PolyGrowth: $\mathcal{M}_{\mathbf{k}}(\mathcal{B}_2(r),\varepsilon) \precsim_d r^d \varepsilon^{-\omega}$ | $\sqrt{\frac{\log n}{n^{1-\omega/2}}}$ | $\sqrt{\frac{(\log n)^{\frac{d}{2}+1}}{n^{1-\omega/2}}}$ | $\sqrt{\frac{(\log n)^{d+1}}{n^{1-\omega/2}}}$ | $\sqrt{\frac{\log n}{n^{1-\omega/2-d/\rho}}}$ |

Table 2: MMD guarantees for target KT under $\mathcal{M}_{\mathbf{k}}$ growth and $\mathbb{P}$ tail decay. We report the $\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}})$ bound (7) for target KT with $n$ input points and $n^{\frac{1}{2}}$ output points, up to constants depending on $d$ and $\|\mathbf{k}\|_{\infty,\mathrm{in}}$. Here $\mathfrak{R}_{\mathrm{in}} \triangleq \max_{x \in \mathcal{S}_{\mathrm{in}}} \|x\|_2$.

Next, for each of the popular convergence-determining kernels of Tab. 1, we compare the root KT MMD guarantees of Dwivedi & Mackey (2021) with the target KT guarantees of Thm. 2 combined with covering number bounds derived in Apps. J and K. We see in Tab. 3 that Thm. 2 provides better-than-i.i.d. and better-than-root-KT guarantees for kernels with slowly decaying or non-existent square-roots (e.g., IMQ with $\nu < \frac{d}{2}$, sinc, and B-spline) and nearly matches known root KT guarantees for analytic kernels like Gauss and IMQ with $\nu \ge \frac{d}{2}$, even though target KT makes no explicit use of a square-root kernel. See App. K for the proofs related to Tab. 3.

\CenterstackKernel ๐ค \Centerstack Target KT \Centerstack Root KT \Centerstack KT+ \Centerstack Gauss โข ( ๐œŽ ) \Centerstack ( log โก ๐‘› ) 3 โข ๐‘‘ 4 + 1 ๐‘› โ‹… ๐‘ ๐‘› ๐‘‘ \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… ๐Ÿ’ + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’ \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… ๐Ÿ’ + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’

\Centerstack Laplace โข ( ๐œŽ ) \Centerstack ๐‘› โˆ’ 1 4 \CenterstackN/A \Centerstack ( ๐’„ ๐’ โข ( ๐ฅ๐จ๐  โก ๐’ ) ๐Ÿ + ๐Ÿ โข ๐’… โข ( ๐Ÿ โˆ’ ๐œถ ) ๐’ ) ๐Ÿ ๐Ÿ’ โข ๐œถ

\CenterstackMatรฉrn ( ๐œˆ , ๐›พ )

๐œˆ โˆˆ ( ๐‘‘ 2 , ๐‘‘ ] \Centerstack ๐‘› โˆ’ 1 4 \CenterstackN/A \Centerstack ( ๐’„ ๐’ โข ( ๐ฅ๐จ๐  โก ๐’ ) ๐Ÿ + ๐Ÿ โข ๐’… โข ( ๐Ÿ โˆ’ ๐œถ ) ๐’ ) ๐Ÿ ๐Ÿ’ โข ๐œถ

\CenterstackMatรฉrn ( ๐œˆ , ๐›พ )

๐œˆ

๐‘‘ \Centerstack min โก ( ๐‘› โˆ’ 1 4 , ( log โก ๐‘› ) ๐‘‘ + 1 2 ๐‘› ( ๐œˆ โˆ’ ๐‘‘ ) / ( 2 โข ๐œˆ โˆ’ ๐‘‘ ) ) \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’ \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’

\CenterstackIMQ ( ๐œˆ , ๐›พ )

๐œˆ < ๐‘‘ 2 \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐’ \Centerstack ๐‘› โˆ’ 1 4 \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐’

\CenterstackIMQ ( ๐œˆ , ๐›พ )

๐œˆ โ‰ฅ ๐‘‘ 2 \Centerstack ( log โก ๐‘› ) ๐‘‘ + 1 ๐‘› \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’ \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… + ๐Ÿ ๐Ÿ โข ๐’„ ๐’ ๐’

\Centerstacksinc ( ๐œƒ ) \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… / ๐Ÿ + ๐Ÿ ๐’ \Centerstack ๐‘› โˆ’ 1 4 \Centerstack ( ๐ฅ๐จ๐  โก ๐’ ) ๐’… / ๐Ÿ + ๐Ÿ ๐’

\Centerstack B-spline โข ( 2 โข ๐›ฝ + 1 , ๐›พ )

๐›ฝ โˆˆ 2 โข โ„• \Centerstack min โก ( ๐‘› โˆ’ 1 4 , ๐‘’ ๐‘› , ๐‘‘ , ๐›ฝ ) \CenterstackN/A \Centerstack ๐ฆ๐ข๐ง โก ( ๐’† ๐’ , ๐’… , ๐œท , ( ๐ฅ๐จ๐  โก ๐’ ๐’ ) ๐œท + ๐Ÿ ๐Ÿ โข ๐œท + ๐Ÿ’ )

\Centerstack B-spline โข ( 2 โข ๐›ฝ + 1 , ๐›พ )

๐›ฝ โˆˆ 2 โข โ„• 0 + 1 \Centerstack min โก ( ๐‘› โˆ’ 1 4 , ๐‘’ ๐‘› , ๐‘‘ , ๐›ฝ )
๐ฅ๐จ๐  โก ๐’ ๐’
๐ฅ๐จ๐  โก ๐’ ๐’ Table 3: ๐Œ๐Œ๐ƒ ๐ค โก ( ๐“ข in , ๐“ข KT ) guarantees for commonly used kernels. For ๐‘› input and ๐‘› output points, we report the MMD bounds of Thm. 2 for target KT, of Dwivedi & Mackey (2021, Thm. 1) for root KT, and of Thm. 4 for KT+ (with ๐›ผ > ๐‘‘ ๐‘‘ + 1 for Laplace, ๐›ผ > ๐‘‘ 2 โข ๐œˆ for Matรฉrn, ๐›ผ

๐›ฝ + 2 2 โข ๐›ฝ + 2 for B-spline with even ๐›ฝ , and ๐›ผ

1 2 for all other kernels). We assume a SubGauss  โ„™ for the Gauss kernel, a Compact  โ„™ for the B-spline  kernel, and a SubExp  โ„™ for all other ๐ค (see Sec. 2.3 for a definition of each โ„™ class). Here, ๐‘ ๐‘› โ‰œ log โก log โก ๐‘› , ๐‘’ ๐‘› , ๐‘‘ , ๐›ฝ โ‰œ log โก ๐‘› ๐‘› ( 2 โข ๐›ฝ โˆ’ ๐‘‘ ) / 4 โข ๐›ฝ , ๐›ฟ ๐‘–

๐›ฟ ๐‘› , ๐›ฟ โ€ฒ

๐›ฟ 2 , and error is reported up to constants depending on ( ๐ค , ๐‘‘ , ๐›ฟ , ๐›ผ ) . The target KT guarantee for Matรฉrn with ๐œˆ

3 โข ๐‘‘ / 2 assumes ๐œˆ โˆ’ ๐‘‘ / 2 โˆˆ โ„• to simplify the presentation (see 138 for the general case). The best rate is highlighted in blue. See App. K for details of the derivation. 3Kernel Thinning+

We next introduce and analyze two new generalized KT variants: (i) power KT, which leverages a power kernel $\mathbf{k}_\alpha$ that interpolates between $\mathbf{k}$ and $\mathbf{k}_{\mathrm{rt}}$ to improve upon the MMD guarantees of target KT even when $\mathbf{k}_{\mathrm{rt}}$ is not available, and (ii) KT+, which uses a sum of $\mathbf{k}$ and $\mathbf{k}_\alpha$ to retain both the improved MMD guarantee of $\mathbf{k}_\alpha$ and the superior single function guarantees of $\mathbf{k}$.

Power kernel thinning   First, we generalize the square-root kernel definition (2) for shift-invariant $\mathbf{k}$ using the order-0 generalized Fourier transform (GFT, Wendland, 2004, Def. 8.9) $\hat{f}$ of $f : \mathbb{R}^d \to \mathbb{R}$.

Definition 2 ($\alpha$-power kernel)

Define $\mathbf{k}_1 \triangleq \mathbf{k}$. We say a kernel $\mathbf{k}_{\frac{1}{2}}$ is a $\frac{1}{2}$-power kernel for $\mathbf{k}$ if $\mathbf{k}(x,y) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} \mathbf{k}_{\frac{1}{2}}(x,z)\, \mathbf{k}_{\frac{1}{2}}(z,y)\, dz$. For $\alpha \in (\frac{1}{2}, 1)$, a kernel $\mathbf{k}_\alpha(x,y) = \kappa_\alpha(x-y)$ on $\mathbb{R}^d$ is an $\alpha$-power kernel for $\mathbf{k}(x,y) = \kappa(x-y)$ if $\widehat{\kappa_\alpha} = \hat{\kappa}^\alpha$.

By design, ๐ค 1 2 matches ๐ค rt 2 up to an immaterial constant rescaling. Given a power kernel ๐ค ๐›ผ we define power KT as generalized KT with ๐ค split

๐ค ๐›ผ . Our next result (with proof in App. E) provides an MMD guarantee for power KT.

Theorem 3 (MMD guarantee for power KT)

Consider generalized KT (Alg. 1) with $\mathbf{k}_{\mathrm{split}} = \mathbf{k}_\alpha$ for some $\alpha \in [\frac{1}{2}, 1]$, oblivious sequences $\mathcal{S}_{\mathrm{in}}$ and $(\delta_i)_{i=1}^{\lfloor n/2 \rfloor}$, and $\delta_\star \triangleq \min_i \delta_i$. If $\frac{n}{2^m} \in \mathbb{N}$, then for any $\delta' \in (0,1)$, the output coreset $\mathcal{S}_{\mathrm{KT}}$ is of size $\frac{n}{2^m}$ and satisfies

$$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}}) \le \Big(\tfrac{2^m}{n} \sqrt{\|\mathbf{k}_\alpha\|_\infty}\Big)^{\frac{1}{2\alpha}} \big(2 \cdot \widetilde{\mathfrak{M}}_\alpha\big)^{1-\frac{1}{2\alpha}} \Big(2 + \tfrac{(4\pi)^{d/2}}{\Gamma(\frac{d}{2}+1)} \cdot \mathfrak{R}_{\max}^{d/2} \cdot \widetilde{\mathfrak{M}}_\alpha\Big)^{\frac{1}{\alpha}-1}, \tag{8}$$

with probability at least $p_{\mathrm{sg}}$ (defined in Thm. 1). The parameters $\widetilde{\mathfrak{M}}_\alpha$ and $\mathfrak{R}_{\max}$ are defined in App. E and satisfy $\widetilde{\mathfrak{M}}_\alpha = \mathcal{O}_d(\sqrt{\log n})$ and $\mathfrak{R}_{\max} = \mathcal{O}_d(1)$ for compactly supported $\mathbb{P}$ and $\mathbf{k}_\alpha$, and $\widetilde{\mathfrak{M}}_\alpha = \mathcal{O}_d(\sqrt{\log n \log\log n})$ and $\mathfrak{R}_{\max} = \mathcal{O}_d(\log n)$ for subexponential $\mathbb{P}$ and $\mathbf{k}_\alpha$, when $\delta_\star = \frac{\delta'}{n}$.

Thm. 3 reproduces the root KT guarantee of Dwivedi & Mackey (2021, Thm. 1) when $\alpha = \frac{1}{2}$ and more generally accommodates any power kernel via an MMD interpolation result that may be of independent interest. This generalization is especially valuable for less-smooth kernels like Laplace and Matérn$(\nu,\gamma)$ with $\nu \in (\frac{d}{2}, d]$ that have no square-root kernel. Our target KT MMD guarantees are no better than i.i.d. for these kernels, but, as shown in App. K, these kernels have Matérn kernels as $\alpha$-power kernels, which yield $o(n^{-\frac{1}{4}})$ MMD in conjunction with Thm. 3.

Kernel thinning+   Our final KT variant, kernel thinning+ (KT+), runs kt-split with a scaled sum of the target and power kernels, $\mathbf{k}^\dagger \triangleq \mathbf{k}/\|\mathbf{k}\|_\infty + \mathbf{k}_\alpha/\|\mathbf{k}_\alpha\|_\infty$. Remarkably, this choice simultaneously provides the improved MMD guarantees of Thm. 3 and the dimension-free single function guarantees of Thm. 1 (see the appendix for the proof of Thm. 4).

Theorem 4 (Single function & MMD guarantees for KT+)

Consider generalized KT (Alg. 1) with $\mathbf{k}_{\mathrm{split}} = \mathbf{k}^\dagger$, oblivious $\mathcal{S}_{\mathrm{in}}$ and $(\delta_i)_{i=1}^{\lfloor n/2 \rfloor}$, $\delta_\star \triangleq \min_i \delta_i$, and $\frac{n}{2^m} \in \mathbb{N}$. For any fixed function $f \in \mathcal{H}$, index $\ell \in [2^m]$, and scalar $\delta' \in (0,1)$, the kt-split coreset $\mathcal{S}^{(m,\ell)}$ satisfies

$$|\mathbb{P}_{\mathrm{in}} f - \mathbb{P}_{\mathrm{split}}^{(\ell)} f| \le \frac{2^m}{n} \sqrt{\tfrac{16}{3} \log\big(\tfrac{6 m 2^m}{\delta_\star}\big) \log\big(\tfrac{2}{\delta'}\big)}\ \|f\|_{\mathbf{k}} \sqrt{\|\mathbf{k}\|_\infty}, \tag{9}$$

with probability at least $p_{\mathrm{sg}}$ (for $p_{\mathrm{sg}}$ and $\mathbb{P}_{\mathrm{split}}^{(\ell)}$ defined in Thm. 1). Moreover,

$$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}}) \le \min\big[\sqrt{2} \cdot \overline{\mathbf{M}}_{\mathrm{targetKT}}(\mathbf{k}),\ 2^{\frac{1}{2\alpha}} \cdot \overline{\mathbf{M}}_{\mathrm{powerKT}}(\mathbf{k}_\alpha)\big] \tag{10}$$

with probability at least $p_{\mathrm{sg}}$, where $\overline{\mathbf{M}}_{\mathrm{targetKT}}(\mathbf{k})$ denotes the right-hand side of (6) with $\|\mathbf{k}\|_{\infty,\mathrm{in}}$ replaced by $\|\mathbf{k}\|_\infty$, and $\overline{\mathbf{M}}_{\mathrm{powerKT}}(\mathbf{k}_\alpha)$ denotes the right-hand side of (8).

As shown in Tab. 3, KT+ provides better-than-i.i.d. MMD guarantees for every kernel in Tab. 1โ€”even the Laplace, non-smooth Matรฉrn, and odd B-spline kernels neglected by prior analysesโ€”while matching or improving upon the guarantees of target KT and root KT in each case.

4 Experiments

Figure 1: Generalized kernel thinning (KT) vs. i.i.d. sampling for an 8-component mixture of Gaussians target $\mathbb{P}$. For kernels $\mathbf{k}$ without fast-decaying square-roots, KT+ offers visible and quantifiable improvements over i.i.d. sampling. For Gaussian $\mathbf{k}$, target KT closely mimics root KT.

Dwivedi & Mackey (2021) illustrated the MMD benefits of root KT over i.i.d. sampling and standard MCMC thinning with a series of vignettes focused on the Gaussian kernel. We revisit those vignettes with the broader range of kernels covered by generalized KT and demonstrate significant improvements in both MMD and single-function integration error. We focus on coresets of size $\sqrt{n}$ produced from $n$ inputs with $\delta_i = \tfrac{1}{2n}$, let $\mathbb{P}_{\rm out}$ denote the empirical distribution of each output coreset, and report mean error ($\pm 1$ standard error) over 10 independent replicates of each experiment.

Target distributions and kernel bandwidths   We consider three classes of target distributions on $\mathbb{R}^d$: (i) mixture of Gaussians $\mathbb{P} = \frac{1}{M}\sum_{j=1}^{M} \mathcal{N}(\mu_j, \mathbf{I}_2)$ with $M$ component means $\mu_j \in \mathbb{R}^2$ defined in App. I, (ii) Gaussian $\mathbb{P} = \mathcal{N}(0, \mathbf{I}_d)$, and (iii) the posteriors of four distinct coupled ordinary differential equation models: the Goodwin (1965) model of oscillatory enzymatic control ($d = 4$), the Lotka (1925) model of oscillatory predator-prey evolution ($d = 4$), the Hinch et al. (2004) model of calcium signalling in cardiac cells ($d = 38$), and a tempered Hinch posterior. For settings (i) and (ii), we use an i.i.d. input sequence $\mathcal{S}_{\rm in}$ from $\mathbb{P}$ and kernel bandwidths $\sigma = 1/\gamma = \sqrt{2d}$. For setting (iii), we use MCMC input sequences $\mathcal{S}_{\rm in}$ from 12 posterior inference experiments of Riabiz et al. (2020a) and set the bandwidths $\sigma = 1/\gamma$ as specified by Dwivedi & Mackey (2021, Sec. K.2).

Figure 2: MMD and single-function integration error for Gaussian $\mathbf{k}$ and standard Gaussian $\mathbb{P}$ in $\mathbb{R}^d$. Without using a square-root kernel, target KT matches the MMD performance of root KT and improves upon i.i.d. MMD and single-function integration error, even in $d = 100$ dimensions.

Function testbed   To evaluate the ability of generalized KT to improve integration both inside and outside of $\mathcal{H}$, we evaluate integration error for (a) a random element of the target kernel RKHS ($f(x) = \mathbf{k}(X', x)$, described in App. I), (b) moments ($f(x) = x_1$ and $f(x) = x_1^2$), and (c) a standard numerical integration benchmark test function from the continuous integrand family (CIF, Genz, 1984), $f_{\rm CIF}(x) = \exp\bigl(-\tfrac{1}{d}\sum_{j=1}^{d}|x_j - u_j|\bigr)$ for $u_j$ drawn i.i.d. and uniformly from $[0, 1]$.

Generalized KT coresets   For an 8-component mixture of Gaussians target $\mathbb{P}$, the top row of Fig. 1 highlights the visual differences between i.i.d. coresets and coresets generated using generalized KT. We consider root KT with Gauss $\mathbf{k}$, target KT with Gauss $\mathbf{k}$, KT+ ($\alpha = 0.7$) with Laplace $\mathbf{k}$, KT+ ($\alpha = \tfrac12$) with IMQ $\mathbf{k}$ ($\nu = 0.5$), and KT+ ($\alpha = \tfrac23$) with B-spline(5) $\mathbf{k}$, and note that the B-spline(5) (i.e., $\beta = 2$) and Laplace $\mathbf{k}$ do not admit square-root kernels. In each case, even for small $n$, generalized KT provides a more even distribution of points across components with fewer within-component gaps and clumps. Moreover, as suggested by our theory, target KT and root KT coresets for Gauss $\mathbf{k}$ have similar quality despite target KT making no explicit use of a square-root kernel. The MMD error plots in the bottom row of Fig. 1 support the same conclusion quantitatively: for both variants of KT, the MMD error decays as $n^{-\frac12}$, a significant improvement over the $n^{-\frac14}$ rate of i.i.d. sampling. We also observe that the majority of the empirical MMD decay rates are in close agreement with the rates guaranteed by our theory in Tab. 3 ($n^{-\frac12}$ for Gauss and IMQ and $n^{-\frac{1}{4\alpha}} = n^{-0.36}$ for Laplace). We provide additional visualizations and results in App. I, including MMD errors for $M = 4$ and $M = 6$ component mixture targets; the conclusions remain consistent with those drawn from Fig. 1.

Target KT vs. i.i.d. sampling   For Gaussian $\mathbb{P}$ and Gaussian $\mathbf{k}$, Fig. 2 quantifies the improvements in distributional approximation obtained when using target KT in place of a more typical i.i.d. summary. Remarkably, target KT significantly improves the rate of decay and order of magnitude of mean $\operatorname{MMD}_{\mathbf{k}}(\mathbb{P}, \mathbb{P}_{\rm out})$, even in $d = 100$ dimensions with as few as 4 output points. Moreover, in line with our theory, target KT MMD closely tracks that of root KT without using $\mathbf{k}_{\rm rt}$. Finally, target KT delivers improved single-function integration error, both for functions in the RKHS (like $\mathbf{k}(X', \cdot)$) and for those outside it (like the first moment and the CIF benchmark function), even with large $d$ and relatively small sample sizes.

KT+ vs. standard MCMC thinning   For the MCMC targets, we measure error with respect to the input distribution $\mathbb{P}_{\rm in}$ (consistent with our guarantees), as exact integration under each posterior $\mathbb{P}$ is intractable. We employ KT+ ($\alpha = 0.81$) with Laplace $\mathbf{k}$ for Goodwin and Lotka-Volterra and KT+ ($\alpha = 0.5$) with IMQ $\mathbf{k}$ ($\nu = 0.5$) for Hinch. Notably, neither kernel has a square-root with fast-decaying tails. In Fig. 3, we evaluate thinning results from one chain targeting each of the Goodwin, Lotka-Volterra, and Hinch posteriors and observe that KT+ uniformly improves upon the MMD error of standard thinning (ST), even when ST exhibits better-than-i.i.d. accuracy. Furthermore, KT+ provides significantly smaller integration error for functions inside the RKHS (like $\mathbf{k}(X', \cdot)$) and outside the RKHS (like the first and second moments and the benchmark CIF function) in nearly every setting. See Fig. 6 of App. I for plots of the other 9 MCMC settings.

Figure 3: Kernel thinning+ (KT+) vs. standard MCMC thinning (ST). For kernels without fast-decaying square-roots, KT+ improves MMD and integration error decay rates in each posterior inference task.

5 Discussion and Conclusions

In this work, we introduced three new generalizations of the root KT algorithm of Dwivedi & Mackey (2021) with broader applicability and strengthened guarantees for generating compact representations of a probability distribution. Target kt-split provides $\sqrt{n}$-point summaries with $\mathcal{O}(\sqrt{\log n / n})$ integration error guarantees for any kernel, any target distribution, and any function in the RKHS; power KT yields improved, better-than-i.i.d. MMD guarantees even when a square-root kernel is unavailable; and KT+ simultaneously inherits the guarantees of target KT and power KT. While we have focused on unweighted coreset quality, we highlight that the same MMD guarantees extend to any improved reweighting of the coreset points. For example, for downstream tasks that support weights, one can optimally reweight $\mathbb{P}_{\rm out}$ to approximate $\mathbb{P}_{\rm in}$ in $\mathcal{O}(n^{\frac32})$ time by directly minimizing $\operatorname{MMD}_{\mathbf{k}}$. Finally, one can combine generalized KT with the Compress++ meta-algorithm of Shetty et al. (2022) to obtain coresets of comparable quality in near-linear time.
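To illustrate the reweighting step: for a fixed coreset $(z_i)$, the squared MMD to $\mathbb{P}_{\rm in}$ is a quadratic $w^\top K w - 2 w^\top b + \text{const}$ in the weights, with $K_{ij} = \mathbf{k}(z_i, z_j)$ and $b_i = \frac1n\sum_j \mathbf{k}(z_i, x_j)$, so the unconstrained minimizer is $w = K^{-1} b$. The sketch below is our own unconstrained illustration (a weighted variant could additionally enforce simplex constraints):

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * sigma ** 2))

def optimal_weights(coreset, inputs, kernel=gauss_kernel, reg=1e-8):
    """MMD-minimizing weights for a fixed coreset (no simplex
    constraints): solve (K + reg*I) w = b, where K is the coreset
    kernel matrix and b_i = mean_j k(z_i, x_j). The small ridge term
    keeps the system well-conditioned."""
    K = kernel(coreset[:, None, :], coreset[None, :, :])
    b = kernel(coreset[:, None, :], inputs[None, :, :]).mean(axis=1)
    return np.linalg.solve(K + reg * np.eye(len(coreset)), b)
```

For a coreset of size $\sqrt{n}$, forming $b$ costs $\mathcal{O}(n^{3/2})$ kernel evaluations and the solve costs $\mathcal{O}(n^{3/2})$ arithmetic, matching the runtime quoted above.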

Reproducibility Statement

See App. I for supplementary experimental details and results, and see the goodpoints Python package (https://github.com/microsoft/goodpoints) for Python code reproducing all experiments.

Acknowledgments

We thank Lucas Janson and Boaz Barak for their valuable feedback on this work. RD acknowledges support from the National Science Foundation under Grant No. DMS-2023528 for the Foundations of Data Science Institute (FODSI).

References

- Christoph M. Augustin, Aurel Neic, Manfred Liebmann, Anton J. Prassl, Steven A. Niederer, Gundolf Haase, and Gernot Plank. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation. Journal of Computational Physics, 305:622–646, 2016.
- Necdet Batir. Bounds for the gamma function. Results in Mathematics, 72(1):865–874, 2017. doi: 10.1007/s00025-017-0698-0.
- Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media, 2011.
- Wilson Ye Chen, Lester Mackey, Jackson Gorham, François-Xavier Briol, and Chris J. Oates. Stein points. In Proceedings of the 35th International Conference on Machine Learning, 2018.
- Wilson Ye Chen, Alessandro Barp, François-Xavier Briol, Jackson Gorham, Mark Girolami, Lester Mackey, and Chris Oates. Stein point Markov chain Monte Carlo. In International Conference on Machine Learning, pp. 1011–1021. PMLR, 2019.
- Yutian Chen, Max Welling, and Alex Smola. Super-samples from kernel herding. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pp. 109–116. AUAI Press, 2010.
- Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In International Conference on Machine Learning, pp. 2606–2615. PMLR, 2016.
- Josef Dick, Frances Y. Kuo, and Ian H. Sloan. High-dimensional integration: The quasi-Monte Carlo way. Acta Numerica, 22:133–288, 2013.
- Raaz Dwivedi and Lester Mackey. Kernel thinning. arXiv preprint arXiv:2105.05842v8, 2021.
- Raaz Dwivedi, Ohad N. Feldheim, Ori Gurel-Gurevich, and Aaditya Ramdas. The power of online thinning in reducing discrepancy. Probability Theory and Related Fields, 174(1):103–131, 2019.
- Alan Genz. Testing multidimensional integration routines. In Proc. of International Conference on Tools, Methods and Languages for Scientific and Engineering Computation, pp. 81–94, 1984.
- Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–214, 2011.
- Brian C. Goodwin. Oscillatory behavior in enzymatic control processes. Advances in Enzyme Regulation, 3:318–356, 1965.
- Jackson Gorham and Lester Mackey. Measuring sample quality with kernels. In Proceedings of the 34th International Conference on Machine Learning, pp. 1292–1301. JMLR.org, 2017.
- Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723–773, 2012.
- Heikki Haario, Eero Saksman, and Johanna Tamminen. Adaptive proposal distribution for random walk Metropolis algorithm. Computational Statistics, 14(3):375–395, 1999.
- Nick Harvey and Samira Samadi. Near-optimal herding. In Conference on Learning Theory, pp. 1165–1182, 2014.
- Fred Hickernell. A generalized discrepancy and quadrature error bound. Mathematics of Computation, 67(221):299–322, 1998.
- Robert Hinch, J. L. Greenstein, A. J. Tanskanen, L. Xu, and R. L. Winslow. A simplified local control model of calcium-induced calcium release in cardiac ventricular myocytes. Biophysical Journal, 87(6):3723–3736, 2004.
- Zohar Karnin and Edo Liberty. Discrepancy, coresets, and sketches in machine learning. In Conference on Learning Theory, pp. 1975–1993. PMLR, 2019.
- Simon Lacoste-Julien, Fredrik Lindsten, and Francis Bach. Sequential kernel herding: Frank-Wolfe optimization for particle filtering. In Artificial Intelligence and Statistics, pp. 544–552. PMLR, 2015.
- Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of the 33rd International Conference on Machine Learning, pp. 276–284, 2016.
- Alfred James Lotka. Elements of Physical Biology. Williams & Wilkins, 1925.
- Simon Mak and V. Roshan Joseph. Support points. The Annals of Statistics, 46(6A):2562–2592, 2018.
- Whitney K. Newey and Daniel McFadden. Chapter 36: Large sample estimation and hypothesis testing. In Handbook of Econometrics, volume 4, pp. 2111–2245. Elsevier, 1994. doi: 10.1016/S1573-4412(05)80005-4.
- Steven A. Niederer, Lawrence Mitchell, Nicolas Smith, and Gernot Plank. Simulating human cardiac electrophysiology on clinical time-scales. Frontiers in Physiology, 2:14, 2011.
- E. Novak and H. Wozniakowski. Tractability of Multivariate Problems, Volume II: Standard Information for Functionals. European Math. Soc. Publ. House, Zürich, 2010.
- Chris J. Oates, Mark Girolami, and Nicolas Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):695–718, 2017.
- Art B. Owen. Statistically efficient thinning of a Markov chain sampler. Journal of Computational and Graphical Statistics, 26(3):738–744, 2017.
- Brooks Paige, Dino Sejdinovic, and Frank Wood. Super-sampling with a reservoir. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pp. 567–576, 2016.
- Jeff M. Phillips and Wai Ming Tai. Near-optimal coresets of kernel density estimates. Discrete & Computational Geometry, 63(4):867–887, 2020.
- Marina Riabiz, Wilson Ye Chen, Jon Cockayne, Pawel Swietach, Steven A. Niederer, Lester Mackey, and Chris J. Oates. Optimal thinning of MCMC output. arXiv preprint arXiv:2005.03952, 2020a.
- Marina Riabiz, Wilson Ye Chen, Jon Cockayne, Pawel Swietach, Steven A. Niederer, Lester Mackey, and Chris J. Oates. Replication data for: Optimal thinning of MCMC output, 2020b. URL https://doi.org/10.7910/DVN/MDKNWM. Accessed Mar 23, 2021.
- Gareth O. Roberts and Richard L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 1996.
- Alessandro Rudi, Ulysse Marteau-Ferey, and Francis Bach. Finding global minima via kernel approximations. arXiv preprint arXiv:2012.11978, 2020.
- Dino Sejdinovic, Bharath Sriperumbudur, Arthur Gretton, and Kenji Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. The Annals of Statistics, pp. 2263–2291, 2013.
- Abhishek Shetty, Raaz Dwivedi, and Lester Mackey. Distribution compression in near-linear time. In International Conference on Learning Representations, 2022.
- Carl-Johann Simon-Gabriel, Alessandro Barp, and Lester Mackey. Metrizing weak convergence with maximum mean discrepancies. arXiv preprint arXiv:2006.09268, 2020.
- Ingo Steinwart and Andreas Christmann. Support Vector Machines. Springer Science & Business Media, 2008.
- Ingo Steinwart and Simon Fischer. A closer look at covering number bounds for Gaussian kernels. Journal of Complexity, 62:101513, 2021.
- Marina Strocchi, Matthias A. F. Gsell, Christoph M. Augustin, Orod Razeghi, Caroline H. Roney, Anton J. Prassl, Edward J. Vigmond, Jonathan M. Behar, Justin S. Gould, Christopher A. Rinaldi, Martin J. Bishop, Gernot Plank, and Steven A. Niederer. Simulating ventricular systolic motion in a four-chamber heart model with spatially varying Robin boundary conditions to model the effect of the pericardium. Journal of Biomechanics, 101:109645, 2020.
- Hong-Wei Sun and Ding-Xuan Zhou. Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels. Journal of Fourier Analysis and Applications, 14(1):89–101, 2008.
- Ilya Tolstikhin, Bharath K. Sriperumbudur, and Krikamol Muandet. Minimax estimation of kernel mean embeddings. Journal of Machine Learning Research, 18(1):3002–3048, 2017.
- Vito Volterra. Variazioni e fluttuazioni del numero d'individui in specie animali conviventi. 1926.
- Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
- Holger Wendland. Scattered Data Approximation, volume 17. Cambridge University Press, 2004.
- Haizhang Zhang and Liang Zhao. On the inclusion relation of reproducing kernel Hilbert spaces. Analysis and Applications, 11(02):1350014, 2013.
- Ding-Xuan Zhou. The covering number in learning theory. Journal of Complexity, 18(3):739–767, 2002.


Appendix A Details of kt-split and kt-swap

Algorithm 2: kt-split – Divide points into candidate coresets of size $\lfloor n/2^m\rfloor$

Input: kernel $\mathbf{k}_{\rm split}$, point sequence $\mathcal{S}_{\rm in} = (x_i)_{i=1}^{n}$, thinning parameter $m \in \mathbb{N}$, probabilities $(\delta_i)_{i=1}^{\lfloor n/2\rfloor}$

- $\mathcal{S}^{(j,\ell)} \leftarrow \{\}$ for $0 \le j \le m$ and $1 \le \ell \le 2^j$  // empty coresets: $\mathcal{S}^{(j,\ell)}$ has size $\lfloor i/2^j\rfloor$ after round $i$
- $\sigma_{j,\ell} \leftarrow 0$ for $1 \le j \le m$ and $1 \le \ell \le 2^{j-1}$  // swapping parameters
- for $i = 1, \ldots, \lfloor n/2\rfloor$ do
  - $\mathcal{S}^{(0,1)}.\text{append}(x_{2i-1})$; $\mathcal{S}^{(0,1)}.\text{append}(x_{2i})$
  - // every $2^j$ rounds, add one point from parent coreset $\mathcal{S}^{(j-1,\ell)}$ to each child coreset $\mathcal{S}^{(j,2\ell-1)}, \mathcal{S}^{(j,2\ell)}$
  - for ($j = 1$; $j \le m$ and $i/2^{j-1} \in \mathbb{N}$; $j = j + 1$) do
    - for $\ell = 1, \ldots, 2^{j-1}$ do
      - $(\mathcal{S}, \mathcal{S}') \leftarrow (\mathcal{S}^{(j-1,\ell)}, \mathcal{S}^{(j,2\ell-1)})$; $(x, \tilde{x}) \leftarrow \text{get\_last\_two\_points}(\mathcal{S})$
      - // compute swapping threshold $\mathfrak{a}$
      - $\mathfrak{a}, \sigma_{j,\ell} \leftarrow \text{get\_swap\_params}(\sigma_{j,\ell}, \mathfrak{b}, \delta_{|\mathcal{S}|/2}\,2^{j-1}/m)$ for $\mathfrak{b}^2 = \mathbf{k}_{\rm split}(x, x) + \mathbf{k}_{\rm split}(\tilde{x}, \tilde{x}) - 2\mathbf{k}_{\rm split}(x, \tilde{x})$
      - // assign one point to each child after probabilistic swapping
      - $\alpha \leftarrow \mathbf{k}_{\rm split}(\tilde{x}, \tilde{x}) - \mathbf{k}_{\rm split}(x, x) + \sum_{y\in\mathcal{S}}\bigl(\mathbf{k}_{\rm split}(y, x) - \mathbf{k}_{\rm split}(y, \tilde{x})\bigr) - 2\sum_{z\in\mathcal{S}'}\bigl(\mathbf{k}_{\rm split}(z, x) - \mathbf{k}_{\rm split}(z, \tilde{x})\bigr)$
      - $(x, \tilde{x}) \leftarrow (\tilde{x}, x)$ with probability $\min\bigl(1, \tfrac12\bigl(1 - \tfrac{\alpha}{\mathfrak{a}}\bigr)_+\bigr)$
      - $\mathcal{S}^{(j,2\ell-1)}.\text{append}(x)$; $\mathcal{S}^{(j,2\ell)}.\text{append}(\tilde{x})$

return $(\mathcal{S}^{(m,\ell)})_{\ell=1}^{2^m}$, candidate coresets of size $\lfloor n/2^m\rfloor$

function get_swap_params($\sigma$, $\mathfrak{b}$, $\delta$):
- $\mathfrak{a} \leftarrow \max\bigl(\mathfrak{b}\,\sigma\sqrt{2\log(2/\delta)},\; \mathfrak{b}^2\bigr)$
- $\sigma^2 \leftarrow \sigma^2 + \mathfrak{b}^2\bigl(1 + (\mathfrak{b}^2 - 2\mathfrak{a})\,\sigma^2/\mathfrak{a}^2\bigr)_+$
- return $(\mathfrak{a}, \sigma)$

Algorithm 3: kt-swap – Identify and refine the best candidate coreset

Input: kernel $\mathbf{k}$, point sequence $\mathcal{S}_{\rm in} = (x_i)_{i=1}^{n}$, candidate coresets $(\mathcal{S}^{(m,\ell)})_{\ell=1}^{2^m}$

- $\mathcal{S}^{(m,0)} \leftarrow \text{baseline\_thinning}(\mathcal{S}_{\rm in}, \text{size} = \lfloor n/2^m\rfloor)$  // compare to baseline (e.g., standard thinning)
- $\mathcal{S}_{\rm KT} \leftarrow \mathcal{S}^{(m,\ell_\star)}$ for $\ell_\star \leftarrow \operatorname{argmin}_{\ell\in\{0,1,\ldots,2^m\}} \operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\rm in}, \mathcal{S}^{(m,\ell)})$  // select best candidate coreset
- // swap out each point in $\mathcal{S}_{\rm KT}$ for the best alternative in $\mathcal{S}_{\rm in}$
- for $i = 1, \ldots, \lfloor n/2^m\rfloor$ do
  - $\mathcal{S}_{\rm KT}[i] \leftarrow \operatorname{argmin}_{z\in\mathcal{S}_{\rm in}} \operatorname{MMD}_{\mathbf{k}}\bigl(\mathcal{S}_{\rm in}, \mathcal{S}_{\rm KT}\text{ with }\mathcal{S}_{\rm KT}[i] = z\bigr)$

return $\mathcal{S}_{\rm KT}$, refined coreset of size $\lfloor n/2^m\rfloor$

Appendix B Proof of Thm. 1: Single function guarantees for kt-split

The proof is identical for each index $\ell$, so, without loss of generality, we prove the result for the case $\ell = 1$. Define

$$\widetilde{\mathcal{W}}_m \triangleq \mathcal{W}_{1,m} = \mathbb{P}_{\rm in}\mathbf{k}_{\rm split} - \mathbb{P}_{\rm out}^{(1)}\mathbf{k}_{\rm split} = \frac{1}{n}\sum_{x\in\mathcal{S}_{\rm in}} \mathbf{k}_{\rm split}(x, \cdot) - \frac{1}{n/2^m}\sum_{x\in\mathcal{S}^{(m,1)}} \mathbf{k}_{\rm split}(x, \cdot). \tag{11}$$

Next, we use results about an intermediate algorithm, kernel halving (Dwivedi & Mackey, 2021, Alg. 3), that was introduced for the analysis of kernel thinning. Using the arguments from Dwivedi & Mackey (2021, Sec. 5.2), we conclude that kt-split with kernel $\mathbf{k}_{\rm split}$ and thinning parameter $m$ is equivalent to repeated kernel halving with kernel $\mathbf{k}_{\rm split}$ for $m$ rounds (with no failure in any round of kernel halving). On this event of equivalence, denoted by $\mathcal{E}_{\rm equi}$, Dwivedi & Mackey (2021, Eqns. (50, 51)) imply that the function $\widetilde{\mathcal{W}}_m \in \mathcal{H}_{\rm split}$ is equal in distribution to another random function $\mathcal{W}_m$, where $\mathcal{W}_m$ is unconditionally sub-Gaussian with parameter

$$\sigma_m = \frac{2}{\sqrt{3}}\cdot\frac{2^m}{n}\sqrt{\|\mathbf{k}_{\rm split}\|_\infty\log\Bigl(\frac{6m}{2^m\delta_\star}\Bigr)}, \tag{12}$$

that is,

$$\mathbb{E}\bigl[\exp\bigl(\langle\mathcal{W}_m, f\rangle_{\mathbf{k}_{\rm split}}\bigr)\bigr] \le \exp\Bigl(\tfrac12\sigma_m^2\|f\|_{\mathbf{k}_{\rm split}}^2\Bigr) \quad\text{for all } f \in \mathcal{H}_{\rm split}, \tag{13}$$

where we note that the analysis of Dwivedi & Mackey (2021) remains unaffected when we replace $\|\mathbf{k}_{\rm split}\|_\infty$ by $\|\mathbf{k}_{\rm split}\|_{\infty,\rm in}$ in all the arguments. Applying the sub-Gaussian Hoeffding inequality (Wainwright, 2019, Prop. 2.5) along with (13), we obtain that

$$\mathbb{P}\bigl[|\langle\mathcal{W}_m, f\rangle_{\mathbf{k}_{\rm split}}| \ge t\bigr] \le 2\exp\Bigl(-\tfrac12 t^2/\bigl(\sigma_m^2\|f\|_{\mathbf{k}_{\rm split}}^2\bigr)\Bigr) \le \delta' \quad\text{for } t \triangleq \sigma_m\|f\|_{\mathbf{k}_{\rm split}}\sqrt{2\log\Bigl(\frac{2}{\delta'}\Bigr)}. \tag{14}$$

Call the event $\{|\langle\mathcal{W}_m, f\rangle_{\mathbf{k}_{\rm split}}| > t\}$ from (14) $\mathcal{E}_{\rm sg}$. As noted above, conditional on the event $\mathcal{E}_{\rm equi}$, we also have

$$\mathcal{W}_m \overset{d}{=} \widetilde{\mathcal{W}}_m \implies \langle\mathcal{W}_m, f\rangle_{\mathbf{k}_{\rm split}} \overset{d}{=} \mathbb{P}_{\rm in} f - \mathbb{P}_{\rm out}^{(1)} f, \tag{15}$$

where $\overset{d}{=}$ denotes equality in distribution. Furthermore, Dwivedi & Mackey (2021, Eqn. 48) implies that

$$\mathbb{P}(\mathcal{E}_{\rm equi}) \ge 1 - \sum_{j=1}^{m}\frac{2^{j-1}}{m}\sum_{i=1}^{n/2^j}\delta_i. \tag{16}$$

Putting the pieces together, we have

$$\mathbb{P}\bigl[|\mathbb{P}_{\rm in} f - \mathbb{P}_{\rm out}^{(1)} f| \le t\bigr] \ge \mathbb{P}\bigl(\mathcal{E}_{\rm equi}\cap\mathcal{E}_{\rm sg}^{c}\bigr) \ge \mathbb{P}(\mathcal{E}_{\rm equi}) - \mathbb{P}(\mathcal{E}_{\rm sg}) \ge 1 - \sum_{j=1}^{m}\frac{2^{j-1}}{m}\sum_{i=1}^{n/2^j}\delta_i - \delta' = p_{\rm sg}, \tag{17}$$

as claimed. The proof is now complete.

Appendix C Proof of Cor. 1: Guarantees for functions outside of $\mathcal{H}_{\rm split}$

Fix any index $\ell \in [2^m]$, scalar $\delta' \in (0,1)$, and $f$ defined on $\mathcal{S}_{\rm in}$, and consider the associated vector $g \in \mathbb{R}^n$ with $g_i = f(x_i)$ for each $i \in [n]$. We define two extended functions by augmenting the domain with $\mathbb{R}^n$ as follows: for any $w \in \mathbb{R}^n$, define $f_1((x, w)) = f(x)$ and $f_2((x, w)) = \langle g, w\rangle$ (the Euclidean inner product). These functions replicate the values of $f$ on $\mathcal{S}_{\rm in}$, since $f_1((x_i, w)) = f(x_i)$ for arbitrary $w \in \mathbb{R}^n$, and $f_2((x_i, e_i)) = \langle g, e_i\rangle = g_i = f(x_i)$, where $e_i$ denotes the $i$-th standard basis vector in $\mathbb{R}^n$. Thus we can write

$$\mathbb{P}_{\rm in} f - \mathbb{P}_{\rm split}^{(\ell)} f = \mathbb{P}'_{\rm in} f_1 - \mathbb{P}'^{(\ell)}_{\rm split} f_1 = \mathbb{P}'_{\rm in} f_2 - \mathbb{P}'^{(\ell)}_{\rm split} f_2 \tag{18}$$

for the extended empirical distributions $\mathbb{P}'_{\rm in} = \frac{1}{n}\sum_{i=1}^{n}\delta_{(x_i, e_i)}$ and $\mathbb{P}'^{(\ell)}_{\rm split}$, defined analogously. Notably, any function of the form $\tilde f((x, w)) = \langle\tilde g, w\rangle$ belongs to the RKHS of $\mathbf{k}'_{\rm split}$ with

$$\|\tilde f\|_{\mathbf{k}'_{\rm split}} \le \|\tilde g\|_2 \tag{19}$$

by Berlinet & Thomas-Agnan (2011, Thm. 5).

By the repeated halving interpretation of kernel thinning (Dwivedi & Mackey, 2021, Sec. 5.2), on an event $\mathcal{E}$ of probability at least $p_{\rm sg} + \delta'$ we may write

$$\mathbb{P}'_{\rm in} f_2 - \mathbb{P}'^{(\ell)}_{\rm split} f_2 = \sum_{j=1}^{m}\langle\mathcal{W}_j, f_2\rangle_{\mathbf{k}'_{\rm split}} = \sum_{j=1}^{m}\langle\mathcal{W}_j, f_{2,j}\rangle_{\mathbf{k}'_{\rm split}}, \tag{20}$$

where the $\mathcal{W}_j$ denote suitable random functions in the RKHS of $\mathbf{k}'_{\rm split}$, and each $f_{2,j}((x, w)) = \langle g^{(j)}, w\rangle$ for $g^{(j)} \in \mathbb{R}^n$ a sparsification of $g$ with at most $\frac{n}{2^{j-1}}$ non-zero entries that satisfy

$$\mathbb{E}\bigl[\exp\bigl(\langle\mathcal{W}_j, f_{2,j}\rangle_{\mathbf{k}'_{\rm split}}\bigr) \mid \mathcal{W}_{j-1}\bigr] \le \exp\Bigl(\tfrac{\sigma_j^2}{2}\|f_{2,j}\|_{\mathbf{k}'_{\rm split}}^2\Bigr) \overset{(19)}{\le} \exp\Bigl(\tfrac{\sigma_j^2}{2}\|g^{(j)}\|_2^2\Bigr) \le \exp\Bigl(\tfrac{\sigma_j^2}{2}\cdot\tfrac{n}{2^{j-1}}\|f\|_{\infty,\rm in}^2\Bigr) \tag{21}$$

for $\mathcal{W}_0 \triangleq 0$ and

$$\sigma_j^2 = 4\Bigl(\frac{2^{j-1}}{n}\Bigr)^2\|\mathbf{k}'_{\rm split}\|_{\infty,\rm in}\log\Bigl(\frac{4m}{2^j\delta_\star}\Bigr) \le \frac{2\cdot 4^j}{n^2}\log\Bigl(\frac{4m}{2^j\delta_\star}\Bigr), \tag{22}$$

since by definition $\|\mathbf{k}'_{\rm split}\|_{\infty,\rm in} \le 2$. Hence, by sub-Gaussian additivity (see, e.g., Dwivedi & Mackey, 2021, Lem. 8), $\mathbb{P}'_{\rm in} f_2 - \mathbb{P}'^{(\ell)}_{\rm split} f_2$ is $\tilde\sigma_2$ sub-Gaussian with

$$\tilde\sigma_2^2 \le \frac{4}{n}\|f\|_{\infty,\rm in}^2\cdot\sum_{j=1}^{m}2^j\log\Bigl(\frac{4m}{2^j\delta_\star}\Bigr) \overset{(i)}{=} \frac{4}{n}\|f\|_{\infty,\rm in}^2\cdot 2\Bigl((2^m - 1)\log\Bigl(\frac{4m}{\delta_\star}\Bigr) - \bigl((2^m - 1)(m - 1) + m\bigr)\log 2\Bigr) \tag{23}$$

$$= \frac{4}{n}\|f\|_{\infty,\rm in}^2\cdot 2\Bigl((2^m - 1)\log\Bigl(\frac{8m}{2^m\delta_\star}\Bigr) - m\log 2\Bigr) \tag{24}$$

$$\le 8\cdot\frac{2^m}{n}\|f\|_{\infty,\rm in}^2\cdot\log\Bigl(\frac{8m}{2^m\delta_\star}\Bigr), \tag{25}$$

i.e.,

$$\tilde\sigma_2 \le \sqrt{\frac{2^m}{n}}\cdot\|f\|_{\infty,\rm in}\cdot\sqrt{8\log\Bigl(\frac{8m}{2^m\delta_\star}\Bigr)} \tag{26}$$

on the event $\mathcal{E}$, where step (i) makes use of the following expressions:

$$\sum_{j=1}^{m}2^j = 2(2^m - 1) \quad\text{and}\quad \sum_{j=1}^{m}j\,2^j = 2\bigl((m - 1)(2^m - 1) + m\bigr). \tag{27}$$
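The two closed forms used in step (i) are elementary geometric-sum identities; a quick numerical check (our own, not part of the proof):

```python
def geometric_sums(m):
    """Compute sum_{j=1}^m 2^j and sum_{j=1}^m j*2^j directly."""
    s1 = sum(2 ** j for j in range(1, m + 1))
    s2 = sum(j * 2 ** j for j in range(1, m + 1))
    return s1, s2

# compare against the closed forms in (27) for a range of m
for m in range(1, 12):
    s1, s2 = geometric_sums(m)
    assert s1 == 2 * (2 ** m - 1)
    assert s2 == 2 * ((m - 1) * (2 ** m - 1) + m)
```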

Moreover, when $f \in \mathcal{H}_{\rm split}$, we additionally have $f_1$ in the RKHS of $\mathbf{k}'_{\rm split}$ with

$$\|f_1\|_{\mathbf{k}'_{\rm split}} \le \|f\|_{\mathbf{k}_{\rm split}}\sqrt{\|\mathbf{k}_{\rm split}\|_\infty}, \tag{28}$$

as argued in the proof of LABEL:eq:rkhs_scaling_norms. The proof of Thm. 1 then implies that $\mathbb{P}'_{\rm in} f_1 - \mathbb{P}'^{(\ell)}_{\rm split} f_1$ is $\tilde\sigma_1$ sub-Gaussian with

$$\tilde\sigma_1 \le \|f_1\|_{\mathbf{k}'_{\rm split}}\cdot\frac{2}{\sqrt{3}}\cdot\frac{2^m}{n}\sqrt{\|\mathbf{k}'_{\rm split}\|_{\infty,\rm in}\log\Bigl(\frac{6m}{2^m\delta_\star}\Bigr)} \le \frac{2^m}{n}\cdot\|f\|_{\mathbf{k}_{\rm split}}\sqrt{\|\mathbf{k}_{\rm split}\|_\infty}\cdot\sqrt{\frac{8}{3}\log\Bigl(\frac{6m}{2^m\delta_\star}\Bigr)} \tag{29}$$

on the very same event $\mathcal{E}$.

Recalling (18) and putting the pieces together with the bounds (29) and (26), we conclude that on the event $\mathcal{E}$, the random variable $\mathbb{P}_{\rm in} f - \mathbb{P}_{\rm split}^{(\ell)} f$ is $\tilde\sigma$ sub-Gaussian for

$$\tilde\sigma \triangleq \min(\tilde\sigma_1, \tilde\sigma_2) \overset{(26),(29)}{\le} \min\Bigl(\sqrt{\tfrac{n}{2^m}}\,\|f\|_{\infty,\rm in},\; \|f\|_{\mathbf{k}_{\rm split}}\sqrt{\|\mathbf{k}_{\rm split}\|_\infty}\Bigr)\cdot\frac{2^m}{n}\sqrt{8\log\Bigl(\frac{8m}{2^m\delta_\star}\Bigr)}. \tag{30}$$

The advertised high-probability bound (4) now follows from the $\tilde\sigma$ sub-Gaussianity on $\mathcal{E}$ exactly as in the proof of Thm. 1.

Appendix D Proof of Thm. 2: MMD guarantee for target KT

First, we note that by design, kt-swap ensures

$$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\rm in}, \mathcal{S}_{\rm KT}) \le \operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\rm in}, \mathcal{S}^{(m,1)}), \tag{31}$$

where $\mathcal{S}^{(m,1)}$ denotes the first coreset returned by kt-split. Thus it suffices to show that $\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\rm in}, \mathcal{S}^{(m,1)})$ is bounded by the term stated on the right-hand side of (6). Let $\mathbb{P}_{\rm out}^{(1)} \triangleq \frac{1}{n/2^m}\sum_{x\in\mathcal{S}^{(m,1)}}\boldsymbol{\delta}_x$. By design of kt-split, $\operatorname{supp}(\mathbb{P}_{\rm out}^{(1)}) \subseteq \operatorname{supp}(\mathbb{P}_{\rm in})$. Recall that the set $\mathcal{A}$ satisfies $\operatorname{supp}(\mathbb{P}_{\rm in}) \subseteq \mathcal{A}$.

Proof of (6)

Let $\mathcal{C} \triangleq \mathcal{C}_{\mathbf{k},\varepsilon}(\mathcal{A})$ denote the cover of minimum cardinality satisfying (5). Fix any $f \in \mathcal{B}_{\mathbf{k}}$. By the triangle inequality and the covering property (5) of $\mathcal{C}$, we have

$$|(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)}) f| \le \inf_{g\in\mathcal{C}}|(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)})(f - g)| + |(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)})(g)| \tag{32}$$

$$\le \inf_{g\in\mathcal{C}}|\mathbb{P}_{\rm in}(f - g)| + |\mathbb{P}_{\rm out}^{(1)}(f - g)| + \sup_{g\in\mathcal{C}}|(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)})(g)| \tag{33}$$

$$\le \inf_{g\in\mathcal{C}}2\sup_{x\in\mathcal{A}}|f(x) - g(x)| + \sup_{g\in\mathcal{C}}|(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)})(g)| \tag{34}$$

$$\le 2\varepsilon + \sup_{g\in\mathcal{C}}|(\mathbb{P}_{\rm in} - \mathbb{P}_{\rm out}^{(1)})(g)|. \tag{35}$$

Applying Thm. 1, we have

$|(\mathbb{P}_{\textup{in}} - \mathbb{P}_{\textup{out}}^{(1)})(g)| \leq \tfrac{2^m}{n}\, \|g\|_{\mathbf{k}} \sqrt{\tfrac{8}{3} \|\mathbf{k}\|_{\infty,\textup{in}} \cdot \log\big(\tfrac{4}{\delta_{\star}}\big) \log\big(\tfrac{4}{\delta'}\big)}$

(36)

with probability at least $1 - \delta' - \sum_{j=1}^{m} \tfrac{2^{j-1}}{m} \sum_{i=1}^{n/2^j} \delta_i = p_{\textup{sg}} - \delta'$. A standard union bound then yields that

$\sup_{g \in \mathcal{C}} |(\mathbb{P}_{\textup{in}} - \mathbb{P}_{\textup{out}}^{(1)})(g)| \leq \tfrac{2^m}{n}\, \sup_{g \in \mathcal{C}} \|g\|_{\mathbf{k}} \sqrt{\tfrac{8}{3} \|\mathbf{k}\|_{\infty,\textup{in}} \cdot \log\big(\tfrac{4}{\delta_{\star}}\big) \big[\log|\mathcal{C}| + \log\big(\tfrac{4}{\delta'}\big)\big]}$

(37)

with probability at least $p_{\textup{sg}} - \delta'$. Since $f \in \mathcal{B}_{\mathbf{k}}$ was arbitrary, and $\mathcal{C} \subset \mathcal{B}_{\mathbf{k}}$ so that $\sup_{g \in \mathcal{C}} \|g\|_{\mathbf{k}} \leq 1$, we therefore have

MMD ๐ค โก ( ๐’ฎ in , ๐’ฎ ( ๐‘š , 1 ) )

sup โ€– ๐‘“ โ€– ๐ค โ‰ค 1 | ( โ„™ in โˆ’ โ„™ out ( 1 ) ) โข ๐‘“ | โ‰ค 35 2 โข ๐œ€ + sup ๐‘” โˆˆ ๐’ž | ( โ„™ in โˆ’ โ„™ out ( 1 ) ) โข ( ๐‘” ) |

(38)

โ‰ค 2 โข ๐œ€ + 8 โข โ€– ๐ค โ€– โˆž 3 โ‹… 2 ๐‘š ๐‘› โข log โก ( 4 ๐›ฟ โ‹† ) โข [ log โก | ๐’ž | + log โก ( 4 ๐›ฟ โ€ฒ ) ] ,

(39)

with probability at least ๐‘ sg โˆ’ ๐›ฟ โ€ฒ as claimed.

Appendix E: Proof of Thm. 3: MMD guarantee for power KT

Definition of $\tilde{\mathfrak{M}}_{\alpha}$ and $\mathfrak{R}_{\max}$

Define the ๐ค ๐›ผ tail radii,

โ„œ ๐ค ๐›ผ , ๐‘› โ€ 

โ‰œ min โก { ๐‘Ÿ : ๐œ ๐ค ๐›ผ โข ( ๐‘Ÿ ) โ‰ค โ€– ๐ค ๐›ผ โ€– โˆž ๐‘› } , where ๐œ ๐ค ๐›ผ โข ( ๐‘… ) โ‰œ ( sup ๐‘ฅ โˆซ โ€– ๐‘ฆ โ€– 2 โ‰ฅ ๐‘… ๐ค ๐›ผ 2 โข ( ๐‘ฅ , ๐‘ฅ โˆ’ ๐‘ฆ ) โข ๐‘‘ ๐‘ฆ ) 1 2 ,

(40)

โ„œ ๐ค ๐›ผ , ๐‘›

โ‰œ min โก { ๐‘Ÿ : sup โ€– ๐‘ฅ โˆ’ ๐‘ฆ โ€– 2 โ‰ฅ ๐‘Ÿ | ๐ค ๐›ผ โข ( ๐‘ฅ , ๐‘ฆ ) | โ‰ค โ€– ๐ค ๐›ผ โ€– โˆž ๐‘› } ,

(41)

and the ๐’ฎ in tail radii

โ„œ ๐’ฎ in โ‰œ max ๐‘ฅ โˆˆ ๐’ฎ in โก โ€– ๐‘ฅ โ€– 2 , and โ„œ ๐’ฎ in , ๐ค ๐›ผ , ๐‘› โ‰œ min โก ( โ„œ ๐’ฎ in , ๐‘› 1 + 1 ๐‘‘ โข โ„œ ๐ค ๐›ผ , ๐‘› + ๐‘› 1 ๐‘‘ โข โ€– ๐ค ๐›ผ โ€– โˆž / ๐ฟ ๐ค ๐›ผ ) .

(42)

Furthermore, define the inflation factor

๐” ๐ค ๐›ผ โข ( ๐‘› , ๐‘š , ๐‘‘ , ๐›ฟ , ๐›ฟ โ€ฒ , ๐‘… ) โ‰œ 37 โข log โก ( 6 โข ๐‘š 2 ๐‘š โข ๐›ฟ ) โข [ log โก ( 4 ๐›ฟ โ€ฒ ) + 5 โข ๐‘‘ โข log โก ( 2 + 2 โข ๐ฟ ๐ค ๐›ผ โ€– ๐ค ๐›ผ โ€– โˆž โข ( โ„œ ๐ค ๐›ผ , ๐‘› + ๐‘… ) ) ] ,

(43)

where ๐ฟ ๐ค ๐›ผ denotes a Lipschitz constant satisfying | ๐ค ๐›ผ โข ( ๐‘ฅ , ๐‘ฆ ) โˆ’ ๐ค ๐›ผ โข ( ๐‘ฅ , ๐‘ง ) | โ‰ค ๐ฟ ๐ค ๐›ผ โข โ€– ๐‘ฆ โˆ’ ๐‘ง โ€– 2 for all ๐‘ฅ , ๐‘ฆ , ๐‘ง โˆˆ โ„ ๐‘‘ . With the notations in place, we can define the quantities appearing in Thm. 3:

๐” ~ ๐›ผ โ‰œ ๐” ๐ค ๐›ผ โข ( ๐‘› , ๐‘š , ๐‘‘ , ๐›ฟ โ‹† , ๐›ฟ โ€ฒ , โ„œ ๐’ฎ in , ๐ค ๐›ผ , ๐‘› ) and โ„œ max โ‰œ max โก ( โ„œ ๐’ฎ in , โ„œ ๐ค ๐›ผ , ๐‘› / 2 ๐‘š โ€  ) .

(44)

The scaling of these two parameters depends on the tail behavior of $\mathbf{k}_{\alpha}$ and the growth of the radii $\mathfrak{R}_{\mathcal{S}_{\textup{in}}}$ (which in turn would typically depend on the tail behavior of $\mathbb{P}$). The scaling of $\tilde{\mathfrak{M}}_{\alpha}$ and $\mathfrak{R}_{\max}$ stated in Thm. 3 under the compactly supported or subexponential tail conditions follows directly from Dwivedi & Mackey (2021, Tab. 2, App. I).

Proof of Thm. 3

The kt-swap step ensures that

$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\alpha\textup{KT}}) \leq \operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\alpha}^{(m,1)}),$

(45)

where $\mathcal{S}_{\alpha}^{(m,1)}$ denotes the first coreset output by kt-split with $\mathbf{k}_{\textup{split}} = \mathbf{k}_{\alpha}$. Next, we state a key interpolation result for $\operatorname{MMD}_{\mathbf{k}}$ that relates it to the MMD of its power kernels (Def. 2) (see App. G for the proof).

Proof of (10)

Repeating the proof of Thm. 2 with the bound (36) replaced by (9) yields that

$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\textup{KT+}}) \leq \inf_{\varepsilon,\, \mathcal{S}_{\textup{in}} \subset \mathcal{A}}\, 2\varepsilon + \tfrac{2^m}{n} \sqrt{\tfrac{16}{3} \|\mathbf{k}\|_{\infty} \log\big(\tfrac{6m}{2^m\delta_{\star}}\big) \cdot \big[\log\big(\tfrac{4}{\delta'}\big) + \mathcal{M}_{\mathbf{k}}(\mathcal{A}, \varepsilon)\big]}$

(58)

$\leq 2 \cdot \bar{\mathbf{M}}_{\textup{targetKT}}(\mathbf{k})$

(59)

with probability at least $p_{\textup{sg}}$. Let us denote this event by $\mathcal{E}_1$.

To establish the other bound, first we note that the kt-swap step ensures that

$\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\textup{KT+}}) \leq \operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\textup{KT+}}^{(m,1)}),$

(60)

where $\mathcal{S}_{\textup{KT+}}^{(m,1)}$ denotes the first coreset output by kt-split with $\mathbf{k}_{\textup{split}} = \mathbf{k}^{\dagger}$. Thus for this case the suitable analog of the sub-Gaussian parameter (in (12)) is given by

$\sigma_m = \tfrac{2^m}{n} \sqrt{\tfrac{2}{3} \|\mathbf{k}^{\dagger}\|_{\infty} \log\big(\tfrac{6m}{2^m\delta_{\star}}\big)} \quad \text{where} \quad \|\mathbf{k}^{\dagger}\|_{\infty} \leq 2.$

(61)

Next we note that $\mathbf{k}_{\alpha}(x, \cdot)$ belongs to the RKHS of $\mathbf{k}^{\dagger}$ with

$\|\mathbf{k}_{\alpha}(x, \cdot)\|_{\mathbf{k}^{\dagger}} \leq \sqrt{\|\mathbf{k}_{\alpha}\|_{\infty}}\, \|\mathbf{k}_{\alpha}(x, \cdot)\|_{\mathbf{k}_{\alpha}} = \sqrt{\|\mathbf{k}_{\alpha}\|_{\infty}}\, \sqrt{\mathbf{k}_{\alpha}(x, x)} \leq \|\mathbf{k}_{\alpha}\|_{\infty}$ (by LABEL:eq:rkhs_scaling_norms).

(62)

Now we are ready to adapt the arguments from Dwivedi & Mackey (2021, Proof of Thm. 4), replacing $\|\mathbf{k}\|_{\infty}$ by $\|\mathbf{k}^{\dagger}\|_{\infty}$ (which in turn we bound by $2$) in Dwivedi & Mackey (2021, Eqn. 35) due to (61), and replacing $\mathbf{k}, \|\mathbf{k}\|_{\infty}$ by $\mathbf{k}_{\alpha}, \|\mathbf{k}_{\alpha}\|_{\infty}$ respectively in Dwivedi & Mackey (2021, Lem. 5, 6, 7) due to (62). Overall these substitutions imply that we can repeat the proof of Thm. 3 from App. E with $\|\mathbf{k}_{\alpha}\|_{\infty,\textup{in}}$ replaced by $2\|\mathbf{k}_{\alpha}\|_{\infty}$. Putting this together with (60), we conclude that

MMD ๐ค โก ( ๐’ฎ in , ๐’ฎ KT+ )

โ‰ค ( 2 ๐‘š ๐‘› โข 2 โข โ€– ๐ค ๐›ผ โ€– โˆž ) 1 2 โข ๐›ผ โข ( 2 โข ๐” ~ ๐ค ๐›ผ ) 1 โˆ’ 1 2 โข ๐›ผ โข ( 2 + ( 4 โข ๐œ‹ ) ๐‘‘ / 2 ฮ“ โข ( ๐‘‘ 2 + 1 ) โ‹… โ„œ max ๐‘‘ 2 โ‹… ๐” ~ ๐ค ๐›ผ ) 1 ๐›ผ โˆ’ 1

(63)

= 2 1 2 โข ๐›ผ โ‹… ๐Œ ยฏ powerKT โข ( ๐ค ๐›ผ ) ,

(64)

with probability at least $p_{\textup{sg}}$. Let us denote this event by $\mathcal{E}_2$.

Note that the quantities on the right-hand side of the bounds (59) and (64) are deterministic given $\mathcal{S}_{\textup{in}}$ and thus can be computed a priori. Consequently, we apply the high-probability bound only for one of the two events $\mathcal{E}_1$ or $\mathcal{E}_2$, depending on which of the two quantities (deterministically) attains the minimum. Thus, the bound (10) holds with probability at least $p_{\textup{sg}}$ as claimed. □

Appendix G: Proof of LABEL:mmd_sandwich: An interpolation result for MMD

For two arbitrary distributions $\mathbb{P}$ and $\mathbb{Q}$, and any reproducing kernel $\mathbf{k}$, Gretton et al. (2012, Lem. 4) yields that

$\operatorname{MMD}_{\mathbf{k}}^2(\mathbb{P}, \mathbb{Q}) = \|(\mathbb{P} - \mathbb{Q})\mathbf{k}\|_{\mathbf{k}}^2.$

(65)
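For intuition, (65) reduces the squared MMD between two empirical measures to a double sum of kernel evaluations. A minimal NumPy sketch of this plug-in estimator (the kernel choice and helper names are ours, purely illustrative):

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd(X, Y, kernel=gauss_kernel):
    """MMD_k between the empirical measures on the rows of X and Y."""
    sq = kernel(X, X).mean() - 2 * kernel(X, Y).mean() + kernel(Y, Y).mean()
    return np.sqrt(max(sq, 0.0))  # clip tiny negative values from rounding
```

Here `mmd(S_in, S_out)` plays the role of the quantity $\operatorname{MMD}_{\mathbf{k}}(\mathcal{S}_{\textup{in}}, \mathcal{S}_{\textup{out}})$ bounded throughout these appendices.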

Let $\mathcal{F}$ denote the generalized Fourier transform (GFT) operator (Wendland (2004, Def. 8.9)). Since $\mathbf{k}(x, y) = \kappa(x - y)$, Wendland (2004, Thm. 10.21) yields that

$\|f\|_{\mathbf{k}}^2 = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} \frac{(\mathcal{F}(f)(\omega))^2}{\mathcal{F}(\kappa)(\omega)}\, d\omega, \quad \text{for } f \in \mathcal{H}.$

(66)

Let ๐œ… ^ โ‰œ โ„ฑ โข ( ๐œ… ) , and consider a discrete measure ๐”ป

โˆ‘ ๐‘–

1 ๐‘› ๐‘ค ๐‘– โข ๐œน ๐‘ฅ ๐‘– supported on finitely many points, and let ๐”ป โข ๐ค โข ( ๐‘ฅ ) โ‰œ โˆ‘ ๐‘ค ๐‘– โข ๐ค โข ( ๐‘ฅ , ๐‘ฅ ๐‘– )

โˆ‘ ๐‘ค ๐‘– โข ๐œ… โข ( ๐‘ฅ โˆ’ ๐‘ฅ ๐‘– ) . Now using the linearity of the GFT operator โ„ฑ , we find that for any ๐œ” โˆˆ โ„ ๐‘‘ ,

โ„ฑ ( ๐”ป ๐ค ) ( ๐œ” )

โ„ฑ ( โˆ‘ ๐‘–

1 ๐‘› ๐‘ค ๐‘– ๐œ… ( โ‹… โˆ’ ๐‘ฅ ๐‘– ) )

โˆ‘ ๐‘–

1 ๐‘› ๐‘ค ๐‘– โ„ฑ ( ๐œ… ( โ‹… โˆ’ ๐‘ฅ ๐‘– )

( โˆ‘ ๐‘–

1 ๐‘› ๐‘ค ๐‘– โข ๐‘’ โˆ’ โŸจ ๐œ” , ๐‘ฅ ๐‘– โŸฉ ) โ‹… ๐œ… ^ โข ( ๐œ” )

(67)

= ๐ท ^ โข ( ๐œ” ) โข ๐œ… ^ โข ( ๐œ” )

(68)

where we used the time-shifting property of GFT that โ„ฑ ( ๐œ… ( โ‹… โˆ’ ๐‘ฅ ๐‘– ) ) ( ๐œ” )

๐‘’ โˆ’ โŸจ ๐œ” , ๐‘ฅ ๐‘– โŸฉ ๐œ… ^ ( ๐œ” ) (proven for completeness in Lem. 1), and used the shorthand ๐ท ^ โข ( ๐œ” ) โ‰œ ( โˆ‘ ๐‘–

1 ๐‘› ๐‘ค ๐‘– โข ๐‘’ โˆ’ โŸจ ๐œ” , ๐‘ฅ ๐‘– โŸฉ ) in the last step. Putting together 65, 66, and 68 with ๐”ป

โ„™ โˆ’ โ„š , we find that

MMD ๐ค 2 โก ( โ„™ , โ„š )

1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ โข ( ๐œ” ) โข ๐‘‘ ๐œ”

(69)

= 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” ) โข ( ๐œ… ^ ๐›ผ โข ( ๐œ” ) ) 1 โˆ’ ๐›ผ ๐›ผ โข ๐‘‘ ๐œ”

(70)

= 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” ) โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ โข ( ๐œ… ^ ๐›ผ โข ( ๐œ” ) ) 1 โˆ’ ๐›ผ ๐›ผ โข ๐‘‘ ๐œ”

(71)

โ‰ค ( ๐‘– ) 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ โข ( โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” ) โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ โข ๐œ… ^ ๐›ผ โข ( ๐œ” ) โข ๐‘‘ ๐œ” ) 1 โˆ’ ๐›ผ ๐›ผ

(72)

= 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข ( โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ ) 2 โˆ’ 1 ๐›ผ โข ( โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ 2 โข ๐›ผ โข ( ๐œ” ) ๐‘‘ โข ๐œ” ) 1 โˆ’ ๐›ผ ๐›ผ

(73)

= ( 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” โ€ฒ ) โข ๐œ… ^ ๐›ผ โข ( ๐œ” โ€ฒ ) โข ๐‘‘ ๐œ” โ€ฒ ) 2 โˆ’ 1 ๐›ผ โข ( 1 ( 2 โข ๐œ‹ ) ๐‘‘ / 2 โข โˆซ โ„ ๐‘‘ ๐ท ^ 2 โข ( ๐œ” ) โข ๐œ… ^ 2 โข ๐›ผ โข ( ๐œ” ) ๐‘‘ โข ๐œ” ) 1 โˆ’ ๐›ผ ๐›ผ

(74)

= ( ๐‘– โข ๐‘– ) ( MMD ๐ค ๐›ผ 2 โก ( โ„™ , โ„š ) ) 2 โˆ’ 1 ๐›ผ โ‹… ( MMD ๐ค 2 โข ๐›ผ 2 โก ( โ„™ , โ„š ) ) 1 ๐›ผ โˆ’ 1 ,

(75)

where step (i) makes use of Jensen's inequality and the fact that the function $t \mapsto t^{\frac{1-\alpha}{\alpha}}$ for $t \geq 0$ is concave for $\alpha \in [\frac12, 1]$, and step (ii) follows by applying (69) to the kernels $\mathbf{k}_{\alpha}$ and $\mathbf{k}_{2\alpha}$ and noting that by definition $\mathcal{F}(\mathbf{k}_{\alpha}) = \hat{\kappa}^{\alpha}$ and $\mathcal{F}(\mathbf{k}_{2\alpha}) = \hat{\kappa}^{2\alpha}$. Noting that MMD is a non-negative quantity and taking square roots establishes the claim LABEL:eq:mmd_sandwich.
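For readability, the conclusion of this derivation can be restated after taking square roots (MMD is non-negative); the display below is just (75) rewritten, not a new claim:

```latex
\mathrm{MMD}_{\mathbf{k}}(\mathbb{P},\mathbb{Q})
  \;\le\; \big(\mathrm{MMD}_{\mathbf{k}_{\alpha}}(\mathbb{P},\mathbb{Q})\big)^{2-\frac{1}{\alpha}}
  \cdot \big(\mathrm{MMD}_{\mathbf{k}_{2\alpha}}(\mathbb{P},\mathbb{Q})\big)^{\frac{1}{\alpha}-1},
  \qquad \alpha \in [\tfrac{1}{2}, 1].
```

Since the two exponents sum to $1$, the target MMD is controlled by a geometric interpolation between the two power-kernel MMDs.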

Lemma 1 (Shifting property of the generalized Fourier transform)

If $\hat{\kappa}$ denotes the generalized Fourier transform (GFT) (Wendland, 2004, Def. 8.9) of the function $\kappa: \mathbb{R}^d \to \mathbb{R}$, then $e^{-i\langle \cdot, x_i \rangle} \hat{\kappa}$ is the GFT of the shifted function $\kappa(\cdot - x_i)$, for any $x_i \in \mathbb{R}^d$.

Proof

Note that by definition of the GFT $\hat{\kappa}$ (Wendland, 2004, Def. 8.9), we have

$\int \kappa(x)\, \hat{\gamma}(x)\, dx = \int \hat{\kappa}(\omega)\, \gamma(\omega)\, d\omega,$

(76)

for all suitable Schwartz functions $\gamma$ (Wendland, 2004, Def. 5.17), where $\hat{\gamma}$ denotes the Fourier transform (Wendland, 2004, Def. 5.15) of $\gamma$, since the GFT and FT coincide for these functions (as noted in the discussion after Wendland (2004, Def. 8.9)). Thus to prove the lemma, we need to verify that

$\int \kappa(x - x_i)\, \hat{\gamma}(x)\, dx = \int e^{-i\langle \omega, x_i \rangle}\, \hat{\kappa}(\omega)\, \gamma(\omega)\, d\omega,$

(77)

for all suitable Schwartz functions $\gamma$. Starting with the right-hand side of the display (77), we have

$\int e^{-i\langle \omega, x_i \rangle}\, \hat{\kappa}(\omega)\, \gamma(\omega)\, d\omega = \int \hat{\kappa}(\omega)\, \big(e^{-i\langle \omega, x_i \rangle} \gamma(\omega)\big)\, d\omega \overset{(i)}{=} \int \kappa(x)\, \hat{\gamma}(x + x_i)\, dx \overset{(ii)}{=} \int \kappa(z - x_i)\, \hat{\gamma}(z)\, dz,$

(78)

where step (i) follows from the shifting property of the FT (Wendland, 2004, Thm. 5.16(4)) and the fact that the GFT condition (76) holds for the shifted function $\gamma(\cdot + x_i)$ as well, since it is still a Schwartz function (recall that $\hat{\gamma}$ is the FT), and step (ii) follows from a change of variables. We have thus established (77), and the proof is complete.

Appendix H: Sub-optimality of single function guarantees with root KT

Define ๐ค ~ rt as the scaled version of ๐ค rt , i.e., ๐ค ~ rt โ‰œ ๐ค rt / โ€– ๐ค rt โ€– โˆž that is bounded by 1 . Then Zhang & Zhao (2013, Proof of Prop. 2.3) implies that

โ€– ๐‘“ โ€– ๐ค rt

1 โ€– ๐ค rt โ€– โˆž โข โ€– ๐‘“ โ€– ๐ค ~ rt .

(79)

And thus we also have โ„‹ rt

โ„‹ ~ rt where โ„‹ rt and โ„‹ ~ rt respectively denote the RKHSs of ๐ค rt and ๐ค ~ rt .

Next, we note that for any two kernels $\mathbf{k}_1$ and $\mathbf{k}_2$ with corresponding RKHSs $\mathcal{H}_1$ and $\mathcal{H}_2$ satisfying $\mathcal{H}_1 \subset \mathcal{H}_2$, in the convention of Zhang & Zhao (2013, Lem. 2.2, Prop. 2.3), we have

$\frac{\|f\|_{\mathbf{k}_2}}{\|f\|_{\mathbf{k}_1}} \leq \beta(\mathcal{H}_1, \mathcal{H}_2) \leq \lambda(\mathcal{H}_1, \mathcal{H}_2) \quad \text{for } f \in \mathcal{H}_1.$

(80)

Consequently, we have

$\max_{x \in \mathcal{S}_{\textup{in}}} \sqrt{\mathbf{k}_{\textup{rt}}(x, x)}\, \frac{\|f\|_{\mathbf{k}_{\textup{rt}}}}{\|f\|_{\mathbf{k}}} \leq \sqrt{\|\mathbf{k}_{\textup{rt}}\|_{\infty}}\, \frac{\|f\|_{\mathbf{k}_{\textup{rt}}}}{\|f\|_{\mathbf{k}}} \overset{(79)}{=} \frac{\|f\|_{\tilde{\mathbf{k}}_{\textup{rt}}}}{\|f\|_{\mathbf{k}}} \leq \lambda(\mathcal{H}, \tilde{\mathcal{H}}_{\textup{rt}}),$

(81)

where in the last step we have applied the bound (80) with $(\mathbf{k}_1, \mathcal{H}_1) \leftarrow (\mathbf{k}, \mathcal{H})$ and $(\mathbf{k}_2, \mathcal{H}_2) \leftarrow (\tilde{\mathbf{k}}_{\textup{rt}}, \tilde{\mathcal{H}}_{\textup{rt}})$, since $\mathcal{H} \subset \mathcal{H}_{\textup{rt}} = \tilde{\mathcal{H}}_{\textup{rt}}$.

Next, we apply (81) to the kernels studied in Dwivedi & Mackey (2021), noting that all the kernels in that work were scaled to ensure $\|\mathbf{k}\|_{\infty} = 1$ and in fact satisfied $\mathbf{k}(x, x) = 1$. Consequently, the multiplicative factor stated in the discussion after Thm. 1, namely $\sqrt{\frac{\|\mathbf{k}_{\textup{rt}}\|_{\infty,\textup{in}}}{\|\mathbf{k}\|_{\infty,\textup{in}}}}\, \frac{\|f\|_{\mathbf{k}_{\textup{rt}}}}{\|f\|_{\mathbf{k}}}$, can be bounded by $\lambda(\mathcal{H}, \tilde{\mathcal{H}}_{\textup{rt}})$ given the arguments above.

For ๐ค

Gauss โข ( ๐œŽ ) kernels, Zhang & Zhao (2013, Prop. 3.5(1)) yields that

๐œ† โข ( โ„‹ , โ„‹ ~ rt )

2 ๐‘‘ / 2 .

(82)

For ๐ค

B-spline โข ( 2 โข ๐›ฝ + 1 , ๐›พ ) with ๐›ฝ โˆˆ 2 โข โ„• + 1 , Zhang & Zhao (2013, Prop. 3.5(1)) yields that

๐œ† โข ( โ„‹ , โ„‹ ~ rt )

1 .

(83)

For ๐ค

Matรฉrn ( ๐œˆ , ๐›พ ) with ๐œˆ

๐‘‘ , some algebra along with Zhang & Zhao (2013, Prop 3.1) yields that

๐œ† โข ( โ„‹ , โ„‹ ~ rt )

ฮ“ โข ( ๐œˆ ) โข ฮ“ โข ( ( ๐œˆ โˆ’ ๐‘‘ ) / 2 ) ฮ“ โข ( ๐œˆ โˆ’ ๐‘‘ / 2 ) โข ฮ“ โข ( ๐œˆ / 2 ) โ‰ฅ 1 .

(84) Appendix IAdditional experimental results

This section provides additional experimental details and results deferred from Sec. 4.

Common settings and error computation   To obtain an output coreset of size $n^{1/2}$ from $n$ input points, we (a) take every $n^{1/2}$-th point for standard thinning (ST) and (b) run KT with $m = \frac12 \log_2 n$, using an ST coreset as the base coreset in kt-swap. For the Gaussian and MoG targets we use i.i.d. points as input, and for MCMC targets we use an ST coreset after burn-in as the input (see App. I for more details). We compute errors with respect to $\mathbb{P}$ whenever it is available in closed form and otherwise use $\mathbb{P}_{\textup{in}}$. For each input sample size $n \in \{2^4, 2^6, \ldots, 2^{14}\}$ with $\delta_i = \frac{1}{2n}$, we report the mean MMD or function integration error $\pm 1$ standard error across 10 independent replications of the experiment (the standard errors are too small to be visible in all experiments). We also plot the ordinary least squares fit (of log mean error against log coreset size), with the slope of the fit reported as the empirical decay rate; e.g., for an OLS fit with slope $-0.25$, we display the decay rate $n^{-0.25}$.

Details of test functions   We note the following: (a) For Gaussian targets, the error for the CIF function with i.i.d. input is measured between the sample mean over the $n$ input points and that over the output points obtained by standard thinning of the input sequence, since $\mathbb{P} f_{\textup{CIF}}$ does not admit a closed form. (b) To define the function $f: x \mapsto \mathbf{k}(X', x)$, we first draw a sample $X \sim \mathbb{P}$, independent of the input, and then set $X' = 2X$. For the MCMC targets, we instead draw a point uniformly from the held-out points not used as input for KT. For each target, the sample is drawn exactly once and then fixed throughout all sample sizes and repetitions.

I.1 Mixture of Gaussians Experiments

Our mixture of Gaussians target is given by $\mathbb{P} = \frac{1}{M} \sum_{j=1}^{M} \mathcal{N}(\mu_j, \mathbf{I}_d)$ for $M \in \{4, 6, 8\}$, where

$\mu_1 = [3, 3]^{\top}, \quad \mu_2 = [-3, 3]^{\top}, \quad \mu_3 = [-3, -3]^{\top}, \quad \mu_4 = [3, -3]^{\top},$

(85)

$\mu_5 = [0, 6]^{\top}, \quad \mu_6 = [-6, 0]^{\top}, \quad \mu_7 = [6, 0]^{\top}, \quad \mu_8 = [0, -6]^{\top}.$

(86)
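The i.i.d. input points for these targets can be generated by sampling a mixture component uniformly and then adding standard Gaussian noise; a minimal sketch (the helper names are ours, and we use the symmetric corner means $(\pm 3, \pm 3)$ for the $M = 4$ target):

```python
import numpy as np

def sample_mog(n, means, rng=None):
    """Draw n i.i.d. points from P = (1/M) sum_j N(mu_j, I_d)."""
    rng = np.random.default_rng(rng)
    means = np.asarray(means, dtype=float)
    comps = rng.integers(len(means), size=n)  # uniform component assignment
    return means[comps] + rng.standard_normal((n, means.shape[1]))

corner_means = [[3, 3], [-3, 3], [-3, -3], [3, -3]]  # assumed M = 4 target
X = sample_mog(1024, corner_means, rng=0)
```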

Two independent replicates of Fig. 1 can be found in Sec. 2.3. Finally, we display the mean MMD ($\pm 1$ standard error across ten independent experiment replicates) as a function of coreset size in Sec. 2.3 for the $M = 4$ and $M = 6$ component MoG targets. The conclusions from Sec. 2.3 are identical to those from the bottom row of Fig. 1: target KT and root KT provide similar MMD errors with Gauss $\mathbf{k}$, and all variants of KT provide a significant improvement over i.i.d. sampling, both in magnitude and in decay rate with input size. Moreover, the observed decay rates for KT+ closely match the rates guaranteed by our theory in Tab. 3.

Figure 4: Generalized kernel thinning (KT) and i.i.d. coresets for various kernels $\mathbf{k}$ (in parentheses) and an 8-component mixture of Gaussians target $\mathbb{P}$ with equidensity contours underlaid. These plots are independent replicates of Fig. 1. See Sec. 4 for more details.

Figure 5: Kernel thinning versus i.i.d. sampling. For mixture of Gaussians targets $\mathbb{P}$ with $M \in \{4, 6\}$ components and the kernel choices of Sec. 4, target KT with Gauss $\mathbf{k}$ provides $\operatorname{MMD}_{\mathbf{k}}(\mathbb{P}, \mathbb{P}_{\textup{out}})$ error comparable to root KT, and both provide an $n^{-1/2}$ decay rate, improving significantly over the $n^{-1/4}$ decay rate of i.i.d. sampling. For the other kernels, KT+ provides a decay rate close to $n^{-1/2}$ for IMQ and B-spline $\mathbf{k}$, and $n^{-0.35}$ for Laplace $\mathbf{k}$. See Sec. 4 for further discussion.

I.2 MCMC experiments

Our set-up for MCMC experiments follows closely that of Dwivedi & Mackey (2021). For complete details on the targets and sampling algorithms we refer the reader to Riabiz et al. (2020a, Sec. 4).

Goodwin and Lotka-Volterra experiments

From Riabiz et al. (2020b), we use the output of four distinct MCMC procedures targeting each of two $d = 4$-dimensional posterior distributions $\mathbb{P}$: (1) a posterior over the parameters of the Goodwin model of oscillatory enzymatic control (Goodwin, 1965) and (2) a posterior over the parameters of the Lotka-Volterra model of oscillatory predator-prey evolution (Lotka, 1925; Volterra, 1926). For each of these targets, Riabiz et al. (2020b) provide $2 \times 10^6$ sample points from the following four MCMC algorithms: Gaussian random walk (RW), adaptive Gaussian random walk (adaRW, Haario et al., 1999), Metropolis-adjusted Langevin algorithm (MALA, Roberts & Tweedie, 1996), and pre-conditioned MALA (pMALA, Girolami & Calderhead, 2011).

Hinch experiments

Riabiz et al. (2020b) also provide the output of two independent Gaussian random walk MCMC chains targeting each of two $d = 38$-dimensional posterior distributions $\mathbb{P}$: (1) a posterior over the parameters of the Hinch model of calcium signalling in cardiac cells (Hinch et al., 2004) and (2) a tempered version of the same posterior, as defined by Riabiz et al. (2020a, App. S5.4).

Burn-in and standard thinning   We discard the initial burn-in points of each chain using the maximum burn-in period reported in Riabiz et al. (2020a, Tabs. S4 & S6, App. S5.4). We also normalize each Hinch chain by subtracting the post-burn-in sample mean and dividing each coordinate by its post-burn-in sample standard deviation. To obtain an input sequence $\mathcal{S}_{\textup{in}}$ of length $n$ to be fed into a thinning algorithm, we downsample the points at the remaining even indices using standard thinning (odd indices are held out). When applying standard thinning to any Markov chain output, we adopt the convention of keeping the final sample point.

The selected burn-in periods for the Goodwin task were 820,000 for RW; 824,000 for adaRW; 1,615,000 for MALA; and 1,475,000 for pMALA. The respective numbers for the Lotka-Volterra task were 1,512,000 for RW; 1,797,000 for adaRW; 1,573,000 for MALA; and 1,251,000 for pMALA.
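The pre-processing conventions above can be sketched as follows (a minimal illustration; the function is ours, not the authors' released pipeline, and it thins by counting back from the final point so that the last sample is always kept):

```python
import numpy as np

def prepare_input(chain, burn_in, n):
    """Discard burn-in, keep even post-burn-in indices (odd are held out),
    then standard-thin down to n points, retaining the final sample point."""
    post = chain[burn_in:]
    even = post[::2]
    step = len(even) // n
    # walk backwards from the last point in strides of `step`, then restore order
    return even[::-1][::step][:n][::-1]
```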

Additional remarks on Fig. 3   When a Markov chain is fast mixing (as in the Goodwin and Lotka-Volterra examples), we expect standard thinning to have $\Omega(n^{-1/4})$ error. However, when the chain is slow mixing, standard thinning can enjoy a faster rate of decay due to a certain degeneracy of the chain that leads it to lie close to a one-dimensional curve. In the Hinch figures, we observe these better-than-i.i.d. rates of decay for standard thinning, but, remarkably, KT+ still offers improvements in both MMD and integration error. Moreover, in this setting, every additional point discarded via improved compression translates into thousands of CPU hours saved in downstream heart-model simulations.

Figure 6: Kernel thinning+ (KT+) vs. standard MCMC thinning (ST). For kernels without fast-decaying square-roots, KT+ improves the MMD and integration error decay rates in each posterior inference task.

Appendix J: Upper bounds on RKHS covering numbers

In this section, we state several covering number bounds for RKHSs, for both generic and specific kernels. We then use these bounds with Thm. 2 (or Sec. 2.3) to establish MMD guarantees for the output of generalized kernel thinning, as summarized in Tab. 3.

We first state covering number bounds for the RKHSs associated with generic kernels that are either (a) analytic or (b) finitely many times differentiable. These results follow essentially from Sun & Zhou (2008); Steinwart & Christmann (2008), but we provide a proof in Sec. J.2 for completeness.

Proposition 2 (Covering numbers for analytic and differentiable kernels)

The following results hold true.

(a)

Analytic kernels: Suppose that $\mathbf{k}(x, y) = \kappa(\|x - y\|_2^2)$ for $\kappa: \mathbb{R} \to \mathbb{R}_+$ real-analytic with convergence radius $R_{\kappa}$, that is,

$\big|\tfrac{1}{j!} \kappa_+^{(j)}(0)\big| \leq C_{\kappa} (2 / R_{\kappa})^j \quad \text{for all } j \in \mathbb{N}_0$

(87)

for some constant $C_{\kappa}$, where $\kappa_+^{(j)}$ denotes the right-sided $j$-th derivative of $\kappa$. Then for any set $\mathcal{A} \subset \mathbb{R}^d$ and any $\varepsilon \in (0, \frac12)$, we have

$\mathcal{M}_{\mathbf{k}}(\mathcal{A}, \varepsilon) \leq \mathcal{N}_2(\mathcal{A}, r^{\dagger}/2) \cdot \big(4 \log(1/\varepsilon) + 2 + 4 \log(16 C_{\kappa} + 1)\big)^{d+1},$

(88)

$\text{where } r^{\dagger} \triangleq \min\Big(\sqrt{\tfrac{R_{\kappa}}{2d}},\ \sqrt{R_{\kappa} + D_{\mathcal{A}}^2} - D_{\mathcal{A}}\Big), \quad \text{and } D_{\mathcal{A}} \triangleq \max_{x, y \in \mathcal{A}} \|x - y\|_2.$

(89)

(89) (b)

Differentiable kernels: Suppose that for $\mathcal{X} \subset \mathbb{R}^d$, the kernel $\mathbf{k}: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is $s$-times continuously differentiable, i.e., all partial derivatives $\partial^{\alpha,\alpha} \mathbf{k}: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ exist and are continuous for all multi-indices $\alpha \in \mathbb{N}_0^d$ with $|\alpha| \leq s$. Then, for any closed Euclidean ball $\bar{\mathcal{B}}_2(r)$ contained in $\mathcal{X}$ and any $\varepsilon > 0$, we have

$\mathcal{M}_{\mathbf{k}}(\bar{\mathcal{B}}_2(r), \varepsilon) \leq c_{s,d,\mathbf{k}} \cdot r^d \cdot (1/\varepsilon)^{d/s},$

(90)

for some constant $c_{s,d,\mathbf{k}}$ that depends only on $s$, $d$, and $\mathbf{k}$.

Next, we state several explicit bounds on covering numbers for several popular kernels. See Sec. J.3 for the proof.

Next, we state results that relate RKHS covering numbers under a change of domain for a shift-invariant kernel. We use $\mathcal{B}_{\|\cdot\|}(x; r) \triangleq \{y \in \mathbb{R}^d : \|x - y\| \leq r\}$ to denote the radius-$r$ ball in $\mathbb{R}^d$ in the metric induced by a norm $\|\cdot\|$.

Definition 4 (Euclidean covering numbers)

Given a set $\mathcal{X} \subset \mathbb{R}^d$, a norm $\|\cdot\|$, and a scalar $\varepsilon > 0$, we use $\mathcal{N}_{\|\cdot\|}(\mathcal{X}, \varepsilon)$ to denote the $\varepsilon$-covering number of $\mathcal{X}$ with respect to the $\|\cdot\|$-norm. That is, $\mathcal{N}_{\|\cdot\|}(\mathcal{X}, \varepsilon)$ denotes the minimum cardinality over all covers $\mathcal{C} \subset \mathcal{X}$ that satisfy

$\mathcal{X} \subset \cup_{z \in \mathcal{C}}\, \mathcal{B}_{\|\cdot\|}(z; \varepsilon).$

(104)

When $\|\cdot\| = \|\cdot\|_q$ for some $q \in [1, \infty]$, we use the shorthand $\mathcal{N}_q \triangleq \mathcal{N}_{\|\cdot\|_q}$.

Lemma 4 (Covering number for shift-invariant kernels with compactly supported spectral density)

Suppose $\kappa: \mathbb{R}^d \to \mathbb{R}$ is the Fourier transform

$\kappa(z) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \hat{\kappa}(\xi)\, e^{-i \langle z, \xi \rangle}\, d\xi$

(108)

of a bounded nonnegative function $\hat{\kappa}$ supported on $[-a, a]^d$ for a finite $a > 0$. Then the shift-invariant kernel $\mathbf{k}(x, y) = \kappa(x - y)$ satisfies

$\mathcal{M}_{\mathbf{k}}([0, 1]^d, \varepsilon) \leq 2 d \log 2 \cdot (c_{\kappa,a,d} + 1)^d \big(c_{\kappa,a,d} + \log\big(\tfrac{16 \kappa(0)}{\varepsilon}\big)\big)$

(109)

$\text{where } c_{\kappa,a,d} \triangleq \max\Big\{1,\ \lceil 2a \rceil,\ \log\Big(\big(\tfrac{3a}{2\pi}\big)^d \cdot \tfrac{32 d \|\hat{\kappa}\|_{\infty}}{3 \varepsilon^2}\Big)\Big\}.$

(110)

Proof

Our proof makes use of Zhou (2002, Thm. 2). In that result, the author bounds the external covering number of the balls $\{f \in \mathcal{H} : \|f\|_{\mathbf{k}} \leq R\}$ in the RKHS using centers from the class of continuous functions in $\|\cdot\|_{\infty}$-norm. Notably, given an $\varepsilon$-cover $\mathcal{C} = \{f_1, \ldots, f_k\}$ of smallest size consisting of continuous functions for the unit RKHS ball $\mathcal{B}_{\mathbf{k}}$, we can immediately identify an internal $2\varepsilon$-cover $\{g_1, g_2, \ldots, g_k\}$ with $g_j \in \mathcal{B}_{\mathbf{k}}$ for each $j \in [k]$. To see this claim, for each $f_j \in \mathcal{C}$, choose an arbitrary $g_j \in \mathcal{B}_{\mathbf{k}}$ in the $\varepsilon$-ball centered around $f_j$; such a $g_j$ exists since $\mathcal{C}$ is a cover of smallest size. Now for any $g \in \mathcal{B}_{\mathbf{k}}$, there exists an $f_j \in \mathcal{C}$ such that $\|g - f_j\|_{\infty} \leq \varepsilon$ by the definition of a cover, and consequently $\|g - g_j\|_{\infty} \leq \|g - f_j\|_{\infty} + \|f_j - g_j\|_{\infty} \leq 2\varepsilon$ by the triangle inequality and the definition of $g_j$. The claim then follows.

Using this claim and substituting $n \leftarrow d$, $R \leftarrow 1$, and $\eta \leftarrow \varepsilon/2$ in Zhou (2002, Thm. 1), we find that the right-hand side of Zhou (2002, (4.5)) is a valid upper bound on $\mathcal{M}_{\mathbf{k}}([0,1]^d, \varepsilon)$ in our notation:

โ„ณ ๐ค โข ( [ 0 , 1 ] ๐‘‘ , ๐œ€ )

โ‰ค ( ๐‘ + 1 ) ๐‘‘ โข log โก [ 8 โข ๐œ… โข ( 0 ) โข ( ๐‘ + 1 ) ๐‘‘ / 2 โข ( ๐‘ โข 2 ๐‘ ) ๐‘‘ โข 2 ๐œ€ ]

(111)

โ‰ค ( ๐‘ + 1 ) ๐‘‘ + 1 โ‹… ๐‘‘ โข log โก 2 + ( ๐‘ + 1 ) ๐‘‘ โข [ 3 โข ๐‘‘ 2 โข log โก ( ๐‘ + 1 ) + log โก ( 16 โข ๐œ… โข ( 0 ) ๐œ€ ) ]

(112)

โ‰ค ( ๐‘– ) 2 โข ๐‘‘ โข log โก 2 โ‹… ( ๐‘ + 1 ) ๐‘‘ + 1 + ( ๐‘ + 1 ) ๐‘‘ โข log โก ( 16 โข ๐œ… โข ( 0 ) ๐œ€ ) ,

(113)

for any positive integer $c$ satisfying $\lambda_{\kappa}(c) \leq \big(\tfrac{\varepsilon/2}{2 \cdot 1}\big)^2 = \tfrac{\varepsilon^2}{16}$, where

$\lambda_{\kappa}(c) \triangleq \frac{d\, (1 + 2^{-c})^{d-1}}{(2\pi)^d} \max_{j \in [d]} \int_{\xi \in [-c/2, c/2]^d} \hat{\kappa}(\xi)\, \frac{|\xi_j|^c}{c^c}\, d\xi$

(114)

$+ \frac{\big(1 + (c\, 2^c)^d\big)^2}{(2\pi)^d} \int_{\xi \notin [-c/2, c/2]^d} \hat{\kappa}(\xi)\, d\xi.$

(115)

In the display (113), step (i) follows from the fact that $3 \log x \leq 2 x \log 2$ for all $x \geq 2$ and $c + 1 \geq 2$.

Now for any ๐‘ โ‰ฅ โŒˆ 2 โข ๐‘Ž โŒ‰ , the second term in the display 115 is zero. For any such ๐‘ , we find that

max ๐‘— โˆˆ [ ๐‘‘ ] โข โˆซ ๐œ‰ โˆˆ [ โˆ’ ๐‘ / 2 , ๐‘ / 2 ] ๐‘‘ ๐œ… ^ โข ( ๐œ‰ ) โข | ๐œ‰ ๐‘— | ๐‘ ๐‘ ๐‘ โข ๐‘‘ ๐œ‰

max ๐‘— โˆˆ [ ๐‘‘ ] โข โˆซ ๐œ‰ โˆˆ [ โˆ’ ๐‘Ž , ๐‘Ž ] ๐‘‘ ๐œ… ^ โข ( ๐œ‰ ) โข | ๐œ‰ ๐‘— | ๐‘ ๐‘ ๐‘ โข ๐‘‘ ๐œ‰

(116)

โ‰ค โ€– ๐œ… ^ โ€– โˆž ๐‘ ๐‘ โ‹… โˆซ ๐œ‰ โˆˆ [ โˆ’ ๐‘Ž , ๐‘Ž ] ๐‘‘ | ๐œ‰ 1 | ๐‘ ๐‘ ๐‘ โข ๐‘‘ ๐œ‰

(117)

= โ€– ๐œ… ^ โ€– โˆž โข ( 2 โข ๐‘Ž ) ๐‘‘ โˆ’ 1 ๐‘ ๐‘ โ‹… โˆซ ๐œ‰ 1 โˆˆ [ โˆ’ ๐‘Ž , ๐‘Ž ] | ๐œ‰ 1 | ๐‘ โข ๐‘‘ ๐œ‰ 1

(118)

= โ€– ๐œ… ^ โ€– โˆž โข ( 2 โข ๐‘Ž ) ๐‘‘ โˆ’ 1 ๐‘ ๐‘ โ‹… 2 โข ๐‘Ž ๐‘ + 1 ๐‘ + 1

(119)

= โ€– ๐œ… ^ โ€– โˆž โข 2 ๐‘‘ โข ๐‘Ž ๐‘‘ + ๐‘ ๐‘ ๐‘ + 1 โ‹… ( 1 + ๐‘ โˆ’ 1 ) โˆ’ 1 .

(120)

Now, to achieve

$\lambda_{\kappa}(c) \leq \frac{d\, (1 + 2^{-c})^{d-1}}{(2\pi)^d} \cdot \frac{\|\hat{\kappa}\|_{\infty}\, 2^d a^{d+c}}{c^{c+1}} \cdot (1 + c^{-1})^{-1} \leq \frac{\varepsilon^2}{16},$

(121)

noting that for any $c \geq 1 \vee \lceil 2a \rceil$,

$\frac{d\, (1 + 2^{-c})^{d-1}}{(2\pi)^d} \cdot \frac{\|\hat{\kappa}\|_{\infty}\, 2^d a^{d+c}}{c^{c+1}} \cdot (1 + c^{-1})^{-1} \leq \frac{2 d \|\hat{\kappa}\|_{\infty}}{3} \cdot \frac{\big(a (1 + 2^{-c}) / \pi\big)^d}{(c / a)^c},$

(122)

it suffices to choose

$\frac{c}{a} \log\big(\tfrac{c}{a}\big) \geq \frac{1}{a} \log\Big(\big(\tfrac{3a}{2\pi}\big)^d \cdot \tfrac{32 d \|\hat{\kappa}\|_{\infty}}{3 \varepsilon^2}\Big),$

(123)

for which it suffices to choose

$c \geq 1 \vee \lceil 2a \rceil \vee \log\Big(\big(\tfrac{3a}{2\pi}\big)^d \cdot \tfrac{32 d \|\hat{\kappa}\|_{\infty}}{3 \varepsilon^2}\Big).$

(124)

Substituting the choice (124) into (113) yields the claimed bound (109).

Lemma 5 (Relation between Euclidean covering numbers)

We have

$\mathcal{N}_{\infty}(\mathcal{B}_2(r), 1) \leq \frac{1}{\sqrt{\pi d}} \cdot \Big[\big(1 + \tfrac{2r}{\sqrt{d}}\big) \sqrt{2 \pi e}\Big]^d \quad \text{for all } d \geq 1.$

(125)

Proof
(125) Proof

We apply Wainwright (2019, Lem. 5.7) with $\mathcal{B} = \mathcal{B}_2(r)$ and $\mathcal{B}' = \mathcal{B}_{\infty}(1)$ to conclude that

$\mathcal{N}_{\infty}(\mathcal{B}_2(r), 1) \leq \frac{\operatorname{Vol}(2 \mathcal{B}_2(r) + \mathcal{B}_{\infty}(1))}{\operatorname{Vol}(\mathcal{B}_{\infty}(1))} \leq \operatorname{Vol}\big(\mathcal{B}_2(2r + \sqrt{d})\big) \leq \frac{\pi^{d/2}}{\Gamma(\frac{d}{2} + 1)} \cdot (2r + \sqrt{d})^d,$

(126)
(126)

where $\operatorname{Vol}(\mathcal{X})$ denotes the $d$-dimensional Euclidean volume of $\mathcal{X} \subset \mathbb{R}^d$ and $\Gamma(a)$ denotes the Gamma function. Next, we apply the following bounds on the Gamma function from Batir (2017, Thm. 2.2):

$\Gamma(b + 1) \geq (b/e)^b \sqrt{2 \pi b}\ \text{ for any } b \geq 1, \quad \text{and} \quad \Gamma(b + 1) \leq (b/e)^b\, e \sqrt{2b}\ \text{ for any } b \geq 1.1.$

(127)

Thus, we have

$\mathcal{N}_{\infty}(\mathcal{B}_2(r), 1) \leq \frac{\pi^{d/2}}{\sqrt{\pi d}\, \big(\frac{d}{2e}\big)^{d/2}} \cdot (2r + \sqrt{d})^d \leq \frac{1}{\sqrt{\pi d}} \cdot \Big[\big(1 + \tfrac{2r}{\sqrt{d}}\big) \sqrt{2 e \pi}\Big]^d,$

(128)

as claimed, and we are done.
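As a numeric sanity check on Lem. 5 (our own illustration, not part of the proof): unit-$\ell_\infty$ cubes centered on the grid $(2\mathbb{Z})^d$ cover $\mathcal{B}_2(r)$, and any center that can be needed lies within Euclidean distance $r + \sqrt{d}$ of the origin, so counting those centers gives an explicit external cover whose size should stay below the bound (125) in small cases:

```python
import math
import numpy as np

def grid_cover_size(r, d):
    """Size of an explicit l_inf cover of B_2(r) by unit cubes on (2Z)^d."""
    reach = r + math.sqrt(d)          # centers can be needed only within this radius
    m = int(reach // 2) + 1
    axis = 2.0 * np.arange(-m, m + 1)
    grid = np.stack(np.meshgrid(*([axis] * d)), axis=-1).reshape(-1, d)
    return int(((grid**2).sum(axis=1) <= reach**2).sum())

def lemma5_bound(r, d):
    """Right-hand side of (125)."""
    return ((1 + 2 * r / math.sqrt(d)) * math.sqrt(2 * math.pi * math.e)) ** d / math.sqrt(math.pi * d)
```

This only checks that one explicit construction respects the bound for a few small $(r, d)$ pairs; it is not a verification of the lemma.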

J.2 Proof of Prop. 2: Covering numbers for analytic and differentiable kernels

First we apply LABEL:rkhs_cover_restriction_domain, so that it remains to establish the stated bounds simply on $\log \mathcal{N}^{\dagger}_{\mathbf{k}}(\mathcal{X}, \varepsilon)$.

Proof of bound (88) in part (a)

The bound (88) for the real-analytic kernel is a restatement of Sun & Zhou (2008, Thm. 2) in our notation (in particular, after making the following substitutions in their notation: $R \leftarrow 1$, $C_0 \leftarrow C_{\kappa}$, $r \leftarrow R_{\kappa}$, $\mathcal{X} \leftarrow \mathcal{A}$, $\tilde{r} \leftarrow r^{\dagger}$, $\eta \leftarrow \varepsilon$, $D \leftarrow D_{\mathcal{A}}^2$, $n \leftarrow d$). □

Proof of bound (90) for part (b):

Under these assumptions, Steinwart & Christmann (2008, Thm. 6.26) states that the $i$-th dyadic entropy number (Steinwart & Christmann, 2008, Def. 6.20) of the identity inclusion mapping from $\mathcal{H}|_{\bar{\mathcal{B}}_2(r)}$ to $L^{\infty}_{\bar{\mathcal{B}}_2(r)}$ is bounded by $c'_{s,d,\mathbf{k}} \cdot r^s\, i^{-s/d}$ for some constant $c'_{s,d,\mathbf{k}}$ independent of $\varepsilon$ and $r$. Given this bound on the entropy number, and applying Steinwart & Christmann (2008, Lem. 6.21), we conclude that the log-covering number $\log \mathcal{N}^{\dagger}_{\mathbf{k}}(\bar{\mathcal{B}}_2(r), \varepsilon)$ is bounded by $\ln 4 \cdot (c'_{s,d,\mathbf{k}}\, r^s / \varepsilon)^{d/s} = c_{s,d,\mathbf{k}}\, r^d \cdot (1/\varepsilon)^{d/s}$, as claimed. □

J.3 Proof of LABEL:rkhs_covering_numbers: Covering numbers for specific kernels

First we apply LABEL:rkhs_cover_restriction_domain so that it remains to establish the stated bounds in each part on the corresponding $\log \mathcal{N}_{\mathbf{k}}$.

Proof for Gauss kernel: Part LABEL:item:gauss_cover

The bound LABEL:eq:gauss_cover for the Gaussian kernel follows directly from Steinwart & Fischer (2021, Eqn. 11) along with the discussion stated just before it. Furthermore, the bound LABEL:eq:gauss_const for $C_{\mathrm{Gauss},d}$ is established in Steinwart & Fischer (2021, Eqn. 6) and in the discussion around it. □

Proof for Matรฉrn kernel: Part LABEL:item:matern_kernel

We claim that $\mathrm{Mat\acute{e}rn}(\nu, \gamma)$ is $\lfloor \nu - \frac{d}{2} \rfloor$-times continuously differentiable, which immediately implies the bound LABEL:eq:matern_cover using Prop. 2(b).

To prove the differentiability, we use the Fourier transform of Matérn kernels. For $\mathbf{k} = \mathrm{Mat\acute{e}rn}(\nu, \gamma)$, let $\kappa: \mathbb{R}^d \to \mathbb{R}$ denote the function such that $\mathbf{k}(x, y) = \kappa(x - y)$. Then, using the Fourier transform of $\kappa$ from Wendland (2004, Thm. 8.15) and noting that $\kappa$ is real-valued, we can write

$$\mathbf{k}(x, y) = c_{\mathbf{k},d} \int \cos\big(\omega^\top (x - y)\big)\, \big(\gamma^2 + \|\omega\|_2^2\big)^{-\nu}\, d\omega \tag{129}$$

for some constant ๐‘ ๐ค , ๐‘‘ depending only on the kernel parameter, and ๐‘‘ (due to the normalization of the kernel, and the Fourier transform convention). Next, for any multi-index ๐‘Ž โˆˆ โ„• 0 ๐‘‘ , we have

| โˆ‚ ๐‘Ž , ๐‘Ž cos โก ( ๐œ” โŠค โข ( ๐‘ฅ โˆ’ ๐‘ฆ ) ) โข ( ๐›พ 2 + โ€– ๐œ” โ€– 2 2 ) โˆ’ ๐œˆ | โ‰ค โˆ ๐‘—

1 ๐‘‘ ๐œ” ๐‘— 2 โข ๐‘Ž ๐‘— โข ( ๐›พ 2 + โ€– ๐œ” โ€– 2 2 ) โˆ’ ๐œˆ โ‰ค โ€– ๐œ” โ€– 2 2 โข โˆ‘ ๐‘—

1 ๐‘‘ ๐‘Ž ๐‘— ( ๐›พ 2 + โ€– ๐œ” โ€– 2 2 ) ๐œˆ ,

(130)

where โˆ‚ ๐‘Ž , ๐‘Ž denotes the partial derivative of order ๐‘Ž . Moreover, we have

โˆซ โ€– ๐œ” โ€– 2 2 โข โˆ‘ ๐‘—

1 ๐‘‘ ๐‘Ž ๐‘— ( ๐›พ 2 + โ€– ๐œ” โ€– 2 2 ) ๐œˆ โข ๐‘‘ ๐œ”

๐‘ ๐‘‘ โข โˆซ ๐‘Ÿ > 0 ๐‘Ÿ ๐‘‘ โˆ’ 1 โข ๐‘Ÿ 2 โข โˆ‘ ๐‘—

1 ๐‘‘ ๐‘Ž ๐‘— ( ๐›พ 2 + ๐‘Ÿ 2 ) ๐œˆ โข ๐‘‘ ๐‘Ÿ โ‰ค ๐‘ ๐‘‘ โข โˆซ ๐‘Ÿ > 0 ๐‘Ÿ ๐‘‘ โˆ’ 1 + 2 โข โˆ‘ ๐‘—

1 ๐‘‘ ๐‘Ž ๐‘— โˆ’ 2 โข ๐œˆ < ( ๐‘– ) โˆž ,

(131)

where step (i) holds whenever

$$d - 1 + 2\sum_{j=1}^{d} a_j - 2\nu < -1 \iff \sum_{j=1}^{d} a_j < \nu - \frac{d}{2}. \tag{132}$$
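As a small sanity check (an aside, not part of the argument), the equivalence in 132 between the tail-exponent condition and $\sum_j a_j < \nu - \frac{d}{2}$ can be verified mechanically:

```python
# check: d - 1 + 2*s - 2*nu < -1  is equivalent to  s < nu - d/2,
# where s stands for the multi-index sum of the a_j
for d in range(1, 8):
    for s in range(0, 6):
        for nu in [0.5, 1.0, 2.5, 3.0, 4.75]:
            p = d - 1 + 2 * s - 2 * nu  # tail exponent in (131)
            assert (p < -1) == (s < nu - d / 2)
```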

Then, applying Newey & McFadden (1994, Lemma 3.6), we conclude that for all multi-indices $a$ such that $\sum_{j=1}^{d} a_j \le \lfloor \nu - \frac{d}{2} \rfloor$, the partial derivative $\partial^{a,a}\, \mathbf{k}$ exists and is given by

$$c_{\mathbf{k},d} \int \partial^{a,a}\, \cos\big(\omega^\top (x - y)\big)\, \big(\gamma^2 + \|\omega\|_2^2\big)^{-\nu}\, d\omega, \tag{133}$$

and we are done. โ–ก

Proof for IMQ kernel: Part LABEL:item:imq_cover

The bounds LABEL:eq:imq_cover and LABEL:eq:imq_const follow from Sun & Zhou (2008, Ex. 3), noting that $\mathcal{N}_2\big(\mathbb{B}_2(r), \tilde{r}/2\big)$ is bounded by $\big(1 + \frac{4r}{\tilde{r}}\big)^d$ (Wainwright, 2019, Lem. 5.7). □

Proof for sinc kernel: Part LABEL:item:sinc_cover

Note that

$$\frac{1}{2\pi} \int \mathbf{1}(|\xi| \le \theta)\, e^{-iz\xi}\, d\xi = \frac{1}{2\pi} \int_{-\theta}^{\theta} \cos(z\xi)\, d\xi = \frac{1}{2\pi} \cdot \frac{2\sin(\theta z)}{z} = \frac{\theta}{\pi}\, \mathrm{sinc}(\theta z), \tag{134}$$

and hence ๐œ… โข ( ๐‘ง )

โˆ ๐‘—

1 ๐‘‘ sinc โข ( ๐œƒ โข ๐‘ง ๐‘— ) is the Fourier transform (see Lem. 4) of

๐œ… ^ โข ( ๐œ‰ )

( ๐œ‹ ๐œƒ ) ๐‘‘ โข โˆ ๐‘—

1 ๐‘‘ ๐Ÿ โข ( | ๐œ‰ ๐‘— | โ‰ค ๐œƒ ) .

(135)
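As a numerical aside, the identity 134 can be spot-checked by comparing a trapezoidal approximation of the truncated Fourier integral against $\frac{\theta}{\pi}\,\mathrm{sinc}(\theta z)$ (with the unnormalized convention $\mathrm{sinc}(x) = \sin(x)/x$):

```python
import math

def sinc(x):
    # unnormalized sinc: sin(x)/x with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(x) / x

def rect_fourier(z, theta, n=50000):
    # trapezoidal approximation of (1/(2*pi)) * integral_{-theta}^{theta} cos(z*xi) d(xi)
    h = 2 * theta / n
    total = 0.5 * (math.cos(-z * theta) + math.cos(z * theta))
    for k in range(1, n):
        total += math.cos(z * (-theta + k * h))
    return total * h / (2 * math.pi)

for z in [0.3, 1.0, 2.5]:
    for theta in [0.5, 1.0, 2.0]:
        assert abs(rect_fourier(z, theta) - (theta / math.pi) * sinc(theta * z)) < 1e-8
```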

Now we can apply Lem. 4 with $a = \theta$ and $\|\hat{\kappa}\|_\infty = (\pi/\theta)^d$ to obtain

$$c_{\kappa,a,d} = \max\Big\{1,\ \lceil 2\theta \rceil,\ \log\Big(\Big(\frac{3\theta}{2\pi}\Big)^d \cdot \frac{32d}{3\varepsilon^2} \cdot \frac{\pi^d}{\theta^d}\Big)\Big\} = \max\Big\{1,\ \lceil 2\theta \rceil,\ \log\Big(\Big(\frac{3}{2}\Big)^d \cdot \frac{32d}{3\varepsilon^2}\Big)\Big\}. \tag{136}$$
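The simplification in 136 rests on the cancellation $\big(\frac{3\theta}{2\pi}\big)^d \cdot \frac{\pi^d}{\theta^d} = \big(\frac{3}{2}\big)^d$; a quick check (an aside, not part of the proof):

```python
import math

# theta and pi cancel inside the logarithm of (136)
for d in [1, 2, 5, 8]:
    for theta in [0.25, 1.0, 4.0]:
        lhs = (3 * theta / (2 * math.pi)) ** d * (math.pi / theta) ** d
        assert abs(lhs - 1.5 ** d) <= 1e-9 * 1.5 ** d
```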

Now that for ๐‘ฅ , ๐‘ฆ โˆˆ [ โˆ’ ๐‘Ÿ , ๐‘Ÿ ] ๐‘‘ , we can define vectors ๐‘ฅ โ€ฒ and ๐‘ฆ โ€ฒ in [ 0 , 1 ] ๐‘‘ with ๐‘ฅ ๐‘— โ€ฒ

( ๐‘ฅ ๐‘— + ๐‘Ÿ ) / 2 โข ๐‘Ÿ and ๐‘ฆ ๐‘— โ€ฒ โ‰œ ( ๐‘ฆ ๐‘— + ๐‘Ÿ ) / ( 2 โข ๐‘Ÿ ) for each ๐‘— โˆˆ [ ๐‘‘ ] such that

sinc โข ( ๐œƒ โข ( ๐‘ฅ โˆ’ ๐‘ฆ ) )

sinc โข ( ๐‘Ÿ โข ๐œƒ โข ( ๐‘ฅ โ€ฒ โˆ’ ๐‘ฆ โ€ฒ ) ) .

(137)

And hence for ๐ค โข ( ๐‘ฅ , ๐‘ฆ )

sinc โข ( ๐œƒ โข ( ๐‘ฅ โˆ’ ๐‘ฆ ) ) , we can consider ๐ค โ€ฒ โข ( ๐‘ฅ , ๐‘ฆ )

sinc โข ( ๐‘Ÿ โข ๐œƒ โข ( ๐‘ฅ โˆ’ ๐‘ฆ ) ) so that โ„ณ ๐ค โข ( [ โˆ’ ๐‘Ÿ , ๐‘Ÿ ] ๐‘‘ , ๐œ€ )

โ„ณ ๐ค โ€ฒ โข ( [ 0 , 1 ] ๐‘‘ , ๐œ€ ) . Substituting ๐œƒ โ† ๐‘Ÿ โข ๐œƒ into the definition of ๐‘ ๐œ… , ๐‘Ž , ๐‘‘ above and invoking the bound 109 from Lem. 4 implies the desired claim. โ–ก

Proof for B-spline kernel: Part LABEL:item:spline_kernel

The analytical expression for the $(2\beta+2)$-fold recursive convolution of $\mathbf{1}_{[-\frac12, \frac12]}$ from Dwivedi & Mackey (2021, App. O.4.1) shows that the function $h_\beta: \mathbb{R} \to [0, 1]$ can be represented as a linear combination of functions $x \mapsto \max(a + x, 0)^{2\beta+1}$ for multiple different thresholds $a$, and consequently that $h_\beta$ is continuously differentiable $2\beta$ times on $\mathbb{R}$. Hence $\mathbf{k}(x, y) = \kappa(x - y)$ for $\kappa(z) = \mathfrak{B}_{2\beta+2}^{-d} \prod_{j=1}^{d} h_\beta(\gamma z_j)$ is $\beta$-times continuously differentiable, since for all multi-indices $\alpha_1, \alpha_2 \in \mathbb{N}_0^d$ we have $\frac{\partial^{|\alpha_1| + |\alpha_2|}}{\partial_x^{\alpha_1} \partial_y^{\alpha_2}}\, \mathbf{k}(x, y) = (-1)^{|\alpha_2|} \big(\frac{\partial^{|\alpha_1| + |\alpha_2|}}{\partial_z^{\alpha_1 + \alpha_2}}\, \kappa\big)(x - y)$. As a result, $\mathrm{B\text{-}spline}(2\beta+1, \gamma)$ satisfies the conditions of Prop. 2(b) with $s = \beta$, yielding the claim. □

Appendix K Proof of Tab. 3 results

In Tab. 3, the stated results for all the entries in the target KT column follow directly by substituting the covering number bounds from LABEL:rkhs_covering_numbers into the corresponding entry, along with the stated radii growth conditions for the target $\mathbb{P}$. (We substitute $m = \frac{1}{2}\log_2 n$ since we thin to output size $\sqrt{n}$.) For the KT+ column, the stated result follows either by taking the minimum of the first two columns (whenever the root KT guarantee applies) or by using the power KT guarantee. First we remark on how to always ensure a rate of at least $\mathcal{O}(n^{-\frac14})$, even when the guarantees from our theorems are larger, using a suitable baseline procedure, and then proceed with our proofs.

Remark 2 (Improvement over baseline thinning)

First we note that the kt-swap step ensures that, deterministically, $\mathrm{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}}) \le \mathrm{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{base}})$ and $\mathrm{MMD}_{\mathbf{k}}(\mathbb{P}, \mathcal{S}_{\mathrm{KT}}) \le 2\,\mathrm{MMD}_{\mathbf{k}}(\mathbb{P}, \mathcal{S}_{\mathrm{in}}) + \mathrm{MMD}_{\mathbf{k}}(\mathbb{P}, \mathcal{S}_{\mathrm{base}})$ for $\mathcal{S}_{\mathrm{base}}$ a baseline thinned coreset of size $\frac{n}{2^m}$ and any target $\mathbb{P}$. For example, if the input and baseline coresets are drawn i.i.d. and $\mathbf{k}$ is bounded, then $\mathrm{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}})$ and $\mathrm{MMD}_{\mathbf{k}}(\mathbb{P}, \mathcal{S}_{\mathrm{KT}})$ are $\mathcal{O}(\sqrt{2^m/n})$ with high probability (Tolstikhin et al., 2017, Thm. A.1), even if the guarantee of Thm. 2 is larger. As a consequence, in all well-defined KT variants, we can guarantee a rate of $n^{-\frac14}$ for $\mathrm{MMD}_{\mathbf{k}}(\mathcal{S}_{\mathrm{in}}, \mathcal{S}_{\mathrm{KT}})$ when the output size is $\sqrt{n}$, simply by using i.i.d. thinning as the baseline in the kt-swap step.

Gauss kernel

The target KT guarantee follows by substituting the covering number bound for the Gaussian kernel from LABEL:rkhs_covering_numbers LABEL:item:gauss_cover in 7, and the root KT guarantee follows directly from Dwivedi & Mackey (2021, Tab. 2). Putting the guarantees for root KT and target KT together (and taking the minimum of the two) yields the guarantee for KT+.

IMQ kernel

The target KT guarantee follows by putting together the covering number bound LABEL:rkhs_covering_numbers LABEL:item:imq_cover and the MMD bound 7.

For the root KT guarantee, we use a square-root dominating kernel $\tilde{\mathbf{k}}_{\mathrm{rt}} = \mathrm{IMQ}(\nu', \gamma')$ (Dwivedi & Mackey, 2021, Def. 2) as suggested by Dwivedi & Mackey (2021). Dwivedi & Mackey (2021, Eqn. (117)) shows that $\tilde{\mathbf{k}}_{\mathrm{rt}}$ is always defined for appropriate choices of $\nu', \gamma'$. The best root KT guarantees are obtained by choosing the largest possible $\nu'$ (to allow the most rapid decay of the tails), and Dwivedi & Mackey (2021, Eqn. (117)) implies that with $\nu < \frac{d}{2}$, the best possible parameter satisfies $\nu' \le \frac{d}{4} + \frac{\nu}{2}$. For this parameter, some algebra shows that $\max\big(\mathfrak{R}^\dagger_{\tilde{\mathbf{k}}_{\mathrm{rt}},n}, \mathfrak{R}_{\tilde{\mathbf{k}}_{\mathrm{rt}},n}\big) \precsim_{d,\nu,\gamma} n^{1/(2\nu)}$, leading to a guarantee worse than $n^{-\frac14}$, so that the guarantee degenerates to $n^{-\frac14}$ using Rem. 2 for root KT. When $\nu \ge \frac{d}{2}$, we can use a Matérn kernel as a square-root dominating kernel from Dwivedi & Mackey (2021, Prop. 3); then applying the bounds for the kernel radii 41 and the inflation factor 44 for a generic Matérn kernel from Dwivedi & Mackey (2021, Tab. 3) leads to the entry for root KT stated in Sec. 2.3. The guarantee for KT+ follows by taking the minimum of the two. □

Matรฉrn kernel

For target KT, substituting the covering number bound from LABEL:rkhs_covering_numbers LABEL:item:matern_kernel in 7 with $R = \log n$ and $\ell \triangleq \lfloor \nu - \frac{d}{2} \rfloor > 0$ yields an MMD bound of order

$$\sqrt{\log n} \cdot \frac{(\log n)^d}{\sqrt{n^{1 - d/(2\ell)}}} = \frac{(\log n)^{d + \frac12}}{n^{(2\ell - d)/(4\ell)}}, \tag{138}$$

which decays faster than $n^{-\frac14}$ only when $\ell > d$, or equivalently $\nu > 3d/2$. The rate in 138 simplifies to the entry in Tab. 3 when $\nu - \frac{d}{2}$ is an integer, i.e., when $\ell = \nu - \frac{d}{2}$. When $\nu \le 3d/2$, we can simply use i.i.d. thinning as the baseline to obtain an MMD error of order $n^{-\frac14}$ as in Rem. 2.

The root KT (and thereby KT+) guarantees for $\nu > d$ follow from Dwivedi & Mackey (2021, Tab. 2).

When ๐œˆ โˆˆ ( ๐‘‘ 2 , ๐‘‘ ] , we use power KT with a suitable ๐›ผ to establish the KT+ guarantee. For Matรฉrn ( ๐œˆ , ๐›พ ) kernel, the ๐›ผ -power kernel is given by Matรฉrn ( ๐›ผ โข ๐œˆ , ๐›พ ) if ๐›ผ โข ๐œˆ > ๐‘‘ 2 (a proof of this follows from Def. 2 and Dwivedi & Mackey (2021, Eqns (71-72))). Since Laplace ( ๐œŽ )

Matรฉrn โข ( ๐‘‘ + 1 2 , ๐œŽ โˆ’ 1 ) , we conclude that its ๐›ผ -power kernel is defined for ๐›ผ > ๐‘‘ ๐‘‘ + 1 . And using the various tail radii 41, and the inflation factor 44 for a generic Matรฉrn kernel from Dwivedi & Mackey (2021, Tab. 3), we conclude that ๐” ~ ๐›ผ โ‰พ ๐‘‘ , ๐ค ๐›ผ , ๐›ฟ log โก ๐‘› โข log โก log โก ๐‘› , and max โก ( โ„œ ๐ค ๐›ผ , ๐‘› โ€  โข โ„œ ๐ค ๐›ผ , ๐‘› ) โ‰พ ๐‘‘ , ๐ค ๐›ผ log โก ๐‘› , so that โ„œ max

๐’ช ๐‘‘ , ๐ค ๐›ผ โข ( log โก ๐‘› )  42 for SubExp  โ„™ setting. Thus for this case, the MMD guarantee for ๐‘› thinning with power KT (tracking only scaling with ๐‘› ) is

( 2 ๐‘š ๐‘› โข โ€– ๐ค ๐›ผ โ€– โˆž ) 1 2 โข ๐›ผ โข ( 2 โ‹… ๐” ~ ๐›ผ ) 1 โˆ’ 1 2 โข ๐›ผ โข ( 2 + ( 4 โข ๐œ‹ ) ๐‘‘ / 2 ฮ“ โข ( ๐‘‘ 2 + 1 ) โ‹… โ„œ max ๐‘‘ 2 โ‹… ๐” ~ ๐›ผ ) 1 ๐›ผ โˆ’ 1

(139)

โ‰พ ๐‘‘ , ๐ค ๐›ผ , ๐›ฟ ( 1 ๐‘› ) 1 2 โข ๐›ผ โข ( ๐‘ ๐‘› โข log โก ๐‘› ) 1 โˆ’ 1 2 โข ๐›ผ โ‹… ( ( log โก ๐‘› ) ๐‘‘ 2 + 1 2 โข ๐‘ ๐‘› ) 1 ๐›ผ โˆ’ 1

( ๐‘ ๐‘› โข ( log โก ๐‘› ) 1 + 2 โข ๐‘‘ โข ( 1 โˆ’ ๐›ผ ) ๐‘› ) 1 4 โข ๐›ผ

(140)

where ๐‘ ๐‘›

log โก log โก ๐‘› ; and we thus obtain the corresponding entry (for KT+) stated in Tab. 3.

sinc kernel

The guarantee for target KT follows directly from substituting the covering number bounds from LABEL:rkhs_covering_numbers LABEL:item:sinc_cover in 7, as $\mathbb{B}_2(\mathfrak{R}_{\mathrm{in}}) \subseteq [-\mathfrak{R}_{\mathrm{in}}, \mathfrak{R}_{\mathrm{in}}]^d$.

For the root KT guarantee, we note that the square-root kernel construction of Dwivedi & Mackey (2021, Prop. 2) implies that $\mathrm{sinc}(\theta)$ itself is a square-root of $\mathrm{sinc}(\theta)$, since the Fourier transform of sinc is a rectangle function on a bounded domain. However, the tail of the sinc kernel does not decay fast enough for the guarantee of Dwivedi & Mackey (2021, Thm. 1) to improve beyond the $n^{-\frac14}$ bound of Dwivedi & Mackey (2021, Rem. 2) obtained when running root KT with i.i.d. baseline thinning.

In this case, target KT and KT+ are identical since $\mathbf{k}_{\mathrm{rt}} = \mathbf{k}$. □

B-spline kernel

The guarantee for target KT follows directly from substituting the covering number bounds from LABEL:rkhs_covering_numbers LABEL:item:spline_kernel in 7.

For the B-spline ( 2 โข ๐›ฝ + 1 , ๐›พ ) kernel, using arguments similar to that in Dwivedi & Mackey (2021, Tab.4), we conclude that (up to a constant scaling) the ๐›ผ -power kernel is defined to be B-spline โข ( ๐ด + 1 , ๐›พ ) whenever ๐ด โ‰œ 2 โข ๐›ผ โข ๐›ฝ + 2 โข ๐›ผ โˆ’ 2 โˆˆ 2 โข โ„• 0 . Whenever the ๐›ผ -power kernel is defined, we can then apply the various tail radii 41 and the inflation factor 44 from Dwivedi & Mackey (2021, Tab. 3) to conclude that the MMD error rates for the power KT for Compact  โ„™ are the same as root KT up to factors depending on ๐›ผ and ๐›ฝ , which as per Dwivedi & Mackey (2021, Tab. 2) are of order log โก ๐‘› / ๐‘› .

For odd ๐›ฝ we can always take ๐›ผ

1 2 and B-spline โข ( ๐›ฝ , ๐›พ ) is a valid (up to a constant scaling) square-root kernel (Dwivedi & Mackey, 2021). In this case, the root KT guarantee is log โก ๐‘› / ๐‘› , and the KT+ guarantee follows by taking the minimum MMD error for target KT and root KT.

For even ๐›ฝ , we can choose ๐›ผ โ‰œ ๐‘ + 1 ๐›ฝ + 1 โˆˆ ( 1 2 , 1 ) with ๐‘

โŒˆ ๐›ฝ โˆ’ 1 2 โŒ‰

๐›ฝ 2 โˆˆ โ„• , which is feasible as long as ๐›ฝ โ‰ฅ 2 . Thus B-spline โข ( ๐›ฝ + 1 , ๐›พ ) is a suitable ๐ค ๐›ผ for B-spline โข ( 2 โข ๐›ฝ + 1 , ๐›พ ) for even ๐›ฝ โ‰ฅ 2 with ๐›ผ

๐›ฝ + 2 2 โข ๐›ฝ + 2 โˆˆ ( 1 2 , 1 ) . Since ๐ค ๐›ผ is compactly supported, Thm. 3 implies that ๐” ~ ๐›ผ

๐’ช ๐‘‘ โข ( log โก ๐‘› ) and โ„œ max

๐’ช ๐‘‘ โข ( 1 ) , and hence the MMD guarantee for ๐‘› thinning with power KT (tracking only the scaling with ๐‘› ) is

( 2 ๐‘š ๐‘› โข โ€– ๐ค ๐›ผ โ€– โˆž ) 1 2 โข ๐›ผ โข ( 2 โ‹… ๐” ~ ๐›ผ ) 1 โˆ’ 1 2 โข ๐›ผ โข ( 2 + ( 4 โข ๐œ‹ ) ๐‘‘ / 2 ฮ“ โข ( ๐‘‘ 2 + 1 ) โ‹… โ„œ max ๐‘‘ 2 โ‹… ๐” ~ ๐›ผ ) 1 ๐›ผ โˆ’ 1

(141)

โ‰พ ๐‘‘ , ๐ค ๐›ผ , ๐›ฟ ( 1 ๐‘› ) 1 2 โข ๐›ผ โข ( log โก ๐‘› ) 1 โˆ’ 1 2 โข ๐›ผ โ‹… ( log โก ๐‘› ) 1 ๐›ผ โˆ’ 1

( log โก ๐‘› ๐‘› ) 1 4 โข ๐›ผ

( log โก ๐‘› ๐‘› ) ๐›ฝ + 1 2 โข ๐›ฝ + 4 .

(142)

Taking the minimum MMD error for target KT and ๐›ผ -power KT yields the entry for KT+ stated in Tab. 3 for even ๐›ฝ .
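The parameter algebra for even $\beta$ can be verified exactly with rational arithmetic (a standalone aside): $c = \beta/2$, $\alpha = \frac{c+1}{\beta+1} = \frac{\beta+2}{2\beta+2} \in (\frac12, 1)$, $A = 2\alpha\beta + 2\alpha - 2 = \beta \in 2\mathbb{N}_0$, and the resulting MMD exponent $\frac{1}{4\alpha} = \frac{\beta+1}{2\beta+4}$:

```python
from fractions import Fraction

for beta in range(2, 41, 2):  # even beta >= 2
    c = beta // 2
    alpha = Fraction(c + 1, beta + 1)
    assert Fraction(1, 2) < alpha < 1
    assert alpha == Fraction(beta + 2, 2 * beta + 2)
    A = 2 * alpha * beta + 2 * alpha - 2
    assert A == beta and beta % 2 == 0  # A is the even integer beta
    assert 1 / (4 * alpha) == Fraction(beta + 1, 2 * beta + 4)
```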
