| { |
| "File Number": "1050", |
| "Title": "Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel", |
| "Limitation": "One limitation of this work is the lack of rate of convergence of the ensemble methods. The analogous result for the Nadaraya-Watson regression have been obtained by Belkin et al. [8]. However, it is not\nclear if the kernels used in [8] are weighted random partition kernels. Resolving this is an interesting future direction. Similar to PERT [20], another limitation of this work is that the base classifiers in the ensemble are data-independent. Such ensemble methods in these line of work (including ours) are easier to analyze than the data-dependent ensemble methods used in practice. See Biau and Scornet [12] and [13] for an in-depth discussion. We believe our work offers one theoretical basis towards understanding generalization in the interpolation regime of ensembles of histogram classifiers over data-dependent partitions, e.g., decision trees à la CART [17].", |
| "Reviewer Comment": "Reviewer_2: I have to say that this is really for away from my expertise, so it was difficult to follow.\ndetailed theoretical analysis\nwell written, no obvious mistakes I can find\nattempts to improve readability by explaining with words important theory\n1 result: manifold theory extension of Devroye, which showed that kernel regression with the Hilbert kernel is interpolating and weakly consistent for data with a density and bounded labels. weaknesses:\nI found it hard to a bit hard to read, since there is a lot of jargon. That is surely my missing background in that area, but maybe that can be helped by putting the relevant background earlier. That helped a bit to understand the motivation of the manuscript\nprobably easier to have arabic panels in Fig. 1 and upper left corner instead of somewhere on the lower left.\nQuestions:\nThe authors believe that their work offers a theoretical basis to understand genearlization of for example decision trees. Is there something more concrete than just a believe?\nLimitations:\nLimitations are clearly stated. It is a theoretical analysis under simplifying assumptions and the connection to popular ensemble methods used in practice is still unknown.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_3: Strengths:\nThe problem tackled is of interest to the learning community, with an important consistency result.\nThe paper is well written and accessible despite the underlying sophisticated mathematics, making it a high quality manuscript.\nRelated work in machine learning is well organized and allows to quickly understand the goal of the paper\nWeaknesses:\nThe heart of the contribution lies in the technical proofs provided by the authors, which I humbly admit to being unable to truly evaluate the novelty of. Essentially, the technical addition of the paper is to extend the results from [19] to the case of the input space being a manifold. While I do not doubt the integrity of the authors, it is hard for me to judge whether the various lemmas (4.1: Riemanniann logarithm, 4.2: change of variable) are groundbreaking in this regard. It would be interesting to have an expert in Riemannian geometry (which I am not) comment on this.\nThe theoretical nature of the paper, as well as its technical density may not be the best fit for the NeurIPS community - other learning theory venues such as COLT may be more suited for this.\nQuestions:\nRemarks:\nLinks in the pdf are not working.\nTypo line 77: \"when\" -> we ?\nFigure 1's placement is suboptimal.\nLimitations:\nThe authors have adequately addressed the limitations and potential negative societal impact of their work.\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_4: I do not actively work with differential/Riemannian geometry in my research, and beyond basic notions which I acquired a number of years ago, I am not familiar with the concepts used in this paper, and due to time constraints, I could not read up on all the background material required to go through every step of the proofs and certify their correctness.\nStrengths\nI believe the paper is tackling an important problem, although as someone who does not work directly in relevant areas, my judgment on the significance of the results is not well-grounded.\nThe paper is technical, but in spite of this, the clarity of the mathematical presentation makes it a pleasure to read. 
Despite my lack of required background to fully judge its merit, this helped me form a more positive picture of the work, and in parts where I did try to go through the mathematics, helped convince me of its soundness.\nWeaknesses\nThe main weakness that I can see is the lack of convergence rates. Only consistency is shown in the infinite-sample limit, but I think results would be much stronger if uniform rates were given over some class of distributions.\nQuestions:\nI am not familiar with ensemble methods, having never directly worked with them or used them. Is the form of ensemble used in this work, namely the one based on weighted random partition in (4), a general one in which many ensemble methods fall under? Or is this a particular form, which could limit the scope of your contribution?\nLimitations:\nThe contributions of this paper are theoretical in nature, thence I do not see any reason to discuss potential negative societal impact.\nMinor comments:\nline 77: \"When show that\" -> \"We show that\" line 135:\nϵ\n>\n0\nis a real positive number, not an open set, I presume?\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 4 excellent\nContribution: 3 good\n\nReviewer_5: Strengths:\nthe authors derive an ensemble method that has the consistent-interpolating property\nthe authors define the manifold-Hilbert kernel\nthe authors derive a specific realization of the kernels on spheres\nWeaknesses:\nthe paper is very technical and very hard to read.\nAs a non-specialist in this area, I was not able to confirm correctness of the presented results However, I have no reason to believe that the paper is not technically correct.\nI have a hard time evaluating the novelty. Proving consistency in this setting seems like a good result, but I don't know the literature well enough to judge how significant this is.\nThere is no experimental evaluation. Since the paper is purely theoretical, I believe this is OK.\nQuestions:\nno questions\nLimitations:\nyes\nEthics Flag: No\nSoundness: 3 good\nPresentation: 2 fair\nContribution: 3 good", |
| "Limitations_refined": "One limitation of this work is the lack of rate of convergence of the ensemble methods. The analogous result for the Nadaraya-Watson regression have been obtained by Belkin et al. [8]. However, it is not\nclear if the kernels used in [8] are weighted random partition kernels. Resolving this is an interesting future direction. Similar to PERT [20], another limitation of this work is that the base classifiers in the ensemble are data-independent. Such ensemble methods in these line of work (including ours) are easier to analyze than the data-dependent ensemble methods used in practice.", |
| "abstractText": "Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime. Such results have been established for a few common classes of methods, but so far not for ensemble methods. We devise an ensemble classification method that simultaneously interpolates the training data, and is consistent for a broad class of data distributions. To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression and classification using the manifold-Hilbert kernel are weakly consistent in the setting of Devroye et al. [22]. For the sphere, we show that the manifold-Hilbert kernel can be realized as a weighted random partition kernel, which arises as an infinite ensemble of partition-based classifiers.", |
| "Unlabeled Sections": "Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime. Such results have been established for a few common classes of methods, but so far not for ensemble methods. We devise an ensemble classification method that simultaneously interpolates the training data, and is consistent for a broad class of data distributions. To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression and classification using the manifold-Hilbert kernel are weakly consistent in the setting of Devroye et al. [22]. For the sphere, we show that the manifold-Hilbert kernel can be realized as a weighted random partition kernel, which arises as an infinite ensemble of partition-based classifiers.", |
| "1 Introduction": "Ensemble methods are among the most often applied learning algorithms, yet their theoretical properties have not been fully understood [12]. Based on empirical evidence, Wyner et al. [42] conjectured that interpolation of the training data plays a key role in explaining the success of AdaBoost and random forests. However, while a few classes of learning methods have been analyzed in the interpolating regime [6, 4], ensembles have not.\nTowards developing the theory of interpolating ensembles, we examine an ensemble classification method for data distributed on the sphere, and show that this classifier interpolates the training data and is consistent for a broad class of data distributions. To show this result, we develop two additional contributions that may be of independent interest. First, for data distributed on a Riemannian manifold M , we introduce the manifold-Hilbert kernel KHM , a manifold extension of the Hilbert kernel [39]. Under the same setting as Devroye et al. [22], we prove that kernel smoothing regression with KHM is weakly consistent while interpolating the training data. Consequently, the classifier obtained by taking the sign of the kernel smoothing estimate has zero training error and is consistent.\nSecond, we introduce a class of kernels called weighted random partition kernels. These are kernels that can be realized as an infinite, weighted ensemble of partition-based histogram classifiers. Our main result is established by showing that when M = Sd, the d-dimensional sphere, the manifoldHilbert kernel is a weighted random partition kernel. In particular, we show that on the sphere, the manifold-Hilbert kernel is a weighted ensemble based on random hyperplane arrangements. This implies that the kernel smoothing classifier is a consistent, interpolating ensemble on Sd. To our knowledge, this is the first demonstration of an interpolating ensemble method that is consistent for a broad class of distributions in arbitrary dimensions.\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).", |
| "1.1 Problem statement": "Consider the problem of binary classification on a Riemannian manifold M . Let (X,Y ) be random variables jointly distributed on M × {±1}. Let Dn := {(Xi, Yi)}ni=1 be the (random) training data consisting of n i.i.d copies of X,Y . A classifier, i.e., a mapping from Dn to a function f̂(•∥Dn) : M → {±1}, has the interpolating-consistent property if, when X has a continuous distribution, both of the following hold: 1) f̂(Xi∥Dn) = Yi, for all i ∈ {1, . . . , n}, and 2)\nPr{f̂(X∥Dn) ̸= Y } → inf f :M→{±1} measurable Pr{f(X) ̸= Y } in probability as n→ ∞. (1)\nOur goal is to find an interpolating-consistent ensemble of histogram classifiers, to be defined below.\nA partition on M , denoted by P , is a set of subsets of M such that P ∩P ′ = ∅ for all P, P ′ ∈ P and M = ⋃ P∈P P . Given x ∈M , let P[x] denote the unique element P ∈ P such that x ∈ P . The set of all partitions on a space M is denoted Part(M). The histogram classifier with respect to Dn over P is the sign of the function ĥ(•∥Dn,P) :M → R given by\nĥ(x∥Dn,P) := n∑\ni=1\nYi · I{x ∈ P[Xi]}, (2)\nwhere I is the indicator function. See Figure 1-left panels. Definition 1.1. A weighted random partition (WRP) over M is a 3-tuple (Θ,P, α) consisting of (i) parameter space of partitions: a set Θ where Pθ ∈ Part(M) for each θ ∈ Θ, (ii) random partitions: a probability measure P on Θ, and (iii) weights: a nonnegative function α : Θ → R≥0 that is integrable with respect to the measure P. Example 1.2 (Regular partition of the d-cube). Let M = [0, 1]d and Θ = {1, 2 . . . } =: N+. For each n ∈ N+, denote by Pn the regular partition of M into nd d-cubes of side length 1/n. For any probability mass function P on N+ and weights α : N+ → R≥0, the 3-tuple (Θ,P, α) is a WRP. Below, WRPs will be denoted with 2-letter names in the sans-serif font, e.g., “rp” for a generic WRP, and “ha” for the weighted hyperplane arrangement random partition (Definition 5.1). The weighted random partition kernel associated to rp = (Θ,P, α) is defined as\nKrpM :M ×M → R≥0 ∪ {∞}, KrpM (x, z) := Eθ∼P[α(θ)I{x ∈ Pθ[z]}]. (3) When α ≡ 1, we recover the notion of unweighted random partition kernel introduced in [21]. Note that the kernel is symmetric since I{x ∈ Pθ[z]} = I{z ∈ Pθ[x]}. IfKrpM <∞, thenKrpM is a positive definite (PD) kernel. When KrpM can evaluate to ∞, the definition of a PD kernel is not applicable since the positive definite property is defined only for to kernels taking finite values [10].\nLet sgn : R∪{±∞} → {±1} be the sign function. For a WRP, define the weighted infinite-ensemble\nû(x∥Dn,KrpM ) := n∑\ni=1\nYi ·KrpM (x,Xi) = Eθ∼P[α(θ)ĥ(x∥Dn,Pθ)]. (4)\nNote that the equality on the right follows immediately from linearity of the expectation and the definition of ĥ(•∥Dn,Pθ) in Equation (2). See Figure 1-right panel. Main problem. Find a WRP such that sgn(û(•∥Dn,KrpM )) has the interpolating-consistent property.2", |
| "1.2 Outline of approach and contributions": "In the regression setting, we have (X,Y ) jointly distributed on M × R. Let m(x) := E[Y |X = x]. Recall from Belkin et al. [8, Equation (7)] the definition of the kernel smoothing estimator with a so-called singular1 kernel K :M ×M → [0,+∞]:\nm̂(x∥Dn,K) := Yi : ∃i ∈ [n] such that x = Xi∑n i=1 YiK(x,Xi)∑n j=1 K(x,Xj) : ∑n j=1K(x,Xj) > 0\n0 : otherwise.\n(5)\nWe note that Equation (5) is referred as the Nadaraya-Watson estimate in [8]. Now, we simply write m̂n(x) instead of m̂(x∥Dn,K) when there is no ambiguity. Similarly, we write ûn(x) instead of û(x∥Dn,K) from earlier. Note that sgn(m̂n(x)) = sgn(ûn(x)) if ∑n j=1K(x,Xj) > 0.\nObserve that m̂n is interpolating by construction. Let µX denote the marginal distribution of X . The L1-error of m̂n in approximating m is Jn := ∫ M\n|m̂n(x) −m(x)|µX(dx). For M = Rd and the Hilbert kernel defined by KHRd(x, z) := ∥x− z∥−d, Devroye et al. [22] proved L1-consistency for regression: Jn → 0 in probability when Y is bounded and X is continuously distributed. Our contributions. Our primary contribution is to demonstrate an ensemble method with the consistent-interpolating property. Toward this end, in Section 3, we introduce the manifold-Hilbert kernel KHM on a Riemannian manifold M . When show that when M is complete, connected, and smooth, kernel smoothing regression with KHM has the same consistency guarantee (Theorem 3.2) as KHRd mentioned in the preceding paragraph. In Section 5, we consider the case when M = S\nd, and show that the manifold-Hilbert kernel KHSd is a weighted random partition kernel (Proposition 5.2).\nDevroye et al. [22, Section 7] observed that the L1-consistency of m̂n for regression implies the consistency for classification of sgn◦ ûn. Furthermore, m̂n is interpolating for regression implies that sgn ◦ ûn is interpolating for classification. These observations together with our results demonstrate the existence of a weighted infinite-ensemble classifier with the interpolating-consistent property.", |
| "1.3 Related work": "Kernel regression. Kernel smoothing regression, or simply kernel regression, is an interpolator when the kernel used is singular, a fact known to Shepard [39] in 1968. Devroye et al. [22] showed that kernel regression with the Hilbert kernel is interpolating and weakly consistent for data with a density and bounded labels. Using singular kernels with compact support, Belkin et al. [8] showed that minimax optimality can be achieved under additional distributional assumptions.\nRandom forests. Wyner et al. [42] proposed that interpolation may be a key mechanism for the success of random forests and gave a compelling intuitive rationale. Belkin et al. [6] studied empirically the double descent phenomenon in random forests by considering the generalization performance past the interpolation threshold. The PERT variant of random forests, introduced by Cutler and Zhao [20], provably interpolates in 1-dimension. Belkin et al. [7] pose as an interesting question whether the result of Cutler and Zhao [20] extends to higher dimension. Many work have established consistency of random forest and its variants under different settings [15, 11, 38]. However, none of these work addressed interpolation.\nBoosting. For classification under the noiseless setting (i.e., the Bayes error is zero), AdaBoost is interpolating and consistent (see Freund and Schapire [26, first paragraph of Chapter 12]). However, this setting is too restrictive and the result does not answer if consistency is possible when fitting the noise. Bartlett and Traskin [5] proved that AdaBoost with early stopping is universally consistent, however without the interpolation guarantee. To the best of our knowledge, whether AdaBoost or any other variant of boosting can be interpolating and consistent remains open.\nRandom partition kernels. Breiman [16] and Geurts et al. [28] studied infinite ensembles of simplified variants of random forest and connections to certain kernels. Davies and Ghahramani [21] formalized this connection and coined the term random partition kernel. Scornet [37] further developed the theory of random forest kernels and obtained upper bounds on the rate of convergence. However, it is not clear if these variants of random forests are interpolating.\n1The “singular” modifier refers to the fact that K(x, x) = +∞ for all x ∈ M .\nPreviously defined (unweighted) random partition kernels are bounded, and thus cannot be singular. On the other hand, the manifold-Hilbert kernel is always singular. To bridge between ensemble methods and theory on interpolating kernel smoothing regression, we propose weighted random partitions (Definition 1.1), whose associated kernel (Equation 3) can be singular.\nLearning on Riemannian manifolds. Strong consistency of a kernel-based classification method on manifolds has been established by Loubes and Pelletier [32]. However, the result requires the kernel to be bounded and thus the method is not guaranteed to be interpolating. See Feragen and Hauberg [25] for a review of theoretical results regarding kernels on Riemannian manifolds.\nBeyond kernel methods, other classical methods for Euclidean data have been extended to Riemannian manifolds, e.g., regression [40], classification [43], and dimensionality reduction and clustering [44][34]. To the best of our knowledge, no previous works have demonstrated an interpolatingconsistent classifiers on manifolds other than Rd. In many applications, the data naturally belong to a Riemannian manifold. 
Spherical data arise from a range of disciplines in natural sciences. See the influential textbook by Mardia and Jupp [33, Ch.1§4]. For applications of the Grassmanian manifold in computer vision, see Jayasumana et al. [30] and the references therein. Topological data analysis [41] presents another interesting setting of manifold-valued data in the form of persistence diagrams [3, 31].", |
| "2 Background on Riemannian Manifolds": "We give an intuitive overview of the necessary concepts and results on Riemannian manifolds. A longer, more precise version of this overview is in the Supplemental Materials Section A.1.\nA smooth d-dimensional manifold M is a topological space that is locally diffeomorphic2 to open subsets of Rd. For simplicity, suppose that M is embedded in RN for some N ≥ d, e.g., Sd ⊆ Rd+1. Let x ∈ M be a point. The tangent space at x, denoted TxM , is the set of vectors that is tangent to M at x. Since linear combinations of tangent vectors are also tangent, the tangent space TxM is a vector space. Tangent vectors can also be viewed as the time derivative of smooth curves. In particular, let x ∈ M . If ϵ > 0 is an open set and γ : (−ϵ, ϵ) → M is a smooth curve such that γ(0) = x, then dγdt (0) ∈ TxM . A Riemannian metric on M is a choice of inner product ⟨·, ·⟩x on TxM for each x such that ⟨·, ·⟩x varies smoothly with x. Naturally, ∥z∥x := √ ⟨z, z⟩x defines a norm on TxM . The length\nof a piecewise smooth curve γ : [a, b] → M is defined by len(γ) := ∫ b a ∥γ̇(t)∥γ(t)dt. Define distM (x, ξ) := inf{len(γ) : γ is a piecewise smooth curve from x to ξ}, which is a metric on M in the sense of metric spaces (see Sakai [36, Proposition 1.1]). For x ∈M and r ∈ (0,∞), the open metric ball centered at x of radius r is denoted Bx(r,M) := {ξ ∈M : distM (x, ξ) < r}. A curve γ : [a, b] →M is a geodesic if γ is locally distance minimizing and has constant speed, i.e., ∥dγdt (τ)∥γ(τ) is constant. Now, suppose x ∈M and v ∈ TxM are such that there exists a geodesic γ : [0, 1] → M where γ(0) = x and dγdt (0) = v. Define expx(v) := γ(1), the element reached by traveling along γ at time = 1. See Figure 2 for the case when M = S2. For a fixed x ∈M , the above function expx, the exponential map, can be defined on an open subset of TxM containing the origin. The Hopf-Rinow theorem ([23, Ch. 8, Theorem 2.8]) states that if M is connected and complete with respect to the metric distM , then expx can be defined on all of TxM .", |
| "3 The Manifold-Hilbert kernel": "Throughout the remainder of this work, we assume that M is a complete, connected, and smooth Riemannian manifold of dimension d.\nDefinition 3.1. We define the manifold-Hilbert kernel KHM :M ×M → [0,∞] for each x, ξ ∈M by KHM (x, ξ) := distM (x, ξ)\n−d if x ̸= ξ and KHM (x, x) := ∞ otherwise. 2A diffeomorphism is a smooth bijection whose inverse is also smooth.\nLet λM be the Riemann–Lebesgue volume measure of M . Integration with respect to this measure is denoted ∫ M fdλM for a function f :M → R. For details of the construction of λM , see Amann and\nEscher [1, Proposition 1.5]. When M = Rd, λM is the ordinary Lebesgue measure and ∫ Rd fdλRd is the ordinary Lebesgue integral. For this case, we simply write λ instead of λRd .\nWe now state our first main result, a manifold theory extension of Devroye et al. [22, Theorem 1]. Theorem 3.2. Suppose that X has a density fX with respect to λM and that Y is bounded. Let PY |X be a conditional distribution of Y given X and mY |X be its conditional expectation. Let m̂n(x) := m̂(x∥Dn,KHM ). Then\n1. at almost all x ∈M with fX(x) > 0, we have m̂n(x) → mY |X(x) in probability, 2. Jn := ∫ M |m̂n(x)−mY |X(x)|fX(x)dλM (x) → 0 in probability.\nIn words, the kernel smoothing regression estimate m̂n based on the manifold-Hilbert kernel is consistent and interpolates the training data, provided X has a density and Y is bounded. As a consequence, following the same logic as in Devroye et al. [22], the associated classifier sgn ◦ ûn has the interpolating-consistent property. Before proving Theorem 3.2, we first review key concepts in probability theory on Riemannian manifolds.", |
| "3.1 Probability on Riemannian manifolds": "Let BM be the Borel σ-algebra of M , i.e., the smallest σ-algebra containing all open subsets of M . We recall the definition of M -valued random variables, following Pennec [35, Definition 2]: Definition 3.3. Let (Ω,P,A) be a probability space with measure P and σ-algebra A. A M -valued random variable X is a Borel-measurable function Ω →M , i.e., X−1(B) ∈ A for all B ∈ BM . Definition 3.4 (Density). A random variable X taking values in M has a density if there exists a nonnegative Borel-measurable function f :M → [0,∞] such that for all Borel sets B in M , we have Pr(X ∈ B) = ∫ B fdλM . The function f is said to be a probability density function (PDF) of X .\nNext, we recall the definition of conditional distributions, following Dudley [24, Ch. 10 §2]: Definition 3.5 (Conditional distribution3). Let (X,Y ) be a random variable jointly distributed on M × R. Let PX(·) be the probability measure corresponding to the marginal distribution of X . A conditional distribution for Y given X is a collection of probability measures PY |X(·|x) on R indexed by x ∈M satisfying the following:\n1. For all Borel sets A ⊆ R, the function M ∋ x 7→ PY |X(A|x) ∈ [0, 1] is Borel-measurable. 2. For all A ⊆ R and B ⊆M Borel sets, Pr(Y ∈ A,X ∈ B) = ∫ B PY |X(A|x)PX(dx).\nThe conditional expectation4 is defined as mY |X(x) := ∫ R yPY |X(dy|x).\n3also known as disintegration measures according to Chang and Pollard [18]. 4More often, the conditional expectation is denoted E[Y |X = x]. However, our notation is more convenient\nfor function composition and compatible with that of [22].\nThe existence of a conditional probability for a joint distribution (X,Y ) is guaranteed by Dudley [24, Theorem 10.2.2]. When (X,Y ) has a joint density fXY and marginal density fX , the above definition gives the classical formula PY |X(A|x) = ∫ A fXY (x, y)/fX(x)dy when ∞ > fX(x) > 0. See the first example in Dudley [24, Ch. 10 §2].", |
| "3.2 Lebesgue points on manifolds": "Devroye et al. [22] proved Theorem 3.2 when M = Rd and, moreover, that part 1 holds for the so-called Lebesgue points, whose definition we now recall.\nDefinition 3.6. Let f :M → R be an absolutely integrable function and x ∈M . We say that x is a Lebesgue point of f if f(x) = limr→0 1λM (Bx(r,M)) ∫ Bx(r,M) fdλM .\nFor an integrable function, the following result states that almost all points are its Lebesgue points. For the proof, see Fukuoka [27, Remark 2.4].\nTheorem 3.7 (Lebesgue differentation). Let f :M → R be an absolutely integrable function. Then there exists a set A ⊆M such that λM (A) = 0 and every x ∈M \\A is a Lebesgue point of f .\nNext, for the reader’s convenience, we restate Devroye et al. [22, Theorem 1], emphasizing the connection to Lebesgue points. The result will be used in our proof of Theorem 3.2 in the next section.\nTheorem 3.8 (Devroye et al. [22]). Let M = Rd be the flat Euclidean space. Then Theorem 3.2 holds. Moreover, Part 1 holds for all x that is a Lebesgue point to both fX and mY |X · fX .", |
| "4 Proof of Theorem 3.2": "The focal point of the first subsection is Lemma 4.1 which shows the Borel measurability of extensions of the so-called Riemannian logarithm. The second subsection contains two key results regarding densities of M -valued random variables transformed by the Riemannian logarithm. The final subsection proves Theorem 3.2 leveraging results from the preceding two subsections.", |
| "4.1 The Riemannian logarithm": "Throughout, x is assumed to be an arbitrary point of M . Let UxM = {v ∈ TxM : ∥v∥x = 1} ⊆ TxM denote the set of unit tangent vectors. Define a function τx : UxM → (0,∞] as follows5:\nτx(u) := sup{t > 0 : t = distM (x, expx(tu))}.\nThe tangent cut locus is the set C̃x ⊆ TxM defined by C̃x := {τx(u)u : u ∈ UxM, τx(u) < ∞}. Note that it is possible for τx(u) = ∞ for all u ∈ UxM in which case C̃x is empty. The cut locus is the set Cx := expx(C̃x) ⊆M . The tangent interior set is Ĩx := {tu : 0 ≤ t < τx(u), u ∈ UxM} and the interior set is the set Ix := expx(Ĩx). Finally, define D̃x := Ĩx ∪ C̃x. Note that for each z = tu ∈ Ĩx, we have\n∥z∥x = t = distM (x, expx(tu)) = distM (x, expx(z)). (6) Consider the example where M = S2 as in Figure 2. Then τx(u) = π for all u ∈ UxM . Thus, the tangent interior set Ĩx = B0(π,R2), the open disc of radius π centered at the origin.\nWhen restricted to Ĩx, the exponential map expx |Ĩx : Ĩx → Ix is a diffeomorphism. Its functional inverse, denoted by logx |Ix , is called the Riemannian Logarithm [9, 45]. In previous works, logx |Ix is only defined from Ix to Ĩx. The next result shows that the domain of logx |Ix : Ix → Ĩx can be extended to logx :M → D̃x while remaining Borel-measurable. Lemma 4.1. For all x ∈ M , there exists a Borel measurable map logx : M → TxM such that logx(M) ⊆ D̃x and expx ◦ logx is the identity on M . Furthermore, for all x, ξ ∈ M , we have distM (x, ξ) = ∥ logx(ξ)∥x.\n5Positivity of τx is asserted at Sakai [36, eq. (4.1)]\nProof sketch. The full proof of the lemma is provided in Section A.2 of the Supplemental Materials6. Below, we illustrate the idea of the proof using the example when M = S2 as in Figure 2.\nLet x ∈ S2 be the “northpole” (the blue point). The tangent cut locus C̃x is the dashed circle in the left panel of Figure 2. The exponential map expx is one-to-one on D̃x except on the dashed circle, which all gets mapped to −x, the “southpole” (the orange point). A consequence of the measurable selection theorem7 is that logx can be extended to be a Borel-measurable right inverse of expx by selecting z point on C̃x such that logx(−x) = z.\nThus, we’ve shown that logx :M → TxM is Borel-measurable. Now, recall that TxM is equipped with the inner product ⟨·, ·⟩x, i.e., the Riemannian metric. Below, for each x ∈ M choose an orthonormal basis on TxM with respect to ⟨·, ·⟩. Then TxM is isomorphic as an inner product space to Rd with the usual dot product. Our next two results are “change-of-variables formulas” for computing the densities/conditional distributions of M -valued random variables after the logx transform. Recall that λM is the RiemannLebesgue measure on M and λ is the ordinary Lebesgue measure on Rd = TxM . Proposition 4.2. Let x ∈M be fixed. There exists a Borel measurable function νx :M → R with the following properties:\n(i) Let X be a random variable on M with density fX and let Z := logx(X). Then Z is a random variable on TxM with density fZ(z) := fX(expx(z)) · νx(expx(z)).\n(ii) Let f : M → R be an absolutely integrable function such that x is a Lebesgue point of f . Define f : TxM → R by h(z) := f(expx(z)) · νx(expx(z)). Then 0 ∈ TxM is a Lebesgue point for h.\nProposition 4.3. Let (X,Y ) have a joint distribution on M × R such that the marginal of X has a density fX on M . Let PY |X(·|·) be a conditional distribution for Y given X . Let x ∈ M . Define Z := logx(X) and consider the joint distribution (Z, Y ) on TpM × R. 
Then PY |Z(·|·) := PY |X(·| expx(·)) is a conditional distribution for Y given Z. Consequently, mY |X ◦ expx = mY |Z .\nThe above propositions are straightforward manifold-theoretic extensions of well-known results on Euclidean spaces. For completeness, the full proofs are in Supplemental Materials Section A.4. An anonymous reviewer brought to our attention that Proposition 4.2 is the consequence of a well-known formula from geometric measure theory, called the area formula [2, p. 44-45].", |
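To make Lemma 4.1 concrete, the Riemannian logarithm on the sphere has a closed form away from the cut locus, and at the antipode one tangent direction can be chosen arbitrarily, mirroring the measurable-selection step in the proof sketch. The snippet below (ours, not from the paper; sphere_log is a hypothetical name) also checks the identity dist_M(x, ξ) = ∥log_x(ξ)∥_x numerically on S^2.

```python
import numpy as np

def sphere_log(x, xi, eps=1e-12):
    """A right inverse of exp_x on S^d (cf. Lemma 4.1): returns v in T_x S^d with
    exp_x(v) = xi and ||v|| = dist(x, xi).  At the cut locus (xi = -x), one unit
    tangent direction is selected arbitrarily."""
    c = np.clip(x @ xi, -1.0, 1.0)
    theta = np.arccos(c)                        # geodesic distance dist(x, xi)
    w = xi - c * x                              # component of xi orthogonal to x
    nw = np.linalg.norm(w)
    if nw < eps:                                # xi = x, or xi = -x (cut locus)
        if theta < 1.0:                         # xi = x: the log is the zero vector
            return np.zeros_like(x)
        u = np.zeros_like(x)
        u[int(np.argmax(np.abs(x) < 0.9))] = 1.0
        u = u - (u @ x) * x                     # project onto the tangent space ...
        u = u / np.linalg.norm(u)               # ... and normalize
        return np.pi * u                        # an arbitrary preimage of -x
    return theta * (w / nw)

if __name__ == "__main__":
    x  = np.array([0.0, 0.0, 1.0])
    xi = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])
    v  = sphere_log(x, xi)
    # dist(x, xi) = ||log_x(xi)||_x, as asserted in Lemma 4.1
    print(np.linalg.norm(v), np.arccos(np.clip(x @ xi, -1.0, 1.0)))   # both ~ pi/4
```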
| "4.2 Proof of Theorem 3.2": "Fix x ∈M such that x is a Lebesgue point of fX and mY |X · fX . Note that by Theorem 3.7, almost all x ∈M has this property. Next, let Z = logx(X) and fZ be as in Proposition 4.2-(i). Then\n1. fZ = (fX ◦ expx) · (νx ◦ expx), and 2. (mY |X ◦ expx) · fZ = (mY |X ◦ expx) · (fX ◦ expx) · (νx ◦ expx).\nNow, proposition 4.2-(ii) implies that 0 is a Lebesgue point of both fZ and (mY |X ◦ expx) · fZ . Furthermore, by Proposition 4.3, we have mY |X ◦ expx = mY |Z . Thus, 0 is a Lebesgue point of fZ and mY |Z · fZ . Now, let Dn := {(Xi, Yi)}i∈[n]. Define Zi := logx(Xi), which are i.i.d copies of the random variable Z := logx(X), and let D̃n := {(Zi, Yi)}i∈[n]. Then we have\nm̂(x∥Dn,KHM ) (a) = ∑n i=1 Yi · distM (x,Xi)−d∑n\nj=1 distM (x,Xj) −d\n(b) = ∑n i=1 Yi · ∥Zi∥−dx∑n\nj=1 ∥Zj∥−dx (c) = ∑n i=1 Yi · distRd(0, Zi)−d∑n\nj=1 distRd(0, Zj) −d\n(d) = m̂(0∥D̃n,KHRd)\n6An anonymous reviewer has provided a shorter, alternative proof of Lemma 4.1. See https://openreview. net/forum?id=zqQKGaNI4lp¬eId=VYOugBMOil\n7Kuratowski–Ryll-Nardzewski measurable selection theorem (see [14, Theorem 6.9.3])\nwhere equations marked by (a) and (d) follow from Equation (5), (b) from Lemma 4.1, and (c) from the fact that the inner product space TxM with ⟨·, ·⟩x is isomorphic to Rd with the usual dot product. By Theorem 3.8, we have m̂(0∥D̃n,KHRd) → mY |Z(0) in probability. In other words, for all ϵ > 0,\nlim n→∞\nPr{|m̂(0∥D̃n,KHRd)−mY |Z(0)| > ϵ} = 0.\nBy Proposition 4.3, we have mY |Z(0) = mY |Z(expx(0)) = mY |Z(x). Therefore,{ |m̂(0∥D̃n,KHRd)−mY |Z(0)| > ϵ } = { |m̂(x∥Dn,KHM )−mY |X(x)| > ϵ } as events. Thus, m̂(x∥Dn,KHM ) → mY |X(x) converges in probability, proving Theorem 3.2 part 1. As noted in Devroye et al. [22, §2], part 2 of Theorem 3.2 is an immediate consequence of part 1.\n5 Application to the d-Sphere\nThe d-dimensional round sphere is Sd := {x ∈ Rd+1 : x21 + · · ·+ x2d+1 = 1}. Here, a round sphere assumes that Sd has the arc-length metric:\ndistSd(x, z) = ∠(x, z) = cos −1(x⊤z) ∈ [0, π]. (7)\nLet S be a set and σ :M → S be a function. The partition induced by σ is defined by {σ−1(s) : s ∈ Range(σ)}. For example, when M = Sd and W ∈ R(d+1)×h, then the function σW : Sd → {±1}h defined by σW (x) = sgn(W⊤x) induces a hyperplane arrangement partition.\nLet N = {1, 2, . . . } and N0 = N ∪ {0} denote the positive and non-negative integers. Definition 5.1 (Random hyperplane arrangement partition). Let d ∈ N and M = Sd. Let q < 0 be a negative number, and let H be a random variable with probability mass function pH : N0 → [0, 1] such that pH(h) > 0 for all h. Define the following weighted random partition ha := (Θ,P, α):\n1. The parameter space Θ = ⊔∞\nh=0 R(d+1)×h is the disjoint union of all (d+ 1)× h matrices. Element of Θ are matrices θ = W ∈ R(d+1)×h where the number of columns h ∈ {0, 1, 2, . . . } varies. By convention, if h = 0, the partition Pθ = PW is the trivial partition {Sd}. If h > 0, PW is the partition induced by x 7→ sgn(W⊤x). 2. The probability P is constructed by the procedure where we first sample h ∼ pH(h), then sample the entries of W ∈ Rd×h i.i.d according to Gaussian(0, 1).\n3. For θ ∈ Θ, define α(θ) := πqpH(h)−1(−1)h ( q h ) , where ( q h ) := 1h! ∏h−1 j=0 (q − j).\nNote that (−1)h ( q h ) = 1h! ∏h−1 j=0 (−q + j) > 0 when q < 0.\nTheorem 5.2. Let ha = (Θ,P, α) be as in Definition 5.1. Then\nKhaSd(x, z) = { ∠(x, z)q : ∠(x, z) ̸= 0 +∞ : otherwise.\nWhen q = −d, we have KhaSd = KHSd where the right hand side is the manifold-Hilbert kernel.\nProof of Theorem 5.2. 
Before proceeding, we have the following useful lemma:\nLemma 5.3. Let rp = (Θ,P, α) be a WRP. LetH be a random variable. Let θ ∼ P. Suppose that for all x, z ∈M , the random variables α(θ) and I{x ∈ Pθ[z]} are conditionally independent given H . Then we haveKrpM (x, z) = EH [ α(H)·Eθ∼P[I{x ∈ Pθ[z]}|H] ] whereα(h) := Eθ∈P [α(θ)|H = h]\nfor a realization h of H .\nThe lemma follows immediately from the Definition of KrpM (x, z) in Equation 3 and the conditional independence assumption. Now, we proceed with the proof of Theorem 5.2.\nLet ϕ := ∠(x, z)/π. Let H ∼ pH and θ ∼ P be the random variables in Definition 5.1. Note that by construction, the following condition is satisfied: for all x, z ∈ M , the random variables α(θ)\nand I{x ∈ Pθ[z]} are conditionally independent given H . In fact, α(θ) = πqpH(h)−1(−1)h ( q h ) is constant given H = h. Hence, applying Lemma 5.3, we have\nKhaSd(x, z) = EH [ α(H) · Eθ∼P[I{x ∈ Pθ[z]}|H] ] =\n∞∑ h=0 πq(−1)h ( q h ) · Eθ∼P[I{x ∈ Pθ[z]}|H = h] = ∞∑ h=0 πq(−1)h ( q h ) · Pr{x ∈ Pθ[z]|H = h}.\nNext, we claim that Pr{x ∈ Pθ[z]|H = h} = (1−ϕ)h. When h = 0, x ∈ Pθ[z] is always true since Pθ = {Sd} is the trivial partition. In this case, we have Pr{x ∈ Pθ[z]|H = h} = 1 = (1 − ϕ)0. When h > 0, we recall an identity involving the cosine angle:\nLemma 5.4 (Charikar [19]). Let x, z ∈ Sd. Let w ∈ Rd+1 be a random vector whose entries are sampled i.i.d according to Gaussian(0, 1). Then Pr{sgn(w⊤x) = sgn(w⊤z)} = 1− (∠(x, z)/π).\nLet W = [w1, . . . , wh] be as in Definition 5.1 where wj denotes the j-th column of W . Then by construction, wj is distributed identically as w in Lemma 5.4. Furthermore, wj and wj′ are independent for j, j′ ∈ [h] where j ̸= j′. Thus, the claim follows from\nPr{x ∈ Pθ[z]|H = h} (a) = Pr{sgn(W⊤x) = sgn(W⊤z)|H = h} (b) =\nh∏ j=1 Pr{sgn(w⊤j x) = sgn(w⊤j z)} (c) = h∏ j=1 (1− ϕ) = (1− ϕ)h .\nwhere equality (a) follows from Definition 5.1, (b) from W ∈ R(d+1)×h having i.i.d standard Gaussian entries given H = h, and (c) from Lemma 5.4. Putting it all together, we have\nKpartP,α (x, z) = ∞∑ h=0 πq(−1)h ( q h ) (1− ϕ)h = πq ∞∑ h=0 ( q h ) (ϕ− 1)h = ∠(x, z)q.\nFor the last step, we used the fact that for all q ∈ R the binomial series (1 + t)q = ∑∞h=0 (qh)th converges absolutely for |t| < 1 (when ϕ ∈ (0, 1]) and diverges to +∞ for t = −1 (when ϕ = 0). Corollary 5.5. Let q := −d and KhaSd be as in Theorem 5.2. The infinite-ensemble classifier sgn(û(•∥Dn,KhaSd)) (see Equation 4 for definition) has the interpolating-consistent property.\nProof. As observed in Devroye et al. [22, Section 7], for an arbitrary kernel K, the L1-consistency of m̂(•∥Dn,K) for regression implies the consistency for classification of sgn(û(•∥Dn,K)). Furthermore, m̂(•∥Dn,K) is interpolating for regression implies that sgn(û(•∥Dn,K)) is interpolating for classification. While the argument there is presented in the Rd case, the argument holds in the more general manifold case mutatis mutandis.\nThus, by Theorem 3.2, we have sgn(û(•∥Dn,KHSd)) is consistent for classification, i.e., Equation (1) holds. It is also interpolating since m̂(•∥Dn,K) is interpolating. By Proposition 5.2, we have KhaSd = K H Sd . Thus sgn(û(•∥Dn,KhaSd)) is an ensemble method having the interpolating-consistent property.", |
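As a numerical sanity check of Theorem 5.2 (ours, not from the paper), one can estimate K^{ha}_{S^d}(x, z) by directly simulating the ensemble of Definition 5.1 and comparing against ∠(x, z)^q. The names binom_real and ha_kernel_mc are hypothetical; the geometric choice p_H(h) = p(1 − p)^h is an illustrative assumption, and since the weights α(θ) grow with h, the Monte Carlo average has finite variance only when p is smaller than ∠(x, z)/π.

```python
import numpy as np

def binom_real(q, h):
    """Generalized binomial coefficient binom(q, h) = q (q-1) ... (q-h+1) / h!."""
    out = 1.0
    for j in range(h):
        out *= (q - j) / (j + 1)
    return out

def ha_kernel_mc(x, z, q, p=0.1, n_mc=200_000, seed=0):
    """Monte Carlo estimate of K^{ha}_{S^d}(x, z) from Definition 5.1: sample
    h ~ p_H, sample a Gaussian hyperplane arrangement W in R^{(d+1) x h}, and
    average alpha(theta) * 1{x and z fall in the same cell of P_W}."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_mc):
        h = int(rng.geometric(p)) - 1            # h >= 0 with pmf p_H(h) = p (1-p)^h
        alpha = (np.pi ** q) * ((-1) ** h) * binom_real(q, h) / (p * (1 - p) ** h)
        if h == 0:
            same_cell = 1.0                      # trivial partition {S^d}
        else:
            W = rng.normal(size=(x.shape[0], h))
            same_cell = float(np.all(np.sign(W.T @ x) == np.sign(W.T @ z)))
        total += alpha * same_cell
    return total / n_mc

if __name__ == "__main__":
    q = -2.0                                     # q = -d on S^2: the manifold-Hilbert kernel
    x = np.array([0.0, 0.0, 1.0])
    z = np.array([np.sin(1.0), 0.0, np.cos(1.0)])   # angle(x, z) = 1 radian
    print("Monte Carlo estimate :", ha_kernel_mc(x, z, q))                    # ~ 1.0
    print("angle(x, z) ** q     :", np.arccos(np.clip(x @ z, -1, 1)) ** q)    # = 1.0
```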
| "6 Discussion": "We have shown that using the manifold-Hilbert kernel in kernel smoothing regression, also known as Nadaraya-Watson regression, results in a consistent estimator that interpolates the training data on a Riemannian manifold M . We proposed weighted random partition kernels, a generalization of the unweighted analogous definition by Davies and Ghahramani [21] which provided a framework for analyzing ensemble methods such as random forest via kernels. When M = Sd is the sphere, we showed that the manifold-Hilbert kernel is a weighted random partition kernel, where the random partitions are induced by random hyperplane arrangements. This demonstrates an ensemble method that has the interpolating-consistent property.\nOne limitation of this work is the lack of rate of convergence of the ensemble methods. The analogous result for the Nadaraya-Watson regression have been obtained by Belkin et al. [8]. However, it is not\nclear if the kernels used in [8] are weighted random partition kernels. Resolving this is an interesting future direction.\nSimilar to PERT [20], another limitation of this work is that the base classifiers in the ensemble are data-independent. Such ensemble methods in these line of work (including ours) are easier to analyze than the data-dependent ensemble methods used in practice. See Biau and Scornet [12] and [13] for an in-depth discussion. We believe our work offers one theoretical basis towards understanding generalization in the interpolation regime of ensembles of histogram classifiers over data-dependent partitions, e.g., decision trees à la CART [17].", |
| "Acknowledgements": "The authors were supported in part by the National Science Foundation under awards 1838179 and 2008074. The authors would like to thank Vidya Muthukumar for helpful discussions, as well as an anonymous reviewer for bringing to our attention facts from geometric measure theory that provided context for the ad-hoc technical result Proposition 4.2.", |
| "Reviewer Summary": "Reviewer_2: The paper \"Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel\" defines the manifold-Hilbert kernel and prove that kernel smoothing regression and classification using it are weakly consistent and hence establishes generalization guarantees in the interpolating regime. For the sphere it is shown that the kernel can be realized as weighted random partition kenrel arising as infinite ensemble of partition-based classifiers, which the authors claim to offer a theoretical basis towards understanding generalization of ensemble of histogram classifiers such as decision trees.\n\nReviewer_3: The paper proposes a theoretical analysis of a specific instance of interpolating ensemble methods for classification when the input data lie on a Riemannian manifold. The contributions are two-fold:\nThe authors introduce an extension of the Hilbert kernel, denoted the manifold-Hilbert kernel, and show that the classification rule produced by the sign of the Nadaraya-Watson estimator associated with this kernel is both interpolating and consistent.\nThey connect ensemble methods to this classification rule through the introduction of weighted random partition, and associated weighted random partition kernels. When a weighted random partition is found to give rise to the manifold-Hilbert kernel, then the corresponding ensemble method is both interpolating and consistent.\nThis phenomenon is detailed when the manifold is the\nd\n-dimensional sphere; in this case the proposed random hyperplane arrangement partition kernel corresponds to the manifold-Hilbert kernel associated to the arc-length metric.\n\nReviewer_4: In this paper, the authors first define the manifold-Hilbert kernel on a complete, connected, smooth Riemannian manifold of dimension d, as the d-th power reciprocal of the manifold distance between two points. The main result of the paper (Theorem 3.2) is that the Nadaraya-Watson estimator of the conditional mean of Y given X based on this manifold-Hilbert kernel, which by design interpolates the data, is consistent, both pointwise at almost every point and in terms of the L1 distance with respect to the density of X, assumed to exist.\nIn Section 5, the authors move onto a particular case of the manifold being the d-dimensional sphere, and it is shown in Theorem 5.2 that the random hyperplane arrangement partition, which corresponds to an ensemble method, actually coincides with the manifold-Hilbert kernel proposed in earlier parts of the paper, thereby showing that ensemble classifiers of this form are interpolating and consistent.\n\nReviewer_5: The authors derive an ensemble classification method for interpolating manifold-valued training data that they denote manifold-Hilbert kernel. The kernel is based on the Riemannian distances. The authors prove a consistency result with the kernel and derive a specific realization of the kernel on spheres." |
| } |