{
"File Number": "1034",
"Title": "Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks",
"Limitation": "Societal Impact and Limitations. We believe that our study will not pose any negative societal or ethical consequences due to its theoretical nature. The main limitation of our study is that it solely considers the terms E0α, whereas PH offers a much richer structure. Hence, as our next step, we will explore finer ways to incorporate PH in generalization performance. We will further extend our results in terms of dimensions of measures by using the techniques presented in [CDE+21].",
"Reviewer Comment": "Reviewer_1: Originality: This work combines previous theoretical results to propose a practical methodology for estimating intrinsic dimension using topological data analysis (TDA). Novel contributions include verification of the proposed PH dimension calculation algorithm on synthetic data and its practical application in multiple networks to demonstrate its usefulness in quantifying generalization error. The proposed regularization, based on PH dimension, is shown to be effective at controlling generalization error during training. All related work, datasets, and models are cited. Quality: The exposition is technically sound, and theoretical findings are supported by experiments. The authors provide context for their work within the broader scope of intrinsic dimension estimation, manifold reconstruction, bounds of generalization error, and the application of TDA to analyze deep networks. The authors clearly state that the α parameter should be smaller than the intrinsic dimension in theory, but the experiments were performed with fixed α = 1. I suggest the following improvements:\nDefinition 4: where PHi(VR(W)) is the i dimensional persistent homology of the Cech Vietoris-Rips complex on a finite point set …\nPlease consider including computational complexity and/or runtime performance of estimating PH dimension and the extra training time required when using PH dimension as a regularizer.\nFigure 7 (main text) is hard to read and poorly labelled.\nIn SM Fig. 1, please indicate in the caption that PCA is outside the bounds of the axis limits.\nIn SM Fig. 7, consider using the same axis limits for all figures to allow for easy comparison. The observed changes in the persistence diagrams corresponding to changes in the intrinsic dimension are not explained. Why do points move closer to or away from the diagonal?\nSimilar to comment (3) above, the visualization of distance matrices (Fig. 4 in the main text, SM Figs. 5 and 6) showing non-uniform pixelation merits further explanation. How are the rows and columns of these matrices organized? Can the non-uniform pixelation be quantified and what does it represent? Clarity: The main paper and the supplementary material are very well organized and clearly written. I discovered only a few typos and grammatical mistakes that can easily be corrected. Reference to Fig. 5 in the supplement shows up as a ‘?’. Significance: The analysis of generalization error associated with various training algorithms is important for uncovering how deep networks learn and operate. This work is a significant contribution in this direction. The authors describe a practical algorithm, based on solid theoretical foundations, for estimating the intrinsic dimension of training trajectories and link it to generalization error. As demonstrated in this manuscript, the PH dimension can be used to evaluate different hyperparameters and control generalization loss during training. Higher-dimensional topological features computed from training trajectories may yield more insight in the future.\nI am satisfied by the author responses and will maintain the high score.\nLimitations And Societal Impact:\nLimitations are discussed in the manuscript. 
No negative societal impact expected from this work.\nEthical Concerns:\nNone\nNeeds Ethics Review: No\nTime Spent Reviewing: 6\n\nReviewer_2: This work addresses an important problem, predicting generalization, and approaches it from an interesting perspective, that of TDA.\nI have some doubts about the empirical evaluation of PHD's ability to predict generalization. No quantitative measures are given. There do appear to be trends in Figures 2 and 3, but their strength is unclear.\nEven so, as some have pointed out in the literature, correlative measures may be problematic and not indicative of a causal relationship [0,1].\nFor instance in [0], the authors write: Correlation with Generalization: Evaluating measures based on correlation with generalization is very useful but it can also provide a misleading picture. To check the correlation, we should vary architectures and optimization algorithms to produce a set of models. If the set is generated in an artificial way and is not representative of the typical setting, the conclusions might be deceiving and might not generalize to typical cases... Another pitfall is drawing conclusions from changing one or two hyper-parameters (e.g. changing the width or batch-size and checking if a measure would correlate with generalization). In these cases, the hyper-parameter could be the true cause of both the change in the measure and the change in the generalization, but the measure itself has no causal relationship with generalization. Therefore, one needs to be very careful with experimental design to avoid unwanted correlations.\nCan the authors justify their choice of empirical methodology over Kendall's rank correlation and conditional mutual information, as suggested by [1]?\nUsing the measure for regularization is a natural and interesting idea. However, its empirical evaluation appears weak in Figure 5, i.e. < +2%, except for the high learning rate case. I would like to see error bars on this experiment to gauge its significance.\nMinor notes: Typo on line 256 \"netowork\"\n[0] Fantastic Generalization Measures and Where to Find Them - Jiang et al. (2019) https://arxiv.org/abs/1912.02178\n[1] NeurIPS 2020 Competition: Predicting Generalization in Deep Learning - Jiang et al. (2020) https://arxiv.org/abs/2012.07976v1\nLimitations And Societal Impact:\nA brief but adequate discussion of societal impacts is given in Section 6.\nA brief discussion of limitations is given in Section 6, which could be expanded.\nNeeds Ethics Review: No\nTime Spent Reviewing: 4\n\nReviewer_3: For the normalization term of NN loss, many methods exist, but many are experimental and few have been mathematically proven. Mathematical analysis is necessary for the future development of the technology. This paper logically shows that the generalization of NNs can be adjusted via the intrinsic dimension based on persistent homology. Most of the theories presented are based on previous results, but they are well developed for the generalization of NNs. Although the proposed normalization term is a simple one, it seems significant simply because it establishes the mathematical implications. Many normalization terms have been proposed in the past. In this paper, only the effect on one problem is shown, but in practical terms, comparison with other normalization terms may be necessary.
In addition, since persistent homology is generally computationally expensive, future evaluation in terms of computational complexity will be necessary.\nLimitations And Societal Impact:\nI don't believe there are any negative effects, and I think this is something that can be developed in the future, but for practical purposes, it needs more testing.\nNeeds Ethics Review: No\nTime Spent Reviewing: 8 hours\n\nReviewer_4: The quality of the presentation is in general poor, and the results of the numerical tests seem to me already present in the literature, and in one case possibly flawed.\nFig 2: the x axes are not labeled. Fig 3: the dataset on which the tests are performed is not specified. If the x axis (not labeled) is the difference between test and train accuracy, then this ranges between 30 and 40. These differences are enormous, for any dataset mentioned in the paper, pointing to possible flaws in the model. Fig. 4 is unreadable, even by magnifying it. Its message is, at least to me, obscure. In the middle panel we see points approximately lying on a line, which is floating up and down across the panels. What do we learn from this? In the bottom row, the two sets of points are labeled H_0 and infinity. What does this mean? Fig 7 is also unreadable. The axis labels are missing. The different panels are also not labeled.\nThe idea of regularizing learning by controlling the ID is not new. It was introduced in ref [MA] https://arxiv.org/abs/1806.02612 (2018). This is a key reference that should be cited and discussed. The results illustrated in the manuscript on this important point are not really convincing: in Fig. 5 it is shown that, with the learning rate which allows obtaining the best test accuracy, the effect of ID regularization is practically zero (65 % with and without regularization). Moreover, an accuracy of 65 % on cifar10 is way below the state of the art for this dataset, which is 95 % for convolutional NNs, and almost 99 % for architectures exploiting transformers (see for example https://paperswithcode.com/sota/image-classification-on-cifar-10). This also points to possible flaws.\nThe intrinsic dimensions reported in Figs 2 and 3 are of order 2, while those reported in ref ALMZ19 and [MA] range between 10 and 100 for the Imagenet dataset. The reason for this qualitative discrepancy should be discussed.\nLimitations And Societal Impact:\nThe limitations of the approach are discussed only briefly, possibly due to length constraints. But I do not see this as the main problem of the paper.\nNeeds Ethics Review: No\nTime Spent Reviewing: 4",
"abstractText": "Disobeying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal’s intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks would not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool that is built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first illustrate that the generalization error can be equivalently bounded in terms of a notion called the ’persistent homology dimension’ (PHD), where, compared with prior work, our approach does not require any additional geometrical or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD in the scale of modern deep neural networks and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network’s intrinsic dimension in a variety of settings, which is predictive of the generalization error.",
"1 Introduction": "In recent years, deep neural networks (DNNs) have become the de facto machine learning tool and have revolutionized a variety of fields such as natural language processing [DCLT18], image perception [KSH12, RBH+21], geometry processing [QSMG17, ZBL+20] and 3D vision [DBI18, GLW+21]. Despite their widespread use, little is known about their theoretical properties. Even now the top-performing DNNs are designed by trial-and-error, a pesky, burdensome process for the average practitioner [EMH+19]. Furthermore, even if a top-performing architecture is found, it is difficult to provide performance guarantees on a large class of real-world datasets.\nThis lack of theoretical understanding has motivated a plethora of work focusing on explaining what, how, and why a neural network learns. To answer many of these questions, one naturally examines the generalization error, a measure quantifying the differing performance on train and\n35th Conference on Neural Information Processing Systems (NeurIPS 2021)\ntest data since this provides significant insights into whether the network is learning or simply memorizing [ZBH+21]. However, generalization in neural networks is particularly confusing as it refutes the classical proposals of statistical learning theory such as uniform bounds based on the Rademacher complexity [BM02] and the Vapnik–Chervonenkis (VC) dimension [Vap68].\nInstead, recent analyses have started focusing on the dynamics of deep neural networks. [NBMS17, BO18, GJ16] provide analyses on the final trained network, but these miss out on critical training patterns. To remedy this, a recent study [SSDE20] connected generalization and the heavy tailed behavior of network trajectories–a phenomenon which had already been observed in practice [SSG19, ŞGN+19, SZTG20, GSZ21, CWZ+21, HM20, MM19]. [SSDE20] further showed that the generalization error can be linked to the fractal dimension of a parametric hypothesis class (which can then be taken as the optimization trajectories). Hence, the fractal dimension acts as a ‘capacity metric’ for generalization.\nWhile [SSDE20] brought a new perspective to generalization, several shortcomings prevent application in everyday training. In particular, their construction requires several conditions which may be infeasible in practice: (i) topological regularity conditions on the hypothesis class for fast computation, (ii) a Feller process assumption on the training algorithm trajectory, and that (iii) the Feller process exhibits a specific diffusive behavior near a minimum. Furthermore, the capacity metrics in [SSDE20] are not optimization friendly and therefore can’t be incorporated into training.\nIn this work, we address these shortcomings by exploiting the recently developed connections between fractal dimension and topological data analysis (TDA). First, by relating the box dimension [Sch09] and the recently proposed persistent homology (PH) dimension [Sch20], we relax the assumptions in [SSDE20] to develop a topological intrinsic dimension (ID) estimator. Then, using this estimator we develop a general tool for computing and visualizing generalization properties in deep learning. 
Finally, by leveraging recently developed differentiable TDA tools [CHU17, CHN19], we employ our ID estimator to regularize training towards solutions that generalize better, even without having access to the test dataset.\nOur experiments demonstrate that this new measure of intrinsic dimension correlates highly with generalization error, regardless of the choice of optimizer. Furthermore, as a proof of concept, we illustrate that our topological regularizer is able to improve the test accuracy and lower the generalization error. In particular, this improvement is most pronounced when the learning rate/batch size normally results in a poorer test accuracy.\nOverall, our contributions are summarized as follows:\n• We make a novel connection between statistical learning theory and TDA in order to develop a generic computational framework for the generalization error. We remove the topological regularity condition and the decomposable Feller assumption on training trajectories, which were required in [SSDE20]. This leads to a more generic capacity metric.\n• Using insights from our above methodology, we leverage the differentiable properties of persistent homology to regularize neural network training. Our findings also provide the first steps towards theoretically justifying recent topological regularization methods [BGND+19, CNBW19].\n• We provide extensive experiments to illustrate the theory, strength, and flexibility of our framework.\nWe believe that the novel connections and the developed framework will open new theoretical and computational directions in the theory of deep learning. To foster further developments at the intersection of persistent homology and statistical learning theory, we release our source code at: https://github.com/tolgabirdal/PHDimGeneralization.",
"2 Related Work": "Intrinsic dimension in deep networks Even though a large number of parameters are required to train deep networks [FC18], modern interpretations of deep networks avoid correlating model over-fitting or generalization to parameter counting. Instead, contemporary studies measure model complexity through the degrees of freedom of the parameter space [JFH15, GJ16], compressibility (pruning) [BO18] or intrinsic dimension [ALMZ19, LFLY18, MWH+18]. Tightly related to the ID, Janson et al. [JFH15] investigated the degrees of freedom [Ghr10] in deep networks and expected difference between test error and training error. Finally, LDMNet [ZQH+18] explicitly penalizes the ID regularizing the network training.\nGeneralization bounds Several studies have provided theoretical justification to the observations that trained neural networks live in a lower-dimensional space, and this is related to the generalization performance. In particular, compression-based generalization bounds [AGNZ18, SAM+20, SAN20, HJTW21, BSE+21] have shown that the generalization error of a neural network can be much lower if it can be accurately represented in lower dimensional space. Approaching the problem from a geometric viewpoint, [SSDE20] showed that the generalization error can be formally linked to the fractal dimension of a parametric hypothesis class. This dimension indeed the plays role of the intrinsic dimension, which can be much smaller than the ambient dimension. When the hypothesis class is chosen as the trajectories of the training algorithm, [SSDE20] further showed that the error can be linked to the heavy-tail behavior of the trajectories.\nDeep networks & topology Previous works have linked neural network training and topological invariants, although all analyze the final trained network [FGFAEV21]. For example, in [RTB+19], the authors construct Neural Persistence, a measure on neural network layer weights. They furthermore show that Neural Persistence reflects many of the properties of convergence and can classify weights based on whether they overfit, underfit, or exactly fit the data. In a parallel line of work, [DZF19] analyze neural network training by calculating topological properties of the underlying graph structure. This is expanded upon in [CMEM20], where the authors compute correlations between neural network weights and show that the homology is linked with the generalization error.\nHowever, these previous constructions have been done mostly in an adhoc manner. As a result, many of the results are mostly empirical and work must still be done to show that these methods hold theoretically. Our proposed method, by contrast, is theoretically well-motivated and uses tools from statistical persistent homology theory to formally links the generalization error with the network training trajectory topology.\nWe also would like to note that prior work has incorporated topological loss functions to help normalize training. In particular, [BGND+19] constructed a topological normalization term for GANs to help maintain the geometry of the generated 3d point clouds.",
"3 Preliminaries & Technical Background": "We imagine a point cloud W = {wi ∈ Rd} as a geometric realization of a d-dimensional topological space W ⊂ W ⊂ Rd. Bδ(x) ⊂ Rd denotes the closed ball centered around x ∈ Rd with radius δ.\nPersistent Homology From a topological perspective, W can be viewed a cell complex composed of the disjoint union of k-dimensional balls or cells σ ∈ W glued together. For k = 0, 1, 2, . . . , we form a chain complex C(W) = . . . Ck+1(W) ∂k+1−−−→ Ck(W) ∂k−→ . . . by sequencing chain groupsCk(W), whose elements are equivalence classes of cycles, via boundary maps ∂k : Ck(W) 7→ Ck−1(W) with ∂k−1◦∂k ≡ 0. In this paper, we work with finite simplicial complexes restricting the cells to be simplices.\nThe kth homology group or k-dimensional homology is then defined as the equivalence classes of k-dimensional cycles who differ only by a boundary, or in other words, the quotient group Hk(W) = Zk(W)/Yk(W) where Zk(W) = ker ∂k and Yk(W) = im ∂k+1. The generators or basis of H0(W), H1(W) and H2(W) describe the shape of the topological spaceW by its connected components, holes and cavities, respectively. Their ranks are related to the Betti numbers i.e.βk = rank(Hk).\nDefinition 1 (Čech and Vietoris-Rips Complexes). For W a set of fine points in a metric space, the Čech cell complex Čechr(W ) is constructed using the intersection of r-balls around W , Br(W ): Čechr(W ) = { Q ⊂ W : ∩x∈QBr(x) 6= 0 } . The construction of such complex is intricate.\nInstead, the Vietoris-Rips complex VRr(W ) closely approximates Čechr(W ) using only the pairwise distances or the intersection of two r-balls [RB21]: Wr = VRr(W ) = { Q ⊂ W : ∀x, x′ ∈\nQ, Br(x) ∩Br(x′) 6= 0 } .\nDefinition 2 (Persistent Homology). PH indicates a multi-scale version of homology applied over a filtration {Wt}t := VR(W ) : ∀(s ≤ t)Ws ⊂ Wt ⊂ W , keeping track of holes created (born) or filled (died) as t increases. Each persistence module PHk(VR(W )) = {γi}i keeps track of a single k-persistence cycle γi from birth to death. We denote the entire lifetime of cycle γ as I(γ) and its length as |I(γ)| = death(γ)− birth(γ). We will also use persistence diagrams, 2D plots of all persistence lifetimes (death vs. birth). Note that for PH0, the Čech and VR complexes are equivalent.\nLifetime intervals are instrumental in TDA as they allow for extraction of topological features or summaries. Note that, each birth-death pair can be mapped to the cells that respectively created and destroyed the homology class, defining a unique map for a persistence diagram, which lends itself to differentibility [BGND+19, CHN19, CHU17]. We conclude this brief section by referring the interested reader to the well established literature of persistent homology [Car14, EH10] for a thorough understanding.\nIntrinsic Dimension The intrinsic dimension of a space can be measured by using various notions. In this study, we will consider two notions of dimension, namely the upper-box dimension (also called the Minkowski dimension) and the persistent homology dimension. The box dimension is based on covering numbers and can be linked to generalization via [SSDE20], whereas the PH dimension is based on the notions defined earlier in this section.\nWe start by the box dimension.\nDefinition 3 (Upper-Box Dimension). For a bounded metric space W , let Nδ(W) denote the maximal number of disjoint closed δ-balls with centers inW . The upper box dimension is defined as:\ndimBoxW = lim sup δ→0\n( log(Nδ(W))/log(1/δ) ) . (1)\nWe proceed with the PH dimension. 
First let us define an intermediate construct, which will play a key role in our computational tools.\nDefinition 4 (α-Weighted Lifetime Sum). For a finite set W ⊂ W ⊂ Rd, the weighted ith homology lifetime sum is defined as follows:\nEiα(W ) = ∑\nγ∈PHi(VR(W ))\n|I(γ)|α, (2)\nwhere PHi(VR(W )) is the i-dimensional persistent homology of the Čech complex on a finite point set W contained inW and |I(γ)| is the persistence lifetime as explained above.\nNow, we are ready to define the PH dimension, which is the key notion in this paper.\nDefinition 5 (Persistent Homology Dimension). The PHi-dimension of a bounded metric spaceW is defined as follows:\ndimiPHW := inf { α : Eiα(W ) < C; ∃C > 0,∀ finite W ⊂ W } . (3)\nIn words, dimiPHW is the smallest exponent α for which Eiα is uniformly bounded for all finite subsets ofW .",
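To make Definitions 4 and 5 concrete, the following minimal Python sketch computes the α-weighted lifetime sum E^0_α(W) for a finite point cloud using the ripser package (the backend the paper reports using for PH computation); the function name and the toy data are illustrative, not the authors' code.

```python
import numpy as np
from ripser import ripser  # pip install ripser

def lifetime_sum(points, alpha=1.0, hom_dim=0):
    """alpha-weighted lifetime sum E^i_alpha(W) of Definition 4, computed
    from the Vietoris-Rips persistence diagram of a finite point cloud."""
    dgm = ripser(points, maxdim=hom_dim)['dgms'][hom_dim]
    finite = dgm[np.isfinite(dgm[:, 1])]          # drop the essential (infinite) bar
    lifetimes = finite[:, 1] - finite[:, 0]       # |I(gamma)| = death - birth
    return float(np.sum(lifetimes ** alpha))

# Toy usage: 200 points on a 2-dimensional disk embedded in R^10.
rng = np.random.default_rng(0)
W = np.zeros((200, 10))
W[:, :2] = rng.uniform(-1.0, 1.0, size=(200, 2))
print(lifetime_sum(W, alpha=1.0))
```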
"4 Generalization Error via Persistent Homology Dimension": "In this section, we will illustrate that the generalization error can be linked to the PH0 dimension. Our approach is based on the following fundamental result.\nTheorem 1 ([KLS06, Sch19]). LetW ⊂ Rd be a bounded set. Then, it holds that:\ndimPHW := dim0PHW = dimBoxW.\nIn the light of this theorem, we combine the recent result showing that the generalization error can be linked to the box dimension [SSDE20], and Theorem 1, which shows that, for bounded subsets of Rd, the box dimension and the PH dimension of order 0 agree.\nBy following the notation of [SSDE20], we consider a standard supervised learning setting, where the data space is denoted by Z = X × Y , and X and Y respectively denote the features and the labels. We assume that the data is generated via an unknown data distribution D and we have access to a training set of n points, i.e., S = {z1, . . . , zn}, with the samples {zi}ni=1 are independent and identically (i.i.d.) drawn from D. We further consider a parametric hypothesis classW ⊂ Rd, that potentially depends on S. We choose W to be optimization trajectories given by a training algorithm A, which returns the entire (random) trajectory of the network weights in the time frame [0, T ], such that [A(S)]t = wt being the network weights returned byA at ‘time’ t, and t is a continuous iteration index. Then, in the setW , we collect all the network weights that appear in the optimization trajectory:\nW := {w ∈ Rd : ∃t ∈ [0, T ], w = [A(S)]t}\nwhere we will set T = 1, without loss of generality.\nTo measure the quality of a parameter vector w ∈ W , we use a loss function ` : Rd ×Z 7→ R+, such that `(w, z) denotes the loss corresponding to a single data point z. We then denote the population and empirical risks respectively by R(w) := Ez[`(w, z)] and R̂(w, S) := 1n ∑n i=1 `(w, zi). The generalization error is hence defined as |R̂(w, S)−R(w)|. We now recall [SSDE20, Asssumption H4], which is a form of algorithmic stability [BE02]. Let us first introduce the required notation. For any δ > 0, consider the fixed grid on Rd,\nG =\n{( (2j1 + 1)δ\n2 √ d\n, . . . , (2jd + 1)δ\n2 √ d\n) : ji ∈ Z, i = 1, . . . , d } ,\nand define the set Nδ := {x ∈ G : Bδ(x) ∩W 6= ∅}, that is the collection of the centers of each ball that intersectW . H1. Let Z∞ := (Z × Z × · · · ) denote the countable product endowed with the product topology and let B be the Borel σ-algebra generated by Z∞. Let F,G be the sub-σ-algebras of B generated by the collections of random variables given by {R̂(w, S) : w ∈ Rd, n ≥ 1} and { 1 {w ∈ Nδ} :\nδ ∈ Q>0, w ∈ G,n ≥ 1 }\nrespectively. There exists a constant M ≥ 1 such that for any A ∈ F, B ∈ G we have P [A ∩B] ≤MP [A]P[B].\nThe next result forms our main observation, which will lead to our methodological developments. Proposition 1. LetW ⊂ Rd be a (random) compact set. Assume that H1 holds, ` is bounded by B and L-Lipschitz continuous in w. Then, for n sufficiently large, we have\nsup w∈W\n|R̂(w, S)−R(w)| ≤ 2B\n√ [dimPHW + 1] log2(nL2)\nn +\nlog(7M/γ)\nn , (4)\nwith probability at least 1− γ over S ∼ D⊗n.\nProof. By using the same proof technique as [SSDE20, Theorem 2], we can show that (4) holds with dimBoxW in place of dimPHW . Since W is bounded, we have dimBoxW = dimPHW by Theorem 1. The result follows.\nThis result shows that the generalization error of the trajectories of a training algorithm is deeply linked to its topological properties as measured by the PH dimension. 
Thanks to novel connection, we have now access to the rich TDA toolbox, to be used for different purposes.",
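To make the bound (4) concrete, the following small Python sketch evaluates its right-hand side for a few illustrative values of dim_PH; the constants B, L, M, γ and the sample size n are placeholders chosen for illustration, not values measured in the paper.

```python
import numpy as np

def ph_generalization_bound(dim_ph, n, B=1.0, L=1.0, M=1.0, gamma=0.05):
    """Right-hand side of Proposition 1's bound (4), evaluated numerically.
    B, L, M and gamma are illustrative placeholders (assumptions), not
    quantities estimated in the paper."""
    return (2 * B * np.sqrt((dim_ph + 1) * np.log(n * L ** 2) ** 2 / n)
            + np.log(7 * M / gamma) / n)

# The bound tightens as dim_PH decreases and as n grows.
for d in (2.0, 5.0, 20.0):
    print(f"dim_PH = {d:5.1f} -> bound = {ph_generalization_bound(d, n=50_000):.4f}")
```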
"4.1 Analyzing Deep Network Dynamics via Persistent Homology": "By exploiting TDA tools, our goal in this section is to develop an algorithm to compute dimPHW for two main purposes. The first goal is to predict the generalization performance by using dimPH. By this approach, we can use dimPH for hyperparameter tuning without having access to test data. The second goal is to incorporate dimPH as a regularizer to the optimization problem in order to improve generalization. Note that similar topological regularization strategies have already been proposed\nAlgorithm 1: Computation of dimPH. 1 input :The set of iterates W = {wi}Ki=1, smallest sample size nmin, and a skip step ∆, α 2 output :dimPHW 3 n← nmin, E ← [] 4 while n ≤ K do 5 Wn ← sample(W,n) // random sampling 6 Wn ← VR(Wn) // Vietoris-Rips filtration 7 E[i]← Eα(Wn) , ∑ γ∈PH0(Wn) |I(γ)| α // compute lifetime sums from PH 8 n← n+ ∆ 9 m, b← fitline (log(nmin : ∆ : K), log(E)) // power law on Ei1(W )\n10 dimPHW ← α1−m\n[BGND+19, CNBW19] without a formal link to generalization. In this sense, our observations form the first step towards theoretically linking generalization and TDA.\nIn [SSDE20], to develop a computational approach, the authors first linked the intrinsic dimension to certain statistical properties of the underlying training algorithm, which can be then estimated. To do so, they required an additional topological regularity condition, which necessitates the existence of an ‘Ahlfors regular’ measure defined onW , i.e., a finite Borel measure µ such that there exists s, r0 > 0 where 0 < ars ≤ µ(Br(x)) ≤ brs < ∞, holds for all x ∈ W, 0 < r ≤ r0. This assumption was used to link the box dimension to another notion called Hausdorff dimension, which can be then linked to statistical properties of the training trajectories under further assumptions (see Section 1). An interesting asset of our approach is that, we do not require this condition and thanks to the following result, we are able to develop an algorithm to directly estimate dimPHW , while staying agnostic to the finer topological properties ofW . Proposition 2. Let W ⊂ Rd be a bounded set with dimPHW =: d?. Then, for all ε > 0 and α ∈ (0, d? + ε), there exists a constant Dα,ε, such that the following inequality holds for all n ∈ N+ and all collections Wn = {w1, . . . , wn} with wi ∈ W , i = 1, . . . , n:\nE0α(Wn) ≤ Dα,εn d?+ε−α d?+ε . (5)\nProof. Since W is bounded, we have dimBoxW = d? by Theorem 1. Fix ε > 0. Then, by Definition 3, there exists δ0 = δ0(ε) > 0 and a finite constant Cε > 0 such that for all δ ≤ δ0 the following inequality holds:\nNδ(W) ≤ Cεδ−(d ?+ε). (6)\nThen, the result directly follows from [Sch20, Proposition 21].\nThis result suggests a simple strategy to estimate an upper bound of the intrinsic dimension from persistent homology. In particular, we note that rewriting (5) for logarithmic values give us that(\n1− α d∗ +\n) log n+ logDα, ≥ logE0α. (7)\nIf logE0α and log n are sampled from the data and give an empirical slope m, then we see that d∗ + ≤ m1−α . In many cases, we see that d\n∗ ≈ α1−m (as further explained in Sec. 5.2), so we take α 1−m as our PH dimension estimation. We provide the full algorithm for computing this from our sampled data in Alg. 1. Note that our algorithm is similar to that proposed in [AAF+20], although our method works for sets rather than probability measures. In our implementation we compute the homology by the celebrated Ripser package [Bau21] unless otherwise specified.\nOn computational complexity. 
Computing the Vietoris Rips complex is an active area of research, as the worst-case time complexity is meaningless due to natural sparsity [Zom10]. Therefore, to calculate the time complexity of our estimator, we focus on analyzing the PH computation from the output simplices: calculating PH takes O(pw) time, where w < 2.4 is the constant of matrix multiplication and p is the number of simplices produced in the filtration [BP19]. Since we compute\nwith 0th order homology, this would imply that the computational complexity is O(nw), where n is the number of points. In particular, this means that estimating the PH dimension would take O(knw) time, where k is the number of samples taken assuming that samples are evenly spaced in [0, n].",
"4.2 Regularizing Deep Networks via Persistent Homology": "Motivated by our results in proposition 2, we theorize that controlling dimPHW would help in reducing the generalization error. Towards this end, we develop a regularizer for our training procedure which seeks to minimize dimPHW during train time. If we let L be our vanilla loss function, then we will instead optimize over our topological loss function Lλ := L+ λ dimPHW , where λ ≥ 0 controls the scale of the regularization andW now denotes a sliding window of iterates (e.g., the latest 50 iterates during training). This way, we aim to regularize the loss by considering the dimension of the ongoing training trajectory.\nIn Alg. 1, we let wi be the stored weights from previous iterations for i ∈ {1, . . . ,K − 1} and let wK be the current weight iteration. Since the persistence diagram computation and linear regression are differentiable, this means that our estimate for dimPH is also differentiable, and, if wk is sampled as in Alg. 1, is connected in the computation graph with wK . We incorporate our regularizer into the network training using PyTorch [PGM+19] and the associated persistent homology package torchph [CHU17, CHN19].",
"5 Experimental Evaluations": "This section presents our experimental results in two parts: (i) analyzing and quantifying generalization in practical deep networks on real data, (ii) ablation studies on a random diffusion process. In all the experiments we will assume that the intrinsic dimension is strictly larger than 1, hence we will set α = 1, unless specified otherwise. Further details are reported in the supplementary document.",
"5.1 Analyzing and Visualizing Deep Networks": "Measuring generalization. We first verify our main claim by showing that our persistent homology dimension derived from topological analysis of the training trajectories correctly measures of generalization. To demonstrate this, we apply our analysis to a wide variety of networks, training procedures, and hyperparameters. In particular, we train AlexNet [KSH12], a 5-layer (fcn-5) and 7-layer (fcn-7) fully connected networks, and a 9-layer convolutional netowork (cnn-9) on MNIST, CIFAR10 and CIFAR100 datasets for multiple batch sizes and learning rates until convergence. For AlexNet, we consider 1000 iterates prior to convergence and, for the others, we only consider 200. Then, we estimate dimPH on the last iterates by using Alg. 1. For varying n, we randomly pick n of last iterates and compute E0α, and then we use the relation given in (5).\nWe obtain the ground truth (GT) generalization error as the gap between training and test accuracies. Fig. 2 plots the PH-dimension with respect to test accuracy and signals a strong correlation of our PH-dimension and actual performance gap. The lower the PH-dimension, the higher the test accuracy. Note that this results aligns well with that of [SSDE20]. The figure also shows that the intrinsic dimensions across different datasets can be similar, even if the parameters of the models can vary greatly. This supports the recent hypothesis that what matters for the generalization is the effective capacity and not the parameter count. In fact, the dimension should be as minimal as possible without collapsing important representation features onto the same dimension. The findings in Fig. 2 are further augmented with results in Fig. 3, where a similar pattern is observed on AlexNet and CIFAR100.\nCan dimPH capture intrinsic properties of trajectories? After revealing that our ID estimation is a gauge for generalization, we set out to investigate whether it really hinges on the intrinsic properties of the data. We train several instances of 7-fcn for different learning rates and batch sizes. We compute the PH-dimension of each network using training trajectories. We visualize the following in the rows of Fig. 4 sorted by dimPH: (i) 200× 200 distance matrix of the sequence of iterates w1, . . . , wK (which is the basis for PH computations), (ii) corresponding logE0α=1 estimates as we sweep over n in an increasing fashion, (iii) persistence diagrams per each distance matrix. It is clear that there is a strong correlation between dimPH and the structure of the distance matrix. As dimension increases, matrix of distances become non-uniformly pixelated. The slope estimated from the total edge lengths the second row is a quantity proportional to our dimension. Note that the slope decreases as our estimte increases (hence generalization tends to decrease). We further observe clusters emerging in the persistence diagram. The latter has also been reported for better generalizing networks, though using a different notion of a topological space [BGND+19].\nIs dimPH a real indicator of generalization? To quantitatively assess the quality of our complexity measure, we gather two statistics: (i) we report the average p-value over different batch sizes for AlexNet trained with SGD on the Cifar100 dataset. The value of p = 0.0157 < 0.05 confirms the statistical significance. Next, we follow the recent literature [JFY+20] and consult the Kendall correlation coefficient (KCC). 
Similar to the p-value experiment above, we compute KCC for AlexNet+SGD for different batch sizes (64, 100, 128) and attain (0.933, 0.357, 0.733) respectively. Note that, a positive correlation signals that the test gap closes as dimPH decreases. Both of these experiments agree with our theoretical insights that connect generalization to a topological characteristic of a neural network: intrinsic dimension of training trajectories.\nEffect of different training algorithms. We also verify that our method is algorithm-agnostic and does not require assumptions on the training algorithm. In particular, we show that our above analyses extend to both the RMSProp [TH12] and Adam [KB15] optimizer. Our results are visualized in Fig. 3. We plot the dimension with respect to the generalization error for varying optimizers and batch sizes; our results verify that the generalization error (which is inversely related to the test accuracy) is\npositively correlated with the PH dimension. This corroborates our previous results in Fig. 2 and in particular shows that our dimension estimator of test gap is indeed algorithm-agnostic.\nEncouraging generalization via regularization dimPH. We furthermore verify that our topological regularizer is able to help control the test gap in accordance with our theory. We train a Lenet-5 network [LBBH98] on Cifar10 [Kri09] and compare a clean trianing with a training with our topological regularizer with λ set to 1. We train for 200 epochs with a batch size of 128 and report the train and test accuracies in Fig. 5 over a variety of learning rates. We tested over 10 trials and found that, with p < 0.05 for all cases except lr = 0.01, the results are different.\nOur topological optimizer is able to produce the best improvements when our network is not able to converge well. These results show that our regularizer behaves as expected: the regularizer is able to recover poor training\ndynamics. We note that this experiment uses a simple architecture and as such, it presents a proof of concept. We do not aim for the state of the art results. Furthermore, we directly compared our approach with the generalization estimator of [CMEM20], which most closely resembles our construction. In particular, we found their method does not scale and is often numerically unreliable. For example, their methodology grows quadratically with respect to number of network weights and linearly with the dataset size, while our method does not scale much beyond memory usage with vectorized computation. Furthermore, for many of our test networks, their metric space construction (which is based off of the correlation between activation functions and used for the Vietoris-Rips complex) would be numerically brittle and result in degenerate persistent homology. These prevent [CMEM20] to be applicable in this scenario.",
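The Kendall rank correlations quoted above can be reproduced for any collection of (dim_PH, generalization gap) pairs with scipy; the arrays below are hypothetical placeholders, not measurements from the paper.

```python
from scipy.stats import kendalltau

# Hypothetical per-run measurements for one architecture and batch size.
ph_dims   = [2.1, 2.4, 2.6, 3.0, 3.3]        # estimated dim_PH per run
test_gaps = [0.04, 0.06, 0.05, 0.09, 0.12]   # train accuracy - test accuracy

tau, p_value = kendalltau(ph_dims, test_gaps)
print(f"Kendall tau = {tau:.3f}, p-value = {p_value:.3f}")
```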
"5.2 Ablation Studies": "To assess the quality of our dimension estimator, we now perform ablation studies, on a synthetic data whose the ground truth ID is known. To this end, we use the synthetic experimental setting\npresented in [SSDE20] (see the supplementary document for details), and we simulate a d = 128 dimensional stable Levy process with varying number of points 100 ≤ n ≤ 1500 and tail indices 1 ≤ β ≤ 2. Note that the tail index equals the intrinsic dimension in this case, which is an order of magnitude lower for this experiment.\nCan dimPH match the ground truth ID? We first try to predict the GT intrinsic dimension running Alg. 1 on this data. We also estimate the TwoNN dimension [FdRL17] to quantify how the state of the art ID estimators correlate with GT in such heavy tailed regime. Our results are plotted in Fig. 6. Note that as n increases our estimator becomes smoother and well approximates the GT up to a slight over-estimation, a repeatedly observed phenomenon [CCCR15]. TwoNN does not guarantee recovering the box-dimension. While it is found to be useful in estimating the ID of data [ALMZ19], we find it to be less desirable in a heavy-tailed regime as reflected in the plots. Our supplementary material provides further results on other, non-dynamics like synthetic dataset such as points on a sphere where TwoNN can perform better. We also include a robust line fitting variant of our approach PH0-RANSAC, where a random sample consensus is applied iteratively. Though, as our data is not outlier-corrupted, we do not observe a large improvement.\nEffect of α on dimension estimation. While our theory requires α to be smaller than the intrinsic dimension of the trajectories, in all of our experiments we fix α = 1.0. It is of curiosity whether such choice hampers our estimates. To see the effect, we vary α in range [0.5, 2.5] and plot our estimates in Fig. 7. It is observed (blue curve) that our dimension estimate follows a U-shaped trend with increasing α. We indicate the GT ID by a dashed red line and our estimate as a dashed green line. Ideally, these two horizontal lines should overlap. It is noticeable that, given the oracle for GT ID, it might be possible to optimize for an α?. Yet, such information is not available for the deep networks. Nevertheless, α = 1 seems to yield reasonable performance and we leave the estimation of a better α for future work. We provide additional results in our supplementary material.",
"6 Conclusion": "In this paper, we developed novel connections between dimPH of the training trajectory and the generalization error. Using these insights, we proposed a method for estimating the dimPH from data and, unlike previous work [SSDE20], our approach does not presuppose any conditions on the trajectory and offers a simple algorithm. By leveraging the differentiability of PH computation, we showed that we can use dimPH as a regularizer during training, which improved the performance in different setups.\nSocietal Impact and Limitations. We believe that our study will not pose any negative societal or ethical consequences due to its theoretical nature. The main limitation of our study is that it solely considers the terms E0α, whereas PH offers a much richer structure. Hence, as our next step, we will explore finer ways to incorporate PH in generalization performance. We will further extend our results in terms of dimensions of measures by using the techniques presented in [CDE+21].",
"Acknowledgements": "Umut Şimşekli’s research is supported by the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19P3IA-0001 (PRAIRIE 3IA Institute).",
"Reviewer Summary": "Reviewer_1: The authors establish a relationship between the generalization error of trajectories obtained from a training algorithm and the persistent homology (PH) dimension. The theoretical contribution of this work involves combining two results: (1) box dimension of a bounded set can be computed using PH0 dimension [KLS06, Sch19], and (2) previous work linking box dimension and generalization error [SSDE20]. Based on these theoretical findings, the authors propose a simple algorithm to compute PH dimension directly, by performing a line fit on 0-dimensional topological features derived from weights in previous training iterations. This algorithm is based on mild assumptions, in contrast to previous work on computing the fractal dimension of training trajectories. Experiments applying this algorithm to a variety of networks (including AlexNet, CNNs and FCNs trained on MNIST, CIFAR10 and CIFAR100) and training algorithms (SGD, RMSprop and Adam) show that the PH dimension is inversely correlated with test accuracy. Next, the authors took advantage of the differentiability of persistent homology to incorporate the PH dimension computed from previous iterates as a regularizer to control the generalization error. The topological regularizer improved performance on the test dataset, especially at high learning rates where the unregularized network has low test accuracy. Finally, the authors performed ablation studies using synthetic data generated from β-stable Levy processes that exhibit heavy-tails observed in network trajectories. PH dimension outperformed other intrinsic dimension estimators in these tests, with varying number of points, ambient dimensions and line fitting procedures.\n\nReviewer_2: This paper proposes a measure of generalization based on persistence homology, a standard technique of topological data analysis.\nThe proposed measure, the persistence homology dimension (PHD), is computed on optimization trajectories during training.\nA theoretical connect between PHD and generalization is made using existing results in the literature: first the equivalence of PHD and the box-dimension [KLS06, Sch19], and second that the generalization error may be bounded under the box dimension (under the assumption that the optimization dynamics follow a Feller process).\nFrom these main result of the paper, Proposition 1, then follows under assumption (H1) . The authors then proceed to remove the Feller requirement in Proposition. 2 using a recent result in the mathematics literature [Sch20].\nBased on these results, an algorithm for computing PHD is proposed.\nThe authors proceed to perform an empirical evaluation of how well PHD predicts generalization for select deep network architectures (alexnet, cnn-9, fcn-5, fcn-7), datasets (mnist, cifar-10, cifar-100), and training hyperparameters (learning rate, batch size).\nThe authors visualize these results in Figures 2 and 3, where in Figure 3 the authors claim that PHD \"directly\" correlates with generalization error across experimental settings. Linear trends are evident in the plots, however no measures of goodness of fit are given.\nIn addition, a novel loss regularizer is proposed based on their PHD measure. The effect on generalization of his regularizer is studied, but small differences are shown.\nAblation studies are additionally carried out.\n\nReviewer_3: The relationship between generalized loss of NN and Intrinsic Dimension by Persistent homology is mathematically proved. 
They also propose a normalization term based on the theory presented. In addition, we experimentally verify the theoretical results and confirm the effectiveness of the proposed algorithm.\n\nReviewer_4: The paper proposes to use an estimator of the intrinsic dimension (ID) based on topological data analysis for two goals, both important in neural network theory: (1) as a proxy of the test accuracy, which allows estimating it without performing explicitly any validation and (2) as a regularizer, by adding an ID-dependent term to the loss. A part of the paper is devoted to derive (or recall from the literature) rigorous properties of this ID estimator."
}