results indicate that our model achieves better calibration of predicted survival probabilities, ensuring closer alignment between predictions and observed outcomes in both the survival CDF $S(t|x)$ and PDF $f(t|x)$. This underscores the reliability and robustness of our model in accurately capturing true survival behavior across diverse datasets.

For illustration purposes, Figure 2 shows the performance of all models and datasets using Harrell's C-index and CensDcal. Specifically, Figure 2(a) provides a comparison of Harrell's C-index, highlighting the discriminative performance of our proposed model, ALD, alongside the baseline models. The x axis lists all the datasets, both synthetic (e.g., Gaussian linear, exponential, etc.) and real-world (e.g., METABRIC, WHAS, etc.), while the y axis indicates the corresponding mean C-index values, with error bars representing the standard deviation across 10 model runs.

Learning Survival Distributions with the Asymmetric Laplace Distribution

Figure 3. Examples of best and worst calibration curves. Slope and intercept of the linear fit are shown in the legend.

Our method demonstrates consistently strong performance across both synthetic and real-world datasets, frequently achieving higher or comparable C-index values relative to the baseline models. Among these, CQRNN stands out as the most competitive alternative, achieving similar levels of performance on certain datasets. However, in most cases our model outperforms CQRNN, reflecting its robustness and superior discriminative capability. In particular, ALD excels under scenarios with high censoring rates. For example, on Norm heavy (PropCens: 0.80), Norm med (PropCens: 0.49), LogNorm heavy (PropCens: 0.75), and LogNorm med (PropCens: 0.52), ALD consistently outperforms the other models. This highlights ALD's ability to handle challenging scenarios effectively.
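As a concrete reference for how these concordance values are computed, the sketch below implements Harrell's C-index for right-censored data in plain NumPy. The paper's experiments use `concordance_index_censored` from scikit-survival (see Appendix B.2); the toy arrays here are purely illustrative.

```python
import numpy as np

def harrell_c_index(y, e, risk):
    """Harrell's C-index: among comparable pairs (y_i < y_j with event
    e_i = 1), the fraction where the earlier subject has the higher
    risk score; ties in risk count as 0.5."""
    num, den = 0.0, 0.0
    n = len(y)
    for i in range(n):
        if e[i] != 1:          # pairs are only comparable if the earlier time is an event
            continue
        for j in range(n):
            if y[i] < y[j]:
                den += 1
                num += 1.0 if risk[i] > risk[j] else 0.5 if risk[i] == risk[j] else 0.0
    return num / den

# Illustrative toy data: times, event indicators, and model risk scores.
y = np.array([2.0, 4.0, 3.0, 5.0])
e = np.array([1, 1, 0, 1])
risk = np.array([0.9, 0.4, 0.7, 0.1])
print(harrell_c_index(y, e, risk))  # 1.0: risk ordering matches event ordering
```

The quadratic double loop is fine for illustration; library implementations sort by time to avoid it.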
Such robustness under high censoring further underscores ALD's reliability and adaptability across a variety of survival analysis tasks. Using a similar comparison framework, Figure 2(b) presents the calibration results using CensDcal. Concisely, the proposed model achieves consistently better (lower) CensDcal values across most datasets, reflecting superior calibration compared to the baseline models.

Complementary to the CensDcal calibration metric, the slope and intercept summaries of the calibration curve provide a more intuitive (and graphical) perspective on the calibration results. Figure 3 presents the best (first row) and worst (second row) results from our model on real-world data. The left and right columns show the curves for Cal[$S(t|x)$] and Cal[$f(t|x)$], respectively. The gray dashed line represents the idealized result, for which the slope is one and the intercept is zero.

The proposed model demonstrates exceptional performance on the TMBImmuno dataset for the Cal[$S(t|x)$] summaries, as well as on the LGGGBM dataset for the Cal[$f(t|x)$] summaries, indicating robust calibration across both versions of the calibration metric. In contrast, the performance on the SUPPORT dataset is relatively weaker. This discrepancy can largely be attributed to our method's reliance on the ALD assumption, which may not be appropriate for all datasets; this limitation is particularly evident in datasets like SUPPORT. Notably, the SUPPORT data exhibit high skewness with a relatively small range of $y$. Specifically, Figure 6 in the Appendix shows the event distribution for the SUPPORT data, from which we
https://arxiv.org/abs/2505.03712v2
can see that it is heavily skewed. Such skewness, manifested as a concentration of events close to 0, makes it challenging to achieve good calibration in that range, i.e., $t \to 0$. Similarly, our method attempted to predict smaller values for the initial quantiles but still allocated a disproportionately large weight to the first two intervals because of the small and highly concentrated predicted quantiles, which significantly reduced the capacity for the remaining intervals and ultimately degraded calibration performance. However, the calibration results for this dataset remain within reasonable ranges and, more importantly, comparable to those of the baselines. Detailed results for all datasets are provided in Appendix C. Overall, and consistent with the summary results in Table 2, our model demonstrates a clear advantage on the slope and intercept metrics, consistently outperforming the baselines.

Finally, we also explore other distribution summaries, i.e., the mode and median, to evaluate their impact on MAE and C-index performance. Table 5 shows that different summaries may perform better on some datasets. In addition, recognizing that the ALD has support for $t < 0$, we summarized the empirical quantiles of the predicted $F_{\mathrm{ALD}}(0|x)$, i.e., the probability that an event occurs up to $t = 0$. Interestingly, Table 6 in the Appendix indicates that this is rarely an issue: in most cases, $F_{\mathrm{ALD}}(0|x) \to 0$ for most of the predictions made by the model on the test set.

6. Conclusion

In this paper, we proposed a parametric survival model based on the Asymmetric Laplace Distribution and provided a comprehensive comparison and analysis with existing methods, particularly CQRNN, which leverages the same distribution. Our model produces closed-form distributions, which enables flexible summarization and interpretation of predictions.
Experimental results on a diverse range of synthetic and real-world datasets demonstrate that our approach offers very competitive performance relative to multiple baselines across accuracy, concordance, and calibration metrics.

Limitations. First, our method relies on the assumption of the Asymmetric Laplace Distribution, which may not be universally applicable. This limitation was particularly evident in certain cases, such as the SUPPORT dataset, as highlighted in Section 5.4, where our method faced challenges, especially in terms of calibration. Second, while our approach facilitates the calculation of different distribution summaries such as the mean, median, mode, and even distribution quantiles, selecting the most suitable summary statistic for a specific dataset or application remains a non-trivial task. In this study, we selected the mean as the main summary statistic, which results in relatively balanced performance metrics; however, it does not offer an advantage, for example, in terms of C-index and MAE, when compared to CQRNN. Nevertheless, considering other summary statistics as part of model selection, which we did not attempt, may improve performance on these metrics for certain datasets, as detailed in Appendix C.

Acknowledgements

This work was supported by grant 1R61-NS120246-02 from the National Institute of Neurological Disorders and Stroke (NINDS).

Impact Statement

Our proposed survival analysis method utilizes the Asymmetric Laplace Distribution (ALD)
to deliver closed-form solutions for key event summaries, such as means and quantiles, facilitating more interpretable predictions. The method outperforms both traditional parametric and nonparametric approaches in terms of discrimination and calibration by optimizing individual-level parameters through maximum likelihood. This advancement has significant implications for applications like personalized medicine, where accurate and interpretable predictions of event timing are crucial.

References

Aalen, O. Nonparametric inference for a family of counting processes. The Annals of Statistics, pp. 701–726, 1978.

Benjamini, Y. and Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300, 1995.

Chapfuwa, P., Tao, C., Li, C., Page, C., Goldstein, B., Duke, L. C., and Henao, R. Adversarial time-to-event modeling. In International Conference on Machine Learning, pp. 735–744. PMLR, 2018.

Cox, D. R. Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological), 34(2):187–202, 1972.

Emmerson, J. and Brown, J. Understanding survival analysis in clinical trials. Clinical Oncology, 33(1):12–14, 2021.

Feigl, P. and Zelen, M. Estimation of exponential survival probabilities with concomitant information. Biometrics, pp. 826–838, 1965.

Gepp, A. and Kumar, K. The role of survival analysis in financial distress prediction. International Research Journal of Finance and Economics, 16(16):13–34, 2008.

Goldstein, M., Han, X., Puli, A., Perotte, A., and Ranganath, R. X-cal: Explicit calibration for survival analysis. Advances in Neural Information Processing Systems, 33:18296–18307, 2020.

Graf, E., Schmoor, C., Sauerbrei, W., and Schumacher, M. Assessment and comparison of prognostic classification schemes for survival data. Statistics in Medicine, 18(17-18):2529–2545, 1999.
Haider, H., Hoehn, B., Davis, S., and Greiner, R. Effective ways to build and evaluate individual survival distributions. Journal of Machine Learning Research, 21(85):1–63, 2020.

Harrell, F. E., Califf, R. M., Pryor, D. B., Lee, K. L., and Rosati, R. A. Evaluating the yield of medical tests. JAMA, 247(18):2543–2546, 1982.

Hoseini, M., Bahrampour, A., and Mirzaee, M. Comparison of Weibull and lognormal cure models with Cox in the survival analysis of breast cancer patients in Rafsanjan. Journal of Research in Health Sciences, 17(1):369, 2017.

Jung, E.-Y., Baek, C., and Lee, J.-D. Product survival analysis for the app store. Marketing Letters, 23:929–941, 2012.

Kaplan, E. L. and Meier, P. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282):457–481, 1958.

Katzman, J. L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., and Kluger, Y. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Medical Research Methodology, 18:1–12, 2018.

Klein, J. P. and Moeschberger, M. L. Survival Analysis: Techniques for Censored and Truncated Data. Springer Science & Business Media, 2006.

Koenker, R. Quantile Regression, 'quantreg' R Package. Cambridge University Press, 2022. doi: 10.1017/CBO9780511754098.

Koenker, R. and Bassett Jr, G. Regression quantiles. Econometrica: Journal of the Econometric Society, pp. 33–50, 1978.

Kotz, S., Kozubowski, T., and Podgorski, K. The Laplace
distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance. Springer Science & Business Media, 2012.

Lai, C. D. and Xie, M. Stochastic Ageing and Dependence for Reliability. Springer Science & Business Media, 2006.

Lánczky, A. and Győrffy, B. Web-based survival analysis tool tailored for medical research (KMplot): development and implementation. Journal of Medical Internet Research, 23(7):e27633, 2021.

Lee, C., Zame, W., Yoon, J., and Van Der Schaar, M. DeepHit: A deep learning approach to survival analysis with competing risks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Nagpal, C., Li, X., and Dubrawski, A. Deep survival machines: Fully parametric survival regression and representation learning for censored data with competing risks. IEEE Journal of Biomedical and Health Informatics, 25(8):3163–3175, 2021.

Neocleous, T., Branden, K. V., and Portnoy, S. Correction to censored regression quantiles by S. Portnoy, 98 (2003), 1001–1012. Journal of the American Statistical Association, 101(474):860–861, 2006.

Pearce, T., Jeong, J.-H., Zhu, J., et al. Censored quantile regression neural networks for distribution-free survival analysis. Advances in Neural Information Processing Systems, 35:7450–7461, 2022.

Royston, P. The lognormal distribution as a model for survival time in cancer, with an emphasis on prognostic factors. Statistica Neerlandica, 55(1):89–104, 2001.

Scholz, F. and Works, B. P. Maximum likelihood estimation for type I censored Weibull data including covariates. Technical report, ISSTECH-96-022, Boeing Information and Support Services, 1996.

Uno, H., Cai, T., Pencina, M. J., D'Agostino, R. B., and Wei, L.-J. On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Statistics in Medicine, 30(10):1105–1117, 2011.

Voronov, S., Frisk, E., and Krysander, M.
Data-driven battery lifetime prediction and confidence estimation for heavy-duty trucks. IEEE Transactions on Reliability, 67(2):623–639, 2018.

Yu, K. and Moyeed, R. A. Bayesian quantile regression. Statistics & Probability Letters, 54(4):437–447, 2001.

Zhang, W., Le, T. D., Liu, L., Zhou, Z.-H., and Li, J. Mining heterogeneous causal effects for personalized cancer treatment. Bioinformatics, 33(15):2372–2378, 2017.

A. Analytical Results

This section provides the analytical results. Detailed proofs for the Asymmetric Laplace Distribution loss can be found in Appendix A.1, while the analysis of all the baselines, including CQRNN, LogNormal MLE, DeepSurv, and DeepHit, is presented in Appendix A.2.

A.1. Proofs for the Asymmetric Laplace Distribution Loss

Theorem 1. If $Y \sim \mathcal{AL}(\theta, \sigma, \kappa)$, where $\mathcal{AL}$ denotes the Asymmetric Laplace Distribution with location parameter $\theta$, scale parameter $\sigma > 0$, and asymmetry parameter $\kappa > 0$, then the ALD loss is given by:

$$\mathcal{L}_{\mathrm{ALD}} = \mathcal{L}_o(y;\theta,\sigma,\kappa) + \mathcal{L}_c(y;\theta,\sigma,\kappa) = -\sum_{n \in \mathcal{D}_o} \log f_{\mathrm{ALD}}(y_n|x_n) - \sum_{n \in \mathcal{D}_c} \log\left(1 - F_{\mathrm{ALD}}(y_n|x_n)\right) \quad (7)$$

where $\mathcal{D}_o$ and $\mathcal{D}_c$ are the subsets of $\mathcal{D}$ for which $e = 1$ and $e = 0$, respectively. The first term maximizes the likelihood $f_{\mathrm{ALD}}(t|x)$ for the observed data, while the second term maximizes the survival probability $S_{\mathrm{ALD}}(t|x)$ for the censored data. To achieve this, the parameters $\theta, \sigma, \kappa$ are predicted by a multi-layer perceptron (MLP) conditioned on the input features, $x$, enabling the model to adapt flexibly
to varying input distributions.

The observed component $\mathcal{L}_o(y;\theta,\sigma,\kappa)$ is defined as:

$$\mathcal{L}_o(y;\theta,\sigma,\kappa) = \log\sigma - \log\frac{\kappa}{\kappa^2+1} + \frac{\sqrt{2}}{\sigma}\begin{cases} \kappa\,(y-\theta), & \text{if } y \ge \theta, \\ \frac{1}{\kappa}(\theta-y), & \text{if } y < \theta. \end{cases} \quad (8)$$

The censored loss component $\mathcal{L}_c(y;\theta,\sigma,\kappa)$ is computed using the survival probability function:

$$\mathcal{L}_c(y;\theta,\sigma,\kappa) = \begin{cases} \log(\kappa^2+1) + \frac{\sqrt{2}\,\kappa}{\sigma}(y-\theta), & \text{if } y \ge \theta, \\ \log(\kappa^2+1) - \log\left[1 + \kappa^2\left(1 - \exp\left(-\frac{\sqrt{2}}{\sigma\kappa}(\theta-y)\right)\right)\right], & \text{if } y < \theta. \end{cases} \quad (9)$$

Proposition 2. (Mean, Mode, Variance of $Y$) The mean, mode, and variance of $Y$ are given by:

$$\mathbb{E}[Y] = \theta + \frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa} - \kappa\right) \quad (10)$$

$$\mathrm{Mode}[Y] = \theta \quad (11)$$

$$\mathrm{Var}[Y] = \frac{\sigma^2}{2}\left(\frac{1}{\kappa^2} + \kappa^2\right) \quad (12)$$

Proposition 3. (Quantiles of $Y$) Let $\theta^{\mathrm{ALD}}_q$ denote the $q$-th quantile of $Y$. Then, the quantiles can be expressed as:

$$\theta^{\mathrm{ALD}}_q = \begin{cases} \theta + \frac{\sigma\kappa}{\sqrt{2}}\log\left[\frac{1+\kappa^2}{\kappa^2}\,q\right], & \text{if } q \in \left(0, \frac{\kappa^2}{1+\kappa^2}\right], \\ \theta - \frac{\sigma}{\sqrt{2}\,\kappa}\log\left[(1+\kappa^2)(1-q)\right], & \text{if } q \in \left(\frac{\kappa^2}{1+\kappa^2}, 1\right). \end{cases} \quad (13)$$

A.2. Analysis of All the Baselines

CQRNN. CQRNN (Pearce et al., 2022) combines the likelihood of the Asymmetric Laplace Distribution, $f_{\mathrm{ALD}}(t|x)$, with the re-weighting scheme $w$ introduced by Portnoy (Neocleous et al., 2006). For the observed data, CQRNN employs the Maximum Likelihood Estimation (MLE) approach to directly maximize the likelihood of the Asymmetric Laplace Distribution $\mathcal{AL}(\theta, \sigma, q)$. The likelihood is defined over all quantiles of interest. For censored data, CQRNN splits each censored data point into two pseudo data points: one at the censoring location $y = c$ and another at a large pseudo value $y^*$. This approach enables the formulation of a weighted likelihood for censored data, resulting in the following loss function:

$$\mathcal{L}_{\mathrm{CQR}} = \mathcal{L}_o(y;\theta,\sigma,q) + \mathcal{L}_c(y, y^*;\theta,\sigma,q,w) \quad (14)$$

where $\mathcal{L}_o$ represents the negative log-likelihood for observed data, and $\mathcal{L}_c$ accounts for the weighted negative log-likelihood of censored data using the re-weighting scheme. Expanding this, the loss can be expressed as:

$$\mathcal{L}_{\mathrm{CQR}} = -\sum_{n \in \mathcal{D}_o} \log f_{\mathrm{ALD}}(y_n|x_n) - \sum_{n \in \mathcal{D}_c} \left[ w \log f_{\mathrm{ALD}}(y_n|x_n) + (1-w) \log f_{\mathrm{ALD}}(y^*|x_n) \right] \quad (15)$$

where $\mathcal{D}_o$ and $\mathcal{D}_c$ are the subsets of $\mathcal{D}$ for which $e = 1$ and $e = 0$, respectively.
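To make the loss of Theorem 1 concrete, here is a NumPy sketch of the per-sample ALD loss of Eqs. (8)–(9). In the paper, $\theta, \sigma, \kappa$ are predicted per subject by an MLP; here they are passed in as plain values for illustration.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def ald_loss(y, e, theta, sigma, kappa):
    """Total ALD loss of Theorem 1: negative log-likelihood (Eq. 8) for
    observed samples (e = 1) plus negative log-survival (Eq. 9) for
    censored samples (e = 0)."""
    y, e = np.asarray(y, float), np.asarray(e, int)
    resid = y - theta
    # Observed component, Eq. (8): the asymmetric "check" term.
    check = np.where(resid >= 0, kappa * resid, (theta - y) / kappa)
    l_obs = np.log(sigma) - np.log(kappa / (kappa**2 + 1)) + SQRT2 / sigma * check
    # Censored component, Eq. (9), i.e. -log S_ALD(y).
    upper = np.log(kappa**2 + 1) + SQRT2 * kappa / sigma * resid
    d = np.maximum(theta - y, 0.0)  # clamp so the lower branch is always well-defined
    lower = np.log(kappa**2 + 1) - np.log(
        1 + kappa**2 * (1 - np.exp(-SQRT2 / (sigma * kappa) * d)))
    l_cens = np.where(resid >= 0, upper, lower)
    return np.where(e == 1, l_obs, l_cens).sum()

# Example: one observed and one censored sample with shared parameters.
print(ald_loss(np.array([1.5, 2.5]), np.array([1, 0]), theta=1.0, sigma=1.0, kappa=1.0))
```

A quick sanity check: at $y = \theta$ with $\kappa = 1$, both branches reduce to $\log 2$, consistent with $F_{\mathrm{ALD}}(\theta) = 1/2$ for the symmetric case.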
Here, CQRNN utilizes the Asymmetric Laplace Distribution $\mathcal{AL}(\theta, \sigma, q)$ to model the data. The Asymmetric Laplace Distribution, denoted as $\mathcal{AL}(\theta, \sigma, \kappa)$, can be reparameterized as $\mathcal{AL}(\theta, \sigma, q)$ to facilitate quantile regression within a Bayesian inference framework (Yu & Moyeed, 2001), where $q \in (0,1)$ is the percentile parameter that represents the desired quantile. The relationship between $q$ and $\kappa$ is given by:

$$q = \frac{\kappa^2}{\kappa^2 + 1}. \quad (16)$$

Thus, the probability density function for $Y \sim \mathcal{AL}(\theta, \sigma, q)$ is:

$$f_{\mathrm{ALD}}(y;\theta,\sigma,q) = \frac{q(1-q)}{\sigma}\begin{cases} \exp\left(\frac{q}{\sigma}(\theta-y)\right), & \text{if } y \ge \theta, \\ \exp\left(\frac{1-q}{\sigma}(y-\theta)\right), & \text{if } y < \theta. \end{cases} \quad (17)$$

And the cumulative distribution function is:

$$F_{\mathrm{ALD}}(y;\theta,\sigma,q) = \begin{cases} 1 - (1-q)\exp\left(\frac{q}{\sigma}(\theta-y)\right), & \text{if } y \ge \theta, \\ q\exp\left(\frac{1-q}{\sigma}(y-\theta)\right), & \text{if } y < \theta. \end{cases} \quad (18)$$

Thus, the negative log-likelihood $\mathcal{L}_{\mathrm{QR}}(y;\theta,\sigma,q)$ can then be explicitly derived as:

$$\mathcal{L}_{\mathrm{QR}}(y;\theta,\sigma,q) = \log\sigma - \log[q(1-q)] + \frac{1}{\sigma}\begin{cases} q\,(y-\theta), & \text{if } y \ge \theta, \\ (1-q)(\theta-y), & \text{if } y < \theta. \end{cases} \quad (19)$$

In their implementation, the scale parameter $\sigma$ is omitted, and the percentile parameter $q$ is predefined, typically set to values such as $q = \{0.1, 0.2, \ldots, 0.9\}$. A multi-layer perceptron (MLP) in CQRNN, conditioned on the input features $x$, predicts $\theta_q$ for the predefined quantile values, corresponding to the location parameter $\theta$. The negative log-likelihood $\mathcal{L}_{\mathrm{QR}}(y;\theta,\sigma,q)$
is then further simplified as:

$$\mathcal{L}_{\mathrm{QR}}(y;\theta_q,q) = \begin{cases} q\,(y-\theta_q), & \text{if } y \ge \theta_q, \\ (1-q)(\theta_q-y), & \text{if } y < \theta_q, \end{cases} \;=\; (y-\theta_q)\left(q - \mathbb{I}[\theta_q > y]\right). \quad (20)$$

This formulation is also referred to as the pinball loss or "checkmark" loss (Koenker & Bassett Jr, 1978), which is widely used in quantile regression to directly optimize the $q$-th quantile estimate. For censored data, CQRNN adopts Portnoy's estimator (Neocleous et al., 2006), which minimizes a specific objective function tailored for censored quantile regression. This approach introduces a re-weighting scheme to handle all censored data, with the formula defined as:

$$\mathcal{L}_c(y, y^*;\theta_q,q,w) = w\,\mathcal{L}_{\mathrm{QR}}(y;\theta_q,q) + (1-w)\,\mathcal{L}_{\mathrm{QR}}(y^*;\theta_q,q), \quad (21)$$

where $y^*$ is a pseudo value set to be significantly larger than all observed values of $y$ in the dataset. Specifically, it is defined as $y^* = 1.2\,\max_i y_i$ in CQRNN (Pearce et al., 2022). The weight parameter $w$ is apportioned between each pair of pseudo data points as:

$$w = \frac{q - q_c}{1 - q_c}, \quad (22)$$

where $q_c$ is the quantile at which the data point was censored ($e = 0$, $y = c$) with respect to the observed value distribution, i.e., $p(o < c\,|\,x)$. However, the exact value of $q_c$ is not accessible in practice. To address this issue, CQRNN approximates $q_c$ using the proportion $q$ corresponding to the quantile that is closest to the censoring value $c$, based on the distribution of observed events $y$, which are readily available.

LogNormal MLE. LogNormal MLE (Hoseini et al., 2017) enhances parameter estimation using neural networks for LogNormal distributions. Specifically, a random variable $Y$ follows a LogNormal distribution if the natural logarithm of $Y$, denoted as $\ln(Y)$, follows a Normal distribution, i.e., $\ln(Y) \sim \mathcal{N}(\mu, \eta^2)$. Here, $\mu$ represents the mean, and $\eta$ is the standard deviation (SD) of the normal distribution. The probability density function of the LogNormal distribution is given by:

$$f_{\mathrm{LogNormal}}(y;\mu,\eta) = \frac{1}{y\eta\sqrt{2\pi}}\exp\left(-\frac{(\ln y - \mu)^2}{2\eta^2}\right) \quad (23)$$

where $y > 0$ and $\eta > 0$.
The cumulative distribution function is expressed as:

$$F_{\mathrm{LogNormal}}(y;\mu,\eta) = \Phi\left(\frac{\ln(y) - \mu}{\eta}\right) \quad (24)$$

where $\Phi(z)$ is the standard normal cumulative distribution function:

$$\Phi(z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z} \exp\left(-\frac{t^2}{2}\right)dt \quad (25)$$

The maximum likelihood estimation (MLE) loss with censored data is then defined as:

$$\mathcal{L}_{\mathrm{LogNormal}} = -\sum_{n \in \mathcal{D}_o} \log f_{\mathrm{LogNormal}}(y_n|x_n) - \sum_{n \in \mathcal{D}_c} \log\left(1 - F_{\mathrm{LogNormal}}(y_n|x_n)\right). \quad (26)$$

A multi-layer perceptron (MLP) in LogNormal MLE, conditioned on the input features $x$, is used to predict the mean $\mu$ and the standard deviation $\eta$ of the corresponding normal distribution. The quantiles $\theta^{\mathrm{LogNormal}}_q$ for the LogNormal distribution can be expressed as:

$$\theta^{\mathrm{LogNormal}}_q = \exp\left(\mu + \eta\,\Phi^{-1}(q)\right), \quad (27)$$

where $\Phi^{-1}(q)$ is the inverse CDF (quantile function) of the standard normal distribution.

DeepSurv. DeepSurv (Katzman et al., 2018) is a semi-parametric survival model based on the Cox proportional hazards framework, leveraging deep neural networks for feature representation. A multi-layer perceptron (MLP) in DeepSurv, conditioned on the input features $x$, is used to predict the log hazard function $h(x)$:

$$\lambda(t|x) = \lambda_0(t)\,e^{h(x)} \quad (28)$$

where $\lambda_0(t)$ is the baseline hazard function. The hazard function is defined as:

$$\lambda(t|x) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \,|\, T \ge t, x)}{\Delta t} \quad (29)$$

This can be rewritten as:

$$\lambda(t|x) = -\frac{dS(t|x)/dt}{S(t|x)} \quad (30)$$

where $S(t|x) = P(T > t\,|\,x)$ is the survival function. By integrating both sides,
we have:

$$\int \lambda(t|x)\,dt = \int -\frac{dS(t|x)}{S(t|x)} \quad (31)$$

which simplifies to:

$$\Lambda(t|x) = -\log S(t|x) + C \quad (32)$$

where $C$ is the constant of integration and $\Lambda(t|x)$ is the cumulative hazard function:

$$\Lambda(t|x) = \Lambda_0(t)\,e^{h(x)} \quad (33)$$

where $\Lambda_0(t)$ is the baseline cumulative hazard function. For survival analysis, $C$ is typically set to 0 when starting from $t = 0$. Thus, the survival function can be expressed as:

$$S(t|x) = e^{-\Lambda(t|x)} = e^{-\Lambda_0(t)\,e^{h(x)}} = [S_0(t)]^{e^{h(x)}} \quad (34)$$

where $S_0(t)$ is the baseline survival function, typically estimated by the Kaplan-Meier method (Kaplan & Meier, 1958) using the training data. The cumulative distribution function (CDF) can then be derived as:

$$F_{\mathrm{DeepSurv}}(t|x) = 1 - S(t|x) = 1 - [S_0(t)]^{e^{h(x)}} \quad (35)$$

The quantiles $\theta^{\mathrm{DeepSurv}}_q$ for DeepSurv can be obtained from the inverse CDF $F^{-1}_{\mathrm{DeepSurv}}(t|x)$ (quantile function).

DeepHit. A multi-layer perceptron (MLP) in DeepHit (Lee et al., 2018), conditioned on the input features $x$, is used to predict the probability distribution $f(t|x)$ over event times using a fully non-parametric approach. The quantiles $\theta^{\mathrm{DeepHit}}_q$ can be obtained from the inverse cumulative distribution function $F^{-1}_{\mathrm{DeepHit}}(t|x)$, where $F_{\mathrm{DeepHit}}(t|x) = \sum_{t' \le t} f_{\mathrm{DeepHit}}(t'|x)$.

B. Experimental Details

This section provides additional details about the experiments conducted. The experiments were implemented using the PyTorch framework. Detailed information about the datasets, the metrics, and the baselines and implementation details can be found in Appendix B.1, Appendix B.2, and Appendix B.3, respectively.

Hardware. All experiments were conducted on a MacBook Pro with an Apple M3 Pro chip, featuring 12 cores (6 performance and 6 efficiency cores) and 18 GB of memory. CPU-based computations were utilized for all experiments, as the models primarily relied on fully-connected neural networks.

B.1. Datasets

Our datasets are designed following the settings outlined in Pearce et al. (2022). The first type of dataset consists of synthetic target data with synthetic censoring.
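A minimal sketch of how one such synthetic right-censored dataset can be generated. The combination rule $y = \min(o, c)$, $e = \mathbb{I}[o \le c]$ is the standard censoring convention (an assumption here, not stated explicitly above); the "Norm linear" setting of Table 3 is used for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 1

# Input features x ~ U(0, 2)^D, as described in Appendix B.1.
x = rng.uniform(0.0, 2.0, size=(N, D)).squeeze()

# "Norm linear" setting of Table 3 (second argument of rng.normal is the SD):
o = rng.normal(2 * x + 10, x + 1)            # o ~ N(2x+10, (x+1)^2)
c = rng.normal(4 * x + 10, 0.8 * x + 0.4)    # c ~ N(4x+10, (0.8x+0.4)^2)

# Standard right-censoring: observe whichever of event/censoring comes first.
y = np.minimum(o, c)
e = (o <= c).astype(int)   # e = 1 if the event is observed, 0 if censored
print(f"censoring proportion: {1 - e.mean():.2f}")
```

Swapping in the other rows of Table 3 only changes the two sampling lines for `o` and `c`.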
In these datasets, the input features, $x$, are generated uniformly as $x \sim \mathcal{U}(0,2)^D$, where $D$ denotes the number of features. The observed variable, $o \sim p(o|x)$, and the censored variable, $c \sim p(c|x)$, follow distinct distributions, with their parameters varying based on the specific dataset configuration. Table 3 provides detailed descriptions of the distributions for the observed and censored variables. Additionally, the coefficient vector used in some datasets is defined as $\beta = [0.8, 0.6, 0.4, 0.5, -0.3, 0.2, 0.0, -0.7]$.

Table 3. Characteristics of synthetic datasets encompassing the number of features, parameterized distributions of observed variables, and censored variables, as utilized in the experimental framework.

Synthetic Dataset | Feats ($D$) | Observed Variables $o \sim p(o|x)$ | Censored Variables $c \sim p(c|x)$
Norm linear | 1 | $\mathcal{N}(2x+10, (x+1)^2)$ | $\mathcal{N}(4x+10, (0.8x+0.4)^2)$
Norm non-linear | 1 | $\mathcal{N}(x\sin(2x)+10, (0.5x+0.5)^2)$ | $\mathcal{N}(2x+10, 2^2)$
Exponential | 1 | $\mathrm{Exp}(2x+4)$ | $\mathrm{Exp}(-3x+15)$
Weibull | 1 | $\mathrm{Weibull}(x\sin(2x-2)+10, 5)$ | $\mathrm{Weibull}(-3x+20, 5)$
LogNorm | 1 | $\mathrm{LogNorm}((x-1)^2, x^2)$ | $\mathcal{U}(0, 10)$
Norm uniform | 1 | $\mathcal{N}(2x\cos(2x)+13, (x+0.5)^2)$ | $\mathcal{U}(0, 18)$
Norm heavy | 4 | $\mathcal{N}(3x_0 + x_1^2 - x_2^2 + 2\sin(x_2 x_3) + 6, (x+0.5)^2)$ | $\mathcal{U}(0, 12)$
Norm med | 4 | —''— | $\mathcal{U}(0, 20)$
Norm light | 4 | —''— | $\mathcal{U}(0, 40)$
Norm same | 4 | —''— | Equal to observed dist.
LogNorm heavy | 8 | $\mathrm{LogNorm}(\sum_{i=1}^{8}\beta_i x_i, 1)/10$ | $\mathcal{U}(0, 0.4)$
LogNorm med | 8 | —''— | $\mathcal{U}(0, 1.0)$
LogNorm light | 8 | —''— | $\mathcal{U}(0, 3.5)$
LogNorm same | 8 | —''— | Equal to observed dist.

The other type of dataset comprises real-world target data with real
censoring, sourced from various domains and characterized by distinct features, sample sizes, and censoring proportions:

• METABRIC (Molecular Taxonomy of Breast Cancer International Consortium): This dataset contains genomic and clinical data for breast cancer patients. It includes 9 features, 1523 training samples, and 381 testing samples, with a censoring proportion of 0.42. Retrieved from the DeepSurv Repository.

• WHAS (Worcester Heart Attack Study): This dataset focuses on predicting survival following acute myocardial infarction. It includes 6 features, 1310 training samples, and 328 testing samples, with a censoring proportion of 0.57. Retrieved from the DeepSurv Repository.

• SUPPORT (Study to Understand Prognoses Preferences Outcomes and Risks of Treatment): This dataset provides survival data for critically ill hospitalized patients. It includes 14 features, 7098 training samples, and 1775 testing samples, with a censoring proportion of 0.32. Covariates include demographic information and basic diagnostic data. Retrieved from the DeepSurv Repository.

• GBSG (Rotterdam & German Breast Cancer Study Group): Originating from the German Breast Cancer Study Group, this dataset tracks survival outcomes of breast cancer patients. It includes 7 features, 1785 training samples, and 447 testing samples, with a censoring proportion of 0.42. Retrieved from the DeepSurv Repository.

• TMBImmuno (Tumor Mutational Burden and Immunotherapy): This dataset predicts survival time for patients with various cancer types using clinical data. It includes 3 features, 1328 training samples, and 332 testing samples, with a censoring proportion of 0.49. Covariates include age, sex, and mutation count. Retrieved from the cBioPortal.

• BreastMSK: Derived from the Memorial Sloan Kettering Cancer Center, this dataset focuses on predicting survival time for breast cancer patients using tumor-related information.
It includes 5 features, 1467 training samples, and 367 testing samples, with a censoring proportion of 0.77. Retrieved from the cBioPortal.

• LGGGBM: This dataset integrates survival data from low-grade glioma (LGG) and glioblastoma multiforme (GBM), frequently used for model validation in cancer genomics. It includes 5 features, 510 training samples, and 128 testing samples, with a censoring proportion of 0.60. Retrieved from the cBioPortal.

B.2. Metrics

We employ nine distinct evaluation metrics to assess model performance comprehensively: Mean Absolute Error (MAE), Integrated Brier Score (IBS) (Graf et al., 1999), Harrell's C-Index (Harrell et al., 1982), Uno's C-Index (Uno et al., 2011), censored D-calibration (CensDcal) (Haider et al., 2020), along with the slope and intercept derived from two versions of censored D-calibration (Cal[$S(t|x)$] (Slope), Cal[$S(t|x)$] (Intercept), Cal[$f(t|x)$] (Slope), and Cal[$f(t|x)$] (Intercept)). These metrics provide a holistic evaluation framework, effectively capturing the survival models' predictive accuracy, discriminative ability, and calibration quality.

• MAE:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} |y_i - \tilde{y}_i| \quad (36)$$

where $y_i$ represents the observed survival times, $\tilde{y}_i$ denotes the predicted survival times, and $N$ is the total number of data points in the test set.

• IBS:

$$\mathrm{BS}(t) = \frac{1}{N}\sum_{i=1}^{N}\left[ \frac{\left(1 - \tilde{F}(t|x_i)\right)^2 \mathbb{I}(y_i \le t, e_i = 1)}{\tilde{G}(y_i)} + \frac{\tilde{F}(t|x_i)^2\,\mathbb{I}(y_i > t)}{\tilde{G}(t)} \right] \quad (37)$$

$$\mathrm{IBS} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \mathrm{BS}(y)\,dy \quad (38)$$

where $\mathrm{BS}(t)$ represents the Brier score at time $t$, and 100 time points are evenly selected from the 0.1 to 0.9 quantiles of the $y$-distribution in the training set. $\tilde{F}(t|x_i)$ denotes the estimated cumulative distribution function of the survival time for test subjects, $\mathbb{I}(\cdot)$ is
the indicator function, and $e_i$ is the event indicator ($e_i = 1$ if the event is observed). $x_i$ represents the covariates, and $\tilde{G}(\cdot)$ refers to the Kaplan-Meier estimate (Kaplan & Meier, 1958) of the censoring survival function.

• Harrell's C-Index:

$$C_H = P(\phi_i > \phi_j \,|\, y_i < y_j, e_i = 1) = \frac{\sum_{i \ne j}\left[\mathbb{I}(\phi_i > \phi_j) + 0.5\,\mathbb{I}(\phi_i = \phi_j)\right]\mathbb{I}(y_i < y_j)\,e_i}{\sum_{i \ne j}\mathbb{I}(y_i < y_j)\,e_i} \quad (39)$$

where $\phi_i = \tilde{S}(y_i|x_i) = 1 - \tilde{F}(y_i|x_i)$ represents the risk score predicted by the survival model. For implementation, we utilize the concordance_index_censored function from the sksurv.metrics module, as documented in the scikit-survival API.

• Uno's C-Index:

$$C_U = P(\phi_i > \phi_j \,|\, y_i < y_j, y_i < y_\tau) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} G(y_i)^{-2}\left[\mathbb{I}(\phi_i > \phi_j) + 0.5\,\mathbb{I}(\phi_i = \phi_j)\right]\mathbb{I}(y_i < y_j, y_i < y_\tau)\,e_i}{\sum_{i=1}^{n}\sum_{j=1}^{n} G(y_i)^{-2}\,\mathbb{I}(y_i < y_j, y_i < y_\tau)\,e_i} \quad (40)$$

where $y_\tau$ is the cutoff value for the survival time. For implementation, we utilize the concordance_index_ipcw function from the sksurv.metrics module, as documented in the scikit-survival API.

• CensDcal:

$$\mathrm{CensDcal} = 100 \times \sum_{j=1}^{10}\left[(q_{j+1} - q_j) - \frac{1}{N}\zeta_j\right]^2, \quad (41)$$

where $\zeta_j$ is defined by (Goldstein et al., 2020) as:

$$\zeta_j = \sum_{i \in S_{\mathrm{observed}}} \mathbb{I}\left[\tilde{\theta}_{i,q_j} < y_i \le \tilde{\theta}_{i,q_{j+1}}\right] + \sum_{i \in S_{\mathrm{censored}}} \left( \frac{(q_{j+1} - q_i)\,\mathbb{I}\left[\tilde{\theta}_{i,q_j} < y_i \le \tilde{\theta}_{i,q_{j+1}}\right]}{1 - q_i} + \frac{(q_{j+1} - q_j)\,\mathbb{I}\left[q_i < q_j\right]}{1 - q_i} \right). \quad (42)$$

Here, the percentile parameter $q_j$ is predefined as $[0.1, 0.2, \ldots, 0.9]$ at the outset, and $q_i$ is the quantile at which the data point was censored ($e = 0$, $y = c$) with respect to the observed value distribution, i.e., $p(o < c\,|\,x)$. $\tilde{\theta}_{i,q_j}$ represents the estimated $q_j$-th quantile of $y_i$.

• Slope & Intercept: The Slope and Intercept metrics evaluate the calibration quality of predicted survival quantiles relative to observed data under censoring. We utilize the np.polyfit function from the NumPy module, as documented in the NumPy API, to fit the 10 points $\left\{\left(0.1 j,\; \sum_{j' \le j} \frac{1}{N}\zeta_{j'}\right)\right\}_{j=1}^{10}$ and subsequently obtain the Slope and Intercept metrics.
Two versions of the Slope and Intercept (Cal[$S(t|x)$] (Slope), Cal[$S(t|x)$] (Intercept), Cal[$f(t|x)$] (Slope), and Cal[$f(t|x)$] (Intercept)) are calculated, differing in how the quantile intervals are defined:

– Version 1 (Measuring $S(t|x)$): The predicted survival probabilities are divided into intervals based on the target proportions, i.e., $q = [0.1, 0.2, \ldots, 0.9, 1.0]$. For each quantile interval, the proportion of ground-truth values (observed survival times) that fall within the corresponding predicted quantile, $\frac{1}{N}\zeta$, is calculated. For example, the ratio for 0.1 ($j = 1$) is calculated within the interval $[0, 0.1]$, and for 0.2 ($j = 2$), within $[0, 0.2]$. Thus, the horizontal axis represents the target proportions $0.1 j$, while the vertical axis represents the observed proportions $\sum_j \frac{1}{N}\zeta_j$ derived from the predictions. This makes the metric suitable for evaluating the survival function $S(t|x)$ (or CDF $F(t|x)$).

– Version 2 (Measuring $f(t|x)$): Narrower intervals centered around the target proportions are used, i.e., $q = [\ldots, 0.4, 0.45, 0.55, 0.6, \ldots]$. For each quantile, the observed proportions are calculated within these narrower intervals. For example, the ratio for 0.1 is calculated within the interval $[0.45, 0.55]$, and for 0.2, within $[0.4, 0.6]$. This makes the metric ideal for assessing the probability density function (PDF) $f(t|x)$.

B.3. Implementation Details

Baselines. We compare our method against four baselines to evaluate performance and effectiveness: LogNorm (Royston, 2001), DeepSurv (Katzman et al., 2018), DeepHit (Lee et al., 2018), and CQRNN (Pearce et al., 2022). All methods were trained using the same optimization procedure and neural network architecture
to ensure a fair comparison. The implementations for CQRNN and LogNorm were sourced from the official CQRNN repository (GitHub Link). The implementations for DeepSurv and DeepHit were based on the pycox.methods module (GitHub Link).

Hyperparameter settings. All experiments were repeated across 10 random seeds to ensure robust and reliable results. The hyperparameter settings were as follows:

• Default Neural Network Architecture: Fully-connected network with two hidden layers, each consisting of 100 hidden nodes, using ReLU activations.
• Default Epochs: 200
• Default Batch Size: 128
• Default Learning Rate: 0.01
• Dropout Rate: 0.1
• Optimizer: Adam
• Batch Norm: FALSE

Our Method. We incorporate a residual connection between the shared feature extraction layer and the first hidden layer to enhance gradient flow. To satisfy the parameter constraints of the Asymmetric Laplace Distribution (ALD), the final output layer applies an exponential (Exp) activation function, ensuring that the outputs of the $\theta$, $\sigma$, and $\kappa$ branches remain positive. Each of the two hidden layers contains 32 hidden nodes. A validation set is created by splitting 20% of the training set. Early stopping is utilized to terminate training when the validation performance ceases to improve.

CQRNN. We followed the hyperparameter settings tuned in the original paper (Pearce et al., 2022), where three random splits were used for validation (ensuring no overlap with the random seeds used in the final test runs). The following settings were applied:

• Weight Decay: 0.0001
• Grid Size: 10
• Pseudo Value: $y^* = 1.2 \times \max_i y_i$
• Dropout Rate: 0.333

The number of epochs and dropout usage were adjusted based on the dataset type:

• Synthetic Datasets:
– Norm linear, Norm non-linear, Exponential, Weibull, LogNorm, Norm uniform: 100 epochs with dropout disabled.
– Norm heavy, Norm medium, Norm light, Norm same: 20 epochs with dropout disabled.
– LogNorm heavy, LogNorm medium, LogNorm light, LogNorm same: 10 epochs with dropout disabled. •Real-World Datasets: – METABRIC: 20 epochs with dropout disabled. – WHAS: 100 epochs with dropout disabled. – SUPPORT: 10 epochs with dropout disabled. – GBSG: 20 epochs with dropout enabled. – TMBImmuno: 50 epochs with dropout disabled. – BreastMSK: 100 epochs with dropout disabled. – LGGGBM: 50 epochs with dropout enabled. LogNorm. The output dimensions of the default neural network architecture are 2, where the two outputs represent the mean and standard deviation of a Log-Normal distribution. To ensure the standard deviation prediction is always positive and differentiable, the output representing the standard deviation is passed through a SoftPlus activation function. We followed the hyperparameter settings tuned in the original paper (Pearce et al., 2022), with a Dropout Rate of 0.333. The number of epochs and dropout usage were adjusted based on the dataset type as follows: •Synthetic Datasets: The same settings as described above for CQRNN . •Real-World Datasets: – METABRIC: 10 epochs with dropout disabled. – WHAS: 50 epochs with dropout disabled. – SUPPORT: 20 epochs with dropout disabled. – GBSG: 10 epochs with dropout enabled. – TMBImmuno: 50 epochs with dropout disabled. – BreastMSK: 50 epochs with dropout disabled. – LGGGBM: 20 epochs with dropout enabled. DeepSurv. We adhered to the official hyperparameter settings from the
pycox.methods module (GitHub Link). Each of the two hidden layers contains 32 hidden nodes. A validation set was created by splitting 20% of the training set. Early stopping was employed to terminate training when the validation performance ceased to improve. Batch normalization was applied.

DeepHit. We adhered to the official hyperparameter settings from the pycox.methods module (GitHub Link). Each of the two hidden layers contains 32 hidden nodes. A validation set was created by splitting 20% of the training set. Early stopping was employed to terminate training when the validation performance ceased to improve. Batch normalization was applied, with additional settings: num_durations = 100, alpha = 0.2, and sigma = 0.1.

C. Additional Results

This section presents additional results to provide a comprehensive evaluation. Figure 4 plots 9 distinct evaluation metrics, each presented with error bars for clarity, based on 10 model runs. Figure 5 illustrates the full results for the calibration linear fit. Figure 6 shows the statistical distribution of the SUPPORT dataset for both the training set and the test set. The histograms illustrate the count of observations across different y-values, with the blue lines representing density estimates. Table 4 provides the complete results for all datasets, methods, and metrics. Table 5 summarizes the results for all datasets, focusing on the ALD method (Mean, Median, Mode) across various metrics. Finally, Table 6 presents the 50th, 75th, and 95th percentiles of the CDF estimation for t = 0, F_ALD(0|x), under the Asymmetric Laplace Distribution.

Overall Results. Table 4 summarizes the full results across 21 datasets, comparing our method with 4 baselines across 9 metrics, and Figure 4 visualizes these results for a more intuitive comparison. In Table 4, the best performance is highlighted in bold.
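As a small illustration of the positivity constraint described under "Our Method" above (an Exp activation on the θ, σ, and κ output branches), the mapping from unconstrained network outputs to valid ALD parameters can be sketched as follows. This is a minimal sketch with hypothetical function and argument names, not the authors' code:

```python
import numpy as np

def ald_heads(raw_theta, raw_sigma, raw_kappa):
    """Map unconstrained network outputs to valid ALD parameters.

    Applying an exponential activation to each branch guarantees that
    theta, sigma, and kappa are strictly positive, as required by the
    parameter constraints of the Asymmetric Laplace Distribution.
    (Illustrative sketch; names are hypothetical.)
    """
    theta = np.exp(np.asarray(raw_theta))  # location branch
    sigma = np.exp(np.asarray(raw_sigma))  # scale branch, sigma > 0
    kappa = np.exp(np.asarray(raw_kappa))  # asymmetry branch, kappa > 0
    return theta, sigma, kappa
```

Any strictly positive, differentiable activation (e.g., SoftPlus, as used for the LogNorm baseline's standard deviation) would serve the same purpose; the setup above uses Exp.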
Figure 4 provides a graphical representation of nine distinct evaluation metrics to comprehensively assess predictive performance, including Mean Absolute Error (MAE), Integrated Brier Score (IBS), Harrell's C-Index, Uno's C-Index, Censored D-calibration (CensDcal), and the slope and intercept derived from two versions of censored D-calibration (Cal[S(t|x)](Slope), Cal[S(t|x)](Intercept), Cal[f(t|x)](Slope), and Cal[f(t|x)](Intercept)). Specifically, the following transformations were applied to enhance the clarity of the results:
• MAE and CensDcal were log-transformed to better illustrate their value distributions and differences.
• For Cal[S(t|x)](Slope) and Cal[f(t|x)](Slope), |1 − Cal[S(t|x)](Slope)| and |1 − Cal[f(t|x)](Slope)| were computed to measure their deviation from the ideal value of 1.
• For Cal[S(t|x)](Intercept) and Cal[f(t|x)](Intercept), |Cal[S(t|x)](Intercept)| and |Cal[f(t|x)](Intercept)| were computed to measure their deviation from the ideal value of 0.
These transformations allow for a more intuitive comparison of the performance differences across metrics and models. In the end, each subfigure in Figure 4 provides a comparison of its corresponding metric. The x-axis lists all the datasets, both synthetic datasets (e.g., Gaussian linear, exponential, etc.) and real-world datasets (e.g., METABRIC, WHAS, etc.), while the y-axis indicates the range of its corresponding metric, with error bars representing standard deviation across 10 model runs.

Calibration. Figure 5 illustrates the full results of the calibration linear fit, providing a more intuitive and graphical perspective of the calibration performance. The horizontal
axis represents the target proportions [0.1, 0.2, . . . , 0.9, 1.0], while the vertical axis denotes the observed proportions derived from the model predictions.

Figure 4. Performance on calibration metrics.

Table 4. Full results table for all datasets, methods, and metrics. The values represent the mean ± 1 standard error for the test set over 10 runs.

Dataset Method MAE IBS Harrell’s C-index Uno’s C-index CensDcal Cal [S(t|x)](Slope) Cal [S(t|x)](Intercept) Cal [f(t|x)](Slope) Cal [f(t|x)](Intercept) ald 0.865 ± 1.337 0.278 ± 0.008 0.653 ± 0.014 0.648 ± 0.011 0.407 ± 0.343 1.025 ± 0.016 0.005 ± 0.030 1.027 ± 0.042 -0.016 ± 0.037 CQRNN 0.278 ± 0.144 0.326 ± 0.034 0.657 ± 0.008 0.651 ± 0.007 0.466 ± 0.150 1.001 ± 0.062 -0.003 ± 0.026 1.007 ± 0.039 -0.020 ± 0.047 Norm linear LogNorm 0.372 ± 0.228 0.709 ± 0.028 0.652 ± 0.016 0.646 ± 0.014 0.496 ± 0.399 0.965 ± 0.024 0.005 ± 0.014 0.978 ± 0.041 0.014 ± 0.067 DeepSurv 0.239 ± 0.114 0.676 ± 0.026 0.657 ± 0.008 0.651 ± 0.007 0.139 ± 0.071 0.983 ± 0.018 0.015 ± 0.016 1.007 ± 0.018 -0.005 ± 0.014 DeepHit 1.481 ± 0.527 0.503 ± 0.025 0.635 ± 0.024 0.628 ± 0.025 6.540 ± 1.458 0.967 ± 0.036 0.098 ± 0.070 1.216 ± 0.051 -0.302 ± 0.029 ald 0.243 ± 0.080 0.212 ± 0.006 0.670 ± 0.015 0.644 ± 0.016 0.406 ± 0.179 1.072 ± 0.021 -0.011 ± 0.015 1.038 ± 0.025 -0.016 ± 0.040 CQRNN 0.117 ± 0.037 0.507 ± 0.026 0.674 ± 0.014 0.651 ± 0.014 0.241 ± 0.099 0.983 ± 0.026 0.002 ± 0.018 0.987 ± 0.012 0.011 ± 0.027 Norm nonlinear LogNorm 0.396 ± 0.432 0.560 ± 0.058 0.630 ± 0.087 0.617 ± 0.074 2.136 ± 3.886 1.003 ± 0.052 0.051 ± 0.054 1.097 ± 0.060 -0.098 ± 0.059 DeepSurv 0.197 ± 0.047 0.623 ± 0.013 0.670 ± 0.015 0.650 ± 0.014 0.196 ± 0.128 1.015 ± 0.019
0.007 ± 0.016 1.019 ± 0.022 -0.007 ± 0.026 DeepHit 1.099 ± 0.130 0.515 ± 0.049 0.610 ± 0.051 0.596 ± 0.040 3.886 ± 3.682 0.999 ± 0.064 -0.007 ± 0.061 1.064 ± 0.067 -0.161 ± 0.084 ald 0.473 ± 0.344 0.045 ± 0.002 0.785 ± 0.010 0.703 ± 0.019 0.115 ± 0.030 1.019 ± 0.020 0.002 ± 0.016 1.016 ± 0.015 -0.006 ± 0.021 CQRNN 0.301 ± 0.104 0.535 ± 0.015 0.786 ± 0.009 0.706 ± 0.015 0.162 ± 0.141 1.018 ± 0.033 -0.013 ± 0.013 1.002 ± 0.015 -0.007 ± 0.017 Norm uniform LogNorm 17.079 ± 5.833 0.387 ± 0.013 0.615 ± 0.118 0.578 ± 0.083 3.799 ± 0.354 0.951 ± 0.059 0.159 ± 0.043 1.186 ± 0.016 -0.129 ± 0.021 DeepSurv 0.627 ± 0.180 0.516 ± 0.009 0.781 ± 0.014 0.701 ± 0.020 0.466 ± 0.149 1.038 ± 0.017 0.022 ± 0.013 1.069 ± 0.012 -0.051 ± 0.016 DeepHit 1.468 ± 0.458 0.364
± 0.048 0.758 ± 0.033 0.688 ± 0.028 3.150 ± 1.142 1.015 ± 0.045 0.024 ± 0.047 1.128 ± 0.051 -0.209 ± 0.027 ald 2.942 ± 2.389 0.309 ± 0.018 0.560 ± 0.008 0.560 ± 0.007 0.432 ± 0.405 0.978 ± 0.047 -0.015 ± 0.014 0.964 ± 0.049 0.016 ± 0.053 CQRNN 1.943 ± 0.297 0.317 ± 0.013 0.558 ± 0.013 0.557 ± 0.011 0.305 ± 0.129 0.976 ± 0.066 0.012 ± 0.043 1.001 ± 0.027 -0.008 ± 0.019 Exponential LogNorm 3.223 ± 0.823 0.455 ± 0.010 0.527 ± 0.028 0.528 ± 0.028 0.419 ± 0.141 0.983 ± 0.026 0.042 ± 0.018 1.057 ± 0.022 -0.051 ± 0.021 DeepSurv 1.913 ± 0.269 0.486 ± 0.015 0.558 ± 0.007 0.558 ± 0.006 0.119 ± 0.066 0.986 ± 0.033 0.009 ± 0.022 1.003 ± 0.018 -0.008 ± 0.018 DeepHit 2.626 ± 2.759 0.471 ± 0.012 0.526 ± 0.032 0.526 ± 0.031 1.205 ± 1.060 0.960 ± 0.027 -0.012 ± 0.021 0.907 ± 0.055 0.127 ± 0.066 ald 5.135 ± 9.533 0.219 ± 0.028 0.767 ± 0.009 0.763 ± 0.009 0.648 ± 0.511 1.044 ± 0.023 -0.023 ± 0.033 0.993 ± 0.049 0.021 ± 0.060 CQRNN 0.350 ± 0.098 0.461 ± 0.030 0.775 ± 0.005 0.769 ± 0.005 0.346 ± 0.131 0.989 ± 0.057 -0.001 ± 0.042 0.995 ± 0.022 -0.003 ± 0.023 Weibull LogNorm 0.862 ± 0.121 0.840 ± 0.021 0.773 ± 0.006 0.767 ± 0.006 0.598 ± 0.172 0.993 ± 0.026 0.029 ± 0.016 1.050 ± 0.028 -0.053 ± 0.038 DeepSurv 0.381 ± 0.098 0.969 ± 0.019 0.772 ± 0.004 0.766 ± 0.006 0.118 ± 0.049 0.989 ± 0.023 0.006 ± 0.012 1.005 ± 0.018 -0.009 ± 0.021 DeepHit 1.975 ± 0.172 0.618 ± 0.032 0.769 ± 0.006 0.763 ± 0.007 3.020 ± 1.157 0.998 ± 0.058 0.122 ± 0.071 1.206 ± 0.056 -0.187 ± 0.069 ald 0.363 ± 0.068 0.376 ± 0.013 0.588 ± 0.014 0.585 ± 0.014 0.256 ± 0.150 1.005 ± 0.021 0.006 ± 0.011 1.011 ± 0.028 -0.004 ± 0.029 CQRNN 0.950 ± 0.091 0.407 ± 0.019 0.588 ± 0.016 0.584 ± 0.016 0.459 ± 0.220 1.024 ± 0.066 -0.019 ± 0.034 0.996 ± 0.042 0.000 ± 0.031 LogNorm LogNorm 0.267 ± 0.062 0.645 ± 0.021 0.588 ± 0.015 0.584 ± 0.015 0.103 ± 0.020 1.009 ± 0.012 0.006 ± 0.009 1.016 ± 0.015 -0.010 ± 0.017 DeepSurv 0.963 ± 0.058 0.658 ± 0.029 0.589 ± 0.016 0.586 ± 0.016 0.137 ± 0.049 0.996 ± 0.021 0.001 ± 0.020 0.997 ± 0.025 
0.002 ± 0.021 DeepHit 0.902 ± 0.504 0.568 ± 0.025 0.551 ± 0.032 0.548 ± 0.031 2.088 ± 1.666 0.988 ± 0.031 -0.026 ± 0.050 0.892 ± 0.072 0.162 ± 0.090 ald 0.667 ± 0.139 0.019 ± 0.001 0.919 ± 0.007 0.870 ± 0.029 0.036 ± 0.017 1.009 ± 0.005 -0.004 ± 0.004 1.001 ± 0.009 -0.002 ± 0.010 CQRNN 0.574 ± 0.031 0.538 ± 0.006 0.914 ± 0.008 0.863 ± 0.033 0.062 ± 0.099 1.000 ± 0.019 -0.002 ± 0.007 1.000 ± 0.012 -0.004 ± 0.011 Norm heavy LogNorm 33.140 ± 12.004 0.411 ± 0.014 0.781
± 0.071 0.679 ± 0.126 2.249 ± 0.490 1.122 ± 0.022 0.001 ± 0.029 1.111 ± 0.031 -0.074 ± 0.032 DeepSurv 1.662 ± 0.157 0.558 ± 0.007 0.726 ± 0.035 0.582 ± 0.056 0.577 ± 0.067 1.070 ± 0.006 -0.002 ± 0.009 1.065 ± 0.011 -0.065 ± 0.010 DeepHit 0.814 ± 0.104 0.475 ± 0.037 0.913 ± 0.009 0.856 ± 0.034 1.349 ± 0.374 1.051 ± 0.044 0.055 ± 0.035 1.139 ± 0.027 -0.121 ± 0.036 ald 0.238 ± 0.036 0.047 ± 0.003 0.894 ± 0.005 0.872 ± 0.004 0.157 ± 0.044 1.058 ± 0.012 -0.035 ± 0.011 0.997 ± 0.012 0.004 ± 0.014 CQRNN 0.312 ± 0.033 0.608 ± 0.010 0.888 ± 0.005 0.867 ± 0.005 0.097 ± 0.045 0.984 ± 0.026 0.001 ± 0.013 0.989 ± 0.019 0.007 ± 0.020 Norm med. LogNorm 7.300 ± 2.579 0.430 ± 0.019 0.810 ± 0.048 0.777 ± 0.048 8.192 ± 0.660 0.751 ± 0.073 0.350 ± 0.052 1.280 ± 0.021 -0.192 ± 0.036 DeepSurv 0.253 ± 0.026 0.722 ± 0.012 0.893 ± 0.004 0.871 ± 0.004 0.609 ± 0.111 1.054 ± 0.014 0.008 ± 0.015 1.061 ± 0.019 -0.051 ± 0.017 DeepHit 0.916 ± 0.077 0.576 ± 0.012 0.886 ± 0.006 0.863 ± 0.005 1.655 ± 0.409 1.056 ± 0.032 0.038 ± 0.026 1.130 ± 0.032 -0.130 ± 0.066 ald 0.236 ± 0.051 0.090 ± 0.007 0.882 ± 0.004 0.874 ± 0.004 0.339 ± 0.076 1.087 ± 0.017 -0.050 ± 0.011 0.999 ± 0.014 0.005 ± 0.021 CQRNN 0.271 ± 0.032 0.671 ± 0.013 0.879 ± 0.002 0.871 ± 0.002 0.149 ± 0.097 0.999 ± 0.027 -0.014 ± 0.017 0.985 ± 0.022 0.004 ± 0.027 Norm light LogNorm 3.152 ± 2.154 0.548 ± 0.023 0.832 ± 0.022 0.821 ± 0.022 12.884 ± 1.700 0.804 ± 0.162 0.358 ± 0.126 1.351 ± 0.018 -0.256 ± 0.038 DeepSurv 0.247 ± 0.016 0.941 ± 0.024 0.882 ± 0.002 0.874 ± 0.002 0.582 ± 0.127 1.038 ± 0.014 0.016 ± 0.019 1.057 ± 0.023 -0.040 ± 0.023 DeepHit 0.959 ± 0.051 0.691 ± 0.030 0.875 ± 0.004 0.867 ± 0.004 1.854 ± 0.461 1.044 ± 0.041 0.063 ± 0.022 1.159 ± 0.023 -0.157 ± 0.053 ald 0.405 ± 0.079 0.066 ± 0.003 0.890 ± 0.005 0.847 ± 0.008 0.114 ± 0.036 1.007 ± 0.014 0.006 ± 0.012 1.004 ± 0.018 0.010 ± 0.022 CQRNN 0.301 ± 0.024 0.568 ± 0.018 0.886 ± 0.006 0.841 ± 0.016 0.147 ± 0.109 0.988 ± 0.028 0.003 ± 0.014 0.999 ± 0.024 -0.007 ± 0.031 Norm same 
LogNorm 0.379 ± 0.202 0.770 ± 0.039 0.894 ± 0.005 0.850 ± 0.009 0.900 ± 0.801 0.994 ± 0.032 0.017 ± 0.030 1.036 ± 0.087 -0.048 ± 0.110 DeepSurv 0.254 ± 0.036 0.787 ± 0.015 0.889 ± 0.004 0.837 ± 0.025 0.227 ± 0.053 1.033 ± 0.014 -0.014 ± 0.011 1.010 ± 0.016 -0.010 ± 0.018 DeepHit 1.303 ± 0.132 0.572 ± 0.032 0.882 ± 0.006 0.832 ± 0.017 1.798 ± 0.770 1.041 ± 0.049 0.060 ± 0.047 1.142 ± 0.041 -0.124 ± 0.059 ald 0.385 ± 0.193 0.095 ± 0.006 0.777 ± 0.012
0.727 ± 0.021 0.043 ± 0.019 1.003 ± 0.014 -0.005 ± 0.005 0.998 ± 0.014 -0.003 ± 0.014 CQRNN 0.717 ± 0.027 0.436 ± 0.035 0.767 ± 0.009 0.718 ± 0.018 0.235 ± 0.104 0.992 ± 0.026 -0.007 ± 0.013 0.998 ± 0.032 -0.019 ± 0.035 LogNorm heavy LogNorm 0.755 ± 0.194 0.401 ± 0.012 0.643 ± 0.053 0.609 ± 0.046 0.066 ± 0.056 1.018 ± 0.012 -0.002 ± 0.005 1.011 ± 0.019 -0.003 ± 0.020 DeepSurv 0.842 ± 0.019 0.459 ± 0.013 0.497 ± 0.034 0.465 ± 0.029 0.102 ± 0.068 1.031 ± 0.008 -0.010 ± 0.007 1.015 ± 0.017 -0.018 ± 0.015 DeepHit 0.724 ± 0.020 0.402 ± 0.012 0.756 ± 0.012 0.712 ± 0.019 0.282 ± 0.121 1.036 ± 0.020 -0.012 ± 0.006 1.030 ± 0.014 -0.045 ± 0.018 ald 0.178 ± 0.046 0.174 ± 0.005 0.747 ± 0.004 0.718 ± 0.007 0.087 ± 0.052 1.008 ± 0.017 -0.004 ± 0.010 1.002 ± 0.009 -0.001 ± 0.012 CQRNN 0.540 ± 0.059 0.368 ± 0.053 0.746 ± 0.005 0.716 ± 0.006 0.376 ± 0.166 0.985 ± 0.069 -0.001 ± 0.037 0.994 ± 0.035 -0.006 ± 0.041 LogNorm med. LogNorm 0.549 ± 0.101 0.452 ± 0.012 0.694 ± 0.024 0.665 ± 0.021 0.085 ± 0.067 1.002 ± 0.016 0.007 ± 0.011 1.008 ± 0.022 0.000 ± 0.026 DeepSurv 0.654 ± 0.029 0.545 ± 0.015 0.638 ± 0.011 0.596 ± 0.011 0.138 ± 0.058 1.020 ± 0.017 -0.015 ± 0.009 0.994 ± 0.016 0.005 ± 0.018 DeepHit 0.600 ± 0.018 0.426 ± 0.010 0.729 ± 0.019 0.702 ± 0.015 0.344 ± 0.118 1.046 ± 0.018 -0.032 ± 0.017 0.986 ± 0.018 0.018 ± 0.020 ald 0.184 ± 0.035 0.310 ± 0.011 0.725 ± 0.007 0.713 ± 0.008 0.185 ± 0.095 0.985 ± 0.015 0.007 ± 0.009 1.001 ± 0.014 -0.001 ± 0.017 CQRNN 0.356 ± 0.073 0.418 ± 0.045 0.725 ± 0.007 0.714 ± 0.008 0.976 ± 0.602 0.988 ± 0.077 -0.012 ± 0.045 0.962 ± 0.071 0.044 ± 0.072 LogNorm light LogNorm 0.311 ± 0.022 0.794 ± 0.026 0.709 ± 0.009 0.698 ± 0.010 0.231 ± 0.170 0.972 ± 0.027 -0.007 ± 0.014 0.964 ± 0.035 0.029 ± 0.041 DeepSurv 0.403 ± 0.027 0.833 ± 0.025 0.715 ± 0.009 0.700 ± 0.011 0.211 ± 0.123 1.010 ± 0.017 -0.000 ± 0.012 1.004 ± 0.017 0.005 ± 0.018 DeepHit 0.581 ± 0.018 0.654 ± 0.017 0.702 ± 0.008 0.692 ± 0.008 0.253 ± 0.174 1.006 ± 0.030 -0.013 ± 0.016 0.974 ± 0.021 
0.042 ± 0.026 ald 0.191 ± 0.044 0.154 ± 0.006 0.739 ± 0.009 0.697 ± 0.008 0.076 ± 0.057 1.012 ± 0.011 -0.001 ± 0.008 1.009 ± 0.011 -0.005 ± 0.010 CQRNN 0.319 ± 0.079 0.300 ± 0.049 0.740 ± 0.008 0.698 ± 0.009 0.787 ± 0.336 0.986 ± 0.086 -0.003 ± 0.040 0.971 ± 0.058 0.041 ± 0.056 LogNorm same LogNorm 0.273 ± 0.068 0.528 ± 0.017 0.736 ± 0.012 0.695 ± 0.010 0.213 ± 0.117 0.972 ± 0.015 -0.006 ± 0.010 0.963 ± 0.028 0.033 ± 0.040 DeepSurv 0.362 ± 0.026 0.511 ± 0.012 0.743 ± 0.010
0.700 ± 0.007 0.138 ± 0.040 1.017 ± 0.013 -0.005 ± 0.012 1.004 ± 0.014 0.001 ± 0.013 DeepHit 0.560 ± 0.098 0.385 ± 0.022 0.652 ± 0.066 0.633 ± 0.047 1.265 ± 1.911 0.925 ± 0.071 -0.010 ± 0.011 0.925 ± 0.088 0.058 ± 0.108 ald 1.626 ± 0.194 0.245 ± 0.012 0.637 ± 0.021 0.633 ± 0.031 0.293 ± 0.125 1.001 ± 0.033 -0.011 ± 0.016 0.993 ± 0.028 -0.009 ± 0.024 CQRNN 0.998 ± 0.074 0.344 ± 0.027 0.632 ± 0.017 0.630 ± 0.033 0.641 ± 0.391 0.972 ± 0.048 0.007 ± 0.018 1.001 ± 0.042 -0.022 ± 0.051 METABRIC LogNorm 1.329 ± 0.041 0.526 ± 0.015 0.609 ± 0.019 0.613 ± 0.046 0.619 ± 0.247 0.964 ± 0.026 -0.024 ± 0.011 0.937 ± 0.017 0.040 ± 0.018 DeepSurv 0.981 ± 0.029 0.533 ± 0.019 0.645 ± 0.016 0.635 ± 0.035 0.159 ± 0.075 1.009 ± 0.011 -0.008 ± 0.013 1.003 ± 0.022 -0.010 ± 0.023 DeepHit 1.177 ± 0.065 0.462 ± 0.008 0.563 ± 0.040 0.577 ± 0.053 0.659 ± 0.212 1.070 ± 0.022 -0.036 ± 0.014 1.018 ± 0.015 -0.024 ± 0.029 ald 2.196 ± 0.612 0.134 ± 0.013 0.823 ± 0.016 0.824 ± 0.014 0.198 ± 0.094 0.972 ± 0.027 0.003 ± 0.016 0.981 ± 0.021 0.009 ± 0.023 CQRNN 0.798 ± 0.049 0.636 ± 0.018 0.838 ± 0.016 0.846 ± 0.016 0.564 ± 0.248 0.974 ± 0.060 0.002 ± 0.022 0.998 ± 0.057 -0.024 ± 0.053 WHAS LogNorm 1.976 ± 0.232 0.614 ± 0.025 0.600 ± 0.042 0.575 ± 0.039 0.584 ± 0.233 0.920 ± 0.032 0.041 ± 0.021 0.994 ± 0.032 -0.002 ± 0.033 DeepSurv 0.867 ± 0.050 0.699 ± 0.023 0.711 ± 0.014 0.637 ± 0.025 0.228 ± 0.101 0.997 ± 0.020 0.005 ± 0.018 1.012 ± 0.020 -0.019 ± 0.019 DeepHit 0.966 ± 0.077 0.604 ± 0.023 0.806 ± 0.018 0.811 ± 0.018 0.269 ± 0.172 0.963 ± 0.036 0.018 ± 0.022 1.008 ± 0.026 -0.015 ± 0.031 ald 1.121 ± 0.107 0.362 ± 0.013 0.568 ± 0.015 0.572 ± 0.015 2.197 ± 0.667 1.084 ± 0.043 -0.113 ± 0.023 0.900 ± 0.056 0.084 ± 0.046 CQRNN 0.659 ± 0.047 0.344 ± 0.007 0.612 ± 0.005 0.613 ± 0.006
0.724 ± 0.428 1.034 ± 0.066 -0.019 ± 0.034 0.992 ± 0.051 0.019 ± 0.049 SUPPORT LogNorm 1.311 ± 0.150 0.688 ± 0.020 0.597 ± 0.011 0.597 ± 0.011 2.792 ± 0.942 0.942 ± 0.040 -0.114 ± 0.008 0.769 ± 0.041 0.169 ± 0.042 DeepSurv 0.511 ± 0.021 0.629 ± 0.014 0.599 ± 0.008 0.597 ± 0.009 0.092 ± 0.036 0.989 ± 0.016 -0.003 ± 0.011 0.986 ± 0.012 0.006 ± 0.014 DeepHit 0.574 ± 0.034 0.530 ± 0.009 0.577 ± 0.008 0.582 ± 0.009 0.829 ± 0.213 0.891 ± 0.013 0.006 ± 0.008 0.909 ± 0.014 0.086 ± 0.021 ald 1.713 ± 0.208 0.279 ± 0.014 0.671 ± 0.013 0.665 ± 0.013 0.283 ± 0.106 1.000
± 0.035 -0.018 ± 0.016 0.977 ± 0.025 0.014 ± 0.034 CQRNN 0.865 ± 0.070 0.357 ± 0.021 0.680 ± 0.015 0.672 ± 0.014 0.573 ± 0.577 0.953 ± 0.043 -0.008 ± 0.016 0.967 ± 0.030 0.002 ± 0.040 GBSG LogNorm 1.469 ± 0.105 0.577 ± 0.015 0.660 ± 0.012 0.653 ± 0.012 0.817 ± 0.303 0.968 ± 0.025 -0.057 ± 0.011 0.886 ± 0.025 0.086 ± 0.035 DeepSurv 0.709 ± 0.036 0.569 ± 0.016 0.611 ± 0.017 0.602 ± 0.016 0.180 ± 0.126 1.002 ± 0.021 -0.003 ± 0.013 0.996 ± 0.013 0.004 ± 0.018 DeepHit 0.773 ± 0.037 0.495 ± 0.016 0.649 ± 0.016 0.644 ± 0.016 2.020 ± 1.450 0.967 ± 0.049 -0.045 ± 0.014 0.952 ± 0.031 -0.025 ± 0.016 ald 3.002 ± 1.497 0.245 ± 0.015 0.561 ± 0.037 0.547 ± 0.040 0.835 ± 0.604 1.053 ± 0.045 -0.038 ± 0.025 0.994 ± 0.021 0.004 ± 0.025 CQRNN 1.008 ± 0.053 0.272 ± 0.013 0.567 ± 0.022 0.557 ± 0.017 0.251 ± 0.123 0.967 ± 0.037 0.011 ± 0.026 0.988 ± 0.027 0.009 ± 0.020 TMBImmuno LogNorm 1.880 ± 0.156 0.420 ± 0.011 0.561 ± 0.028 0.557 ± 0.028 0.617 ± 0.196 0.949 ± 0.028 -0.027 ± 0.019 0.913 ± 0.025 0.066 ± 0.027 DeepSurv 0.948 ± 0.097 0.395 ± 0.012 0.543 ± 0.034 0.526 ± 0.039 0.246 ± 0.168 1.019 ± 0.030 -0.001 ± 0.023 1.009 ± 0.020 -0.003 ± 0.018 DeepHit 1.117 ± 0.141 0.400 ± 0.011 0.560 ± 0.023 0.554 ± 0.021 0.464 ± 0.214 0.963 ± 0.039 -0.018 ± 0.026 0.935 ± 0.020 0.058 ± 0.026 ald 2.593 ± 0.289 0.086 ± 0.008 0.617 ± 0.032 0.568 ± 0.036 0.066 ± 0.027 1.002 ± 0.019 -0.007 ± 0.010 0.993 ± 0.020 0.003 ± 0.021 CQRNN 1.864 ± 0.354 0.316 ± 0.035 0.599 ± 0.044 0.561 ± 0.036 0.172 ± 0.083 0.993 ± 0.036 -0.005 ± 0.013 0.990 ± 0.030 0.003 ± 0.034 BreastMSK LogNorm 6.675 ± 0.597 0.310 ± 0.015 0.610 ± 0.029 0.573 ± 0.046 0.208 ± 0.089 1.044 ± 0.010 -0.004 ± 0.009 1.031 ± 0.012 -0.023 ± 0.015 DeepSurv 1.639 ± 0.217 0.334 ± 0.018 0.614 ± 0.033 0.582 ± 0.049 0.212 ± 0.099 1.046 ± 0.024 -0.006 ± 0.011 1.036 ± 0.019 -0.036 ± 0.019 DeepHit 1.523 ± 0.076 0.303 ± 0.016 0.614 ± 0.036 0.563 ± 0.046 0.411 ± 0.213 1.062 ± 0.011 -0.021 ± 0.006 1.032 ± 0.011 -0.040 ± 0.014 ald 1.232 ± 0.325 0.108 ± 0.011 0.778 ± 
0.021 0.736 ± 0.030 0.450 ± 0.267 0.995 ± 0.047 0.003 ± 0.022 0.996 ± 0.038 0.009 ± 0.040 CQRNN 0.808 ± 0.197 0.375 ± 0.041 0.790 ± 0.024 0.754 ± 0.034 0.543 ± 0.273 0.989 ± 0.071 0.001 ± 0.037 0.990 ± 0.052 0.011 ± 0.058 LGGGBM LogNorm 1.191 ± 0.214 0.382 ± 0.017 0.795 ± 0.022 0.758 ± 0.037 0.327 ± 0.190 1.005 ± 0.025 0.007 ± 0.026 1.020 ± 0.040 -0.018 ± 0.044 DeepSurv 0.785 ± 0.155 0.472 ± 0.024 0.728 ± 0.057 0.664 ± 0.079 0.481 ± 0.219 1.022 ± 0.027 0.002 ±
0.025 1.018 ± 0.040 -0.012 ± 0.046 DeepHit 2.062 ± 0.285 0.377 ± 0.024 0.769 ± 0.022 0.734 ± 0.035 1.176 ± 0.539 1.085 ± 0.034 -0.052 ± 0.023 0.968 ± 0.035 0.066 ± 0.037

Figure 5. Calibration Linear Fit. The blue and orange lines represent the curves for Cal [S(t|x)] and Cal [f(t|x)], respectively. The gray dashed line represents the idealized result where the slope is one and the intercept is zero.

Figure 6. Distribution of the SUPPORT dataset for the training set and test set.

Table 5. Full results table for all datasets, the ALD method (Mean, Median, Mode), and metrics. The values represent the mean ± 1 standard error on the test set over 10 runs.
Dataset Method MAE IBS Harrell’s C-index Uno’s C-index CensDcal Cal [S(t|x)](Slope) Cal [S(t|x)](Intercept) Cal [f(t|x)](Slope) Cal [f(t|x)](Intercept) ald (Mean) 0.865 ± 1.336 0.653 ± 0.014 0.648 ± 0.011 Norm linear ald (Median) 0.217 ± 0.037 0.278 ± 0.008 0.654 ± 0.012 0.682 ± 0.037 0.407 ± 0.343 1.027 ± 0.042 -0.016 ± 0.037 1.025 ± 0.016 0.005 ± 0.030 ald (Mode) 0.689 ± 0.186 0.657 ± 0.008 0.718 ± 0.006 ald (Mean) 0.243 ± 0.080 0.670 ± 0.015 0.644 ± 0.016 Norm non-lin ald (Median) 0.253 ± 0.073 0.212 ± 0.006 0.667 ± 0.015 0.582 ± 0.029 0.406 ± 0.179 1.038 ± 0.025 -0.016 ± 0.040 1.072 ± 0.021 -0.011 ± 0.015 ald (Mode) 0.438 ± 0.089 0.632 ± 0.060 0.573 ± 0.054 ald (Mean) 0.473 ± 0.344 0.785 ± 0.010 0.703 ± 0.020 Norm uniform ald (Median) 0.392 ± 0.196 0.045 ± 0.002 0.785 ± 0.011 0.696 ± 0.013 0.115 ± 0.030 1.016 ± 0.014 -0.006 ± 0.021 1.019 ± 0.020 0.002 ± 0.016 ald (Mode) 0.613 ± 0.118 0.788 ± 0.012 0.696 ± 0.014 ald (Mean) 2.942 ± 2.389 0.560 ± 0.008 0.560 ± 0.007 Exponential ald (Median) 1.088 ± 0.308 0.309 ± 0.018 0.559 ± 0.010 0.553 ± 0.020 0.432 ± 0.405 0.964 ± 0.049 0.016 ± 0.053 0.978 ± 0.047 -0.015 ± 0.014 ald (Mode) 5.009 ± 0.235 0.556 ± 0.011 0.555 ± 0.020 ald (Mean) 5.134 ± 9.533 0.768 ± 0.009 0.763 ± 0.010 Weibull ald (Median) 0.484 ± 0.059 0.219 ± 0.028 0.767 ± 0.006 0.691 ± 0.023 0.648 ± 0.511 0.993 ± 0.049 0.021 ± 0.060 1.044 ± 0.023 -0.023 ± 0.033 ald (Mode) 1.163 ± 0.340 0.750 ± 0.008 0.689 ± 0.023 ald (Mean) 0.363 ± 0.068 0.588 ± 0.014 0.585 ± 0.014 LogNorm ald (Median) 0.533 ± 0.097 0.376 ± 0.013 0.589 ± 0.015 0.510 ± 0.023 0.256 ± 0.150 1.011 ± 0.028 -0.004 ± 0.029 1.005 ± 0.021 0.006 ± 0.011 ald (Mode) 1.733 ± 0.190 0.549 ± 0.043 0.496 ± 0.020 ald (Mean) 0.667 ± 0.139 0.919 ± 0.007 0.870 ± 0.029 Norm heavy ald (Median) 0.454 ± 0.081
0.019 ± 0.001 0.916 ± 0.009 0.802 ± 0.008 0.256 ± 0.150 1.011 ± 0.028 -0.004 ± 0.029 1.005 ± 0.021 0.006 ± 0.011 ald (Mode) 0.627 ± 0.072 0.911 ± 0.012 0.802 ± 0.008 ald (Mean) 0.238 ± 0.036 0.894 ± 0.005 0.872 ± 0.004 Norm med. ald (Median) 0.298 ± 0.036 0.047 ± 0.003 0.889 ± 0.006 0.868 ± 0.011 0.157 ± 0.044 0.997 ± 0.012 0.004 ± 0.014 1.058 ± 0.012 -0.036 ± 0.011 ald (Mode) 0.388 ± 0.047 0.884 ± 0.007 0.849 ± 0.011 ald (Mean) 0.236 ± 0.051 0.882 ± 0.004 0.874 ± 0.004 Norm light ald (Median) 0.255 ± 0.016 0.090 ± 0.007 0.880 ± 0.003 0.853 ± 0.017 0.339 ± 0.076 0.998 ± 0.014 0.005 ± 0.021 1.087 ± 0.017 -0.050 ± 0.011 ald (Mode) 0.328 ± 0.029 0.876 ± 0.003 0.850 ± 0.017 ald (Mean) 0.404 ± 0.078 0.890 ± 0.005 0.847 ± 0.008 Norm same ald (Median) 0.281 ± 0.022 0.066 ± 0.003 0.888 ± 0.006 0.886 ± 0.004 0.114 ± 0.036 1.004 ± 0.018 0.010 ± 0.022 1.007 ± 0.014 0.006 ± 0.012 ald (Mode) 0.518 ± 0.065 0.881 ± 0.008 0.880 ± 0.004 ald (Mean) 0.385 ± 0.193 0.777 ± 0.012 0.727 ± 0.022 LogNorm heavy ald (Median) 0.244 ± 0.042 0.095 ± 0.006 0.779 ± 0.011 0.749 ± 0.011 0.043 ± 0.019 0.998 ± 0.014 -0.002 ± 0.014 1.003 ± 0.014 -0.005 ± 0.005 ald (Mode) 0.898 ± 0.045 0.756 ± 0.029 0.724 ± 0.012 ald (Mean) 0.178 ± 0.046 0.747 ± 0.004 0.718 ± 0.007 LogNorm med. 
ald (Median) 0.247 ± 0.024 0.174 ± 0.006 0.748 ± 0.004 0.749 ± 0.013 0.087 ± 0.052 1.002 ± 0.009 -0.001 ± 0.012 1.008 ± 0.017 -0.004 ± 0.010 ald (Mode) 0.896 ± 0.082 0.723 ± 0.013 0.709 ± 0.012 ald (Mean) 0.184 ± 0.035 0.725 ± 0.007 0.713 ± 0.008 LogNorm light ald (Median) 0.221 ± 0.064 0.310 ± 0.011 0.725 ± 0.007 0.696 ± 0.020 0.185 ± 0.095 1.001 ± 0.014 -0.001 ± 0.016 0.985 ± 0.015 0.008 ± 0.009 ald (Mode) 0.921 ± 0.053 0.702 ± 0.014 0.697 ± 0.016 ald (Mean) 0.191 ± 0.044 0.739 ± 0.009 0.697 ± 0.008 LogNorm same ald (Median) 0.259 ± 0.062 0.154 ± 0.006 0.740 ± 0.010 0.751 ± 0.014 0.076 ± 0.057 1.009 ± 0.011 -0.005 ± 0.010 1.012 ± 0.011 -0.001 ± 0.008 ald (Mode) 0.943 ± 0.043 0.710 ± 0.007 0.715 ± 0.014 ald (Mean) 1.626 ± 0.194 0.637 ± 0.021 0.633 ± 0.031 METABRIC ald (Median) 1.123 ± 0.088 0.245 ± 0.012 0.640 ± 0.018 0.588 ± 0.031 0.293 ± 0.125 0.993 ± 0.028 -0.008 ± 0.024 1.001 ± 0.033 -0.012 ± 0.016 ald (Mode) 0.856 ± 0.039 0.605 ± 0.021 0.547 ± 0.018 ald (Mean) 2.196 ± 0.612 0.823 ± 0.016 0.824 ± 0.014 WHAS ald (Median) 1.118 ± 0.152 0.134 ± 0.013 0.784 ± 0.043 0.765 ± 0.017 0.198 ± 0.094 0.981 ± 0.021 0.009 ± 0.023 0.972 ± 0.027 0.003 ± 0.016 ald (Mode) 0.916 ± 0.101 0.802 ± 0.018 0.806 ± 0.022 ald (Mean)
1.121 ± 0.107 0.568 ± 0.015 0.572 ± 0.015 SUPPORT ald (Median) 0.856 ± 0.062 0.362 ± 0.013 0.572 ± 0.015 0.561 ± 0.015 2.197 ± 0.667 0.900 ± 0.056 0.084 ± 0.046 1.084 ± 0.043 -0.113 ± 0.023 ald (Mode) 0.421 ± 0.051 0.532 ± 0.016 0.522 ± 0.044 ald (Mean) 1.713 ± 0.208 0.671 ± 0.014 0.665 ± 0.013 GBSG ald (Median) 1.161 ± 0.094 0.278 ± 0.014 0.672 ± 0.010 0.590 ± 0.035 0.283 ± 0.106 0.977 ± 0.025 0.014 ± 0.034 1.000 ± 0.035 -0.018 ± 0.016 ald (Mode) 0.664 ± 0.072 0.657 ± 0.023 0.554 ± 0.062 ald (Mean) 3.002 ± 1.497 0.561 ± 0.037 0.547 ± 0.040 TMBImmuno ald (Median) 1.085 ± 0.191 0.245 ± 0.015 0.562 ± 0.032 0.548 ± 0.030 0.835 ± 0.604 0.994 ± 0.021 0.004 ± 0.025 1.053 ± 0.045 -0.038 ± 0.025 ald (Mode) 0.609 ± 0.069 0.546 ± 0.024 0.531 ± 0.025 ald (Mean) 2.593 ± 0.289 0.617 ± 0.032 0.568 ± 0.036 BreastMSK ald (Median) 1.116 ± 0.394 0.086 ± 0.008 0.457 ± 0.068 0.538 ± 0.083 0.066 ± 0.027 0.993 ± 0.020 0.003 ± 0.021 1.002 ± 0.019 -0.007 ± 0.010 ald (Mode) 0.686 ± 0.077 0.591 ± 0.071 0.515 ± 0.090 ald (Mean) 1.232 ± 0.325 0.778 ± 0.021 0.736 ± 0.030 LGGGBM ald (Median) 0.846 ± 0.239 0.108 ± 0.011 0.785 ± 0.030 0.750 ± 0.043 0.450 ± 0.267 0.996 ± 0.038 0.008 ± 0.040 0.995 ± 0.047 0.003 ± 0.022 ald (Mode) 0.497 ± 0.100 0.777 ± 0.023 0.739 ± 0.058

Table 6. The 50th, 75th and 95th percentiles of the CDF estimation for t = 0, F_ALD(0|x), under the Asymmetric Laplace Distribution.
Dataset 50th Percentile 75th Percentile 95th Percentile
Norm linear 0.0001 0.0007 0.0018
Norm non-linear 1.9878e-06 0.0001 0.0007
Norm uniform 2.9879e-05 0.0028 0.0124
Exponential 0.0194 0.0665 0.1204
Weibull 0.0015 0.0032 0.0046
LogNorm 0.0031 0.0109 0.0134
Norm heavy 1.1804e-06 2.6128e-05 0.0007
Norm med 4.2222e-06 3.5778e-05 0.0004
Norm light 1.1978e-05 0.0001 0.0009
Norm same 7.8051e-07 4.8624e-06 0.0001
LogNorm heavy 0.0001 0.0014 0.0142
LogNorm med 0.0001 0.0007 0.0082
LogNorm light 0.0004 0.0024 0.0150
LogNorm same 0.0004 0.0021 0.0123
METABRIC 0.0068 0.0123 0.0292
WHAS 0.0046 0.0151 0.0507
SUPPORT 0.0957 0.1393 0.2035
GBSG 0.0248 0.0394 0.0668
TMBImmuno 0.0523 0.0681 0.0878
BreastMSK 0.0006 0.0008 0.0130
LGGGBM 0.0570 0.0842 0.1356
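Table 6 reports F_ALD(0|x), the probability mass the fitted distribution places below t = 0 (small values indicate that little mass is wasted on negative survival times). As a sketch of how such a quantity is computed, the ALD CDF under the common quantile-regression parametrization is shown below; this convention (location θ, scale σ > 0, asymmetry κ ∈ (0, 1) with P(T ≤ θ) = κ) is an assumption here and may differ from the paper's exact parametrization:

```python
import math

def ald_cdf(t, theta, sigma, kappa):
    """CDF of an Asymmetric Laplace distribution.

    Quantile-regression parametrization (an assumption, one of several
    conventions in the literature): location theta, scale sigma > 0,
    asymmetry kappa in (0, 1), with P(T <= theta) = kappa.
    """
    z = (t - theta) / sigma
    if t <= theta:
        return kappa * math.exp((1.0 - kappa) * z)
    return 1.0 - (1.0 - kappa) * math.exp(-kappa * z)

# Probability mass below t = 0, the quantity tabulated above
# (theta, sigma, kappa values are illustrative):
mass_below_zero = ald_cdf(0.0, theta=2.0, sigma=0.5, kappa=0.5)
```

With a positive location and moderate scale, this mass is small, matching the magnitudes reported in Table 6 for the well-behaved datasets.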
A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions

Danial Davarnia (Iowa State University)
Mohammadreza Kiaghadi (Iowa State University)

Abstract

Optimization problems with norm-bounding constraints appear in various applications, from portfolio optimization to machine learning, feature selection, and beyond. A widely used variant of these problems relaxes the norm-bounding constraint through Lagrangian relaxation and moves it to the objective function as a form of penalty or regularization term. A challenging class of these models uses the zero-norm function to induce sparsity in statistical parameter estimation models. Most existing exact solution methods for these problems use additional binary variables together with artificial bounds on variables to formulate them as a mixed-integer program in a higher dimension, which is then solved by off-the-shelf solvers. Other exact methods utilize specific structural properties of the objective function to solve certain variants of these problems, making them non-generalizable to other problems with different structures. An alternative approach employs nonconvex penalties with desirable statistical properties, which are solved using heuristic or local methods due to the structural complexity of those terms. In this paper, we develop a novel graph-based method to globally solve optimization problems that contain a generalization of norm-bounding constraints. This includes standard ℓp-norms for p ∈ [0, ∞) as well as nonconvex penalty terms, such as SCAD and MCP, as special cases. Our method uses decision diagrams to build strong convex relaxations for these constraints in the original space of variables without the need to introduce additional auxiliary variables or impose artificial variable bounds. We show that the resulting convexification method, when incorporated into a spatial branch-and-cut framework, converges to the global optimal value of the problem. To demonstrate the capabilities of the proposed framework, we conduct preliminary computational experiments on benchmark sparse linear regression problems with challenging nonconvex penalty terms that cannot be modeled or solved by existing global solvers.

Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS) 2025, Mai Khao, Thailand. PMLR: Volume 258. Copyright 2025 by the author(s).

1 INTRODUCTION

Norm-bounding constraints are often used in optimization problems to improve model stability and guide the search towards solutions with desirable properties. For example, in machine learning, norm-bounding constraints are imposed as a form of regularization to reduce overfitting and induce sparsity in feature selection models (Hastie et al., 2009). Other uses of these constraints include improving the numerical stability of optimization algorithms (Nocedal and Wright, 2006), controlling the complexity of the solution space (Bertsekas, 1999), and enhancing the interpretability of parameter estimation (Zou and Hastie, 2005). These constraints have also played a critical role in shaping the statistical properties of solution estimators for parametric stochastic optimization problems under both the frequentist (Davarnia and Cornuejols, 2017) and Bayesian (Davarnia et al., 2020) frameworks. While norm-bounding constraints defined by ℓp-norm for p ≥ 1 fall within the class of convex programs, making them amenable to solution techniques from numerical optimization (Nocedal
https://arxiv.org/abs/2505.03899v1
and Wright, 2006) and convex optimization (Bertsekas, 2015) literature, constraints involving ℓp-norm for p ∈ [0, 1) or their nonconvex proxies belong to the class of mixed-integer nonlinear (nonconvex) programs (MINLPs), posing greater challenges for global solution approaches. The challenge to solve problems with ℓp-norm for p ∈ (0, 1) stems from the nonconvexity of this function, which necessitates the addition of new constraints and auxiliary variables to decompose the function into smaller terms with simpler structures. These terms are convexified separately through the so-called factorable decomposition, the predominant convexification technique in existing global solvers; see Bonami et al. (2012); Khajavirad et al. (2014) for a detailed account. In contrast, the challenge in solving problems with ℓ0-norm, also referred to as the best-subset selection problem, is due to the discontinuity of the norm function, which necessitates the introduction of new binary variables and (often) artificial bounds on variables to reformulate it as a mixed-integer program (MIP) that can be solved by existing MIP solvers (Bertsimas et al., 2016; Dedieu et al., 2021). Other exact methods for solving these problems are highly tailored to exploit the specific structural properties of the objective function, which makes them often unsuitable for problems with different structures (Bertsimas and Parys, 2020; Hazimeh et al., 2022; Xie and Deng, 2020). One of the most common applications of the best-subset selection problem is in sparse parameter estimation, where the goal is to limit the number of nonzero parameters (Hastie et al., 2015; Pilanci et al., 2015).
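To make the best-subset selection problem concrete, the following sketch solves min_b ||y − Xb||² subject to ||b||₀ ≤ k exactly by enumerating candidate supports. This is an illustration under the stated assumptions, not any of the cited methods: the enumeration is exponential in the number of features, which is precisely why MIP reformulations with binary indicators, or tailored global methods, are used in practice:

```python
import itertools

import numpy as np

def best_subset_ls(X, y, k):
    """Exact best-subset selection by brute-force support enumeration.

    Minimizes ||y - X b||^2 over coefficient vectors b with at most k
    nonzero entries.  Only viable for small feature counts; serves to
    illustrate the combinatorial structure of the l0-norm constraint.
    """
    n, p = X.shape
    best_rss, best_beta = np.inf, np.zeros(p)
    for size in range(k + 1):
        for support in itertools.combinations(range(p), size):
            beta = np.zeros(p)
            cols = list(support)
            if cols:
                # Least-squares fit restricted to the chosen support.
                beta[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            rss = float(np.sum((y - X @ beta) ** 2))
            if rss < best_rss:
                best_rss, best_beta = rss, beta
    return best_beta, best_rss
```

The MIP route replaces this enumeration with binary variables z_j and big-M constraints |b_j| ≤ M z_j, Σ z_j ≤ k, which is where the artificial variable bounds mentioned above enter.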
Other applications of problems with ℓ0-norm constraints include compressive sensing, metabolic engineering, and portfolio selection, among others; see Jain (2010) and references therein for an exposition on these applications. Perhaps the most challenging class of these problems, from a global optimization perspective, involves nonconvex penalty functions that exhibit desirable statistical properties, such as variable selection consistency and unbiasedness (Fan and Li, 2001; Zhang and Zhang, 2012). The complexity of these problems is partly attributed to the algebraic form of their penalty functions, which are often difficult to model as standard mathematical programs that can be solved by a global solver. For example, the smoothly clipped absolute deviation (SCAD) (Fan and Li, 2001) and minimax concave penalty (MCP) (Zhang, 2010) functions include integration in their definition, an operator that is inadmissible in existing global solvers such as BARON (Tawarmalani and Sahinidis, 2005). As a result, the existing optimization approaches for these problems mainly consist of heuristic and local methods (Mazumder et al., 2011; Zou and Li, 2008). Various works in the literature have studied the favorable properties of global solutions of these statistical models, which are often not achievable by local methods or heuristic approaches (Fletcher et al., 2009; Hazimeh and Mazumder, 2020; Liu et al., 2016; Wainwright, 2009). This advocates for the need to develop global optimization methods that, despite being computationally less efficient and scalable compared to heuristic or local methods, can provide deeper insights
into the true statistical characteristics of optimal estimators across various models. Consequently, these insights can facilitate the development of new models endowed with more desirable properties; see the discussion in Fan and Li (2001) for an example of the process to design a new model.

In this paper, we introduce a novel global optimization framework for a generalized class of norm-bounding constraints based on the concept of decision diagrams (DDs), which are special-structured graphs that draw out latent variable interactions in the constraints of MINLP models. We refer the reader to van Hoeve (2024) for an introduction to DDs, and to Castro et al. (2022) for a recent survey on DD-based optimization. One of the most prominent advantages of DD-based solution methods over alternative approaches is the ability of DDs to model complex constraint forms (nonconvex, nonsmooth, black-box, etc.) while capturing their underlying data structure through a graphical representation. In this work, we exploit these properties of DDs to design a framework to globally solve optimization problems that include norm-bounding constraints, which encompass a variety of challenging nonconvex structures. The main contribution of this paper is the development of a global solution framework through the lens of DDs, which has the following advantages over existing global optimization tools and techniques.

(i) Our framework is applicable to a generalized definition of norm functions that includes the ℓp-norm with p ∈ [0, ∞), as well as nonconvex penalty terms, such as SCAD and MCP, as special cases.

(ii) The devised method guarantees convergence to a global optimal solution of the underlying optimization model under mild conditions.
(iii) The proposed approach provides a unified framework to handle different norm and penalty forms, ranging from convex to nonconvex to discontinuous, unlike existing techniques that employ a different tailored approach to model and solve each norm and penalty form.

(iv) Our framework models the norm-bounding constraints and solves the associated optimization problem in the original space of variables without using additional auxiliary variables, unlike conventional approaches in the literature and global solvers that require the introduction of new auxiliary variables for ℓp-norms with p ∈ [0, 1).

Danial Davarnia, Mohammadreza Kiaghadi

(v) Our approach can model and solve nonconvex penalty functions that contain irregular operators, such as the integral in the SCAD and MCP functions, which are considered intractable in state-of-the-art global solvers, thereby providing the first general-purpose global solution framework for such structures.

(vi) The developed framework does not require artificially large bounds on the variables, which are commonly imposed when modeling the ℓ0-norm as a MIP.

(vii) Our algorithms can be easily incorporated into solution methods for optimization problems with a general form of objective function and constraints, whereas the majority of existing frameworks designed for the ℓ0-norm or nonconvex penalty terms rely heavily on problem-specific properties of the objective function and thus are not generalizable to problems with different objective functions and constraints.

(viii) Our approach can be used to model the regularized
variant of problems in which the norm functions are moved to the objective function through Lagrangian relaxation and treated as a weighted penalty.

Notation. Vectors of dimension n ∈ N are denoted by bold letters such as x, and the non-negative orthant in dimension n is referred to as R^n_+. We define [n] := {1, 2, . . . , n}. We refer to the convex hull of a set P ⊆ R^n by conv(P). For a sequence {t_1, t_2, . . .} of real-valued numbers, we refer to its limit inferior as lim inf_{n→∞} t_n. For a sequence {P_1, P_2, . . .} of monotone non-increasing sets in R^n, i.e., P_1 ⊇ P_2 ⊇ . . ., we denote by {P_j} ↓ P̄ the fact that this sequence converges (in the Hausdorff sense) to the set P̄ ⊆ R^n. For x ∈ R, we define (x)_+ := max{0, x}. We refer the reader to the Appendix for all proofs that have been omitted from the main manuscript.

2 BACKGROUND ON DDs

In this section, we present basic definitions and results relevant to our DD analysis. A DD D is a directed acyclic graph denoted by the triple (U, A, l), where U is a node set, A is an arc set, and l : A → R is an arc label mapping for the graph components. This DD is composed of n ∈ N arc layers A_1, A_2, . . . , A_n and n + 1 node layers U_1, U_2, . . . , U_{n+1}. The node layers U_1 and U_{n+1} contain the root r and the terminal t, respectively. In any arc layer j ∈ [n], an arc (u, v) ∈ A_j is directed from the tail node u ∈ U_j to the head node v ∈ U_{j+1}. The width of D is defined as the maximum number of nodes in any node layer U_j. DDs have been traditionally used to model a bounded integer set P ⊆ Z^n such that each r–t arc sequence (path) of the form (a_1, . . . , a_n) ∈ A_1 × . . . × A_n encodes a point x ∈ P where l(a_j) = x_j for j ∈ [n]; that is, x is an n-dimensional point in P whose j-th coordinate is equal to the label value l(a_j) of the arc a_j. For such a DD, we have P = Sol(D), where Sol(D) represents the set of all points encoded by the r–t paths of D. The graphical structure of DDs can be exploited to optimize an objective function over a discrete set P.
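DD-based optimization assigns arc weights so that the weight of an r–t path equals the objective value of the point it encodes, after which a shortest or longest path yields an optimizer. A minimal Python sketch (Python rather than the paper's Julia implementation; the set, objective weights, and state definition below are invented for illustration):

```python
# Toy layered DD for P = {x in {0,1}^3 : x1 + x2 + x3 <= 1}: each r-t path
# encodes one point of P, and a layer-by-layer DP finds a longest path,
# i.e. a maximizer of c.x over P. Nodes are merged by equal state (partial sum).
n, cap = 3, 1
c = [2.0, 5.0, 3.0]                 # hypothetical objective weights

layers = [{0}]                      # layer 1 holds the root state 0
arcs = []                           # (layer, tail_state, head_state, label)
for j in range(n):
    nxt = set()
    for s in layers[-1]:
        for lab in (0, 1):
            if s + lab <= cap:      # prune states violating the constraint
                arcs.append((j, s, s + lab, lab))
                nxt.add(s + lab)
    layers.append(nxt)

# Longest-path DP: the arc with label lab in layer j carries weight c[j]*lab,
# so a path's total weight equals the objective value of its encoded point.
best = {s: (0.0, []) for s in layers[0]}
for j in range(n):
    nxt = {}
    for (jj, u, v, lab) in arcs:
        if jj != j:
            continue
        val = best[u][0] + c[j] * lab
        if v not in nxt or val > nxt[v][0]:
            nxt[v] = (val, best[u][1] + [lab])
    best = nxt

opt_val, opt_x = max(best.values())
print(opt_val, opt_x)   # the single coordinate with the largest weight is set
```

Merging nodes with equal state is what keeps the DD compact here; the same idea underlies the width-limited relaxations discussed below.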
To this end, DD arcs are weighted in such a way that the weight of an r–t path encoding a solution x ∈ P, obtained by summing the weights of its arcs, equals the objective function value evaluated at x. Then, a shortest (resp. longest) r–t path for the underlying minimization (resp. maximization) problem is found through a polynomial-time algorithm to obtain an optimal solution of the underlying integer program. If there is a one-to-one correspondence between the r–t paths of the DD and the discrete set P, we say that the DD is exact. The construction of an exact DD is generally computationally prohibitive due to the exponential growth rate of its size. To address this difficulty, the concepts of relaxed and restricted DDs were developed in
the literature to keep the size of DDs manageable. In a relaxed DD, if the number of nodes in a layer exceeds a predetermined width limit, a subset of nodes is merged into one node to reduce the number of nodes in that layer and thereby satisfy the width limit. This node-merging operation is performed in such a way that all feasible solutions of the exact DD are maintained, while some new infeasible solutions might be created during the merging process. Optimization over this relaxed DD provides a dual bound on the optimal value of the original integer program. In a restricted DD, the collection of all r–t paths of the DD encodes a subset of the feasible solutions of the exact DD that satisfies the prescribed width limit. Optimization over this restricted DD provides a primal bound on the optimal value of the original integer program. The restricted and relaxed DDs can be successively refined through a branch-and-bound framework until their primal and dual bounds converge to the optimal value of the integer program.

As outlined above, DDs have been traditionally used to model and solve discrete optimization problems. Recently, through a series of works (Davarnia, 2021; Salemi and Davarnia, 2022, 2023; Davarnia and Kiaghadi, 2025), the extension of DD-based optimization to mixed-integer programs was proposed, together with applications in new domains, from energy systems to transportation, that include a mixture of discrete and continuous variables. In this paper, we make use of some of the methods developed in those works to build a global solution framework for optimization problems with norm-bounding constraints.

3 SCALE FUNCTION

In this section, we introduce the scale function as a generalization of well-known norm functions.

Definition 1.
For x ∈ R^n, define the "scale" function η(x) = Σ_{i∈[n]} η_i(x_i), where each η_i : R → R_+ is a real-valued univariate function such that

(i) η_i(x) = 0 if and only if x = 0,

(ii) η_i(x_1) ≤ η_i(x_2) for 0 ≤ x_1 ≤ x_2 (monotone non-decreasing on the non-negative axis),

(iii) η_i(x_1) ≥ η_i(x_2) for x_1 ≤ x_2 ≤ 0 (monotone non-increasing on the non-positive axis).

The definition of the scale function η(x) does not impose any assumption on the convexity, smoothness, or even continuity of the terms involved in the function, leading to a broad class of possible functional forms. In fact, as a special case, this function takes the form of the nonconvex penalty functions outlined in Fan and Li (2001) and Zhang (2010), which are commonly used as regularization factors in the Lagrangian formulations of statistical estimation problems. Next, we show that two of the most prominent instances of such penalty functions, namely SCAD and MCP, are scale functions.

Proposition 1. Consider the SCAD penalty function ρ^SCAD(x, λ, γ) = Σ_{i=1}^n ρ^SCAD_i(x_i, λ_i, γ_i), where λ_i > 0 and γ_i > 2, for i ∈ [n], are the degree-of-regularization and nonconvexity parameters, respectively, and

ρ^SCAD_i(x_i, λ_i, γ_i) = λ_i ∫_0^{|x_i|} min{1, (γ_i − y/λ_i)_+ / (γ_i − 1)} dy.

Similarly, consider the MCP penalty function ρ^MCP(x, λ, γ) = Σ_{i=1}^n ρ^MCP_i(x_i, λ_i, γ_i), where λ_i > 0 and γ_i > 0 are the degree-of-regularization and nonconvexity parameters, respectively, and

ρ^MCP_i(x_i, λ_i, γ_i) = λ_i ∫_0^{|x_i|} (1 − y/(λ_i γ_i))_+ dy.

Then, ρ^SCAD(x, λ, γ) and ρ^MCP(x, λ, γ) are scale functions in x.

As a more familiar special case, we next show how the
ℓp-norm constraints for all p ∈ [0, ∞) can be represented using scale functions.

Proposition 2. Consider a norm-bounding constraint of the form ||x||_p ≤ β for some β ≥ 0 and p ∈ [0, ∞), where ||x||_p denotes the ℓp-norm. Then, this constraint can be written as η(x) ≤ β̄ for a scale function η(x) such that (i) if p ∈ (0, ∞), then η_i(x_i) = |x_i|^p and β̄ = β^p; (ii) if p = 0, then η_i(x_i) = 0 for x_i = 0 and η_i(x_i) = 1 for x_i ≠ 0, and β̄ = β.

We note that the ℓ∞-norm is excluded from the definition of the scale function, as constraints of the form ||x||_∞ ≤ β can be broken into multiple separate constraints bounding the magnitude of each component, i.e., |x_i| ≤ β for i ∈ [n].

4 GRAPH-BASED CONVEXIFICATION METHOD

In this section, we develop a novel convexification method for the feasible regions described by a scale function. We present these results for the case where the scale function appears in the constraints of the optimization problem. The extension to the case where these constraints are relaxed and moved to the objective function as a penalty term follows from similar arguments. In the remainder of the paper, we refer to a constraint that imposes an upper bound on a scale function as a norm-bounding constraint.

4.1 DD-Based Relaxation

Define the feasible region of a norm-bounding constraint over the variable domains as F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β}. For each i ∈ [n], define L_i to be the index set of sub-intervals [l_i^j, u_i^j], for j ∈ L_i, that span the entire domain of variable x_i, i.e., ∪_{j∈L_i} [l_i^j, u_i^j] = [l_i, u_i]. Algorithm 1 gives a top-down procedure to construct a DD that provides a relaxation of the convex hull of F, which is proven next.

Proposition 3. Consider F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β}, where n ∈ N. Let D be the DD constructed via Algorithm 1 for some domain-partitioning sub-intervals L_i, i ∈ [n]. Then, conv(F) ⊆ conv(Sol(D)).

In view of Proposition 3, note that the variable domains are not required to be bounded, as the state value calculated for each node of the DD is always finite.
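Propositions 1 and 2 lend themselves to a quick numerical spot-check. The sketch below (Python; the parameter and data values are illustrative, not from the paper) evaluates the SCAD integrand of Proposition 1 by a midpoint rule and verifies the ℓp rewriting of Proposition 2:

```python
# Spot-check of Propositions 1 and 2 with illustrative values.
def scad_i(x, lam=1.0, gam=3.0, steps=20000):
    """SCAD term: lam * int_0^|x| min{1, (gam - y/lam)_+ / (gam - 1)} dy."""
    if x == 0:
        return 0.0
    h = abs(x) / steps
    total = 0.0
    for k in range(steps):
        y = (k + 0.5) * h                        # midpoint rule
        total += min(1.0, max(0.0, gam - y / lam) / (gam - 1.0)) * h
    return lam * total

# Definition 1: zero only at 0, monotone away from the origin, symmetric.
assert scad_i(0.0) == 0.0 < scad_i(0.5) <= scad_i(1.0) <= scad_i(2.0)
assert abs(scad_i(-1.5) - scad_i(1.5)) < 1e-9
assert abs(scad_i(0.5) - 0.5) < 1e-3    # reduces to lam*|x| for |x| <= lam

# Proposition 2: ||x||_p <= beta  <=>  eta(x) <= beta_bar.
def eta(x, p):
    if p == 0:
        return sum(1 for xi in x if xi != 0)     # l0 "norm": support size
    return sum(abs(xi) ** p for xi in x)

def norm_bound_holds(x, p, beta):
    return eta(x, p) <= (beta if p == 0 else beta ** p)

x = [0.25, 0.0, -0.25]
# p = 1/2: ||x||_{1/2} = (0.5 + 0.5)^2 = 1.0, so beta = 1.0 is tight.
assert norm_bound_holds(x, 0.5, 1.0) and not norm_bound_holds(x, 0.5, 0.9)
# p = 0: two nonzero entries.
assert norm_bound_holds(x, 0, 2) and not norm_bound_holds(x, 0, 1)
```

Note that η is separable, discontinuous for p = 0, and nonconvex for p ∈ (0, 1), yet all three cases pass through the same scale-function interface, which is the point of the unified treatment.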
This allows for the construction of the entire DD layers regardless of the specific values of the arc labels. This property is specifically useful when DDs are employed to build convex relaxations for problems that do not have initial finite bounds on the variables, such as statistical estimation models where the parameters can take any value in R.

Algorithm 1: Relaxed DD for a norm-bounding constraint
Data: set F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β} and the domain-partitioning intervals L_i for i ∈ [n]
Result: a DD D = (U, A, l(.))
1 create the root node r in the node layer U_1 with state value s(r) = 0
2 create the terminal node t in the node layer U_{n+1}
3 forall i ∈ [n − 1], u ∈ U_i, j ∈ L_i do
4   if u_i^j ≤ 0 then
5     create a node v with state value s(v) = s(u) + η_i(u_i^j) (if it does not already exist) in the node layer U_{i+1}
6   else if l_i^j ≥ 0 then
7     create a node v with state value s(v) = s(u) + η_i(l_i^j) (if it does not already exist) in the node layer U_{i+1}
8   else
9     create a node v with state value s(v) = s(u) (if it does not already exist) in the node layer U_{i+1}
10  add two arcs from u to v with label values l_i^j and u_i^j, respectively
11 forall u ∈ U_n, j ∈ L_n do
12   if u_n^j ≤ 0 then
13     calculate s̄ = s(u) + η_n(u_n^j)
14   else if l_n^j ≥ 0 then
15     calculate s̄ = s(u) + η_n(l_n^j)
16   else
17     calculate s̄ = s(u)
18   if s̄ ≤ β then
19     add two arcs from u to the terminal node t with label values l_n^j and u_n^j, respectively

Remark 1. The size of the DD obtained from Algorithm 1 could grow exponentially as the number of variables and sub-intervals in the domain partitions increases. To control this growth rate, there are two common approaches. The first approach controls the size of the DD by reducing the number of sub-intervals in the domain partitions of variables at certain layers. For example, assume that the number of nodes at each layer of the DD grows through the top-down construction procedure of Algorithm 1 until it reaches the imposed width limit W at layer k. Then, by setting the number of sub-intervals for the next layer k + 1 to one (i.e., |L_{k+1}| = 1), the algorithm guarantees that the number of nodes at layer k + 1 will not exceed W. The second approach controls the size of the DD by creating a "relaxed DD" through merging nodes at layers that exceed the width limit W. In this process, multiple nodes {v_1, v_2, . . . , v_k}, for some k ∈ N, in a layer are merged into a single node ṽ in such a way that all feasible paths of the original DD are maintained. For the DD constructed through Algorithm 1, choosing the state value s(ṽ) = min_{j=1,...,k} {s(v_j)} provides such a guarantee for all feasible paths of the original DD to be maintained, because this state value underestimates the state value of each of the merged nodes, producing a relaxation of the norm-bounding constraint η(x) ≤ β.

Figure 1 illustrates the steps of Algorithm 1 to construct a DD corresponding to the set F* = {x ∈ [−7, 8]^3 | ||x||_1 ≤ 4}, using the domain-partitioning intervals {[−7, −2], [−2, 3], [3, 8]} for each variable.
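The construction can be sketched in a few lines of Python for the running example (a simplified rendition, not the paper's Julia code: equal-state nodes are merged implicitly via dictionary keys, and label paths are carried along only to make the encoded points visible):

```python
# Pure-Python sketch of Algorithm 1 on F* = {x in [-7,8]^3 : ||x||_1 <= 4},
# with sub-intervals {[-7,-2], [-2,3], [3,8]} per variable; eta_i = |.|, beta = 4.
eta = abs
beta, n = 4.0, 3
intervals = [(-7.0, -2.0), (-2.0, 3.0), (3.0, 8.0)]

def state_increment(lo, hi):
    # Lines 4-9 of Algorithm 1: add eta(hi) if hi <= 0, eta(lo) if lo >= 0,
    # and 0 if the sub-interval straddles zero (the minimum of eta over it).
    if hi <= 0.0:
        return eta(hi)
    if lo >= 0.0:
        return eta(lo)
    return 0.0

layer = {0.0: [[]]}        # root layer: state value -> partial label paths
for i in range(n):
    nxt = {}
    for s, paths in layer.items():
        for lo, hi in intervals:
            s2 = s + state_increment(lo, hi)
            if i == n - 1 and s2 > beta:       # line 18: terminal filter
                continue
            # lines 10/19: two arcs to the same head node, labeled lo and hi
            nxt.setdefault(s2, []).extend(p + [lab] for p in paths
                                          for lab in (lo, hi))
    layer = nxt

sols = [p for paths in layer.values() for p in paths]
# Sol(D) is a relaxation of F*: the point (3, -2, -2), encoded through the
# straddling sub-interval [-2, 3] in every layer, has ||.||_1 = 7 > 4 yet
# survives, while e.g. (8, 8, 8) with state 3 + 3 + 3 = 9 is filtered out.
assert [3.0, -2.0, -2.0] in sols
assert [8.0, 8.0, 8.0] not in sols
```

This makes Proposition 3 concrete: every point of F* is encoded, but some infeasible label combinations survive, so conv(F*) ⊆ conv(Sol(D)) with possible slack.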
4.2 Outer Approximation

The next step after constructing a DD that provides a relaxation of the convex hull of a set is to obtain an explicit description of the convex hull of the DD solution set, to be implemented inside an outer approximation framework to find dual bounds. Davarnia (2021) and Davarnia and Van Hoeve (2020) have proposed efficient methods to obtain such convex hull descriptions for DDs in the original space of variables through a successive generation of cutting planes. In addition, Davarnia et al. (2017) and Khademnia and Davarnia (2024) have proposed convexification methods that exploit network structures such as those found in DD formulations. In this section, we present a summary of those methods that are applicable to the DD built by Algorithm 1. We begin with the description of the convex hull in an extended (lifted) space of variables.

Proposition 4. Consider a DD D = (U, A, l(.)) with solution set Sol(D) ⊆ R^n that is obtained from Algorithm 1 associated with the constraint set F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β}. Define G(D) = {(x; y) ∈ R^n × R^{|A|} | (1a)–(1c)}, where

Σ_{a∈δ+(u)} y_a − Σ_{a∈δ−(u)} y_a = f_u,  ∀u ∈ U,  (1a)
Σ_{a∈A_k} l(a) y_a = x_k,  ∀k ∈ [n],  (1b)
y_a ≥ 0,  ∀a ∈ A,  (1c)

where f_s = −f_t = 1, f_u = 0 for u ∈ U \ {s, t}, and δ+(u) (resp. δ−(u)) denotes the set of outgoing (resp. incoming) arcs at node u. Then, proj_x G(D) = conv(Sol(D)).

Viewing y_a as the network flow variable on arc a ∈ A of D, the formulation (1a)–(1c) implies that the LP relaxation of the network model that routes one unit of supply from the root node to the terminal node of the DD represents a convex hull description of the solution set of D in a higher dimension. Thus, projecting the arc-flow variables y out of this formulation yields conv(Sol(D)) in the original space of variables. This result leads to a separation oracle that can be used to separate any point x̄ ∈ R^n from conv(Sol(D)) by solving the following LP.

Proposition 5. Consider a DD D = (U, A, l(.)) with solution set Sol(D) ⊆ R^n that is obtained from Algorithm 1 associated with the constraint set F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β}. Consider a point x̄ ∈ R^n, and define

ω* = max Σ_{k∈[n]} x̄_k γ_k − θ_t  (2)
s.t. θ_{t(a)} − θ_{h(a)} + l(a) γ_k ≤ 0,  ∀k ∈ [n], a ∈ A_k,  (3)
θ_s = 0,  (4)

where t(a) and h(a) represent the tail and the head node of arc a, respectively. Then, x̄ ∈ conv(Sol(D)) if ω* = 0. Otherwise, x̄ can be separated from conv(Sol(D)) via Σ_{k∈[n]} x_k γ*_k ≤ θ*_t, where (θ*; γ*) is an optimal recession ray of (2)–(4).

Figure 1: DD Constructed by Algorithm 1 for Set F*

The above separation oracle requires solving an LP whose size is proportional to the number of nodes and arcs of the DD, which can be computationally demanding when used repeatedly inside an outer approximation framework; see Davarnia and Hooker (2019), Davarnia et al. (2022), and Rajabalizadeh and Davarnia (2023) for learning-based methods to accelerate the cut-generating process for such LPs when implemented in branch-and-bound.
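The lifted description of Proposition 4 can be sanity-checked on a toy DD (a hypothetical two-layer diagram, not the Figure 1 one): routing one unit of flow along any single r–t path satisfies the balance constraints (1a) and recovers the path's encoded point through the coupling constraints (1b).

```python
# Toy DD: nodes r -> {a, b} -> t, two arc layers (n = 2).
# Each arc is (tail, head, layer, label); Sol(D) = {(-2,0), (-2,1), (3,0)}.
arcs = [("r", "a", 0, -2.0), ("r", "b", 0, 3.0),
        ("a", "t", 1, 0.0), ("a", "t", 1, 1.0),
        ("b", "t", 1, 0.0)]
nodes = ["r", "a", "b", "t"]

path = {0, 3}   # arc indices of one r-t path: r->a (label -2), a->t (label 1)
y = [1.0 if i in path else 0.0 for i in range(len(arcs))]

# (1a): net outflow is 1 at the root, -1 at the terminal, 0 elsewhere.
for u in nodes:
    out_flow = sum(y[i] for i, a in enumerate(arcs) if a[0] == u)
    in_flow = sum(y[i] for i, a in enumerate(arcs) if a[1] == u)
    f_u = 1.0 if u == "r" else (-1.0 if u == "t" else 0.0)
    assert out_flow - in_flow == f_u

# (1b): x_k is the label-weighted flow through arc layer k.
x = [sum(a[3] * y[i] for i, a in enumerate(arcs) if a[2] == k)
     for k in range(2)]
assert x == [-2.0, 1.0]     # the point encoded by the chosen path
# Fractional flows that mix several r-t paths trace out conv(Sol(D)).
```

Since the flow polytope is integral for such a unit-supply network, its vertices are exactly the r–t paths, which is why projecting out y yields the convex hull of the encoded points.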
As a result, an alternative subgradient-type method is proposed to solve the same separation problem, but with a focus on detecting a violated cut faster. To summarize the recursive step of the separation method in Algorithm 2: the vector γ(τ) ∈ R^n is used in line 3 to assign weights to the arcs of the DD, over which a longest r–t path is obtained. The solution x_P(τ) corresponding to this longest path is then subtracted from the separation point x̄, which provides the subgradient value for the objective function of the separation problem at the point γ(τ). The subgradient direction is then updated in line 7 for a step size ρ_τ and projected onto the unit sphere in the variables γ in line 8. It is shown that, for an appropriate step size, this algorithm converges to an optimal solution of the separation problem (2)–(4), which yields the desired cutting plane in line 11.

Algorithm 2: A subgradient-based separation algorithm
Data: a DD D = (U, A, l(.)) representing F = {x ∈ ∏_{i=1}^n [l_i, u_i] | η(x) ≤ β} and a point x̄
Result: a valid inequality separating x̄ from conv(Sol(D))
1 initialize τ = 0, γ(0) ∈ R^n, τ* = 0, ∆* = 0
2 while the stopping criterion is not met do
3   assign the weight w_a = l(a) γ(τ)_k to each arc a ∈ A_k of D, for all k ∈ [n]
4   find a longest r–t path P(τ) in the weighted DD and compute its encoded point x_P(τ)
5   if γ(τ) · (x̄ − x_P(τ)) > max{0, ∆*} then
6     update τ* = τ and ∆* = γ(τ) · (x̄ − x_P(τ))
7   update φ(τ+1) = γ(τ) + ρ_τ (x̄ − x_P(τ)) for the step size ρ_τ
8   find the projection γ(τ+1) of φ(τ+1) onto the unit sphere defined by ||γ||_2 ≤ 1
9   set τ = τ + 1
10 if ∆* > 0 then
11   return the inequality γ(τ*) · (x − x_P(τ*)) ≤ 0

Since this algorithm is derivative-free, as it calculates the subgradient values by solving a longest path problem over a weighted DD, it is very efficient in finding a violated cutting plane in comparison with the LP (2)–(4), which makes it suitable for implementation inside spatial branch-and-cut frameworks such as that proposed in Section 5.

The cutting planes obtained from the separation oracles of Proposition 5 and Algorithm 2 can be employed inside an outer approximation framework as follows. We solve a convex relaxation of the problem, whose optimal solution is denoted by x*. This solution is then evaluated against F. If η(x*) ≤ β, the algorithm terminates, having found a feasible (or optimal) solution of the problem. Otherwise, the above separation oracles are invoked to generate a cutting plane that separates x* from conv(Sol(D)). The resulting cutting plane is added to the problem relaxation, and the procedure is repeated until no new cuts are added or a stopping criterion, such as an iteration limit or gap tolerance, is triggered. If an optimal solution is not returned at termination, a spatial branch-and-bound scheme is employed, as discussed in Section 5.

5 SPATIAL BRANCH-AND-CUT

In global optimization of MINLPs, a divide-and-conquer strategy, such as spatial branch-and-bound, is employed to achieve convergence to the global optimal value of the problem. The spatial branch-and-bound strategy reduces the variables' domains through successive partitioning of the original box domains of the variables. These partitions are often rectangular in shape, dividing the variables' domain into smaller hyper-rectangles as a result of branching.
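Before moving on, the subgradient separation of Algorithm 2 from the previous section can be sketched on a toy instance (Python; the DD is so small that line 4's longest-path DP reduces to a maximum over three encoded points, and the starting vector and step size are illustrative choices):

```python
import math

# Toy instance: the r-t paths of a small DD encode Sol(D) = {(0,0),(0,1),(1,0)};
# the point x_bar = (1, 1) lies outside conv(Sol(D)) and should be cut off.
points = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]]

def longest_path_point(gamma):
    # Line 4: with arc weights w_a = l(a)*gamma_k, a longest r-t path
    # maximizes gamma . x over the encoded points (a DP over layers in general).
    return max(points, key=lambda p: sum(g * v for g, v in zip(gamma, p)))

x_bar = [1.0, 1.0]
gamma = [1.0, 0.0]                        # gamma(0): arbitrary start
best_gap, best_gamma, best_xp = 0.0, None, None
for tau in range(50):                     # line 2: fixed iteration budget
    xp = longest_path_point(gamma)
    gap = sum(g * (a - b) for g, a, b in zip(gamma, x_bar, xp))
    if gap > best_gap:                    # lines 5-6: keep the most violated cut
        best_gap, best_gamma, best_xp = gap, gamma[:], xp
    rho = 1.0 / (tau + 1)                 # diminishing step size (illustrative)
    phi = [g + rho * (a - b) for g, a, b in zip(gamma, x_bar, xp)]   # line 7
    nrm = math.sqrt(sum(v * v for v in phi))
    gamma = [v / nrm for v in phi] if nrm > 1.0 else phi             # line 8

assert best_gap > 0.0   # line 11: gamma* . (x - x_P*) <= 0 cuts off x_bar
# Validity: every encoded point satisfies the returned inequality.
assert all(sum(g * (pi - qi) for g, pi, qi in zip(best_gamma, p, best_xp)) <= 1e-9
           for p in points)
```

The returned cut is valid by construction: x_P* maximizes γ* · x over Sol(D), so γ* · (x − x_P*) ≤ 0 holds for every encoded point while the separation point violates it.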
For each such partition, a convex relaxation is constructed to compute a dual bound. Throughout this process, the dual bounds are updated as tighter relaxations are obtained, until they converge to a specified vicinity of the global optimal value of the problem. To prove such convergence results, one needs to show that the convexification method employed at each domain partition converges (in the Hausdorff sense) to the convex hull of the feasible region restricted to that partition; see Belotti et al. (2009), Ryoo and Sahinidis (1996), and Tawarmalani and Sahinidis (2004) for a detailed account of spatial branch-and-bound methods for MINLPs.

As demonstrated in the previous section, the convexification method used in our framework involves generating cutting planes for the solution sets of the DDs obtained from Algorithm 1. We refer to the spatial branch-and-bound strategy incorporated into our solution method as spatial branch-and-cut (SB&C). In this section, we show that the convex hull of the solution set of the DDs obtained from Algorithm 1 converges to the convex hull of the solutions of the original set F as the partition volume shrinks. First, we prove that reducing the variables' domain through partitioning produces tighter convex relaxations under the proposed DD-based convexification method described in Section 4.

Proposition 6. Consider F_j = {x ∈ P_j | η(x) ≤ β} for j = 1, 2, where η(x) is a scale function and P_j = ∏_{i=1}^n [l_i^j, u_i^j] is a box domain of the variables. For each j = 1, 2, let D_j be the DD constructed via Algorithm 1 for the single sub-interval [l_i^j, u_i^j], i ∈ [n]. If P_2 ⊆ P_1, then conv(Sol(D_2)) ⊆ conv(Sol(D_1)).

While Proposition 6 implies that the dual bounds obtained by the DD-based convexification framework can improve through SB&C as a result of partitioning the variables' domain, additional functional properties of the scale function in the norm-bounding constraint are required to ensure convergence to the global optimal value of the problem. Next, we show that a sufficient condition to guarantee such convergence is lower semicontinuity of the scale function. This condition is not very restrictive, as all ℓp-norm functions for p ∈ [0, ∞), as well as typical nonconvex penalty functions such as those described in Proposition 1, satisfy it.

Proposition 7. Consider a scale function η(x) = Σ_{i=1}^n η_i(x_i), where each η_i : R → R_+ is lower semicontinuous, i.e., lim inf_{x→x_0} η_i(x) ≥ η_i(x_0) for all x_0 ∈ R. Define F_j = {x ∈ P_j | η(x) ≤ β} for j ∈ N, where P_j = ∏_{i=1}^n [l_i^j, u_i^j] is a bounded box domain of the variables. For j ∈ N, let D_j be the DD associated with F_j that is constructed via Algorithm 1 for the single sub-interval [l_i^j, u_i^j], i ∈ [n]. Assume that {P_1, P_2, . . .} is a monotone decreasing sequence of rectangular partitions of the variables' domain created through the SB&C process, i.e., P_j ⊃ P_{j+1} for each j ∈ N. Let x̃ ∈ R^n be the point in the singleton set to which the above sequence converges (in the Hausdorff sense), i.e., {P_j} ↓ {x̃}. Then, the following statements hold:

(i) If η(x̃) ≤ β, then conv(Sol(D_j)) ↓ {x̃}.

(ii) If η(x̃) > β, then there exists m ∈ N such that Sol(D_j) = ∅ for all j ≥ m.
The result of Proposition 7 shows that the convex hull of the solution set represented by the DDs constructed through our convexification technique converges to the feasible region of the underlying norm-bounding constraint during the SB&C process. If this constraint is embedded in an optimization problem whose other constraints also admit a convexification method with such convergence results, convergence to the global optimal value of the optimization problem can be guaranteed through the incorporation of SB&C.

We conclude this section with two remarks about the results of Proposition 7. First, although the above convergence results are obtained for DDs with a unit width W = 1, they can be easily extended to cases with larger widths using the following observation. The solution set of a DD can be represented as a finite union of the solution sets of DDs of width 1 obtained by decomposing node sequences of the original DD. By considering the collection of all such decomposed DDs of unit width, we can use the result of Proposition 7 to show convergence to the feasible region of the underlying constraint. In practice, increasing the width limit of a DD often leads to stronger DDs with tighter convex relaxations. This, in turn, accelerates the bound improvement rate, helping to achieve the desired convergence results faster. Second, even though the convergence results of Proposition 7 are proven for a bounded domain of variables, SB&C can still be implemented for bounded optimization problems that contain unbounded variables. In this case, as is common in spatial branch-and-bound solution methods, the role of a primal bound becomes critical for pruning nodes that contain very large (or infinite) variable bounds, which result in large dual bounds. This is particularly relevant for statistical models that minimize an objective function composed of estimation errors and a penalty function based on parameter norms. In the next section, we provide preliminary computational results for an instance of such models.

6 COMPUTATIONAL RESULTS

In this section, we present preliminary computational results to evaluate the effectiveness of our solution framework. As previously outlined, although our framework can be applied to a broad class of norm-type constraints with general MINLP structures, here we focus on a well-known model that involves a challenging nonconvex norm-induced penalty function. In particular, we consider the following statistical regression problem with the SCAD penalty function:

min_{β∈R^p} ||y − Xβ||_2^2 + ρ^SCAD(β, λ, γ),  (5)

where n is the sample size, p is the number of features, X ∈ R^{n×p} is the data matrix, y ∈ R^n is the response vector, β ∈ R^p is the vector of decision variables representing the coefficient parameters of the regression model, and ρ^SCAD(β, λ, γ) is the SCAD penalty function as described in Proposition 1. It follows from that proposition that ρ^SCAD(β, λ, γ) is a scale function, making the problem amenable to our DD-based solution framework.

The SCAD penalty function is chosen as a challenging nonconvex regularization form to showcase the capabilities of our framework in handling such structures. This is an important test feature because the SCAD penalty function is not admissible in state-of-the-art global solvers, such as BARON, due to the presence of the integration operator.
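As a concrete reading of model (5), the sketch below assembles the objective that the framework minimizes globally (Python rather than the paper's Julia implementation; the closed-form SCAD expression is the standard evaluation of the integral in Proposition 1, and the data are made up):

```python
# Objective of model (5): least-squares loss plus a separable SCAD penalty.
def scad(b, lam, gam):
    # Closed form of lam * int_0^|b| min{1, (gam - y/lam)_+ / (gam - 1)} dy.
    a = abs(b)
    if a <= lam:
        return lam * a
    if a <= gam * lam:
        return (2 * gam * lam * a - a * a - lam * lam) / (2 * (gam - 1))
    return lam * lam * (gam + 1) / 2

def objective(beta, X, y, lam=1.0, gam=3.0):
    resid = [yi - sum(xij * bj for xij, bj in zip(row, beta))
             for row, yi in zip(X, y)]
    return sum(r * r for r in resid) + sum(scad(b, lam, gam) for b in beta)

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # toy data (n = 3, p = 2)
y = [1.0, 2.0, 3.0]
print(objective([1.0, 2.0], X, y))   # 2.75: zero loss + SCAD(1) + SCAD(2)
```

The penalty is flat beyond γλ, which is what gives SCAD its unbiasedness for large coefficients but also makes the objective nonconvex, hence the need for a global method.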
Consequently, to the best of our knowledge, our proposed approach represents the first global solution framework for this problem class that does not require any reformulation of the original problem.

For our experiments, we use datasets from public repositories such as the UCI Machine Learning Repository (Kelly et al.) and Kaggle (Kag). These datasets contain instances with the number of features ranging from 7 to 147, and sample sizes ranging from 200 to 5000, as shown in Tables 1 and 2. We conducted these experiments for two different settings of (λ, γ) ∈ {(1, 3), (10, 30)} to consider small and large degrees of regularization and nonconvexity for the SCAD function. These results were obtained on a Windows 11 (64-bit) operating system with 64 GB RAM and a 3.8 GHz AMD Ryzen CPU. The DD-ECP algorithm is written in Julia v1.9 via JuMP v1.11.1, and the outer approximation models are solved with CPLEX v22.1.0. For the DD construction, we use Algorithm 1 together with the DD relaxation technique described in Remark 1, with the width limit W set to 10000.

Table 1: Computational Results for Penalty Function Parameters (λ, γ) = (1, 3)

Dataset Name | Resource | p | n | Primal bound | Dual bound | Time (s)
Graduate Admission 2 | Kaggle | 7 | 400 | 20.304 | 20.295 | 22.46
Statlog (Landsat Satellite) | UCI | 36 | 4434 | 5854.507 | 5853.978 | 49.29
Connectionist Bench (Sonar, Mines vs. Rocks) | UCI | 60 | 207 | 71.853 | 68.266 | 195.17
Communities and Crime | UCI | 127 | 123 | 9.891 | 9.466 | 76.44
Urban Land Cover | UCI | 147 | 168 | 1.002 | 0.999 | 5.10
Relative location of CT slices on axial axis | UCI | 384 | 4000 | 10730.592 | 10426.985 | 281.718

Table 2: Computational Results for Penalty Function Parameters (λ, γ) = (10, 30)

Dataset Name | Resource | p | n | Primal bound | Dual bound | Time (s)
Graduate Admission 2 | Kaggle | 7 | 400 | 22.234 | 22.125 | 8.97
Statlog (Landsat Satellite) | UCI | 36 | 4434 | 5859.670 | 5855.088 | 60.42
Connectionist Bench (Sonar, Mines vs. Rocks) | UCI | 60 | 207 | 110.730 | 105.351 | 18.207
Communities and Crime | UCI | 127 | 123 | 18.456 | 17.560 | 133.35
Urban Land Cover | UCI | 147 | 168 | 10.002 | 9.990 | 8.87
Relative location of CT slices on axial axis | UCI | 384 | 4000 | 16280.696 | 16032.315 | 276.858

The merging operation merges nodes with close state values that lie in the same interval of the state range. We set the number of sub-intervals |L_i| to up to 2500 for each DD layer i. To generate cutting planes for the outer approximation approach, we use the subgradient method of Algorithm 2 with a stopping criterion of 50 iterations. The maximum number of iterations for applying the subgradient method before invoking the branching procedure to create new child nodes is 100. The branching process selects the variable whose optimal value in the outer approximation model of the current node is closest to the midpoint of its domain interval, and branching then occurs at that value. Throughout the process, a primal bound is updated by constructing a feasible solution to the problem, obtained by plugging the current optimal solution of the outer approximation at each node into the objective function of (5). This primal bound is used to prune nodes whose dual bounds are worse than the current primal bound.
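The reported bounds can be cross-checked against the relative optimality gap (primal bound − dual bound)/dual bound used as the stopping tolerance; a small script with values copied from Tables 1 and 2:

```python
# Sanity check on Tables 1-2: relative gap = (primal - dual) / dual.
rows = [
    ("Graduate Admission 2, (1,3)", 20.304, 20.295),
    ("Sonar, (1,3)", 71.853, 68.266),
    ("CT slices, (10,30)", 16280.696, 16032.315),
]
gaps = {name: (primal - dual) / dual for name, primal, dual in rows}
for name, gap in gaps.items():
    print(f"{name}: {100 * gap:.2f}%")
# All runs terminate at or near the 5% threshold; e.g. the Sonar (1,3)
# row sits at about 5.25%, while Graduate Admission 2 closes to 0.04%.
assert all(gap < 0.06 for gap in gaps.values())
```

This also shows that the easy instances close the gap almost completely, while the harder ones stop near the tolerance.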
The stopping criterion for our algorithm is reaching a relative optimality gap of 5%, calculated as (primal bound − dual bound) / dual bound. The numerical results obtained for the instances we studied are presented in Tables 1 and 2. The first and second columns of these tables include the name of the dataset and its source (UCI or Kaggle), respectively. The third and fourth columns report the size of the instance. The next column contains the best primal bound obtained for each instance. The best dual bound obtained by our solution framework is reported in the sixth column, and the solution time (in seconds) to obtain that bound is shown in the last column. These results demonstrate the potential of our solution method in globally solving this problem class, especially in the absence of any global solver capable of handling such problems.

Acknowledgements

We thank the anonymous referees for suggestions that helped improve the paper. This work was supported in part by the AFOSR YIP Grant FA9550-23-1-0183 and the NSF CAREER Grant CMMI-2338641.

References

The Kaggle Repository. https://www.kaggle.com/datasets.

P. Belotti, J. Lee, L. Liberti, F. Margot, and A. Wächter. Branching and bounds tightening techniques for non-convex
MINLP. Optimization Methods and Software, 24:597–634, 2009.

D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.

D. Bertsekas. Convex Optimization Algorithms. Athena Scientific, 2015.

D. Bertsimas and B. Van Parys. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. The Annals of Statistics, 48(1):300–323, 2020.

D. Bertsimas, A. King, and R. Mazumder. Best subset selection via a modern optimization lens. The Annals of Statistics, 44(2):813–852, 2016.

P. Bonami, M. Kılınç, and J. Linderoth. Algorithms and software for convex mixed integer nonlinear programs. In J. Lee and S. Leyffer, editors, Mixed Integer Nonlinear Programming. The IMA Volumes in Mathematics and its Applications, volume 154. Springer, 2012.

M. Castro, A. Cire, and J. Beck. Decision diagrams for discrete optimization: A survey of recent advances, 2022.

D. Davarnia. Strong relaxations for continuous nonlinear programs based on decision diagrams. Operations Research Letters, 49(2):239–245, 2021.

D. Davarnia and G. Cornuejols. From estimation to optimization via shrinkage. Operations Research Letters, 45(6):642–646, 2017. doi: 10.1016/j.orl.2017.10.005.

D. Davarnia and J. N. Hooker. Consistency for 0–1 programming. Lecture Notes in Computer Science, 11494:225–240, 2019. doi: 10.1007/978-3-030-19212-9_15.

D. Davarnia and M. Kiaghadi. A graphical framework for global optimization of mixed-integer nonlinear programs. https://doi.org/10.48550/arXiv.2409.19794, 2025.

D. Davarnia and W.-J. Van Hoeve. Outer approximation for integer nonlinear programs via decision diagrams. Mathematical Programming, 187:111–150, 2020.

D. Davarnia, J.P. Richard, and M. Tawarmalani. Simultaneous convexification of bilinear functions over polytopes with application to network interdiction. SIAM Journal on Optimization, 27(3):1801–1833, 2017.

D. Davarnia, B.
Kocuk, and G. Cornuejols. Computational aspects of Bayesian solution estimators in stochastic optimization. INFORMS Journal on Optimization, 2(4):256–272, 2020. doi: 10.1287/ijoo.2019.0035.

D. Davarnia, A. Rajabalizadeh, and J. Hooker. Achieving consistency with cutting planes. Mathematical Programming, 2022. doi: 10.1007/s10107-022-01778-8.

A. Dedieu, H. Hazimeh, and R. Mazumder. Learning sparse classifiers: Continuous and mixed integer optimization perspectives. Journal of Machine Learning Research, 22(1):1–47, 2021.

J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.

A.K. Fletcher, S. Rangan, and V.K. Goyal. Necessary and sufficient conditions for sparsity pattern recovery. IEEE Transactions on Information Theory, 55(12):5758–5772, 2009.

T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.

T. Hastie, R. Tibshirani, and M. Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press, 2015.

H. Hazimeh and R. Mazumder. Fast best subset selection: Coordinate descent and local combinatorial optimization algorithms. Operations Research, 68(5):1517–1537, 2020.

H. Hazimeh, R. Mazumder, and A. Saab. Sparse regression at scale: Branch-and-bound rooted in first-order optimization. Mathematical Programming, 196(1):347–388, 2022.

V. Jain. Zero-norm optimization: Models and applications. PhD thesis, Texas Tech University, 2010.

M. Kelly, R. Longjohn, and K. Nottingham. The UCI machine learning repository.
https://archive.ics.uci.edu.

E. Khademnia and D. Davarnia. Convexification of bilinear terms over network polytopes. Mathematics of Operations Research, 2024. doi: 10.1287/moor.2023.0001.

A. Khajavirad, J. J. Michalek, and N. V. Sahinidis. Relaxations of factorable functions with convex-transformable intermediates. Mathematical Programming, 144(1):107–140, 2014.

H. Liu, T. Yao, and R. Li. Global solutions to folded concave penalized nonconvex learning. The Annals of Statistics, 44(2):629, 2016.

R. Mazumder, J.H. Friedman, and T. Hastie. SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 106(495):1125–1138, 2011.

J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2006.

M. Pilanci, M.J. Wainwright, and L. El Ghaoui. Sparse learning via Boolean relaxations. Mathematical Programming, 151(1):63–87, 2015.

A. Rajabalizadeh and D. Davarnia. Solving a class of cut-generating linear programs via machine learning. INFORMS Journal on Computing, 2023. doi: 10.1287/ijoc.2022.0241.

H. Ryoo and N. Sahinidis. A branch-and-reduce approach to global optimization. Journal of Global Optimization, 8:107–138, 1996.

H. Salemi and D. Davarnia. On the structure of decision diagram-representable mixed integer programs with application to unit commitment. Operations Research, 2022. doi: 10.1287/opre.2022.2353.

H. Salemi and D. Davarnia. Solving unsplittable network flow problems with decision diagrams. Transportation Science, 2023. doi: 10.1287/trsc.2022.1194.

M. Tawarmalani and N. Sahinidis. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Mathematical Programming, 99:563–591, 2004.

M. Tawarmalani and N. Sahinidis. A polyhedral branch-and-cut approach to global optimization. Mathematical Programming, 103:225–249, 2005.

W.-J. van Hoeve. An introduction to decision diagrams for optimization.
INFORMS TutORials in Operations Research, pages 1–28, 2024.

M. J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. IEEE Transactions on Information Theory, 55(12):5728–5741, 2009.

W. Xie and X. Deng. Scalable algorithms for the sparse ridge regression. SIAM Journal on Optimization, 30(4):3359–3386, 2020.

C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(1):894–942, 2010.

C.-H. Zhang and T. Zhang. A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27(4):576–593, 2012.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2):301–320, 2005.

H. Zou and R. Li. One-step sparse estimates in nonconcave penalized likelihood models. The Annals of Statistics, 36(1):1509–1533, 2008.

Global Optimization of Statistical Models with Nonconvex Regularization Functions

7 SUPPLEMENTARY MATERIALS

In this section, we provide the proofs omitted from the main paper.

7.1 Proof of Proposition 1

We show the result for ρ_SCAD(x, λ, γ), as the proof for ρ_MCP(x, λ, γ) follows from similar arguments. It is clear that ρ_SCAD_i(0, λ_i, γ_i) = 0. Further, since min{1, (γ_i − y/λ_i)_+ / (γ_i − 1)} ≥ 0 for γ_i > 2, we conclude that the integral function λ_i ∫_0^{|x_i|} min{1, (γ_i − y/λ_i)_+ / (γ_i − 1)} dy is non-decreasing in |x_i| over the interval [0, ∞) as λ_i > 0, proving the result.

7.2 Proof of Proposition 2

(i) Using the definition ||x||_p = (Σ_{i∈N} |x_i|^p)^{1/p} for p ∈ (0, ∞), we can rewrite the constraint ||x||_p ≤ β as Σ_{i∈N} |x_i|^p ≤ β^p = β̄ by raising both sides to the power of p. The left-hand side of
this inequality can be considered as η(x) = Σ_{i∈N} η_i(x_i), where η_i(x_i) = |x_i|^p. Since η_i(x_i) satisfies all three conditions in Definition 1, we conclude that η(x) is a scale function.

(ii) Using the definition ||x||_0 = Σ_{i∈N} I(x_i), where I(x_i) = 0 if x_i = 0 and I(x_i) = 1 if x_i ≠ 0, we can rewrite the constraint ||x||_0 ≤ β as Σ_{i∈N} η_i(x_i) ≤ β, where η_i(x_i) = I(x_i). Since η_i(x_i) satisfies all three conditions in Definition 1, we conclude that η(x) is a scale function.

7.3 Proof of Proposition 3

It suffices to show that F ⊆ conv(Sol(D)), since the convex hull of a set is the smallest convex set that contains it. Pick x̄ ∈ F. It follows from the definition of F that Σ_{i=1}^n η_i(x̄_i) ≤ β. For each i ∈ [n], let j*_i be the index of a domain sub-interval [l^i_{j*_i}, u^i_{j*_i}] in L_i such that l^i_{j*_i} ≤ x̄_i ≤ u^i_{j*_i}. Such an index exists because x̄_i ∈ [l^i, u^i] = ∪_{j∈L_i} [l^i_j, u^i_j], where the inclusion follows from the fact that x̄ ∈ F, and the equality follows from the definition of domain partitioning. Next, we show that D includes a node sequence u_1, u_2, …, u_{n+1}, where u_i ∈ U_i for i ∈ [n+1], such that each node u_i is connected to u_{i+1} via two arcs with labels l^i_{j*_i} and u^i_{j*_i} for each i ∈ [n]. We prove the result using induction on the node layer index k ∈ [n] in the node sequence u_1, u_2, …, u_{n+1}. The induction base k = 1 follows from line 1 of Algorithm 1, as the root node r can be considered as u_1. For the inductive hypothesis, assume that there exists a node sequence u_1, u_2, …, u_k of D with u_j ∈ U_j for j ∈ [k] such that each node u_i is connected to u_{i+1} via two arcs with labels l^i_{j*_i} and u^i_{j*_i} for each i ∈ [k−1]. For the inductive step, we show that there exists a node sequence u_1, u_2, …, u_k, u_{k+1} of D with u_j ∈ U_j for j ∈ [k+1] such that each node u_i is connected to u_{i+1} via two arcs with labels l^i_{j*_i} and u^i_{j*_i} for each i ∈ [k]. We consider two cases. For the first case, assume that k ≤ n−1.
Then, the for-loop in lines 3–10 of Algorithm 1 implies that node u_k is connected to another node in the node layer U_{k+1}, which can be considered as u_{k+1}, via two arcs with labels l^k_{j*_k} and u^k_{j*_k}, because the conditions of the for-loop are satisfied as follows: k ∈ [n−1] due to the assumption of the first case, u_k ∈ U_k because of the inductive hypothesis, and j*_k ∈ L_k by construction. For the second case of the inductive step, assume that k = n. It follows from lines 1–10 of Algorithm 1 that the state value of node u_{i+1} for i ∈ [k−1] is calculated as s(u_{i+1}) = s(u_i) + γ_i, where s(u_1) = 0 because of line 1 of the algorithm, and where γ_i is calculated depending on the sub-interval bounds l^i_{j*_i} and u^i_{j*_i} according to lines 4–9 of the algorithm. In particular, γ_i = η_i(u^i_{j*_i}) if u^i_{j*_i} ≤ 0, γ_i = η_i(l^i_{j*_i}) if l^i_{j*_i} ≥ 0, and γ_i = 0 otherwise. As a result, we have s(u_k) = Σ_{i=1}^{k−1} γ_i. On the other hand, since η(x) is a scale function,
according to Definition 1, we must have η_i(0) = 0, η_i(x_1) ≤ η_i(x_2) for 0 ≤ x_1 ≤ x_2, and η_i(x_1) ≥ η_i(x_2) for x_1 ≤ x_2 ≤ 0, for each i ∈ [n]. Using the fact that l^i_{j*_i} ≤ x̄_i ≤ u^i_{j*_i}, we consider three cases. (i) If u^i_{j*_i} ≤ 0, then x̄_i ≤ u^i_{j*_i} ≤ 0, and thus η_i(x̄_i) ≥ η_i(u^i_{j*_i}) = γ_i. (ii) If l^i_{j*_i} ≥ 0, then x̄_i ≥ l^i_{j*_i} ≥ 0, and thus η_i(x̄_i) ≥ η_i(l^i_{j*_i}) = γ_i. (iii) If u^i_{j*_i} > 0 and l^i_{j*_i} < 0, then η_i(x̄_i) ≥ 0 = γ_i. Considering all these cases, we conclude that γ_i ≤ η_i(x̄_i) for each i ∈ [k−1]. Therefore, we can write s(u_k) = Σ_{i=1}^{k−1} γ_i ≤ Σ_{i=1}^{k−1} η_i(x̄_i). Now consider lines 11–18 of the for-loop of Algorithm 1 for u_k ∈ U_k and j*_k ∈ L_k. We compute s̄ = s(u_k) + γ_k, where γ_k is calculated as described previously. Using a similar argument to that above, we conclude that γ_k ≤ η_k(x̄_k). Combining this result with that derived for s(u_k), we obtain s̄ = Σ_{i=1}^k γ_i ≤ Σ_{i=1}^k η_i(x̄_i) ≤ β, where the last inequality follows from the fact that x̄ ∈ F. Therefore, lines 18–19 of Algorithm 1 imply that two arcs with label values l^k_{j*_k} and u^k_{j*_k} connect node u_k to the terminal node t, which can be considered as u_{k+1}, completing the desired node sequence. Now consider the collection of points x̃^κ for κ ∈ [2^n] encoded by all paths composed of the above-mentioned pairs of arcs with labels l^i_{j*_i} and u^i_{j*_i} between each two consecutive nodes u_i and u_{i+1} in the sequence u_1, u_2, …, u_{n+1}. Therefore, x̃^κ ∈ Sol(D) for κ ∈ [2^n]. It is clear that these points also form the vertices of an n-dimensional hyper-rectangle defined by Π_{i=1}^n [l^i_{j*_i}, u^i_{j*_i}]. By construction, we have x̄ ∈ Π_{i=1}^n [l^i_{j*_i}, u^i_{j*_i}], i.e., x̄ is a point inside the above hyper-rectangle. As a result, x̄ can be represented as a convex combination of the vertices x̃^κ for κ ∈ [2^n] of the hyper-rectangle, yielding x̄ ∈ conv(Sol(D)).

7.4 Proof of Proposition 6

Since there is only one sub-interval [l^i_2, u^i_2] for the domain of variable x_i for i ∈ [n], there is only one node, referred to as u_i, at each node layer of D_2 according to Algorithm 1.
Following the top-down construction steps of this algorithm, for each i ∈ [n−1], u_i is connected via two arcs with label values l^i_2 and u^i_2 to u_{i+1}. There are two cases for the arcs at layer i = n, based on the value of s̄ calculated in lines 12–17 of Algorithm 1. For the first case, assume that s̄ > β. Then, the if-condition in line 18 of Algorithm 1 is not satisfied. Therefore, node u_n is not connected to the terminal node t of D_2. As a result, there is no r–t path in this DD, leading to an empty solution set, i.e., conv(Sol(D_2)) = Sol(D_2) = ∅. This proves the result since ∅ ⊆ conv(Sol(D_1)). For the second case, assume that s̄ ≤ β. Then, the if-condition in line 18 of Algorithm 1 is satisfied, and node u_n is connected to the terminal node t of D_2 via two arcs with label values l^n_2 and u^n_2. Therefore, the solution set of D_2 contains 2^n points encoded by all the r–t paths of the DD, each composed of arcs with label values l^i_2 or u^i_2 for i ∈ [n].
It is clear that these points correspond to the extreme points of the rectangular partition P_2 = Π_{i=1}^n [l^i_2, u^i_2]. Pick one of these points, denoted by x̄. We show that x̄ ∈ conv(Sol(D_1)). It follows from lines 1–10 of Algorithm 1 that each layer i ∈ [n] includes a single node v_i. Further, each node v_i is connected to v_{i+1} via two arcs with label values l^i_1 and u^i_1 for i ∈ [n−1]. To determine whether v_n is connected to the terminal node of D_1, we need to calculate s̄ (which we denote s̄_1 to distinguish it from the value calculated for D_2) according to lines 12–17 of Algorithm 1. Using an argument similar to that in the proof of Proposition 3, we write s̄_1 = Σ_{i=1}^n s(v_i), where s(v_i) = η_i(u^i_1) if u^i_1 ≤ 0, s(v_i) = η_i(l^i_1) if l^i_1 ≥ 0, and s(v_i) = 0 otherwise for all i ∈ [n], and s(v_1) = 0. On the other hand, we can similarly calculate the value of s̄ for D_2 as s̄ = Σ_{i=1}^n s(u_i), where s(u_i) = η_i(u^i_2) if u^i_2 ≤ 0, s(u_i) = η_i(l^i_2) if l^i_2 ≥ 0, and s(u_i) = 0 otherwise for all i ∈ [n], and s(u_1) = 0. Because P_2 ⊆ P_1, we have l^i_1 ≤ l^i_2 ≤ u^i_2 ≤ u^i_1 for each i ∈ [n]. Consider three cases. If u^i_2 ≤ 0, then either u^i_1 ≤ 0, which leads to s(v_i) = η_i(u^i_1) ≤ η_i(u^i_2) = s(u_i) due to the monotone property of scale functions, or u^i_1 > 0, which leads to s(v_i) = 0 ≤ η_i(u^i_2) = s(u_i), as in this case l^i_1 ≤ l^i_2 ≤ u^i_2 ≤ 0. If l^i_2 ≥ 0, then either l^i_1 ≥ 0, which leads to s(v_i) = η_i(l^i_1) ≤ η_i(l^i_2) = s(u_i) due to the monotone property of scale functions, or l^i_1 < 0, which leads to s(v_i) = 0 ≤ η_i(l^i_2) = s(u_i), as in this case u^i_1 ≥ u^i_2 ≥ l^i_2 ≥ 0. If u^i_2 > 0 and l^i_2 < 0, then s(v_i) = s(u_i) = 0. As a result, s(v_i) ≤ s(u_i) in all cases and for all i ∈ [n]. Therefore, we obtain s̄_1 = Σ_{i=1}^n s(v_i) ≤ Σ_{i=1}^n s(u_i) = s̄ ≤ β, where the last inequality follows from the assumption of this case. We conclude that the if-condition in line 18 of Algorithm 1 is satisfied for D_1, and thus v_n is connected to the terminal node of D_1 via two arcs with label values l^n_1 and u^n_1.
Consequently, Sol(D_1) includes all extreme points of the rectangular partition P_1 encoded by the r–t paths of this DD. Since P_2 ⊆ P_1, the extreme point x̄ of P_2 is in conv(Sol(D_1)), proving the result.

7.5 Proof of Proposition 7

(i) Assume that η(x̃) ≤ β. Consider j ∈ ℕ. First, note that F_j ⊆ conv(F_j) ⊆ conv(Sol(D_j)) according to Proposition 3. Next, we argue that Sol(D_j) ⊆ P_j. There are two cases. For the first case, assume that the if-condition in line 18 of Algorithm 1 is violated. Then, there are no r–t paths in D_j, i.e., Sol(D_j) = ∅ ⊆ P_j. For the second case, assume that the if-condition in line 18 of Algorithm 1 is satisfied. Then, Sol(D_j) contains the points encoded by all r–t paths in D_j composed of arc label values l^i_j or u^i_j for each i ∈ [n], i.e., Sol(D_j) ⊆ P_j. As a result, conv(Sol(D_j)) ⊆ P_j. It follows from Proposition 6 that the sequence {conv(Sol(D_1)), conv(Sol(D_2)), …} is monotone non-increasing, i.e., conv(Sol(D_j)) ⊇ conv(Sol(D_{j+1})) for
j ∈ ℕ. On the other hand, we can write F_j = {x ∈ ℝ^n | η(x) ≤ β} ∩ P_j by definition. Since {P_j} ↓ {x̃}, we obtain {F_j} ↓ {x ∈ ℝ^n | η(x) ≤ β} ∩ {x̃} = {x̃}, since η(x̃) ≤ β by assumption. Therefore, based on the previous arguments, we can write F_j ⊆ conv(Sol(D_j)) ⊆ P_j. Because {F_j} ↓ {x̃} and {P_j} ↓ {x̃}, we conclude that conv(Sol(D_j)) ↓ {x̃}.

(ii) Assume that η(x̃) > β. Since the sequence of domain partitions {P_j} is a monotone decreasing sequence that converges to {x̃}, we can equivalently write that the sequence of variable lower bounds {l^i_1, l^i_2, …} in these partitions is monotone non-decreasing and converges to x̃_i for i ∈ [n], i.e., l^i_1 ≤ l^i_2 ≤ … and lim_{j→∞} l^i_j = x̃_i. Similarly, the sequence of variable upper bounds {u^i_1, u^i_2, …} in these partitions is monotone non-increasing and converges to x̃_i for i ∈ [n], i.e., u^i_1 ≥ u^i_2 ≥ … and lim_{j→∞} u^i_j = x̃_i. The assumption of this case implies that η(x̃) = Σ_{i=1}^n η_i(x̃_i) > β. Define ϵ = (Σ_{i=1}^n η_i(x̃_i) − β)/n > 0. For each DD D_j for j ∈ ℕ, using an argument similar to that of Proposition 6, we can calculate the value s̄ in lines 11–17 of Algorithm 1 as s̄ = Σ_{i=1}^n s(v_i), where v_i is the single node at layer i ∈ [n] of D_j. In this equation, considering that the variable domain is [l^i_j, u^i_j] for each i ∈ [n], we have s(v_1) = 0, and s(v_i) = η_i(u^i_j) if u^i_j ≤ 0, s(v_i) = η_i(l^i_j) if l^i_j ≥ 0, and s(v_i) = 0 otherwise. Because η_i(x_i), for i ∈ [n], is lower semicontinuous at x̃_i by assumption, for every real z_i < η_i(x̃_i) there exists a nonempty neighborhood R_i = (l̂_i, û_i) of x̃_i such that η_i(x) > z_i for all x ∈ R_i. Set z_i = η_i(x̃_i) − ϵ. On the other hand, since lim_{j→∞} l^i_j = x̃_i and lim_{j→∞} u^i_j = x̃_i, by definition of sequence convergence, there exists m_i ∈ ℕ such that l^i_{m_i} > l̂_i and u^i_{m_i} < û_i. Pick m = max_{i∈[n]} m_i. It follows that, for each i ∈ [n], we have η_i(x) > z_i = η_i(x̃_i) − ϵ for all x ∈ [l^i_m, u^i_m]. Now, consider the value of s(v_i) at layer i ∈ [n] of D_m. There are three cases.
If u^i_m ≤ 0, then s(v_i) = η_i(u^i_m) > η_i(x̃_i) − ϵ. If l^i_m ≥ 0, then s(v_i) = η_i(l^i_m) > η_i(x̃_i) − ϵ. If u^i_m > 0 and l^i_m < 0, then s(v_i) = η_i(0) = 0 > η_i(x̃_i) − ϵ, as 0 ∈ [l^i_m, u^i_m]. Therefore, s(v_i) > η_i(x̃_i) − ϵ in all cases. Using the arguments given previously, we calculate that s̄ = Σ_{i=1}^n s(v_i) > Σ_{i=1}^n η_i(x̃_i) − nϵ = β, where the last equality follows from the definition of ϵ. Since s̄ > β, the if-condition in line 18 of Algorithm 1 is not satisfied, and thus node v_n is not connected to the terminal node in DD D_m, implying that Sol(D_m) = ∅. Finally, it follows from Proposition 6 that Sol(D_j) ⊆ conv(Sol(D_j)) ⊆ conv(Sol(D_m)) = ∅ for all j > m, proving the result.
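The state update used throughout these proofs assigns to each sub-interval [l, u] the contribution η(u) if u ≤ 0, η(l) if l ≥ 0, and 0 if the interval straddles zero. For a scale function, this is exactly the minimum of η over [l, u], which is what makes the resulting DD a valid relaxation. A minimal sketch (hypothetical helper name, not from the paper's Julia implementation), checked against a brute-force grid minimum:

```python
import numpy as np

def interval_state(eta, l, u):
    """State contribution of sub-interval [l, u] in the DD construction:
    eta(u) if u <= 0, eta(l) if l >= 0, and 0 if the interval straddles 0.
    For a scale function eta (eta(0) = 0, non-increasing on (-inf, 0],
    non-decreasing on [0, inf)), this equals the minimum of eta over [l, u]."""
    if u <= 0:
        return eta(u)
    if l >= 0:
        return eta(l)
    return 0.0

# Example scale function: eta(x) = |x|^p with p = 2 (cf. Proposition 2(i)).
eta = lambda x: x ** 2

# Sanity check against a brute-force minimum over a fine grid.
for l, u in [(-2.0, -1.0), (0.5, 3.0), (-1.0, 2.0)]:
    brute = min(eta(x) for x in np.linspace(l, u, 10001))
    assert abs(interval_state(eta, l, u) - brute) < 1e-3
```

Underestimating η on each sub-interval in this way is precisely why the state value s̄ accumulated along any r–t path lower-bounds Σ_i η_i(x_i) for every x in the corresponding hyper-rectangle.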
PRINCIPAL CURVES IN METRIC SPACES AND THE SPACE OF PROBABILITY MEASURES

BY ANDREW WARREN¹, ANTON AFANASSIEV¹, FOREST KOBAYASHI¹, YOUNG-HEON KIM¹ AND GEOFFREY SCHIEBINGER¹

¹Department of Mathematics, University of British Columbia; awarren@math.ubc.ca; anton.a@math.ubc.ca; fkobayashi@math.ubc.ca; yhkim@math.ubc.ca; geoff@math.ubc.ca

We introduce principal curves in Wasserstein space, and in general compact metric spaces. Our motivation for the Wasserstein case comes from optimal-transport-based trajectory inference, where a developing population of cells traces out a curve in Wasserstein space. Our framework enables new experimental procedures for collecting high-density time-courses of developing populations of cells: time-points can be processed in parallel (making it easier to collect more time-points). However, then the time of collection is unknown, and must be recovered by solving a seriation problem (or one-dimensional manifold learning problem). We propose an estimator based on Wasserstein principal curves, and prove it is consistent for recovering a curve of probability measures in Wasserstein space from empirical samples. This consistency theorem is obtained via a series of results regarding principal curves in compact metric spaces. In particular, we establish the validity of certain numerical discretization schemes for principal curves, which is a new result even in the Euclidean setting.

CONTENTS

1 Introduction
  1.1 Contributions and overview
  1.2 Related work
2 Principal curves in metric spaces
  2.1 Discretization and algorithm
3 Principal curves in the space of probability measures
4 Application to the seriation problem
  4.1 Consistency theory
  4.2 Experiments
5 Discussion
  5.1 Variants of principal curves
Acknowledgements
References
A Properties of curves in metric spaces
B Background on reproducing kernel Hilbert spaces
C Deferred Proofs
  C.1 Proofs for Section 2
  C.2 Proofs for Section 3
  C.3 Proofs for Section 4
D Extensions for Section 2
  D.1 Nonlocal discretization
  D.2 Fixed endpoints and semi-supervision
E Additional details for experiments

MSC2020 subject classifications: Primary 62G05, 49Q20; secondary 62P10, 62R20.
Keywords and phrases: Principal curves, optimal transport, unsupervised learning, seriation, manifold learning, trajectory inference.

arXiv:2505.04168v1 [math.ST] 7 May 2025

1. Introduction. Principal components
analysis (PCA) is one of the most basic and widely used tools in exploratory data analysis and unsupervised learning. Principal curves [42] provide a natural, nonlinear generalization of principal components (Figure 1). Motivated by an application to single-cell RNA-sequencing (scRNA-seq), we develop a theory of principal curves in general compact metric spaces. For this application, we consider principal curves in the space of probability measures over a compact space of cell states (e.g., gene expression space, which can be modeled with the d-dimensional simplex). This space of probability measures over cell states is itself a compact metric space when equipped with the Wasserstein metric, given by optimal transport theory [3, 90]. Indeed, a population of cells can be thought of as a probability distribution over cell states, and as the population changes over time (e.g., in embryonic development), it traces out a curve in the space of probability distributions [60, 79]. Biologists collect empirical distributions along this developmental curve by profiling with single-cell RNA-sequencing, which provides high-dimensional measurements of cell states for thousands to millions of cells [60, 79, 80]. However, it is currently practically difficult to collect large numbers of time-points along the curve, because each time-point is typically processed manually, in series (e.g., several separate mouse embryos are fertilized, and each is allowed to develop for a specific amount of time before cells are profiled with scRNA-seq). The benefit of this careful, separate processing of each developmental time-point is that it allows experimentalists to record the time-of-collection for each time-point. However, it would be possible to process far more embryos if one could infer the time-of-collection, because this would allow embryos to be processed in parallel, e.g.,
profile an entire cup of fly embryos, each at a different stage of development; note that the cells from the same embryo come from the same developmental time-point. The resulting data would consist of an (unordered) set of empirical distributions along a developmental curve. By fitting a principal curve to these empirical distributions, one can order these embryos in developmental time. More precisely, let ρ_t denote a continuous¹ curve of probability measures on a compact domain, parameterized by time t ∈ [0, 1]. Suppose we observe empirical distributions ρ̂_{t_1}, …, ρ̂_{t_N}, constructed from i.i.d. samples from ρ_{t_n}, where each t_n is an i.i.d. random time in [0, 1], as illustrated in Figure 1. However, the temporal labels t_1, …, t_N are unknown, and our goal is to recover them. We can approach this problem by fitting a principal curve to infer ρ_t in the space of probability measures, equipped with the Wasserstein distance W_2, which metrizes convergence in distribution. Inferring the curve ρ_t can also be thought of as a measure-valued manifold learning problem: we want to learn a one-dimensional continuous manifold in the space of probability measures equipped with the W_2 distance.

1.1. Contributions and overview. We introduce the principal curve problem in general compact metric spaces, motivated by an application of principal curves in the Wasserstein space of probability measures. Given a compact metric space (X, d) and a probability distribution Λ on X (called the data distribution), we seek a curve γ⋆ : [0, 1] → X which satisfies the following
two criteria: the distribution Λ is close to γ⋆, and the curve γ⋆ is not "too long." We measure the fit of γ to Λ by the average squared distance of points to γ, namely

Fit(Λ, γ) := ∫_X inf_{t∈[0,1]} d²(x, γ_t) dΛ(x).

¹Here continuity is with respect to convergence in distribution.

A principal curve γ⋆ passing through Λ is then defined via

(1) minimize_γ Fit(Λ, γ) + β Length(γ),

where β > 0 determines the level of length penalization. The minimization runs over all sufficiently regular curves, in a sense we make precise in Section 2.

Fig 1: (a) An illustration showing ρ_t (blue) at evenly-spaced time samples t_1, …, t_N, with the samples comprising ρ̂_{t_n} shown in red. (b) An illustration of a principal curve γ_t (black curve) "fitting" the empirical measures ρ̂_{t_n}, each of which is represented by a single red dot. Note that the curve achieves low average projection distance, while the curve itself is not "too long." The straight lines connecting individual data points to their projections along the curve represent length-minimizing geodesics in the Wasserstein space.

One special case of interest is the Wasserstein space of probability measures over a compact space V:

X = P(V), and d(μ, ν) = W_2(μ, ν).

Here, a natural choice for the data distribution Λ is one concentrated around a curve of measures ρ. In practice, one may observe data in the form of empirical distributions ρ̂_{t_1}, …, ρ̂_{t_N} along the curve ρ, as described above in Section 1. In this case, the data distribution is the following empirical measure:

(2) Λ̂ := (1/N) Σ_{n=1}^N δ_{ρ̂_{t_n}}.

In this finite-data setting, the principal curve problem (1) can be discretized by replacing Λ with the empirical approximation Λ̂, and discretizing the Length penalty, as follows:

(3) minimize_{γ_1,…,γ_K ∈ P(V)} (1/N) Σ_{n=1}^N inf_{k∈[K]} W_2²(ρ̂_{t_n}, γ_k) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1}).

In Section 2.1, we describe a coupled Lloyd's algorithm for this discretized objective.
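As a concrete illustration of the discretized objective (3), the following sketch works in Euclidean space (ℝ² with the usual distance) rather than Wasserstein space, so squared Euclidean distance plays the role of W₂² and cluster means play the role of Wasserstein barycenters. The update below is a simplified Lloyd-style step that ignores the coupling induced by the length penalty, so it is a rough analogue of, not a substitute for, the paper's coupled Lloyd's algorithm.

```python
import numpy as np

def discrete_objective(data, curve, beta):
    """Euclidean analogue of objective (3): mean squared distance of each
    data point to its nearest curve point, plus beta times the length of
    the polyline through curve[0], ..., curve[K-1]."""
    d2 = ((data[:, None, :] - curve[None, :, :]) ** 2).sum(-1)  # (N, K)
    fit = d2.min(axis=1).mean()
    length = np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()
    return fit + beta * length

def lloyd_step(data, curve):
    """One simplified Lloyd-style update: assign each point to its nearest
    curve point, then move each curve point to the mean of its assigned
    points. (In Wasserstein space, the mean becomes a Wasserstein
    barycenter, and the full coupled update also accounts for the
    length-penalty coupling between consecutive curve points.)"""
    d2 = ((data[:, None, :] - curve[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    new_curve = curve.copy()
    for k in range(len(curve)):
        pts = data[assign == k]
        if len(pts):
            new_curve[k] = pts.mean(axis=0)
    return new_curve

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 200)
data = np.c_[t, t ** 2] + 0.05 * rng.normal(size=(200, 2))  # noisy parabola
curve = np.linspace(data.min(axis=0), data.max(axis=0), 8)  # initial polyline
before = discrete_objective(data, curve, beta=0.01)
for _ in range(10):
    curve = lloyd_step(data, curve)
after = discrete_objective(data, curve, beta=0.01)
```

On this toy dataset, the alternating updates drive the fit term down while the small length penalty keeps the K-tuple from spreading out, mirroring the trade-off in (3).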
Implemented via existing Wasserstein barycenter solvers, we show it can generate reasonable results in practice. Minimizers of the discrete problem (3) can be thought of as estimators for the true curve of measures ρ. One of our main results is that these estimators are consistent.

THEOREM 1.1 (Consistency of Wasserstein principal curves). Let V be a compact, convex domain. Suppose we are given data from a ground truth curve ρ_t : [0, 1] → P(V) in the form of an empirical measure Λ̂ as in equation (2), with M samples for each empirical distribution ρ̂_{t_1}, …, ρ̂_{t_N}. Assuming that ρ_t is injective and sufficiently regular², then with probability 1,

ρ_t = lim_{β→0} lim_{N,M,K→∞} argmin_{γ_1,…,γ_K} [ (1/N) Σ_{n=1}^N inf_{k∈[K]} W_2²(ρ̂_{t_n}, γ_k) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1}) ].

We make precise the sense in which we recover the ground truth in the limit in Section 4; the idea is that, in the limit, our minimizing K-tuple {γ*_{t_k}}_{k=1}^K from (3) converges to a curve γ*_t with the same range as ρ_t. Then, if ρ_t is injective, this allows us to infer the ordering of the time labels along ρ_t up to reversing the whole ordering. In fact, we establish a more general version of Theorem 1.1 which allows for measurement noise typically encountered in single-cell
RNA sequencing (scRNA-seq). In practice, scRNA-seq provides an imperfect snapshot of each cell's expression profile by sampling from the distribution of messenger RNAs within the cells. After profiling many cells at various time-points along a developmental curve ρ, the resulting data would consist of noisy empirical distributions ρ̃_{t_1}, …, ρ̃_{t_N}, defined as

(4) ρ̃_{t_n} = (1/M) Σ_{m=1}^M δ_{v̂_m},

where v̂_m ∼ Multinomial(v_m, R_m) is the noisy expression profile when we sequence R_m 'reads' from a cell with expression profile v_m ∼ ρ_{t_n} [54]. A version of Theorem 1.1 also holds with noisy empirical distributions ρ̃_{t_n}, as long as R_m → ∞ uniformly.

²It suffices to assume ρ_t is Lipschitz as a function from [0, 1] to P(V) equipped with the W_2 metric.

We derive Theorem 1.1 by way of a series of results for the more general problem (1) of principal curves on compact metric spaces, which we now summarize. In Section 2, we prove existence of minimizers for (1) (Proposition 2.1), as well as stability of minimizers with respect to the underlying distribution Λ (Proposition 2.3). We then establish the consistency of a discretized variational problem generalizing (3), meaning minimizers converge to minimizers of the continuum problem (1) (Theorem 2.5). This discrete problem enjoys a "coupled Lloyd's algorithm" type numerical scheme (Algorithm 1); in the case of measure-valued data where X = P(V) and d = W_2, this scheme can be feasibly implemented via existing Wasserstein barycenter solvers. We also consider a variant numerical scheme which incorporates nonlocal adaptive kernel smoothing, and which offers better performance in our experiments; we likewise prove its consistency with respect to the continuum problem (1) in Proposition D.1. In Section 3, we consider special features of problem (1) in the case where X = P(V) and d = W_2, and so Λ is a distribution over probability distributions.
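The read-noise model (4) above is easy to simulate. A minimal sketch, assuming a cell's expression profile v is a point on the simplex and the observed profile is the normalized count vector of R sequenced reads (note NumPy's multinomial takes arguments in the order (R, v)): as R grows, the noisy profile concentrates around v, which is the intuition behind requiring R_m → ∞.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([0.5, 0.3, 0.2])  # a cell's expression profile on the simplex

def noisy_profile(v, R, rng):
    """Observed profile from R reads: counts ~ Multinomial(R, v), normalized."""
    return rng.multinomial(R, v) / R

# The L1 error of the noisy profile shrinks as the number of reads R grows.
errs = [np.abs(noisy_profile(v, R, rng) - v).sum() for R in (10, 1000, 100000)]
```

Each noisy profile still sums to one, so it remains a valid point on the simplex; only its location fluctuates around v at scale roughly 1/√R per coordinate.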
In particular, we establish an iterated Glivenko-Cantelli theorem for doubly empirical measures over compact metric spaces (Theorem 3.1). This theorem establishes the convergence of measures such as the Λ̂ from (2) to the limiting distribution over distributions Λ, in a sense which allows us to apply the discrete-to-continuum convergence results from Section 2. We further extend this to a triple-iterated Glivenko-Cantelli theorem to handle the practical setting of finite reads in single-cell RNA sequencing [54, 95]. This triple-iterated Glivenko-Cantelli result is stated in Proposition 3.2. In Section 4, we apply principal curves to the problems of inferring curves and missing temporal labels from data, mentioned above. Theorem 4.2, our third main theorem, establishes that in the case where the support of Λ coincides with the range of an injective curve ρ_t in a compact metric space X, as we send the regularization parameter β to 0, we recover ρ_t (up to monotone or reverse-monotone time reparametrization). We show in Proposition 4.4 that this means we can recover the ordering of the ρ_t's (up to total reversal), and so our principal curves machinery can be used as a seriation algorithm. We test the performance of our numerical scheme in Section 4.2 by fitting principal curves to several simulated datasets and comparing the accuracy of the ordering derived from the principal curve to that of existing seriation algorithms. We find
that our approach to principal curves is competitive with existing methods, even while simultaneously providing an estimate of the latent one-dimensional structure of the data, something not provided by widely used seriation methods like that of [6]. We anticipate that this framework could be useful for generating high-density time-courses with single-embryo, single-cell RNA-sequencing [67]. 1.2. Related work. Our study of principal curves in the space of probability measures is a contribution to a growing body of work which transposes widely used learning algorithms to the setting of measure-valued data, including: regression [18, 23, 38, 48, 60, 72, 80], spline interpolation [8, 20, 21, 47], clustering [74, 89, 96], and manifold learning [40]. In particular, there has been some work on principal component analysis in Wasserstein space [11, 17, 49, 82, 92]. However, principal curves have not yet been studied in Wasserstein space. Moreover, some elements of our results are new even in the setting of Euclidean space. In particular, the theoretical guarantees for numerical schemes for principal curves provided in Section 2 and Appendix D appear to be new; similar schemes have been used widely for decades (including in the R princurve package), but to our knowledge have only been justified heuristically. Accordingly, conducting our analysis in a general metric space setting allows us to handle the cases of Euclidean- and measure-valued data simultaneously; moreover, it also covers more abstract principal curves which have been considered previously, such as those on compact Riemannian manifolds [43] and compact subsets of Banach spaces [85]. Comparison to existing work on principal curves. 
The study of principal curves was inaugurated in the 1980s by Hastie and Stuetzle [41, 42], who defined a principal curve for a probability distribution Λ on R^d to be a critical point of the functional

PC(Λ)(γ) := ∫_{R^d} inf_{t∈[0,1]} d^2(x, γ_t) dΛ(x),

where the argument γ_t: [0,1] → R^d ranges over infinitely differentiable injective curves. Hastie and Stuetzle offer an appealing interpretation of critical points of PC(Λ) as curves which "locally pass through the middle" of the distribution Λ. Nonetheless, the functional PC(Λ) has some undesirable features. Indeed, PC(Λ) achieves a value of zero if one takes γ_t to be a space-filling curve on the support of Λ; while space-filling curves are excluded from the domain of PC(Λ) by definition, it remains the case that approximate minimizers will be approximate space-filling curves, and these do not provide a desired summary of the data distribution in typical applications. One might hope that some non-minimal critical points of PC(Λ) are better behaved, but here too there are obstacles: [31] observes that every critical point of PC(Λ) is a saddle point (rather than a local minimum), which prevents the use of cross-validation; see also the discussion in [36]. Making a very similar observation but with different terminology, [85] notes that computing critical points of PC(Λ) is an ill-posed problem, meaning that these critical points are unstable with respect to small perturbations of the distribution Λ. Motivated by these technical problems, numerous alternate formulations have been proposed, including density ridge estimation [19, 34, 35, 70], conditional expectation optimization [36], local principal components
[29], spline-based methods [86], and penalized principal curves: a wide range of approaches close in spirit to ours, whereby the functional PC(Λ) is modified with some kind of penalty on the curve γ. Various approaches to penalized principal curves include a hard constraint on the length of γ [28, 51], a soft penalty on the length of γ [62, 85], or some penalization of other attributes like "curvature" [9, 57, 64, 77]. In this article, we employ the "soft length penalty" variant of principal curves, because it can be directly translated to the general setting of compact metric spaces, and hence to the Wasserstein space of probability measures. Moreover, the defining variational problem is stable with respect to perturbations of the data distribution, as we show in Proposition 2.3, and the problem can be naturally discretized and attacked numerically, even in metric spaces which have no differentiable structure and no analog of the Lebesgue measure (so that there is no meaningful notion of "probability density").

Comparison to related work on seriation. Seriation, the problem of ordering data given pairwise comparisons, is related to our principal curve problem. Early theoretical treatments of the seriation problem are due to Robinson [75] and Kendall [52, 53], and are closely related to the traveling salesman problem [37]. There is another branch of seriation literature focused on inferring permutations from pairwise comparisons between objects [32, 83]. Spectral seriation computes eigenvectors of the Laplacian constructed from a kernel matrix of pairwise comparisons [6, 39, 45, 68, 94]. In our principal curve formulation, by contrast, we directly observe the objects (e.g., the empirical distributions), and we explicitly search for a curve embedded in a metric space.

Comparison to related work on single-cell trajectory inference.
The problem of reconstructing a curve of measures ρ_t from empirical samples ρ̂_{t_n} has also been studied recently in the context of trajectory inference [23, 60]. In trajectory inference, ρ_t represents the curve of marginals of a stochastic process, and the goal is to recover the underlying trajectories (or the law on paths). Lavenant et al. [60] showed that a convex approach, based on optimal transport, can recover a stochastic differential equation from marginal samples, as long as the vector field for the drift has zero curl [60]. Assuming that the drift is conservative implies that the curve of marginals uniquely determines the law on paths of the SDE. The primary difference with the setup here is that now we are not given time labels t_n for the empirical marginals ρ̂_{t_n} along the curve (Figure 1). Indeed, a corollary of our work is a consistent approach for trajectory inference, without time labels.

2. Principal curves in metric spaces. We first give definitions of some key objects used in the setup of our inference problem. Let (X, d) be a compact metric space. By a curve in X, we mean a continuous function γ_(·): [0,1] → X. The metric speed of a curve γ at time t is denoted |γ̇_t| and is given by the following limit:

|γ̇_t| := lim_{s→t} d(γ_s, γ_t) / |s − t|.

A curve γ is said to be absolutely continuous (AC) provided
that |γ̇_t| exists for Lebesgue-almost all t ∈ [0,1], and that |γ̇_t| is an integrable function of time. In this case we define the length of γ as follows:

Length_d(γ) := ∫_0^1 |γ̇_t| dt.

We note that the length of γ is independent of time reparametrization, and that a sufficient condition for Length_d(γ) to be finite is that t ↦ γ_t is Lipschitz, in which case Length_d(γ) ≤ Lip_d(γ). We write AC([0,1]; X) to denote the set of AC curves into X.

A metric space (X, d) is said to be a length space if, for all x, y ∈ X, the distance d(x, y) coincides with the infimum of the lengths of all AC curves connecting x to y, namely

d(x, y) = inf_{γ∈AC([0,1];X): γ_0=x, γ_1=y} ∫_0^1 |γ̇_t| dt.

Likewise we say that (X, d) is a geodesic metric space if this infimum is attained. In this case a geodesic curve γ from x to y is one where d(x, y) = ∫_0^1 |γ̇_t| dt, with γ_0 = x and γ_1 = y. For further background on curves in metric spaces, we refer the reader to [2, 14]. We collect a number of basic results concerning curves in metric spaces in Appendix A.

We also recall the definition of the weak* topology for probability measures on a compact metric space X: here, a sequence (Λ_n)_{n∈N} of elements of P(X) is said to converge weak*ly (a.k.a. converge in distribution) to a probability measure Λ provided that, for every continuous φ: X → R,

∫_X φ dΛ_n → ∫_X φ dΛ as n → ∞.

In this case we write Λ_n ⇀* Λ. By Prokhorov's theorem, P(X) is itself a compact metric space when equipped with the weak* topology.

We first consider existence of minimizers for the penalized version of the principal curves functional

PPC(Λ; β)(γ) := ∫_X d^2(x, Γ) dΛ(x) + β Length(γ),

where γ_(·) ∈ AC([0,1]; X), Λ ∈ P(X), and Γ denotes the range of γ. We shall often drop the β argument and simply write PPC(Λ) for brevity. We call minimizers of PPC(Λ; β) "principal curves for Λ", with the understanding that there is implicit dependence on the choice of penalty β > 0.
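The length of a discrete curve is just the sum of consecutive distances, and, as noted above, it is invariant under monotone reparametrization. A small numpy sketch (our own toy example, with a Euclidean metric standing in for a general d) illustrates both points on the segment from (0,0) to (3,4):

```python
import numpy as np

def polyline_length(knots, dist):
    """Discrete length: sum of d(gamma_k, gamma_{k+1}) over consecutive
    knots, the discrete analog of Length(gamma) = int_0^1 |gamma_dot| dt."""
    return float(sum(dist(a, b) for a, b in zip(knots[:-1], knots[1:])))

euclid = lambda a, b: float(np.linalg.norm(a - b))

# The segment from (0,0) to (3,4) has length 5, and the discrete length is
# unchanged when the knot times are warped monotonically (t -> t^2).
uniform = np.array([[3 * t, 4 * t] for t in np.linspace(0, 1, 50)])
warped = np.array([[3 * t**2, 4 * t**2] for t in np.linspace(0, 1, 50)])
print(round(polyline_length(uniform, euclid), 6),
      round(polyline_length(warped, euclid), 6))  # 5.0 5.0
```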
In this section, as well as in Section 3, we allow Λ to be a general probability measure; we then specialize to the case where Λ is induced by a ground truth curve ρ_t in Section 4.

PROPOSITION 2.1 (Existence of principal curves). Let (X, d) be a compact metric space. Let Λ ∈ P(X) and let β > 0. Then there exist minimizers for PPC(Λ; β).

The proof is deferred to Appendix C.1. We note that the argument is close to that of [63, Lem. 2.2], which establishes existence in the case where X = R^d.

A natural next question is whether we can guarantee that solutions are unique. In many cases the answer is no, and we give an elementary example for R^2 in a moment in Proposition 2.2. The idea is that when Λ has nontrivial symmetries and an optimizer γ of PPC(Λ) is known to be injective, we may simply apply said symmetries to γ to obtain a new optimizer. While the counterexample to uniqueness we provide here is quite specific, we also note that the data-fitting term ∫_X d^2(x, Γ) dΛ(x) is neither concave nor convex in γ even on Euclidean domains [57, § 2.2.1], so one should not expect the energy landscape of the functional PPC(Λ) to be simple. This failure of uniqueness means that, for a given data distribution Λ (and length penalty β > 0), we must be willing to accept each principal curve
for Λ as an estimate of Λ's implicit one-dimensional structure. Whether this makes sense surely depends on Λ itself as well as the domain of application. For example, the consistency results of Section 4 apply in the limiting case where Λ itself has one-dimensional support inside X.

PROPOSITION 2.2 (Non-uniqueness of principal curves). There exists a compact metric space (X, d) and a probability measure Λ ∈ P(X) for which minimizers of PPC(Λ) are not unique. In particular, we can take X to be the unit ball in R^2 and Λ the uniform distribution on an equilateral triangle.

The details of the proof are given in Appendix C.1. Next, we investigate questions of stability, that is, the dependency of principal curves on the choice of measure Λ. As we do not have uniqueness, our stability result is for the set of optimizers as a whole.

PROPOSITION 2.3 (Stability of principal curves with respect to data distribution). Let (X, d) be a compact metric space. Let (Λ_N)_{N∈N} denote a sequence in P(X) such that Λ_N ⇀* Λ. Then minimizers of PPC(Λ_N) converge to minimizers of PPC(Λ) in the following sense. If γ*^N ∈ AC([0,1]; X) is a minimizer of PPC(Λ_N) which has been reparametrized to have constant speed, then there exists a minimizer γ* of PPC(Λ) such that, up to passage to a subsequence, γ*^N_t → γ*_t uniformly in t. Moreover, given any subsequence of minimizers γ*^N of PPC(Λ_N) which converges uniformly in t to some γ* ∈ AC([0,1]; X), it holds that γ* is a minimizer of PPC(Λ).

See Appendix C.1 for the proof. We then obtain the following by way of Varadarajan's version [88] of the Glivenko-Cantelli theorem:

COROLLARY 2.4. Let Λ ∈ P(X), and let (Λ_N)_{N∈N} be a sequence of empirical measures for Λ. Then with probability 1, it holds that minimizers of PPC(Λ_N) converge to minimizers of PPC(Λ) in the sense of the previous proposition.

2.1. Discretization and algorithm. In this subsection, we first formally discretize the principal curves functional PPC(Λ) and describe a Lloyd-type algorithm to minimize this discretization.
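As a running illustration of the kind of discrete objective derived below, here is a toy Euclidean instance (our own sketch, with made-up data; the function name `discrete_objective` is ours): a data-fitting term that assigns each point to its nearest knot, plus a length penalty on consecutive knots.

```python
import numpy as np

def discrete_objective(data, knots, beta):
    """Evaluate a discretized principal-curve objective of the form
        (1/N) sum_k sum_{n in I_k} d^2(x_n, gamma_k)
            + beta * sum_k d(gamma_k, gamma_{k+1}),
    where the Voronoi cells I_k assign each data point to its nearest
    knot (argmin breaks ties toward the lowest index)."""
    d2 = ((data[:, None, :] - knots[None, :, :]) ** 2).sum(-1)  # N x K
    fit = d2.min(axis=1).mean()
    length = float(np.sqrt(((knots[1:] - knots[:-1]) ** 2).sum(-1)).sum())
    return float(fit + beta * length)

rng = np.random.default_rng(0)
data = np.c_[np.linspace(0, 1, 200), 0.05 * rng.normal(size=200)]
knots_on = np.c_[np.linspace(0, 1, 10), np.zeros(10)]    # knots along the data
knots_off = np.c_[np.linspace(0, 1, 10), np.ones(10)]    # knots displaced away
print(discrete_objective(data, knots_on, beta=0.1) <
      discrete_objective(data, knots_off, beta=0.1))     # True
```

Knots that follow the data achieve a lower objective than displaced knots of equal length, which is the behavior the minimization below exploits.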
Then, we provide a rigorous consistency result for our discretization scheme. We first give the following heuristic computation as motivation. Let x_1, x_2, …, x_N be N i.i.d. random elements of X drawn according to Λ, and let γ ∈ AC([0,1]; X). Fix also timepoints 0 = t_1 < … < t_k < … < t_K = 1. Now, for any 1 ≤ n ≤ N, observe that if t_{k(n)} is the index of the γ_{t_k} which is closest to x_n amongst all {γ_{t_k}}_{k=1}^K (where we pick the lowest index in the event of a tie), then it holds that d(x_n, γ_{t_{k(n)}}) ≈ d(x_n, Γ), with equality in the limit where K → ∞. According to Monte Carlo integration, we expect heuristically that

∫ d^2(x, Γ) dΛ(x) ≈ (1/N) Σ_{n=1}^N d^2(x_n, γ_{t_{k(n)}}).

Now, let I_k := {n ∈ N : x_n projects onto γ_{t_k}} (where again we pick the lowest index in the case of a tie, for the "projection" onto the γ_{t_k}'s). The sets I_k can be thought of as "Voronoi cells" for the points x_n with respect to {γ_{t_k}}_{k=1}^K. Then we can rewrite

Σ_{n=1}^N d^2(x_n, γ_{t_{k(n)}}) = Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_{t_k}).

At the same time, we can approximate Length(γ) ≈ Σ_{k=1}^{K−1} d(γ_{t_k}, γ_{t_{k+1}}). Altogether,

∫_X d^2(x, Γ) dΛ(x) + β Length(γ) ≈ (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_{t_k}) + β Σ_{k=1}^{K−1} d(γ_{t_k}, γ_{t_{k+1}}).

Note again that the "Voronoi cells" I_k depend on the γ_{t_k} in this expression. This discretization is useful to us
for two reasons. Firstly, it turns out that we can prove that minimizers of the discrete objective

(1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_{t_k}) + β Σ_{k=1}^{K−1} d(γ_{t_k}, γ_{t_{k+1}})

converge (in roughly the same sense as in Proposition 2.3) to minimizers of PPC(Λ) as N, K → ∞. Indeed, we prove such a discrete-to-continuum convergence result later in this section; see Theorem 2.5. Second, the discrete objective admits a descent algorithm which is morally similar to Lloyd's algorithm for k-means clustering, except that the means are "coupled" to each other via the global length penalty β Σ_{k=1}^{K−1} d(γ_{t_k}, γ_{t_{k+1}}). While (just as for the usual k-means clustering) we are unable to prove global convergence due to the non-convexity of the objective, we nonetheless observe in our experiments that such a descent procedure works well in practice.

To wit, we now give a prose description of a "coupled Lloyd's algorithm" for the discrete objective. A restatement of the algorithm in pseudocode is given below as Algorithm 1; a single loop of the algorithm is illustrated below in Figure 2.

1. Initialize the "knots" {γ_k}_{k=1}^K. (Line 1 in Algorithm 1)
2. Given some predetermined threshold ε > 0, repeat steps 3-5 below in a loop, until the value of the discrete objective (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_k) + β Σ_{k=1}^{K−1} d(γ_k, γ_{k+1}) drops by less than ε. (Lines 2 and 6)
3. Permute the indices of the knots {γ_k}_{k=1}^K to minimize the total length Σ_{k=1}^{K−1} d(γ_{k+1}, γ_k). This amounts to solving the traveling salesman problem with respect to the matrix of pairwise distances between the γ_k's. (Line 3)
4. Given {γ_k}_{k=1}^K, compute the sets I_k := {x_n : γ_k is the closest knot to x_n}. Tiebreak lexicographically if necessary. (Line 4)
5. Given the I_k's from step 4, compute a new set of γ_k's by minimizing (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_k) + β Σ_{k=1}^{K−1} d(γ_{k+1}, γ_k). (Line 5)

A version of this algorithm on Euclidean domains has been proposed in [56]; our version is for general compact metric spaces for which pairwise distances and barycenters can be effectively computed.
Algorithm 1: Coupled Lloyd's Algorithm for Principal Curves
Input: data {x_n}_{n=1}^N, parameters β > 0, ε > 0
1  {γ_k}_{k=1}^K ← initialize_knots();
2  repeat
3    {γ_k}_{k=1}^K ← TSP_ordering({γ_k}_{k=1}^K);   /* min-length ordering */
4    {I_k}_{k=1}^K ← compute_Voronoi_cells({γ_k}_{k=1}^K);
5    {γ_k}_{k=1}^K ← argmin_{{γ'_k}_{k=1}^K} (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ'_k) + β Σ_{k=1}^{K−1} d(γ'_{k+1}, γ'_k);
6  until ε-convergence;
Result: {γ_k}_{k=1}^K   /* the updated output knots */

Fig 2: A visualization of one loop of the algorithm. (a) A local view of the situation in the discretized case. Here, the knots {γ_k}_{k=1}^K are plotted with geodesic interpolations between adjacent points, ordered using a TSP solver. Also shown: the data points {x_n}_{n=1}^N and the Voronoi cells (dotted black lines). (b) An illustration of how the update step works. The position of each knot γ_k affects the overall objective value via (1) the average distance from γ_k to the data points x_n ∈ I_k, and (2) the distance from γ_k to the adjacent knots γ_{k−1}, γ_{k+1}. In that sense, at the update step each γ_k is "pulled" toward the points of I_k (vectors drawn in gray) and toward {γ_{k−1}, γ_{k+1}} (vectors drawn in blue). (c) Moving the knot points according to a weighted sum of the vectors in Fig. 2b. In accordance with the weightings of the terms in
the discretized functional in Step 5, each vector pointing to an x_n ∈ I_k is weighted by 1/N, while the vectors pointing to γ_{k−1}, γ_{k+1} are weighted by β/(2K − 2) (the factor of two arising because each d(γ_k, γ_{k+1}) is split into one vector on γ_k and one on γ_{k+1}). (d) The updated {γ_k}_{k=1}^K with the associated updated Voronoi cells.

Let us make several remarks regarding the details of Algorithm 1.

REMARK 2.1. The choice of initialization of the knots {γ_k}_{k=1}^K matters for the performance of Algorithm 1 in practice, just as for the usual Lloyd's algorithm for k-means clustering. This is because the objective is nonconvex for general data distributions. We recommend a "k-means++" style initialization where the {γ_k}_{k=1}^K are first taken to be K randomly chosen points among the data {x_n}_{n=1}^N, as this initialization has performed well for us in practice.

REMARK 2.2. Step 3 of Algorithm 1, where one solves the traveling salesman problem as a subroutine, is feasible for moderately large K (say K ∼ 1000) thanks to highly efficient TSP solvers which are now available, such as [5].

REMARK 2.3. The optimization subroutine in Step 5 of Algorithm 1 is somewhat complicated by the fact that the data-fitting term is quadratic while the length penalty is linear. Nonetheless, optimization of objectives of this form can be done using modern convex optimization methods (e.g., split Bregman projection); see the discussion in [56, 87].

REMARK 2.4. In the case where K = N, Algorithm 1 is a relaxation of the traveling salesman problem. Indeed, when K = N it is possible to make the data-fitting term (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_{t_k}) equal zero by placing a γ_k directly atop each data point x_n. Then, minimizing the length penalty (subject to enforcing that the data-fitting term is identically zero) amounts to choosing an ordering of the γ_k's of minimal total length according to the metric d.
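The "k-means++" style initialization recommended in Remark 2.1 can be sketched as follows (our own illustration with toy two-cluster data; the function name is ours). The first knot is a uniformly random data point, and each further knot is a data point sampled with probability proportional to its squared distance to the nearest knot chosen so far, which spreads the initial knots across the data.

```python
import numpy as np

def kmeanspp_knots(data, K, rng):
    """'k-means++'-style knot initialization (cf. Remark 2.1): the first
    knot is a uniformly random data point; each subsequent knot is a data
    point sampled with probability proportional to its squared distance
    to the nearest already-chosen knot."""
    idx = [int(rng.integers(len(data)))]
    for _ in range(K - 1):
        chosen = data[idx]
        d2 = ((data[:, None, :] - chosen[None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(rng.choice(len(data), p=d2 / d2.sum())))
    return data[idx]

rng = np.random.default_rng(0)
data = np.r_[rng.normal(0.0, 0.1, (100, 2)), rng.normal(10.0, 0.1, (100, 2))]
knots = kmeanspp_knots(data, K=2, rng=rng)
spread = float(np.abs(knots[0] - knots[1]).max())
# With two well-separated clusters, the two initial knots almost surely
# land in different clusters.
print(spread > 5.0)
```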
We mention in particular the work [73], which studies the properties of this relaxation of the traveling salesman problem in R^2. In Section 3 below we particularize this descent algorithm to the case where (X, d) is a Wasserstein space of probability measures over a compact space; we emphasize that, even in that setting, the algorithm is readily implemented using existing numerics for optimal transport problems.

Next we provide a consistency result for the discretization we have described above. This requires introducing the following objective functional on K-tuples in X:

PPC^K(Λ)(γ_1, …, γ_K) := ∫_X d^2(x, Γ_K) dΛ(x) + β Σ_{k=1}^{K−1} d(γ_k, γ_{k+1}),

where {γ_k}_{k=1}^K represents a discrete curve and d(x, Γ_K) = min_{1≤k≤K} d(x, γ_k). Notice that ∫_X d^2(x, Γ_K) dΛ(x) = (1/N) Σ_{k=1}^K Σ_{n∈I_k} d^2(x_n, γ_k) in the case where Λ is uniform on N atoms {x_n}_{n=1}^N.

By solving the minimization problem PPC^K(Λ), we procure a K-tuple {γ_1, …, γ_K} of "knots" belonging to X, which we think of as a discrete estimator for a minimizer of the continuum problem PPC(Λ). Note, however, that this discrete estimator can also be associated to a piecewise-geodesic interpolation connecting the knots. Therefore, the discrete-to-continuum convergence question we address is actually the following: as we send K → ∞, does a given piecewise geodesic interpolation of the minimizing knots {γ_1, …, γ_K} converge to an AC curve γ_t which then minimizes PPC(Λ)? We address this question in the following proposition, which also allows for simultaneously taking a convergent sequence of data distributions (Λ_N)_{N∈N}.

THEOREM 2.5 (Discrete to continuum). Let X be a compact geodesic metric space. Let K, N ∈ N and Λ_N ∈ P(X). Suppose also that Λ_N ⇀* Λ where Λ ∈ P(X). Let {γ*^K_k}_{k=1}^K be a minimizer of PPC^K(Λ_N), and let γ*^K_t be a constant-speed piecewise geodesic interpolation of {γ*^K_k}_{k=1}^K. Then, up to passage to a subsequence in K, we have that there exists an AC curve γ* such that γ*^K_t → γ*_t uniformly in t, and γ* is a minimizer for PPC(Λ). Moreover, given any uniformly convergent subsequence of geodesic interpolations γ*^K of minimizers {γ*^K_k}_{k=1}^K with limit γ* ∈ AC([0,1]; X), it holds that γ* is a minimizer of PPC(Λ).

We prove this theorem in Appendix C.1. Lastly, we mention that in Appendix D we discuss some ways that the variational problem PPC and its discrete approximation PPC^K can be modified in their details. One variant is an alternate nonlocal discretization scheme similar to Nadaraya-Watson kernel regression, which is also consistent in the same sense as Theorem 2.5 (Proposition D.1). We use this scheme in our experiments in Section 4, as it seems to offer better performance in practice. We also consider a semi-supervised variant of the PPC functional, meaning that the locations of some of the points along γ are fixed in advance.

3. Principal curves in the space of probability measures. The preceding sections have been concerned with principal curves in the rather abstract setting of metric spaces. In this section, we highlight the case where the metric space X is taken to be (P(V), W_2), the 2-Wasserstein space of probability measures over a compact metric space V.

Given probability measures µ, ν on V, we write Π(µ, ν) to denote the space of couplings between µ and ν. For p ∈ [1, ∞) we define the p-Wasserstein metric on P(V) as follows:

W_p(µ, ν) := ( inf_{π∈Π(µ,ν)} ∫_{V×V} d^p(x, y) dπ(x, y) )^{1/p}.
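For intuition, on the real line the infimum over couplings in W_p is attained by the monotone coupling, so W_2 between two empirical measures with equally many atoms reduces to matching sorted samples. A small numpy sketch (our own illustration; the function name is ours) checks this on the translation identity W_2(µ, µ(· − c)) = |c|:

```python
import numpy as np

def w2_empirical_1d(xs, ys):
    """W2 between two empirical measures on R with the same number of
    atoms: in one dimension the optimal coupling is monotone, so it
    matches sorted samples to sorted samples."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 1000)
# Translating a measure by c moves it by exactly c in W2.
print(round(w2_empirical_1d(xs, xs + 3.0), 6))  # 3.0
```

In higher dimensions no such closed form exists and one solves a transport problem numerically, e.g. with a dedicated optimal transport library.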
The metric W_p metrizes weak* convergence of probability measures on V, and (P(V), W_p) is a compact metric space [3]. In this article we only use the W_2 or W_1 metric specifically. We can make sense of the length of a curve of probability measures using this metric: in particular, for any absolutely continuous (AC) curve γ_(·): [0,1] → P(V),

Length_{W_2}(γ) := ∫_0^1 |γ̇_t| dt,

where |γ̇_t| is the metric speed of γ at time t:

|γ̇_t| = lim_{s→t} W_2(γ_s, γ_t) / |s − t|.

In the case where (V, d) is a compact geodesic metric space, then so too is (P(V), W_2) [3, Theorem 2.10]. In particular, if V is a compact convex domain in R^d then (P(V), W_2) is a compact geodesic metric space.

Thus far, we have defined principal curves in terms of a probability measure Λ on the metric space X. Putting X = P(V) means that now Λ ∈ P(P(V)); in other words, Λ is a probability measure over probability measures. In situations where we need to consider convergence of a sequence of Λ's, we still make use of weak* convergence, but let us state precisely what exactly this means here. To emphasize, in this setting a sequence Λ_n in P(P(V)) converges in the weak* sense to Λ ∈ P(P(V)) if and only if, for every "test function" φ: P(V) → R,

∫_{P(V)} φ dΛ_n → ∫_{P(V)} φ dΛ as n → ∞.
Here the test function φ is any continuous function from P(V) to R, where P(V) is itself equipped with the weak* topology. We also note that, by applying Prokhorov's theorem twice, P(P(V)) is compact when equipped with the specific weak* topology we have just described.

REMARK 3.1. The class of AC curves for the metric W_2 is relevant for our intended applications because it includes "naturally arising" time-dependent probability measures such as the curve of marginals for certain drift-diffusion processes. To wit, for the SDE dX_t = −∇V(X_t) dt + √2 dW_t (see [80] for how this SDE arises in models in developmental biology), the distribution ρ_t of X_t at time t satisfies the Fokker-Planck equation

∂_t ρ_t = ∇·(ρ_t ∇V) + Δρ_t.

The solution ρ_t to this equation can be identified as the gradient flow of the relative entropy functional H(· | e^{−V} dx) with respect to the metric W_2 [46]. We do not elaborate on the precise sense of "gradient flow with respect to a given metric" that is meant here, but see [3, 4, 78] for details. However, we note that gradient flows in this sense are automatically 1/2-Hölder continuous [78], but need not be Lipschitz. In particular, it holds that the curve of marginals for this drift-diffusion SDE belongs to the set of AC curves for the metric W_2.

Given a finite family {µ_k}_{k=1}^K of probability measures on V and nonnegative weights {λ_k}_{k=1}^K summing to 1, a 2-Wasserstein barycenter is a minimizer of the variational problem

min_{ν∈P(V)} Σ_{k=1}^K λ_k W_2^2(µ_k, ν).

Indeed, this is simply a particular instance of the notion of barycenter on a general metric space. Wasserstein barycenters were initially studied in [1], which established existence of minimizers and provided sufficient conditions for the Wasserstein barycenter to be unique.
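In one dimension the W_2 barycenter has a simple description: it is the measure whose quantile function is the weighted average of the input quantile functions. For empirical measures with equally many atoms this amounts to averaging sorted samples, which the following numpy sketch (our own illustration, with toy Gaussian samples) demonstrates:

```python
import numpy as np

def w2_barycenter_1d(samples_list, weights):
    """W2 barycenter of 1d empirical measures with equally many atoms:
    in one dimension the barycenter averages the quantile functions,
    i.e. it averages the sorted samples with the given weights."""
    sorted_samples = np.stack([np.sort(s) for s in samples_list])
    return np.average(sorted_samples, axis=0, weights=weights)

rng = np.random.default_rng(0)
mu0 = rng.normal(-2.0, 1.0, 500)
mu1 = rng.normal(+2.0, 1.0, 500)
bary = w2_barycenter_1d([mu0, mu1], weights=[0.5, 0.5])
print(round(float(bary.mean()), 1))  # close to 0, halfway between the inputs
```

Beyond one dimension no such closed form is available, and one uses the numerical barycenter solvers referenced below.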
One can extend the notion of Wasserstein barycenter by replacing the finite set of weights {λ_k}_{k=1}^K with a probability measure Λ ∈ P(P(V)), in which case a Wasserstein barycenter for Λ is a minimizer of the variational problem

min_{ν∈P(V)} ∫_{P(V)} W_2^2(µ, ν) dΛ(µ).

For this continuum notion of Wasserstein barycenter, existence and conditions for uniqueness were established in [55]. Finally, we note that efficient numerics for Wasserstein barycenters are well established. We refer the reader to [25] for a now-classical approach based on entropic regularization, and to [22] for an up-to-date comparison of existing methods. More generally, the textbook [71] offers a thorough explanation of computational aspects of optimal transport writ large.

In this setting, the PPC objective reads as follows:

PPC(Λ)(γ) = ∫_{P(V)} W_2^2(µ, Γ) dΛ(µ) + β Length(γ).

Here Λ ∈ P(P(V)), γ ∈ AC([0,1]; P(V)), and just as before Γ is the range of γ in P(V); P(V) is equipped with the W_2 metric, so that Length(γ) is defined with respect to W_2. Likewise, the PPC^K objective (which discretizes the curve γ) reads as follows, where γ_1, …, γ_K ∈ P(V):

PPC^K(Λ)(γ_1, …, γ_K) = ∫_{P(V)} W_2^2(µ, Γ_K) dΛ(µ) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1}),

where Γ_K = {γ_1, …, γ_K} and hence W_2^2(µ, Γ_K) = min_{1≤k≤K} W_2^2(µ, γ_k). In the particular case where Λ_N = (1/N) Σ_{n=1}^N δ_{µ_n} (that is, Λ is supported on finitely many atoms in P(V) and those atoms have equal weight) we have

PPC^K(Λ_N)(γ_1, …, γ_K) = (1/N) Σ_{n=1}^N W_2^2(µ_n, Γ_K) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1})
                        = (1/N) Σ_{k=1}^K Σ_{n∈I_k} W_2^2(µ_n, γ_k) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1}),

where as before {I_k} is a Voronoi partition of {µ_n}_{n=1}^N with respect to
{γ_k}_{k=1}^K; in other words, the I_k's are disjoint and every µ_n ∈ I_k satisfies γ_k ∈ argmin_{1≤j≤K} W_2^2(µ_n, γ_j). Lastly, the nonlocal discrete objective PPC^K_w(Λ_N) introduced in Appendix D also makes sense in this setting, and is defined in an identical manner to PPC^K(Λ_N).

For the sake of motivation, we give two examples where a distribution Λ on the space of distributions P(V) arises naturally, and for which the task of estimating principal curves makes sense.

EXAMPLE 3.2. Suppose we observe a continuously varying distribution ρ_t which is supported on V. Consider the map ρ_(·): [0,1] → P(V); t ↦ ρ_t. Then let Λ denote the pushforward of the Lebesgue measure on [0,1] via ρ_(·). This measure Λ encodes for us the "1d volume measure" of the curve ρ_t; phrased differently, sampling from Λ corresponds to first drawing i.i.d. time points t_1, t_2, …, t_N and then for each t_n observing the distribution ρ_{t_n}. Note however that Λ does not encode time labels; so the temporal ordering of the distributions ρ_{t_n} is unknown even with perfect knowledge of Λ.

EXAMPLE 3.3. Continuing the previous example, suppose at each time t_n we make noisy observations of ρ_{t_n}: namely, consider a random variable Y_{t_n} = X_{t_n} + E in U ⊂⊂ R^d, where X_{t_n} ∼ ρ_{t_n} and E is a compactly supported noise variable with distribution L(E) (which is independent at each time). Then at time t_n our observations of Y_{t_n} are distributed according to ρ_{t_n} ∗ L(E). Let V be a compact domain containing the support of ρ_{t_n} ∗ L(E) for all t. Then Λ ∈ P(P(V)) corresponds to the pushforward of the Lebesgue measure on [0,1] via ρ_(·) ∗ L(E). Crucially, in this case Λ is still concentrated on a curve in P(P(V)), and so this situation (with additive measurement noise) presents no technical distinction from the previous example.

By contrast, suppose that Y_{t_n} = X_{t_n} + E_ω, where E_ω is a random variable that is itself independently randomly chosen, according to an index ω ∈ Ω which is drawn according to some probability distribution P.
(For instance, consider the case where the variance of the measurement noise is in some sense changing randomly between observations.) Then Λ is the pushforward of Leb_{[0,1]} × P under the map (t, ω) ↦ ρ_t ∗ L(E_ω). In this case Λ is not concentrated on a curve in P(P(V)), but rather is "concentrated near a curve" provided that L(E_ω) does not vary too much (in the W_2 sense) with ω.

In this setting, our results for (compact, geodesic) metric spaces particularize as follows: minimizers for PPC(Λ) exist for any Λ ∈ P(P(V)); minimizers are stable with respect to weak* perturbations of Λ; and, up to a subsequence, minimizers for PPC^K(Λ_N) converge to minimizers of PPC(Λ) as K → ∞ and Λ_N ⇀* Λ (and similarly for the alternate objective PPC^K_w(Λ_N) from Appendix D). In particular, we have that Λ_N ⇀* Λ with probability 1 when (Λ_N)_{N∈N} is a sequence of empirical measures for Λ, meaning that Λ_N = (1/N) Σ_{n=1}^N δ_{µ_n} and the µ_n's are random probability measures which are i.i.d. with distribution Λ.

Statistically, the interpretation of principal curves in the space (P(V), W_2) is as follows. We have a family of probability measures {µ} with distribution Λ ∈ P(P(V)), which we believe to be concentrated near an AC curve ρ_t in the space (P(V), W_2). Can we infer ρ_t from partial observations of the distribution Λ? However, in this space we must be careful about what it means to "make observations" of
Λ. One option is that we draw i.i.d. µ_n ∼ Λ and get to "observe" each probability measure µ_n ∈ P(V) exactly. However, this observation scheme is rather idealized for the applications we wish to consider. Rather, we wish to consider the following observation scheme:

Λ ∈ P(P(V)),    µ_1, …, µ_N ∼ Λ,    X_n^1, …, X_n^{M_n} ∼ µ_n.

This scheme has the following interpretation: we get N "batches" of data points in V, and each batch of data consists of M_n i.i.d. draws from a randomly chosen probability measure µ_n, which in turn is randomly distributed according to Λ. For simplicity we generally take M_n = M, i.e., every batch has the same number of draws. We then "store" the data as follows: for each batch of data, we have the empirical measure µ̂_n = (1/M) Σ_{m=1}^M δ_{X_n^m}, and then, corresponding to the collection of batches of data, we form the measure

Λ̂_{N,M} := (1/N) Σ_{n=1}^N δ_{µ̂_n}.

Finally, the functional PPC^K(Λ̂_{N,M}) now encodes the discretized principal curve problem for the data which has been observed according to this scheme; in other words, we minimize the objective

PPC^K(Λ̂_{N,M})(γ_1, …, γ_K) = (1/N) Σ_{k=1}^K Σ_{n∈I_k} W_2^2(µ̂_n, γ_k) + β Σ_{k=1}^{K−1} W_2(γ_k, γ_{k+1}).

The nonlocal discrete functional PPC^K_w from Appendix D is defined similarly. We emphasize that the optimization problem of minimizing PPC^K(Λ̂_{N,M}) is amenable to off-the-shelf numerical methods for optimal transport; see also Section 4.2 below for concrete examples of numerical experiments with measure-valued data of the form Λ̂_{N,M}.

In the remainder of this section, we address the following question: do minimizers of PPC^K(Λ̂_{N,M}) converge to minimizers of PPC(Λ) as K → ∞, N → ∞, and M → ∞, where N is the number of batches and M is the number of datapoints per batch? By Theorem 2.5, it suffices to show that the "doubly empirical measure" Λ̂_{N,M} converges to Λ in the weak* sense with probability 1.
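The batch observation scheme can be simulated directly. The sketch below is our own toy instance (the choice Λ = law of Normal(θ, 1) with θ ~ Uniform(0, 1), and the function name, are ours): N batches are drawn, each batch being M i.i.d. samples from a randomly drawn µ_n, and the resulting arrays are the atoms of the empirical measures µ̂_n whose uniform mixture is Λ̂_{N,M}.

```python
import numpy as np

def sample_doubly_empirical(N, M, rng):
    """Simulate the batch observation scheme with a toy Lambda: each
    mu_n is Normal(theta_n, 1) with theta_n ~ Uniform(0, 1), and batch n
    consists of M i.i.d. draws from mu_n. The returned arrays are the
    atoms of the empirical measures mu_hat_n; their uniform mixture is
    the doubly empirical measure Lambda_hat_{N,M}."""
    thetas = rng.uniform(0.0, 1.0, N)
    batches = [rng.normal(th, 1.0, M) for th in thetas]
    return batches, thetas

rng = np.random.default_rng(0)
batches, thetas = sample_doubly_empirical(N=50, M=2000, rng=rng)
# Each mu_hat_n concentrates near its latent mu_n as M grows.
err = max(abs(b.mean() - th) for b, th in zip(batches, thetas))
print(err < 0.1)
```

The convergence question addressed next is exactly whether this two-level sampling, with both N and M growing, recovers Λ in the weak* sense.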
Surprisingly, we were unable to find any investigation of this convergence question in the literature, with one exception: as our work was nearing completion, we learned of the recent article [16], which considers closely related questions for an application to nonparametric Bayesian inference. (Their results are more quantitative than ours but require stronger assumptions on the base space, and so neither our nor their results imply the other.) Accordingly, we provide an argument that $\hat\Lambda_{N,M} \rightharpoonup^* \Lambda$ under mild conditions.
In order to state this convergence result, we need to consider the 1-Wasserstein metric on $\mathcal{P}(\mathcal{P}(V))$. Recall that the 1-Wasserstein metric makes sense atop any complete separable metric space. Thus, let $d_{\mathcal{P}(V)}$ be a complete separable metric on $\mathcal{P}(V)$; given this choice of metric on $\mathcal{P}(V)$, for any $\Lambda, \Xi \in \mathcal{P}(\mathcal{P}(V))$ we can define
\[
\mathbb{W}_1(\Lambda, \Xi) := \inf_{\pi \in \Pi(\Lambda, \Xi)} \int_{\mathcal{P}(V) \times \mathcal{P}(V)} d_{\mathcal{P}(V)}(\mu, \mu') \, d\pi(\mu, \mu').
\]
We use the notation $\mathbb{W}_1$ just for disambiguation with $W_1$ as defined on $\mathcal{P}(V)$ (rather than $\mathcal{P}(\mathcal{P}(V))$). We defer the proof of the following theorem to Appendix C.2.
THEOREM 3.1 (Iterated Glivenko-Cantelli theorem). Let $V$ be a compact metric space, let $\Lambda \in \mathcal{P}(\mathcal{P}(V))$, and let $(\Lambda_N)_{N \in \mathbb{N}}$ be a sequence of empirical measures for $\Lambda$. Let $M$ depend on $N$ and assume that $M \to \infty$ as $N \to \infty$. Then, regarding the doubly empirical measure $\hat\Lambda_{N,M}$ for $\Lambda$, we have the following convergence results:
1. There exists a metric $d_{\mathcal{P}(V)}$ which metrizes weak* convergence on $\mathcal{P}(V)$, for which $\mathbb{W}_1(\hat\Lambda_{N,M}, \Lambda)$ converges to $0$ in probability. In particular, there
exists a subsequence of $N$ (and thus of $M$) along which $\hat\Lambda_{N,M} \rightharpoonup^* \Lambda$ almost surely.
2. Assume that $M \ge C (\log N)^q$ for some $C > 0$ and $q > 1$. Then $\hat\Lambda_{N,M} \rightharpoonup^* \Lambda$ almost surely.
REMARK 3.4. The proof of Theorem 3.1 also works in some cases where the number of samples $M_n$ in the empirical measure $\hat\mu_n$ varies with $n$. For example, if we take $M := \min_{1 \le n \le N} M_n$ and assume that $M \to \infty$ as $N \to \infty$, then Theorem 3.1 still holds as stated.
Part (1) of this theorem is enough for our purposes, since our stability results all require the passage to a subsequence regardless. We include part (2) for completeness, since it shows that the need to pass to a subsequence can be replaced with a mild assumption on the dependency of $M$ on $N$. Additionally, we note that, even though part (1) implies that every subsequence of $N$ contains a further subsequence along which $\hat\Lambda_{N,M} \rightharpoonup^* \Lambda$ almost surely, this does not imply that $\hat\Lambda_{N,M} \rightharpoonup^* \Lambda$ almost surely, since almost sure convergence is non-metrizable. This leaves open the possibility that, along some subsequences, $\hat\Lambda_{N,M}$ fails to converge to $\Lambda$ if $M$ grows sub-logarithmically with $N$; we are unable to resolve this question.
It is also possible to establish analogs of Theorem 3.1 (and therefore analogs of Theorem 1.1 via Theorem 2.5) in cases where the empirical data is not observed exactly, but rather we only get partial measurements of each sample in each empirical distribution. For example, in the context of scRNA-seq, the support points of the empirical distributions are also noisy (see Eq (4)). From the noisy empirical distributions, we then form the three-level empirical measure $\hat\Lambda_{N,M,R}$, where $R$ is the number of reads per "fully observed" empirical measure $\hat\mu_n$. For simplicity we only prove the convergence in probability (and therefore subsequential weak* almost sure convergence) of $\hat\Lambda_{N,M,R}$ to $\Lambda$; we note, however, that, similarly to the previous result, it is possible to prove a.s.
weak* convergence under the assumption that $M$ grows fast enough relative to $N$ and also that $R$ grows fast enough relative to $M$, using the concentration inequalities established in [54]. In order to make use of these concentration inequalities, we also take as given Assumption 2.3 from the article [54]; this technical assumption, which we do not precisely restate here, amounts to requiring that the distribution of reads over the support points in a given empirical measure is approximately uniform.
PROPOSITION 3.2. Let $V$ be the unit simplex in $\mathbb{R}^d$, let $\Lambda \in \mathcal{P}(\mathcal{P}(V))$, and let $(\Lambda_N)_{N \in \mathbb{N}}$ be a sequence of empirical measures for $\Lambda$. Let $M$ depend on $N$ and assume that $M \to \infty$ as $N \to \infty$. Additionally, let $R$ be the number of reads per empirical measure $\hat\mu_n$, and assume $M/R \to 0$ as $M \to \infty$, and also that the distribution of reads satisfies [54, Assumption 2.3]. Then, regarding the three-level empirical measure $\hat\Lambda_{N,M,R}$ for $\Lambda$, we have the following convergence result. Setting $d_{\mathcal{P}(V)} = W_1$, it holds that $\mathbb{W}_1(\hat\Lambda_{N,M,R}, \Lambda)$ converges to $0$ in probability. In particular, there exists a subsequence of $N$ (and thus of $M$ and $R$) along which $\hat\Lambda_{N,M,R} \rightharpoonup^* \Lambda$ almost surely.
A proof can be found in Section C.2.
4. Application to the seriation problem. In this section, we return to the problem
outlined in the introduction (Figure 1). Namely, given a data distribution which comes from observing a ground truth curve with unknown time labels, we show how to infer both the ground truth curve and the ordering of the points along said curve, and so obtain a principal-curves-based seriation algorithm.
In Section 4.1 we describe how principal curves provide consistent estimators for the ground truth curve as well as the ordering of points along said curve. We first state our consistency results for the general case of metric-space-valued data and then explain how our results cover the case of data taking values in the Wasserstein space of probability measures. In Section 4.2, we then provide numerical experiments where principal curves for measure-valued data are used for seriation.
4.1. Consistency theory. While in previous sections we have considered arbitrary data distributions $\Lambda$, here we focus on the case where $\Lambda$ is the "1d volume measure" associated to some "ground truth" curve $\rho \in AC([0,1]; X)$, meaning that $\Lambda = (\rho(\cdot))_\# \mathrm{Leb}_{[0,1]}$. As a step towards our main consistency results below, we first state the following proposition, which asserts that principal curves for $\Lambda$ are concentrated around, and no longer than, $\rho$ itself. See Appendix C.3 for the proof.
PROPOSITION 4.1. (i) Let $X$ be a compact metric space. Let $\rho \in AC([0,1]; X)$ and $\Lambda = \rho(\cdot)_\# \mathrm{Leb}_{[0,1]}$. Let $\gamma^{*\beta} \in AC([0,1]; X)$ be a minimizer for $\mathrm{PPC}(\Lambda; \beta)$. Then,
\[
\mathrm{Length}(\gamma^{*\beta}) \le \mathrm{Length}(\rho).
\]
(ii) (Concentration estimate) Let $\alpha > 0$. Then, $\Gamma^{*\beta}$ (the graph of $\gamma^{*\beta}$) is concentrated around $\rho$ in the following sense:
\[
\mathrm{Leb}_{[0,1]} \left\{ t : d^2(\rho_t, \Gamma^{*\beta}) \ge \alpha \right\} \le \frac{\beta \, \mathrm{Length}(\rho)}{\alpha}.
\]
Intuitively, one might expect that we can send $\beta \to 0$ in the statement of Proposition 4.1, and show that in the limit, the principal curves $\gamma^{*\beta}$ converge to the ground truth curve $\rho$. The following theorem tells us that this is indeed the case.
THEOREM 4.2 (consistency up to time-reparametrization). Let $X$ be a compact metric space. Let $\rho \in AC([0,1]; X)$ and $\Lambda = \rho(\cdot)_\# \mathrm{Leb}_{[0,1]} \in \mathcal{P}(X)$.
Suppose that $t \mapsto \rho_t$ is injective. Let $\beta \to 0$, and let $\gamma^{*\beta} \in AC([0,1]; X)$ be a minimizer for $\mathrm{PPC}(\Lambda; \beta)$. Then, up to passage to a subsequence of $\beta$, there exists a $\gamma^* \in AC([0,1]; X)$ such that $\gamma^{*\beta}_t \to \gamma^*_t$ uniformly in $t$, and $\gamma^*$ is a reparametrization of $\rho$ which is either order-preserving or order-reversing. Moreover, any limit of a convergent subsequence of $\gamma^{*\beta}$'s satisfies this property.
The proof can be found in Appendix C.3. (We remark that the proof is not simply a direct consequence of Proposition 4.1, as the limiting objective $\mathrm{PPC}(\Lambda; 0)$ is poorly behaved.)
The preceding result establishes that the ground truth $\rho$ and minimizer $\gamma^*$ have the same graphs, and that $\gamma^*$ is either an order-preserving or order-reversing reparametrization of $\rho$. If we are purely interested in seriation, it remains only to check whether $\gamma^*$ has the correct, versus backwards, ordering compared to $\rho$. Our variational framework cannot do this directly. We note, however, that it suffices to ask an oracle whether, according to the ground truth, $\rho_0$ comes before $\rho_1$, or vice versa. In our domain application of scRNA data analysis, we expect that the true ordering of $\rho_0$ versus $\rho_1$ is typically available as expert knowledge.
By combining Theorem 4.2 with our earlier discrete-to-continuum result from Theorem 2.5, we obtain the following corollary.
COROLLARY
4.3. Let $X$ be a compact geodesic metric space. Let $\rho \in AC([0,1]; X)$ and $\Lambda = \rho(\cdot)_\# \mathrm{Leb}_{[0,1]}$. Suppose that $t \mapsto \rho_t$ is injective. Let $(\Lambda_N)_{N \in \mathbb{N}}$ be a sequence in $\mathcal{P}(X)$ converging weak*ly to $\Lambda$. Let $\{\gamma^{*\beta,K}_k\}_{k=1}^K$ be a minimizer of $\mathrm{PPC}^K(\Lambda_N; \beta)$, and let $\gamma^{*\beta,K}_t$ be a constant-speed piecewise geodesic interpolation of $\{\gamma^{*\beta,K}_k\}_{k=1}^K$. Then: up to passage to a subsequence twice, as we send $N, K \to \infty$ followed by $\beta \to 0$, there exists a $\gamma^* \in AC([0,1]; X)$ such that $\gamma^{*\beta,K}_t \to \gamma^*_t$ uniformly in $t$, and $\gamma^*$ is a reparametrization of $\rho$ which is either order-preserving or order-reversing.
Alternatively, one can replace minimizers of $\mathrm{PPC}^K(\Lambda_N; \beta)$ with minimizers of the nonlocal discrete objective $\mathrm{PPC}^K_w(\Lambda_N; \beta)$ in the statement of the corollary above, and invoke Proposition D.1 instead of Theorem 2.5. Likewise, the two passages to a subsequence can be dropped if we consider the convergence of $\gamma^{*\beta,K}_t$ to $\gamma^*_t$ in a slightly weaker topology, such as the Hausdorff topology on the ranges of the curves $\gamma^{*\beta,K}_t$.
By applying this corollary in the case where $X = \mathcal{P}(V)$, $V$ is a compact convex domain in $\mathbb{R}^d$, $d = W_2$, and $(\Lambda_N)_{N \in \mathbb{N}}$ is a sequence of doubly empirical measures for $\Lambda$, we recover (thanks to Theorem 3.1) a precise statement of Theorem 1.1 which was advertised in the introduction. Likewise, employing Proposition 3.2 in place of Theorem 3.1 above gives us the version of Theorem 1.1 where the empirical data is measured imprecisely with finite reads.
Ordering estimation. The results above concern consistency at the level of estimation of the curve $\rho$; we now explain how to use principal curves to consistently assign an ordering to points along $\rho$. Suppose that $\Lambda_N$ is discrete and so supported on atoms $\{x_n\}_{n=1}^N$ belonging to $X$. Let $\gamma^N$ be a principal curve for $\Lambda_N$. Using $\gamma^N$, we can assign projection pseudotimes to the $x_n$'s as follows: define
\[
\hat\tau(x_n) \in \arg\min_{t \in [0,1]} d^2(x_n, \gamma^N_t).
\]
In other words, we assign each data point $x_n$ a time label $\hat\tau(x_n)$ by projecting $x_n$ onto $\gamma^N$ and using the time argument of this projection.
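The projection-pseudotime assignment can be sketched in a few lines. This is a minimal illustration under assumptions of our own: a Euclidean ambient space standing in for the metric $d$ (for measure-valued data one would replace the squared Euclidean distance with $W_2^2$), and a curve already discretized into knots:

```python
import numpy as np

def projection_pseudotimes(points, curve_knots, knot_times):
    """Assign each data point the time label of its nearest point on a
    discretized curve: tau(x) = argmin_t d^2(x, gamma_t). Ties are broken
    by taking the first minimizer, matching the arbitrary choice in the text."""
    taus = []
    for x in points:
        d2 = np.sum((curve_knots - x) ** 2, axis=1)  # squared distances to knots
        taus.append(knot_times[int(np.argmin(d2))])
    return np.array(taus)

# Toy example in R^2: data near the segment from (0, 0) to (1, 0).
ts = np.linspace(0, 1, 50)
knots = np.stack([ts, np.zeros_like(ts)], axis=1)
data = np.array([[0.1, 0.05], [0.5, -0.02], [0.9, 0.01]])
tau = projection_pseudotimes(data, knots, ts)
# The pseudotimes recover the ordering of the points along the curve.
```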
Note that in general this projection is not unique, in which case we simply pick one of the projections arbitrarily. In practice, $\hat\tau(x_n)$ will be approximated by instead projecting onto a discrete approximation of $\gamma^N$, e.g. a minimizer for $\mathrm{PPC}^K(\Lambda_N)$.
Now, consider the limiting case where $\Lambda_N \rightharpoonup^* \Lambda$ and $\Lambda$ is supported on a ground truth curve $\rho_t$. In this case, Theorem 4.2 tells us that by then sending $\beta \to 0$, we extract a principal curve with the same range as $\rho_t$. In this limiting case, the projection distance from each point in the support of $\Lambda$ to the limiting principal curve $\gamma$ is, of course, zero. In other words, in the limit our pseudotime assignment simply reads off the time labels from $\gamma$, which is a monotone or reverse-monotone time-reparametrization of the ground truth $\rho$.
We also have the following consistency result for the ordering obtained from projection pseudotimes, which allows for $\beta$ to be small but nonzero.
PROPOSITION 4.4. Under the same assumptions as Theorem 4.2, fix $T$ distinct points $\{\rho_i\}_{i=1}^T$ along $\rho_t$ whose true time labels $\{t_i\}_{i=1}^T$ are unknown. Let $\gamma^{*\beta} \in \arg\min \mathrm{PPC}(\Lambda; \beta)$, and let $\hat\tau(\rho_i) \in \arg\min_{t \in [0,1]} d^2(\rho_i, \gamma^{*\beta}_t)$ be a projection pseudotime for $\rho_i$. Lastly, let $\beta_j \to 0$ be a subsequence of
$\beta$ converging to zero, along which $\gamma^{*\beta}_t$ converges uniformly in $t$. Then, for all sufficiently small $\beta_j > 0$, it holds that either:
1. For all $1 \le i, i' \le T$, $t_i < t_{i'} \iff \hat\tau(\rho_i) < \hat\tau(\rho_{i'})$, or
2. For all $1 \le i, i' \le T$, $t_i < t_{i'} \iff \hat\tau(\rho_i) > \hat\tau(\rho_{i'})$.
In other words, the ordering given by $\hat\tau$ is correct up to total reversal. This result is derived directly from Theorem 4.2; a proof is provided in Appendix C.3.
Time label estimation. Suppose we wish to use the principal curve $\gamma^*$ to infer the correct time labels along $\rho$, rather than merely the ordering. Assume we have already queried an oracle for the labels of the endpoints, so that we can put $\gamma^*_0 = \rho_0$ and $\gamma^*_1 = \rho_1$. By convention, we take $\gamma^*$ to have constant speed, but $\rho$ typically has unknown speed. Nonetheless, for a particular sampling scheme for $\rho$, it is possible to infer the correct time-reparametrization of $\gamma^*$ (i.e. so that $\gamma^*_t = \rho_t$ for all $t$) as follows.
Suppose that our observations of $\rho$ are i.i.d. from $\Lambda$, and hence come in the following form: first we draw $T$ i.i.d. random times uniformly on $[0,1]$; then for each random time $\tau$ we observe $\rho_\tau$, but the time label $\tau$ is unknown. In the limit where $T \to \infty$ then $\beta \to 0$, each $\rho_\tau$ coincides with some $\gamma^*_t$. Define the following estimated ordering on the $\rho$'s:
\[
\rho_\tau \preceq \rho_{\tau'} \iff \rho_\tau = \gamma_t, \; \rho_{\tau'} = \gamma_{t'}, \; \text{and } t \le t'.
\]
In other words, we first endow the $\rho$'s with the ordering inherited from $\gamma^*$. Label the $\rho_\tau$'s according to this ordering; as the $\tau$'s are drawn uniformly, it is consistent to estimate the true times of the $\rho_\tau$'s simply by assigning them time labels which are evenly spaced in $[0,1]$, in order.
4.2. Experiments. In this subsection, we test our approach to principal curve estimation on data drawn from simulated curves in the Wasserstein space of probability measures. Specifically, we consider the data observation scheme outlined in the introduction: given a synthetic curve of probability measures $\rho$, we sample $N$ i.i.d. times $t_i$, and at each time we observe an empirical measure $\hat\rho_{t_i}$ on $M$ datapoints drawn from the true measure $\rho_{t_i}$ at time $t_i$.
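The evenly-spaced time-label estimator described above is easily made concrete. A small sketch, with a hypothetical monotone pseudotime readout standing in for the projection onto $\gamma^*$ (the helper name is ours):

```python
import numpy as np

def evenly_spaced_time_labels(pseudotimes):
    """Given projection pseudotimes for T observations drawn at uniform
    random times, estimate the true times by ranking the observations and
    assigning labels evenly spaced in [0, 1], in that order."""
    order = np.argsort(pseudotimes)
    labels = np.empty(len(pseudotimes))
    labels[order] = np.linspace(0, 1, len(pseudotimes))
    return labels

rng = np.random.default_rng(0)
true_times = rng.uniform(0, 1, size=200)
# Pretend the pseudotimes are a monotone (here: identity-order) readout.
est = evenly_spaced_time_labels(true_times)
# The estimated labels preserve the ordering exactly, and since the true
# times are uniform, the evenly spaced labels approximate them well.
```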
We then estimate the curve $\rho$ via the doubly empirical measure $\hat\Lambda_{N,M} = \frac{1}{N} \sum_{i=1}^N \delta_{\hat\rho_{t_i}}$, which encodes the observed data. In practice, we compute an approximate principal curve $\hat\gamma$ for $\hat\Lambda_{N,M}$ by running Algorithm 1, but with $\mathrm{PPC}^K_w(\hat\Lambda_{N,M}; \beta)$ with fixed endpoints as the objective (see Algorithm 3 in Appendix D for an explanation of this variant objective), and take $\hat\gamma$ to be a piecewise-geodesic interpolation of the output. How "fixed endpoints" are incorporated into the objective is detailed in Appendix D, but this means, in particular, that we take as known which batch among the $\hat\rho_{t_i}$'s is the start and which is the end. This allows us to identify whether the forward or backward ordering along the curve is the correct one, and represents a form of expert knowledge in the biological domain application context.
To evaluate the performance of our approach, we view it purely as a seriation method, and compare to two existing seriation methods that can be used in $W_2$ space: the Traveling Salesman Problem (TSP) approach to seriation [59], and spectral seriation [6, 33, 68], which we outline in Appendix E. All approaches take as input the matrix $W =$
$[W_2(\hat\rho_{t_i}, \hat\rho_{t_j})]_{i,j}$ of pairwise $W_2$ distances between the empirical measures $\hat\rho_{t_i}$.
To quantify the performance of each method, we use a loss on the space of permutations of time labels. Specifically, we consider the error metric
\[
E(\hat\tau) = \frac{2}{T(T-1)} \sum_{i < j} \mathbf{1}\big(\hat\tau(\hat\rho_{t_i}) > \hat\tau(\hat\rho_{t_j})\big),
\]
which computes the percentage of pairs of non-identical time-indices whose order is reversed by the given pseudotime $\hat\tau$ for the set of $\rho_{t_i}$'s. This metric is a normalized version of Kendall's tau distance on the space of permutations; see [30] for a discussion of this metric and its relation to other metrics on permutations.
Fig 3: A simple curve of probability measures with 250 time points that undergoes a branching event (A) is fitted with a principal curve, using kernel bandwidth $h = 0.037$ and length penalty $\beta = 0.17$ (B). The color bar to the right of panel (B) indicates the normalized time parameter for the underlying dataset (A) and the fitted principal curve (B), respectively. A performance comparison to other methods can be seen in (C), which indicates the seriation error for various choices of the number of time points, with a fixed budget of 10000 total atoms measured. For spectral seriation, a kernel bandwidth of $\sigma = 0.5$ was used.
Test Dataset 1. Our first test dataset mimics a cellular differentiation event with a branching Gaussian mixture, obtained from a simple 2-D branching curve convolved with a Gaussian. The curve $\rho_t$ is obtained by convolving a curve of discrete measures $\mu_t$ with a Gaussian, $\rho_t = \mathcal{N}(0, \sigma^2) * \mu_t$, where $\sigma = 0.1$ and $\mu_t$ is defined as follows:
\[
\mu_t =
\begin{cases}
\delta_{(0,\,1-t)} & 0 \le t \le 1 \\
\frac{1}{2} \delta_{\frac{t-1}{\sqrt{2}}(-1,-1)} + \frac{1}{2} \delta_{\frac{t-1}{\sqrt{2}}(1,-1)} & 1 \le t \le 1 + \sqrt{2}.
\end{cases}
\]
To produce the dataset, we uniformly sample times $\{t_i \mid t_i \in [0, 1 + \sqrt{2}]\}$; then for each $t_i$, we draw i.i.d. samples from $\rho_{t_i}$. The resulting dataset can be seen in Figure 3, panel (A).
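The error metric $E(\hat\tau)$ above can be computed directly by counting inverted pairs. A small sketch (the helper name `seriation_error` is ours; the brute-force $O(T^2)$ pair count suffices for modest $T$):

```python
import numpy as np

def seriation_error(tau_hat, true_times):
    """Normalized Kendall tau distance: the fraction of pairs i < j
    (in true time order) whose order is reversed by the pseudotime tau_hat."""
    T = len(tau_hat)
    # Sort the pseudotimes by the true times, then count inversions.
    tau = np.asarray(tau_hat)[np.argsort(true_times)]
    inversions = sum(tau[i] > tau[j] for i in range(T) for j in range(i + 1, T))
    return 2.0 * inversions / (T * (T - 1))

true_t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
perfect = true_t.copy()     # correct ordering: no inverted pairs
reversed_ = -true_t         # totally reversed ordering: every pair inverted
```

A perfect ordering yields error 0, and a totally reversed one yields error 1, matching the "correct up to total reversal" dichotomy of Proposition 4.4.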
A parameter sweep was first done on training data to find a performant combination of the length penalty $\beta$ and kernel bandwidth parameter $h$; the details can be found in Appendix E. On a test dataset with 250 time points and 10,000 atoms in total, we achieved a mean Kendall tau error of 0.0157 over 5 trials. The estimated principal curve for this dataset can be seen in Figure 3, panel (B).
We further generate a family of test datasets from the same model, in the "fixed budget" situation where the number of timepoints varies but the total number of data points is fixed. This represents the scenario where there is a budget to sequence a fixed number of cells, and it is desired to find the largest number of time points that can be useful. The results of these trials, along with comparisons with alternative seriation methods, can be seen in Figure 3, panel (C). Our principal curves seriation method performs well across many regimes, but is marginally less performant than spectral seriation.
REMARK 4.1. Existing pseudotime inference methods can assign orderings to datasets with branching in the feature domain [76], but the branching structure leads to separate, incomparable pseudotimes along each branch. If we instead view the
data in the space of probability measures over features, as we do here, we can infer a single curve (and ordering) in the Wasserstein space for the whole dataset. For branching data, our approach therefore has the benefit of assigning an ordering which is comparable across branches.
Test Dataset 2. Next, we present a more complex branching dataset, with an area of rapid direction change that is poorly sampled. This curve $\rho_t$ is again obtained by convolving a curve of discrete measures $\mu_t$ with a Gaussian, $\rho_t = \mathcal{N}(0, \sigma^2) * \mu_t$, where $\sigma = 0.1$, but here $\mu_t$ is defined as follows:
\[
\mu_t =
\begin{cases}
\delta_{(0,\,1-t)} & 0 \le t \le 1 \\
\delta_{\frac{t-1}{0.1}(1.5,\,0)} & 1 \le t \le 1.1 \\
\frac{1}{2} \delta_{(t-1.1)(-1,1) + (1.5,0)} + \frac{1}{2} \delta_{(t-1.1)(1,1) + (1.5,0)} & 1.1 \le t \le 2.1.
\end{cases}
\]
In the biological context, this dataset represents a period of rapid developmental change followed by a branching event. In time series experiments, time points may not always be captured at a fine enough time resolution to characterize such changes. This leads to segments of the developmental curve having low numbers of observed time points, and thus breaks in continuity [66].
Fig 4: A curve in the space of probability measures with a non-trivial change in direction, and which undergoes a branching event (A), is fitted with a principal curve with $h = 0.01$ and $\beta = 0.037$ (B). The curve contains 250 time points. The principal curve seriation method can be seen to obtain the best performance for a number of regimes with varying time points and 10000 total atoms (C). For spectral seriation, a kernel bandwidth $\sigma = 0.315$ was used.
A test dataset drawn from this model can be seen in Figure 4, panel (A), with a principal curve fitted to this dataset overlaid in panel (B); as with Test Dataset 1, a parameter selection was first done using training data from the same model, see Appendix E. Note that the estimated principal curve $\hat\gamma$ does not put any knots in the low-density patch; in this region $\hat\gamma$ is therefore a piecewise geodesic.
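Sampling from $\rho_t$ in this model amounts to picking a branch according to the weights of $\mu_t$ and adding Gaussian noise. Here is a sketch under our reading of the piecewise definition above (the function name and the sampled time grid are illustrative, not from the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_rho_t(t, m, sigma=0.1):
    """Draw m points from rho_t = N(0, sigma^2 I) * mu_t for the second
    test model: a fast middle segment followed by a symmetric branching."""
    if t <= 1.0:
        centers = np.array([[0.0, 1.0 - t]])
        weights = np.array([1.0])
    elif t <= 1.1:
        centers = np.array([[(t - 1.0) / 0.1 * 1.5, 0.0]])
        weights = np.array([1.0])
    else:
        s = t - 1.1  # branch parameter: (t-1.1)*(-1,1)+(1.5,0), (t-1.1)*(1,1)+(1.5,0)
        centers = np.array([[1.5 - s, s], [1.5 + s, s]])
        weights = np.array([0.5, 0.5])
    idx = rng.choice(len(centers), size=m, p=weights)
    return centers[idx] + sigma * rng.normal(size=(m, 2))

# One empirical measure per sampled time, as in the observation scheme.
times = rng.uniform(0, 2.1, size=10)
batches = [sample_rho_t(t, m=40) for t in times]
```

Note that the three pieces agree at the breakpoints ($\mu_1 = \delta_{(0,0)}$ and $\mu_{1.1} = \delta_{(1.5,0)}$ from either side), so the underlying curve is continuous.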
In particular, the estimated principal curve is still able to capture the global structure of the dataset, despite the patch of missing/low-resolution data. This ability is reflected in the performance against competing methods for this dataset (see Figure 4, panel (C)).
For this dataset, the performance of spectral seriation is particularly poor. Indeed, we cannot expect spectral methods to be robust to a loss of data which results in changes to the large-scale geometry of the dataset, as the spectrum of the graph or manifold Laplacian is sensitive to changes in global geometric properties like the number of connected components [24, 69, 81]. On the other hand, as discussed in Subsection 2.1, our method includes a relaxation of the traveling salesman problem, and so with proper hyperparameter selection one should expect the performance of our method to be similar to or better than that of TSP. Indeed, this can be seen in both of our experiments.
5. Discussion. We have introduced principal curves in general metric spaces, which we envision will enable high-frequency data collection for single-cell time-courses. We
propose an estimator and prove it is consistent for recovering a curve of probability measures in Wasserstein space from empirical samples. This can be interpreted as a one-dimensional manifold learning problem, and is related to seriation [32, 83] and trajectory inference [60]. Ordering scRNA-seq datasets along a principal developmental curve can be thought of as an unsupervised version of the trajectory inference (TI) problem [23, 60, 79]. In TI, one observes empirical marginals $\hat\rho_{t_i}$ from independent snapshots of a developmental stochastic process $\rho$, and one wishes to recover the law on paths for $\rho$.
In future work, a combined statistical analysis of our principal curve problem and trajectory inference could shed light on optimal experimental design. We anticipate there would be a statistical trade-off between the sub-problems of principal curves (where more samples per time-point are ideal) and trajectory inference (where more time-points are ideal). We intend to investigate these considerations as part of future work involving real biological data.
Finally, we conclude with a summary of several minor variants of the principal curve problem, including loops, multiple curves, and spline interpolation.
5.1. Variants of principal curves. In this section we consider several variants on the basic inference problem discussed in the bulk of the article.
Loops. In the case where $\Lambda$ is believed to be concentrated near a loop, i.e. a closed curve, it makes sense to pose the principal curve problem over the space of closed curves; that is, we minimize over the set $\{\gamma \in AC([0,1]; X) : \gamma_0 = \gamma_1\}$. While this situation is atypical for the case where $\Lambda$ comes from the marginals of a drift-diffusion SDE, say, there are circumstances in cellular dynamics/single-cell transcriptomics where the cells follow an approximately periodic dynamics (in particular the "cell cycle" [61]), and so unsupervised inference of a closed curve is required.
Our variational framework can easily accommodate this case: similarly to the case of fixed endpoints, the set $\{\gamma \in AC([0,1]; X) : \gamma_0 = \gamma_1\}$ is closed inside $AC([0,1]; X)$. At the continuum level, one obtains the optimization problem
\[
\inf_{\gamma \in AC([0,1]; X)} \left\{ \int_X d^2(x, \Gamma) \, d\Lambda(x) + \beta \, \mathrm{Length}(\gamma) : \gamma_0 = \gamma_1 \right\};
\]
at the discrete level, where one has knots $\{\gamma_k\}_{k=1}^K$ instead of a continuous curve $\gamma$, we simply replace the "discrete length" $\sum_{k=1}^{K-1} d(\gamma_k, \gamma_{k+1})$ with $d(\gamma_K, \gamma_1) + \sum_{k=1}^{K-1} d(\gamma_k, \gamma_{k+1})$, and at each Lloyd iteration we label the knots by computing a Hamiltonian cycle through $\{\gamma_k\}_{k=1}^K$ rather than solving the traveling salesman problem.
Multiple curves. In the Euclidean setting, [56] proposed a modification of the principal curves problem where one optimizes over finite collections of curves rather than a single curve, with a penalty on the number of curves. In other words, one considers the optimization problem
\[
\inf_{K \in \mathbb{N}, \, \{\gamma_k\}_{k=1}^K \in (AC([0,1]; X))^K} \left\{ \int_X d^2\!\left(x, \bigcup_{k=1}^K \Gamma_k\right) d\Lambda(x) + \beta \sum_{k=1}^K \mathrm{Length}(\gamma_k) + \beta' K \right\}.
\]
Here $\bigcup_{k=1}^K \Gamma_k$ denotes the union of the graphs of the curves $\gamma_k$. We remark that the penalty on $\{\gamma_k\}$ (total length plus number of components) is distinct from penalizing the total variation of the collection $\{\gamma_k\}$, and this turns out to ensure that minimizers have a finite number of components (which is not obvious with TV penalization). Briefly, [56] observe that proving the existence of minimizers for the "multiple penalized principal curves" problem
is routine: indeed, along any minimizing sequence, the number of components is necessarily bounded, so one can pass to the limit by extracting a limiting curve for each component separately. Additionally, [56] propose numerics for their multiple curves problem which are similar to ours (albeit without proof of consistency of the scheme). Here we simply point out that the minimization problem and numerics described in [56] also make sense in our more general setting of a compact geodesic metric space; in particular, their proof of existence of minimizers can be translated to our setting without modification.
Spline interpolation. The algorithm we have proposed for producing minimizers to the discrete variational problem $\mathrm{PPC}^K$ gives us a $K$-tuple of points $\{\gamma_k\}_{k=1}^K$ as its output. If we prefer the output to be an AC curve instead, one natural choice is to take a piecewise-geodesic interpolation of $\{\gamma_k\}_{k=1}^K$.
In the case where $X$ is a Euclidean domain, we also have access to splines as an alternate way to interpolate between the points $\{\gamma_k\}_{k=1}^K$. Cubic splines, in particular, have the interpretation [26] as curves of least total squared acceleration fitting the prescribed points: that is, one solves the optimization problem
\[
\inf_{\gamma \in C^2([0,1]; X)} \left\{ \int |\ddot\gamma_t|^2 \, dt : \gamma_{t_k} = \bar\gamma_{t_k} \right\}
\]
for prescribed points $\{\bar\gamma_{t_k}\}_{k=1}^K$. This optimization problem can be posed more generally on a space where we can make sense of the magnitude of acceleration $|\ddot\gamma_t|$ of a curve $\gamma$; for instance, one can take $X$ to be a Riemannian manifold. General geodesic metric spaces, however, do not have enough differential structure for $|\ddot\gamma_t|$ to be well-defined. Nonetheless, in the case of the space of probability measures equipped with the 2-Wasserstein metric, which is of particular interest to us, several versions of cubic splines have been proposed in the literature, together with effective numerics [8, 20, 21, 47]. In particular, we direct readers to [47], which provides an up-to-date review of the related literature.
Acknowledgements.
AW, AA, and GS were supported by the Burroughs Wellcome Fund and a CIHR Project Grant. AW, AA, GS, and YHK were partially supported by the Exploration Grant NFRFE-2019-00944 from the New Frontiers in Research Fund (NFRF). FK was supported by the UBC Four Year Doctoral Fellowship (4YF). AW, FK, and YHK also conducted portions of this research as guests of the Stochastic Analysis and Application Research Center (SAARC) at the Korea Advanced Institute of Science and Technology (KAIST), and gratefully acknowledge SAARC's hospitality. AW thanks Dejan Slepčev for bringing the articles [56, 63] to our attention, and thanks Danica Sutherland for helpful discussions on reproducing kernel Hilbert spaces, especially for directing us to [93]. AW additionally thanks Filippo Santambrogio for helpful discussions regarding his article [44], in particular for explaining to us that the results of that article would not be directly applicable for our work. YHK is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), with Discovery Grant RGPIN-2019-03926. This research is also partially supported by the Pacific Institute for the Mathematical Sciences (PIMS), through the PIMS Research Network (PRN) program (PIMS-PRN01), the Kantorovich
Initiative (KI).
REFERENCES
[1] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.
[2] L. Ambrosio and P. Tilli. Topics on Analysis in Metric Spaces. Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, 2004.
[3] Luigi Ambrosio and Nicola Gigli. A user's guide to optimal transport. In Modelling and Optimisation of Flows on Networks, pages 1–155. Springer, 2013.
[4] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: in Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media, 2008.
[5] David L Applegate. The Traveling Salesman Problem: A Computational Study, volume 17. Princeton University Press, 2006.
[6] Jonathan E. Atkins, Erik G. Boman, and Bruce Hendrickson. A spectral algorithm for seriation and the consecutive ones problem. SIAM Journal on Computing, 28(1):297–310, 1998.
[7] Peter L Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
[8] Jean-David Benamou, Thomas O Gallouët, and François-Xavier Vialard. Second-order models for optimal transport and cubic splines on the Wasserstein space. Foundations of Computational Mathematics, 19:1113–1143, 2019.
[9] Gérard Biau and Aurélie Fischer. Parameter selection for principal curves. IEEE Trans. Inf. Theory, 58(3):1924–1939, October 2011.
[10] Paolo Bientinesi, Inderjit S. Dhillon, and Robert A. van de Geijn. A parallel eigensolver for dense symmetric matrices based on multiple relatively robust representations. SIAM Journal on Scientific Computing, 27(1):43–66, 2005.
[11] Jérémie Bigot, Raul Gouet, Thierry Klein, and Alfredo Lopez. Geodesic PCA in the Wasserstein space by convex PCA.
Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 53(1):1–26, 2017.
[12] Emmanuel Boissard and Thibaut Le Gouic. On the mean speed of convergence of empirical and occupation measures in Wasserstein distance. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 50(2):539–563, 2014.
[13] Andrea Braides. Γ-convergence for Beginners, volume 22 of Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, 2002.
[14] Dmitri Burago, Yuri Burago, and Sergei Ivanov. A Course in Metric Geometry. American Mathematical Society, 2001.
[15] Giuseppe Buttazzo, Edouard Oudet, and Eugene Stepanov. Optimal transportation problems with free Dirichlet regions. In Gianni dal Maso and Franco Tomarelli, editors, Variational Methods for Discontinuous Structures, volume 51 of Progress in Nonlinear Differential Equations and Their Applications, pages 41–65, Basel, 2002. Birkhäuser.
[16] Marta Catalano and Hugo Lavenant. Hierarchical integral probability metrics: A distance on random probability measures with low sample complexity. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 5841–5861. PMLR, 21–27 Jul 2024.
[17] Elsa Cazelles, Vivien Seguy, Jérémie Bigot, Marco Cuturi, and Nicolas Papadakis. Geodesic PCA versus log-PCA of histograms in the Wasserstein space. SIAM Journal on Scientific Computing, 40(2):B429–B456, 2018.
[18] Yaqing Chen, Zhenhua Lin,
and Hans-Georg Müller. Wasserstein regression. Journal of the American Statistical Association, 118(542):869–882, 2023.
[19] Yen-Chi Chen, Christopher R. Genovese, and Larry Wasserman. Asymptotic theory for density ridges. Annals of Statistics, 43(5):1896–1928, 2015.
[20] Yongxin Chen, Giovanni Conforti, and Tryphon T Georgiou. Measure-valued spline curves: An optimal transport viewpoint. SIAM Journal on Mathematical Analysis, 50(6):5947–5968, 2018.
[21] Sinho Chewi, Julien Clancy, Thibaut Le Gouic, Philippe Rigollet, George Stepaniants, and Austin Stromme. Fast and smooth interpolation on Wasserstein space. In International Conference on Artificial Intelligence and Statistics, pages 3061–3069. PMLR, 2021.
[22] Lénaïc Chizat. Doubly regularized entropic Wasserstein barycenters. arXiv preprint arXiv:2303.11844, 2023.
[23] Lénaïc Chizat, Stephen Zhang, Matthieu Heitz, and Geoffrey Schiebinger. Trajectory inference via mean-field Langevin in path space. Advances in Neural Information Processing Systems, 35:16731–16742, 2022.
[24] Fan RK Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. American Mathematical Society, 1997.
[25] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pages 685–693. PMLR, 2014.
[26] Carl De Boor. Best approximation properties of spline functions of odd degree. Journal of Mathematics and Mechanics, pages 747–749, 1963.
[27] Ennio De Giorgi. Selected Papers. Springer Collected Works in Mathematics. Springer Berlin Heidelberg, 2013.
[28] Sylvain Delattre and Aurélie Fischer. On principal curves with a length constraint. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 56(3):2108–2140, August 2020.
[29] Pedro Delicado. Another look at principal curves and surfaces. Journal of Multivariate Analysis, 77(1):84–116, 2001.
[30] Persi Diaconis and R. L. Graham. Spearman's footrule as a measure of disarray.
Journal of the Royal Statistical Society. Series B (Methodological) , 39(2):262–268, 1977. [31] Tom Duchamp and Werner Stuetzle. Extremal properties of principal curves in the plane. The Annals of Statistics , 24(4):1511 – 1520, 1996. [32] Nicolas Flammarion, Cheng Mao, and Philippe Rigollet. Optimal rates of statistical seriation. Bernoulli , 25(1):623–653, 2019. [33] Fajwel Fogel, Alexandre d’Aspremont, and Milan V ojnovic. Spectral ranking using seriation. Journal of Machine Learning Research , 17(88):1–45, 2016. [34] Christopher R Genovese, Marco Perone-Pacifico, Isabella Verdinelli, and Larry Wasserman. The geometry of nonparametric filament estimation. Journal of the American Statistical Association , 107(498):788–799, 2012. [35] Christopher R Genovese, Marco Perone-Pacifico, Isabella Verdinelli, and Larry Wasserman. Nonparametric ridge estimation. The Annals of Statistics , pages 1511–1545, 2014. [36] Samuel Gerber and Ross Whitaker. Regularization-free principal curve estimation. Journal of Machine Learning Research , 14(39):1285–1302, 2013. [37] Thomas L Gertzen and Martin Grötschel. Flinders petrie, the travelling salesman problem, and the beginning of mathematical modeling in archaeol- ogy. PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 27 Documenta Mathematica , 2012:199–210, 2012. [38] Laya Ghodrati and Victor M Panaretos. Distribution-on-distribution regression via optimal transport maps. Biometrika , 109(4):957–974, 2022. [39] Christophe Giraud, Yann Issartel, and Nicolas Verzelen. Localization in 1d non-parametric latent space models from pairwise affinities. Electronic Journal of Statistics , 17(1):1587–1662, 2023. [40] Keaton Hamm, Caroline Moosmüller, Bernhard Schmitzer, and Matthew Thorpe. Manifold learning in Wasserstein space. arXiv preprint arXiv:2311.08549 , 2023. [41] Trevor Hastie. Principal Curves and
Surfaces. PhD thesis, Stanford University, 1984.
[42] Trevor Hastie and Werner Stuetzle. Principal curves. Journal of the American Statistical Association, 84(406):502–516, 1989.
[43] Søren Hauberg. Principal curves on Riemannian manifolds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1915–1921, 2015.
[44] Mikaela Iacobelli, Francesco S. Patacchini, and Filippo Santambrogio. Weighted ultrafast diffusion equations: from well-posedness to long-time behaviour. Archive for Rational Mechanics and Analysis, 232:1165–1206, 2019.
[45] Jeannette Janssen and Aaron Smith. Reconstruction of line-embeddings of graphons. Electronic Journal of Statistics, 16(1):331–407, 2022.
[46] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.
[47] Jorge Justiniano, Martin Rumpf, and Matthias Erbar. Approximation of splines in Wasserstein spaces. arXiv preprint arXiv:2302.10682, 2023.
[48] Amirhossein Karimi and Tryphon T. Georgiou. Regression analysis of distributional data through multi-marginal optimal transport. arXiv preprint arXiv:2106.15031, 2021.
[49] Amirhossein Karimi, Luigia Ripani, and Tryphon T. Georgiou. Statistical learning in Wasserstein space. IEEE Control Systems Letters, 5(3):899–904, 2020.
[50] Alexander Kechris. Classical Descriptive Set Theory, volume 156. Springer Science & Business Media, 2012.
[51] Balázs Kégl, Adam Krzyzak, Tamás Linder, and Kenneth Zeger. Learning and design of principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(3):281–297, 2000.
[52] David Kendall. Incidence matrices, interval graphs and seriation in archeology. Pacific Journal of Mathematics, 28(3):565–570, 1969.
[53] David G. Kendall. A statistical approach to Flinders Petrie's sequence-dating. Bulletin of the International Statistical Institute, 40(2):657–681, 1963.
[54] Jakwang Kim, Sharvaj Kubal, and Geoffrey Schiebinger. Optimal sequencing depth for single-cell RNA-sequencing in Wasserstein space. arXiv preprint arXiv:2409.14326, 2024.
[55] Young-Heon Kim and Brendan Pass. Wasserstein barycenters over Riemannian manifolds. Advances in Mathematics, 307:640–683, 2017.
[56] Slav Kirov and Dejan Slepčev. Multiple penalized principal curves: Analysis and computation. Journal of Mathematical Imaging and Vision, 59:234–256, 2017.
[57] Forest Kobayashi, Jonathan Hayase, and Young-Heon Kim. Monge–Kantorovich fitting with Sobolev budgets. arXiv preprint arXiv:2409.16541, 2024.
[58] Andrey Kolmogorov and Sergey Fomin. Elements of the Theory of Functions and Functional Analysis. Dover, 1999.
[59] Gilbert Laporte. The seriation problem and the travelling salesman problem. Journal of Computational and Applied Mathematics, 4(4):259–268, 1978.
[60] Hugo Lavenant, Stephen Zhang, Young-Heon Kim, and Geoffrey Schiebinger. Toward a mathematical theory of trajectory inference. The Annals of Applied Probability, 34(1A):428–500, 2024.
[61] Zehua Liu, Huazhe Lou, Kaikun Xie, Hao Wang, Ning Chen, Oscar M. Aparicio, Michael Q. Zhang, Rui Jiang, and Ting Chen. Reconstructing cell cycle pseudo time-series via single-cell transcriptome data. Nature Communications, 8(1):22, 2017.
[62] Xin Yang Lu. Regularity of densities in relaxed and penalized average distance problem. Networks and Heterogeneous Media, 10(4):837–855, 2015.
[63] Xin Yang Lu and Dejan Slepčev. Average-distance problem for parameterized curves. ESAIM: Control, Optimisation and Calculus of Variations, 22(2):404–416, 2016.
[64] Xin Yang Lu and Dejan Slepčev. Average-distance problem with curvature penalization for data parameterization: regularity of minimizers. ESAIM: Control, Optimisation and Calculus of
Variations, 27:8, 2021.
[65] A. Marek, V. Blum, R. Johanni, V. Havu, B. Lang, T. Auckenthaler, A. Heinecke, H.-J. Bungartz, and H. Lederer. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science. Journal of Physics: Condensed Matter, 26(21):213201, May 2014.
[66] Abdull J. Massri, Alejandro Berrio, Anton Afanassiev, Laura Greenstreet, Krista Pipho, Maria Byrne, Geoffrey Schiebinger, David R. McClay, and Gregory A. Wray. Single-cell transcriptomics reveals evolutionary reconfiguration of embryonic cell fate specification in the sea urchin Heliocidaris erythrogramma. Genome Biology and Evolution, page evae258, November 2024.
[67] Markus Mittnenzweig, Yoav Mayshar, Saifeng Cheng, Raz Ben-Yair, Ron Hadas, Yoach Rais, Elad Chomsky, Netta Reines, Anna Uzonyi, Lior Lumerman, et al. A single-embryo, single-cell time-resolved model for mouse gastrulation. Cell, 184(11):2825–2842, 2021.
[68] Amine Natik and Aaron Smith. Consistency of spectral seriation. arXiv preprint arXiv:2112.04408, 2021.
[69] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 14, 2001.
[70] Umut Ozertem and Deniz Erdogmus. Locally defined principal curves and surfaces. The Journal of Machine Learning Research, 12:1249–1286, 2011.
[71] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355–607, 2019.
[72] Barnabas Poczos, Aarti Singh, Alessandro Rinaldo, and Larry Wasserman. Distribution-free distribution regression. In Carlos M. Carvalho and Pradeep Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, volume 31 of Proceedings of Machine Learning Research, pages 507–515, Scottsdale, Arizona, USA, 2013. PMLR.
[73] Paz Polak and Gershon Wolansky. The lazy travelling salesman problem in R2. ESAIM: Control, Optimisation and Calculus of Variations, 13(3):538–552, 2007.
[74] Rohan Rao, Amit Moscovich, and Amit Singer. Wasserstein k-means for clustering tomographic projections. In Advances in Neural Information Processing Systems, 2020.
[75] William S. Robinson. A method for chronologically ordering archaeological deposits. American Antiquity, 16(4):293–301, 1951.
[76] Wouter Saelens, Robrecht Cannoodt, Helena Todorov, and Yvan Saeys. A comparison of single-cell trajectory inference methods. Nature Biotechnology, 37(5):547–554, 2019.
[77] Sathyakama Sandilya and Sanjeev R. Kulkarni. Principal curves with bounded turn. IEEE Transactions on Information Theory, 48(10):2789–2793, 2002.
[78] Filippo Santambrogio. {Euclidean, metric, and Wasserstein} gradient flows: an overview. Bulletin of Mathematical Sciences, 7:87–154, 2017.
[79] Geoffrey Schiebinger. Reconstructing developmental landscapes and trajectories from single-cell data. Current Opinion in Systems Biology, 27:100351, 2021.
[80] Geoffrey Schiebinger et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928–943, 2019.
[81] Geoffrey Schiebinger, Martin J. Wainwright, and Bin Yu. The geometry of kernelized spectral clustering. The Annals of Statistics, 43(2):819–846, 2015.
[82] Vivien Seguy and Marco Cuturi. Principal geodesic analysis for probability measures under the optimal transport metric. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Volume 2, pages 3312–3320, 2015.
[83] Nihar B. Shah, Sivaraman Balakrishnan, Adityanand Guntuboyina, and Martin J. Wainwright. Stochastically transitive
models for pairwise comparisons: Statistical and computational issues. IEEE Transactions on Information Theory, 63(2):934–959, 2016.
[84] Dejan Slepčev. Counterexample to regularity in average-distance problem. Annales de l'IHP Analyse non linéaire, 31(1):169–184, 2014.
[85] Alexander J. Smola, Sebastian Mika, Bernhard Schölkopf, and Robert C. Williamson. Regularized principal manifolds. Journal of Machine Learning Research, 1:179–209, 2001.
[86] Robert Tibshirani. Principal curves revisited. Statistics and Computing, 2:183–190, 1992.
[87] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(1):91–108, 2005.
[88] V. S. Varadarajan. On the convergence of sample probability distributions. Sankhyā: The Indian Journal of Statistics (1933–1960), 19(1/2):23–26, 1958.
[89] Isabella Verdinelli and Larry Wasserman. Hybrid Wasserstein distance and fast distribution clustering. Electronic Journal of Statistics, 13:5088–5119, 2019.
[90] Cédric Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
[91] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
[92] Wei Wang, Dejan Slepčev, Saurav Basu, John A. Ozolek, and Gustavo K. Rohde. A linear optimal transportation framework for quantifying and visualizing variations in sets of images. International Journal of Computer Vision, 101:254–269, 2013.
[93] George Wynne and Andrew B. Duncan. A kernel two-sample test for functional data. Journal of Machine Learning Research, 23(73):1–51, 2022.
[94] Roomina Zendehboodi. Exploring the use of spectral seriation to uncover dynamics in embryonic development: a geometric and probabilistic approach. Master's thesis, University of British Columbia, 2023.
[95] Martin Jinye Zhang, Vasilis Ntranos, and David Tse. Determining sequencing depth in a single-cell RNA-seq experiment. Nature Communications, 11(1):774, 2020.
[96] Yubo Zhuang, Xiaohui Chen, and Yun Yang. Wasserstein k-means for clustering probability distributions. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pages 11382–11395, 2022.

APPENDIX A: PROPERTIES OF CURVES IN METRIC SPACES

In this section, we collect some basic results about curves in metric spaces. Many of them are straightforward, but for results where we could not find a statement and proof elsewhere in the literature, we include the proofs in order to make the article more self-contained. The following is a key lemma for us, and is a restatement of [4, Lemma 1.1.4(b)].

LEMMA A.1 (Existence of constant speed reparametrization). Let $X$ be a complete metric space and let $\gamma \in AC([0,1];X)$. Then there exist an increasing, absolutely continuous $\varphi:[0,1]\to[0,1]$ and a $\hat\gamma \in AC([0,1];X)$ such that $\gamma = \hat\gamma \circ \varphi$ and $|\dot{\hat\gamma}_t| = \mathrm{Length}(\gamma)$ for almost all $t\in[0,1]$.

Another basic fact is the following version of the Arzelà–Ascoli theorem, which is a consequence of [58, Vol. 1, Ch. II, §18, Thm. 7].

LEMMA A.2 (Arzelà–Ascoli). Let $X$ and $Y$ be compact metric spaces. Let $\{f_n\}_{n\in\mathbb{N}}$ be a sequence of functions $f_n: X\to Y$, uniformly equicontinuous with modulus of uniform continuity $\zeta:\mathbb{R}\to\mathbb{R}$, meaning that
$$\forall \varepsilon>0,\ \forall n,\quad d_X(x,x')<\zeta(\varepsilon) \implies d_Y(f_n(x),f_n(x'))<\varepsilon.$$
Then there exists a subsequence $\{f_{n_k}\}$ converging uniformly to a continuous function $f: X\to Y$ with the same modulus of uniform continuity $\zeta$.

As an easy consequence of these two lemmas, we deduce the following compactness result for curves in
metric spaces:

LEMMA A.3 (Compactness and lower semicontinuity of the length functional).
(i) Let $X$ be a compact metric space. For each $n\in\mathbb{N}$, let $\gamma^n \in AC([0,1];X)$ be constant speed, in the sense that $|\dot\gamma^n_t| = \mathrm{Length}(\gamma^n)$ for almost all $t\in[0,1]$. Suppose that $\sup_{n\in\mathbb{N}} \mathrm{Length}(\gamma^n) < \infty$. Then there exists some $\gamma^* \in AC([0,1];X)$ (but not necessarily constant speed) such that, up to a subsequence $n_k$, it holds that $\gamma^{n_k}_t \to \gamma^*_t$ uniformly in $t\in[0,1]$.
(ii) Let $\gamma^n$ be a sequence in $AC([0,1];X)$ converging uniformly in $t$ to some $\gamma^* \in AC([0,1];X)$. Then $\liminf_{n\to\infty} \mathrm{Length}(\gamma^{n}) \geq \mathrm{Length}(\gamma^*)$.

REMARK A.1. This lemma can be rephrased as follows: if one considers the quotient space $AC([0,1];X)/\sim$, where $\sim$ denotes equality up to time reparametrization, then the functional $\mathrm{Length}(\cdot)$ is lower semicontinuous and has compact sublevel sets on this quotient space.

PROOF. (i) Since $|\dot\gamma^n_t| = \mathrm{Length}(\gamma^n)$ for almost all $t$, we have
$$|t-s| < \delta \implies d_X(\gamma^n_t, \gamma^n_s) < \delta\cdot\mathrm{Length}(\gamma^n) \leq \delta\cdot\sup_{n\in\mathbb{N}}\mathrm{Length}(\gamma^n).$$
Therefore, the fact that $\sup_{n\in\mathbb{N}} \mathrm{Length}(\gamma^n)<\infty$ implies uniform equicontinuity of $\{\gamma^n\}$ with Lipschitz constant $\sup_{n\in\mathbb{N}} \mathrm{Length}(\gamma^n)$. The existence of $\gamma^*$ now follows immediately from the Arzelà–Ascoli theorem, Lemma A.2. In particular, $t\mapsto\gamma^*_t$ is Lipschitz with Lipschitz constant at most $\sup_{n\in\mathbb{N}}\mathrm{Length}(\gamma^n)$. Therefore $\gamma^*\in AC([0,1];X)$ with metric speed and length at most $\sup_{n\in\mathbb{N}} \mathrm{Length}(\gamma^n)$. Note that the metric derivative $|\dot\gamma^*_t| := \lim_{s\to t} \frac{d_X(\gamma^*_t,\gamma^*_s)}{|t-s|}$ exists for a.e. $t\in[0,1]$ by a version of Rademacher's theorem for functions taking values in a compact metric space (see [2, Theorem 4.1.6]).
(ii) Let $\gamma^*\in AC([0,1];X)$ and fix $\delta>0$. There exist a $K\in\mathbb{N}$ (depending on $\delta$) and a sequence of times $0=t_0<t_1<\dots<t_{K-1}<t_K=1$ such that
$$\sum_{k=0}^{K-1} d_X(\gamma^*_{t_k},\gamma^*_{t_{k+1}}) > \mathrm{Length}(\gamma^*) - \delta.$$
Next, consider a sequence $\gamma^n\to\gamma^*$ converging uniformly in $t$. Pick $N\in\mathbb{N}$ such that for all $n\geq N$ we have $d_X(\gamma^n_t,\gamma^*_t) < \delta/K$; it follows that for all $n\geq N$,
$$\mathrm{Length}(\gamma^n) \geq \sum_{k=0}^{K-1} d_X(\gamma^n_{t_k},\gamma^n_{t_{k+1}}) \geq \sum_{k=0}^{K-1}\Big( d_X(\gamma^*_{t_k},\gamma^*_{t_{k+1}}) - d_X(\gamma^*_{t_k},\gamma^n_{t_k}) - d_X(\gamma^*_{t_{k+1}},\gamma^n_{t_{k+1}})\Big) > \mathrm{Length}(\gamma^*) - 3\delta.$$
Sending $\delta\to 0$, we conclude that $\liminf_{n\to\infty}\mathrm{Length}(\gamma^n)\geq\mathrm{Length}(\gamma^*)$ as desired.

Next we establish a compactness result for certain piecewise-geodesic approximations of AC curves. In the context of the next lemma, we say that, given $\gamma\in AC([0,1];X)$, a curve $\gamma^K\in AC([0,1];X)$ is a $K$th piecewise-geodesic approximation of $\gamma$ provided that: for $t = 0, \frac1K, \frac2K, \dots, 1$, $\gamma^K_t = \gamma_t$; and, for each interval $[\frac kK, \frac{k+1}K]$, the restriction $\gamma^K\llcorner[\frac kK,\frac{k+1}K]$ is a constant speed geodesic connecting $\gamma_{k/K}$ and $\gamma_{(k+1)/K}$. (Note that we do not assume uniqueness of geodesics; we only claim that, in any geodesic metric space, such piecewise-geodesic approximations exist.) We state the lemma below for constant-speed curves since this is the result we need for the bulk of the article, but the same result holds for Lipschitz curves more generally.

LEMMA A.4. Let $X$ be a geodesic metric space. Let $\gamma\in AC([0,1];X)$ be constant-speed. For each $K\in\mathbb{N}$, let $\gamma^K$ be a $K$th piecewise-geodesic approximation of $\gamma$. Then, as $K\to\infty$, $\gamma^K$ converges uniformly to $\gamma$.

PROOF. This follows easily from the triangle inequality, since for each $t\in[\frac kK,\frac{k+1}K)$,
$$d(\gamma_t,\gamma_{k/K}) \leq \tfrac1K\cdot\mathrm{Length}(\gamma), \qquad d(\gamma^K_t,\gamma^K_{k/K}) \leq \tfrac1K\cdot\mathrm{Length}(\gamma),$$
and $\gamma_{k/K} = \gamma^K_{k/K}$.

The next lemma relates the Length functional and the 1-dimensional Hausdorff measure $\mathcal{H}^1$.

LEMMA A.5. Let $X$ be a metric space
and let $\gamma\in AC([0,1];X)$, with range $\Gamma$. Then $\mathrm{Length}(\gamma) \geq \mathcal{H}^1(\Gamma)$, with equality if $\gamma$ is injective.

PROOF. This is established in the case where $\gamma$ is Lipschitz by combining [2, Theorem 4.1.6] and [2, Theorem 4.4.2]. The same holds more generally by replacing $\gamma$ with its constant speed (hence Lipschitz) reparametrization, and observing that length, 1-dimensional Hausdorff measure, and injectivity are all preserved under this reparametrization.

We state the following two basic analysis results without proof.

LEMMA A.6. Let $(X,\mathcal{T}_X)$ be compact and $(Y,\mathcal{T}_Y)$ be Hausdorff. Let $f:X\to Y$ be continuous and injective. Then $f^{-1}: f(X)\to X$ is continuous.

LEMMA A.7. Suppose $\varphi:[0,1]\to[0,1]$ is a continuous, injective function. Then $\varphi$ is strictly monotone.

Next we show that injective reparametrization does not change the length of an AC curve. This is basically routine, but again we were unable to find an appropriate reference.

LEMMA A.8. Suppose $f\in AC([0,1];X)$ and $\varphi:[0,1]\to[0,1]$ is a continuous, injective function. Then $f\circ\varphi\in AC([0,1];X)$. Moreover, if $\varphi$ is a bijection, then $\mathrm{Length}(f) = \mathrm{Length}(f\circ\varphi)$.

PROOF. Let $\varepsilon>0$. Since $f$ is AC, there exists $\delta>0$ such that for all finite sequences of pairwise disjoint intervals $[a_1,b_1],\dots,[a_k,b_k]\subseteq[0,1]$ with $\sum_j |b_j - a_j| < \delta$, one has $\sum_j d(f(b_j), f(a_j)) < \varepsilon$. Define $\varepsilon' = \delta/k$. Note that $\varphi$ is a continuous function on a compact set and hence uniformly continuous. Therefore there exists a $\delta'>0$ such that for all $a,b\in[0,1]$ we have
$$|b-a|<\delta' \implies |\varphi(b)-\varphi(a)|<\varepsilon'.$$
Now, select an arbitrary sequence of pairwise disjoint intervals $[a_1,b_1],\dots,[a_k,b_k]\subseteq[0,1]$ such that $\sum_j |b_j-a_j| < \delta'$. Since $\varphi$ is a continuous, injective function, by Lemma A.7 it is strictly monotone. Without loss of generality we suppose $\varphi$ is strictly increasing. Then the intervals $[\varphi(a_1),\varphi(b_1)],\dots,[\varphi(a_k),\varphi(b_k)]$ are nonempty and (by injectivity of $\varphi$) remain pairwise disjoint. Then by uniform continuity of $\varphi$,
$$\sum_j |\varphi(b_j)-\varphi(a_j)| < k\varepsilon' = \delta,$$
whence
$$\sum_j d(f(\varphi(b_j)), f(\varphi(a_j))) < \varepsilon,$$
so $f\circ\varphi$ is absolutely continuous as desired.

Now, further suppose $\varphi$ is a bijection. Let $I(N)$ denote the set of all partitions of $[0,1]$ of the form $0 = t_0 < t_1 < \dots < t_{N-1} < t_N = 1$. Note that by bijectivity and strict monotonicity of $\varphi$ we have $\varphi(I(N)) = I(N) = \varphi^{-1}(I(N))$, thus
$$\sup_N \sum_{j=1}^N d(f(\varphi(t_j)), f(\varphi(t_{j-1}))) = \sup_N \sum_{j=1}^N d(f(t_j), f(t_{j-1})),$$
and in particular $\mathrm{Length}(f) = \mathrm{Length}(f\circ\varphi)$, as desired.

Related to the previous lemma, injective reparametrization gives the minimum arc-length. The argument is again routine, but we provide an explicit argument to make the article self-contained.

LEMMA A.9. Let $f\in AC([0,1];X)$ be injective and denote
$$\Phi(f) = \{g\in AC([0,1];X) \mid g \text{ is constant speed and } g([0,1]) = f([0,1])\}.$$
Let $\hat f$ denote the constant speed reparametrization of $f$. Then $\mathrm{Length}(g) = \mathrm{Length}(f)$ iff $g$ is injective, whence (up to a possible time reversal) we have $g = \hat f$. In particular, $\hat f$ (and its time reversal) are the only elements of $\Phi(f)$ such that $\mathrm{Length}(\hat f) = \inf_{g\in\Phi(f)} \mathrm{Length}(g)$.

PROOF. Let $g\in\Phi(f)$ be arbitrarily chosen. We proceed by casework.
1. Suppose $g$ is injective. We want to show that $g = \hat f$ up to time-reversal. To that end, by Lemma A.6, $g^{-1}$ is well-defined and continuous. So $h:[0,1]\to[0,1]$ given by $h(t) = g^{-1}(f(t))$ is a continuous bijection. By Lemma A.8, we thus see $\mathrm{Length}(g) = \mathrm{Length}(g\circ h) = \mathrm{Length}(f)$. It remains to show $g = \hat f$ (up to a possible time-reversal). To that end, by Lemma A.7, $h$ is strictly monotone.
First suppose $h$ is increasing. Then $f$ and $g$ visit the points of $f([0,1])$ in the same order. By the definition of the constant-speed parametrization we immediately get $\hat f = g$, as desired. Alternatively, $h$ is decreasing, in which case $\hat f$ coincides with the time-reversal of $g$.
2. Suppose that $g$ is not injective and that $\{g(0),g(1)\} \neq \{f(0),f(1)\}$. We want to show this implies $\mathrm{Length}(g) > \mathrm{Length}(f)$. To that end, first note that, reversing the parametrization of $f$ as necessary, we may assume $\inf g^{-1}(\{f(0)\}) < \inf g^{-1}(\{f(1)\})$. (Note that we cannot always achieve this by reversing the parametrization of $g$; for example, if $g(0) = f(1) = g(1)$.) Define $s_0 = \inf g^{-1}(\{f(0)\})$ and let $s_1 = \inf([s_0,1]\cap g^{-1}(\{f(1)\}))$. Suppose $[s_0,s_1]\neq[0,1]$; without loss of generality we may assume $s_0\neq 0$. Then $g(0)\neq g(s_0)$ and so
$$0 < d(g(0), g(s_0)) \leq \mathrm{Length}(g\llcorner[0,s_0]).$$
Now, on the other hand, observe that $f^{-1}\circ g:[0,1]\to[0,1]$ is well-defined and continuous. Since $(f^{-1}\circ g)(s_0) = 0$ and $(f^{-1}\circ g)(s_1) = 1$ and the continuous image of a connected set is connected, we get $(f^{-1}\circ g)([s_0,s_1]) = [0,1]$, whence $g([s_0,s_1]) = f([0,1])$. It follows that $\mathrm{Length}(f) \leq \mathrm{Length}(g\llcorner[s_0,s_1])$. Together with the earlier bound we thus obtain
$$\mathrm{Length}(g) \geq \mathrm{Length}(g\llcorner[0,s_0]) + \mathrm{Length}(g\llcorner[s_0,s_1]) > \mathrm{Length}(f),$$
as desired.
3. Suppose $g$ is not injective and that $\{g(0),g(1)\} = \{f(0),f(1)\}$. We want to show $\mathrm{Length}(g) > \mathrm{Length}(f)$. To that end, fix $t\in[0,1]$ such that $g^{-1}(\{g(t)\})$ contains multiple values. Let $t_0 = \inf g^{-1}(\{g(t)\})$ and $t_1 = \sup g^{-1}(\{g(t)\})$. Note $t_0\neq t_1$. Observe that since $g$ has constant-speed parametrization, $g([t_0,t_1])$ is not a singleton. Hence
$$0 < \mathrm{diam}(g([t_0,t_1])) \leq \mathrm{Length}(g\llcorner[t_0,t_1]).$$
Define
$$\tilde g(t) = \begin{cases} g(t) & t\notin[t_0,t_1], \\ g(t_0) & t\in[t_0,t_1].\end{cases}$$
Note that $\tilde g$ is continuous, hence $f^{-1}\circ\tilde g$ is continuous, and the same analysis as in the previous case shows $\mathrm{Length}(f)\leq\mathrm{Length}(\tilde g)$. Finally, we have
$$\mathrm{Length}(g) = \mathrm{Length}(g\llcorner[t_0,t_1]) + \mathrm{Length}(\tilde g) > \mathrm{Length}(f),$$
as desired. Since these cases are exhaustive, we see that $\mathrm{Length}(g) = \mathrm{Length}(f)$ iff $g$ is injective, which occurs iff $g = \hat f$ (up to time reversal).

APPENDIX B: BACKGROUND ON REPRODUCING KERNEL HILBERT SPACES

For technical reasons, our arguments for Section 3 rely on the existence of a specific reproducing kernel Hilbert space (RKHS) defined atop a given compact metric space. We recall the following definition.

DEFINITION B.1. Given a Polish space $V$ and an RKHS $\mathcal{H}$ of functions from $V$ to $\mathbb{R}$, the maximum mean discrepancy (MMD) distance on $\mathcal{P}(V)$ is defined by
$$\mathrm{MMD}_{\mathcal{H}}(\mu,\nu) := \sup_{\|f\|_{\mathcal{H}}\leq 1}\left(\int_V f\,d\mu - \int_V f\,d\nu\right).$$

The following is a folklore result which can be deduced by combining certain published results on RKHSes with a classical universality theorem from descriptive set theory.

THEOREM B.2. Let $V$ be a compact metric space. Then there exists an RKHS $\mathcal{H}$ of functions from $V$ to $\mathbb{R}$ for which the topology on $\mathcal{P}(V)$ induced by $\mathrm{MMD}_{\mathcal{H}}$ is the same as the weak* topology, and for which the reproducing kernel $k$ is bounded.

PROOF. We argue by abstract construction of a specific reproducing kernel. Let $Y$ be a separable Hilbert space and $T: V\to Y$ be a continuous injection. Then Theorems 9 and 21 from [93] establish that the MMD distance associated to the RKHS $\mathcal{H}$ induced by the kernel
$$k(x,x') = e^{-\frac12 \|T(x)-T(x')\|_Y^2}$$
induces the weak* topology on $\mathcal{P}(V)$. (Note that this kernel is automatically bounded since $T(V)$ is compact inside $Y$.) Lastly, such a map $T$ and separable Hilbert space $Y$ always
exist whenever $V$ is compact. Indeed, taking $Y = \ell^2$, this follows from the classical fact that every compact metric space can be embedded homeomorphically as a compact subset of $\ell^2$; see Section 4.C in [50].

Given a compact metric space $V$ and RKHS $\mathcal{H}$ atop $V$, we equip the space $\mathcal{P}(V)$ with the metric $\mathrm{MMD}_{\mathcal{H}}$. In particular, we choose $\mathcal{H}$ so that $\mathrm{MMD}_{\mathcal{H}}$ puts the same topology on $\mathcal{P}(V)$ as the weak* topology, so in particular $(\mathcal{P}(V), \mathrm{MMD}_{\mathcal{H}})$ is itself a compact metric space. Now, given two measures $\Lambda,\Xi\in\mathcal{P}(\mathcal{P}(V))$, we consider the 1-Wasserstein distance on $\mathcal{P}(\mathcal{P}(V))$ defined with respect to $\mathrm{MMD}_{\mathcal{H}}$ on $\mathcal{P}(V)$:
$$W_1(\Lambda,\Xi) := \inf_{\pi\in\Pi(\Lambda,\Xi)} \int_{\mathcal{P}(V)\times\mathcal{P}(V)} \mathrm{MMD}_{\mathcal{H}}(\mu,\mu')\,d\pi(\mu,\mu').$$
It holds that $W_1$ metrizes the weak* topology on $\mathcal{P}(\mathcal{P}(V))$.

At the same time, it is shown in the proof of [91, Theorem 4.10] that $\mathbb{E}[\mathrm{MMD}(\mu,\hat\mu_n)]$ is bounded by twice the Rademacher complexity $\mathrm{Rad}_n(\mathcal{F})$ of the function class $\mathcal{F} = \{\|f\|_{\mathcal{H}}\leq 1\}$; and in the case where the reproducing kernel $k$ is bounded, [7, Section 4.3] establishes that $\mathrm{Rad}_n(\mathcal{F}) \leq \sqrt{\frac{\sup_x k(x,x)}{n}}$. Consequently, $\mathbb{E}[\mathrm{MMD}(\mu,\hat\mu_n)] \leq C_{\mathcal{H}} n^{-1/2}$ for some uniform constant $C_{\mathcal{H}} > 0$. Likewise, [91, Theorem 4.10] establishes that
$$\mathbb{P}\big[\mathrm{MMD}(\mu,\hat\mu_n) \geq t + \mathbb{E}[\mathrm{MMD}(\mu,\hat\mu_n)]\big] \leq \exp\left(-\frac{n t^2}{D_{\mathcal{H}}}\right),$$
where $D_{\mathcal{H}}$ is a constant which depends only on the reproducing kernel $k(\cdot,\cdot)$ for $\mathcal{H}$ and is finite whenever $\sup_{x,y\in V} k(x,y) < \infty$.

APPENDIX C: DEFERRED PROOFS

C.1. Proofs for Section 2.

PROOF OF PROPOSITION 2.1. The proof follows essentially from the compactness and semicontinuity properties of the family of absolutely continuous curves as given in Appendix A. To give the details, let
$$M := \inf_{\gamma\in AC([0,1];X)} \left(\int_X d^2(x,\Gamma)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma)\right).$$
Let $\gamma^n$ be a sequence of AC curves in $X$ such that
$$\lim_{n\to\infty} \int_X d^2(x,\Gamma^n)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma^n) = M.$$
Without loss of generality, we can take $\sup_n \mathrm{Length}(\gamma^n) \leq \frac{2M}{\beta}$. Note also that $\int_X d^2(x,\Gamma)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma)$ is invariant under time-reparametrization. In particular, by Lemma A.1 we can take each $\gamma^n$ to have constant-speed parametrization. We can thus apply Lemma A.3 to deduce that there are some subsequence $n_k$ and some limiting AC curve $\gamma^*:[0,1]\to X$ satisfying
$$\liminf_{k\to\infty}\mathrm{Length}(\gamma^{n_k}) \geq \mathrm{Length}(\gamma^*), \qquad \gamma^{n_k}_t \to \gamma^*_t \text{ uniformly in } t.$$
The latter implies that, pointwise in $x$, $\lim_{k\to\infty} d(x,\Gamma^{n_k}) = d(x,\Gamma^*)$. Consequently,
$$M = \liminf_{k\to\infty} \int_X d^2(x,\Gamma^{n_k})\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma^{n_k}) \geq \int_X d^2(x,\Gamma^*)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma^*),$$
and so $\gamma^*$ is a minimizer, as desired.

PROOF OF PROPOSITION 2.2. Let $X\subseteq\mathbb{R}^2$ be such that $X^\circ\neq\emptyset$ and let $\mu\in\mathcal{P}(X)$. For some fixed $\beta>0$, let $\gamma$ minimize $\mathrm{PPC}(\Lambda)$. Then by [63, Thm. 1.3], $\gamma$ is injective; in particular, $\gamma(0)\neq\gamma(1)$. Suppose there exist distinct isometries $\varphi_1,\varphi_2,\varphi_3$ such that $(\varphi_i)_\#\mu = \mu$; taking an affine extension, we may treat them as isometries of $\mathbb{R}^2$. There are only two such isometries that fix $\{\gamma(0),\gamma(1)\}$: (1) the identity, and (2) reflection across $\{d(x,\gamma(0)) = d(x,\gamma(1))\}$. So some $\varphi_i$ maps $\{\gamma(0),\gamma(1)\}$ to a different set. Since $\gamma$ is a homeomorphism onto its image (Lemma A.6), $\gamma(0)$ and $\gamma(1)$ are the unique noncut points of $\mathrm{image}(\gamma)$ (given a set $S$, we say $p\in S$ is a noncut point of $S$ if $S\setminus\{p\}$ is connected), and since $\varphi_i$ is also a homeomorphism, $\varphi_i$ not fixing $\{\gamma(0),\gamma(1)\}$ implies $\mathrm{image}(\varphi_i\circ\gamma)\neq\mathrm{image}(\gamma)$. Finally, since $\mu$ is invariant under $\varphi_i$, and since both the data-fitting term and the penalty term in $\mathrm{PPC}(\Lambda)$ depend only on the metric and $\varphi_i$ is an isometry, we see that $\varphi_i\circ\gamma\neq\gamma$ are distinct minimizers of $\mathrm{PPC}(\Lambda)$ not equivalent by reparametrization.
Fig 4: Illustration for the proof of Proposition 2.2. Take $\varphi_1,\varphi_2,\varphi_3$ to be the three distinct isometries indicated (identity, reflection, and rotation by $2\pi/3$), applied to a curve with $\mu$ uniform on a triangle. In this case, $\varphi_3$ yields a distinct image.

We use the following lemma as an ingredient in the proof of Proposition 2.3.

LEMMA C.1. Let $X$ be a compact metric space. Let $(\gamma^n)_{n\in\mathbb{N}}$ denote a sequence of measurable functions from $[0,1]$ to $X$, and let $(\Lambda_m)_{m\in\mathbb{N}}$ denote a sequence in $\mathcal{P}(X)$. Suppose that for some $\gamma:[0,1]\to X$ and $\Lambda\in\mathcal{P}(X)$ we have $\gamma^n_t\to\gamma_t$ uniformly in $t$, and $\Lambda_m\rightharpoonup^*\Lambda$, respectively. Then,
$$\lim_{m,n\to\infty} \int_X d^2(x,\Gamma^n)\,d\Lambda_m(x) = \int_X d^2(x,\Gamma)\,d\Lambda(x).$$

PROOF. We employ the Moore–Osgood theorem for double limits. Accordingly, it suffices to show that:
1. $\int_X d^2(x,\Gamma^n)\,d\Lambda_m(x)$ converges uniformly in $m$ to $\int_X d^2(x,\Gamma)\,d\Lambda_m(x)$ as $n\to\infty$; and,
2. $\int_X d^2(x,\Gamma^n)\,d\Lambda_m(x)$ converges pointwise in $n$ to $\int_X d^2(x,\Gamma^n)\,d\Lambda(x)$ as $m\to\infty$.

For (1), we use the fact that $\gamma^n_t\to\gamma_t$ uniformly in $t$. Indeed, let $N$ be sufficiently large that for all $n\geq N$, $d(\gamma^n_t,\gamma_t)<\varepsilon$ uniformly in $t$. Then, for fixed $x\in X$, we claim that
$$\left|\inf_{s\in[0,1]} d(x,\gamma^n_s) - \inf_{t\in[0,1]} d(x,\gamma_t)\right| < 2\varepsilon.$$
Indeed, without loss of generality suppose that $\inf_{s\in[0,1]} d(x,\gamma^n_s) \geq \inf_{t\in[0,1]} d(x,\gamma_t)$, and let $t_0\in[0,1]$ be such that $d(x,\gamma_{t_0}) < \inf_{t\in[0,1]} d(x,\gamma_t) + \varepsilon$. Then by the triangle inequality, $d(x,\gamma^n_{t_0}) < \inf_{t\in[0,1]} d(x,\gamma_t) + 2\varepsilon$; hence $\inf_{s\in[0,1]} d(x,\gamma^n_s) < \inf_{t\in[0,1]} d(x,\gamma_t) + 2\varepsilon$. Applying the same reasoning when $\inf_{s\in[0,1]} d(x,\gamma^n_s) \leq \inf_{t\in[0,1]} d(x,\gamma_t)$ establishes the claim. Now, supposing again that $\inf_{s\in[0,1]} d(x,\gamma^n_s) \geq \inf_{t\in[0,1]} d(x,\gamma_t)$, we have that
$$\left(\inf_{s\in[0,1]} d(x,\gamma^n_s)\right)^2 < \left(\inf_{t\in[0,1]} d(x,\gamma_t) + 2\varepsilon\right)^2.$$
Note that we may rewrite the inequality above as
$$\inf_{s\in[0,1]} d^2(x,\gamma^n_s) < \inf_{t\in[0,1]}\big(d^2(x,\gamma_t) + 4\varepsilon\, d(x,\gamma_t) + 4\varepsilon^2\big) \leq \inf_{t\in[0,1]} d^2(x,\gamma_t) + 4\varepsilon\cdot\mathrm{Diam}(X) + 4\varepsilon^2.$$
By a symmetric argument, we thus have that
$$|d^2(x,\Gamma^n) - d^2(x,\Gamma)| = \left|\inf_{s\in[0,1]} d^2(x,\gamma^n_s) - \inf_{t\in[0,1]} d^2(x,\gamma_t)\right| < 4\varepsilon\cdot\mathrm{Diam}(X) + 4\varepsilon^2.$$
This estimate is uniform in $x$. Therefore we compute that
$$\left|\int_X d^2(x,\Gamma^n)\,d\Lambda_m(x) - \int_X d^2(x,\Gamma)\,d\Lambda_m(x)\right| \leq \int_X |d^2(x,\Gamma^n) - d^2(x,\Gamma)|\,d\Lambda_m(x) < 4\varepsilon\cdot\mathrm{Diam}(X) + 4\varepsilon^2,$$
which is uniform in $m$ as desired.

For (2), first note that for any nonempty $A\subseteq X$, the distance function $x\mapsto d(x,A)$ is continuous; moreover, the domain $X$ is compact, hence bounded. Therefore, for any fixed $n$, the function $x\mapsto d^2(x,\Gamma^n)$ is bounded and continuous on $(X,d)$. Hence, directly from the fact that $\Lambda_m\rightharpoonup^*\Lambda$, we deduce $\lim_{m\to\infty}\int_X d^2(x,\Gamma^n)\,d\Lambda_m(x) = \int_X d^2(x,\Gamma^n)\,d\Lambda(x)$.

REMARK C.1. Let us give an indication of the proof strategy for Proposition 2.3, which may be beneficial for readers familiar with the notion of $\Gamma$-convergence. (See [13, 27] for background on $\Gamma$-convergence, which is an abstract notion of convergence of minimization problems; familiarity with this notion is beneficial but not necessary to follow our arguments.) As we remark immediately following Lemma A.3, that lemma can be interpreted as showing that the functional $\mathrm{Length}(\cdot)$ is lower semicontinuous on the space $AC([0,1];X)/\sim$, where $\sim$ denotes equality up to time reparametrization. Similarly, here our strategy amounts to arguing that if $\Lambda_N\rightharpoonup^*\Lambda$, then the sequence of functionals $\mathrm{PPC}(\Lambda_N)$ $\Gamma$-converges to $\mathrm{PPC}(\Lambda)$, at least if the functionals $\mathrm{PPC}(\Lambda_N)$ and $\mathrm{PPC}(\Lambda)$ are understood as taking arguments in the space $AC([0,1];X)/\sim$. We then combine this $\Gamma$-convergence with a compactness argument applied to the sequence of minimizers. We note, moreover, that the final part of the statement of the proposition can also be read as asserting that there exists a subsequence of minimizers $\gamma^{*N}$ which converges to some $\gamma^*$ in a suitable quotient topology
on $AC([0,1];X)/\sim$.

REMARK C.2. We mention that Proposition 2.3 is broadly analogous to a result obtained by two of the authors in [57, Cor. 5.2], albeit in a different setting; see also [84, Lem. 3] for a similar stability result on Euclidean space for the closely related "average distance variational problem" from [15].

PROOF OF PROPOSITION 2.3. Step 1. First, we observe the following. Let $(\gamma^N)_{N\in\mathbb{N}}$ be any sequence in $AC([0,1];X)$ converging uniformly in $t$ to some $\gamma\in AC([0,1];X)$. Then, by combining Lemmas C.1 and A.3(ii), we have that
$$\liminf_{N\to\infty}\left(\int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) + \beta\,\mathrm{Length}(\gamma^N)\right) \geq \int_X d^2(x,\Gamma)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma).$$

Step 2. Let $\gamma\in AC([0,1];X)$ be arbitrary. We claim that there exists a sequence $\gamma^N$ in $AC([0,1];X)$ converging uniformly in $t$ to $\gamma$, such that
$$\lim_{N\to\infty}\left(\int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) + \beta\,\mathrm{Length}(\gamma^N)\right) \leq \int_X d^2(x,\Gamma)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma).$$
Indeed, we can just take $\gamma^N = \gamma$. Then Lemma C.1 tells us that
$$\int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) \to \int_X d^2(x,\Gamma)\,d\Lambda(x),$$
while $\mathrm{Length}(\gamma^N) = \mathrm{Length}(\gamma)$.

Step 3. Next, note that for arbitrary $x,y\in X$ we have $d^2(x,y)\leq\mathrm{diam}(X)^2$, hence for arbitrary $\gamma^N$,
$$\int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) \leq \mathrm{diam}(X)^2$$
also. Furthermore, if we consider a $\gamma^N$ which is constant in time, then $\gamma^N$ has length zero, hence
$$\min_{\gamma\in AC([0,1];X)} \mathrm{PPC}(\Lambda_N) \leq \int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) + \beta\,\mathrm{Length}(\gamma^N) = \int_X d^2(x,\Gamma^N)\,d\Lambda_N(x) \leq \mathrm{diam}(X)^2.$$
Consequently, letting $\gamma^{*N}$ denote a minimizer for $\mathrm{PPC}(\Lambda_N)$, it follows that $\mathrm{Length}(\gamma^{*N}) \leq \mathrm{diam}(X)^2/\beta$ also. We can take $\gamma^{*N}$ to be constant speed without loss of generality. And so, arguing as in the proof of Proposition 2.1, by Lemma A.3(i) we deduce that, passing to a subsequence $N_k$, there exists some AC curve $\gamma^*$ such that $\gamma^{*N_k}_t\to\gamma^*_t$ uniformly. More generally, let $\gamma^{*N_k}_t$ be any subsequence of minimizers of $\mathrm{PPC}(\Lambda_N)$ which converges uniformly in $t$, with limit $\gamma^*\in AC([0,1];X)$. Now, fix any $\gamma\in AC([0,1];X)$. Step 1 implies that
$$\int_X d^2(x,\Gamma^*)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma^*) \leq \liminf_{N_k\to\infty}\left(\int_X d^2(x,\Gamma^{*N_k})\,d\Lambda_{N_k}(x) + \beta\,\mathrm{Length}(\gamma^{*N_k})\right),$$
and since each $\gamma^{*N_k}$ minimizes $\mathrm{PPC}(\Lambda_{N_k})$, letting $\gamma^{N_k} = \gamma$ as in Step 2 yields
$$\leq \lim_{N_k\to\infty}\left(\int_X d^2(x,\Gamma^{N_k})\,d\Lambda_{N_k}(x) + \beta\,\mathrm{Length}(\gamma^{N_k})\right) = \int_X d^2(x,\Gamma)\,d\Lambda(x) + \beta\,\mathrm{Length}(\gamma).$$
As $\gamma$ was chosen arbitrarily, this shows that $\gamma^*$ is a minimizer of $\mathrm{PPC}(\Lambda)$ as desired.

REMARK C.3. We structure the proof of Theorem 2.5 below as if we are proving the $\Gamma$-convergence of $\mathrm{PPC}^K(\Lambda_N)$ to $\mathrm{PPC}(\Lambda)$, and then using compactness to extract a convergent subsequence of minimizers, which therefore converges to a minimizer of $\mathrm{PPC}(\Lambda)$. However, we alert the reader who is a specialist in $\Gamma$-convergence that, in order for the proof below to "really" be thought of as establishing $\Gamma$-convergence, we must formally replace the space on which $\mathrm{PPC}^K(\Lambda_N)$ is defined: specifically, we must replace the space of $K$-tuples with the space of $K$-piecewise geodesics, and then think of the integral $\int_X d^2(x,\Gamma^K)\,d\Lambda(x)$ as a functional over $K$-piecewise geodesics depending only on the locations of the knots. Subject to this replacement, $\mathrm{PPC}^K(\Lambda_N)$ and $\mathrm{PPC}(\Lambda)$ are now defined atop a common space (namely the space $AC([0,1];X)/\sim$ mentioned previously), and thus the question of whether $\mathrm{PPC}^K(\Lambda_N)$ $\Gamma$-converges to $\mathrm{PPC}(\Lambda)$ is well-defined. (However, we emphasize that the validity of our proof does not hinge on these abstract analytic considerations.)

PROOF OF THEOREM 2.5. Step 1. We consider a sequence of sets of points $(\{\gamma^K_k\}_{k=1}^K)_{K\in\mathbb{N}}$. For each $K$, let $\gamma^K$ denote a constant speed piecewise geodesic connecting the points
γK kto 40 γK k+1, for all 1≤k≤K−1; such a γKexists because Xis a geodesic metric space. Note that by construction, we have that Length (γK) =K−1X k=1d(γK k,γK k+1). At the same time, let ΓK contdenote the graph of γK. Note that ΓK⊂ΓK cont, and so for any x∈X, we have d2(x,ΓK cont)≤d2(x,ΓK). Therefore, Z Xd2(x,ΓK cont)dΛN(x) +βLength (γK)≤Z Xd2(x,ΓK)dΛN(x) +βK−1X k=1d(γK k,γK k+1). Suppose that γKconverges uniformly in tto some limiting AC curve γ. Applying Lemma C.1, we see that lim K→∞,N→∞Z Xd2(x,ΓK cont)dΛN(x) =Z Xd2(x,Γ)dΛ(x). Combining this with the facts that liminf K→∞Length (γK)≥Length (γ)(from Lemma A.3(ii)) and Length (γK) =PK−1 k=1d(γK k,γK k+1), we see that liminf K,N→∞ Z Xd2(x,ΓK)dΛN(x) +βK−1X k=1d(γK k,γK k+1)! ≥Z Xd2(x,Γ)dΛ(x)+βLength (γ). Step 2. Letγ∈AC([0,1];X)be constant speed. For each K, we define {γK k}K k=1by γK k=γk/K. Note that for each Kwe have Length (γ)≥K−1X k=0d(γK k,γK k+1). We can extend {γK k}K k=1to a piecewise-constant function from [0,1]toXhaving the same range, by setting γK t:=γK ⌊tK⌋. Note that γK tconverges uniformly in ttoγasK→ ∞ , since sup t∈[0,1]d(γt,γK t)≤1 K−1Length (γ). Hence we can apply Lemma C.1 to deduce that lim K,N→∞Z Xd2(x,ΓK)dΛN(x) =Z Xd2(x,Γ)dΛ(x). It follows that limsup K,N→∞ Z Xd2(x,ΓK)dΛN(x) +βK−1X k=1d(γK k,γK k+1)! ≤Z Xd2(x,Γ)dΛ(x)+βLength (γ). Step 3 . Finally, given any constant speed γ∈AC([0,1];X), letγK kbe defined as in Step 2; and, let {γ∗K k}K k=1be a minimizer of PPCK(ΛN), and let γ∗Kbe a constant-speed piecewise geodesic interpolation of {γ∗K k}K k=1. First we claim that the γ∗K’s have uniformly bounded length. Note that for any K∈N, we can always consider a competitor {¯γK k}K k=1where all Kknots are located at the same (arbitrary) point in X; in this case,PK−1 k=0d(¯γK k,¯γK k+1) = 0 andR Xd2(x,¯ΓK)dΛN(x)≤ PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 41 Diam (X)2. 
It therefore follows, from the fact that {∗γK k}K k=1is a minimizer of PPC K(ΛN), that Diam (X)2≥Z Xd2(x,Γ∗K)dΛN(x) +βK−1X k=0d γ∗K k,γ∗K k+1 ≥βK−1X k=0d γ∗K k,γ∗K k+1 . Consequently, Length (γ∗K) =PK−1 k=0d γ∗K k,γ∗K k+1 ≤β−1Diam (X)2uniformly in K. So by Lemma A.3 we can pass to a subsequence Kjand extract a limiting AC curve γ∗such thatγ∗Kj t→γ∗ tuniformly in t; and moreover, liminf Kj→∞Length (∗γKj)≥Length (γ∗). More generally, let {γ∗Kj k}Kj k=1be any subsequence of minimizers of PPCKj(ΛN), with geodesic interpolations γ∗Kj, such that γ∗Kj t converges uniformly in tto some γ∗∈ AC([0,1];X). We deduce that Z Xd2(x,Γ∗)dΛ(x) +βLength (γ∗) ≤liminf N,K j→∞ Z Xd2(x,Γ∗Kj)dΛN(x) +βKj−1X k=1d(γ∗Kj k,γ∗Kj k+1)  ≤limsup N,K j→∞ Z Xd2(x,ΓKj)dΛN(x) +βKj−1X k=1d(γKj k,γKj k+1)  ≤Z Xd2(x,Γ)dΛ(x) +βLength (γ) where, in the computation above, we have used Step 1 in the first inequality, the fact that {γ∗K k}K k=1is a minimizer in the second inequality, and Step 2 in the third inequality. As γ was chosen arbitrarily (up to time reparametrization), this shows that γ∗is a minimizer of PPC(Λ)as desired. C.2. Proofs for Section 3. LEMMA C.2. LetVbe a complete metric space. Let dP(V)be a complete separa- ble metric on P(V). Suppose that for each m= 1,...,M , we have µm,µ′
m∈ P(V), and dP(V)(µm,µ′ m)< ε. Then, W1 1 MMX m=1δµm,1 MMX m=1δµ′ m! < ε. PROOF . Consider the map Twhich sends δµmtoδµ′ m; with respect to this map, we have that Z P(V)dP(V)(ν,T(ν))d 1 MMX m=1δµm! (ν) =1 MMX m=1dP(V)(µm,µ′ m)< ε. This directly implies that W1 1 MPM m=1δµm,1 MPM m=1δµ′ m < εas desired. 42 PROOF OF THEOREM 3.1. Assume that dP(V)metrizes weak* convergence. In what fol- lows, we condition on the event that W1(ΛN,Λ)→0, which is guaranteed to have probability 1 by the Glivenko-Cantelli theorem applied to P(P(V)). From the triangle inequality, it then suffices to show that W1(ΛN,ˆΛN,M)→0, either in probability or a.s. respectively. We take Mto depend on N, and assume that M→ ∞ asN→ ∞ . We take dP(V)to be a maximum mean discrepancy (MMD) distance MMD Hcoming from a reproducing kernel Hilbert space Hof functions from VtoR, for which MMD Hmetrizes weak* convergence on P(V); the existence of such an RKHS is guaranteed by the results described in Appendix B. From Lemma C.2, we have that Eh W1(ΛN,ˆΛN,M)i ≤1 NNX n=1E[MMD H(µn,ˆµn)]. From Appendix B we have that E[MMD H(µn,ˆµn)] =O(M−1/2), and so Eh W1(ΛN,ˆΛN,M)i → 0asN,M→ ∞ . This implies that there exists a subsequence of N,M (which we do not rela- bel) along which W1(ΛN,ˆΛN,M)→0almost surely; it follows that along this subsequence, W1(ˆΛN,M,Λ)→0almost surely as desired. For the second claim, we study the concentration of MMD H(µn,ˆµn)around its expecta- tion. By Appendix B, we have for all t >0, P[MMD H(µn,ˆµn)≥t+E[MMD H(µn,ˆµn)]]≤exp −Mt2 DH for some constant DH>0. Take t=M−γfor some γ∈(0,1/2); with probability at least 1−Nexp −M1−2γ DH , it then holds that W1(ΛN,ˆΛN,M)≤M−γ+1 NNX i=nE[MMD H(µn,ˆµn)]. By the Borel-Cantelli lemma, this inequality holds almost surely for all but finitely many N (which implies that W1(ΛN,ˆΛN,M)→0almost surely) provided that ∞X N=1Nexp −M1−2γ DH <∞. 
In turn, it suffices that for some γ∈(0,1/2)andβ >1, we have Nexp −M1−2γ DH ≤N−β forNandMsufficiently large, in other words, M≥(DH(β+ 1)log N)1/(1−2γ). This holds NandMsufficiently large as long as there exists constants C >0andq >1such thatM≥C(logN)q. (Lastly, in the case where the number of samples Mninµndepends on the index n, the argument above still applies if we take M:= min 1≤n≤N: this is because M−1/2≥M−1/2 n and similarly for exp −Mt2 DH , so all the same estimates hold.) PROOF OF PROPOSITION 3.2. As before, we condition on the event that W1(ΛN,Λ)→ 0, which is guaranteed to have probability 1 by Glivenko-Cantelli. It then suffices to show thatW1(ΛN,ˆΛN,M,R )→0in probability. PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 43 From Lemma C.2, we have that W1(ΛN,ˆΛN,M)≤1 NNX n=1W1(µn,ˆµM n) and from Proposition 1.1 of [12] we have that E[W1(µn,ˆµM n)]→0asM→ ∞ , at a uniform rate which depends only on d. Now, let ˜µM,R ndenote the finite-read approximation of ˆµM nwithRmany total reads (in the sense of equation (4)). Applying Lemma C.2 again, we have that W1(ˆΛN,M,ˆΛN,M,R )≤1 NNX n=1W1(ˆµM n,˜µM,R n) and by [54, Theorem 2.4], we have that E[W1(ˆµM n,˜µM,R n)]≤Cp M/R +oM(1)asM→ ∞ (in particular the estimate is uniform), as long Assumption
2.3 from that article holds. It follows that, assuming M/R→0, we have Eh W1(ΛN,ˆΛN,M,R )i ≤Eh W1(ΛN,ˆΛN,M)i +Eh W1(ˆΛN,M,ˆΛN,M,R )i ≤NX n=1E[W1(µn,ˆµM n)] +E[W1(ˆµM n,˜µM,R n)] →0. This implies that W1(ΛN,ˆΛN,M,R )→0in probability, as desired. C.3. Proofs for Section 4. PROOF OF PROPOSITION 4.1. (i) Let Pdenote the range of ρtinX. Since Λis supported onP, we observe thatR Xd2(x,P)dΛ(x) = 0 . At the same time, since γ∗βis a minimizer for PPC(Λ;β), we have that Z Xd2(x,Γ∗β)dΛ(x) +βLength (γ∗β)≤Z Xd2(x,P)dΛ(x) +βLength (ρ) where Γ∗βis the graph of γ∗β. It follows that 0≤Z Xd2(x,Γ∗β)dΛ(x)≤β Length (ρ)−Length (γ∗β) which establishes the first claim. (ii) Note that by change of variables for measures, Z Xd2(x,Γ∗β)dΛ(x) =Z1 0d2(ρt,Γ∗β)dt. Hence, our estimate from the proof of (i) shows that Z1 0d2(ρt,Γ∗β)dt≤β Length (ρ)−Length (γ∗β) ≤βLength (ρ). The claim now follows from Markov’s inequality, which here indicates that Leb[0,1]n t:d2(ρt,Γ∗β)≥αo ≤1 αZ1 0d2(ρt,Γ∗β)dt. 44 PROOF OF THEOREM 4.2. In Proposition 4.1, we saw that Length (γ∗β)≤Length (ρ), so by Arzelà-Ascoli’s theorem we can extract a subsequence which converges, uniformly in t, to a curve γ∗∈AC([0,1];X). By the lower semicontinuity of Length (·), we also have that Length (γ∗)≤Length (ρ). At the same time, we have the estimate Z Xd2(x,Γ∗β)dΛ(x)≤βLength (ρ). Sending β→0, we see from Lemma C.1 thatR Xd2(x,Γ∗)dΛ(x) = 0 . In particular, γ∗is a minimizer of PPC (Λ;0) . More generally the same applies for any convergent subsequence (γ∗βn)n∈N. We first check that Γ∗⊇P. Since 0 =R1 0d2(ρt,Γ∗)dt≥R {t:ρt/∈Γ}d2(ρt,Γ∗)dt,the set {t:ρt/∈Γ∗}has zero Lebesgue measure. On the other hand, it is an open set in [0,1]because γ∗andρare continuous from [0,1]. Therefore {t:ρt/∈Γ∗}is an empty set, verifying Γ∗⊇ supp(Λ) = P. Now, suppose for the sake of contradiction that Γ∗⊋P. In order to establish contradiction, it suffices to show that Γ∗⊋Pimplies Length (γ∗)>Length (ρ). 
(This is more or less obvi- ous for curves in Euclidean space, but since we work in a general compact metric space we give a verbose justification.) In this argument we will use the assumption that ρis injective. Observe that since Γis compact and Pis compact, {t:γ∗ t/∈P}is open inside [0,1]. IfΓ∗⊋Pthen{t:γ∗ t/∈P}is non-empty, and so contains some non-empty compact sub- interval K⊂ {t:γ∗ t/∈P}. Accordingly let γ∗ k0∈ {t:γ∗ t/∈P}; note this point is disjoint from P, since Pis compact. Define the sets T0:={t∈[0,1] :t < k 0,γ∗ t∈P}; T1:={t∈[0,1] :t > k 0,γ∗ t∈P}. Note that at least one of T0andT1must be non-empty since P∩Γ̸=∅. Without loss of generality let T0̸=∅. Let t0be the last time before k0such that γ∗ t0∈P(such a last time exists since P∩Γ∗is closed). Since γ∗ (·)is absolutely continuous, we have Length (γ∗⌞[t0,k0])≥d(γ∗ t0,γ∗ k0)>0. Moreover, we have that Length (γ∗)≥Length (γ∗⌞[0,t0]) +Length (γ∗⌞[t0,k0]). We now consider two cases. If T1is empty, then by construction the graph of γ∗⌞[0,t0] contains P. But since Length (ρ) =H1(P)and Length (γ∗⌞[0,t0])≥ H1(Γ∗⌞[0,t0])by Lemma A.5, this establishes that Length (γ∗⌞[0,t0])≥Length (ρ). Therefore Length (γ∗)> Length (ρ)which is a contradiction. (The case where T0is empty and T1is non-empty is symmetric.) Now we consider the case where T0andT1are both non-empty, which is very similar.
Let t1denote the first time after k0thatγ∗ t1∈T1. It holds that Length (γ∗⌞[t0,k0])≥d(γ∗ t0,γ∗ k0)>0, Length (γ∗⌞[k0,t1])≥d(γ∗ k0,γ∗ t1)>0, and Length (γ∗) =Length (γ∗⌞[0,t0]) +Length (γ∗⌞[t0,k0]) +Length (γ∗⌞[k0,t1]) +Length (γ∗⌞[t1,1]) PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 45 while by construction the (disjoint) union Γ∗⌞[0,t0]∪Γ∗⌞[t1,1]contains P. Hence by Lemma A.5, Length (γ∗⌞[0,t0]) +Length (γ∗⌞[t1,1])≥ H1(Γ∗⌞[0,t0]∪Γ∗⌞[t1,1]) ≥ H1(P) =Length (ρ). This likewise shows that Length (γ∗)>Length (ρ), as desired. It remains only to argue that γ∗coincides with a reparametrization of ρwhich is ether order-preserving or order-reversing. Let ˆγ∗andˆρdenote the constant speed reparametriza- tions of γ∗andρrespectively. Of course, Length(ˆ ρ) = Length(ˆ γ∗)(by Lemma A.5). It follows from Lemma A.9 that ˆγ∗= ˆρup to time-reversal. Therefore, since ˆγ∗is an order- preserving/reversing reparametrization of γ∗(and vice versa), it follows that γ∗is either an order-preserving or order-reversing reparametrization of ˆρ, and hence of ρ. PROOF OF PROPOSITION 4.4. Let ˜ρtbe the reparametrization of ρtguaranteed by Theo- rem 4.2, and without the loss of generality, assume ˜ρtis order-preserving. Since the inverse map˜ρt7→tis continuous (Lemma A.6) and Γis compact, ˜ρt7→tis in fact uniformly contin- uous; that is, for all ϵ >0, there exists δ(ϵ)>0such that (∀s,t∈[0,1]) d(ρt,ρs)< δ(ϵ) =⇒ |t−s|< ϵ. Fix an ϵ0>0withmini,jd(ρi,ρj)>4ϵ0and let ϵ <min{ϵ0,1 2δ(ϵ0/Length( ρ))}. Again by Theorem 4.2, taking β >0sufficiently small yields supt∈[0,1]d(γ∗β t,˜ρt)< ϵ. For all i, letti∈[0,1]such that ˜ρti=ρi, and also let ˆτi∈argmint∈[0,1]d(γ∗β t,˜ρti). It suffices to show that for all i,j, we have ti< tjiffˆτi<ˆτj. To whit, applying the reverse triangle inequality at ( a), (b) below yields d(γ∗β ti,γ∗β tj)(a) ≥ |d(γ∗β ti,˜ρj)−d(˜ρj,γ∗β tj)| ≥d(γ∗β ti,˜ρj)(b) ≥ |d(γ∗β ti,˜ρi)−d(˜ρi,˜ρj)|(c) ≥3ϵ0, with ( c) following from d(γ∗β ti,˜ρti)< ϵ < ϵ 0andd(˜ρi,˜ρj)>4ϵ0. 
Since γ∗β tis parametrized to have constant speed, the overall inequality implies |ti−tj| ≥3ϵ0/Length( ρ). On the other hand, d(˜ρˆτi,˜ρti)≤d(˜ρˆτi,γ∗β ˆτi) +d(γ∗β ˆτi,˜ρti)(d) ≤d(˜ρˆτi,γ∗β ˆτi) +d(γ∗β ti,˜ρti)(e) < δ(ϵ0/Length( ρ)), where ( d) follows from construction of ˆτiand (e) follows from suptd(γ∗β t,˜ρt)< ϵ. By defi- nition of δ, this implies |ˆτi−ti| ≤ϵ0/Length( ρ). The symmetric argument gives |tj−ˆτj|< ϵ0/Length( ρ)as well. Together with the earlier bound |ti−tj| ≥3ϵ0/Length( ρ)we see ti< tjiffˆτi<ˆτj, as desired. APPENDIX D: EXTENSIONS FOR SECTION 2 In this appendix, we discuss some extra modifications one can make to the principal curve variational problem and its discretization, beyond what has been explained in the main text. D.1. Nonlocal discretization. Here we consider a nonlocal discretization scheme for the PPC functional, similar to what was originally proposed in practice in [42] (and likewise is the scheme used in the R principal curves package). In experiments, we have found such schemes to be useful at the finite-data level since they allow for stable performance with a greater number of knots given a fixed amount of data: in other words, they allow us to increase 46 the resolution of the discretization of the principal curve. Of separate interest, it has been suggested (see discussion in [42]) that this type of nonlocal kernel smoothing scheme might induce a form of implicit regularization , even in the absence of an explicit length penalty. We have not yet
been able to verify this claimed regularization effect mathematically, but leave this interesting direction as one for future work. We consider the following rather general nonlocal, nonuniform smoothing kernel. Let w: R+→R+be a Borel function which is compactly supported on [0,1]. We write wh(t):=1 hwt h forh >0. Note whis compactly supported on [0,h]. In practice, we suggest a wof the form (1− |t|p)q +(a.k.a. an “Epanechnikov kernel”), as is a standard choice for kernel smoothers used in nonparametric regression. For simplicity, in what follows we only consider the case where ΛN:=1 NPN n=1δxnfor points xn∈X, and where ΛNconverges to some limiting measure ΛasN→ ∞ . Now we consider the nonlocal, discrete objective PPCK w(ΛN)(γ1,...,γ K) :=1 NKX k=1KX j=1X xn∈Ijd2(xn,γk)1 Cjwhj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! +βK−1X i=1d(γi,γi+1) where dDist(γj,γk)denotes the arcwise distance from γjtoγkalong the discrete curve {γk}K k=1; in other words, dDist(γj,γk):=  Pk−1 i=jd(γi,γi+1)j < kPj−1 i=kd(γi,γi+1)k < j 0 j=k. Here Cjis a normalization constant that is chosen so that1 CjPK k=1whjdDist(γj,γk)PK−1 i=1d(γi,γi+1) = 1. Note that for each jwe allow the choice of a different hj; for example, each hjcan be chosen adaptively given the data. In what follows, we write ¯h:= max j∈[1,K]hj. PROPOSITION D.1. LetXbe a compact geodesic metric space. Minimizers of the func- tional PPCK w(ΛN)converge to minimizers of PPC (Λ)asK,N→ ∞ and¯h→0, in the same sense as for Theorem 2.5. We give a proof of this proposition momentarily. The nonlocal discrete objective PPCK w(ΛN)can be used in place of PPCK(ΛN)in our coupled Lloyd’s algorithm (Algorithm 1); the interpretation is that when updating the knots γk, we do not just take into account the V oronoi cell Ikbut also give some weight to data located in V oronoi cells Ijfor nearby knots γj. For completeness, we state explicitly what the algorithm looks like with this alternate objective. 
PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 47 Algorithm 2: Nonlocal Coupled Lloyd’s Algorithm for Principal Curves Input: data{xn}N n=1,parameters β >0,{hk}K k=1,ε >0 1{γk}K k=1←initialize_knots() ; 2repeat 3{γk}K k=1←TSP_ordering ({γk}K k=1);/*min-length ordering */ 4{Ik}K k=1←compute_Voronoi_cells ({γk}K k=1); 5{γk}K k=1←argmin{γ′ k}K k=1PPCK w(ΛN)(γ′ 1,...,γ′ K)using Ik’s from 4 ; 6until ε-convergence ; Result: {γk}K k=1; /*The updated output knots */ PROOF OF PROPOSITION D.1. Our proof strategy is to reduce the convergence of mini- mizers of the nonlocal discrete functional to the same result for the discrete functional, which was already established in the proof of Theorem 2.5. Let{γ1,...,γ K} ⊂X. For simplicity, denote for the first term in the functional PPCK w(ΛN), SN,K:=1 NKX k=1KX j=1X xn∈Ijd2(xn,γk)1 Cjwhj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! . Step 1 . From the fact that Ijis the V oronoi cell for γj, it holds that X xn∈Ijd2(xn,γk)≥X xn∈Ijd2(xn,γj) and so SN,K≥1 NKX k=1KX j=1X xn∈Ijd2(xn,γj)1 Cjwhj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! =1 NKX j=1X xn∈Ijd2(xn,γj)1 CjKX k=1whj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! . Since we chose Cjso that 1 CjKX k=1whj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! = 1, this ensures that for any {γk}K k=1and{xn}N n=1atoms in ΛN, and any ¯h, SN,K+βK−1X i=1d(γi,γi+1)≥1 NKX k=1X xn∈Ikd2(xn,γk) +βK−1X i=1d(γi,γi+1). Accordingly, we can apply Step 1 from the proof of Theorem 2.5, already established, to
deduce that, if γK, the piecewise geodesic curve from {γ1,...,γ K}, converges uniformly in t to some limiting AC curve γ, then liminf N,K→∞;¯h→0 SN,K+βK−1X i=1d(γi,γi+1)! 48 ≥liminf N,K→∞ 1 NKX k=1X xn∈Ikd2(xn,γk) +βK−1X i=1d(γi,γi+1)! ≥Z Xd2(x,Γ)dΛ(x) +βLength (γ). (In fact the same inequality holds even if ¯his not sent to zero; but we do not use this.) Step 2 . We observe that d2(xn,γk)≤d2(xn,γj) + 2d(xn,γj)d(γj,γk) +d2(γj,γk). By assumption on X, we have that d(xn,γj)≤Diam (X); and, since we have assumed that whjis compactly supported on [0,hj], either dDist(γj,γk)PK−1 i=1d(γi,γi+1)< hj,orwhj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! = 0. Additionally, we have that d(γj,γk)≤dDist(γj,γk)simply from the triangle inequality. Thus, since hj≤¯h, we have that, for all jandksuch that whjdDist(γj,γk)PK−1 i=1d(γi,γi+1) >0, d2(xn,γk)≤d2(xn,γj) + 2d(xn,γj)dDist(γj,γk) + dDist(γj,γk)2 ≤d2(xn,γj) + 2 Diam (X)¯hK−1X i=1d(γi,γi+1) + ¯hK−1X i=1d(γi,γi+1)!2 . Now, let us assume thatPK−1 i=1d(γi,γi+1)≤Diam (X)2/β; for establishing convergence of minimizers, this shall result in no loss of generality, as we have already seen in previous arguments. Under this assumption, d2(xn,γk)≤d2(xn,γj) + 2 Diam (X)3¯h/β+¯h2Diam (X)4/β2. It follows that, for any {γk}K k=1,ΛN, and ¯h, SN,K≤1 NKX k=1KX j=1X xn∈Ij d2(xn,γj) +O(¯h)1 Cjwhj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! =1 NKX j=1X xn∈Ij d2(xn,γj) +O(¯h)1 CjKX k=1whj dDist(γj,γk)PK−1 i=1d(γi,γi+1)! =1 NKX j=1X xn∈Ij d2(xn,γj) +O(¯h) = 1 NKX j=1X xn∈Ijd2(xn,γj) +O(¯h). For the last line, notice that the double summation has only Neffective terms. Consequently, PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 49 limsup N,K→∞;¯h→0 SN,K+βK−1X i=1d(γi,γi+1)! ≤limsup N,K→∞;¯h→0 1 NKX j=1X xn∈Ij d2(xn,γj) +O(¯h) +βK−1X i=1d(γi,γi+1)  ≤limsup N,K→∞ 1 NKX j=1X xn∈Ijd2(xn,γj) +βK−1X i=1d(γi,γi+1) . 
Finally, applying Step 2 from the proof of Theorem 2.5, for the specific choice of {γk}K k=1 from that Step 2, it holds that limsup N,K→∞ 1 NKX j=1X xn∈Ijd2(xn,γj) +βK−1X i=1d(γi,γi+1) ≤Z Xd2(x,Γ)dΛ(x)+βLength (γ). Step 3. Lastly, the conclusion of the proof is now identical to Step 3 of the proof of Theo- rem 2.5. D.2. Fixed endpoints and semi-supervision. Fixed endpoints. Here we want to produce a curve which best fits the data (subject to length penalty), but with endpoints ¯γ0and¯γ1which are specified in advance. In other words, we solve the modified optimization problem min γ∈AC([0,1];X)Z Xd2(x,Γ)dΛ(x) +βLength (γ) :γ0= ¯γ0,γ1= ¯γ1 . We note that the set {γ∈AC([0,1];X) :γ0= ¯γ0,γ1= ¯γ1}is closed inside AC([0,1];X), so one can prove existence of minimizers by an identical argument to the one we provide in the proof of Proposition 2.1. Likewise, the corresponding optimization problem for discretized γ takes the form min γ1,...,γ K∈X(Z Xd2(x,ΓK)dΛ(x) +βK−1X k=1d(γk,γk+1) :γ1= ¯γ0,γK= ¯γK) . In particular, the Lloyd-type algorithm we provide for minimizing PPCK(Λ)can be easily modified to consider this optimization problem; one simply initializes the knots γ0andγK appropriately and leaves them fixed at each iteration. Furthermore this discretization can be shown to converge to the continuum problem in the sense of Theorem 2.5 by an identical argument, also. Likewise, one can also take γ0andγKas fixed in the argument of PPCK w(ΛN)(·)and run Algorithm 2 with γ0andγKfixed throughout. This is a minute modification of Algorithm 2, but since it is this modification which we use in our experiments
we present it explicitly below. Note that the only changes are that: in line 3, the TSP subroutine should be understood as keeping γ0andγKfixed while being allowed to permute the other indices of the knots; and, in line 5, one only updates the locations of the knots {γk}K−1 k=2. 50 Algorithm 3: Nonlocal, Fixed-Endpoint Coupled Lloyd’s Algorithm Input: data{xn}N n=1,parameters β >0,{hk}K k=1,ε >0 1{γk}K k=1←initialize_knots() ; 2repeat 3{γk}K k=1←TSP_ordering ({γk}K k=1);/*min-length ordering */ 4{Ik}K k=1←compute_Voronoi_cells ({γk}K k=1); 5{γk}K k=1←argmin{γ′ k}K−1 k=2PPCK w(ΛN)(γ′ 1,...,γ′ K)using Ik’s from 4 ; 6until ε-convergence ; Result: {γk}K k=1; /*The updated output knots */ Statistically, one can think of this modified optimization problem as follows. Let the distri- bution Λdescribe a noisy observation of a ground truth curve ρtwhich we are trying to infer. If an oracle has told us the locations of ρ0andρ1, but the other temporal values of ρ(·)are unlabeled, then we can estimate ρtup to time-reparametrization by solving the optimization problem above with ρ0= ¯γ0andρ1= ¯γ1. This semi-supervised estimation of ρtis especially important in the application to the seriation problem we consider in Section 4, as the initial and terminal temporal labels are sufficient to provide identifiability of the temporal ordering in the sense of Proposition 4.4. Semi-supervision. More generally, subsets of AC([0,1];X)of the form {γ∈AC([0,1];X) :∀j= 1,...,J,γ tj= ¯γtj} for finitely many fixed points ¯γtj, are closed inside AC([0,1];X). In particular, if we consider the semi-supervised problem where an oracle specifies the ground truth ρtfor finitely many time points, then the corresponding PPC-type optimization problem is given by min γ∈AC([0,1];X)Z Xd2(x,Γ)dΛ(x) +βLength (γ) :∀j= 1,...J,γ tj= ¯γtj . 
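As an illustrative sketch (not the authors' implementation), the discrete objective with fixed or semi-supervised knots can be written out in the Euclidean case, where $d$ is the Euclidean distance; all names below are ours, and a real optimizer would minimize over the unconstrained knots only:

```python
import numpy as np

def ppc_objective(knots, data, beta):
    """Discrete PPC objective: mean squared distance from each data point to
    its nearest knot, plus beta times the polygonal length of the knot chain.
    (Euclidean sketch; in the paper d is a general geodesic metric.)"""
    # squared distance from every data point to every knot
    d2 = ((data[:, None, :] - knots[None, :, :]) ** 2).sum(axis=-1)
    fit = d2.min(axis=1).mean()  # (1/N) sum_n d^2(x_n, Gamma_K)
    length = np.linalg.norm(np.diff(knots, axis=0), axis=1).sum()
    return fit + beta * length

# Fixing endpoints (or midpoints) just means optimizing over the interior
# knots while keeping the constrained entries pinned.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2)) * 0.05 + np.linspace([0, 0], [1, 1], 200)
knots = np.linspace([0, 0], [1, 1], 10)  # gamma_1, gamma_K pinned at [0,0], [1,1]
val = ppc_objective(knots, data, beta=0.1)
```

A knot chain that tracks the data should score better than one far from it, which is the quantity the Lloyd-type iterations decrease.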
This type of semi-supervised inference problem makes sense in the setting of the seriation problem, if temporal labels are accessible in principle but substantially more costly than unlabeled data. Indeed many existing trajectory inference methods for single-cell omics data allow for semi-supervised specification of some of the data [76]. We can also impose this semi-supervision at the discrete level: suppose for simplicity that the times tjare rational and that Kis sufficiently large that for each j,tj=kj/Kfor some 1≤kj≤K. Then, min γ1,...,γ K∈X(Z Xd2(x,ΓK)dΛ(x) +βK−1X k=1d(γk,γk+1) :∀j= 1,...,J,γ kj= ¯γkj,) is a discrete objective with Jmany fixed midpoints, and its consistency with the continuum objective with fixed midpoints can be proved along identical lines to the proof of Theorem 2.5. Likewise one can employ the nonlocal discrete objective PPCK wwith fixed midpoints in an identical fashion, and modify Algorithm 3 so that the TSP subroutine in Step 2 is instead constrained to hit the intermediate fixed knots ¯γkjat the specified intermediate indices. APPENDIX E: ADDITIONAL DETAILS FOR EXPERIMENTS This Appendix provides additional details for the experiments we describe in Section 4.2 of the main text. PRINCIPAL CURVES IN METRIC SPACES AND W2 SPACE 51 Description of prior seriation methods. As mentioned in the main text, we compare our approach to seriation based on principal curves with two existing seriation methods: seriation based on the Traveling Salesman Problem (TSP) [59], and spectral seriation [6, 33, 68]. Both approaches shall take as input the matrix W= [W2(ˆρti,ˆρtj)]i,jof
pairwise $W_2$ distances between the empirical measures $\hat\rho_{t_i}$. For TSP, we regard $W$ as a matrix of edge weights in a complete graph of time points. The TSP approach to seriation is to visit all nodes exactly once on a distance-minimizing path, and to use the ordering given by that minimizing path. This method optionally allows the user to specify fixed initial and terminal nodes. The Traveling Salesman Problem can be solved exactly for up to thousands of nodes using solvers such as Concorde [5]. For spectral seriation, we create a similarity matrix $A$ using a Gaussian kernel with bandwidth $\sigma$: $A = \exp(-W^2/\sigma^2)$. We then form the normalized Laplacian $L = D^{-1/2} A D^{-1/2}$, where $D = \mathrm{diag}\big(\sum_{j=1}^N A_{1,j}, \dots, \sum_{j=1}^N A_{N,j}\big)$. The Fiedler eigenvector of $L$ then imposes a seriation [6]. Solvers exist which allow us to solve eigenvector problems for large matrices [10, 65], allowing us also to apply spectral seriation alongside principal curves.
Parameter selection. For our experiments described in Section 4.2, we use the version of the principal curves objective described in Appendix D with a nonlocal kernel and fixed endpoints. This objective $\mathrm{PPC}^K_w$ depends on parameters $h$ and $\beta$, which must be chosen by the user. Additionally, spectral seriation depends on a kernel bandwidth parameter $\sigma$, which likewise must be chosen by the user. To evaluate both methods fairly, we performed a series of parameter sweeps on training data to find optimal parameters for both methods. For the principal curves method, two sweeps were performed for each dataset. The first sweep covers $h, \beta \in [0.01, 0.5]$ on a $10 \times 10$ grid. Performance is taken to be the average of the Kendall tau error over 6 repeats. The second sweep was taken on a narrower range with $h, \beta \in [0.01, 0.25]$. These sweeps found the optimal $h = 0.037$ and $\beta = 0.17$ for the simple branching curve (Test Dataset 1), and the optimal $h = 0.01$ and $\beta = 0.037$ for the bent curve (Test Dataset 2). Summary plots of performance can be seen in Figure 5 and Figure 6.
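The spectral seriation baseline described above can be sketched in a few lines of numpy; this is our own minimal illustration, not the authors' code, and it orders items by the eigenvector associated with the second-smallest eigenvalue of the symmetric normalized Laplacian:

```python
import numpy as np

def spectral_seriation(W, sigma):
    """Order items from a pairwise-distance matrix W via the Fiedler vector
    of the normalized Laplacian of a Gaussian-kernel similarity matrix."""
    A = np.exp(-W**2 / sigma**2)                 # Gaussian-kernel similarity
    dinv = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    L = np.eye(len(W)) - dinv @ A @ dinv         # symmetric normalized Laplacian
    vals, vecs = np.linalg.eigh(L)               # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]                         # 2nd-smallest eigenvalue's vector
    return np.argsort(fiedler)

# Points on a line, shuffled: seriation should recover the order (up to reversal).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)
perm = rng.permutation(20)
W = np.abs(t[perm][:, None] - t[perm][None, :])  # distance matrix of shuffled points
order = spectral_seriation(W, sigma=0.5)
```

As with the principal-curves ordering, the recovered seriation is only identifiable up to reversal, since reversing the Fiedler vector's sign reverses the sort.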
Fig 5: A set of parameter sweeps on Test Dataset 1, generated with 250 time points, 10000 total atoms, and a variance of 0.01. Values are given as an average of 6 repeats. The optimal value for each run is highlighted with a red rectangle.
Fig 6: A set of parameter sweeps on Test Dataset 2, generated with 250 time points, 10000 total atoms, and a variance of 0.01. Values are given as an average of 6 repeats. The optimal value for each run is highlighted with a red rectangle.
For spectral seriation, we ran one sweep per dataset covering a range of $\sigma \in [0,3]$ with 1000 equally spaced values. The optimal value of $\sigma$ was found to be 0.5 for Test Dataset 1 and 0.315 for Test Dataset 2. Summary plots of these sweeps can be seen in Figure 7.
Fig 7: A sweep of kernel bandwidths for spectral seriation for Test Dataset 1 (A) and Test Dataset 2 (B). Reported error is taken as the average of 6 repeats.
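The Kendall tau error used as the sweep metric counts the fraction of item pairs whose relative order disagrees with the ground truth; a small self-contained sketch (names are ours), made reversal-invariant since the temporal ordering is only identifiable up to time reversal:

```python
from itertools import combinations

def kendall_tau_error(order, truth):
    """Fraction of pairs (a, b) ordered differently in `order` than in `truth`."""
    pos = {v: i for i, v in enumerate(order)}   # position of each item in `order`
    pairs = list(combinations(truth, 2))        # (a, b) with a before b in truth
    discordant = sum(1 for a, b in pairs if pos[a] > pos[b])
    return discordant / len(pairs)

def seriation_error(order, truth):
    """Score against the truth and its reversal, keeping the better of the two."""
    return min(kendall_tau_error(order, truth),
               kendall_tau_error(order, truth[::-1]))
```

With this convention a perfect or perfectly reversed ordering scores 0, and a single adjacent transposition among $K$ items scores $1/\binom{K}{2}$.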
arXiv:2505.04218v1 [math.PR] 7 May 2025
Convergence rate of Euler-Maruyama scheme to the invariant probability measure under total variation distance
Yinna Ye^a, Xiequan Fan^{b,∗}
^a Department of Applied Mathematics, School of Mathematics and Physics, Xi'an Jiaotong-Liverpool University, Suzhou 215123, P. R. China
^b School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, P. R. China
Abstract
This article shows the geometric decay rate of the Euler-Maruyama scheme for a one-dimensional stochastic differential equation towards its invariant probability measure under the total variation distance. Firstly, the existence and uniqueness of the invariant probability measure and the uniform geometric ergodicity of the chain are studied through the introduction of non-atomic Markov chains. Secondly, equivalent conditions for the uniform geometric ergodicity of the chain are discovered by constructing a split Markov chain based on the original Euler-Maruyama scheme. It turns out that this convergence rate is independent of the step size under the total variation distance.
Keywords: Euler-Maruyama scheme, Invariant probability measure, Total variation distance, Uniform geometric ergodicity, Langevin Monte Carlo, Markov chain Monte Carlo
2000 MSC: Primary 60J27, 60B10, 62E20; Secondary 60H10, 62L20, 37M25
1. Introduction
Consider the following stochastic differential equation (SDE) on $\mathbb{R}$:
$$dX_t = g(X_t)\,dt + \sigma\,dB_t, \qquad X_0 = x_0, \qquad (1.1)$$
where $(B_t)_{t\geqslant 0}$ is a standard Brownian motion, $\sigma > 0$ is a constant, and $g:\mathbb{R}\to\mathbb{R}$ satisfies Assumption 1 (cf. Section 2). For a given step size $\eta \in (0,1)$, the Euler-Maruyama (EM) scheme $(\theta_k)_{k\geqslant 0}$ for the SDE (1.1) is given by the following recursive relation: for any $k \geqslant 0$,
$$\theta_{k+1} = \theta_k + \eta\, g(\theta_k) + \sqrt{\eta}\,\sigma\,\varepsilon_{k+1}, \qquad (1.2)$$
where $(\varepsilon_k)_{k\geqslant 1}$ are independent and identically distributed standard normal random variables.
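As an illustration of the recursion (1.2) (our own sketch, not code from the article), take the drift $g(x) = -x$, which satisfies Assumption 1 with $L = K_1 = 1$ and $K_2 = 0$; for this drift the diffusion (1.1) is an Ornstein-Uhlenbeck process with invariant law $N(0, \sigma^2/2)$, which the chain should approach for small $\eta$:

```python
import numpy as np

def euler_maruyama(g, x0, eta, sigma, n_steps, rng):
    """Iterate theta_{k+1} = theta_k + eta*g(theta_k) + sqrt(eta)*sigma*eps_{k+1},
    with eps_k i.i.d. standard normal, as in (1.2)."""
    theta = np.empty(n_steps + 1)
    theta[0] = x0
    for k in range(n_steps):
        theta[k + 1] = (theta[k] + eta * g(theta[k])
                        + np.sqrt(eta) * sigma * rng.standard_normal())
    return theta

rng = np.random.default_rng(0)
path = euler_maruyama(g=lambda x: -x, x0=5.0, eta=0.1, sigma=1.0,
                      n_steps=20000, rng=rng)
# After a burn-in the chain forgets x0 = 5 and its marginals are close to the
# invariant law N(0, sigma^2/2) = N(0, 0.5), up to an O(eta) discretization bias.
tail = path[2000:]
```

The tail statistics of a single long run already sit near the invariant mean 0 and variance 0.5, with a small bias of order $\eta$, which is the kind of step-size effect the article quantifies in TV distance.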
Over the past decade, EM schemes have been widely and intensively used for solution approximations of SDEs on $\mathbb{R}^d$ with $d \geqslant 1$; see for instance [2], [9]-[11] and [14]-[16]. In the case when $g(x) = -\nabla U(x)$ with $U$ being a potential, (1.1) is called a Langevin diffusion, and (1.2) is called the Langevin Monte Carlo (LMC) algorithm. The LMC algorithm is nowadays a popular class of Markov chain Monte Carlo (MCMC) algorithms [1], which is commonly applied to solve Monte Carlo sampling problems, especially in the field of machine learning (see, for instance, [13] and [6]). Roberts and Tweedie [17] initially provided a probabilistic analysis of the asymptotic behaviour of the LMC algorithm, and found necessary and sufficient conditions for exponential ergodicity of the process $(X_t)_{t\geqslant 0}$ driven by (1.1). Dalalyan [4] established an upper bound for the error in total variation (TV) distance between $\theta_k$ and the ergodic measure $\pi$ of $(X_t)_{t\geqslant 0}$. Recently, Lu et al. [12] developed a central limit theorem for an empirical measure of the EM scheme for an SDE similar to (1.1) on $\mathbb{R}^d$, when $\sigma: \mathbb{R}^d \to \mathbb{R}^{d\times d}$ is a matrix-valued function with $d \geqslant 1$. Fan et al. [8] established normalized and self-normalized Cramér-type moderate deviations for the EM scheme for the SDE (1.1) on $\mathbb{R}^d$ with $d \geqslant 1$. When $(B_t)_{t\geqslant 0}$ in (1.1) becomes a $d$-dimensional $\alpha$-stable Lévy process with $d \geqslant 1$ and the step size is decreasing in time, Chen et al. [3] studied the convergence rate of the EM scheme towards the invariant probability measure of $(X_t)_{t\geqslant 0}$ under the Wasserstein-1 distance. However, convergence rates of the EM scheme towards its invariant probability measure under the TV distance have so far rarely been studied in the literature.
∗Corresponding author
The
https://arxiv.org/abs/2505.04218v1
objective of this work is to prove such a convergence under the TV distance, and moreover to provide the exact convergence rate. To this end, the properties of the EM scheme $(\theta_k)_{k\geqslant 0}$, as a Markov chain on a continuous state space, are studied. It turns out that under Assumption 1, $(\theta_k)_{k\geqslant 0}$ is irreducible and strongly aperiodic (see also Proposition 3.1), and admits a unique invariant probability measure $\pi_\eta$ (see also Theorem 2.1 and Proposition 3.2). Moreover, it is uniformly geometrically ergodic, and converges to $\pi_\eta$ under the TV distance at a geometric rate which is independent of the step size $\eta$ (see also Theorem 2.2). Finally, equivalent conditions for uniform geometric ergodicity (see also Proposition 3.4) are found via a splitting-construction approach.
Let $\mathcal{B}(\mathbb{R})$ be the Borel $\sigma$-algebra on $\mathbb{R}$. From (1.2), it can be seen that $(\theta_k)_{k\geqslant 0}$ is a Markov chain on the continuous state space $\mathbb{R}$, with Markov kernel $P_\eta$ given by
$$P_\eta(x,A) = \mathbb{P}\big(x + \eta g(x) + \sqrt{\eta}\,\sigma\varepsilon_1 \in A\big) = \frac{1}{\sqrt{2\pi\eta}\,\sigma}\int_A \exp\Big(-\frac{(y - x - \eta g(x))^2}{2\eta\sigma^2}\Big)\,dy \qquad (1.3)$$
for any $x \in \mathbb{R}$ and $A \in \mathcal{B}(\mathbb{R})$. From (1.3), $P_\eta$ admits a kernel density with respect to (w.r.t.) the Lebesgue measure given by
$$p_\eta(x,y) = \frac{1}{\sqrt{2\pi\eta}\,\sigma}\exp\Big(-\frac{(y - x - \eta g(x))^2}{2\eta\sigma^2}\Big).$$
The Markov kernel $P_\eta$ can be defined equivalently as, for any $x \in \mathbb{R}$ and measurable function $h$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$,
$$P_\eta h(x) = \mathbb{E}\,h\big(x + \eta g(x) + \sqrt{\eta}\,\sigma\varepsilon_1\big) = \int_{\mathbb{R}} h(y)\,p_\eta(x,y)\,dy.$$
Let us now introduce the TV norm $\|\cdot\|_{\mathrm{TV}}$ and the TV distance $d_{\mathrm{TV}}$, respectively. Suppose that $\xi$ is a finite signed measure on a measurable space $(\mathsf{X},\mathcal{X})$. By Theorem 6.14 in Rudin [18], there exists a unique pair of finite singular measures $(\xi^+, \xi^-)$ such that $\xi = \xi^+ - \xi^-$. The pair $(\xi^+, \xi^-)$ is called the Jordan decomposition of $\xi$. The finite positive measure $|\xi| = \xi^+ + \xi^-$ is called the total variation of $\xi$. The total variation norm of $\xi$ is defined by $\|\xi\|_{\mathrm{TV}} = |\xi|(\mathsf{X})$.
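For measures on a finite set the Jordan decomposition is explicit: the positive and negative parts of the pointwise mass differences play the roles of $\xi^+$ and $\xi^-$, and $\|\xi\|_{\mathrm{TV}}$ is their total mass. A small numerical sketch of ours (also computing the halved distance defined just below as (1.4)):

```python
from fractions import Fraction

def tv_norm(xi):
    """Total variation norm of a finite signed measure on a finite set,
    given as a dict point -> signed mass: |xi|(X) = xi^+(X) + xi^-(X)."""
    return sum(abs(mass) for mass in xi.values())

def tv_distance(p, q):
    """d_TV(p, q) = (1/2) * ||p - q||_TV for probability measures p, q."""
    points = set(p) | set(q)
    diff = {x: p.get(x, 0) - q.get(x, 0) for x in points}
    return Fraction(1, 2) * tv_norm(diff)

p = {0: Fraction(1, 2), 1: Fraction(1, 2)}
q = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 2)}
```

Here $d_{\mathrm{TV}}(p,q) = \tfrac12(\tfrac14 + \tfrac14 + \tfrac12) = \tfrac12$, which also equals the largest discrepancy $|p(A) - q(A)|$ over events $A$ (attained at $A = \{2\}$), matching the usual interpretation of the distance (1.4).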
For any two probability measures ξ and ξ′ on (X, 𝒳), the total variation distance between ξ and ξ′ is defined by

d_TV(ξ, ξ′) = (1/2) ‖ξ − ξ′‖_TV.   (1.4)

If X is a metric space and B(X) is its Borel σ-field, then convergence in TV distance of a sequence of probability measures on (X, B(X)) implies its weak convergence (Proposition D.2.6, [7]).

Throughout the paper, τ (sometimes supplied with indices) denotes a positive constant whose value may vary from line to line. For any (a, b) ∈ R², a ∨ b = max{a, b} and a ∧ b = min{a, b}. When we write "C is an (m, εν)-small set" (for the definition of a small set, see Definition Appendix A.3 (2)), it will always be assumed that ε ∈ (0, 1] and ν is a probability measure (for a detailed explanation, see Remark Appendix A.1 (2)). We simply write "m-small set" for "(m, µ)-small set" when we do not stress the measure µ.

The paper is organized as follows. The main results are provided in Section 2. Some properties of the Markov kernel P_η are studied in Section 3. In particular, Section 3.1 demonstrates the existence and uniqueness of the invariant probability measure π_η for the kernel P_η. Section 3.2 describes a splitting-construction approach, which is used in Section 3.3 to establish the equivalent conditions for the uniform geometric ergodicity of P_η. The proofs of the main results are given in Section 4. Some concepts related to non-atomic Markov chains are reviewed in Appendix A.

2. Main results

Consider the following assumptions on the function g in the SDE (1.1).

Assumption 1. There exist L, K₁ > 0 and K₂ ≥ 0 such that for
every (x, y) ∈ R²,

|g(x) − g(y)| ≤ L|x − y|,   (2.5)

(g(x) − g(y))(x − y) ≤ −K₁(x − y)² + K₂.   (2.6)

Moreover, g is twice differentiable and its second derivative is bounded.

As stated in Remark 2.2 of [12], assumption (2.5) implies

g²(x) ≤ 2L²x² + 2g²(0),  |g′| ≤ L,

and assumption (2.6) together with Young's inequality implies

x g(x) ≤ −(K₁/2) x² + C.   (2.7)

For a set D ⊂ R, define the first hitting time τ_D and the first return time σ_D of the set D for the chain (θ_n)_{n≥0} by

τ_D = inf{n ≥ 0 | θ_n ∈ D}  and  σ_D = inf{n ≥ 1 | θ_n ∈ D},

respectively. The following theorem shows the convergence of the kernel P_η to its invariant probability measure π_η under the TV distance, for any initial probability measure ξ on (R, B(R)).

Theorem 2.1. Under Assumption 1, there exists a constant η₀ ∈ (0, 1), depending only on K₁ and L, such that for any η ∈ (0, η₀], the Markov kernel P_η has a unique invariant probability measure π_η. Furthermore, there exist constants δ > 1, τ < ∞, β₀ > 1 and an accessible (1, µ)-small set D₀ (all depending only on K₁ and L) with µ(D₀) > 0, such that

sup_{x∈D₀} E_x( β₀^{σ_{D₀}} ) < ∞,   (2.8)

and for any initial probability measure ξ on (R, B(R)),

∑_{n=1}^{+∞} δⁿ d_TV(ξ P_η^n, π_η) ≤ τ E_ξ( β₀^{σ_{D₀}} ).   (2.9)

This theorem improves the result in Lemma 2.3 of [12] in the following two aspects. First, it shows not only the existence of the invariant probability measure π_η, but also its uniqueness (see Proposition 3.2) under the same conditions, by introducing the accessible small set D₀. Second, it establishes the geometric rate of convergence of ξ P_η^n towards π_η under the TV distance for any initial probability measure ξ on (R, B(R)), as n → ∞ (see the corollary below).

Corollary 2.1.
Under Assumption 1, for any η ∈ (0, 1), there exists δ > 1 such that for any initial probability measure ξ on (R, B(R)),

d_TV( ξ P_η^n, π_η ) = o( δ^{−n} ),  as n → ∞.   (2.10)

The following theorem provides the geometric rate of convergence of the Markov kernel P_η to its invariant probability measure π_η, for any initial state x ∈ R.

Theorem 2.2. Under Assumption 1, for any η ∈ (0, 1), there exist δ ∈ (1, ∞] and τ < ∞ such that for all n ∈ N,

sup_{x∈R} ‖ P_η^n(x, ·) − π_η ‖_TV ≤ τ δ^{−n}   (2.11)

and

sup_{x∈R} d_TV( P_η^n(x, ·), π_η ) ≤ (τ/2) δ^{−n}.   (2.12)

It can be seen from (2.11) that, under Assumption 1, P_η is uniformly geometrically ergodic in the sense of Douc et al. [7] (cf. Definition 15.2.1 (iii)). In this article, we use the same definition (cf. Definition Appendix A.10) and establish necessary and sufficient conditions on the Markov chain (θ_k)_{k≥0} (see Proposition 3.4). It turns out that (2.11) is equivalent to the condition that the Markov kernel P_η is positive and aperiodic, and there exist a small set C and a constant δ > 1 such that sup_{x∈R} E_x(δ^{σ_C}) < ∞.

3. Properties of the Markov kernel P_η

Before describing the properties of the EM scheme (θ_n)_{n≥0}, let us start with some properties of non-atomic Markov chains, an extension of classical Markov chains with discrete state space. The definitions of the related concepts can be found in Appendix A. One may refer to the book by Douc et al. [7] for the theory of such extended Markov chains.

Suppose that (X_n)_{n≥0} is a Markov chain living on the state space X, with Markov kernel P on a measurable space (X, 𝒳), where 𝒳 is a σ-algebra generated by X. In the sequel, let P_x denote the probability measure on the canonical space ( X^N, 𝒳^{⊗N} ) of the chain (
Xn)n/greaterorequalslant0, with the initial stateX0=x. AndExdenotes the expectation under the probability measure Px. IfB∈X, define respectively the first hitting time τBand the first return time σBof the set Bfor the chain ( Xn)n/greaterorequalslant0 by τB= inf{n/greaterorequalslant0|Xn∈B}, 4 σB= inf{n/greaterorequalslant1|Xn∈B}. Let’s introduce an important notation that is specific to ato mic chains. Suppose that αis an atom for the kernel P. If a function hdefined on Xis constant on α, then we write h(α) instead of h(x) for allx∈α. In the sequel, this convention will be used mainly in the fol lowing two cases. For every positive XN-measurable random variable Ysuch that Ex(Y) is constant on α, we will write Eα(Y) instead of Ex(Y), for any x∈α. In the same way, we will write P(α,α) instead of P(x,α), for any x∈α. We have the following basic properties of the chain ( Xn)n/greaterorequalslant0, with Markov kernel P. Lemma 3.1 (Corollary 9.2.14, [7]). Suppose that Pis irreducible. Let rbe a positive increasing sequence such that limn→∞r(n) =∞andA∈X,A/ne}ationslash=∅. Assume that supx∈AEx[r(σA)]<∞. Then the set {x∈X|Ex[r(σA)]<∞} is full and absorbing, and Ais accessible. Lemma 3.2 (Theorem 9.3.6, [7]). Suppose that Pis an irreducible Markov kernel with period d. Then there exists a sequence C0,C1,...,C d−1of mutually disjoint accessible sets such that for i= 0,...,d−1andx∈Ci,P/parenleftbig x,Ci+1[mod d]/parenrightbig = 1. Consequently,/uniontextd−1 i=0Ciis absorbing. Lemma 3.3 (Proposition 9.4.5, [7]). Suppose that Pis irreducible. Then a finite union of petite sets is petite, and Xis covered by a denumerable union of increasing petite sets. Lemma 3.4 (Lemma 9.4.7 (ii), [7]). Suppose that Pis irreducible. Let C,D∈X. IfDis petite and uniformly accessible from C, thenCis petite. Lemma 3.5 (Theorem 9.4.10, [7]). Suppose that Pis irreducible and aperiodic, then every petite set is small. Remark 3.1. 
From Lemma 3.5, it can be seen that if P is irreducible and aperiodic, "petite sets" can be replaced by "small sets" in the statements of Lemma 3.3 and Lemma 3.4.

Lemma 3.6 (Theorem 10.1.2, [7]). P is recurrent if and only if it admits an accessible recurrent petite set.

Applying Theorem 13.4.3 in [7] with f = 1 therein, we have the following result on the convergence rate of Pⁿ to its invariant probability measure π under the TV norm.

Lemma 3.7. Let α be an accessible, aperiodic, and positive atom. Denote by π the unique invariant probability measure. Assume that there exists γ > 1 such that

E_α[ ∑_{n=1}^{σ_α} γⁿ ] < ∞.

Then there exist β ∈ (1, γ) and a constant τ < ∞ such that for every probability measure ξ on (X, 𝒳),

∑_{n=1}^{∞} βⁿ ‖ξPⁿ − π‖_TV ≤ τ E_ξ[ ∑_{n=1}^{σ_α} γⁿ ].

Consider now the EM scheme (θ_n)_{n≥0} defined by (1.2) with Markov kernel P_η given by (1.3). We have the following properties of the Markov chain (θ_n)_{n≥0} and its kernel P_η. Proposition 3.1 (1) below provides a sufficient condition for the existence of an accessible (1, µ)-small set C satisfying µ(C) > 0.

Proposition 3.1. (1) If C is a compact subset of R such that Leb(C) > 0, then C is an accessible (1, εν)-small set, where

ε = (1/(√η σ)) inf_{(x,y)∈C²} φ( (y − x − ηg(x)) / (√η σ) ),   (3.13)

ν(·) = Leb(· ∩ C), and φ(x) = (1/√(2π)) exp{−x²/2} is the probability density function of the standard normal distribution.

(2) For any η ∈ (0, 1), the Markov kernel P_η is irreducible and strongly aperiodic.

(3) The state space R of the Markov chain (θ_k)_{k≥0} is a 1-small set
and is covered by a denumerable union of small sets.

Proof. (1) Recall that P_η admits a kernel density p_η(x, y), given by

p_η(x, y) = (1/(√η σ)) φ( (y − x − ηg(x)) / (√η σ) ).

Suppose that C is a compact subset of R such that Leb(C) > 0. Then for all x ∈ C and A ∈ B(R),

P_η(x, A) = ∫_A p_η(x, y) dy ≥ ∫_{A∩C} p_η(x, y) dy ≥ ε Leb(A ∩ C),

where ε ∈ (0, 1] is defined in (3.13). By Definition Appendix A.3 (2), C is a (1, εν)-small set, with ν(·) = Leb(· ∩ C). Furthermore, by Definition Appendix A.1 (1), C is accessible, since for all x ∈ R,

P_η(x, C) = (1/(√η σ)) ∫_C φ( (y − x − ηg(x)) / (√η σ) ) dy > 0.

(2) We have proved in part (1) that any compact subset C of R satisfying Leb(C) > 0 is an accessible (1, µ)-small set with µ(C) = εν(C) > 0. By Definition Appendix A.5 and Definition Appendix A.8 (3), P_η is thus irreducible and strongly aperiodic.

(3) For a given non-empty set A ∈ B(R), we have for y ∈ A,

inf_{x∈R} p_η(x, y) ≥ (2πησ²)^{−1/2} exp( −(1/(2ησ²)) [ ( y − inf_{x∈R}(x + g(x)) )² ∨ ( y − sup_{x∈R}(x + g(x)) )² ] ) > 0.

Let f(y) denote the middle expression in the two inequalities above. Then for all x ∈ R and A ∈ B(R),

P_η(x, A) = ∫_A p_η(x, y) dy ≥ µ(A),

where µ(A) = ∫_A f(y) dy is a Borel measure on B(R). Therefore, the state space R is a (1, µ)-small set. From part (2), P_η is irreducible and aperiodic. According to Lemma 3.3 and Lemma 3.5, the state space R is covered by a denumerable union of small sets.

3.1. Existence and uniqueness of the invariant probability measure π_η

In the sequel, if C is a subset of R, let τ_C and σ_C denote respectively the first hitting time and the first return time of C for the chain (θ_n)_{n≥0}, i.e.

τ_C = inf{n ≥ 0 | θ_n ∈ C},  σ_C = inf{n ≥ 1 | θ_n ∈ C}.

For a given η ∈ (0, 1), let

b_η = (1/2)K₁L − 2L²η² + 2g²(0)η² + ησ² + 2Cη.
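Proposition 3.1 (1) can be illustrated numerically: for a concrete drift and a compact set, the minorization constant ε of (3.13) is (approximately) a grid minimum of the transition density. The drift g(x) = −x, the set C = [−1, 1], and the parameter values below are assumptions chosen purely for illustration.

```python
import math

def p_eta(x, y, g, eta, sigma):
    # Gaussian transition density of the EM kernel, cf. (1.3)
    z = (y - x - eta * g(x)) / (math.sqrt(eta) * sigma)
    return math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi * eta) * sigma)

# Grid approximation of the minorization constant epsilon of (3.13)
# for the illustrative drift g(x) = -x on the compact set C = [-1, 1].
g = lambda x: -x
eta, sigma = 0.1, 1.0
grid = [-1 + 2 * i / 200 for i in range(201)]
eps = min(p_eta(x, y, g, eta, sigma) for x in grid for y in grid)

# Sanity check of the small-set bound P_eta(x, C) >= eps * Leb(C) for x in C,
# with P_eta(x, C) computed by a trapezoidal integral of the density over C.
def P_eta_of_C(x):
    vals = [p_eta(x, y, g, eta, sigma) for y in grid]
    h = grid[1] - grid[0]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

lower = min(P_eta_of_C(x) for x in grid)
```

The grid minimum is only a sketch of the exact infimum in (3.13), but it shows the mechanism: on a compact set the Gaussian density is bounded away from zero, which is exactly what makes C a (1, εν)-small set.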
To prove the existence and uniqueness of π_η, we will use the same set D_η as in [12], defined by

D_η = { x ∈ R : |x| ≤ 2b_η / (K₁η) },

where L, K₁ and C are given in (2.5), (2.6) and (2.7), respectively. Let us also define a function V(x) on R by V(x) = 1 + x². We have the following lemma. Inequality (3.14) below means that for any initial state x ∈ D₀, the quantity E_x( β_η^{σ_{D_η}} ) is uniformly bounded in η.

Lemma 3.8. (1) For any η ∈ (0, 1), D_η is an accessible (1, µ)-small set with µ(D_η) > 0.

(2) There exists a constant η₁ ∈ (0, 1), depending only on K₁ and L, such that for any η ∈ (0, η₁] and any x ∈ R,

E_x( β_η^{σ_{D_η}} ) ≤ V(x) + b_η β_η,

where β_η > 1 is a constant depending on η.

(3) There exists a constant η₀ ∈ (0, 1), depending only on K₁ and L, such that for any η ∈ (0, η₀],

sup_{x∈D₀} E_x( β_η^{σ_{D_η}} ) ≤ sup_{x∈D₀} E_x( β₀^{σ_{D₀}} ) < ∞,   (3.14)

where D₀ = D_{η₀} and β₀ = β_{η₀}.

Proof. (1) For any η ∈ (0, 1), D_η is a compact subset of R with Leb(D_η) = 4b_η/(K₁η) > 0, so from Proposition 3.1 (1), D_η is an accessible (1, µ)-small set with µ = εν, ε ∈ (0, 1] and ν(D_η) = Leb(D_η) > 0; hence µ(D_η) > 0.

(2) Define the function λ(η) = 1 − (1/2)K₁η + 2L²η² ∈ (0, 1), η ∈ (0, 1). One can prove (see (A.2) in [12]) that for any η ∈ (0, 1) and x ∈ R,

P_η V(x) ≤ λ(η) V(x) + b_η 𝟙_{D_η}(x).

By Lemma Appendix B.1 (1) in the Appendix, there exists a constant η₁ ∈ (0, 1), depending only on K₁ and L, such that for any η ∈ (0, η₁], 0 < λ(η) < 1. Then, according to Proposition 4.3.3 (ii) in [7], for any η ∈ (0, η₁] and x ∈ R,

E_x( β_η^{σ_{D_η}} ) ≤ V(x) + b_η β_η,   (3.15)

where β_η = 1/λ(η) > 1.

(3) Applying Lemma Appendix B.1 (1) again and letting η₀ = η₁ ∧ η₂, since λ(η) is decreasing in η, we
have for any η ∈ (0, η₀], β_η ≤ β₀; from Lemma Appendix B.1 (2), D₀ ⊂ D_η, so max_{η∈(0,η₀]} σ_{D_η} ≤ σ_{D₀}. Let b₀ = b_{η₀}. Therefore, by (3.15), we obtain

sup_{x∈D₀} E_x( β_η^{σ_{D_η}} ) ≤ sup_{x∈D₀} E_x( β₀^{σ_{D₀}} ) ≤ sup_{x∈D₀} (1 + x²) + b₀β₀ < ∞.

This yields (3.14).

Using the lemma above, we obtain the existence and uniqueness of the invariant probability measure π_η for the kernel P_η in the following proposition.

Proposition 3.2. Assume that Assumption 1 holds. Then there exists a constant η₀ ∈ (0, 1), depending only on K₁ and L, such that for every η ∈ (0, η₀], the Markov kernel P_η admits a unique invariant probability measure π_η. Moreover, there exist constants δ > 1, τ < ∞, β₀ > 1 and an accessible (1, µ)-small set D₀ (all depending only on K₁ and L) with µ(D₀) > 0, such that for any initial probability measure ξ on (R, B(R)),

∑_{n=1}^{+∞} δⁿ d_TV(ξ P_η^n, π_η) ≤ τ E_ξ( β₀^{σ_{D₀}} ).   (3.16)

Proof. From Lemma 3.8 (1) and (3), D₀ = D_{η₀} is an accessible (1, µ)-small set with µ(D₀) > 0, and there exists β₀ > 1 such that sup_{x∈D₀} E_x(β₀^{σ_{D₀}}) < ∞. Applying Theorem 11.4.2 of [7] to the accessible (1, εν)-small set D₀, we immediately obtain the existence and uniqueness of the invariant probability measure π_η for the Markov kernel P_η, together with constants δ > 1 and τ < ∞ (both depending only on K₁ and L) such that (3.16) holds for any initial probability measure ξ on (R, B(R)).

3.2. Splitting construction and split Markov kernel P̌_η

Let X̌ = R × {0, 1} and let (X̌, 𝒳̌) be a measurable space. In this section, based on the kernel P_η, we construct a new Markov kernel P̌_η on the extended state space (X̌, 𝒳̌). The detailed splitting-construction method for the more general case can be found in Section 11.1 of Douc et al. [7]. Without loss of generality, we assume that P_η admits a (1, 2εν)-small set C with ε ∈ (0, 1) and ν(C) = 1.
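The splitting construction of this subsection is easiest to see on a finite toy example. The sketch below performs a Nummelin-type split of a hypothetical two-state kernel P, taking the whole space as the small set: columnwise minima give a minorization P(x, ·) ≥ εν(·), the residual kernel is R = (P − εν)/(1 − ε), and each step regenerates from ν with probability ε (the Bernoulli mark D). All numbers are illustrative; the paper splits the continuous kernel P_η, but the mechanics are identical.

```python
import random

# Hypothetical two-state kernel, split with the whole space as small set.
P = [[0.7, 0.3], [0.4, 0.6]]
nu_raw = [min(P[0][j], P[1][j]) for j in (0, 1)]   # columnwise minima
eps = sum(nu_raw)                                  # minorization: P(x,.) >= eps*nu(.)
nu = [v / eps for v in nu_raw]                     # regeneration measure
R = [[(P[i][j] - eps * nu[j]) / (1 - eps) for j in (0, 1)] for i in (0, 1)]

def split_step(x, rng):
    # With probability eps, regenerate from nu (mark d = 1, the atom side);
    # otherwise move by the residual kernel R (mark d = 0).
    d = 1 if rng.random() < eps else 0
    probs = nu if d == 1 else R[x]
    y = 0 if rng.random() < probs[0] else 1
    return y, d

# The theta-marginal of the split chain has exactly the original kernel:
for i in (0, 1):
    for j in (0, 1):
        assert abs(eps * nu[j] + (1 - eps) * R[i][j] - P[i][j]) < 1e-12

# Regeneration marks are i.i.d. Bernoulli(eps), as for (D_n) in the text.
rng = random.Random(1)
x, regen = 0, 0
for _ in range(10_000):
    x, d = split_step(x, rng)
    regen += d
```

Because the small set here is the whole space, every step is a potential regeneration; for the paper's kernel P_η regeneration is only offered when the chain visits C, which is exactly why return times to C control the convergence rate.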
Let b_ε be the Bernoulli distribution with success probability ε, given by b_ε = (1 − ε)δ_{0} + εδ_{1}. For any bounded and measurable function f on (X̌, 𝒳̌), define a function f̄_ε on R by

f̄_ε(x) = [δ_x ⊗ b_ε] f = (1 − ε) f(x, 0) + ε f(x, 1).

From the definition of f̄_ε above, we have for any measure ξ on (R, B(R)),

ξ(f̄_ε) = [ξ ⊗ b_ε](f).

For η ∈ (0, 1), consider the residual kernel R_η defined for x ∈ R and A ∈ B(R) by

R_η(x, A) = ( P_η(x, A) − εν(A) ) / (1 − ε)  if x ∈ C,  and  R_η(x, A) = P_η(x, A)  if x ∉ C.

Now, for η ∈ (0, 1), define the split Markov kernel P̌_η on (X̌, 𝒳̌) as follows. For (x, d) ∈ X̌ and Ǎ ∈ 𝒳̌, set

P̌_η(x, d; Ǎ) = [ Q_η(x, d; ·) ⊗ b_ε ](Ǎ),

where Q_η is the Markov kernel on X̌ × B(R) defined for all B ∈ B(R) by

Q_η(x, d; B) = 𝟙_C(x)[ 𝟙_{{0}}(d) R_η(x, B) + 𝟙_{{1}}(d) ν(B) ] + 𝟙_{C^c}(x) P_η(x, B).

Equivalently, for every bounded and measurable function g on (R, B(R)),

Q_η g(x, 0) = 𝟙_C(x) R_η g(x) + 𝟙_{C^c}(x) P_η g(x) = ( ∫_R g(y) p_η(x, y) dy − εν(g) ) / (1 − ε)  if x ∈ C,  and  ∫_R g(y) p_η(x, y) dy  if x ∉ C;

Q_η g(x, 1) = 𝟙_C(x) ν(g) + 𝟙_{C^c}(x) P_η g(x) = ν(g)  if x ∈ C,  and  ∫_R g(y) p_η(x, y) dy  if x ∉ C;

where p_η(x, y) = (2πησ²)^{−1/2} exp( −(y − x − ηg(x))²/(2ησ²) ) is the kernel density of P_η. It follows that for any bounded and measurable function f on (X̌, 𝒳̌),

P̌_η f(x, d) = Q_η f̄_ε(x, d).

Let us now describe the canonical chain associated with the kernel P̌_η on X̌ × 𝒳̌. For a probability measure µ̌ on X̌, denote by P̌_µ̌ the probability measure on the canonical space ( X̌^N, 𝒳̌^{⊗N} ) such that the coordinate process, denoted by {(θ_k, D_k)}_{k≥1} and called the split chain, is a Markov chain with initial distribution µ̌ and Markov kernel P̌_η. From the splitting construction above, (D_n)_{n≥0} is a sequence of i.i.d. Bernoulli random variables with success probability ε, independent of (θ_n)_{n≥1}. Denote by ( F_k^θ )_{k≥1} the natural filtration of the process (θ_k)_{k≥1}. An important property of the
split chain {(θ_k, D_k)}_{k≥1} is that if θ₀ and D₀ are independent, then {(θ_k, F_k^θ)}_{k≥1} is a Markov chain with kernel P_η.

In the book by Douc et al. [7], a split Markov kernel P̌ was constructed for an irreducible kernel P that has an accessible small set. It is also shown there that P̌ admits an atom and that P is the projection of P̌ onto X (see Proposition 11.1.4, Douc et al. [7]); moreover, the properties of the split chain are directly related to those of the original chain. We construct the split Markov kernel P̌_η for P_η in the same way, and obtain the following properties of P̌_η. Let us start with a basic relation between the n-step transitions of P̌_η and those of the original kernel P_η.

Lemma 3.9. For a given η ∈ (0, 1), suppose that P_η admits a (1, εν)-small set C. Then for any non-negative measure ξ on (R, B(R)),

[ξ ⊗ b_ε] P̌_η^n = ξ P_η^n ⊗ b_ε.

The lemma above is obtained by applying Lemma 11.1.1 in [7] to the irreducible kernel P_η (irreducibility is proven in Proposition 3.1 (2)). From the lemma, it can be seen that the transitions of the split Markov kernel P̌_η are directly related to those of the original kernel P_η. As a result, the properties of the split chain {(θ_k, D_k)}_{k≥1} also rely on those of the original chain (θ_k)_{k≥1}. Moreover, the following property states that the original chain (θ_k)_{k≥1}, with initial distribution ξ, retains its Markov property under P̌_{ξ⊗b_ε} on the canonical space ( X̌^N, 𝒳̌^{⊗N} ).

Lemma 3.10. For a given η ∈ (0, 1), suppose that P_η admits a (1, εν)-small set C. Then for any probability measure ξ on (R, B(R)), {(θ_k, F_k^θ)}_{k≥1} is, under P̌_{ξ⊗b_ε}, a Markov chain on (R, B(R)) with initial distribution ξ and Markov kernel P_η.

The lemma above is obtained by applying Proposition 11.1.2 in [7] to the irreducible kernel P_η (irreducibility is proven in Proposition 3.1 (2)).
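Lemma 3.9 is a purely algebraic identity and can be verified by brute force on a finite split kernel. The two-state kernel and initial law below are hypothetical; the construction of ν, R and P̌ mirrors the formulas of Section 3.2 with C equal to the whole space.

```python
# Brute-force check of [xi (x) b_eps] Pcheck^n = (xi P^n) (x) b_eps (Lemma 3.9)
# on a hypothetical two-state kernel split as in Section 3.2 (C = whole space).
P = [[0.7, 0.3], [0.4, 0.6]]
nu_raw = [min(P[0][j], P[1][j]) for j in (0, 1)]
eps = sum(nu_raw)
nu = [v / eps for v in nu_raw]
R = [[(P[i][j] - eps * nu[j]) / (1 - eps) for j in (0, 1)] for i in (0, 1)]

def Q(x, d):                       # Q_eta(x, d; .) of the splitting construction
    return nu if d == 1 else R[x]  # C is the whole space in this toy example

# Split kernel on states (x, d), ordered (0,0), (0,1), (1,0), (1,1):
# Pcheck(x, d; y, d') = Q(x, d; {y}) * b_eps({d'}).
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
b = {0: 1 - eps, 1: eps}
Pcheck = [[Q(x, d)[y] * b[dp] for (y, dp) in states] for (x, d) in states]

def vecmat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

xi = [1.0, 0.0]                                 # initial law: point mass at state 0
lhs = [xi[x] * b[d] for (x, d) in states]       # xi (x) b_eps
rhs_base = xi[:]
for _ in range(5):                              # five steps of each kernel
    lhs = vecmat(lhs, Pcheck)
    rhs_base = vecmat(rhs_base, P)
rhs = [rhs_base[x] * b[d] for (x, d) in states] # (xi P^5) (x) b_eps
assert all(abs(a - c) < 1e-12 for a, c in zip(lhs, rhs))
```

The identity holds step by step because summing the mark d out of Q reproduces P: (1 − ε)R(x, ·) + εν(·) = P(x, ·), which is exactly why TV bounds proved for the split chain transfer back to P_η.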
We also have the following property relating the invariant measures of P̌_η to those of P_η. More precisely, it states that every invariant measure of P̌_η can always be written as the product of an invariant measure of P_η and b_ε.

Lemma 3.11. For a given η ∈ (0, 1), suppose that P_η admits a (1, εν)-small set C. Then the following two properties hold.

(1) If λ_η is a non-negative measure on (R, B(R)) and is P_η-invariant, then λ_η ⊗ b_ε is P̌_η-invariant.

(2) If λ̌_η is a non-negative measure on (X̌, 𝒳̌) and is P̌_η-invariant, then λ̌_η = λ̌_{η,0} ⊗ b_ε, where λ̌_{η,0} is a non-negative measure on (R, B(R)) defined by λ̌_{η,0}(A) = λ̌_η(A × {0, 1}), A ∈ B(R). In addition, λ̌_{η,0} is P_η-invariant.

The lemma above is obtained by applying Proposition 11.1.3 in [7] to the irreducible kernel P_η (irreducibility is proven in Proposition 3.1 (2)). An advantage of using the split kernel P̌_η, rather than the original kernel P_η, is that its split chain is an atomic chain (see the lemma below).

Lemma 3.12. Set α̌ = C × {1} and Č = C × {0, 1}. For a given η ∈ (0, 1), suppose that P_η admits a (1, 2εν)-small set C with ν(C) = 1. Then the following results hold:

(1) The set α̌ is an aperiodic atom for the kernel P̌_η.
(2) The set Č is small for the kernel P̌_η.
(3) If C is accessible, then the atom α̌ is accessible for P̌_η, and hence P̌_η is irreducible.
(4) For all k ≥ 1, P̌_η^k(α̌, α̌) = ε ν P_η^{k−1}(C).
(5) If C is Harris recurrent for P_η, then for any probability measure
ξ on (R, B(R)) satisfying P_ξ(σ_C < ∞) = 1, we have P̌_{ξ⊗δ_d}(σ_α̌ < ∞) = 1 for all d ∈ {0, 1}. Moreover, if P_η is Harris recurrent, then P̌_η is Harris recurrent.
(6) If C is accessible and P_η admits an invariant probability measure π_η, then α̌ is positive for P̌_η.

The lemma above is obtained by applying Proposition 11.1.4 in [7] to the irreducible kernel P_η (irreducibility is proven in Proposition 3.1 (2)). We also have the following result on the existence and uniqueness of the invariant probability measure of the split kernel P̌_η.

Lemma 3.13. For any η ∈ (0, 1), suppose that P_η admits a (1, 2εν)-small set C with ν(C) = 1. Assume that for some δ > 1,

sup_{x∈C} E_x( ∑_{k=0}^{σ_C} δ^k ) < ∞.   (3.17)

Then (1) there exist constants γ ∈ (1, δ) and τ < τ₁ < ∞ such that

sup_{(x,d)∈C×{0,1}} Ě_{(x,d)}( ∑_{k=0}^{σ_α̌} γ^k ) ≤ τ sup_{x∈C} E_x( ∑_{k=0}^{σ_C−1} δ^k ),   (3.18)

and for any non-negative measure ξ on (R, B(R)),

Ě_{ξ⊗b_ε}( ∑_{k=0}^{σ_α̌} γ^k ) ≤ τ₁ E_ξ( ∑_{k=0}^{σ_C−1} δ^k );   (3.19)

(2) P̌_η admits a unique invariant probability measure π_η ⊗ b_ε, where π_η is the unique invariant probability measure of P_η.

Proof. (1) Condition (3.17) implies that inf_{x∈C} P_x(σ_C < ∞) = 1, so the set C is Harris recurrent (see Theorem 6.2.2 and Proposition 4.2.5, [7]). By Lemma 3.12 (5), for all (x, d) ∈ Č, P̌_{(x,d)}(σ_Č < ∞) = 1 and P̌_{(x,d)}(σ_α̌ < ∞) = 1. So for all (x, d) ∈ Č and γ ∈ (1, δ),

Ě_{(x,d)}(γ^{σ_α̌}) = γ Ě_{(x,d)}( γ^{σ_α̌−1} ) ≤ γ Ě_{(x,d)}( ∑_{k=0}^{σ_α̌−1} γ^k ),   (3.20)

which implies that for any (x, d) ∈ Č and γ ∈ (1, δ),

Ě_{(x,d)}( ∑_{k=0}^{σ_α̌} γ^k ) ≤ (γ + 1) Ě_{(x,d)}( ∑_{k=0}^{σ_α̌−1} γ^k ).   (3.21)

On the other hand, for any x ∈ C, we have by Lemma 3.10,

Ě_{δ_x⊗b_ε}( ∑_{k=0}^{σ_Č−1} δ^k ) = E_x( ∑_{k=0}^{σ_C−1} δ^k ).
(3.22)

Note that for any positive random variable Y,

sup_{(x,d)∈Č} Ě_{(x,d)}(Y) ≤ τ_ε sup_{x∈C} Ě_{δ_x⊗b_ε}(Y),  with τ_ε = ε^{−1} ∨ (1 − ε)^{−1}.

Applying this inequality to Y = ∑_{k=0}^{σ_Č−1} δ^k, and combining (3.22) with condition (3.17), we get

sup_{(x,d)∈Č} Ě_{(x,d)}( ∑_{k=0}^{σ_Č−1} δ^k ) ≤ τ_ε sup_{x∈C} Ě_{δ_x⊗b_ε}( ∑_{k=0}^{σ_Č−1} δ^k ) = τ_ε sup_{x∈C} E_x( ∑_{k=0}^{σ_C−1} δ^k )   (3.23)

< ∞.   (3.24)

Similarly to (3.20), we obtain

Ě_{(x,d)}(δ^{σ_Č}) ≤ δ Ě_{(x,d)}( ∑_{k=0}^{σ_Č−1} δ^k ).   (3.25)

Combining (3.25) and (3.24), we obtain sup_{(x,d)∈Č} Ě_{(x,d)}(δ^{σ_Č}) < ∞. By Lemma 3.12 (5), inf_{(x,d)∈Č} P̌_{(x,d)}(X₁ ∈ α̌) > 0. Applying Theorem 14.2.3 in [7] with A = Č, B = α̌, h = 1 and q = 1, and using (3.23), there exist γ ∈ (1, δ) and τ₀ < ∞ such that

sup_{(x,d)∈Č} Ě_{(x,d)}( ∑_{k=0}^{σ_α̌−1} γ^k ) ≤ τ₀ sup_{(x,d)∈Č} Ě_{(x,d)}( ∑_{k=0}^{σ_Č−1} δ^k ) ≤ τ₀ τ_ε sup_{x∈C} E_x( ∑_{k=0}^{σ_C−1} δ^k ).

Combining the last inequality with (3.21), inequality (3.18) is obtained with τ = (γ + 1)τ₀τ_ε < ∞.

Consider the shift operator θ, defined by θ(ω₀, ω₁, ...) = (ω₁, ω₂, ...) for any ω = (ω₀, ω₁, ...) ∈ R^N. Define inductively θ⁰ as the identity function, i.e. θ⁰(ω) = ω for ω ∈ R^N, and θⁿ = θ^{n−1} ∘ θ for n ≥ 1. Noting that σ_α̌ ≤ σ_Č + σ_α̌ ∘ θ^{σ_Č} on the event {σ_Č < ∞}, and using Lemma 3.10, we get

Ě_{ξ⊗b_ε}( ∑_{k=0}^{σ_α̌} γ^k ) ≤ Ě_{ξ⊗b_ε}( ∑_{k=0}^{σ_Č−1} γ^k ) + Ě_{ξ⊗b_ε}( ∑_{k=σ_Č}^{σ_Č + σ_α̌∘θ^{σ_Č}} γ^k )
 ≤ Ě_{ξ⊗b_ε}( ∑_{k=0}^{σ_Č−1} γ^k ) + Ě_{ξ⊗b_ε}(γ^{σ_Č}) sup_{(x,d)∈Č} Ě_{(x,d)}( ∑_{k=0}^{σ_α̌} γ^k )
 ≤ E_ξ( ∑_{k=0}^{σ_C−1} γ^k ) [ 1 + γ sup_{(x,d)∈Č} Ě_{(x,d)}( ∑_{k=0}^{σ_α̌} γ^k ) ].
Therefore, by (3.18) and (3.17), there exists τ₁ ∈ (τ, ∞) such that inequality (3.19) is satisfied.

(2) From Proposition 3.1 (2) and Proposition 3.2, P_η is positive, so by Lemma 3.1 and condition (3.17), C is accessible. Thus, by Lemma 3.12, α̌ is an accessible, aperiodic, and positive atom for the split kernel P̌_η. Using inequality (3.18), condition (3.17) implies that there exists γ ∈ (1, δ) such that

Ě_α̌( ∑_{k=0}^{σ_α̌} γ^k ) < ∞.   (3.26)

Applying Theorem 11.4.2 in Douc et al. [7] to P̌_η, we get that it has a unique invariant probability measure π̌_η, which by Lemma 3.11 (2) can be expressed as π_η ⊗ b_ε, where π_η is the unique invariant probability measure of P_η.

By using Lemma 3.13 above,
we can obtain the following proposition.

Proposition 3.3. Under the same conditions as Lemma 3.13, there exists a constant β > 1 such that for all initial distributions ξ on (R, B(R)),

∑_{n=0}^{∞} βⁿ ‖ξ P_η^n − π_η‖_TV < ∞.   (3.27)

Proof. Lemma 3.9 implies that for n ≥ 1,

‖ξ P_η^n − π_η‖_TV ≤ ‖(ξ ⊗ b_ε) P̌_η^n − π_η ⊗ b_ε‖_TV.

So we have

∑_{n=1}^{∞} βⁿ ‖ξ P_η^n − π_η‖_TV ≤ ∑_{n=1}^{∞} βⁿ ‖(ξ ⊗ b_ε) P̌_η^n − π_η ⊗ b_ε‖_TV.   (3.28)

In Lemma 3.13 it is proven that α̌ is an accessible, aperiodic, and positive atom for P̌_η, which admits the unique invariant probability measure π_η ⊗ b_ε; moreover, (3.26) holds. Now, applying Lemma 3.7 to P̌_η with α = α̌, there exist β ∈ (1, γ) and τ < ∞ such that

∑_{n=1}^{∞} βⁿ ‖(ξ ⊗ b_ε) P̌_η^n − π_η ⊗ b_ε‖_TV ≤ τ Ě_{ξ⊗b_ε}( ∑_{n=1}^{σ_α̌} γⁿ ).

Combining this inequality with (3.28), (3.19) in Lemma 3.13 and condition (3.17), we obtain the desired inequality (3.27).

3.3. Uniform geometric ergodicity of the kernel P_η

The following proposition gives equivalent conditions for the uniform geometric ergodicity of the kernel P_η. It can be proved by applying the basic properties of the Markov kernel P_η in Proposition 3.1 and Proposition 3.3 above.

Proposition 3.4. For any η ∈ (0, 1), the following statements are equivalent.

(1) P_η is uniformly geometrically ergodic, i.e. P_η admits an invariant probability measure π_η such that there exist δ > 1 and τ < ∞ satisfying, for all n ∈ N,

sup_{x∈R} ‖P_η^n(x, ·) − π_η‖_TV ≤ τ δ^{−n}.   (3.29)

(2) P_η is positive and aperiodic, and there exist a small set C and a constant δ > 1 such that sup_{x∈R} E_x(δ^{σ_C}) < ∞.

(3) The state space R is small.

Proof. (1) ⟹ (2). Suppose that P_η is uniformly geometrically ergodic. We first prove that P_η is irreducible; then, by Definition Appendix A.9 (2), P_η is positive.
From Definition Appendix A.10, P_η admits an invariant probability measure π_η, and there exist β > 1 and M < ∞ such that for any n ∈ N, x ∈ R and A ∈ B(R),

|P_η^n(x, A) − π_η(A)| ≤ ‖P_η^n(x, ·) − π_η‖_TV ≤ M β^{−n}.   (3.30)

Therefore, for any n ∈ N, A ∈ B(R) and x ∈ R,

P_η^n(x, A) ≥ π_η(A) − M β^{−n}.   (3.31)

If π_η(A) > 0, we can choose n large enough that P_η^n(x, A) > 0, which shows that P_η is irreducible; by Theorem 9.2.15 of [7], π_η is a maximal irreducibility measure.

Let C be an accessible small set. For d > 0, define the set B_d = {x ∈ R | 1(x) ≤ d}. Since π_η is a maximal irreducibility measure, by Definition Appendix A.11 (2), π_η(C) > 0, and for all x ∈ B_d we can choose n large enough that

P_η^n(x, C) ≥ π_η(C) − M β^{−n} ≥ π_η(C)/2.

Therefore, for all d > 0, B_d is also a small set by Lemma 9.1.7 of [7]. Since R = {x ∈ R | 1(x) < ∞} and π_η(R) = 1, we may choose d₀ large enough that π_η(B_d) > 0 for all d ≥ d₀. Since π_η is a maximal irreducibility measure, for all d ≥ d₀, B_d is an accessible set. Applying (3.31) with A = B_d, we may find n large enough that for all m > n, inf_{x∈B_d} P_η^m(x, B_d) ≥ π_η(B_d)/2 > 0. This implies that the period of P_η is 1; by Definition Appendix A.8 (2), P_η is hence aperiodic.

On the one hand, letting A = R in (3.30), one gets for any k ∈ N and x ∈ R,

P_η^k 1(x) ≤ M β^{−k} + 1.   (3.32)

We can thus choose m ∈ N* large enough that M β^{−m} ≤ λ < 1; as a result, P_η^m satisfies the drift condition D_g(1, λ, 1), i.e. P_η^m 1 ≤ λ + 1. Now set

V₀(x) = 1 + λ^{−1/m} P_η 1(x) + ··· + λ^{−(m−1)/m} P_η^{m−1} 1(x),  x ∈ R.

Let b = λ^{−(m−1)/m}. Then we have

P_η V₀(x) = P_η 1(x) + λ^{−1/m} P_η² 1(x) + ··· + λ^{−(m−1)/m} P_η^m 1(x)
 ≤ P_η 1(x) + λ^{−1/m} P_η² 1(x) + ··· + λ^{−(m−2)/m} P_η^{m−1} 1(x) + λ^{−(m−1)/m}(λ + 1)
 = λ^{1/m}( λ^{−1/m} P_η 1(x) + λ^{−2/m} P_η² 1(x) + ··· + λ^{−(m−1)/m} P_η^{m−1} 1(x) + 1 ) + b
 = λ^{1/m} V₀(x) + b.
(3.33)

On the other hand, by (3.32), we have for any x ∈ R,

1 < V₀(x) ≤ ( M ∑_{k=0}^{m−1} λ^{−k/m} β^{−k} ) + ∑_{k=0}^{m−1} λ^{−k/m} ≤ (M + 1) ∑_{k=0}^{m−1} λ^{−k/m} = (M + 1) (λ^{−1} − 1)/(λ^{−1/m} − 1).

P_η thus satisfies the drift condition D_g(V₀, λ^{1/m}, λ^{−(m−1)/m}) for some
m ∈ N*, λ ∈ (0, 1) and a measurable, bounded function V₀ : R → [1, ∞).

Let f(x) = (λ̃ − λ^{1/m}) V₀(x), for λ̃ ∈ (λ^{1/m}, 1). From (3.33), for any λ̃ ∈ (λ^{1/m}, 1) and any x ∈ R,

P_η V₀(x) + f(x) ≤ λ̃ V₀(x) + b.   (3.34)

Moreover, from the above, for all d > 0 the set B_d is small, and for all d ≥ d₀, B_d is accessible. Therefore, the same is true for the set {x ∈ R | V₀(x) ≤ d}. Choose λ₁ ∈ (λ̃, 1). Consider the set C = {x ∈ R | V₀(x) ≤ d₁}, where d₁ ≥ d₀ ∨ b(λ₁ − λ̃)^{−1}; hence C is accessible and small. For x ∈ C, (3.34) yields

P_η V₀(x) + f(x) < λ₁ V₀(x) + b.

For x ∈ C^c, −(λ₁ − λ̃) V₀(x) < −b and (3.34) imply

P_η V₀(x) + f(x) < λ₁ V₀(x) + b − (λ₁ − λ̃) V₀(x) < λ₁ V₀(x).

Equivalently, for any x ∈ R,

P_η V₀(x) + f(x) ≤ λ₁ V₀(x) + b 𝟙_C(x).   (3.35)

From the last inequality, with measurable functions V₀ : X → [1, ∞] and f : X → [1, ∞), δ = λ₁^{−1} > 1 and the set C, we get for any x ∈ C^c,

P_η V₀(x) + f(x) ≤ δ^{−1} V₀(x).

According to (14.1.4) in Proposition 14.1.2 of [7], for any x ∈ R,

λ₁^{−1}(λ̃ − λ^{1/m}) E_x( ∑_{k=0}^{σ_C−1} λ̃^{−k} V₀(X_k) ) ≤ λ₁^{−1}[P_η V₀(x) + f(x)] 𝟙_C(x) + V₀(x) 𝟙_{C^c}(x),

where 0 × ∞ = 0 by convention on the right-hand side of the inequality. Using (3.35), we obtain

λ₁^{−1}(λ̃ − λ^{1/m}) E_x( ∑_{k=0}^{σ_C−1} δ^k V₀(X_k) ) ≤ ( sup_C V₀ + bλ₁^{−1} ) 𝟙_C(x) + V₀(x) 𝟙_{C^c}(x) ≤ ( d₁ + bλ₁^{−1} + 1 ) V₀(x) = (d₁ + bδ + 1) V₀(x).

Therefore, for any x ∈ R,

E_x( ∑_{k=0}^{σ_C−1} δ^k V₀(X_k) ) ≤ τ V₀(x),   (3.36)

where τ = [(d₁ + 1)δ + b](λ̃ − λ^{1/m})^{−1} < ∞. Moreover, for any x ∈ X,

E_x( ∑_{k=0}^{σ_C−1} δ^k V₀(X_k) ) = V₀(x) + δ^{−1} E_x( ∑_{i=2}^{σ_C} δ^i V₀(X_{i−1}) ) ≥ 1 + δ^{−1} E_x(δ^{σ_C}).   (3.37)

Combining (3.36) and (3.37), we have for any x ∈ X that there exists δ ∈ (1, ∞) such that

E_x(δ^{σ_C}) ≤ δ[τ V₀(x) − 1] < ∞.

(2) ⟹ (3). For any d > 0, consider the set D_d defined by

D_d = { x ∈ R | E_x(δ^{τ_C}) < d }.
We first prove that D_d is petite. Since P_η is irreducible and aperiodic and C is small, by Lemma 3.5, C is also petite, and thus D_d ∩ C is petite. Suppose x ∈ D_d ∩ C^c. By Markov's inequality, for all k ∈ N*,

P_x(σ_C ≥ k + 1) = P_x(τ_C ≥ k + 1) = P_x(δ^{τ_C} ≥ δ^{k+1}) ≤ δ^{−(k+1)} E_x(δ^{τ_C}) ≤ d δ^{−(k+1)}.

Thus, for k sufficiently large,

inf_{x∈D_d∩C^c} P_x(σ_C ≤ k) ≥ 1/2 > 0.

By Definition Appendix A.1 (2), the set C is uniformly accessible from D_d ∩ C^c. Since P_η is irreducible, according to Lemma 3.4, D_d ∩ C^c is petite. By Lemma 3.3, the union of two petite sets remains petite, so D_d = (D_d ∩ C) ∪ (D_d ∩ C^c) is petite, for any d > 0. Since

sup_{x∈R} E_x(δ^{τ_C}) ≤ sup_{x∈R} E_x(δ^{σ_C}) < ∞,

there exists b > 0 such that R = {x ∈ R | E_x(δ^{τ_C}) < b} = D_b, which is petite; hence R is petite. By Lemma 3.5, the state space R is thus small.

(3) ⟹ (1). Suppose that the state space R is a small set. We first show that P_η is irreducible, recurrent and positive. By Definition Appendix A.3 (2), there exist m ∈ N and a nonzero measure µ_η on (R, B(R)) such that for all x ∈ R and A ∈ B(R),

P_η^m(x, A) ≥ µ_η(A),   (3.38)

which implies that for all A ∈ B(R) with µ_η(A) > 0, one has P_η^m(x, A) > 0. Consequently, µ_η is an irreducibility measure, and P_η is irreducible by Definition Appendix A.11 (1) and Definition Appendix A.5, respectively. Since R is an accessible small set, σ_R = 1 P_x-a.s. for all x ∈ R. Applying Lemma 3.6, P_η is recurrent. According to Theorem 11.2.5 in [7], P_η admits an invariant measure π_η satisfying π_η(C) < ∞ for every accessible small set C; since R is an accessible small set, this implies π_η(R) < ∞, showing that P_η is positive by Definition Appendix A.9 (2).

Next, we prove that P_η is aperiodic by contradiction. Suppose that P_η is an irreducible Markov kernel with period d > 1.
According to Lemma 3.2, there exist mutually disjoint accessible sets $C_0,C_1,\dots,C_{d-1}$ such that for $i=0,\dots,d-1$ and any $x\in C_i$, $P_\eta\big(x,C_{i+1\,(\mathrm{mod}\ d)}\big)=1$ and hence
$$P_\eta\big(x,C_{(i+1)+1\,(\mathrm{mod}\ d)}\big)=0, \tag{3.39}$$
where additions are understood in the modulo sense; moreover, $\bigcup_{i=0}^{d-1}C_i$ is absorbing. Therefore, there exists $i_0\in\{0,\dots,d-1\}$ such that $P_\eta^m(x,C_{i_0})\ge\mu_\eta(C_{i_0})>0$, which contradicts (3.39).
Finally, we obtain (3.29) by applying Proposition 3.3 with $C=\mathbb R$.

4. Proofs of main results

Proof of Theorem 2.1.
https://arxiv.org/abs/2505.04218v1
For every given $\eta\in(0,\eta_0]$, the existence and uniqueness of $\pi_\eta$ and (2.9) are proved in Proposition 3.2. The inequality (2.8) is proved in Lemma 3.8 (3).

Proof of Theorem 2.2. The inequality (2.11) is an immediate consequence of Proposition 3.1 (3) and Proposition 3.4 (3), and the inequality (2.12) is obtained from (2.11) and the definition (1.4) of the TV distance.

Appendix A. Some properties related to small sets, positive and geometrically ergodic kernels

In this section, we introduce some basic concepts related to non-atomic Markov chains, an extension of classical Markov chains with discrete state space. The definitions below can also be found in the book by Douc et al. [7]. Suppose that $(X_n)_{n\ge0}$ is a Markov chain living on the state space $\mathsf X$ with Markov kernel $P$ on a measurable space $(\mathsf X,\mathcal X)$, where $\mathcal X$ is a $\sigma$-algebra on $\mathsf X$. In the sequel, let $\mathbb P_x$ denote the probability measure on the canonical space $(\mathsf X^{\mathbb N},\mathcal X^{\otimes\mathbb N})$ of the chain $(X_n)_{n\ge0}$, with initial state $X_0=x$, and let $\mathbb E_x$ denote the expectation under the probability measure $\mathbb P_x$. If $B\in\mathcal X$, let $N_B$ denote the number of visits of $(X_n)_{n\ge0}$ to the set $B$, defined as $N_B=\sum_{k=0}^\infty \mathbb 1_B(X_k)$. Define, respectively, the first hitting time $\tau_B$, the first return time $\sigma_B$ of the set $B$, and the expected number $U(x,B)$ of visits to $B$ starting from $x$ by
$$\tau_B=\inf\{n\ge0\mid X_n\in B\},\qquad \sigma_B=\inf\{n\ge1\mid X_n\in B\}\qquad\text{and}\qquad U(x,B)=\mathbb E_x[N_B]=\sum_{k=0}^\infty P^k(x,B).$$

Definition Appendix A.1 (Accessible set and uniform accessibility, [7]). (1) A set $A\in\mathcal X$ is said to be accessible if for all $x\in\mathsf X$ there exists an integer $n\ge1$ such that $P^n(x,A)>0$. (2) A set $B$ is uniformly accessible from $A$ if there exists $m\in\mathbb N^*$ such that $\inf_{x\in A}\mathbb P_x(\sigma_B\le m)>0$.

Definition Appendix A.2 (Full set, [7]). A set $F\in\mathcal X$ is said to be full if $F^c$ is not accessible.

Definition Appendix A.3 (Atom and small set, [7]).
(1) A set $\alpha\in\mathcal X$ is called an atom if there exists a probability measure $\nu$ on $(\mathsf X,\mathcal X)$ such that for all $x\in\alpha$ and $A\in\mathcal X$, $P(x,A)=\nu(A)$.
(2) A set $C\in\mathcal X$ is called a small set if there exist a positive integer $m$ and a nonzero measure $\mu$ on $(\mathsf X,\mathcal X)$ such that for all $x\in C$ and $A\in\mathcal X$,
$$P^m(x,A)\ge\mu(A). \tag{A.1}$$
The set $C$ is then said to be an $(m,\mu)$-small set.

Remark Appendix A.1. (1) From the definition above, an atom is a particular small set satisfying the equality in (A.1) with $m=1$ and $\mu(\mathsf X)=1$, in which case $\mu$ is a probability measure. (2) The condition (A.1) implies that $\mu$ is a finite measure satisfying $0<\mu(\mathsf X)\le1$. Hence it can be written as $\mu=\epsilon\nu$, with $\epsilon=\mu(\mathsf X)$ and $\nu(\cdot)=\mu(\cdot)/\mu(\mathsf X)$ a probability measure on $(\mathsf X,\mathcal X)$. If $\epsilon=1$, then the equality in (A.1) must hold, and thus $C$ is an atom.

Definition Appendix A.4 (Sampled kernel and petite set, [7]). (1) Let $a$ be a probability on $\mathbb N$, i.e. a sequence $\{a(n),n\in\mathbb N\}$ such that $a(n)\ge0$ for all $n\in\mathbb N$ and $\sum_{n=0}^\infty a(n)=1$. The sampled kernel $K_a$ is defined by
$$K_a=\sum_{n=0}^\infty a(n)P^n.$$
(2) A set $C\in\mathcal X$ is called petite if there exist a probability $a$ on $\mathbb N$ and a nonzero measure $\mu$ on $(\mathsf X,\mathcal X)$ such that for all $x\in C$ and $A\in\mathcal X$, $K_a(x,A)\ge\mu(A)$. The set $C$ is then called an $(a,\mu)$-petite set.

Remark Appendix A.2. From the definition above, an $(m,\mu)$-small set is a particular $(a,\mu)$-petite set, with $a=\{a(n),n\in\mathbb N\}$ the probability on $\mathbb N$ with mass $1$ at step $m$, i.e. $a(m)$
$=1$ and $a(n)=0$ for any $n\neq m$.

Definition Appendix A.5 (Irreducible kernel, [7]). A Markov kernel $P$ is said to be irreducible if it admits an accessible small set.

Definition Appendix A.6 (Recurrent and Harris recurrent, [7]). (1) A set $A\in\mathcal X$ is said to be recurrent if $U(x,A)=\infty$ for all $x\in A$; it is said to be Harris recurrent if $\mathbb P_x(N_A=\infty)=1$ for all $x\in A$. (2) The kernel $P$ is said to be recurrent if every accessible set is recurrent; it is said to be Harris recurrent if every accessible set is Harris recurrent.

Definition Appendix A.7 (Strongly aperiodic small set, [7]). An $(m,\mu)$-small set $C$ is said to be strongly aperiodic if $m=1$ and $\mu(C)>0$.

Definition Appendix A.8 (Period, aperiodicity and strong aperiodicity, [7]). (1) The common period of all accessible small sets is called the period of the kernel $P$. (2) If the period is equal to $1$, the kernel $P$ is said to be aperiodic. (3) If there exists an accessible $(1,\mu)$-small set $C$ with $\mu(C)>0$, the kernel $P$ is said to be strongly aperiodic.

Definition Appendix A.9 (Positive and null recurrent atom, positive and null Markov kernel, [7]). (1) An atom $\alpha$ is said to be positive if $\mathbb E_\alpha(\sigma_\alpha)<\infty$; it is said to be null recurrent if it is recurrent and $\mathbb E_\alpha(\sigma_\alpha)=\infty$. (2) If $P$ is irreducible and admits an invariant probability measure $\pi$, the Markov kernel $P$ is called positive. If $P$ does not admit such a measure, then $P$ is called null.

Definition Appendix A.10 (Uniformly geometrically ergodic, [7]). A Markov kernel $P$ on $\mathsf X\times\mathcal X$ is said to be uniformly geometrically ergodic if it admits an invariant probability measure $\pi$ such that there exist $\delta>1$ and $\tau<\infty$ satisfying, for any $n\in\mathbb N$ and any $x\in\mathsf X$,
$$\|P^n(x,\cdot)-\pi\|_{\mathrm{TV}}\le\tau\delta^{-n}.$$

Definition Appendix A.11 (Irreducibility measure, [7]). Let $\varphi$ be a non-negative and nontrivial $\sigma$-finite measure on $(\mathsf X,\mathcal X)$. (1) $\varphi$ is said to be an irreducibility measure if $\varphi(A)>0$ implies that $A$ is an accessible set.
(2) $\varphi$ is said to be a maximal irreducibility measure if $\varphi$ is an irreducibility measure and $A$ accessible implies $\varphi(A)>0$.

Definition Appendix A.12 (Invariant measure, [7]). A nonzero measure $\mu$ is said to be invariant if it is $\sigma$-finite and $\mu P=\mu$.

Definition Appendix A.13 (Condition $D_g(V,\lambda,b,C)$: geometric drift toward $C$, [7]). A Markov kernel $P$ on $\mathsf X\times\mathcal X$ is said to satisfy the condition $D_g(V,\lambda,b,C)$ if $V:\mathsf X\to[1,\infty)$ is a measurable function, $\lambda\in[0,1)$, $b\in[0,\infty)$, $C\in\mathcal X$, and
$$PV\le\lambda V+b\,\mathbb 1_C.$$
If $C=\mathsf X$, we simply write $D_g(V,\lambda,b)$.

Appendix B. Some auxiliary results

Recall that $b_\eta=\frac12K_1L-2L^2\eta^2+2g^2(0)\eta^2+\eta\sigma^2+2C\eta$. In this part, we analyze the functions $\lambda(\eta)$ and $f_1(\eta)$, defined for any $\eta\in(0,1)$ by
$$\lambda(\eta)=1-\frac{K_1}{2}\eta+2L^2\eta^2\qquad\text{and}\qquad f_1(\eta)=\frac{2b_\eta}{K_1\eta}.$$
We have the following result.

Lemma Appendix B.1. (1) There exists a constant $\eta_1\in(0,1)$ depending only on $K_1$ and $L$ such that for all $\eta\in(0,\eta_1]$, the function $\eta\mapsto\lambda(\eta)$ is decreasing and $0<\lambda(\eta)<1$. (2) There exists a constant $\eta_2\in(0,1)$ depending only on $K_1$ and $L$ such that for all $\eta\in(0,\eta_2]$, the function $\eta\mapsto f_1(\eta)$ is decreasing.

Proof. (1) The function $\lambda(\eta)$ is quadratic and can be written as
$$\lambda(\eta)=2L^2\Big(\eta-\frac{K_1}{8L^2}\Big)^2+1-\frac{K_1^2}{32L^2}.$$
Taking into account that $\lambda(0)=1$ and that $\lambda(\eta)$ is symmetric with respect to $\eta=K_1/(8L^2)$, we obtain the result immediately.
(2) For any $\eta\in(0,1)$,
$$f_1'(\eta)=\frac{h(\eta)}{K_1\eta^2},\qquad\text{where } h(\eta)=4\big(g^2(0)-L^2\big)\eta^2-K_1L.$$
It can be seen that
there exists a constant $\eta_2\in(0,1)$ depending only on $K_1$ and $L$ such that for all $\eta\in(0,\eta_2]$, $h(\eta)<0$ and hence $f_1'(\eta)<0$. Consequently, the function $f_1(\eta)$ is decreasing over $(0,\eta_2]$.

References
[1] Brooks, S., Gelman, A., Jones, G. L., Meng, X. L., 2011. Handbook of Markov Chain Monte Carlo, 1st ed. Chapman and Hall/CRC.
[2] Butkovsky, O., Dareiotis, K., Gerencsér, M., 2022. Strong rate of convergence of the Euler scheme for SDEs with irregular drift driven by Lévy noise. arXiv preprint. https://arxiv.org/abs/2204.12926
[3] Chen, P., Jin, X., Xiao, Y., Xu, L., 2025. Approximation of the invariant measure for stable stochastic differential equations by the Euler–Maruyama scheme with decreasing step sizes. Advances in Applied Probability. Published online 2025: 1–31. https://doi.org/10.1017/apr.2024.68
[4] Dalalyan, A. S., 2017. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society. Series B (Statistical Methodology) 79(3): 651–676. http://www.jstor.org/stable/44681805
[5] Dedecker, J., Gouëzel, S., 2015. Subgaussian concentration inequalities for geometrically ergodic Markov chains. Electron. Commun. Probab. 20: 1–12. https://doi.org/10.1214/ECP.v20-3966
[6] Ding, Z., Li, Q., 2021. Langevin Monte Carlo: random coordinate descent and variance reduction. Journal of Machine Learning Research 22: 1–51.
[7] Douc, R., Moulines, E., Priouret, P., Soulier, P., 2018. Markov Chains, 1st ed. Springer.
[8] Fan, X., Hu, H., Xu, L., 2024. Normalized and self-normalized Cramér-type moderate deviations for the Euler–Maruyama scheme for the SDE. Science China Mathematics 67: 1865–1880. https://doi.org/10.1007/s11425-022-2161-4
[9] Huang, X., Liao, Z. W., 2018. The Euler–Maruyama method for S(F)DEs with Hölder drift and α-stable noise. Stochastic Analysis and Applications, 36(1): 28–39. https://api.semanticscholar.org/CorpusID:126168123
[10] Kühn, F., Schilling, R. L., 2019.
Strong convergence of the Euler–Maruyama approximation for a class of Lévy-driven SDEs. Stochastic Processes and their Applications, 129(8): 2654–2680. https://doi.org/10.1016/j.spa.2018.07.018
[11] Li, Y., Zhao, G., 2024. Euler–Maruyama scheme for SDE driven by Lévy process with Hölder drift. Statistics & Probability Letters, 215: 110220. https://doi.org/10.1016/j.spl.2024.110220
[12] Lu, J., Tan, Y., Xu, L., 2022. Central limit theorem and self-normalized Cramér-type moderate deviation for Euler–Maruyama scheme. Bernoulli, 28(2): 937–964. https://doi.org/10.3150/21-BEJ1372
[13] Welling, M., Teh, Y. W., 2011. Bayesian learning via stochastic gradient Langevin dynamics. Proceedings of the 28th International Conference on Machine Learning, p. 681–688.
[14] Pamen, O. M., Taguchi, D., 2017. Strong rate of convergence for the Euler–Maruyama approximation of SDEs with Hölder continuous drift coefficient. Stochastic Processes and their Applications, 127(8): 2542–2559. https://doi.org/10.1016/j.spa.2016.11.008
[15] Panloup, F., 2008. Recursive computation of the invariant measure of a stochastic differential equation driven by a Lévy process. The Annals of Applied Probability, 18(2): 379–426. https://doi.org/10.1214/105051607000000285
[16] Protter, P., Talay, D., 1997. The Euler scheme for Lévy driven stochastic differential equations. The Annals of Probability, 25(1): 393–423. https://doi.org/10.1214/aop/1024404293
[17] Roberts, G. O., Tweedie, R. L., 1996. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli 2(4): 341–363. MR1440273 https://doi.org/10.2307/3318418
[18] Rudin, W., 1987. Real and Complex Analysis, 3rd ed. McGraw-Hill Book Co., New York.
Beyond entropic regularization: Debiased Gaussian estimators for discrete optimal transport and general linear programs

Shuyu Liu1, Florentina Bunea2, and Jonathan Niles-Weed1,3
1Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012
2Department of Statistics and Data Science, Cornell University, Ithaca, NY, 14853
3Center for Data Science, New York University, New York, NY, 10012

Abstract
This work proposes new estimators for discrete optimal transport plans that enjoy Gaussian limits centered at the true solution. This behavior stands in stark contrast with the performance of existing estimators, including those based on entropic regularization, which are asymptotically biased and only satisfy a CLT centered at a regularized version of the population-level plan. We develop a new regularization approach based on a different class of penalty functions, which can be viewed as the duals of those previously considered in the literature. The key feature of these penalty schemes is that they give rise to preliminary estimates that are asymptotically linear in the penalization strength. Our final estimator is obtained by constructing an appropriate linear combination of two penalized solutions corresponding to two different tuning parameters, so that the bias introduced by the penalization cancels out. Unlike classical debiasing procedures, therefore, our proposal entirely avoids the delicate problem of estimating and then subtracting the estimated bias term. Our proofs, which apply beyond the case of optimal transport, are based on a novel asymptotic analysis of penalization schemes for linear programs. As a corollary of our results, we obtain the consistency of the naive bootstrap for fully data-driven inference on the true optimal solution. Simulation results and two data analyses strongly support the benefits of our approach relative to existing techniques.
1 Introduction

Optimal transport is an increasingly prominent tool in data analysis, with applications in causal inference (Charpentier et al., 2023; Torous et al., 2024; Wang et al., 2023), hypothesis testing (Deb and Sen, 2023; Ghosal and Sen, 2022; Shi et al., 2022), and domain adaptation (Courty et al., 2016; Rakotomamonjy et al., 2022; Redko et al., 2019), as well as in computational biology (Bunne et al., 2023; Huizing et al., 2022; Schiebinger et al., 2019; Tameling et al., 2021a; Tang, 2025) and high-energy physics (Cai et al., 2020; Komiske et al., 2019; Park et al., 2023). A chief virtue of optimal transport—as compared with other measures of discrepancy between probability measures, such as classical statistical divergences (Ali and Silvey, 1966; Bhattacharyya, 1943; Kullback and Leibler, 1951), MMD (Gretton et al., 2006), or integral probability metrics (Müller, 1997; Sriperumbudur et al., 2009)—is that the definition of optimal transport gives rise not only to a measure of similarity between two probability measures but also to an optimal coupling or plan, which describes the least costly way to move mass from one measure to another. In fact, in many of the applications described above, it is the plan rather than the distance itself which is the primary object of interest (see, e.g., Charpentier et al., 2023; Courty et al., 2016; Ghosal and Sen, 2022; Schiebinger et al., 2019). The goal of this paper is to
https://arxiv.org/abs/2505.04312v1
define a new estimator of the plan, with better properties than existing approaches.
We focus on the discrete version of the optimal transport problem, which compares two distributions $\mathbf t$ and $\mathbf s$ supported on a finite set of points $\{v_1,\dots,v_p\}$. Associated to each pair of points $\{v_i,v_j\}$ is a cost $c_{ij}$. The optimal transport problem then reads
$$\min_{\pi\in\mathbb R^{p\times p}}\sum_{i,j=1}^p \pi_{ij}c_{ij}\quad\text{s.t.}\quad \pi\mathbf 1=\mathbf t,\ \pi^\top\mathbf 1=\mathbf s,\ \pi\ge0. \tag{1}$$
A solution $\pi^\star$ is the optimal plan, which is a joint probability measure on $\{v_1,\dots,v_p\}^2$. The discrete optimal transport problem is the one most commonly used in applications, since from a practical perspective most optimal transport problems, including those on continuous domains, are solved by binning the data and solving a discrete optimal transport problem on a finite set.
In statistical contexts, the population-level marginal distributions $\mathbf t$ and $\mathbf s$ are not known exactly, and the statistician only has access to $n$ i.i.d. samples from each measure. A natural "plug-in" estimator $\pi_n$ of the optimal plan is obtained by solving an empirical version of (1), with $\mathbf t$ and $\mathbf s$ replaced by empirical frequencies $\mathbf t_n$ and $\mathbf s_n$, where
$$n\mathbf t_n\sim\mathrm{Mult}(n,\mathbf t),\qquad n\mathbf s_n\sim\mathrm{Mult}(n,\mathbf s). \tag{2}$$
Note that $\mathbf t_n$ and $\mathbf s_n$ are maximum-likelihood estimators of $(\mathbf t,\mathbf s)$, and, by extension, the plug-in estimator $\pi_n$ is also the maximum-likelihood estimator of $\pi^\star$. More explicitly, we can view $\pi^\star=\pi^\star(\mathbf t,\mathbf s)$ as a functional of the marginal measures $\mathbf t$ and $\mathbf s$. The performance of the plug-in estimator $\pi_n$ depends on the properties of the mapping $(\mathbf t,\mathbf s)\mapsto\pi^\star$. As we show below, the linear programming structure of (1) implies that this functional is non-smooth. It is well known that such non-smooth functional estimation problems pose particular challenges (Aitchison and Silvey, 1958; Andrews, 2002; Chernoff, 1954). In our setting, this can be easily seen through the lens of asymptotic bias, already visible in the simplest optimal transport problems. A small example will make this phenomenon clear.
Consider a $2\times2$ optimal transport problem with cost $c_{ij}=\mathbb 1\{i\neq j\}$, which leads to the program
$$\min_{\pi\in\mathbb R^{2\times2}}\ \pi_{12}+\pi_{21},\quad\text{s.t.}\quad \pi\mathbf 1=\mathbf t,\ \pi^\top\mathbf 1=\mathbf s,\ \pi\ge0. \tag{3}$$
When $\mathbf t=\mathbf s=(1/2,1/2)$, the unique optimal plan is given by $\pi^\star=\begin{pmatrix}1/2&0\\0&1/2\end{pmatrix}$. Now, consider empirical frequencies $\mathbf t_n=(t_{n,1},t_{n,2})$ and $\mathbf s_n=(s_{n,1},s_{n,2})$ as in (2), and construct the corresponding plug-in estimator $\pi_n$. The following result characterizes the asymptotic behavior of $\pi_n$.

Proposition 1.1. The plug-in estimator obtained by solving (3) with the empirical marginals $(\mathbf t_n,\mathbf s_n)$ is given by
$$\pi_n=\begin{pmatrix}\min\{t_{n,1},s_{n,1}\}&(t_{n,1}-s_{n,1})_+\\(t_{n,2}-s_{n,2})_+&\min\{t_{n,2},s_{n,2}\}\end{pmatrix}. \tag{4}$$
As a consequence,
$$\sqrt n(\pi_n-\pi^\star)\xrightarrow{d}\frac12\begin{pmatrix}\min\{g_1,g_2\}&(g_1-g_2)_+\\(g_2-g_1)_+&-\max\{g_1,g_2\}\end{pmatrix}, \tag{5}$$
where $g_1$ and $g_2$ are independent standard Gaussian random variables.

Examining (4), we see that the plug-in estimator $\pi_n$ is a non-smooth function of the empirical frequencies $(\mathbf t_n,\mathbf s_n)$. Geometrically, this is a consequence of the constraints in (1)—the feasible set is the intersection of an affine subspace (determined by the linear constraints $\pi\mathbf 1=\mathbf t$, $\pi^\top\mathbf 1=\mathbf s$) with a polyhedral cone (determined by the nonnegativity constraint $\pi\ge0$). Small perturbations of the linear constraints can lead to abrupt changes in the geometry of this set, which in turn gives rise to non-differentiable changes in the optimal solution. This lack of smoothness implies that $\pi_n$ is asymptotically biased, in the classical sense that the limit of $\sqrt n(\pi^\star-\pi_n)$ is a non-centered random
variable. For instance, examining the northwest corner entry, we see that the limit of $\sqrt n\big((\pi_n)_{11}-\pi^\star_{11}\big)$ is the minimum of two independent centered Gaussians, and therefore has negative mean. Far from being specific to this example, asymptotic bias is in fact a general phenomenon plaguing the plan estimation problem: prior work (Klatt et al., 2022; Liu et al., 2023) has shown a limit of the form $\sqrt n(\pi_n-\pi^\star)\xrightarrow{d}h(\mathbf g)$ for a Gaussian vector $\mathbf g$ and a non-smooth function $h$. The limiting random variable $h(\mathbf g)$ will only be centered in exceptional cases.
Previous studies of the statistical properties of the discrete optimal transport problem have sought to address this issue via regularization schemes (Courty et al., 2014; Essid and Solomon, 2018; Ferradans et al., 2014), the most prominent of which is entropic optimal transport, discussed in detail in Section 2 below. By adding a strictly convex regularizer, this approach guarantees that the optimal solution is a smooth function of the marginal distributions, thereby avoiding the pathologies of the plug-in estimator. This regularization brings substantial statistical benefits: for example, under suitable assumptions on the regularizer $\varphi$, for a fixed regularization parameter $\lambda>0$, the solution $\pi_{\lambda,n}$ to the empirical regularized problem
$$\min_{\pi\in\mathbb R^{p\times p}}\sum_{i,j=1}^p\pi_{ij}c_{ij}+\lambda\varphi(\pi)\quad\text{s.t.}\quad \pi\mathbf 1=\mathbf t_n,\ \pi^\top\mathbf 1=\mathbf s_n \tag{6}$$
satisfies $\sqrt n(\pi_{\lambda,n}-\pi^\star_\lambda)\xrightarrow{d}\mathcal N(0,\Sigma_{\lambda,\mathbf t,\mathbf s})$ for some $\Sigma_{\lambda,\mathbf t,\mathbf s}\in\mathbb R^{(p\times p)\times(p\times p)}$, where $\pi^\star_\lambda$ denotes the solution to the population-level version of (6), with $\mathbf t_n$ and $\mathbf s_n$ replaced by $\mathbf t$ and $\mathbf s$, respectively. The empirical estimator $\pi_{\lambda,n}$ therefore enjoys a centered Gaussian limit around the regularized solution $\pi^\star_\lambda$, which does not agree with $\pi^\star$ when $\lambda>0$.
Existing regularization schemes therefore yield estimators with Gaussian limits, but at a price: these regularization schemes introduce an additional source of "bias" by replacing the original solution $\pi^\star$ with the solution to the modified program.
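The closed form (4) makes the asymptotic bias of the plug-in estimator easy to check by Monte Carlo; the following is a minimal sketch (sample sizes are illustrative choices, not from the paper):

```python
import numpy as np

# Monte Carlo check of the bias in Proposition 1.1. By (4), the northwest
# entry of the 2x2 plug-in plan is min{t_{n,1}, s_{n,1}}, while the true
# plan has (pi*)_11 = 1/2 when t = s = (1/2, 1/2).
rng = np.random.default_rng(0)
n, reps = 2000, 4000
t1 = rng.binomial(n, 0.5, size=reps) / n   # empirical frequency t_{n,1}
s1 = rng.binomial(n, 0.5, size=reps) / n   # empirical frequency s_{n,1}
pin_11 = np.minimum(t1, s1)                # northwest entry of (4)

bias = np.sqrt(n) * (pin_11 - 0.5)
# Mean should be close to E[min(g1, g2)]/2 = -1/(2*sqrt(pi)) ~ -0.282,
# matching the (1,1) entry of the limit (5).
print(bias.mean())
```

The negative sample mean reproduces the non-centered limit in (5): the plug-in estimate of the diagonal entries is systematically too small, by roughly $0.28/\sqrt n$.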
This is visible from the decomposition
$$\sqrt n(\pi_{\lambda,n}-\pi^\star)=\underbrace{\sqrt n(\pi_{\lambda,n}-\pi^\star_\lambda)}_{(\mathrm I)}+\underbrace{\sqrt n(\pi^\star_\lambda-\pi^\star)}_{(\mathrm{II})},$$
which shows that $\pi_{\lambda,n}$ will only be an asymptotically unbiased estimator of $\pi^\star$ if both (I) and (II) can be controlled simultaneously. In fact, as we show in Section 2, there is no reasonable way to tune the regularization parameter to obtain an estimator with good asymptotic properties: if $\lambda=\lambda_n$ tends to zero as $n\to\infty$ too slowly, then (II) is too large and the bias introduced by regularization is asymptotically dominant, whereas if $\lambda=\lambda_n$ tends to zero too quickly, then the benefits of regularization are lost and (I) no longer has a centered Gaussian limit. In short, despite the success of regularization in the statistical analysis of optimal transport, all existing approaches give rise to asymptotically biased estimators of the optimal plan.
This paper introduces a novel approach to this problem. We design a new class of regularization methods whose solutions can be easily debiased to yield estimators $\hat\pi_n$ which enjoy Gaussian limits at the $\sqrt n$ rate, centered at the true optimal plan:
$$\sqrt n(\hat\pi_n-\pi^\star)\xrightarrow{d}\mathcal N(0,\tilde\Sigma_{\lambda,\mathbf t,\mathbf s}). \tag{7}$$
Moreover, the regularizers we propose are already popular in the context of interior point methods for convex optimization, so the solutions to our regularized programs can be computed quickly with off-the-shelf software. We show in experiments on real and simulated data that this approach can be used to
construct valid confidence sets in situations where existing methods fail. To our knowledge, this represents the first method to construct asymptotically unbiased estimators and Gaussian confidence sets for optimal transport plans.
In fact, our approach applies in significantly more generality than the optimal transport problem: our theory applies to any "standard form" linear program
$$\min_{\mathbf x\in\mathbb R^m}\langle\mathbf c,\mathbf x\rangle,\quad\text{s.t.}\quad A\mathbf x=\mathbf b,\ \mathbf x\ge0, \tag{8}$$
of which (1) is, in historical terms, the earliest and most fundamental example (Schrijver, 1998, Historical notes). We consider the setting where $A\in\mathbb R^{k\times m}$ and $\mathbf c\in\mathbb R^m$ are fixed, in which case the solution $\mathbf x^\star$ to (8) is determined entirely by $\mathbf b$. Our interest is in the statistical setting where $\mathbf b$ is replaced by a random counterpart $\mathbf b_n$, which satisfies, for some rate $q_n\to+\infty$,
$$q_n(\mathbf b_n-\mathbf b)\xrightarrow{d}G, \tag{9}$$
for a non-degenerate random variable $G$. We construct estimators $\hat{\mathbf x}_n$ such that
$$q_n(\hat{\mathbf x}_n-\mathbf x^\star)\xrightarrow{d}M^\star G, \tag{10}$$
for a deterministic matrix $M^\star$. In particular, as advertised above, when the limit distribution in (9) is a centered Gaussian, the same will be true of the limit in (10). This opens the door to asymptotically valid inference for any application of linear programming, for example, to the min-cost flow problem (Ahuja et al., 1993). As in the case of optimal transport, ours is the first valid asymptotic inference methodology with Gaussian limits for these applications.
We consider a class of regularized versions of the linear program (8) of the form
$$\mathbf x(r,\mathbf b)=\arg\min_{\mathbf x\in\mathbb R^m}f_r(\mathbf x),\quad\text{s.t.}\quad A\mathbf x=\mathbf b, \tag{11}$$
where $f_r(\mathbf x)$ is a regularized version of the objective function and $r>0$ is a tunable regularization parameter. This gives rise to a penalized form of the empirical problem
$$\mathbf x(r_n,\mathbf b_n)=\arg\min_{\mathbf x\in\mathbb R^m}f_{r_n}(\mathbf x),\quad\text{s.t.}\quad A\mathbf x=\mathbf b_n, \tag{12}$$
where $r_n$ is a carefully chosen sequence which converges to $0$ as $n\to\infty$. Since both (11) and (12) lack the nonnegativity constraint in (8), $f_r$ will be chosen so that as $r\to0$, the function $f_r(\mathbf x)$ converges to $\langle\mathbf c,\mathbf x\rangle$ if $\mathbf x>0$ but diverges to $+\infty$ if any entry of $\mathbf x$ is negative.
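The limiting behavior required of $f_r$ is exhibited, for instance, by the logarithmic barrier familiar from interior point methods (mentioned above); whether this coincides with the paper's exact penalty family is an assumption of the following sketch, which solves (11) on a tiny standard-form LP and checks that the penalized solution approaches the LP solution with a gap roughly linear in $r$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy standard-form LP: min x1  s.t. x1 + x2 = 1, x >= 0, solution x* = (0, 1).
# Illustrative stand-in for f_r: the log-barrier <c, x> - r * sum(log x),
# which tends to <c, x> on positive points and to +infinity otherwise.
c = np.array([1.0, 0.0])
A, b = np.array([[1.0, 1.0]]), np.array([1.0])

def x_of_r(r):
    """Solve the penalized program (11) at parameter r."""
    obj = lambda x: c @ x - r * np.sum(np.log(x))
    cons = {"type": "eq", "fun": lambda x: A @ x - b}
    res = minimize(obj, x0=np.array([0.5, 0.5]), method="SLSQP",
                   bounds=[(1e-12, None)] * 2, constraints=[cons])
    return res.x

x_star = np.array([0.0, 1.0])
for r in (0.1, 0.05, 0.025):
    # Error roughly halves as r halves: the penalization bias is ~ linear in r.
    print(r, np.abs(x_of_r(r) - x_star).max())
```

For this instance the penalized solution has first coordinate $x_1(r)=r-r^2+O(r^3)$, so the bias is linear in $r$ to leading order, which is exactly the structure the paper's debiasing combination exploits.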
Intuitively, this property ensures that, when $r$ is small, solutions to (11) are close to those of the original linear program. Our main insight is to choose the regularization in such a way that, under suitable assumptions on the decay of $r_n$, the solution $\mathbf x(r_n,\mathbf b_n)$ of (12) satisfies an asymptotic expansion of the form
$$\mathbf x(r_n,\mathbf b_n)=\mathbf x^\star+r_n\mathbf d^\star+M^\star(\mathbf b_n-\mathbf b)+\text{lower order terms}, \tag{13}$$
for some deterministic vector $\mathbf d^\star$ and matrix $M^\star$. The term $r_n\mathbf d^\star$ can be viewed as the "bias", of order $r_n$, introduced by the regularization scheme. The crucial fact we rely on to develop our estimator is that this additional bias is linear in the regularization parameter; this stands in sharp contrast to the case of entropic regularization, for example, for which the bias term is non-linear (see Section 2).
On its own, (13) seems to indicate that achieving asymptotically unbiased estimation of $\mathbf x^\star$ will require separately estimating the term $r_n\mathbf d^\star$, perhaps via a more involved statistical procedure. This is the approach traditionally taken in the literature on, for example, the debiased lasso (Zhang and Zhang, 2013). However, we exploit the linearity of the bias term to bypass the need to estimate it explicitly. Given $\mathbf x(r_n,\mathbf b_n)$ defined by (12), our proposed estimator takes
the form
$$\hat{\mathbf x}_n=2\,\mathbf x\big(\tfrac{r_n}{2},\mathbf b_n\big)-\mathbf x(r_n,\mathbf b_n). \tag{14}$$
This trick—which has been rediscovered many times in the literature, for instance under the name "Richardson extrapolation" in numerical analysis (Richardson, 1911) and "twicing" in statistics (Newey et al., 2004; Zhang and Xia, 2012)—provides an easy solution for the problem of bias. Indeed, using (13), we observe that $\hat{\mathbf x}_n$ satisfies
$$\hat{\mathbf x}_n=\mathbf x^\star+M^\star(\mathbf b_n-\mathbf b)+\text{lower order terms}.$$
Comparing this expression with (13), the bias term has completely disappeared. Moreover, the fact that $\hat{\mathbf x}_n$ is an asymptotically linear estimator implies that it enjoys centered Gaussian limits around the population-level quantity whenever $\mathbf b_n$ does. As we show in Section 7, this result opens the door to practical inference for $\mathbf x^\star$ via the naïve bootstrap. We show in simulation that the resulting confidence sets are much more accurate than existing approaches for plan inference in the literature.
The remainder of the paper is structured as follows. We review prior work in Section 1.1. We demonstrate some deficiencies of the entropic penalty in Section 2 and use our analysis of the entropic penalty to motivate the penalization approach we develop in this work, which we specify precisely in Section 3. To develop the asymptotic expansion (13), we first consider the case where $\mathbf b_n=\mathbf b$ in Section 4, then extend our results to the case of randomly perturbed programs in Section 5. Finally, we define our debiased estimator in Section 6 and prove consistency of the bootstrap in Section 7. Simulation and experimental results appear in Sections 8 and 9.

1.1 Prior work

There is a long history of studying the asymptotic properties of stochastic optimization problems (see, e.g., Dupacova and Wets, 1988; King and Rockafellar, 1993; Polyak and Juditsky, 1992; Shapiro, 1991, 1993). In statistics, the classical large-sample theory of maximum-likelihood estimation provides a paradigmatic example, reaching back to the 1940's (Wald, 1943, 1949).
An important insight stemming from this work is that the asymptotic behavior of solutions depends heavily on the regularity assumptions imposed on the optimization problem: in the most benign scenario (e.g., unconstrained smooth and strongly convex optimization problems), centered Gaussian limits are the norm, and while these conditions can be weakened somewhat (Aitchison and Silvey, 1958; Huber, 1967), in general Gaussian limits are the exception rather than the rule (Shapiro, 1989). Moreover, though Gaussian limits can arise in the solutions to non-classical problems (Davis et al., 2024; Duchi and Ruan, 2021), this phenomenon is typically limited to the situation of an optimization problem with a random objective function on a fixed constraint set.
In light of this work, the random linear programs we study—with a non-strongly-convex objective and random constraints—represent a challenging edge case. Prior work studying the asymptotic properties of solutions to random linear programs reveals that they possess Gaussian limits only in exceptional circumstances (Klatt et al., 2022; Liu et al., 2023). In general, the limits are non-Gaussian and asymptotically biased.
The explosive interest in regularization schemes for optimal transport has been partially motivated by the search for estimators with better asymptotic properties. Entropic regularization—along with its variants—gives rise to Gaussian limiting distributions
for the optimal solution (Klatt et al., 2020), a phenomenon that even extends to the case of continuous marginals (Goldfeld et al., 2024; González-Sanz and Hundrieser, 2023; Gonzalez-Sanz et al., 2022; Harchaoui et al., 2024), but these limits are not centered at the true optimal plan. In some cases, under strong regularity assumptions, it is possible to carefully tune the regularization to obtain Gaussian limits centered at the true optimal plan (Manole et al., 2023; Mordant, 2024), albeit not at the $\sqrt n$ rate.
Taking a linear combination of estimators with different regularization parameters, as in (14), is related to a classical idea in numerical analysis known as "Richardson extrapolation" (Joyce, 1971; Richardson, 1911), which was rediscovered in the statistics literature by Tukey under the name "twicing" (Stuetzle and Mittal, 1979; Tukey, 1977). In recent years, Bach (2021) has highlighted the effectiveness of Richardson extrapolation in machine learning applications, including regularizing convex optimization problems. This perspective has been applied to obtaining better estimates for the cost of the optimal transport problem by Chizat et al. (2020).
At a technical level, our results are most similar to prior work studying the "convergence trajectories" of penalized linear programs (Auslender et al., 1997; Cominetti and Martin, 1994). In the particular case of the exponential penalty function, Cominetti and Martin (1994) show an asymptotic expansion of the form (13) when $\mathbf b_n=\mathbf b$. Auslender et al. (1997) analyze a more general class of penalty functions, which include those studied in this work, but do not obtain a full expansion akin to (13).
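The mechanics of the combination in (14) are the same as in the classical numerical-analysis setting: when an approximation's error is linear in its step parameter, combining two runs at step $h$ and $h/2$ cancels the linear term. A minimal sketch with a forward difference quotient (a standard textbook example, not taken from this paper):

```python
import math

# Forward difference D(h) = (g(x+h) - g(x)) / h approximates g'(x) with bias
# ~ h * g''(x) / 2, i.e. linear in h. The combination 2*D(h/2) - D(h) has the
# same form as the estimator (14) and cancels that linear bias term.
g, x, h = math.exp, 0.0, 0.1   # g'(0) = 1
D = lambda h: (g(x + h) - g(x)) / h

plain = D(h)                   # error ~ 5.2e-2
twiced = 2 * D(h / 2) - D(h)   # |error| ~ 8.7e-4
print(plain - 1.0, twiced - 1.0)
```

The extrapolated value is accurate to second order in $h$ even though each ingredient is only first-order accurate, which is precisely why (14) removes the $r_n\mathbf d^\star$ term without ever estimating it.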
1.2 Notation

We work in an asymptotic framework where $n\to\infty$ (and the regularization parameter $r=r_n\to0$). The symbols $O(\cdot)$ and $o(\cdot)$, as well as their probabilistic counterparts $O_p(\cdot)$ and $o_p(\cdot)$, are to be understood in this setting. We adopt the shorthand $a_n\ll b_n$ to mean $a_n=o(b_n)$.
Given a function $g:\mathcal X\to\mathbb R\cup\{+\infty\}$ for some $\mathcal X\subseteq\mathbb R^d$, its convex conjugate $g^*:\mathbb R^d\to\mathbb R\cup\{+\infty\}$ is defined by
$$g^*(y)=\sup_{x\in\mathcal X}\langle x,y\rangle-g(x).$$
The function $g^*$ is always convex and lower semi-continuous. If $g$ shares these same properties, then $(g^*)^*=g$. Throughout this work, we permit functions to take values in the extended reals $\mathbb R\cup\{+\infty\}$. Given such a function $g$, we write $\mathrm{dom}\,g=\{x:g(x)<+\infty\}$ for the domain of $g$. Given a convex set $\mathcal X$, we write $\mathrm{int}\,\mathcal X$ for its interior and $\mathrm{relint}\,\mathcal X$ for its relative interior (that is, its interior with respect to the subset topology on the affine span of $\mathcal X$). Throughout, our goal is to perform inference on the solution $\mathbf x^\star$ to (8), where $\mathbf c$ and $A$ are fixed and known but we only have access to a random $\mathbf b_n$ in place of $\mathbf b$. We use star superscripts for quantities ($\mathbf x^\star$, $\mathbf d^\star$, $M^\star$) that depend on the true, unknown vector $\mathbf b$.

2 The problem of bias in entropic optimal transport

The most popular form of regularized optimal transport is entropic optimal transport, which corresponds to (6) with the choice
$$\varphi(\pi)=\sum_{i,j=1}^p\pi_{ij}\log\pi_{ij}. \tag{15}$$
Though originally motivated by computational considerations (Chizat et al., 2020; Peyré et al., 2019), entropic optimal transport is now known to bring significant statistical benefits as well (Goldfeld et al., 2024; González-Sanz and Hundrieser, 2023;
Klatt et al., 2020; Mordant, 2024). For example, Klatt et al. (2020) showed a central limit theorem for the empirical entropic optimal transport plan $\pi_{\lambda,n}$, centered at the population solution to the regularized program, $\pi^\star_\lambda$. Their asymptotic results hold when the regularization parameter $\lambda$ is fixed and $n$ tends to infinity. Concretely, they show
$$\sqrt n(\pi_{\lambda,n}-\pi^\star_\lambda)\xrightarrow{d}\mathcal N(0,\Sigma_{\lambda,\mathbf t,\mathbf s}).$$
As Klatt et al. note (Klatt et al., 2020, Section 3.2), this behavior is not expected in general: if $\lambda=\lambda_n$ tends to zero as $n\to\infty$ at a sufficiently fast rate, then the limit law will no longer be Gaussian. On the other hand, if $\lambda=\lambda_n$ tends to zero more slowly, it is reasonable to conjecture that bias will dominate. The following result makes this intuition precise. We say a function $h(\lambda)$ decays exponentially fast to $0$ if $\lambda\log(1/h(\lambda))$ is bounded away from zero and infinity as $\lambda\to0$. For convenience, we impose the additional technical condition that $\pi^\star$ is non-degenerate (see proof for a precise definition) in the slow decay regime of $\lambda_n$, but similar conclusions apply to the general case as well.

Proposition 2.1. Fix a sequence $\lambda=\lambda_n\to0$, and assume that $\sqrt n(\mathbf t_n-\mathbf t)$ is stochastically bounded away from zero and infinity. If $\lambda\gg\frac1{\log n}$ and $\pi^\star$ is non-degenerate, then as $n\to\infty$ the entropy-regularized empirical optimal transport plan can be written as
$$\sqrt n(\pi_{\lambda,n}-\pi^\star)=\sqrt n\,E(\lambda)+\sqrt n\,M(\mathbf t_n-\mathbf t)+O_p(\epsilon(\lambda)) \tag{16}$$
for a matrix $M$, where $\epsilon(\lambda)$ and $E(\lambda)$ denote two terms that decay exponentially fast to zero (at rates which depend on $\pi^\star$). On the other hand, if $\lambda\ll\frac1{\log n}$, then
$$\sqrt n(\pi_{\lambda,n}-\pi^\star)=\sqrt n(\pi_n-\pi^\star)+O_p(\epsilon(\lambda)), \tag{17}$$
where $\epsilon(\lambda)$ decays exponentially fast to zero.
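The population regularized plan $\pi^\star_\lambda$ appearing above can be computed by the standard Sinkhorn matrix-scaling iterations for the entropic program (6) with penalty (15) (the algorithm itself is not spelled out in this paper). A minimal sketch on the $2\times2$ example of Section 1, showing the regularization bias $\pi^\star_\lambda\neq\pi^\star$ at fixed $\lambda$ and its decay as $\lambda\to0$:

```python
import numpy as np

# Sinkhorn scaling for the population entropic plan: pi_lambda has the form
# diag(u) K diag(v) with K = exp(-C/lambda), scaled to match the marginals.
def entropic_plan(C, t, s, lam, iters=500):
    K = np.exp(-C / lam)
    u = np.ones_like(t)
    for _ in range(iters):
        v = s / (K.T @ u)
        u = t / (K @ v)
    return u[:, None] * K * v[None, :]

C = np.array([[0.0, 1.0], [1.0, 0.0]])   # cost of the 2x2 example (3)
t = s = np.array([0.5, 0.5])             # true plan pi* = diag(1/2, 1/2)

for lam in (0.5, 0.1, 0.02):
    print(lam, entropic_plan(C, t, s, lam)[0, 0])
# (1,1) entry: 0.4404, 0.49998, ~0.5 — the bias vanishes only as lam -> 0
```

At $\lambda=0.5$ the diagonal entry is $1/(2(1+e^{-2}))\approx0.4404$ rather than $1/2$: the plan is exactly feasible but pulled toward the independent coupling, which is the fixed-$\lambda$ bias the decomposition (I)+(II) isolates.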
On the other hand, in the second regime, if λ≪1 logn, then the regularization term is small enough that the asymptotic limit of√n(πλ,n−π⋆) is the same as that of√n(πn−π⋆), so any asymptotic benefits of the regularization scheme are lost. Though we do not rigorously characterize the behavior of πλ,nwhen λ≍1 logn, it is reasonable to conjecture on the basis of our proof that (16) still holds in this case. But note that this implies that whether the bias termE(λ) is asymptotically larger or smaller than n−1/2depends on the precise constant thatλlognconverges to, as well as the (unknown) rate at which E(λ) decays. Tuning the regularization parameter this precisely is clearly outside the reach of a statistician interested in practical inference. Our goal in this work is to design a different scheme, which is much less sensitive to regularization strength, and which naturally yields centered Gaussian limits. 2.1 Designing our alternative scheme As Proposition 2.1 makes clear, the central difficulty in using entropic optimal transport to obtain asymptotically unbiased estimators is the fact that the bias term E(λ) is not explicit. We therefore consider alternative regularization schemes whose bias can be explicitly characterized. As we shall see, the resulting regularizers are, in a precise sense, the convex duals of those that