results indicate that our model achieves better calibration of predicted survival probabilities, ensuring closer alignment between predictions and observed outcomes in both the survival CDF S(t|x) and PDF f(t|x). This underscores the reliability and robustness of our model in accurately capturing true survival behavio...
https://arxiv.org/abs/2505.03712v2
can see that it is heavily skewed. Such skewness, manifested as the concentration of events close to 0, makes it challenging to achieve good calibration in that range, i.e., t → 0. Similarly, our method attempted to predict smaller values for the initial quantiles but still allocated a disproportionately large weight to ...
https://arxiv.org/abs/2505.03712v2
to deliver closed-form solutions for key event summaries, such as means and quantiles, facilitating more interpretable predictions. The method outperforms both traditional parametric and nonparametric approaches in terms of discrimination and calibration by optimizing individual-level parameters through maximum likel...
https://arxiv.org/abs/2505.03712v2
distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance. Springer Science & Business Media, 2012. Lai, C. D. and Xie, M. Stochastic ageing and dependence for reliability. Springer Science & Business Media, 2006. Lánczky, A. and Győrffy, B. Web-based su...
https://arxiv.org/abs/2505.03712v2
to varying input distributions. The observed component L_o(y; θ, σ, κ) is defined as:

L_o(y; θ, σ, κ) = log σ − log( κ / (κ² + 1) ) + (√2 / σ) · { κ(y − θ),     if y ≥ θ,
                                                        (1/κ)(θ − y),  if y < θ.   (8)

The censored loss component L_c(y; θ, σ, κ) is computed using the survival probability function:

L_c(y; θ, σ, κ) = { log(κ² + 1) + (√2 / σ) κ(y − θ),  if y ≥ θ,
                    log(κ² + ...
https://arxiv.org/abs/2505.03712v2
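As a concrete reading of the two branches of Eq. (8), the observed-data loss is the negative log-likelihood of an asymmetric Laplace distribution with location θ, scale σ, and asymmetry κ. A minimal sketch, assuming this parameterization (the y < θ branch of the censored component is truncated in the excerpt and omitted here; parameter values are illustrative):

```python
import math

def ald_observed_loss(y, theta, sigma, kappa):
    """Negative log-likelihood of an observed time y under an asymmetric
    Laplace distribution, following the two branches of Eq. (8)."""
    base = math.log(sigma) - math.log(kappa / (kappa**2 + 1))
    if y >= theta:
        return base + (math.sqrt(2) / sigma) * kappa * (y - theta)
    return base + (math.sqrt(2) / sigma) * (theta - y) / kappa

def ald_censored_loss(y, theta, sigma, kappa):
    """Negative log survival probability for a right-censored time
    (y >= theta branch only; the other branch is not shown in the excerpt)."""
    assert y >= theta
    return math.log(kappa**2 + 1) + (math.sqrt(2) / sigma) * kappa * (y - theta)
```

A quick sanity check: at y = θ with σ = 1 and κ = 1, both components reduce to log 2.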
is then further simplified as:

L_QR(y; θ_q, q) = { q(y − θ_q),        if y ≥ θ_q,
                    (1 − q)(θ_q − y),  if y < θ_q  }  =  (y − θ_q)(q − I[θ_q > y]).   (20)

This formulation is also referred to as the pinball loss or “checkmark” loss (Koenker & Bassett Jr, 1978), which is widely used in quantile regression to directly optimize the q-th quantile estimate. Fo...
https://arxiv.org/abs/2505.03712v2
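The equivalence between the two-branch definition and the compact form (y − θ_q)(q − I[θ_q > y]) in Eq. (20) is easy to check numerically; a minimal sketch:

```python
def pinball_loss(y, theta_q, q):
    """Pinball ("checkmark") loss, compact form of Eq. (20):
    (y - theta_q) * (q - I[theta_q > y])."""
    return (y - theta_q) * (q - float(theta_q > y))

def pinball_loss_branches(y, theta_q, q):
    """Equivalent two-branch form: q*(y - theta_q) if y >= theta_q,
    else (1 - q)*(theta_q - y)."""
    return q * (y - theta_q) if y >= theta_q else (1 - q) * (theta_q - y)
```

Averaged over a sample and minimized in θ_q, this loss recovers the empirical q-quantile, which is why it directly targets the q-th quantile estimate.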
we have:

∫ λ(t|x) dt = ∫ −dS(t|x) / S(t|x)   (31)

which simplifies to:

Λ(t|x) = −log S(t|x) + C   (32)

where C is the constant of integration and Λ(t|x) is the cumulative hazard function:

Λ(t|x) = Λ₀(t) e^{h(x)}   (33)

where Λ₀(t) is the baseline cumulative hazard function. For survival analysis, C is typically set to 0 when starting from t =...
https://arxiv.org/abs/2505.03712v2
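The identity Λ(t|x) = −log S(t|x) with C = 0 can be sanity-checked in the one case where everything is available in closed form, a constant hazard; a small sketch (the rate value is illustrative, not from the paper):

```python
import math

# For a constant hazard lam, S(t) = exp(-lam * t), so the cumulative
# hazard, i.e. the integral of lam over [0, t], equals lam * t and
# should match -log S(t) exactly, consistent with Eqs. (31)-(32) and C = 0.
lam, t = 0.7, 2.5
S = math.exp(-lam * t)
assert abs(-math.log(S) - lam * t) < 1e-12
```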
censoring, sourced from various domains and characterized by distinct features, sample sizes, and censoring proportions: •METABRIC (Molecular Taxonomy of Breast Cancer International Consortium): This dataset contains genomic and clinical data for breast cancer patients. It includes 9 features, 1523 training samples, ...
https://arxiv.org/abs/2505.03712v2
the indicator function, and e_i is the event indicator (e_i = 1 if the event is observed). x_i represents the covariates, and G̃(·) refers to the Kaplan-Meier estimate (Kaplan & Meier, 1958) of the censoring survival function. •Harrell’s C-Index:

C_H = P(ϕ_i > ϕ_j | y_i < y_j, e_i = 1) = Σ_{i≠j} [ I(ϕ_i > ϕ_j) + 0.5 · I(ϕ_i = ϕ_j) ] I(y_i < y_j) e_i / Σ_{i≠j} ...
https://arxiv.org/abs/2505.03712v2
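Harrell's C-index counts, among comparable pairs (y_i < y_j with e_i = 1), the fraction where the risk scores ϕ are concordant, scoring ties as 0.5. A minimal O(n²) sketch (the toy inputs are illustrative, not from the paper's experiments):

```python
def harrell_c_index(phi, y, e):
    """Harrell's C-index: phi are risk scores, y observed times,
    e event indicators (1 = event observed). Tied risks score 0.5."""
    num = den = 0.0
    n = len(phi)
    for i in range(n):
        for j in range(n):
            if i != j and y[i] < y[j] and e[i] == 1:
                den += 1
                if phi[i] > phi[j]:
                    num += 1.0
                elif phi[i] == phi[j]:
                    num += 0.5
    return num / den

# perfectly anti-monotone risk vs. time gives C = 1
assert harrell_c_index([3.0, 2.0, 1.0], [1.0, 2.0, 3.0], [1, 1, 1]) == 1.0
```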
to ensure a fair comparison. The implementations for CQRNN and LogNorm were sourced from the official CQRNN repository (GitHub Link). The implementations for DeepSurv and DeepHit were based on the pycox.methods module (GitHub Link). Hyperparameter settings. All experiments were repeated across 10 random seeds to ensure r...
https://arxiv.org/abs/2505.03712v2
pycox.methods module (GitHub Link). Each of the two hidden layers contains 32 hidden nodes. A validation set was created by splitting 20% of the training set. Early stopping was employed to terminate training when the validation performance ceased to improve. Batch normalization was applied. DeepHit. We adhered to the ...
https://arxiv.org/abs/2505.03712v2
axis represents the target proportions [0.1, 0.2, . . . , 0.9, 1.0], while the vertical axis denotes the observed proportions derived from the model predictions.
Learning Survival Distributions with the Asymmetric Laplace Distribution
https://arxiv.org/abs/2505.03712v2
± 0.048 0.758 ± 0.033 0.688 ± 0.028 3.150 ± 1.142 1.015 ± 0.045 0.024 ± 0.047 1.128 ± 0.051 -0.209 ± 0.027 ald 2.942 ± 2.389 0.309 ± 0.018 0.560 ± 0.008 0.560 ± 0.007 0.432 ± 0.405 0.978 ± 0.047 -0.015 ± 0.014 0.964 ± 0.049 0.016 ± 0.053 CQRNN 1.943 ± 0.297 0.317 ± 0.013 0.558 ± 0.013 0.557 ± 0.011 0.305 ± 0.129 0.976 ...
https://arxiv.org/abs/2505.03712v2
± 0.071 0.679 ± 0.126 2.249 ± 0.490 1.122 ± 0.022 0.001 ± 0.029 1.111 ± 0.031 -0.074 ± 0.032 DeepSurv 1.662 ± 0.157 0.558 ± 0.007 0.726 ± 0.035 0.582 ± 0.056 0.577 ± 0.067 1.070 ± 0.006 -0.002 ± 0.009 1.065 ± 0.011 -0.065 ± 0.010 DeepHit 0.814 ± 0.104 0.475 ± 0.037 0.913 ± 0.009 0.856 ± 0.034 1.349 ± 0.374 1.051 ± 0.04...
https://arxiv.org/abs/2505.03712v2
0.727 ± 0.021 0.043 ± 0.019 1.003 ± 0.014 -0.005 ± 0.005 0.998 ± 0.014 -0.003 ± 0.014 CQRNN 0.717 ± 0.027 0.436 ± 0.035 0.767 ± 0.009 0.718 ± 0.018 0.235 ± 0.104 0.992 ± 0.026 -0.007 ± 0.013 0.998 ± 0.032 -0.019 ± 0.035 LogNorm heavy LogNorm 0.755 ± 0.194 0.401 ± 0.012 0.643 ± 0.053 0.609 ± 0.046 0.066 ± 0.056 1.018 ± ...
https://arxiv.org/abs/2505.03712v2
0.700 ± 0.007 0.138 ± 0.040 1.017 ± 0.013 -0.005 ± 0.012 1.004 ± 0.014 0.001 ± 0.013 DeepHit 0.560 ± 0.098 0.385 ± 0.022 0.652 ± 0.066 0.633 ± 0.047 1.265 ± 1.911 0.925 ± 0.071 -0.010 ± 0.011 0.925 ± 0.088 0.058 ± 0.108 ald 1.626 ± 0.194 0.245 ± 0.012 0.637 ± 0.021 0.633 ± 0.031 0.293 ± 0.125 1.001 ± 0.033 -0.011 ± 0.0...
https://arxiv.org/abs/2505.03712v2
± 0.035 -0.018 ± 0.016 0.977 ± 0.025 0.014 ± 0.034 CQRNN 0.865 ± 0.070 0.357 ± 0.021 0.680 ± 0.015 0.672 ± 0.014 0.573 ± 0.577 0.953 ± 0.043 -0.008 ± 0.016 0.967 ± 0.030 0.002 ± 0.040 GBSG LogNorm 1.469 ± 0.105 0.577 ± 0.015 0.660 ± 0.012 0.653 ± 0.012 0.817 ± 0.303 0.968 ± 0.025 -0.057 ± 0.011 0.886 ± 0.025 0.086 ± 0....
https://arxiv.org/abs/2505.03712v2
0.025 1.018 ± 0.040 -0.012 ± 0.046 DeepHit 2.062 ± 0.285 0.377 ± 0.024 0.769 ± 0.022 0.734 ± 0.035 1.176 ± 0.539 1.085 ± 0.034 -0.052 ± 0.023 0.968 ± 0.035 0.066 ± 0.037
https://arxiv.org/abs/2505.03712v2
0.019 ± 0.001 0.916 ± 0.009 0.802 ± 0.008 0.256 ± 0.150 1.011 ± 0.028 -0.004 ± 0.029 1.005 ± 0.021 0.006 ± 0.011 ald (Mode) 0.627 ± 0.072 0.911 ± 0.012 0.802 ± 0.008 ald (Mean) 0.238 ± 0.036 0.894 ± 0.005 0.872 ± 0.004 Norm med. ald (Median) 0.298 ± 0.036 0.047 ± 0.003 0.889 ± 0.006 0.868 ± 0.011 0.157 ± 0.044 0.997 ± ...
https://arxiv.org/abs/2505.03712v2
1.121 ± 0.107 0.568 ± 0.015 0.572 ± 0.015 SUPPORT ald (Median) 0.856 ± 0.062 0.362 ± 0.013 0.572 ± 0.015 0.561 ± 0.015 2.197 ± 0.667 0.900 ± 0.056 0.084 ± 0.046 1.084 ± 0.043 -0.113 ± 0.023 ald (Mode) 0.421 ± 0.051 0.532 ± 0.016 0.522 ± 0.044 ald (Mean) 1.713 ± 0.208 0.671 ± 0.014 0.665 ± 0.013 GBSG ald (Median) 1.161 ...
https://arxiv.org/abs/2505.03712v2
A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions
Danial Davarnia (Iowa State University), Mohammadreza Kiaghadi (Iowa State University)
Abstract. Optimization problems with norm-bounding constraints appear in various applications, from portfolio ...
https://arxiv.org/abs/2505.03899v1
and Wright, 2006) and convex optimization (Bertsekas, 2015) literature, constraints involving the ℓp-norm for p ∈ [0, 1) or their nonconvex proxies belong to the class of mixed-integer nonlinear (nonconvex) programs (MINLPs), posing greater challenges. (arXiv:2505.03899v1 [math.OC] 6 May 2025)
https://arxiv.org/abs/2505.03899v1
into the true statistical characteristics of optimal estimators across various models. Consequently, these insights can facilitate the development of new models endowed with more desirable properties; see the discussion in Fan and Li (2001) for an example of the process to design a new model. In this paper, we intr...
https://arxiv.org/abs/2505.03899v1
variant of problems in which the norm functions are moved to the objective function through Lagrangian relaxation and treated as a weighted penalty. Notation. Vectors of dimension n ∈ ℕ are denoted by bold letters such as x, and the non-negative orthant in dimension n is referred to as ℝⁿ₊. We define [n] := {1, 2, . . ...
https://arxiv.org/abs/2505.03899v1
the literature to keep the size of DDs manageable. In a relaxed DD, if the number of nodes in a layer exceeds a predetermined width limit, a subset of nodes are merged into one node to reduce the number of nodes in that layer and thereby satisfy the width limit. This node-merging operation is performed in such a way...
https://arxiv.org/abs/2505.03899v1
ℓp-norm constraints for all p ∈ [0, ∞) can be represented using scale functions. Proposition 2. Consider a norm-bounding constraint of the form ||x||_p ≤ β for some β ≥ 0 and p ∈ [0, ∞), where ||x||_p denotes the ℓp-norm. Then, this constraint can be written as η(x) ≤ β̄ for a scale function η(x) such that (i) if p ∈ (0, ∞), then η_i(x_i) ...
https://arxiv.org/abs/2505.03899v1
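Part (i) of Proposition 2 amounts to the observation that, for p ∈ (0, ∞), ||x||_p ≤ β holds iff the separable sum Σ_i |x_i|^p is at most β̄ = β^p, and for p = 0 the indicator sum plays the same role. A quick numerical check (the values are illustrative):

```python
def eta_lp(x, p):
    """Separable scale function for the l_p ball: eta_i(x_i) = |x_i|**p."""
    return sum(abs(xi) ** p for xi in x)

def eta_l0(x):
    """Scale function for the l_0 ball: eta_i(x_i) = 1 if x_i != 0 else 0."""
    return sum(1 for xi in x if xi != 0)

# ||x||_p <= beta  <=>  eta_lp(x, p) <= beta**p, including nonconvex p in (0, 1)
x, p, beta = [0.5, -0.2, 0.1], 0.5, 2.0
lp_norm = eta_lp(x, p) ** (1.0 / p)
assert (lp_norm <= beta) == (eta_lp(x, p) <= beta ** p)
assert eta_l0([1.0, 0.0, -3.0]) == 2
```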
j ≥ 0 then
7:  create a node v with state value s(v) = s(u) + η_i(l^i_j) (if it does not already exist) in the node layer U_{i+1}
8:  else
9:  create a node v with state value s(v) = s(u) (if it does not already exist) in the node layer U_{i+1}
10: add two arcs from u to v with label values l^i_j and u^i_j, respectively
11: forall u ∈ U_n, j ∈ L_n do
12:   if u...
https://arxiv.org/abs/2505.03899v1
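The excerpted lines 7–12 build the DD top-down: from each node u in layer i and each sub-interval j of variable i, a successor accumulates the scale contribution η_i(l^i_j) when the sub-interval is nonnegative, and nodes with equal state value are merged. A loose sketch of that state-merging idea, following only what the excerpt shows (the inputs and η functions are hypothetical stand-ins, and the else-branch handling mirrors the excerpt literally):

```python
def build_dd_states(subintervals, etas):
    """Sketch of the layer construction in lines 7-12: each node carries an
    accumulated state s; a successor adds eta_i(l) when l >= 0, else copies
    the state, and equal-state nodes within a layer are merged (a set per
    layer). subintervals[i] lists (l, u) pairs for variable i."""
    layers = [{0.0}]  # root node with state 0
    for i, ints in enumerate(subintervals):
        nxt = set()
        for s in layers[-1]:
            for (l, u) in ints:
                nxt.add(s + etas[i](l) if l >= 0 else s)  # merge by state value
        layers.append(nxt)
    return layers

layers = build_dd_states([[(1.0, 2.0)], [(0.0, 3.0), (-2.0, -1.0)]], [abs, abs])
# both sub-intervals of the second variable yield the same state, so they merge
assert layers[-1] == {1.0}
```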
(1a)

Σ_{a∈A_k} l(a) y_a = x_k,  ∀k ∈ [n]   (1b)
y_a ≥ 0,  ∀a ∈ A,   (1c)

where f_s = −f_t = 1, f_u = 0 for u ∈ U \ {s, t}, and δ⁺(u) (resp. δ⁻(u)) denotes the set of outgoing (resp. incoming) arcs at node u. Then, proj_x G(D) = conv(Sol(D)). Viewing y_a as the network flow variable on arc a ∈ A of D, the formulation (1a)–(1c) implies that the LP relaxat...
https://arxiv.org/abs/2505.03899v1
r–t path P(τ) in the weighted DD and compute its encoding point x_{P(τ)}
5:  if γ(τ)(x̄ − x_{P(τ)}) > max{0, Δ*} then
6:    update τ* = τ and Δ* = γ(τ)(x̄ − x_{P(τ)})
7:  update ϕ(τ+1) = γ(τ) + ρ_τ(x̄ − x_{P(τ)}) for step size ρ_τ
8:  find the projection γ(τ+1) of ϕ(τ+1) onto the unit sphere defined by ||γ||₂ ≤ 1
9:  set τ = τ + 1
10: if Δ* > 0 then
11:   return inequality γ(τ*)(x − x_P...
https://arxiv.org/abs/2505.03899v1
6. Consider F_j = {x ∈ P_j | η(x) ≤ β} for j = 1, 2, where η(x) is a scale function, and P_j = ∏_{i=1}^n [l^i_j, u^i_j] is a box domain of variables. For each j = 1, 2, let D_j be the DD constructed via Algorithm 1 for a single sub-interval [l^i_j, u^i_j] for i ∈ [n]. If P_2 ⊆ P_1, then conv(Sol(D_2)) ⊆ conv(Sol(D_1)). While Proposition 6 implies that the dual b...
https://arxiv.org/abs/2505.03899v1
results of Proposition 7 are proven for a bounded domain of variables, the SB&C can still be implemented for bounded optimization problems that contain unbounded variables. In this case, as is common in spatial branch-and-bound solution methods, the role of a primal bound becomes critical for pruning nodes that con...
https://arxiv.org/abs/2505.03899v1
Statlog (Landsat Satellite) UCI 36 4434 5854.507 5853.978 49.29 Connectionist Bench (Sonar, Mines vs. Rocks) UCI 60 207 71.853 68.266 195.17 Communities and Crime UCI 127 123 9.891 9.466 76.44 Urban Land Cover UCI 147 168 1.002 0.999 5.10 Relative location of CT slices on axial axis UCI 384 4000 10730.592 10426.985 281...
https://arxiv.org/abs/2505.03899v1
minlp. Optimization Methods and Software, 24:597–634, 2009. D. Bertsekas. Nonlinear Programming. Athena Scientific, 1999. D. Bertsekas. Convex Optimization Algorithms. Athena Scientific, 2015. D. Bertsimas and B. Van Parys. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. The A...
https://arxiv.org/abs/2505.03899v1
https://archive.ics.uci.edu. E. Khademnia and D. Davarnia. Convexification of bilinear terms over network polytopes. Mathematics of Operations Research, 2024. doi: 10.1287/moor.2023.0001. A. Khajavirad, J. J. Michalek, and N. V. Sahinidis. Relaxations of factorable functions with convex-transformable intermediat...
https://arxiv.org/abs/2505.03899v1
this inequality can be considered as η(x) = Σ_{i∈N} η_i(x_i) where η_i(x_i) = |x_i|^p. Since η_i(x_i) satisfies all three conditions in Definition 1, we conclude that η(x) is a scale function. (ii) Using the definition ||x||₀ = Σ_{i∈N} I(x_i), where I(x_i) = 0 if x_i = 0, and I(x_i) = 1 if x_i ≠ 0, we can rewrite constraint ||x||₀ ≤ β as Σ_{i∈N} η_i(x...
https://arxiv.org/abs/2505.03899v1
according to Definition 1, we must have η_i(0) = 0, η_i(x_1) ≤ η_i(x_2) for 0 ≤ x_1 ≤ x_2, and η_i(x_1) ≥ η_i(x_2) for x_1 ≤ x_2 ≤ 0, for each i ∈ [n]. Using the fact that l^i_{j*_i} ≤ x̄_i ≤ u^i_{j*_i}, we consider three cases. (i) If u^i_{j*_i} ≤ 0, then x̄_i ≤ u^i_{j*_i} ≤ 0, and thus η_i(x̄_i) ≥ η_i(u^i_{j*_i}) = γ_i. (ii) If l^i_{j*_i} ≥ 0, then x̄_i ≥ l^i_{j*_i} ≥ 0, and thus η_i(x̄_i) ≥ ...
https://arxiv.org/abs/2505.03899v1
It is clear that these points correspond to the extreme points of the rectangular partition P_2 = ∏_{i=1}^n [l^i_2, u^i_2]. Pick one of these points, denoted by x̄. We show that x̄ ∈ conv(Sol(D_1)). It follows from lines 1–10 of Algorithm 1 that each layer i ∈ [n] includes a single node v_i. Further, each node v_i is connected to v_{i+1} v...
https://arxiv.org/abs/2505.03899v1
j ∈ N. On the other hand, we can write F_j = {x ∈ ℝⁿ | η(x) ≤ β} ∩ P_j by definition. Since {P_j} ↓ {x̃}, we obtain that {F_j} ↓ {x ∈ ℝⁿ | η(x) ≤ β} ∩ {x̃} = {x̃}, since η(x̃) ≤ β by assumption. Therefore, based on the previous arguments, we can write that F_j ⊆ conv(Sol(D_j)) ⊆ P_j. Because {F_j} ↓ {x̃} and {P_j} ↓ {x̃}, we conclude that conv(Sol(D_j...
https://arxiv.org/abs/2505.03899v1
PRINCIPAL CURVES IN METRIC SPACES AND THE SPACE OF PROBABILITY MEASURES
By Andrew Warren, Anton Afanassiev, Forest Kobayashi, Young-Heon Kim and Geoffrey Schiebinger
Department of Mathematics, University of British Columbia; awarren@math.ubc.ca; anton.a@math.ubc.ca; fkobayashi@math.ubc.ca; yhkim@math.ubc.ca; ...
https://arxiv.org/abs/2505.04168v1
. . . 23
5.1 Variants of principal curves . . . 23
Acknowledgements . . . 24
References . . .
https://arxiv.org/abs/2505.04168v1
analysis (PCA) is one of the most basic and widely used tools in exploratory data analysis and unsupervised learning. Principal curves [42] provide a natural, nonlinear generalization of principal components (Figure 1). Motivated by an application to single cell RNA-sequencing (scRNA-seq), we develop a theory of princi...
https://arxiv.org/abs/2505.04168v1
two criteria: the distribution Λ is close to γ⋆, and the curve γ⋆ is not “too long.” We measure the fit of γ to Λ by the average squared distance of points to γ, namely

Fit(Λ, γ) := ∫_X inf_{t∈[0,1]} d²(x, γ_t) dΛ(x).

¹Here continuity is with respect to convergence in distribution. A p...
https://arxiv.org/abs/2505.04168v1
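With an empirical Λ (uniform weights on sample points) and a curve discretized into knots, Fit(Λ, γ) becomes an average of squared nearest-knot distances. A minimal Euclidean sketch (the metric d and the discretization are stand-ins for the general metric-space setting):

```python
import math

def fit(points, knots):
    """Discrete analogue of Fit(Lambda, gamma): average over sample points
    of the squared distance to the nearest curve knot."""
    return sum(min(math.dist(x, g) ** 2 for g in knots) for x in points) / len(points)

# points lying on the knots have zero fit error
assert fit([(0.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]) == 0.0
# a single point at distance 1 from the only knot contributes 1**2
assert fit([(0.0, 1.0)], [(0.0, 0.0)]) == 1.0
```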
RNA sequencing (scRNA-seq). In practice, scRNA-seq provides an imperfect snapshot of each cell’s expression profile by sampling from the distribution of messenger RNAs within the cells. After profiling many cells at various timepoints along a developmental curve ρ, the resulting data would consist of noisy empirica...
https://arxiv.org/abs/2505.04168v1
that our approach to principal curves is competitive with existing methods, even while simultaneously providing an estimate of the latent one-dimensional structure of the data, something not provided by widely used seriation methods like that of [6]. We anticipate that this framework could be useful for generating high...
https://arxiv.org/abs/2505.04168v1
[29], spline-based methods [86], and penalized principal curves: a wide range of approaches close in spirit to ours, whereby the functional PC(Λ) is modified with some kind of penalty on the curve γ. Various approaches to penalized principal curves include: a hard constraint on the length of γ [28, 51], or a soft p...
https://arxiv.org/abs/2505.04168v1
that |γ̇_t| exists for Lebesgue-almost all t ∈ [0, 1], and that |γ̇_t| is an integrable function in time. In this case we define the length of γ as follows:

Length_d(γ) := ∫₀¹ |γ̇_t| dt.

We note that the length of γ is independent of time-reparametrization, and that a sufficient condition for Length_d(γ) to be finite is that t ↦ γ_t is L...
https://arxiv.org/abs/2505.04168v1
for Λ as an estimate of Λ’s implicit one-dimensional structure. Whether this makes sense surely depends on Λ itself as well as the domain application. For example, the consistency results of Section 4 apply in the limiting case where Λ itself has one-dimensional support inside X. PROPOSITION 2.2 (Non-uniqueness of princip...
https://arxiv.org/abs/2505.04168v1
for two reasons. Firstly, it turns out that we can prove that minimizers of the discrete objective

(1/N) Σ_{k=1}^K Σ_{n∈I_k} d²(x_n, γ_{t_k}) + β Σ_{k=1}^{K−1} d(γ_{t_k}, γ_{t_{k+1}})

converge (in roughly the same sense as in Proposition 2.3) to minimizers of PPC(Λ) as N, K → ∞. Indeed we prove such a discrete-to-continuum convergence result later in this se...
https://arxiv.org/abs/2505.04168v1
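The discrete objective combines an empirical fit term, (1/N)·Σ_k Σ_{n∈I_k} d²(x_n, γ_{t_k}), with the β-weighted polygonal length Σ_k d(γ_{t_k}, γ_{t_{k+1}}). A minimal Euclidean sketch, with I_k taken as the Voronoi cell of knot k (a stand-in for the general metric d):

```python
import math

def ppc_objective(points, knots, beta):
    """Discrete penalized principal curve objective: mean squared distance
    of each point to its nearest knot (Voronoi assignment) plus beta times
    the length of the polygonal line through the knots."""
    data = sum(min(math.dist(x, g) ** 2 for g in knots) for x in points) / len(points)
    length = sum(math.dist(knots[k], knots[k + 1]) for k in range(len(knots) - 1))
    return data + beta * length

# points sitting on the knots: only the beta * length term remains
val = ppc_objective([(0.0, 0.0), (1.0, 0.0)], [(0.0, 0.0), (1.0, 0.0)], beta=2.0)
assert val == 2.0
```

This is the quantity that the Lloyd-type alternating updates described around Figure captions (b)–(d) aim to decrease.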
the discretized functional in Step 5, each vector pointing to an x_n ∈ I_k is weighted by 1/N, while the vectors pointing to γ_{k−1}, γ_{k+1} are weighted by β/(2K−2) (the factor of two arising because each d²(γ_k, γ_{k+1}) is split into one vector on γ_k and one on γ_{k+1}). (d) The updated {γ_k}_{k=1}^K with the associated updated Voronoi cells. ...
https://arxiv.org/abs/2505.04168v1
converge to an AC curve γ_t which then minimizes PPC(Λ)? We address this question in the following proposition, which also allows for simultaneously taking a convergent sequence of data distributions (Λ_N)_{N∈ℕ}. THEOREM 2.5 (Discrete to continuum). Let X be a compact geodesic metric space. Let K, N ∈ ℕ and Λ_N ∈ P(X). Suppose also tha...
https://arxiv.org/abs/2505.04168v1
Here the test function φ is any continuous function from P(V) to ℝ, where P(V) is itself equipped with the weak* topology. We also note that, by applying Prokhorov’s theorem twice, P(P(V)) is compact when equipped with the specific weak* topology we have just described. REMARK 3.1. The class of AC curves for...
https://arxiv.org/abs/2505.04168v1
{γ_k}_{k=1}^K, in other words the I_k’s are disjoint and every µ_n ∈ I_k satisfies γ_k ∈ argmin_{1≤k≤K} W₂²(µ_n, γ_k). Lastly, the nonlocal discrete objective PPC^K_w(Λ_N) introduced in Appendix D also makes sense in this setting, and is defined in an identical manner to PPC^K(Λ_N). For the sake of motivation, we give two examples where a dis...
https://arxiv.org/abs/2505.04168v1
Λ. One option is that we draw i.i.d. µ_n ∼ Λ and get to “observe” each probability measure µ_n ∈ P(V) exactly. However, this observation scheme is rather idealized for the applications we wish to consider. Rather, we wish to consider the following observation scheme:

Λ ∈ P(P(V)),   µ_1, . . . , µ_N ∼ Λ,   X¹_n, . . . , X^{M_n}_n ∼ µ_n.

This scheme has t...
https://arxiv.org/abs/2505.04168v1
exists a subsequence of N (and thus of M) along which Λ̂_{N,M} ⇀* Λ almost surely. 2. Assume that M ≥ C(log N)^q for some C > 0 and q > 1. Then Λ̂_{M,N} ⇀* Λ almost surely. REMARK 3.4. The proof of Theorem 3.1 also works in some cases where the number of samples M_n in the empirical measure µ̂_n varies with n. For example, if we take M := min_{1≤n≤...
https://arxiv.org/abs/2505.04168v1
outlined in the introduction (Figure 1). Namely, given a data distribution which comes from observing a ground truth curve with unknown time labels, we show how to infer both the ground truth curve and the ordering of the points along said curve, and so obtain a principal curves based seriation algorithm. In Section ...
https://arxiv.org/abs/2505.04168v1
4.3. Let X be a compact geodesic metric space. Let ρ ∈ AC([0,1]; X) and Λ = ρ(·)#Leb[0,1]. Suppose that t ↦ ρ_t is injective. Let (Λ_N)_{N∈ℕ} be a sequence in P(X) converging weak*ly to Λ. Let {γ*^{β,K}_k}_{k=1}^K be a minimizer of PPC^K(Λ_N; β), and let γ*^{β,K}_t be a constant-speed piecewise geodesic interpolation of {γ*^{β,K}_k}_{k=1}^K. Then: up to...
https://arxiv.org/abs/2505.04168v1
β converging to zero, along which γ*^β_t converges uniformly in t. Then, for all sufficiently small β_j > 0, it holds that either: 1. For all 1 ≤ i, i′ ≤ T, t_i < t_{i′} ⟺ τ̂(ρ_i) < τ̂(ρ_{i′}), or 2. For all 1 ≤ i, i′ ≤ T, t_i < t_{i′} ⟺ τ̂(ρ_i) > τ̂(ρ_{i′}). In other words, the ordering given by τ̂ is correct up to total reversal. This result is derived direc...
https://arxiv.org/abs/2505.04168v1
[W₂(ρ̂_{t_i}, ρ̂_{t_j})]_{i,j} of pairwise W₂ distances between empirical measures ρ̂_{t_i}. To quantify performance of each method, we use a loss on the space of permutations of time labels. Specifically, we consider the error metric

E(τ̂) = (2 / (T(T−1))) Σ_{i<j} 1(τ̂(ρ̂_{t_i}) > τ̂(ρ̂_{t_j})),

which computes the percentage of pairs of non-identical ti...
https://arxiv.org/abs/2505.04168v1
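The error metric E(τ̂) counts the fraction of the T(T−1)/2 pairs of distinct time labels whose relative order τ̂ gets wrong. A minimal sketch, in terms of an inferred rank per true time point:

```python
def ordering_error(tau_hat):
    """E(tau_hat) = 2/(T(T-1)) * sum_{i<j} 1(tau_hat(i) > tau_hat(j)),
    where tau_hat[i] is the inferred rank of the i-th true time point."""
    T = len(tau_hat)
    bad = sum(1 for i in range(T) for j in range(i + 1, T) if tau_hat[i] > tau_hat[j])
    return 2.0 * bad / (T * (T - 1))

assert ordering_error([0, 1, 2, 3]) == 0.0   # perfect ordering
assert ordering_error([3, 2, 1, 0]) == 1.0   # total reversal
```

Since the guarantees above identify the ordering only up to total reversal, one would in practice compare against min(E(τ̂), 1 − E(τ̂)); the excerpt does not show the paper's exact convention for that.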
data in the space of probability measures over features, as we do here, we can instead infer a single curve (and ordering) in the Wasserstein space for the whole dataset. For branching data, our approach therefore has the benefit of assigning an ordering which is comparable across branches. Test dataset 2. Next, we pre...
https://arxiv.org/abs/2505.04168v1
propose an estimator and prove it is consistent for recovering a curve of probability measures in Wasserstein space from empirical samples. This can be interpreted as a one-dimensional manifold learning problem, and is related to seriation [32, 83] and trajectory inference [60]. Ordering scRNA-seq datasets along a prin...
https://arxiv.org/abs/2505.04168v1
is routine: indeed, along any minimizing sequence, the number of components is necessarily bounded, so one can pass to the limit by extracting a limiting curve for each component separately. Additionally, [56] propose numerics for their multiple curves problem which are similar to ours (albeit without proof of consiste...
https://arxiv.org/abs/2505.04168v1
Initiative (KI). REFERENCES [1] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011. [2] L. Ambrosio and P. Tilli. Topics on Analysis in Metric Spaces. Oxford lecture series in mathemati...
https://arxiv.org/abs/2505.04168v1
and Hans-Georg Müller. Wasserstein regression. Journal of the American Statistical Association , 118(542):869–882, 2023. [19] Yen-Chi Chen, Christopher R. Genovese, and Larry Wasserman. Asymptotic theory for density ridges. Annals of Statistics , 43(5):1896–1928, 2015. [20] Yongxin Chen, Giovanni Conforti, and Tryphon ...
https://arxiv.org/abs/2505.04168v1
Surfaces. PhD thesis, Stanford University, 1984. [42] Trevor Hastie and Werner Stuetzle. Principal Curves. Journal of the American Statistical Association, 84(406):502–516, 1989. [43] Søren Hauberg. Principal curves on Riemannian manifolds. IEEE t...
https://arxiv.org/abs/2505.04168v1
Variations, 27:8, 2021. [65] A. Marek, V. Blum, R. Johanni, V. Havu, B. Lang, T. Auckenthaler, A. Heinecke, H.-J. Bungartz, and H. Lederer. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science. Journal of Physics: Condensed Matter, 26(21):213201, May 201...
https://arxiv.org/abs/2505.04168v1
models for pairwise comparisons: Statistical and computational issues. IEEE Transactions on Information Theory, 63(2):934–959, 2016. [84] Dejan Slepčev. Counterexample to regularity in average-distance problem. Annales de l’IHP Analyse non linéaire, 31(1):169–184, 2014. [85] Alexander J Smola, Sebastian Mika, Bernh...
https://arxiv.org/abs/2505.04168v1
metric spaces: LEMMA A.3 (Compactness and lower semicontinuity of length functional). (i) Let X be a compact metric space. Let n ∈ ℕ, and for each n let γⁿ ∈ AC([0,1]; X) be constant speed, in the sense that |γ̇ⁿ_t| = Length(γⁿ) for almost all t ∈ [0,1]. Suppose that sup_{n∈ℕ} Length(γⁿ) < ∞. Then there exists some γ* ∈ AC([0,1]; X) (but...
https://arxiv.org/abs/2505.04168v1
and let γ ∈ AC([0,1]; X), with range Γ. Then Length(γ) ≥ H¹(Γ), with equality if γ is injective. PROOF. This is established in the case where γ is Lipschitz by combining [2, Theorem 4.1.6] and [2, Theorem 4.4.2]. The same holds more generally by replacing γ with its constant speed (hence 1-Lipschitz) reparametrization, and...
https://arxiv.org/abs/2505.04168v1
First suppose h is increasing. Then f, g visit points of f([0,1]) in the same order. By the definition of the constant-speed parametrization we immediately get f̂ = g, as desired. Alternatively, h is decreasing, in which case f̂ coincides with the time-reversal of g. 2. Suppose that g is not injective and that {g(0), g(1)} ≠ {f(0...
https://arxiv.org/abs/2505.04168v1
exist whenever V is compact. Indeed, taking Y = ℓ², this follows from the classical fact that every compact metric space can be embedded homeomorphically as a compact subset of ℓ², see Section 4.C in [50]. Given a compact metric space V and RKHS H atop V, we equip the space P(V) with the metric MMD_H. In particular we choose ...
https://arxiv.org/abs/2505.04168v1
Fig 4: Illustration for the proof of Proposition 2.2. Take φ₁, φ₂, φ₃ to be the three distinct isometries indicated, applied to a curve with µ uniform on a triangle. In this case, φ₃ yields a distinct image. We use the following lemma as an ingredient in the proof of Proposition 2.3. LEMMA C.1. Let X be a compact metric...
https://arxiv.org/abs/2505.04168v1
on AC([0,1];X)/∼. REMARK C.2. We mention that Proposition 2.3 is broadly analogous to a result obtained by two of the authors in [57, Cor. 5.2], albeit in a different setting; see also [84, Lem. 3] for a similar stability result on Euclidean space for the closely related “average distance variational problem” from [15]...
https://arxiv.org/abs/2505.04168v1
γ^K_k to γ^K_{k+1}, for all 1 ≤ k ≤ K−1; such a γ^K exists because X is a geodesic metric space. Note that by construction, we have that

Length(γ^K) = Σ_{k=1}^{K−1} d(γ^K_k, γ^K_{k+1}).

At the same time, let Γ^K_cont denote the graph of γ^K. Note that Γ^K ⊂ Γ^K_cont, and so for any x ∈ X, we have d²(x, Γ^K_cont) ≤ d²(x, Γ^K). Therefore, ∫_X d²(x, Γ^K_cont) d...
https://arxiv.org/abs/2505.04168v1
m ∈ P(V), and d_{P(V)}(µ_m, µ′_m) < ε. Then,

W₁( (1/M) Σ_{m=1}^M δ_{µ_m}, (1/M) Σ_{m=1}^M δ_{µ′_m} ) < ε.

PROOF. Consider the map T which sends δ_{µ_m} to δ_{µ′_m}; with respect to this map, we have that

∫_{P(V)} d_{P(V)}(ν, T(ν)) d( (1/M) Σ_{m=1}^M δ_{µ_m} )(ν) = (1/M) Σ_{m=1}^M d_{P(V)}(µ_m, µ′_m) < ε.

This directly implies that W₁( (1/M) Σ_{m=1}^M δ_{µ_m}, (1/M) Σ_{m=1}^M δ_{µ′_m} ) < ε as desired. PROOF OF T...
https://arxiv.org/abs/2505.04168v1
2.3 from that article holds. It follows that, assuming M/R → 0, we have

E[W₁(Λ_N, Λ̂_{N,M,R})] ≤ E[W₁(Λ_N, Λ̂_{N,M})] + E[W₁(Λ̂_{N,M}, Λ̂_{N,M,R})] ≤ Σ_{n=1}^N E[W₁(µ_n, µ̂^M_n)] + E[W₁(µ̂^M_n, µ̃^{M,R}_n)] → 0.

This implies that W₁(Λ_N, Λ̂_{N,M,R}) → 0 in probability, as desired. C.3. Proofs for Section 4. PROOF OF PROPOSITION 4.1. (i) Let P denote the ran...
https://arxiv.org/abs/2505.04168v1
Let t₁ denote the first time after k₀ that γ*_{t₁} ∈ T₁. It holds that Length(γ*⌞[t₀, k₀]) ≥ d(γ*_{t₀}, γ*_{k₀}) > 0, Length(γ*⌞[k₀, t₁]) ≥ d(γ*_{k₀}, γ*_{t₁}) > 0, and

Length(γ*) = Length(γ*⌞[0, t₀]) + Length(γ*⌞[t₀, k₀]) + Length(γ*⌞[k₀, t₁]) + Length(γ*⌞[t₁, 1])

while by construction the (disjo...
https://arxiv.org/abs/2505.04168v1
been able to verify this claimed regularization effect mathematically, but leave this interesting direction as one for future work. We consider the following rather general nonlocal, nonuniform smoothing kernel. Let w: ℝ₊ → ℝ₊ be a Borel function which is compactly supported on [0, 1]. We write w_h(t) := (1/h) w(t/h) for h > 0. No...
https://arxiv.org/abs/2505.04168v1
deduce that, if γ^K, the piecewise geodesic curve from {γ₁, . . . , γ_K}, converges uniformly in t to some limiting AC curve γ, then

liminf_{N,K→∞; h̄→0} ( S_{N,K} + β Σ_{i=1}^{K−1} d(γ_i, γ_{i+1}) )
≥ liminf_{N,K→∞} ( (1/N) Σ_{k=1}^K Σ_{x_n∈I_k} d²(x_n, γ_k) + β Σ_{i=1}^{K−1} d(γ_i, γ_{i+1}) )
≥ ∫_X d²(x, Γ) dΛ(x) + β Length(γ).

(In fact the same inequality holds even if h̄ is not s...
https://arxiv.org/abs/2505.04168v1
we present it explicitly below. Note that the only changes are that: in line 3, the TSP subroutine should be understood as keeping γ₀ and γ_K fixed while being allowed to permute the other indices of the knots; and, in line 5, one only updates the locations of the knots {γ_k}_{k=2}^{K−1}. Algorithm 3: Nonlocal, Fixed-Endpoint...
https://arxiv.org/abs/2505.04168v1
pairwise W₂ distances between empirical measures ρ̂_{t_i}. For TSP, we regard W as a matrix of edge weights in a complete graph of time points. The TSP approach to seriation is to visit all nodes exactly once on a distance-minimizing path, and use the ordering given by said minimizing path. This method optionally allows the ...
https://arxiv.org/abs/2505.04168v1
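The TSP seriation step can be prototyped by brute force for a handful of time points: treat W as edge weights on the complete graph and return the ordering of a minimum-total-length Hamiltonian path. A sketch under that reading (exact search, so only feasible for small T; the paper's actual subroutine is not shown in the excerpt):

```python
from itertools import permutations

def tsp_seriation(W):
    """Return the node ordering of a minimum-total-weight Hamiltonian path
    through the complete graph with symmetric weight matrix W (brute force)."""
    T = len(W)
    best = min(permutations(range(T)),
               key=lambda p: sum(W[p[i]][p[i + 1]] for i in range(T - 1)))
    return list(best)

# four time points on a line: the shortest path visits them in order
W = [[abs(i - j) for j in range(4)] for i in range(4)]
assert tsp_seriation(W) in ([0, 1, 2, 3], [3, 2, 1, 0])
```

As with the principal-curve ordering, the path is only identified up to reversal.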
arXiv:2505.04218v1 [math.PR] 7 May 2025
Convergence rate of Euler-Maruyama scheme to the invariant probability measure under total variation distance
Yinna Ye (a), Xiequan Fan (b,*)
(a) Department of Applied Mathematics, School of Mathematics and Physics, Xi’an Jiaotong-Liverpool University, Suzhou 215123, P. R. China
(b) School o...
https://arxiv.org/abs/2505.04218v1
objective of this work is to prove such a convergence under the TV distance, and furthermore provide the exact convergence rate. To this end, the properties of the EM scheme (θ_k)_{k≥0}, as a Markov chain on a continuous state space, are studied. It turns out that under Assumption 1, (θ_k)_{k≥0} is irred...
https://arxiv.org/abs/2505.04218v1
every (x, y) ∈ ℝ²,

|g(x) − g(y)| ≤ L|x − y|,   (2.5)
(g(x) − g(y))(x − y) ≤ −K₁(x − y)² + K₂.   (2.6)

Moreover, g is second order differentiable and the second order derivative of g is bounded. As stated in Remark 2.2 of [12], the assumption (2.5) implies g²(x) ≤ 2L²x² + 2g²(0), |g′| ≤ L...
https://arxiv.org/abs/2505.04218v1
X_n)_{n≥0}, with the initial state X₀ = x. And E_x denotes the expectation under the probability measure P_x. If B ∈ X, define respectively the first hitting time τ_B and the first return time σ_B of the set B for the chain (X_n)_{n≥0} by

τ_B = inf{n ≥ 0 | X_n ∈ B},   σ_B = inf{n ≥ ...
https://arxiv.org/abs/2505.04218v1
and is covered by a denumerable union of small sets. Proof. (1) Recall that P_η admits a Markov kernel p_η(x, y), given by

p_η(x, y) = (1 / (√η σ)) φ( (y − x − η g(x)) / (√η σ) ).

Suppose that C is a compact subset of ℝ such that Leb(C) > 0. Then for all x ∈ C and A ∈ B(ℝ),

P_η(x, A) = ∫_A p_η(x, y) dy ≥ ...
https://arxiv.org/abs/2505.04218v1
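The kernel p_η(x, y) above is just a Gaussian density in y with mean x + ηg(x) and standard deviation √η·σ (φ denoting the standard normal density). A small numerical sketch (the drift g and parameter values are illustrative, not from the paper):

```python
import math

def em_kernel(x, y, g, eta, sigma):
    """One-step Euler-Maruyama transition density p_eta(x, y):
    (1 / (sqrt(eta) * sigma)) * phi((y - x - eta * g(x)) / (sqrt(eta) * sigma)),
    with phi the standard normal density."""
    s = math.sqrt(eta) * sigma
    z = (y - x - eta * g(x)) / s
    return math.exp(-0.5 * z * z) / (s * math.sqrt(2.0 * math.pi))

# as a density in y it integrates to 1 (Riemann sum on a wide grid)
g = lambda u: -u                      # toy drift
total = sum(em_kernel(0.0, -5.0 + 0.01 * k, g, 0.1, 1.0) * 0.01 for k in range(1001))
assert abs(total - 1.0) < 1e-3
```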
have for any η ∈ (0, η₀], β_η ≤ β₀; from Lemma Appendix B.1 (2), D₀ ⊂ D_η, so max_{η∈(0,η₀]} σ_{D_η} ≤ σ_{D₀}. Let b₀ = b_{η₀}. Therefore, by (3.15), we obtain

sup_{x∈D₀} E_x( β_η^{σ_{D_η}} ) ≤ sup_{x∈D₀} E_x( β₀^{σ_{D₀}} ) ≤ sup_{x∈D₀} (1 + x²) + b₀β₀ < ∞.

We thus co...
https://arxiv.org/abs/2505.04218v1
split chain {(θ_k, D_k)}_{k≥1} is that if θ₀ and D₀ are independent, then {(θ_k, F^θ_k)}_{k≥1} is a Markov chain with kernel P_η. In the book Douc et al. [7], a split Markov kernel P̌ was constructed for an irreducible kernel P, which has an accessible small set. In this book, it is also found that P̌...
https://arxiv.org/abs/2505.04218v1
ξ on (ℝ, B(ℝ)) satisfying P_ξ(σ_C < ∞) = 1, P̌_{ξ⊗δ_d}(σ_α̌ < ∞) = 1 for all d ∈ {0, 1}. Moreover, if P_η is Harris recurrent, then P̌_η is Harris recurrent. (6) If C is accessible and P_η admits an invariant probability measure π_η, then α̌ is positive for P̌_η. The lemma above is obtained by applying Proposition 11.1.4 in [7] to the irreducible ker...
https://arxiv.org/abs/2505.04218v1
we can obtain the following proposition. Proposition 3.3. Under the same conditions of Lemma 3.13, there exists a constant β > 1 such that for all initial distributions ξ on (ℝ, B(ℝ)),

Σ_{n=0}^∞ βⁿ ||ξPⁿ_η − π_η||_TV < ∞.   (3.27)

Proof. Lemma 3.9 implies for n ≥ 1, ||ξPⁿ_η − π_η||_TV ≤ ...
https://arxiv.org/abs/2505.04218v1
m ∈ ℕ*, λ ∈ (0, 1) and V₀: ℝ → [1, ∞) a measurable and bounded function on ℝ. Let f(x) = (λ̃ − λ^{1/m}) V₀(x), for λ̃ ∈ (λ^{1/m}, 1). From (3.33), for any λ̃ ∈ (λ^{1/m}, 1) and any x ∈ ℝ,

P_η V₀(x) + f(x) ≤ λ̃ V₀(x) + b.   (3.34)

Moreover, from above, we have for all d > 0, the set B_d is small; and for all d ≥ d₀, B_d is accessible. ...
https://arxiv.org/abs/2505.04218v1
For every given η ∈ (0, η₀], the existence and uniqueness of π_η and (2.9) are proved in Proposition 3.2. The inequality (2.8) is proved in Lemma 3.8 (3). Proof of Theorem 2.2. The inequality (2.11) is an immediate consequence of Proposition 3.1 (3) and Proposition 3.4 (3). And the inequality (2.12) is obtained from (2.11) an...
https://arxiv.org/abs/2505.04218v1
= 1 and a(n) = 0, for any n ≠ m. Definition Appendix A.5 (Irreducible kernel, [7]). A Markov kernel P is said to be irreducible if it admits an accessible small set. Definition Appendix A.6 (Recurrent and Harris recurrent, [7]). (1) A set A ∈ X is said to be recurrent if U(x, A) = ∞ for all x ∈ A; it is said to be Ha...
https://arxiv.org/abs/2505.04218v1
there exists a constant η₂ ∈ (0, 1) depending only on K₁ and L, such that for all η ∈ (0, η₂], h(η) < 0 and so f′₁(η) < 0. Consequently, the function f₁(η) is decreasing over (0, η₂]. References [1] Brooks, S., Gelman, A., Jones, G. L., Meng, X.L., 2011. Handbook of Markov chain Monte Carlo, 1st ed. Chapman and Hall/CRC. [2] Butk...
https://arxiv.org/abs/2505.04218v1
Beyond entropic regularization: Debiased Gaussian estimators for discrete optimal transport and general linear programs
Shuyu Liu (1), Florentina Bunea (2), and Jonathan Niles-Weed (1,3)
(1) Courant Institute of Mathematical Sciences, New York University, New York, NY, 10012
(2) Department of Statistics and Data Science, Cornell Univ...
https://arxiv.org/abs/2505.04312v1
define a new estimator of the plan, with better properties than existing approaches. We focus on the discrete version of the optimal transport problem, which compares two distributions t and s supported on a finite set of points {v₁, . . . , v_p}. Associated to each pair of points {v_i, v_j} is a cost c_{ij}. The optimal transpo...
https://arxiv.org/abs/2505.04312v1
variable. For instance, examining the northwest corner entry, we see that the limit of √n((π_n)₁₁ − π⋆₁₁) is the minimum of two independent centered Gaussians, and therefore has negative mean. Far from being specific to this example, asymptotic bias is in fact a general phenomenon plaguing the plan estimation problem: pri...
https://arxiv.org/abs/2505.04312v1
construct valid confidence sets in situations where existing methods fail. To our knowledge, this represents the first method to construct asymptotically unbiased estimators and Gaussian confidence sets for optimal transport plans. In fact, our approach applies in significantly more generality than the optimal transpor...
https://arxiv.org/abs/2505.04312v1
the form

x̂_n = 2x(r_n/2, b_n) − x(r_n, b_n).   (14)

This trick—which has been rediscovered many times in the literature, for instance under the name “Richardson extrapolation” in numerical analysis (Richardson, 1911) and “twicing” in statistics (Newey et al., 2004; Zhang and Xia, 2012)—provides an easy solution for the problem of...
https://arxiv.org/abs/2505.04312v1
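Eq. (14) is Richardson extrapolation in the regularization parameter: if the estimator's bias is c·r + O(r²), the combination 2x(r/2) − x(r) cancels the linear term. A toy numerical illustration (the estimator below is synthetic with a known bias and is not the paper's estimator):

```python
def twiced(x, r):
    """Debiased combination of Eq. (14): 2 * x(r/2) - x(r)."""
    return 2.0 * x(r / 2.0) - x(r)

truth = 1.0
x = lambda r: truth + 3.0 * r + r ** 2   # synthetic estimator, bias 3r + r^2
r = 0.1
# linear bias cancels; only a quadratic remainder of size r**2 / 2 survives
assert abs(twiced(x, r) - truth) < abs(x(r) - truth)
assert abs(twiced(x, r) - truth) < 1e-2
```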
for the optimal solution (Klatt et al., 2020), a phenomenon that even extends to the case of continuous marginals (Goldfeld et al., 2024; González-Sanz and Hundrieser, 2023; González-Sanz et al., 2022; Harchaoui et al., 2024), but they are not centered at the true optimal plan. In some cases, under strong regularity a...
https://arxiv.org/abs/2505.04312v1
Klatt et al., 2020; Mordant, 2024). For example, Klatt et al. (2020) showed a central limit theorem for the empirical entropic optimal transport plan π_{λ,n}, centered at the population solution to the regularized program π⋆_λ. Their asymptotic results hold when the regularization parameter λ is fixed and n tends to infinity...
https://arxiv.org/abs/2505.04312v1