Title: A Busemann hybrid projection-proximal point algorithm for optimization problems on Hadamard manifolds
URL Source: https://arxiv.org/html/2603.00005
arXiv:2603.00005v1 [math.OC] 12 Jan 2026. License: CC BY-SA 4.0

A Busemann hybrid projection-proximal point algorithm for optimization problems on Hadamard manifolds

R. Díaz Millán (School of Information Technology, Deakin University, Geelong, Australia; r.diazmillan@deakin.edu.au), O.P. Ferreira (Institute of Mathematics and Statistics, Federal University of Goias, Avenida Esperança, s/n, Campus II, Goiânia, GO - 75690-900, Brazil; orizon@ufg.br), M.S. Louzeiro (Federal University of Goias; mauriciolouzeiro@ufg.br), J. Ugon (Deakin University; julien.ugon@deakin.edu.au)

Abstract
We study optimization problems on Hadamard manifolds, motivated by recent advances in geometric approaches to optimization on curved spaces, particularly those involving the structure of Busemann functions. We introduce a projection based variant of the proximal point algorithm, termed the Busemann hybrid projection proximal point algorithm, which replaces Euclidean hyperplanes with horospheres defined via convex Busemann functions. The algorithm performs projections in closed form using the gradients of these functions, resulting in a geometrically intrinsic scheme that requires no tangent space linear solves. We allow for inexact subgradient evaluations and prove global convergence under controlled inexactness, with a relative error level strictly below one. We establish a Fejér type descent and sublinear complexity with a rate proportional to the inverse square root of the iteration count, and show that the exact variant coincides with the classical Riemannian proximal point algorithm. The framework clarifies the role of Busemann based subdifferentials in optimization on spaces of nonpositive curvature.
Keywords: Hadamard manifolds, Busemann function, Busemann subdifferential, horosphere projection, proximal point algorithm
AMS subject classification (2020): 90C25 ⋅ 90C26 ⋅ 90C30 ⋅ 65K10 ⋅ 58C05.
1Introduction
In this paper, we study the following problem, which generalizes classical convex minimization to the setting of Riemannian geometry:
min_{𝑝 ∈ 𝕄} 𝑓(𝑝),

(1)
where 𝕄 is a Hadamard manifold and 𝑓 : 𝕄 → ℝ is a proper, lower semi-continuous (lsc), and convex function. Hadamard manifolds, that is, complete, simply connected Riemannian manifolds with nonpositive sectional curvature, provide a natural framework for optimization beyond the Euclidean setting. These manifolds exhibit several desirable geometric properties: the exponential map at any point is a global diffeomorphism, geodesics between any pair of points are unique, and the squared distance function is convex along geodesics. As a result, many tools from convex analysis and variational optimization extend meaningfully to this setting, enabling the design of efficient algorithms that exploit the intrinsic curvature of the manifold. Convex functions on Hadamard manifolds are defined in terms of the convexity of their restriction to geodesics, and the notions of subdifferential and proximal maps can be generalized via the Riemannian exponential and logarithmic maps. This geometric framework arises naturally in various applications involving structured data with non-Euclidean constraints, such as symmetric positive definite matrices and hyperbolic embeddings. Consequently, designing algorithms for problem (1) that respect the underlying curvature and exploit manifold-specific structures is both theoretically appealing and practically relevant.
The proximal point algorithm (PPA), originally introduced in the ambient linear space by Martinet [36] and further developed by Rockafellar [41], plays a central role in optimization and monotone operator theory, due to its strong convergence properties and broad applicability. A refinement of the classical algorithm was proposed by Solodov and Svaiter [43], who introduced a projection step onto a hyperplane derived from an inexact proximal iteration, improving the control over inexactness while maintaining convergence. The extension of the PPA to the nonlinear setting was initiated in [22], where the algorithm was adapted to solve convex optimization problems on Hadamard manifolds. This approach was further generalized in [32] to handle the problem of finding singularities of monotone vector fields, introducing tools that contributed to the development of Riemannian monotone operator theory. Since then, various works have investigated the PPA and its variants in the Riemannian context; see, for instance, [6, 8, 9, 10, 26, 30, 35, 48, 34, 47, 49]. Beyond Riemannian geometry, the PPA has also been studied in the more general setting of geodesically complete metric spaces of non-positive curvature, known as CAT(0) spaces, as proposed by Bačák [3]. This extension made it possible to apply proximal-type algorithms in broader nonsmooth and non-linear settings, and motivated further developments; see [4, 16, 17, 20, 29, 38, 39, 46].
Recent advances in optimization in the Riemannian setting have highlighted the importance of developing tools adapted to the curvature and topology of the underlying space. In this context, Busemann functions, which capture the asymptotic behavior of geodesics, have emerged as powerful instruments for redefining concepts of convexity, subgradients, and projection-like operations on Hadamard spaces. These functions play a role analogous to affine functions in Euclidean space, inducing analogues of hyperplanes (horospheres) and halfspaces (horoballs), thereby enabling a wide range of practical applications; see, for example, [21, 44] and the references therein. Several recent works have explored this perspective. The paper [24] introduces a notion of subdifferentiability based on Busemann functions and proposes subgradient-type algorithms for optimization on Hadamard spaces. A parallel line of work [19] develops the concept of horospherical convexity, also grounded in Busemann geometry, as a means to overcome certain limitations of classical geodesic convexity. The paper [31] presents a horospherically based subgradient algorithm that avoids tangential constructions and lower curvature bounds, applying even to spaces with singularities such as CAT(0) cubical complexes.
Building on these geometric insights, we propose a Riemannian analogue of the hybrid projection based proximal algorithm of [43], adapted to the setting of Hadamard manifolds. Our method, termed the Busemann hybrid projection proximal point algorithm (BHPPM), exploits the convexity and asymptotic structure of Busemann functions to define projection steps onto horospheres, which act as nonlinear analogues of Euclidean hyperplanes. These horospheres serve as separating sets, and the projection at each iteration is computed in closed form using the gradient of the Busemann function [5, 14, 15]. This strategy avoids tangent space linear solves and curvature dependent estimates, yielding a conceptually simple and geometrically intrinsic scheme. The algorithm allows inexact subgradient evaluations controlled by a relative error level strictly below one and reduces to the exact Riemannian proximal point algorithm when this level vanishes. The projection step enforces a Fejér type geometric descent and is essential for convergence in the presence of inexactness. Moreover, we establish a sublinear complexity bound for the inexact method, showing that the decrease in an appropriate merit function is on the order of the inverse square root of the iteration count. Taken together, these results also yield an alternative convergence proof for the exact case and point to new directions for structured optimization techniques on Hadamard manifolds.
The remainder of the paper is organized as follows. Subsection 1.1 states our contributions and situates them within the recent literature. Section 2 reviews essential concepts, notation, and preliminary results on Hadamard manifolds and convexity. Section 3 revisits Busemann functions on Hadamard manifolds, fixes notation, and collects key properties with illustrative examples. Section 4 introduces a notion of subgradient adapted to the geometry induced by Busemann functions and establishes basic properties, including inclusion in the classical subdifferential, a simple chain rule for nondecreasing convex scalars, and max type constructions under an alignment condition, accompanied by examples. Section 5 presents the geometric foundations of the projection step in the Busemann proximal point algorithm, emphasizing the role of horospheres and their closed form projections. Section 6 introduces the Busemann proximal point algorithm, including its inexact variant, and proves key descent properties, global convergence, and sublinear iteration complexity bounds. Finally, Section 7 summarizes the contributions and outlines directions for future research.
1.1Contributions and relation to the literature
This subsection states our contributions and positions them within recent work. Tools from Busemann geometry, such as horospheres, horoballs, support inequalities, and an intrinsic subdifferential, are central here. We use them in a way consistent with classical Euclidean intuition to obtain a closed form projection onto horospheres and to design a projection based proximal method on Hadamard manifolds. The horosphere serves as a Euclidean type separating set, and the projection onto it yields a Fejér type descent. In the exact case the method coincides with the classical Riemannian proximal point algorithm. First, we state our main contributions:
•
We revisit the notion of subdifferential in a way that aligns with the classical Euclidean definition. We provide new properties and explicit formulas for the subdifferential, illustrated through a range of examples drawn from the literature. In addition, we introduce new classes of examples based on Busemann functions, highlighting further instances of subdifferentiable functions.
•
We present an explicit formula for the projection onto horospheres consistent with the Euclidean case, and we use it to build additional classes of subdifferentiable functions and to define a projection based proximal method.
•
We propose a Riemannian analogue of the projection based proximal method. With a relative error level strictly below one, we prove global convergence and a sublinear complexity bound in terms of a natural Busemann residual and the norm of approximate subgradients. In the exact case, the method coincides with the classical Riemannian proximal point algorithm.
We now relate our contributions to the recent literature. The closest references are [24] and [19], which inform the geometric background but pursue different algorithmic goals. The work [24], developed for Hadamard spaces, introduces notions and examples for the Busemann subdifferential and records properties such as inclusion relations and lack of stability under sums. We recall only what is needed for our setting, noting that Hadamard manifolds form a special subclass that allows properties depending on the Riemannian structure, for example, gradient based identities and closed form expressions. The paper [19] formalizes horospherical convexity and proposes first order methods, including accelerated variants, with curvature independent guarantees and frequent use of Fréchet mean oracles. In contrast, our work develops a proximal point approach; we use only the concepts required to support a projection based proximal scheme whose main step is a closed form projection onto a horosphere, and we prove descent, global convergence, and sublinear complexity for this method.
2Basic results about Hadamard manifolds
In this section, we recall fundamental results on Hadamard manifolds, primarily drawn from [5, 14, 42], and include more recent developments that will play a role in our analysis and be referenced at the appropriate points. Throughout this paper, 𝕄 represents a finite dimensional Hadamard manifold, 𝑇_𝑝𝕄 the tangent space of 𝕄 at 𝑝, 𝑇𝕄 the tangent bundle of 𝕄, and 𝒳(𝕄) the space of smooth vector fields on 𝕄. The norm associated to the Riemannian metric ⟨⋅,⋅⟩ is represented by ∥⋅∥. We use ℓ(𝛾) to express the length of a piecewise smooth curve 𝛾 : [𝑎,𝑏] → 𝕄. The Riemannian distance between 𝑝 and 𝑞 in 𝕄 is denoted by 𝑑(𝑝,𝑞); it induces the original topology on 𝕄, and (𝕄,𝑑) is a complete metric space. The exponential mapping exp_𝑝 : 𝑇_𝑝𝕄 → 𝕄 is defined by exp_𝑝 𝑣 = 𝛾_{𝑝,𝑣}(1), where 𝛾_{𝑝,𝑣} is the geodesic defined by its initial position 𝑝 and velocity 𝑣 at 𝑝. Hence, 𝛾_{𝑝,𝑣}(𝑡) = exp_𝑝(𝑡𝑣), and we will also use the expression exp_𝑝(𝑡𝑣) to denote the geodesic 𝛾_{𝑝,𝑣} starting at 𝑝 ∈ 𝕄 with velocity 𝑣 ∈ 𝑇_𝑝𝕄. For each 𝑝 ∈ 𝕄, the exponential map exp_𝑝 is a diffeomorphism and log_𝑝 : 𝕄 → 𝑇_𝑝𝕄 indicates its inverse. In this case, 𝑑(𝑝,𝑞) = ‖log_𝑝 𝑞‖ holds; the function 𝑑_𝑞 : 𝕄∖{𝑞} → ℝ defined by 𝑑_𝑞(𝑝) := 𝑑(𝑝,𝑞) is 𝐶^∞ for each 𝑞 ∈ 𝕄, and its gradient is given by grad 𝑑_𝑞(𝑝) = (−log_𝑝 𝑞)/𝑑(𝑞,𝑝), for all 𝑝 ≠ 𝑞. In addition, 𝑑_𝑞² : 𝕄 → ℝ defined by 𝑑_𝑞²(𝑝) := 𝑑²(𝑝,𝑞) is 𝐶^∞ for all 𝑞 ∈ 𝕄, and grad 𝑑_𝑞²(𝑝) = −2 log_𝑝 𝑞. Let 𝑝̄, 𝑞̄ ∈ 𝕄 and let (𝑝_𝑘)_{𝑘∈ℕ}, (𝑞_𝑘)_{𝑘∈ℕ} ⊂ 𝕄 be sequences such that lim_{𝑘→+∞} 𝑝_𝑘 = 𝑝̄ and lim_{𝑘→+∞} 𝑞_𝑘 = 𝑞̄. Then, for any 𝑞 ∈ 𝕄, lim_{𝑘→+∞} log_{𝑝_𝑘} 𝑞 = log_{𝑝̄} 𝑞, lim_{𝑘→+∞} log_𝑞 𝑝_𝑘 = log_𝑞 𝑝̄, and lim_{𝑘→+∞} log_{𝑝_𝑘} 𝑞_𝑘 = log_{𝑝̄} 𝑞̄. For 𝑝, 𝑞 ∈ 𝕄, the symbol 𝛾_{𝑝𝑞} indicates the geodesic segment joining 𝑝 to 𝑞, i.e., 𝛾_{𝑝𝑞} : [0,1] → 𝕄 with 𝛾_{𝑝𝑞}(0) = 𝑝 and 𝛾_{𝑝𝑞}(1) = 𝑞. Since 𝕄 is a Hadamard manifold, there exists a unique geodesic segment 𝛾_{𝑝𝑞} joining 𝑝 to 𝑞, its length is equal to 𝑑(𝑝,𝑞), and the parallel transport along 𝛾_{𝑝𝑞} from 𝑝 to 𝑞 is denoted by 𝑃_{𝑝𝑞} : 𝑇_𝑝𝕄 → 𝑇_𝑞𝕄.
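The identities 𝑑(𝑝,𝑞) = ‖log_𝑝 𝑞‖ and exp_𝑝(log_𝑝 𝑞) = 𝑞 are easy to sanity-check numerically. The sketch below, a minimal illustration of our own (not code from the paper), implements the exponential and logarithm maps on the hyperboloid model of the hyperbolic plane (curvature −1, Lorentzian form diag(1, 1, −1)); all helper names are ours.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])          # Lorentzian bilinear form

def ldot(x, y):
    """Lorentzian inner product <x, y> = x^T J y."""
    return float(x @ J @ y)

def lift(u):
    """Embed u in R^2 onto the hyperboloid <p, p> = -1, p_3 > 0."""
    u = np.asarray(u, dtype=float)
    return np.array([u[0], u[1], np.sqrt(1.0 + u @ u)])

def dist(p, q):
    """Intrinsic distance d(p, q) = arcosh(-<p, q>)."""
    return float(np.arccosh(max(1.0, -ldot(p, q))))

def log_map(p, q):
    """log_p q: tangent vector at p pointing to q, of length d(p, q)."""
    d = dist(p, q)
    if d == 0.0:
        return np.zeros(3)
    u = q - np.cosh(d) * p             # tangent component of q at p, up to scale
    return d * u / np.sinh(d)

def exp_map(p, v):
    """exp_p v: follow the geodesic from p with initial velocity v for unit time."""
    n = np.sqrt(max(0.0, ldot(v, v)))  # Lorentzian norm of a tangent vector
    if n == 0.0:
        return p
    return np.cosh(n) * p + np.sinh(n) * v / n

p, q = lift([0.3, -0.2]), lift([1.0, 0.5])
v = log_map(p, q)
print(dist(p, q), np.sqrt(ldot(v, v)))  # d(p, q) = ||log_p q||
print(np.allclose(exp_map(p, v), q))    # exp_p(log_p q) = q
```

The same hyperboloid helpers reappear, self-contained, in the later numerical checks.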
We next recall the well-known “cosine law” for triangles in the 𝜅 -hyperbolic space form, see [42, p. 138].
Lemma 1.
Let 𝜅 > 0 and let ℍ_𝜅ⁿ be the 𝑛-dimensional hyperbolic space form of constant curvature −𝜅, and let 𝑥, 𝑦, 𝑧 ∈ ℍ_𝜅ⁿ. Let 𝜃_𝑥 be the angle between the vectors log_𝑥 𝑦 and log_𝑥 𝑧. Then,

cosh(√𝜅 𝑑_𝜅(𝑦,𝑧)) = cosh(√𝜅 𝑑_𝜅(𝑥,𝑦)) cosh(√𝜅 𝑑_𝜅(𝑥,𝑧)) − sinh(√𝜅 𝑑_𝜅(𝑥,𝑦)) sinh(√𝜅 𝑑_𝜅(𝑥,𝑧)) cos 𝜃_𝑥.
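For the unit-curvature instance 𝜅 = 1, where the √𝜅 factors disappear, the law can be verified numerically on the hyperboloid model. The following sketch (our own illustration, with helper names of our choosing) checks it for one triangle, with the angle at 𝑥 computed from the Lorentzian inner product of the tangent vectors log_𝑥 𝑦 and log_𝑥 𝑧.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])
ldot = lambda x, y: float(x @ J @ y)                     # Lorentzian inner product
lift = lambda u: np.array([u[0], u[1], np.sqrt(1.0 + np.dot(u, u))])
dist = lambda p, q: float(np.arccosh(max(1.0, -ldot(p, q))))

def log_map(p, q):
    d = dist(p, q)
    u = q - np.cosh(d) * p
    return d * u / np.sinh(d)

x, y, z = lift([0.0, 0.0]), lift([0.9, -0.3]), lift([-0.4, 1.1])
a, b = log_map(x, y), log_map(x, z)
cos_theta = ldot(a, b) / (np.sqrt(ldot(a, a)) * np.sqrt(ldot(b, b)))

lhs = np.cosh(dist(y, z))
rhs = (np.cosh(dist(x, y)) * np.cosh(dist(x, z))
       - np.sinh(dist(x, y)) * np.sinh(dist(x, z)) * cos_theta)
print(lhs, rhs)   # the two sides agree to machine precision
```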
We recall the well-known “comparison theorem” for triangles in Hadamard manifolds [42, Proposition 4.5].
Lemma 2.
Let 𝕄 be a Hadamard manifold. The following inequality holds:
𝑑 2 ( 𝑥 , 𝑦 ) + 𝑑 2 ( 𝑥 , 𝑧 ) − 2 ⟨ log 𝑥 𝑦 , log 𝑥 𝑧 ⟩ ≤ 𝑑 2 ( 𝑦 , 𝑧 ) , ∀ 𝑥 , 𝑦 , 𝑧 ∈ 𝕄 .
(2)
In addition, if the sectional curvature is 𝐾 ≡ 0 in the whole 𝕄 , then (2) holds as equality.
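Inequality (2) can be probed numerically in a space of strictly negative curvature. The sketch below (a check of our own, with our helper names) samples random triangles on the hyperboloid model of curvature −1 and verifies the comparison inequality.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])
ldot = lambda x, y: float(x @ J @ y)
lift = lambda u: np.array([u[0], u[1], np.sqrt(1.0 + np.dot(u, u))])
dist = lambda p, q: float(np.arccosh(max(1.0, -ldot(p, q))))

def log_map(p, q):
    d = dist(p, q)
    u = q - np.cosh(d) * p
    return d * u / np.sinh(d)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y, z = (lift(rng.normal(size=2)) for _ in range(3))
    lhs = dist(x, y)**2 + dist(x, z)**2 - 2.0 * ldot(log_map(x, y), log_map(x, z))
    # inequality (2): holds since the curvature (-1) is nonpositive
    assert lhs <= dist(y, z)**2 + 1e-9
print("inequality (2) verified on 100 random triangles")
```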
A subset 𝐶 ⊂ 𝕄 is said to be convex if for every pair of points 𝑞 , 𝑝 ∈ 𝐶 , the geodesic segment 𝛾 𝑞 𝑝 ( 𝑡 ) := exp 𝑞 ( 𝑡 log 𝑞 𝑝 ) , for all 𝑡 ∈ [ 0 , 1 ] , is entirely contained in 𝐶 . For every closed convex set 𝐶 ⊂ 𝕄 and any point 𝑞 ∈ 𝕄 , there exists a unique nearest point in 𝐶 , called the metric projection of 𝑞 onto 𝐶 and denoted by
𝒫 𝐶 ( 𝑞 ) := arg min 𝑥 ∈ 𝐶 𝑑 ( 𝑞 , 𝑥 ) .
The following result characterizes this projection; its proof can be found in [22].
Proposition 3.
Let 𝐶 ⊂ 𝕄 be closed and convex, and let 𝑞 ∈ 𝕄 . Then, for all 𝑝 ∈ 𝐶 , the projection 𝒫 𝐶 ( 𝑞 ) is unique and satisfies
⟨ log 𝒫 𝐶 ( 𝑞 ) 𝑞 , log 𝒫 𝐶 ( 𝑞 ) 𝑝 ⟩ ≤ 0 .
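In the flat case 𝕄 = ℝⁿ, where log_{𝒫_C(q)} 𝑞 = 𝑞 − 𝒫_C(𝑞), this characterization reduces to the familiar obtuse-angle condition ⟨𝑞 − 𝒫_C(𝑞), 𝑝 − 𝒫_C(𝑞)⟩ ≤ 0. A minimal Euclidean sketch (our own example, taking C to be the closed unit ball):

```python
import numpy as np

def project_unit_ball(q):
    """Metric projection of q onto the closed unit ball in R^n."""
    n = np.linalg.norm(q)
    return q if n <= 1.0 else q / n

q = np.array([3.0, -1.0])
P = project_unit_ball(q)

rng = np.random.default_rng(1)
for _ in range(200):
    p = rng.normal(size=2)
    p = p / max(1.0, np.linalg.norm(p))       # an arbitrary point of C
    assert np.dot(q - P, p - P) <= 1e-12      # obtuse-angle condition
print("projection characterization verified")
```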
Let us recall the notion of convexity on Hadamard manifolds, fundamental to the subdifferential theory developed in [45, 22, 33]. A set Ω ⊂ 𝕄 is said to be convex if for all 𝑝, 𝑞 ∈ Ω we have 𝛾_{𝑝𝑞}(𝑡) ∈ Ω, for all 𝑡 ∈ [0,1]. A function 𝑓 : 𝕄 → ℝ is said to be convex (resp. strictly convex) if, for any 𝑝, 𝑞 ∈ 𝕄 with 𝑝 ≠ 𝑞, the function 𝑓∘𝛾_{𝑝𝑞} : [0,1] → ℝ is convex (resp. strictly convex), i.e., (𝑓∘𝛾_{𝑝𝑞})(𝑡) ≤ (1−𝑡)𝑓(𝑝) + 𝑡𝑓(𝑞) (resp. (𝑓∘𝛾_{𝑝𝑞})(𝑡) < (1−𝑡)𝑓(𝑝) + 𝑡𝑓(𝑞)), for all 𝑡 ∈ (0,1). A function 𝑓 : 𝕄 → ℝ is said to be 𝜎-strongly convex for 𝜎 > 0 if, for any 𝑝, 𝑞 ∈ 𝕄, the composition 𝑓∘𝛾_{𝑝𝑞} : [0,1] → ℝ is 𝜎-strongly convex, i.e., (𝑓∘𝛾_{𝑝𝑞})(𝑡) ≤ (1−𝑡)𝑓(𝑝) + 𝑡𝑓(𝑞) − (𝜎/2)𝑡(1−𝑡)𝑑²(𝑞,𝑝), for 𝑡 ∈ (0,1).
Definition 4.
Let 𝑓 : 𝕄 → ℝ be a convex function and 𝑞 ∈ 𝕄. A vector 𝑠 ∈ 𝑇_𝑞𝕄 is said to be a subgradient of 𝑓 at 𝑞 if
𝑓 ( 𝑝 ) ≥ 𝑓 ( 𝑞 ) + ⟨ 𝑠 , log 𝑞 𝑝 ⟩ , ∀ 𝑝 ∈ 𝕄 .
(3)
The set of all subgradients of 𝑓 at the point 𝑞 is called the subdifferential and is denoted by ∂ 𝑓 ( 𝑞 ) .
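In ℝⁿ, where log_𝑞 𝑝 = 𝑝 − 𝑞, inequality (3) is the usual Euclidean subgradient inequality. As an illustration (our own example, not from the paper), 𝑠 = sign(𝑞) is a subgradient of the nonsmooth convex function 𝑓(𝑝) = ‖𝑝‖₁ at any 𝑞 with nonzero entries:

```python
import numpy as np

f = lambda p: np.sum(np.abs(p))      # convex, nonsmooth at coordinate hyperplanes
q = np.array([1.5, -0.5, 2.0])
s = np.sign(q)                       # a subgradient of the l1-norm at q

rng = np.random.default_rng(2)
for _ in range(200):
    p = rng.normal(scale=3.0, size=3)
    # subgradient inequality (3), flat case: f(p) >= f(q) + <s, p - q>
    assert f(p) >= f(q) + np.dot(s, p - q) - 1e-12
print("subgradient inequality verified")
```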
3The Busemann function on Hadamard manifolds
In this section, we revisit the concept of the Busemann function on Hadamard manifolds, providing essential notations and discussing its properties. Given that our definition is slightly more general than the conventional one used in the literature, we have opted to include streamlined proofs for selected results. This concise review encapsulates pertinent findings that play an essential role in optimization. For additional information on this topic, see, for example, [42].
Let 𝕄 be a Hadamard manifold and 𝑑 the Riemannian distance. The Busemann function associated with a base point 𝑞 ∈ 𝕄 and 𝑣 ∈ 𝑇 𝑞 𝕄 is defined as follows
𝐵 𝑞 , 𝑣 ( 𝑝 ) ≔ lim 𝑡 → + ∞ ( 𝑑 ( 𝑝 , exp 𝑞 ( 𝑡 𝑣 ) ) − ‖ 𝑣 ‖ 𝑡 ) , ∀ 𝑝 ∈ 𝕄 .
(4)
Note that, for 𝑣 = 0 in (4), we have 𝐵_{𝑞,0}(𝑝) = 𝑑(𝑞,𝑝). In addition, it follows from the triangle inequality that

|𝐵_{𝑞,𝑣}(𝑝)| ≤ 𝑑(𝑞,𝑝), ∀𝑞, 𝑝 ∈ 𝕄, ∀𝑣 ∈ 𝑇_𝑞𝕄.

(5)

Since the distance function is Lipschitz continuous with constant 𝐿 = 1, the Busemann function 𝐵_{𝑞,𝑣} is also Lipschitz continuous with constant 𝐿 = 1, i.e.,
| 𝐵 𝑞 , 𝑣 ( 𝑥 ) − 𝐵 𝑞 , 𝑣 ( 𝑦 ) | ≤ 𝑑 ( 𝑥 , 𝑦 ) , ∀ 𝑥 , 𝑦 ∈ 𝕄 .
(6)
Let 𝑝 ∈ 𝕄 and consider the auxiliary function [0,+∞) ∋ 𝑡 ↦ 𝐵_{𝑞,𝑣}(𝑡,𝑝) := 𝑑(𝑝, exp_𝑞(𝑡𝑣)) − ‖𝑣‖𝑡. Then, using [14, Lemma II.8.18 (1), p. 268] together with the triangle inequality, we conclude that

−𝑑(𝑝,𝑞) ≤ 𝐵_{𝑞,𝑣}(𝑡,𝑝) ≤ 𝐵_{𝑞,𝑣}(𝜏,𝑝), 𝜏 ≤ 𝑡, ∀𝑝 ∈ 𝕄.
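These monotonicity and lower-bound properties of 𝑡 ↦ 𝐵_{𝑞,𝑣}(𝑡,𝑝), which guarantee that the limit in (4) exists, are easy to observe numerically. A flat-case sketch in ℝ² (example data ours), where exp_𝑞(𝑡𝑣) = 𝑞 + 𝑡𝑣:

```python
import numpy as np

q, v, p = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 3.0])
nv = np.linalg.norm(v)

def B_aux(t):
    """B_{q,v}(t, p) = d(p, exp_q(t v)) - ||v|| t, flat case."""
    return np.linalg.norm(p - (q + t * v)) - nv * t

ts = np.linspace(0.0, 50.0, 500)
vals = np.array([B_aux(t) for t in ts])
assert np.all(np.diff(vals) <= 1e-12)                  # nonincreasing in t
assert np.all(vals >= -np.linalg.norm(p - q) - 1e-12)  # bounded below by -d(p, q)
print(B_aux(1e6))   # close to the limiting Busemann value -<v/||v||, p - q> = -2
```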
Remark 1.
Conventionally, the Busemann function is defined for ‖𝑣‖ = 1; see, for example, [14, Definition 8.17, p. 268] and [42, p. 212]. However, it follows from (4) that the Busemann function is scale-invariant in its direction, namely 𝐵_{𝑞,𝜆𝑣} = 𝐵_{𝑞,𝑣}, for every 𝜆 > 0. In particular, we have 𝐵_{𝑞,𝑣} = 𝐵_{𝑞,𝑣/‖𝑣‖}, for all 𝑣 ≠ 0. For the purpose of this study, it is beneficial to extend the definition to include the case 𝑣 = 0.
In the following lemma, we provide two distinct formulas for computing the gradient of the Busemann function, a fundamental object in the study of Hadamard manifolds that plays a central role in the present work. The first formula appears implicitly in the proof of [42, Lemma 4.12, p. 231].
Lemma 5.
Let 𝑞 ∈ 𝕄 be a base point in a Hadamard manifold, and let 𝑣 ∈ 𝑇 𝑞 𝕄 be such that 𝑣 ≠ 0 . Then the Busemann function 𝐵 𝑞 , 𝑣 is convex. Moreover, 𝐵 𝑞 , 𝑣 is differentiable on 𝕄 , and its gradient vector field is given by either of the following formulas:
grad 𝐵_{𝑞,𝑣}(𝑝) = −lim_{𝑡→∞} log_𝑝(exp_𝑞(𝑡𝑣)) / 𝑑(𝑝, exp_𝑞(𝑡𝑣)) = −(1/‖𝑣‖) lim_{𝑡→∞} log_𝑝(exp_𝑞(𝑡𝑣)) / 𝑡, ∀𝑝 ∈ 𝕄.

(7)

In addition, the gradient vector field grad 𝐵_{𝑞,𝑣} is continuous and satisfies ‖grad 𝐵_{𝑞,𝑣}(𝑝)‖ = 1 for all 𝑝 ∈ 𝕄. In particular, at the base point, we have grad 𝐵_{𝑞,𝑣}(𝑞) = −𝑣/‖𝑣‖.
We remark that the lemma above establishes fundamental properties of the Busemann function that will be instrumental in our subsequent analysis. Since 𝐵_{𝑞,0}(𝑝) = 𝑑(𝑞,𝑝), we also obtain that 𝐵_{𝑞,0} is convex and continuously differentiable away from 𝑞, with the gradient vector field grad 𝐵_{𝑞,0} satisfying ‖grad 𝐵_{𝑞,0}(𝑝)‖ = 1, for all 𝑝 ≠ 𝑞. The following lemma, whose proof is straightforward and thus omitted, in particular implies that the Busemann function 𝐵_{𝑞,𝑣} is linear along the geodesic 𝑡 ↦ exp_𝑞(𝑡𝑣) that defines it.
Lemma 6.
Let 𝑞 ∈ 𝕄 be fixed and let 𝑢, 𝑣 ∈ 𝑇_𝑞𝕄. Then, 𝐵_{𝑞,−𝑣}(exp_𝑞(𝜏𝑣)) = 𝜏‖𝑣‖, for all 𝜏 ∈ ℝ. Consequently, the Busemann function 𝐵_{𝑞,𝑣} is unbounded both above and below.
Lemma 7.
Let 𝑞̄ ∈ 𝕄 and 𝑣̄ ∈ 𝑇_{𝑞̄}𝕄. Consider sequences (𝑞_𝑘)_{𝑘∈ℕ} ⊂ 𝕄 and (𝑣_𝑘)_{𝑘∈ℕ} with 𝑣_𝑘 ∈ 𝑇_{𝑞_𝑘}𝕄, satisfying lim_{𝑘→+∞} 𝑞_𝑘 = 𝑞̄ and lim_{𝑘→+∞} 𝑣_𝑘 = 𝑣̄. Then, lim_{𝑘→+∞} 𝐵_{𝑞_𝑘,𝑣_𝑘}(𝑝) = 𝐵_{𝑞̄,𝑣̄}(𝑝), for all 𝑝 ∈ 𝕄.
The next identity can be employed as a practical tool for computing the Busemann function.
Proposition 8.
Let 𝑞 ∈ 𝕄 be a base point and 𝑣 ∈ 𝑇 𝑞 𝕄 with 𝑣 ≠ 0 . Then, there holds
𝐵_{𝑞,𝑣}(𝑝) = lim_{𝑡→+∞} [𝑑²(𝑝, exp_𝑞(𝑡𝑣)) − (‖𝑣‖𝑡)²] / (2‖𝑣‖𝑡), ∀𝑝 ∈ 𝕄.
We conclude this section with an inequality comparing the Busemann function to the Riemannian pairing at the base point 𝑞 . It is a support-type bound along the ray in direction 𝑣 , becomes tight in the zero-curvature case, and will be used later to relate Busemann subgradients to classical subgradients.
Lemma 9.
Let 𝕄 be a Hadamard manifold. Then the Busemann function 𝐵 𝑞 , 𝑣 defined in (4), associated with a base point 𝑞 ∈ 𝕄 and a direction 𝑣 ∈ 𝑇 𝑞 𝕄 , satisfies the inequality
− ⟨ 𝑣 , log 𝑞 𝑝 ⟩ ≤ ‖ 𝑣 ‖ 𝐵 𝑞 , 𝑣 ( 𝑝 ) , ∀ 𝑝 ∈ 𝕄 .
(8)
Moreover, if the sectional curvature is 𝐾 ≡ 0 in the whole 𝕄, then inequality (8) holds as the equality ‖𝑣‖𝐵_{𝑞,𝑣}(𝑝) = −⟨𝑣, log_𝑞 𝑝⟩, for all 𝑝 ∈ 𝕄.
Proof.
It is immediate to conclude that (8) holds for 𝑣 = 0. Now, we assume that 𝑣 ≠ 0. It follows from Lemma 2, applied to the triangle with vertices 𝑥 = 𝑞, 𝑦 = 𝑝 and 𝑧 = exp_𝑞(𝑡𝑣), that

𝑑²(𝑞,𝑝) + 𝑑²(𝑞, exp_𝑞(𝑡𝑣)) − 2⟨log_𝑞 𝑝, log_𝑞 exp_𝑞(𝑡𝑣)⟩ ≤ 𝑑²(𝑝, exp_𝑞(𝑡𝑣)).

Since log_𝑞 exp_𝑞(𝑡𝑣) = 𝑡𝑣 and 𝑑(𝑞, exp_𝑞(𝑡𝑣)) = 𝑡‖𝑣‖, it follows from the last inequality that

𝑑²(𝑞,𝑝) − 2⟨𝑡𝑣, log_𝑞 𝑝⟩ ≤ 𝑑²(𝑝, exp_𝑞(𝑡𝑣)) − (‖𝑣‖𝑡)².

After some algebraic manipulations, the last inequality can be expressed as follows:

𝑑²(𝑞,𝑝)/(2𝑡) − ⟨𝑣, log_𝑞 𝑝⟩ ≤ ‖𝑣‖ [𝑑²(𝑝, exp_𝑞(𝑡𝑣)) − (‖𝑣‖𝑡)²] / (2‖𝑣‖𝑡).

By taking the limit as 𝑡 tends to +∞, and employing Proposition 8, we establish the inequality (8). ∎
In general, when the sectional curvature of 𝕄 is nonzero, the inequality in (8) cannot hold with equality. Indeed, the right-hand side of (8) is convex in 𝑝 by Lemma 5, whereas the function 𝑝 ↦ −⟨𝑣, log_𝑞 𝑝⟩ on the left-hand side is, in general, neither convex, concave, nor quasi-convex; see [28].
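Inequality (8) can be probed numerically on the hyperbolic plane, where the curvature is −1 and the inequality is typically strict. The sketch below (our own check, with our helper names) approximates the Busemann function by evaluating the expression in (4) at a large parameter value:

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])
ldot = lambda x, y: float(x @ J @ y)
lift = lambda u: np.array([u[0], u[1], np.sqrt(1.0 + np.dot(u, u))])
dist = lambda p, q: float(np.arccosh(max(1.0, -ldot(p, q))))

def log_map(p, q):
    d = dist(p, q)
    u = q - np.cosh(d) * p
    return d * u / np.sinh(d)

def exp_map(p, v):
    n = np.sqrt(ldot(v, v))
    return np.cosh(n) * p + np.sinh(n) * v / n

q = lift([0.0, 0.0])
v = np.array([1.0, 0.5, 0.0])             # tangent at q: <q, v> = 0
p = lift([1.2, -0.4])

nv = np.sqrt(ldot(v, v))
t = 30.0                                   # truncation of the limit in (4)
B = dist(p, exp_map(q, t * v)) - nv * t    # approximates B_{q,v}(p)

lhs = -ldot(v, log_map(q, p))
print(lhs, nv * B)                         # inequality (8): lhs <= nv * B
```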
3.1Examples of Busemann functions
In this section, we present explicit formulas and basic properties of Busemann functions in three canonical Hadamard settings, namely flat manifolds with zero sectional curvature, hyperbolic spaces of constant curvature 𝜅 < 0 , and the manifold of symmetric positive definite matrices endowed with the affine–invariant metric. We begin with the flat case, where the Busemann function is affine, proceed to the hyperbolic spaces, and conclude with the SPD manifold.
Example 1.
Let 𝕄 be a Hadamard manifold, 𝑞 ∈ 𝕄 a base point, and 𝑣 ∈ 𝑇_𝑞𝕄. Suppose the sectional curvature of 𝕄 is identically zero throughout the entire manifold, denoted 𝐾 ≡ 0. Then, the Busemann function 𝐵_{𝑞,𝑣} is given by

𝐵_{𝑞,𝑣}(𝑝) := −⟨𝑣/‖𝑣‖, log_𝑞 𝑝⟩ for 𝑣 ≠ 0, and 𝐵_{𝑞,0}(𝑝) := 𝑑(𝑞,𝑝).

(9)

In fact, if 𝐾 ≡ 0 in the whole 𝕄, then Lemma 9 gives ‖𝑣‖𝐵_{𝑞,𝑣}(𝑝) = −⟨𝑣, log_𝑞 𝑝⟩, for all 𝑝 ∈ 𝕄, which together with 𝐵_{𝑞,0}(𝑝) = 𝑑(𝑞,𝑝) implies (9). In particular, for 𝕄 = ℝⁿ we have log_𝑞 𝑝 = 𝑝 − 𝑞 and 𝑑(𝑞,𝑝) = ‖𝑝 − 𝑞‖, so Busemann functions are affine in this setting.
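The flat-case formula (9) can be confirmed directly from the definition (4): in ℝⁿ, evaluating the expression inside the limit at a large 𝑡 and comparing with −⟨𝑣/‖𝑣‖, 𝑝 − 𝑞⟩. A small sketch (example data ours):

```python
import numpy as np

q = np.array([1.0, -2.0, 0.5])
v = np.array([2.0, 1.0, -1.0])
p = np.array([-0.5, 3.0, 2.0])

def busemann_flat(t):
    """d(p, exp_q(t v)) - ||v|| t in R^n, where exp_q(t v) = q + t v."""
    return np.linalg.norm(p - (q + t * v)) - np.linalg.norm(v) * t

closed_form = -np.dot(v / np.linalg.norm(v), p - q)
print(busemann_flat(1e6), closed_form)   # agree in the limit t -> +infinity
```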
In the following example, we provide explicit formulas on a 𝜅-hyperbolic space form.
Example 2.
References to this example include [7, 13, 40]. For a given 𝜅 > 0, the 𝑛-dimensional 𝜅-hyperbolic space form and its tangent hyperplane at a point 𝑝 are denoted by

ℍ_𝜅ⁿ := {𝑝 ∈ ℝ^{𝑛+1} : ⟨𝑝,𝑝⟩ = −1/𝜅, 𝑝_{𝑛+1} > 0},  𝑇_𝑝ℍ_𝜅ⁿ := {𝑣 ∈ ℝ^{𝑛+1} : ⟨𝑝,𝑣⟩ = 0},

where ⟨⋅,⋅⟩ is the Lorentzian inner product ⟨𝑥,𝑦⟩ := 𝑥ᵀJ𝑦 and J := diag(1, …, 1, −1) ∈ ℝ^{(𝑛+1)×(𝑛+1)}. The intrinsic distance on the 𝜅-hyperbolic space form between two points 𝑝, 𝑞 ∈ ℍ_𝜅ⁿ is given by

𝑑_𝜅(𝑝,𝑞) := (1/√𝜅) arcosh(−𝜅⟨𝑝,𝑞⟩).

Let 𝑞 ∈ ℍ_𝜅ⁿ be a base point and 𝑣 ∈ 𝑇_𝑞ℍ_𝜅ⁿ. Then, the Busemann function 𝐵_{𝑞,𝑣} : ℍ_𝜅ⁿ → ℝ is given by

𝐵_{𝑞,𝑣}(𝑝) := (1/√𝜅) ln(−⟨𝑝, 𝜅𝑞 + √𝜅 𝑣/‖𝑣‖⟩) for 𝑣 ≠ 0, and 𝐵_{𝑞,0}(𝑝) := 𝑑_𝜅(𝑞,𝑝),

(10)

and its gradient, for 𝑣 ∈ 𝑇_𝑞ℍ_𝜅ⁿ with 𝑣 ≠ 0, is given by

grad 𝐵_{𝑞,𝑣}(𝑝) = (1/√𝜅) (1/⟨𝑝,𝑤⟩) (𝑤 + 𝜅⟨𝑤,𝑝⟩𝑝), where 𝑤 := 𝜅𝑞 + √𝜅 𝑣/‖𝑣‖.
For additional examples of Busemann functions, see [11, 14, 25].
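Formula (10) and the gradient expression above can be sanity-checked against the defining limit (4). The sketch below (a check of our own, with our helper names) works on ℍ²_𝜅 with 𝜅 = 2 and verifies that the closed form matches the truncated limit, and that the gradient is tangent at 𝑝 with unit Riemannian norm:

```python
import numpy as np

kappa = 2.0
J = np.diag([1.0, 1.0, -1.0])
ldot = lambda x, y: float(x @ J @ y)
lift = lambda u: np.array([u[0], u[1], np.sqrt(1.0 / kappa + np.dot(u, u))])
dist = lambda p, q: float(np.arccosh(max(1.0, -kappa * ldot(p, q)))) / np.sqrt(kappa)

def exp_map(p, v):
    n = np.sqrt(ldot(v, v))                # Lorentzian norm of the tangent vector
    sk = np.sqrt(kappa)
    return np.cosh(sk * n) * p + np.sinh(sk * n) * v / (sk * n)

q = lift([0.0, 0.0])                       # base point (0, 0, 1/sqrt(kappa))
v = np.array([0.6, -0.8, 0.0])             # tangent at q since <q, v> = 0
p = lift([0.9, 0.4])
u = v / np.sqrt(ldot(v, v))

# Closed form (10) and its gradient, with w := kappa*q + sqrt(kappa)*u.
w = kappa * q + np.sqrt(kappa) * u
B_closed = np.log(-ldot(p, w)) / np.sqrt(kappa)
grad = (w + kappa * ldot(w, p) * p) / (np.sqrt(kappa) * ldot(p, w))

# Defining limit (4), truncated at a large parameter value.
t = 25.0
B_limit = dist(p, exp_map(q, t * v)) - np.sqrt(ldot(v, v)) * t

print(B_closed, B_limit)            # agree to high accuracy
print(np.sqrt(ldot(grad, grad)))    # unit Riemannian norm
print(ldot(p, grad))                # tangency at p: vanishes
```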
4B-subdifferential on Hadamard manifold
In this section, we introduce a notion of subgradient adapted to the geometry of Busemann functions on Hadamard manifolds. This Busemann subdifferential (or B-subdifferential) essentially coincides with the construction in [24]; see also [19]. The motivation is that the usual linear support built from the logarithmic map need not be convex when curvature is nonzero, whereas a Busemann-based support inequality furnishes an appropriate convex lower bound. Our definition agrees with the classical subdifferential in the flat case, and we establish basic properties: inclusion into the classical subdifferential, a simple chain rule for nondecreasing convex scalars, and max-type constructions under an alignment condition. This framework will be used later for projection statements onto horospheres and for the analysis of a Busemann-based hybrid projection–proximal method.
The mapping on the right-hand side of (3) in Definition 4, namely
𝕄 ∋ 𝑝 ↦ 𝑓 ( 𝑞 ) + ⟨ 𝑠 , log 𝑞 𝑝 ⟩ ,
(11)
has often been treated as convex on Hadamard manifolds; as noted in [28], this unwarranted convexity assumption (generally false when the curvature is nonzero) has led to incorrect statements in the literature. If the sectional curvature of 𝕄 is identically zero, then (11) is linear, hence convex. When the curvature is nonzero, (11) is, in general, neither concave nor convex nor quasi-convex. To address this, we adopt a subdifferential based on the Busemann function.
Definition 10.
Let 𝑓 : 𝕄 → ℝ be a convex function and 𝑞 ∈ dom 𝑓. A vector 𝑠 ∈ 𝑇_𝑞𝕄 is said to be a 𝐵-subgradient of 𝑓 at 𝑞 if
𝑓 ( 𝑝 ) ≥ 𝑓 ( 𝑞 ) + ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝑝 ) , ∀ 𝑝 ∈ 𝕄 .
(12)
The set of all 𝐵 -subgradients of the function 𝑓 at 𝑞 is called the 𝐵 -subdifferential and is denoted by ∂ 𝑏 𝑓 ( 𝑞 ) .
By Example 1 and Definition 10, in the zero-curvature case one has ∂𝑓(𝑞) = ∂_𝑏𝑓(𝑞) for every 𝑞 ∈ 𝕄. Thus, Definition 10 agrees with the classical subdifferential on Hadamard manifolds as in Definition 4; see also [45, Definition 4.3, p. 73]. In particular, it reduces to the usual Euclidean subgradient in ℝⁿ. For general curvature, although ‖𝑣‖𝐵_{𝑞,𝑣}(𝑝) need not equal −⟨𝑣, log_𝑞 𝑝⟩, we will show that the inclusion ∂_𝑏𝑓(𝑞) ⊆ ∂𝑓(𝑞) holds on any Hadamard manifold. Moreover, Lemma 5 implies that, for fixed 𝑞 and 𝑠, the mapping 𝑝 ↦ 𝑓(𝑞) + ‖𝑠‖𝐵_{𝑞,−𝑠}(𝑝), 𝑝 ∈ 𝕄, is convex in 𝑝. In the following, we establish a basic property of the B-subdifferential; this also explains why, in the definition of the Busemann function (4), we allow the zero vector.
Proposition 11.
Let 𝑓 : 𝕄 → ℝ be convex. The point 𝑞 is a minimizer of 𝑓 if and only if 0 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) .
Proof.
Assume that 𝑞 is a minimizer of 𝑓. Then 𝑓(𝑝) ≥ 𝑓(𝑞) = 𝑓(𝑞) + ‖0‖𝐵_{𝑞,0}(𝑝), for all 𝑝 ∈ 𝕄. Therefore, 0 ∈ ∂_𝑏𝑓(𝑞). Conversely, assume that 0 ∈ ∂_𝑏𝑓(𝑞). Then Definition 10 implies that 𝑓(𝑝) ≥ 𝑓(𝑞) + ‖0‖𝐵_{𝑞,0}(𝑝) = 𝑓(𝑞), for all 𝑝 ∈ 𝕄. Therefore, 𝑞 is a minimizer of 𝑓. ∎
In the next result, we formally prove that the B-subdifferential introduced in Definition 10 is a subset of the usual subdifferential from Definition 4.
Proposition 12.
Let 𝕄 be a Hadamard manifold and let 𝑓 : 𝕄 → ℝ be a convex function. Then, for every 𝑞 ∈ int dom 𝑓 , the B-subdifferential of 𝑓 at 𝑞 is a subset of the classical subdifferential, i.e.,
∂ b 𝑓 ( 𝑞 ) ⊆ ∂ 𝑓 ( 𝑞 ) .
Proof.
Let 𝑠 ∈ ∂_𝑏𝑓(𝑞). Then, using (12), we have 𝑓(𝑝) ≥ 𝑓(𝑞) + ‖𝑠‖𝐵_{𝑞,−𝑠}(𝑝), for all 𝑝 ∈ 𝕄. On the other hand, Lemma 9 implies that ⟨𝑠, log_𝑞 𝑝⟩ ≤ ‖𝑠‖𝐵_{𝑞,−𝑠}(𝑝), for all 𝑝 ∈ 𝕄. Combining the two previous inequalities, we obtain 𝑓(𝑝) ≥ 𝑓(𝑞) + ⟨𝑠, log_𝑞 𝑝⟩, for all 𝑝 ∈ 𝕄. Therefore, by Definition 4, we conclude that 𝑠 ∈ ∂𝑓(𝑞), which proves the inclusion. ∎
Note that in the flat case, the 𝐵 -subdifferential coincides with the classical subdifferential. This observation will be useful later when comparing our results with those in Euclidean convex analysis.
Remark 2.
If the sectional curvature of 𝕄 is identically zero, denoted 𝐾 ≡ 0, then, by combining Definitions 4 and 10 with Example 1, we obtain that for every 𝑞 ∈ 𝕄 the 𝐵-subdifferential of 𝑓 at 𝑞 coincides with the classical subdifferential, that is, ∂_𝑏𝑓(𝑞) = ∂𝑓(𝑞).
As the next example shows, the inclusion in Proposition 12 may become an equality for certain classes of functions on any Hadamard manifold. These examples are special cases of constructions from [24] (see also [19]); we present them here because, in the Riemannian setting of Hadamard manifolds, the structure is more explicit, the geometry is clearer, and one can derive closed-form expressions for the subdifferential together with additional properties.
Example 3.
Fix 𝑞 ∈ 𝕄 and consider the distance function 𝑝 ↦ 𝑑_𝑞(𝑝) := 𝑑(𝑞,𝑝), where 𝑑 is the Riemannian distance in 𝕄. Then, the B-subdifferential of 𝑑_𝑞 is given by

∂_𝑏𝑑_𝑞(𝑝) := {𝑠 ∈ 𝑇_𝑞𝕄 : ‖𝑠‖ ≤ 1} if 𝑝 = 𝑞, and ∂_𝑏𝑑_𝑞(𝑝) := {−log_𝑝 𝑞 / 𝑑(𝑝,𝑞)} if 𝑝 ≠ 𝑞.

(13)
In fact, first assume that 𝑝 = 𝑞 and take any 𝑠 ∈ 𝑇_𝑞𝕄 with ‖𝑠‖ ≤ 1. It follows from (5) that |𝐵_{𝑞,−𝑠}(𝑝)| ≤ 𝑑_𝑞(𝑝), for every 𝑝 ∈ 𝕄. Hence, taking into account that 𝑑_𝑞(𝑞) = 0 and ‖𝑠‖ ≤ 1, we have

𝑑_𝑞(𝑝) − 𝑑_𝑞(𝑞) = 𝑑_𝑞(𝑝) ≥ |𝐵_{𝑞,−𝑠}(𝑝)| ≥ ‖𝑠‖𝐵_{𝑞,−𝑠}(𝑝), ∀𝑝 ∈ 𝕄.

Thus, it follows from Definition 10 that 𝑠 ∈ ∂_𝑏𝑑_𝑞(𝑞). Therefore, {𝑠 ∈ 𝑇_𝑞𝕄 : ‖𝑠‖ ≤ 1} ⊂ ∂_𝑏𝑑_𝑞(𝑞), and taking into account that the usual subdifferential is given by ∂𝑑_𝑞(𝑞) = {𝑠 ∈ 𝑇_𝑞𝕄 : ‖𝑠‖ ≤ 1}, see for example [33, Theorem 5.3], we conclude that ∂_𝑏𝑑_𝑞(𝑞) = {𝑠 ∈ 𝑇_𝑞𝕄 : ‖𝑠‖ ≤ 1}.
For every 𝑝 ≠ 𝑞, the Busemann subdifferential of 𝑑_𝑞 at 𝑝 is ∂_𝑏𝑑_𝑞(𝑝) = {−log_𝑝 𝑞 / 𝑑(𝑝,𝑞)}. Indeed, to simplify the notation, set 𝑣 := −(log_𝑝 𝑞)/𝑑(𝑝,𝑞). Since 𝐵_{𝑝,−𝑣} is Lipschitz continuous with constant 𝐿 = 1, we have

𝐵_{𝑝,−𝑣}(𝑥) − 𝐵_{𝑝,−𝑣}(𝑞) ≤ 𝑑_𝑞(𝑥), ∀𝑥 ∈ 𝕄.

(14)

On the other hand, since 𝑣 = −(log_𝑝 𝑞)/𝑑(𝑝,𝑞), we obtain 𝑞 = exp_𝑝(−𝑑(𝑝,𝑞)𝑣). Hence, since 𝑞 lies on the geodesic 𝑡 ↦ exp_𝑝(−𝑡𝑣), we have 𝑑(𝑞, exp_𝑝(−𝑡𝑣)) = |𝑡 − 𝑑(𝑝,𝑞)|. Thus, taking into account that ‖𝑣‖ = 1, we conclude that

𝐵_{𝑝,−𝑣}(𝑞) = lim_{𝑡→+∞} (𝑑(𝑞, exp_𝑝(−𝑡𝑣)) − ‖𝑣‖𝑡) = lim_{𝑡→+∞} (|𝑡 − 𝑑(𝑝,𝑞)| − 𝑡) = −𝑑_𝑞(𝑝).

Substituting the last equality into (14), we obtain

𝑑_𝑞(𝑥) ≥ 𝑑_𝑞(𝑝) + 𝐵_{𝑝,−𝑣}(𝑥), ∀𝑥 ∈ 𝕄.

Therefore, 𝑣 = −(log_𝑝 𝑞)/𝑑(𝑝,𝑞) ∈ ∂_𝑏𝑑_𝑞(𝑝). Since Proposition 12 implies that ∂_𝑏𝑑_𝑞(𝑝) ⊆ ∂𝑑_𝑞(𝑝), and the usual subdifferential is given by ∂𝑑_𝑞(𝑝) = {−log_𝑝 𝑞 / 𝑑(𝑝,𝑞)}, we conclude that ∂_𝑏𝑑_𝑞(𝑝) = {−log_𝑝 𝑞 / 𝑑(𝑝,𝑞)} and (13) holds.
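In the flat case, where 𝐵_{𝑝,−𝑣}(𝑥) = ⟨𝑣, 𝑥 − 𝑝⟩ for a unit vector 𝑣, formula (13) reduces to the familiar Euclidean subdifferential of the distance function. A quick check in ℝ² (example data ours):

```python
import numpy as np

q = np.array([1.0, 2.0])
d_q = lambda x: np.linalg.norm(x - q)

p = np.array([4.0, -2.0])                      # a point with p != q
v = (p - q) / np.linalg.norm(p - q)            # = -log_p q / d(p, q) in R^n

rng = np.random.default_rng(3)
for _ in range(200):
    x = rng.normal(scale=5.0, size=2)
    # B-subgradient inequality (12) at p, with ||v|| = 1
    assert d_q(x) >= d_q(p) + np.dot(v, x - p) - 1e-12

# At p = q, every s with ||s|| <= 1 is a B-subgradient:
for _ in range(200):
    s = rng.normal(size=2); s /= max(1.0, np.linalg.norm(s))
    x = rng.normal(scale=5.0, size=2)
    assert d_q(x) >= np.dot(s, x - q) - 1e-12
print("formula (13) verified in the flat case")
```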
Example 4.
Let 𝕄 be a Hadamard manifold, let 𝑞 ∈ 𝕄, and let 𝑣 ∈ 𝑇_𝑞𝕄 ∖ {0}. For a given 𝑝 ∈ 𝕄, the gradient vector 𝑠 := grad 𝐵_{𝑞,𝑣}(𝑝) is a 𝐵-subgradient of the Busemann function 𝐵_{𝑞,𝑣} at 𝑝, i.e.,
𝐵 𝑞 , 𝑣 ( 𝑥 ) ≥ 𝐵 𝑞 , 𝑣 ( 𝑝 ) + ‖ 𝑠 ‖ 𝐵 𝑝 , − 𝑠 ( 𝑥 ) , ∀ 𝑥 ∈ 𝕄 .
In fact, consider the geodesic ray 𝛾 ( 𝑡 ) = exp 𝑞 ( 𝑡 𝑣 ) . If 𝑝 = 𝑞 , then 𝐵 𝑞 , 𝑣 ( 𝑞 ) = 0 . By Remark 1, 𝐵 𝑞 , 𝑣 = 𝐵 𝑞 , 𝑣 / ‖ 𝑣 ‖ , so we may assume ‖ 𝑣 ‖ = 1 . In addition, Lemma 5 implies that grad 𝐵 𝑞 , 𝑣 ( 𝑞 ) = − 𝑣 / ‖ 𝑣 ‖ . Therefore, the subgradient inequality at 𝑞 holds with 𝑠 = − 𝑣 / ‖ 𝑣 ‖ . Hence − 𝑣 / ‖ 𝑣 ‖ ∈ ∂ 𝑏 𝐵 𝑞 , 𝑣 ( 𝑞 ) . Now, we assume that 𝑝 ≠ 𝑞 and apply Lemma 5 to define the vector
𝑠 := grad 𝐵 𝑞 , 𝑣 ( 𝑝 ) = − lim 𝑡 → ∞ log 𝑝 ( 𝛾 ( 𝑡 ) ) / 𝑑 ( 𝑝 , 𝛾 ( 𝑡 ) ) ∈ 𝑇 𝑝 𝕄 ,
and note that ‖ 𝑠 ‖ = 1 . For each real number 𝑡 > 0 introduce the unit vector
𝑤 ( 𝑡 ) := − log 𝑝 ( 𝛾 ( 𝑡 ) ) 𝑑 ( 𝑝 , 𝛾 ( 𝑡 ) ) ∈ 𝑇 𝑝 𝕄 .
Example 3 applied to the distance function 𝑥 ↦ 𝑑 ( 𝑥 , 𝛾 ( 𝑡 ) ) shows that 𝑤 ( 𝑡 ) lies in the Busemann subdifferential of that distance at the point 𝑝 . Consequently,
𝑑 ( 𝑥 , 𝛾 ( 𝑡 ) ) ≥ 𝑑 ( 𝑝 , 𝛾 ( 𝑡 ) ) + 𝐵 𝑝 , − 𝑤 ( 𝑡 ) ( 𝑥 ) , ∀ 𝑥 ∈ 𝕄 .
Subtracting the constant ∥ 𝑣 ∥ 𝑡 yields
𝑑 ( 𝑥 , 𝛾 ( 𝑡 ) ) − ‖ 𝑣 ‖ 𝑡 ≥ 𝑑 ( 𝑝 , 𝛾 ( 𝑡 ) ) − ‖ 𝑣 ‖ 𝑡 + 𝐵 𝑝 , − 𝑤 ( 𝑡 ) ( 𝑥 ) , ∀ 𝑥 ∈ 𝕄 .
(15)
Lemma 7 implies that lim 𝑡 → ∞ 𝐵 𝑝 , − 𝑤 ( 𝑡 ) ( 𝑥 ) = 𝐵 𝑝 , − 𝑠 ( 𝑥 ) for every 𝑥 ∈ 𝕄 . Therefore, taking the limit as 𝑡 goes to + ∞ in inequality (15) and using the definition of the Busemann function, we obtain
𝐵 𝑞 , 𝑣 ( 𝑥 ) ≥ 𝐵 𝑞 , 𝑣 ( 𝑝 ) + 𝐵 𝑝 , − 𝑠 ( 𝑥 ) , ∀ 𝑥 ∈ 𝕄 .
Since ‖ 𝑠 ‖ = 1 , multiplying the last term by ‖ 𝑠 ‖ does not change the expression, and the desired inequality follows. As a consequence, we have ∂ 𝑏 𝑓 ( 𝑝 ) = ∂ 𝑓 ( 𝑝 ) for all 𝑝 ∈ 𝕄 , where 𝑓 := 𝐵 𝑞 , 𝑣 .
Next, we define a class of functions that will be useful for presenting examples of 𝐵 -subdifferentiable functions and for simplifying subsequent constructions.
Definition 13.
A class of geodesically convex and 𝐵 -subdifferentiable functions { 𝑔 1 , … , 𝑔 𝑚 } with 𝑔 𝑗 : 𝕄 → ℝ is said to belong to the aligned 𝐵 -support class at 𝑞 ∈ 𝕄 if there exist a vector 𝑢 ∈ 𝑇 𝑞 𝕄 and scalars 𝛼 𝑗 ≥ 0 such that
𝑔 𝑗 ( 𝑝 ) ≥ 𝑔 𝑗 ( 𝑞 ) + 𝛼 𝑗 𝐵 𝑞 , − 𝑢 ( 𝑝 ) , ∀ 𝑝 ∈ 𝕄 , 𝑗 = 1 , … , 𝑚 .
(16)
We say that { 𝑔 1 , … , 𝑔 𝑚 } belongs to the aligned 𝐵 -support class on 𝕄 if (16) holds for every 𝑞 ∈ 𝕄 .
In the following, we present a canonical class of functions satisfying Definition 13.
Example 5.
Let ℎ : 𝕄 → ℝ be geodesically convex and 𝐵 -subdifferentiable, and let 𝜓 𝑗 : ℝ → ℝ be convex and nondecreasing for 𝑗 = 1 , … , 𝑚 . Define 𝑔 𝑗 : 𝕄 → ℝ by
𝑔 𝑗 ( 𝑝 ) := 𝑤 𝑗 𝜓 𝑗 ( ℎ ( 𝑝 ) ) + 𝑎 𝑗 , 𝑗 = 1 , … , 𝑚 ,
where 𝑤 𝑗 ≥ 0 and 𝑎 𝑗 ∈ ℝ . Then the class { 𝑔 1 , … , 𝑔 𝑚 } belongs to the aligned 𝐵 -support class on 𝕄 , i.e., it satisfies Definition 13. Indeed, fix 𝑞 ∈ 𝕄 and choose 𝑢 ∈ ∂ 𝑏 ℎ ( 𝑞 ) . By Definition 10,
ℎ ( 𝑝 ) − ℎ ( 𝑞 ) ≥ ‖ 𝑢 ‖ 𝐵 𝑞 , − 𝑢 ( 𝑝 ) ∀ 𝑝 ∈ 𝕄 .
(17)
Since 𝜓 𝑗 is nondecreasing, for any 𝜃 𝑗 ∈ ∂ 𝜓 𝑗 ( ℎ ( 𝑞 ) ) we have 𝜃 𝑗 ≥ 0 , and since 𝜓 𝑗 is convex we obtain
𝜓 𝑗 ( 𝑡 ) ≥ 𝜓 𝑗 ( ℎ ( 𝑞 ) ) + 𝜃 𝑗 ( 𝑡 − ℎ ( 𝑞 ) ) ∀ 𝑡 ∈ ℝ .
Applying this with 𝑡 = ℎ ( 𝑝 ) and combining with the 𝐵 -support inequality (17) for ℎ yields
𝑔 𝑗 ( 𝑝 ) − 𝑔 𝑗 ( 𝑞 ) = 𝑤 𝑗 ( 𝜓 𝑗 ( ℎ ( 𝑝 ) ) − 𝜓 𝑗 ( ℎ ( 𝑞 ) ) ) ≥ 𝑤 𝑗 𝜃 𝑗 ( ℎ ( 𝑝 ) − ℎ ( 𝑞 ) ) ≥ 𝑤 𝑗 𝜃 𝑗 ‖ 𝑢 ‖ 𝐵 𝑞 , − 𝑢 ( 𝑝 ) .
Thus (16) holds with the same direction 𝑢 for all 𝑗 and 𝛼 𝑗 := 𝑤 𝑗 𝜃 𝑗 ‖ 𝑢 ‖ ≥ 0 . This completes the argument.
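The construction above can be checked numerically in the flat case 𝕄 = ℝ 2 , where the Busemann function reduces to a linear functional, 𝐵 𝑞 , − 𝑢 ( 𝑝 ) = ⟨ 𝑢 , 𝑝 − 𝑞 ⟩ / ‖ 𝑢 ‖ . The base function ℎ , the weights 𝑤 𝑗 , 𝑎 𝑗 , and the choices of 𝜓 𝑗 below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euclidean Busemann function: B_{q,-u}(p) = <u, p-q>/||u||
def busemann_minus_u(q, u, p):
    return np.dot(u, p - q) / np.linalg.norm(u)

h = lambda p: np.linalg.norm(p)          # geodesically convex base function
q = np.array([1.0, 2.0])
u = q / np.linalg.norm(q)                # a subgradient of h at q, ||u|| = 1

# psi_j convex and nondecreasing, paired with a subgradient theta_j at h(q) > 0
psis = [(lambda t: max(t, 0.0), 1.0),    # hinge: theta = 1 since h(q) > 0
        (lambda t: t * t, 2 * h(q))]     # square: theta = 2 h(q)
w, a = [0.7, 1.3], [0.2, -0.5]

for p in rng.normal(size=(200, 2)):
    for j, (psi, theta) in enumerate(psis):
        g = lambda x: w[j] * psi(h(x)) + a[j]
        alpha = w[j] * theta * np.linalg.norm(u)   # alpha_j = w_j theta_j ||u||
        # aligned B-support inequality (16) with the common direction u
        assert g(p) >= g(q) + alpha * busemann_minus_u(q, u, p) - 1e-9
```

The same direction 𝑢 serves every 𝑔 𝑗 , which is exactly the alignment required by Definition 13.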
The next result is a chain rule for the 𝐵 -subdifferential under an alignment hypothesis, namely, the functions under consideration belong to the aligned 𝐵 -support class on 𝕄 in Definition 13.
Proposition 14.
Suppose 𝑔 1 , … , 𝑔 𝑚 : 𝕄 → ℝ are geodesically convex and 𝐵 -subdifferentiable. Assume that the family { 𝑔 1 , … , 𝑔 𝑚 } belongs to the aligned 𝐵 -support class at 𝑞 ∈ 𝕄 in the sense of Definition 13, with a fixed vector 𝑢 ∈ 𝑇 𝑞 𝕄 satisfying 𝑢 ≠ 0 and coefficients 𝛼 𝑗 ≥ 0 . Let Ω ⊂ ℝ 𝑚 be a convex set, and let Φ : Ω → ℝ be convex and nondecreasing in each coordinate. Assume that 𝕄 ∋ 𝑝 ↦ ( 𝑔 1 ( 𝑝 ) , … , 𝑔 𝑚 ( 𝑝 ) ) ∈ Ω and define
𝑓 ( 𝑝 ) := Φ ( 𝑔 1 ( 𝑝 ) , … , 𝑔 𝑚 ( 𝑝 ) ) , ∀ 𝑝 ∈ 𝕄 .
(18)
Then, for every ( 𝑤 1 , … , 𝑤 𝑚 ) ∈ ∂ Φ ( ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) ) , there holds
( ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 ) 𝑢 / ‖ 𝑢 ‖ ∈ ∂ 𝑏 𝑓 ( 𝑞 ) .
(19)
In particular, 𝑓 is 𝐵 -subdifferentiable at 𝑞 , and
∂ 𝑏 𝑓 ( 𝑞 ) ⊇ { ( ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 ) 𝑢 / ‖ 𝑢 ‖ : ( 𝑤 1 , … , 𝑤 𝑚 ) ∈ ∂ Φ ( ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) ) } .
Proof.
Take 𝑤 = ( 𝑤 1 , … , 𝑤 𝑚 ) ∈ ∂ Φ ( ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) ) . By the subgradient inequality for the convex function Φ at the point 𝑥 = ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) , we have
Φ ( 𝑦 ) ≥ Φ ( 𝑥 ) + ∑ 𝑗 = 1 𝑚 𝑤 𝑗 ( 𝑦 𝑗 − 𝑥 𝑗 ) , ∀ 𝑦 ∈ ℝ 𝑚 .
Choosing 𝑦 𝑗 = 𝑔 𝑗 ( 𝑝 ) and using the definition of 𝑓 in (18), we conclude that
𝑓 ( 𝑝 ) − 𝑓 ( 𝑞 ) ≥ ∑ 𝑗 = 1 𝑚 𝑤 𝑗 ( 𝑔 𝑗 ( 𝑝 ) − 𝑔 𝑗 ( 𝑞 ) ) , ∀ 𝑝 ∈ 𝕄 .
(20)
Because Φ is nondecreasing in each coordinate, every subgradient is coordinatewise nonnegative. Thus, by the aligned 𝐵 -support hypothesis at 𝑞 in Definition 13, for all 𝑝 ∈ 𝕄 and all 𝑗 , we have 𝑔 𝑗 ( 𝑝 ) − 𝑔 𝑗 ( 𝑞 ) ≥ 𝛼 𝑗 𝐵 𝑞 , − 𝑢 ( 𝑝 ) . Combining with (20) gives
𝑓 ( 𝑝 ) − 𝑓 ( 𝑞 ) ≥ ( ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 ) 𝐵 𝑞 , − 𝑢 ( 𝑝 ) , ∀ 𝑝 ∈ 𝕄 .
Define the vector
𝑠 := ( ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 ) 𝑢 / ‖ 𝑢 ‖ ∈ 𝑇 𝑞 𝕄 .
Then ‖ 𝑠 ‖ = ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 , and by the scale invariance of the Busemann function in Remark 1, we have 𝐵 𝑞 , − 𝑠 = 𝐵 𝑞 , − 𝑢 . Therefore,
𝑓 ( 𝑝 ) − 𝑓 ( 𝑞 ) ≥ ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝑝 ) ∀ 𝑝 ∈ 𝕄 ,
which is exactly 𝑠 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) by Definition 10. This proves (19). The set-inclusion statement follows immediately by ranging over all 𝑤 ∈ ∂ Φ ( ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) ) , which concludes the proof. ∎
In the following, we obtain, as an application of Proposition 14, the chain rule stated in [24, Proposition 3.4]; see also [19, Proposition 2(vii)].
Corollary 15.
Let 𝕄 be a Hadamard manifold, let 𝑔 : 𝕄 → ℝ be geodesically convex and 𝐵 -subdifferentiable, and let 𝜑 : ℝ → ℝ be convex and nondecreasing. Define 𝑓 := 𝜑 ∘ 𝑔 . Then 𝑓 is 𝐵 -subdifferentiable. More precisely, for any 𝑞 ∈ 𝕄 , if 𝑠 ∈ ∂ 𝑏 𝑔 ( 𝑞 ) and 𝜁 ∈ ∂ 𝜑 ( 𝑔 ( 𝑞 ) ) , hence 𝜁 ≥ 0 , then there holds 𝜁 𝑠 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) .
Proof.
Fix 𝑞 ∈ 𝕄 . If 𝑠 = 0 , then 𝜁 𝑠 = 0 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) trivially. Assume 𝑠 ≠ 0 . By Definition 10,
𝑔 ( 𝑝 ) − 𝑔 ( 𝑞 ) ≥ ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝑝 ) ( ∀ 𝑝 ∈ 𝕄 ) .
Set 𝑢 := 𝑠 . By Remark 1 we have 𝐵 𝑞 , − 𝑢 = 𝐵 𝑞 , − 𝑠 . Thus 𝑔 ( 𝑝 ) − 𝑔 ( 𝑞 ) ≥ ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑢 ( 𝑝 ) . This is (16) with 𝑚 = 1 , 𝛼 1 := ‖ 𝑠 ‖ and the fixed vector 𝑢 = 𝑠 . Applying Proposition 14 with Φ = 𝜑 , so that 𝑤 = 𝜁 ∈ ∂ 𝜑 ( 𝑔 ( 𝑞 ) ) and 𝜁 ≥ 0 , we obtain
( 𝜁 𝛼 1 ) 𝑢 / ‖ 𝑢 ‖ = 𝜁 ‖ 𝑠 ‖ 𝑠 / ‖ 𝑠 ‖ = 𝜁 𝑠 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) ,
which is the desired inclusion. ∎
As a consequence, the next example collects standard applications of the chain rule in Corollary 15; see [24, Example 3.11].
Example 6.
Let 𝕄 be a Hadamard manifold, 𝑐 ≥ 0 , 𝜏 ≥ 1 , 𝑎 ∈ ℝ and fix 𝑞 ∈ 𝕄 . Set
𝑔 ( 𝑝 ) := 𝑑 𝑞 ( 𝑝 ) := 𝑑 ( 𝑝 , 𝑞 ) , 𝜑 ( 𝑡 ) := 𝑐 𝑡^{ 𝜏 } + 𝑎 , 𝑓 := 𝜑 ∘ 𝑔 .
Then 𝑓 is 𝐵 -subdifferentiable on 𝕄 , with 0 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) and
𝑐 𝜏 𝑑 𝑞 ( 𝑝 )^{ 𝜏 − 1 } ( − log 𝑝 𝑞 / 𝑑 𝑞 ( 𝑝 ) ) ∈ ∂ 𝑏 𝑓 ( 𝑝 ) , for 𝑝 ≠ 𝑞 .
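In the Euclidean specialization 𝕄 = ℝ 𝑛 , where − log 𝑝 𝑞 / 𝑑 𝑞 ( 𝑝 ) = ( 𝑝 − 𝑞 ) / ‖ 𝑝 − 𝑞 ‖ , this 𝐵 -subgradient is the ordinary gradient of 𝑐 ‖ 𝑝 − 𝑞 ‖^{ 𝜏 } + 𝑎 . The following sketch (illustrative constants 𝑐 , 𝜏 , 𝑎 and points chosen by us) compares it with a numerical gradient and verifies the convex subgradient inequality.

```python
import numpy as np

c, tau, a = 0.8, 3.0, 1.5
q = np.array([0.5, -1.0, 2.0])
f = lambda p: c * np.linalg.norm(p - q) ** tau + a   # f = phi(d_q(p))

def b_subgradient(p):
    # c*tau*d_q(p)^(tau-1) * (-log_p q / d_q(p)) in R^n
    d = np.linalg.norm(p - q)
    return c * tau * d ** (tau - 1) * (p - q) / d

p = np.array([1.0, 1.0, -0.5])
s = b_subgradient(p)
# compare with a central-difference numerical gradient of f at p
eps = 1e-6
num = np.array([(f(p + eps * e) - f(p - eps * e)) / (2 * eps) for e in np.eye(3)])
assert np.allclose(s, num, atol=1e-4)
# Euclidean subgradient inequality: f(x) >= f(p) + <s, x - p>
for x in np.random.default_rng(1).normal(size=(100, 3)):
    assert f(x) >= f(p) + s @ (x - p) - 1e-9
```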
Next, we list standard choices of Φ and verify that each is convex and coordinatewise nondecreasing. These include separable convex sums built from a convex nondecreasing scalar 𝜓 , and the log-sum-exp (soft-max) aggregator. Consequently, whenever the inputs admit a common aligned 𝐵 -support, Proposition 14 applies: the composite Φ ∘ ( 𝑔 1 , … , 𝑔 𝑚 ) is convex and its 𝐵 -subdifferential admits an explicit description along the aligned direction.
Example 7.
Let us give common choices of Φ satisfying the hypothesis of Proposition 14.
(i)
Pointwise maximum: Φ ( 𝑥 ) := max 1 ≤ 𝑖 ≤ 𝑚 𝑥 𝑖 . The convexity of Φ follows from the fact that the pointwise maximum of affine maps is convex. To see that Φ is monotone, note that if 𝑥 ≤ 𝑦 coordinatewise, then max 𝑖 𝑥 𝑖 ≤ max 𝑖 𝑦 𝑖 . The subdifferential of Φ at 𝑥 ∈ ℝ 𝑚 is
∂ Φ ( 𝑥 ) = { 𝑤 = ( 𝑤 1 , … , 𝑤 𝑚 ) ∈ ℝ 𝑚 : 𝑤 𝑖 ≥ 0 , ∑ 𝑖 = 1 𝑚 𝑤 𝑖 = 1 , supp ( 𝑤 ) ⊆ arg max 𝑖 𝑥 𝑖 } ,
where supp ( 𝑤 ) := { 𝑖 ∈ { 1 , … , 𝑚 } : 𝑤 𝑖 ≠ 0 } .
(ii)
Separable convex nondecreasing aggregator: Let 𝜓 : ℝ → ℝ be convex and nondecreasing, and set Φ ( 𝑥 ) := ∑ 𝑗 = 1 𝑚 𝜓 ( 𝑥 𝑗 ) . The convexity of Φ follows from the convexity of each term in the sum. For monotonicity, if 𝑥 ≤ 𝑦 coordinatewise, then 𝜓 ( 𝑥 𝑗 ) ≤ 𝜓 ( 𝑦 𝑗 ) for every 𝑗 , hence Φ ( 𝑥 ) ≤ Φ ( 𝑦 ) . The subdifferential at 𝑥 ∈ ℝ 𝑚 is
∂ Φ ( 𝑥 ) = { 𝜃 = ( 𝜃 1 , … , 𝜃 𝑚 ) : 𝜃 𝑗 ∈ ∂ 𝜓 ( 𝑥 𝑗 ) for all 𝑗 } ,
and, because 𝜓 is nondecreasing, each 𝜃 𝑗 ≥ 0 , so every 𝑤 ∈ ∂ Φ ( 𝑥 ) is coordinatewise nonnegative.
(iii)
Log-sum-exp (soft-max): for 𝜏 > 0 set Φ 𝜏 ( 𝑥 ) := 𝜏 log ( ∑ 𝑖 = 1 𝑚 𝑒^{ 𝑥 𝑖 / 𝜏 } ) . The convexity of Φ 𝜏 is the classical convexity of the log-sum-exp function, which follows from Hölder's inequality. For monotonicity, if 𝑥 ≤ 𝑦 coordinatewise then 𝑒^{ 𝑥 𝑖 / 𝜏 } ≤ 𝑒^{ 𝑦 𝑖 / 𝜏 } for each 𝑖 , hence Φ 𝜏 ( 𝑥 ) ≤ Φ 𝜏 ( 𝑦 ) . The gradient, hence a subgradient, at 𝑥 ∈ ℝ 𝑚 is
Φ 𝜏 ′ ( 𝑥 ) = ( 𝑒^{ 𝑥 𝑖 / 𝜏 } / ∑ 𝑗 = 1 𝑚 𝑒^{ 𝑥 𝑗 / 𝜏 } ) 𝑖 = 1 , … , 𝑚 ∈ ℝ 𝑚 ,
which is coordinatewise nonnegative and sums to 1 . Thus, Φ 𝜏 ′ ( 𝑥 ) ∈ ∂ Φ 𝜏 ( 𝑥 ) .
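A brief sketch below verifies the stated properties of Φ 𝜏 numerically (the stabilized evaluation via subtracting the maximum is our implementation choice, not part of the paper): the gradient is the softmax weight vector, which is nonnegative, sums to 1 , and matches a numerical gradient, and Φ 𝜏 upper-bounds the pointwise maximum by at most 𝜏 log 𝑚 .

```python
import numpy as np

def lse(x, tau):
    # Phi_tau(x) = tau * log(sum_i exp(x_i / tau)), computed stably
    z = x / tau
    m = z.max()
    return tau * (m + np.log(np.exp(z - m).sum()))

def lse_grad(x, tau):
    # softmax weights: the gradient of Phi_tau
    z = x / tau
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
x = rng.normal(size=5)
g = lse_grad(x, 0.3)
assert np.all(g >= 0) and abs(g.sum() - 1) < 1e-12   # nonnegative, sums to 1
eps = 1e-6
num = np.array([(lse(x + eps*e, 0.3) - lse(x - eps*e, 0.3)) / (2*eps)
                for e in np.eye(5)])
assert np.allclose(g, num, atol=1e-5)                # matches numerical gradient
# soft-max bounds: max(x) <= Phi_tau(x) <= max(x) + tau*log(m)
assert x.max() <= lse(x, 0.3) <= x.max() + 0.3 * np.log(5)
```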
(iv)
ℓ 𝑟 -norm on the nonnegative orthant: For 𝑟 ∈ [ 1 , ∞ ) and Ω := { 𝑥 ∈ ℝ 𝑚 : 𝑥 𝑖 ≥ 0 } , set
Φ 𝑟 ( 𝑥 ) := ( ∑ 𝑖 = 1 𝑚 𝑥 𝑖^{ 𝑟 } )^{ 1 / 𝑟 } for 𝑥 ∈ Ω .
The convexity of Φ 𝑟 holds for all 𝑟 ≥ 1 (it is the ℓ 𝑟 -norm); restricting to Ω preserves convexity. For monotonicity on Ω , if 𝑥 ≤ 𝑦 coordinatewise then 𝑥 𝑖^{ 𝑟 } ≤ 𝑦 𝑖^{ 𝑟 } , hence Φ 𝑟 ( 𝑥 ) ≤ Φ 𝑟 ( 𝑦 ) . For 𝑟 > 1 and 𝑥 ≠ 0 ,
Φ 𝑟 ′ ( 𝑥 ) = ( 𝑥 𝑖^{ 𝑟 − 1 } / ( ∑ 𝑗 = 1 𝑚 𝑥 𝑗^{ 𝑟 } )^{ ( 𝑟 − 1 ) / 𝑟 } ) 𝑖 = 1 , … , 𝑚 ∈ ℝ 𝑚 .
Thus, Φ 𝑟 ′ ( 𝑥 ) ∈ ∂ Φ 𝑟 ( 𝑥 ) and is coordinatewise nonnegative on Ω .
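The gradient formula for Φ 𝑟 can likewise be checked against a numerical gradient in the interior of the nonnegative orthant (the sample point and the choice 𝑟 = 3 are ours):

```python
import numpy as np

def phi_r(x, r):
    # l_r aggregator on the nonnegative orthant
    return (x ** r).sum() ** (1.0 / r)

def phi_r_grad(x, r):
    # Phi_r'(x)_i = x_i^(r-1) / (sum_j x_j^r)^((r-1)/r)
    s = (x ** r).sum()
    return x ** (r - 1) / s ** ((r - 1) / r)

rng = np.random.default_rng(3)
x = rng.uniform(0.5, 2.0, size=4)       # interior point, all coordinates > 0
r = 3.0
g = phi_r_grad(x, r)
assert np.all(g >= 0)                   # coordinatewise nonnegative on Omega
eps = 1e-6
num = np.array([(phi_r(x + eps*e, r) - phi_r(x - eps*e, r)) / (2*eps)
                for e in np.eye(4)])
assert np.allclose(g, num, atol=1e-6)   # matches a numerical gradient
```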
In Example 7 we listed choices of Φ that satisfy the conditions of Proposition 14. Combining this with Example 5 yields a Busemann-driven class covering logistic-regression/soft-max losses [18]. Because Busemann functions act, in a suitable sense, like linear forms on Hadamard manifolds [2], log-sum-exp compositions of Busemann functions, motivated by information-theoretic and numerical aspects of log-sum-exp [12, 37] and by hyperbolic prototype learning [27], provide smooth convex surrogates of max-type models that remain 𝐵 -subdifferentiable and fit the alignment framework of Proposition 14.
Example 8.
Fix 𝑞 0 ∈ 𝕄 and a nonzero 𝑣 0 ∈ 𝑇 𝑞 0 𝕄 , and let 𝐵 𝑞 0 , 𝑣 0 denote the associated Busemann function. For 𝑗 = 1 , … , 𝑚 define
𝑔 𝑗 ( 𝑝 ) := 𝑤 𝑗 𝜓 𝑗 ( 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) ) + 𝑎 𝑗 ,
where 𝑤 𝑗 ≥ 0 , 𝑎 𝑗 ∈ ℝ , and each 𝜓 𝑗 : ℝ → ℝ is convex and nondecreasing. Since 𝐵 𝑞 0 , 𝑣 0 is 𝐵 -subdifferentiable, Example 5 ensures that { 𝑔 1 , … , 𝑔 𝑚 } belongs to the aligned 𝐵 -support class at every 𝑞 ∈ 𝕄 with common direction 𝑢 ( 𝑞 ) := grad 𝐵 𝑞 0 , 𝑣 0 ( 𝑞 ) ∈ 𝑇 𝑞 𝕄 , and coefficients 𝛼 𝑗 ( 𝑞 ) ∈ 𝑤 𝑗 ∂ 𝜓 𝑗 ( 𝐵 𝑞 0 , 𝑣 0 ( 𝑞 ) ) ⊂ [ 0 , ∞ ) . Now take any Φ from Example 7 and set
𝑓 ( 𝑝 ) := Φ ( 𝑔 1 ( 𝑝 ) , … , 𝑔 𝑚 ( 𝑝 ) ) , 𝑝 ∈ 𝕄 .
By Proposition 14, the function 𝑓 is 𝐵 -subdifferentiable at every 𝑞 ∈ 𝕄 and, for every vector ( 𝑤 1 , … , 𝑤 𝑚 ) ∈ ∂ Φ ( 𝑔 1 ( 𝑞 ) , … , 𝑔 𝑚 ( 𝑞 ) ) , one has
( ∑ 𝑗 = 1 𝑚 𝑤 𝑗 𝛼 𝑗 ( 𝑞 ) ) 𝑢 ( 𝑞 ) / ‖ 𝑢 ( 𝑞 ) ‖ ∈ ∂ 𝑏 𝑓 ( 𝑞 ) .
In the following, we state a max rule saying that the pointwise maximum of a geodesically convex and B-subdifferentiable family, continuous in the index, is B-subdifferentiable. At each point, the B-subdifferential contains those of the active indices. This corresponds to [24, Remark 3.9].
Proposition 16.
Let 𝕄 be a Hadamard manifold with the geodesic extension property, and let 𝐴 be compact and nonempty. For each 𝑎 ∈ 𝐴 , let 𝑔 𝑎 : 𝕄 → ℝ be geodesically convex. Assume in addition that the map 𝐴 × 𝕄 ∋ ( 𝑎 , 𝑝 ) ↦ 𝑔 𝑎 ( 𝑝 ) is continuous. Define 𝑓 : 𝕄 → ℝ by
𝑓 ( 𝑝 ) := max { 𝑔 𝑎 ( 𝑝 ) : 𝑎 ∈ 𝐴 } , 𝑝 ∈ 𝕄 .
(21)
Moreover, assume that each 𝑔 𝑎 is B-subdifferentiable at every point. Then 𝑓 is B-subdifferentiable at every 𝑞 ∈ 𝕄 . In addition, for 𝑞 ∈ 𝕄 , letting 𝐴 ( 𝑞 ) = arg max 𝑎 ∈ 𝐴 𝑔 𝑎 ( 𝑞 ) , there holds
∂ 𝑏 𝑓 ( 𝑞 ) ⊇ ⋃ 𝑎 ∈ 𝐴 ( 𝑞 ) ∂ 𝑏 𝑔 𝑎 ( 𝑞 ) .
(22)
Remark 3.
Geodesic convexity of the components 𝑔 𝑎 in (21) does not, in general, ensure B-subdifferentiability of 𝑓 . The B-support inequality is strictly stronger than the usual convex subgradient inequality. Thus, the assumption that each 𝑔 𝑎 is B-subdifferentiable is the natural, and essentially minimal, transferable condition for the maximum.
We now illustrate Proposition 16 with minimum–enclosing–ball models and an additively weighted cover; for more details, see [3].
Example 9.
Let 𝕄 be a Hadamard manifold and let 𝐴 ⊂ 𝕄 be finite. For a convex nondecreasing 𝜑 : [ 0 , ∞ ) → ℝ define
𝑓 𝜑 ( 𝑝 ) := max 𝑎 ∈ 𝐴 𝜑 ( 𝑑 ( 𝑝 , 𝑎 ) ) , 𝑝 ∈ 𝕄 .
This fits Proposition 16 with 𝑔 𝑎 ( 𝑝 ) = 𝜑 ( 𝑑 ( 𝑝 , 𝑎 ) ) , hence 𝑓 𝜑 is B-subdifferentiable and its B-subdifferential is determined by the active indices.
Linear radius
With 𝜑 ( 𝑡 ) = 𝑡 we obtain 𝑓 𝜑 ( 𝑝 ) = max 𝑎 ∈ 𝐴 𝑑 ( 𝑝 , 𝑎 ) . Any minimizer 𝑝 ∗ is the center of the smallest closed geodesic ball containing 𝐴 (the circumcenter), and it is unique; see [14, Proposition II.2.7]. The value 𝑓 𝜑 ( 𝑝 ∗ ) is the circumradius; see [1] for modeling uses.
Squared radius
With 𝜑 ( 𝑡 ) = 𝑡^{ 2 } we obtain 𝑓 𝜑 ( 𝑝 ) = max 𝑎 ∈ 𝐴 𝑑 ( 𝑝 , 𝑎 )^{ 2 } . The function 𝑓 𝜑 is strongly convex, so the minimizer is unique and equals the center of the minimal enclosing geodesic ball, with 𝑓 𝜑 ( 𝑝 ∗ ) equal to the squared circumradius; cf. [4, Example 2.2.18].
Additively weighted cover
Let now 𝐴 ⊂ 𝕄 be compact and let 𝜌 : 𝐴 → ℝ + be continuous. Define 𝑓 𝜌 ( 𝑝 ) := max 𝑎 ∈ 𝐴 ( 𝑑 ( 𝑝 , 𝑎 ) + 𝜌 ( 𝑎 ) ) , 𝑝 ∈ 𝕄 . The minimizer 𝑝 ∗ is the center of the smallest geodesic ball covering 𝐴 𝜌 = ⋃ 𝑎 ∈ 𝐴 𝐵 𝜌 ( 𝑎 ) ( 𝑎 ) . Indeed, for 𝑟 ≥ 0 the inclusion 𝐴 𝜌 ⊂ 𝐵 ( 𝑝 , 𝑟 ) holds if and only if 𝑑 ( 𝑝 , 𝑎 ) + 𝜌 ( 𝑎 ) ≤ 𝑟 for all 𝑎 ∈ 𝐴 , that is, iff 𝑓 𝜌 ( 𝑝 ) ≤ 𝑟 . This is a particular case of Proposition 16 with 𝑔 𝑎 ( 𝑝 ) = 𝜑 𝑎 ( 𝑑 ( 𝑝 , 𝑎 ) ) and 𝜑 𝑎 ( 𝑡 ) = 𝑡 + 𝜌 ( 𝑎 ) , which is convex and nondecreasing in 𝑡 . Hence 𝑓 𝜌 is B-subdifferentiable on 𝕄 .
For more examples and properties of the Busemann subdifferential, see [24] and [19]. Definition 10 extends the Euclidean subdifferential in a way that respects the manifold geometry. However, it is not closed under addition: the sum of two functions that are 𝐵 -subdifferentiable at a point may fail to be 𝐵 -subdifferentiable there. In particular, on general Hadamard manifolds the 𝐵 -subdifferential of a sum can be empty even when each term is convex and continuously differentiable. Thus the Euclidean rule ∂ 𝑏 ℎ ( 𝑞 ) = { ℎ ′ ( 𝑞 ) } need not hold; see Examples 10 and 11 below.
Example 10.
Using the notation of Example 2, fix 𝑞 ∈ ℍ 1 𝑛 and choose unit vectors 𝑣 1 , 𝑣 2 ∈ 𝑇 𝑞 ℍ 1 𝑛 with 𝑣 1 ≠ ± 𝑣 2 . Set 𝛼 := ⟨ 𝑣 1 , 𝑣 2 ⟩ with − 1 < 𝛼 < 1 . Define the geodesically convex 𝐶 1 function ℎ : ℍ 1 𝑛 → ℝ given by the sum of two Busemann functions, ℎ ( 𝑝 ) = 𝐵 𝑞 , 𝑣 1 ( 𝑝 ) + 𝐵 𝑞 , 𝑣 2 ( 𝑝 ) , 𝑝 ∈ ℍ 1 𝑛 . Then ∂ 𝑏 ℎ ( 𝑞 ) = ∅ . Indeed, for any unit 𝑢 ∈ 𝑇 𝑞 ℍ 1 𝑛 consider the geodesic
𝛾 𝑢 ( 𝑡 ) = exp 𝑞 ( 𝑡 𝑢 ) = cosh ( 𝑡 ) 𝑞 + sinh ( 𝑡 ) 𝑢 , 𝑡 ∈ ℝ .
By (10), we have 𝐵 𝑞 , 𝑢 ( 𝛾 𝑢 ( 𝑡 ) ) = − 𝑡 . Since ℎ ( 𝛾 𝑣 1 ( 𝑡 ) ) = 𝐵 𝑞 , 𝑣 1 ( 𝛾 𝑣 1 ( 𝑡 ) ) + 𝐵 𝑞 , 𝑣 2 ( 𝛾 𝑣 1 ( 𝑡 ) ) , we obtain
ℎ ( 𝛾 𝑣 1 ( 𝑡 ) ) = − 𝑡 + ln ( cosh 𝑡 − 𝛼 sinh 𝑡 ) = ln ( ( 1 − 𝛼 ) / 2 + ( ( 1 + 𝛼 ) / 2 ) 𝑒^{ − 2 𝑡 } ) ≤ 0 ,
(23)
for every 𝑡 ≥ 0 , because 𝛼 ∈ [ − 1 , 1 ] . Similarly, we obtain that
ℎ ( 𝛾 𝑣 2 ( 𝑡 ) ) = ln ( ( 1 − 𝛼 ) / 2 + ( ( 1 + 𝛼 ) / 2 ) 𝑒^{ − 2 𝑡 } ) ≤ 0 .
(24)
Assume, for contradiction, that 𝑠 ∈ ∂ 𝑏 ℎ ( 𝑞 ) . First, consider the case 𝑠 = 0 . Then we have ℎ ( 𝑥 ) ≥ ℎ ( 𝑞 ) for all 𝑥 ∈ ℍ 1 𝑛 . But ℎ ( 𝑞 ) = 0 and (23) gives ℎ ( 𝛾 𝑣 1 ( 𝑡 ) ) < 0 for 𝑡 > 0 , a contradiction.
Now, we assume that 𝑠 ≠ 0 . Let 𝑢 := 𝑠 / ‖ 𝑠 ‖ and set 𝛽 1 := ⟨ 𝑣 1 , 𝑢 ⟩ ∈ [ − 1 , 1 ] . Using (10) along 𝛾 𝑣 1 for 𝑡 ≥ 0 ,
𝐵 𝑞 , − 𝑠 ( 𝛾 𝑣 1 ( 𝑡 ) ) = ln ( cosh ( 𝑡 ) + 𝛽 1 sinh ( 𝑡 ) ) = 𝑡 + ln ( ( 1 + 𝛽 1 ) / 2 + ( ( 1 − 𝛽 1 ) / 2 ) 𝑒^{ − 2 𝑡 } ) .
(25)
If 𝛽 1 ≠ − 1 , then for all 𝑡 ≥ 0 we have ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝛾 𝑣 1 ( 𝑡 ) ) ≥ ‖ 𝑠 ‖ 𝑡 + ‖ 𝑠 ‖ ln ( ( 1 + 𝛽 1 ) / 2 ) . Combining with (23), the defining inequality ℎ ( 𝛾 𝑣 1 ( 𝑡 ) ) ≥ ℎ ( 𝑞 ) + ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝛾 𝑣 1 ( 𝑡 ) ) fails for all sufficiently large 𝑡 , a contradiction.
Now, we consider the case 𝛽 1 = − 1 (i.e., 𝑢 = − 𝑣 1 ). To obtain a contradiction, consider instead the ray 𝛾 𝑣 2 . Let
𝛽 2 = ⟨ 𝑣 2 , 𝑢 ⟩ = ⟨ 𝑣 2 , − 𝑣 1 ⟩ = − 𝛼 ∈ ( − 1 , 1 ) .
Using a formula similar to (25), for 𝑡 ≥ 0 we obtain
‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝛾 𝑣 2 ( 𝑡 ) ) = ‖ 𝑠 ‖ 𝑡 + ‖ 𝑠 ‖ ln ( ( 1 + 𝛽 2 ) / 2 + ( ( 1 − 𝛽 2 ) / 2 ) 𝑒^{ − 2 𝑡 } ) ≥ ‖ 𝑠 ‖ 𝑡 + ‖ 𝑠 ‖ ln ( ( 1 + 𝛽 2 ) / 2 ) .
Combining with (24), the defining inequality ℎ ( 𝛾 𝑣 2 ( 𝑡 ) ) ≥ ℎ ( 𝑞 ) + ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑠 ( 𝛾 𝑣 2 ( 𝑡 ) ) also fails for all sufficiently large 𝑡 , a contradiction. This proves that ∂ 𝑏 ℎ ( 𝑞 ) = ∅ .
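The identity (23) underlying this example can be checked numerically in the hyperboloid model of ℍ 2 ⊂ ℝ^{ 1 , 2 } . We use the closed form 𝐵 𝑞 , 𝑣 ( 𝑝 ) = ln ( − ⟨ 𝑝 , 𝑞 + 𝑣 ⟩ ) with the Minkowski bilinear form, a standard formula for the hyperboloid model that we take as an assumption here (it reproduces (10) along geodesics through 𝑞 ).

```python
import math

def mink(x, y):
    # Minkowski bilinear form <x,y> = -x0*y0 + x1*y1 + x2*y2
    return -x[0]*y[0] + x[1]*y[1] + x[2]*y[2]

q  = (1.0, 0.0, 0.0)                  # base point on the hyperboloid
v1 = (0.0, 1.0, 0.0)                  # unit tangent vectors at q with <v1,v2> = alpha
alpha = 0.4
v2 = (0.0, alpha, math.sqrt(1 - alpha**2))

def busemann(v, p):
    # assumed closed form: B_{q,v}(p) = ln(-<p, q+v>)
    return math.log(-mink(p, tuple(qi + vi for qi, vi in zip(q, v))))

def gamma(u, t):
    # geodesic gamma_u(t) = cosh(t) q + sinh(t) u
    return tuple(math.cosh(t)*qi + math.sinh(t)*ui for qi, ui in zip(q, u))

for t in [0.0, 0.5, 1.0, 3.0, 8.0]:
    p = gamma(v1, t)
    h = busemann(v1, p) + busemann(v2, p)          # h = B_{q,v1} + B_{q,v2}
    rhs = math.log((1 - alpha)/2 + ((1 + alpha)/2) * math.exp(-2*t))
    assert abs(h - rhs) < 1e-9 and h <= 1e-12      # matches (23), nonpositive
```

So along 𝛾 𝑣 1 the sum ℎ is strictly negative for 𝑡 > 0 , which is the fact driving the emptiness of ∂ 𝑏 ℎ ( 𝑞 ) .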
On a Hadamard manifold, each distance map 𝑝 ↦ 𝑑 ( 𝑝 , 𝑥 ) is geodesically convex and 𝐵 -subdifferentiable at every point. Example 10 shows that a sum of two Busemann functions in non-opposite directions can have an empty 𝐵 -subdifferential at 𝑞 ; the next example shows the same for sums of distances: each summand has a 𝐵 -subgradient at 𝑞 , yet the sum may have none, so the sum rule fails.
Example 11.
Using the notation of Example 2, fix 𝑞 ∈ ℍ 1 𝑛 and choose unit vectors 𝑣 1 , 𝑣 2 ∈ 𝑇 𝑞 ℍ 1 𝑛 with 𝑣 1 ≠ ± 𝑣 2 and set 𝛼 := ⟨ 𝑣 1 , 𝑣 2 ⟩ ∈ ( − 1 , 1 ) . Let 𝜃 := arccos ( 𝛼 ) ∈ ( 0 , 𝜋 ) . For each 𝑡 > 0 and 𝑖 = 1 , 2 define
𝑎 𝑖 ( 𝑡 ) := exp 𝑞 ( 𝑡 𝑣 𝑖 ) , 𝑓 𝑡 ( 𝑝 ) := 𝑑 ( 𝑝 , 𝑎 1 ( 𝑡 ) ) + 𝑑 ( 𝑝 , 𝑎 2 ( 𝑡 ) ) , 𝑝 ∈ ℍ 1 𝑛 .
Then there exists 𝑇 > 0 such that for every 𝑡 ≥ 𝑇 one has ∂ 𝑏 𝑓 𝑡 ( 𝑞 ) = ∅ , whereas
∂ 𝑏 𝑑 ( ⋅ , 𝑎 1 ( 𝑡 ) ) ( 𝑞 ) + ∂ 𝑏 𝑑 ( ⋅ , 𝑎 2 ( 𝑡 ) ) ( 𝑞 ) = { − ( 𝑣 1 + 𝑣 2 ) } ≠ ∅ .
Indeed, first note that ∂ 𝑏 𝑑 ( ⋅ , 𝑎 1 ( 𝑡 ) ) ( 𝑞 ) = { − 𝑣 1 } for 𝑞 ≠ 𝑎 1 ( 𝑡 ) and ∂ 𝑏 𝑑 ( ⋅ , 𝑎 2 ( 𝑡 ) ) ( 𝑞 ) = { − 𝑣 2 } for 𝑞 ≠ 𝑎 2 ( 𝑡 ) , hence
∂ 𝑏 𝑑 ( ⋅ , 𝑎 1 ( 𝑡 ) ) ( 𝑞 ) + ∂ 𝑏 𝑑 ( ⋅ , 𝑎 2 ( 𝑡 ) ) ( 𝑞 ) = { − ( 𝑣 1 + 𝑣 2 ) } ≠ ∅ , ∀ 𝑡 > 0 .
To simplify the notation, define for 𝑖 ∈ { 1 , 2 } and 𝑡 > 0 the auxiliary functions
𝜙 𝑖 , 𝑡 ( 𝑝 ) := 𝑑 ( 𝑝 , 𝑎 𝑖 ( 𝑡 ) ) − 𝑡 , 𝑔 𝑡 ( 𝑝 ) := 𝜙 1 , 𝑡 ( 𝑝 ) + 𝜙 2 , 𝑡 ( 𝑝 ) = 𝑓 𝑡 ( 𝑝 ) − 2 𝑡 .
(26)
Evaluating at 𝑝 = 𝑎 𝑖 ( 𝑡 ) gives
𝜙 𝑖 , 𝑡 ( 𝑎 𝑖 ( 𝑡 ) ) = | 𝑡 − 𝑡 | − 𝑡 = − 𝑡 .
(27)
Using Lemma 1, for 𝑗 ≠ 𝑖 , the hyperbolic law of cosines applied to the triangle with vertices 𝑞 , 𝑎 𝑖 ( 𝑡 ) , 𝑎 𝑗 ( 𝑡 ) yields
cosh 𝑑 ( 𝑎 𝑖 ( 𝑡 ) , 𝑎 𝑗 ( 𝑡 ) ) = cosh^{ 2 } 𝑡 − 𝛼 sinh^{ 2 } 𝑡 = ( 1 / 2 ) ( 𝐴 ( 𝑡 ) 𝑒^{ 𝑡 } + 𝐵 ( 𝑡 ) 𝑒^{ − 𝑡 } ) ,
(28)
where 𝐴 ( 𝑡 ) := cosh 𝑡 − 𝛼 sinh 𝑡 and 𝐵 ( 𝑡 ) := cosh 𝑡 + 𝛼 sinh 𝑡 . Since 𝑑 ( ⋅ , ⋅ ) ≥ 0 , we have cosh 𝑑 ( ⋅ , ⋅ ) ≥ 1 . Using the elementary estimate arcosh 𝑦 ≤ ln ( 2 𝑦 ) for 𝑦 ≥ 1 , it follows from (26)–(28) that
𝜙 𝑗 , 𝑡 ( 𝑎 𝑖 ( 𝑡 ) ) = arcosh ( ( 1 / 2 ) ( 𝐴 ( 𝑡 ) 𝑒^{ 𝑡 } + 𝐵 ( 𝑡 ) 𝑒^{ − 𝑡 } ) ) − 𝑡 ≤ ln ( 𝐴 ( 𝑡 ) 𝑒^{ 𝑡 } + 𝐵 ( 𝑡 ) 𝑒^{ − 𝑡 } ) − 𝑡 = ln ( 𝐴 ( 𝑡 ) + 𝐵 ( 𝑡 ) 𝑒^{ − 2 𝑡 } ) .
(29)
Combining (27) and (29), we get the upper bound
𝑔 𝑡 ( 𝑎 𝑖 ( 𝑡 ) ) ≤ ln 𝑅 𝑡 , 𝑅 𝑡 := ( 𝐴 ( 𝑡 ) + 𝐵 ( 𝑡 ) 𝑒^{ − 2 𝑡 } ) 𝑒^{ − 𝑡 } = ( 1 − 𝛼 ) / 2 + ( 1 + 𝛼 ) 𝑒^{ − 2 𝑡 } + ( ( 1 − 𝛼 ) / 2 ) 𝑒^{ − 4 𝑡 } .
(30)
Since 𝛼 ∈ ( − 1 , 1 ) , we have lim 𝑡 → ∞ 𝑅 𝑡 = ( 1 − 𝛼 ) / 2 < 1 ; hence there exist 𝑇 1 > 0 and 𝑅 ∈ ( 0 , 1 ) such that 𝑅 𝑡 ≤ 𝑅 < 1 for all 𝑡 ≥ 𝑇 1 . Therefore, using (30) we conclude that
𝑔 𝑡 ( 𝑎 𝑖 ( 𝑡 ) ) ≤ ln 𝑅 < 0 , ∀ 𝑡 ≥ 𝑇 1 , 𝑖 ∈ { 1 , 2 } .
(31)
Next, for any unit 𝑢 ∈ 𝑇 𝑞 ℍ 1 𝑛 set 𝛽 𝑖 ( 𝑢 ) := ⟨ 𝑢 , 𝑣 𝑖 ⟩ for 𝑖 = 1 , 2 and 𝛽 ∗ ( 𝑢 ) := max { 𝛽 1 ( 𝑢 ) , 𝛽 2 ( 𝑢 ) } . We claim
𝛽 ∗ ( 𝑢 ) ≥ − cos ( 𝜃 / 2 ) =: 𝛽 ∗ > − 1 .
(32)
Indeed, ‖ 𝑣 1 + 𝑣 2 ‖^{ 2 } = 2 ( 1 + 𝛼 ) = 4 cos^{ 2 } ( 𝜃 / 2 ) , so ‖ 𝑣 1 + 𝑣 2 ‖ = 2 cos ( 𝜃 / 2 ) , where 𝜃 is the angle between 𝑣 1 and 𝑣 2 . If 𝛽 1 ( 𝑢 ) < − cos ( 𝜃 / 2 ) and 𝛽 2 ( 𝑢 ) < − cos ( 𝜃 / 2 ) , then ⟨ 𝑢 , 𝑣 1 + 𝑣 2 ⟩ < − 2 cos ( 𝜃 / 2 ) , contradicting Cauchy–Schwarz: ⟨ 𝑢 , 𝑣 1 + 𝑣 2 ⟩ ≥ − ‖ 𝑣 1 + 𝑣 2 ‖ = − 2 cos ( 𝜃 / 2 ) . Hence, equality in (32) holds for 𝑢 ∗ = − ( 𝑣 1 + 𝑣 2 ) / ‖ 𝑣 1 + 𝑣 2 ‖ .
Using the explicit Busemann formula along rays (10), we have
𝐵 𝑞 , − 𝑢 ( 𝛾 𝑣 𝑖 ( 𝑡 ) ) = ln ( cosh 𝑡 + 𝛽 𝑖 ( 𝑢 ) sinh 𝑡 ) ≥ 𝑡 + ln ( ( 1 + 𝛽 𝑖 ( 𝑢 ) ) / 2 ) .
For each 𝑢 , choose 𝑖 = 𝑖 ( 𝑢 ) ∈ { 1 , 2 } attaining 𝛽 ∗ ( 𝑢 ) . By (32), there is a constant 𝑐 ∗ := ln ( ( 1 + 𝛽 ∗ ) / 2 ) ∈ ℝ such that
𝐵 𝑞 , − 𝑢 ( 𝛾 𝑣 𝑖 ( 𝑢 ) ( 𝑡 ) ) ≥ 𝑡 + 𝑐 ∗ .
Choose 𝑇 2 so large that 𝑡 + 𝑐 ∗ > 0 for all 𝑡 ≥ 𝑇 2 . Finally, suppose 𝑠 ∈ ∂ 𝑏 𝑓 𝑡 ( 𝑞 ) with 𝑡 ≥ 𝑇 := max { 𝑇 1 , 𝑇 2 } . Let 𝑢 be any unit vector if 𝑠 = 0 , and 𝑢 := 𝑠 / ‖ 𝑠 ‖ otherwise. Evaluating the supporting inequality at 𝑥 = 𝛾 𝑣 𝑖 ( 𝑢 ) ( 𝑡 ) and using 𝑔 𝑡 ( 𝑞 ) = 0 ,
𝑔 𝑡 ( 𝛾 𝑣 𝑖 ( 𝑢 ) ( 𝑡 ) ) ≥ ‖ 𝑠 ‖ 𝐵 𝑞 , − 𝑢 ( 𝛾 𝑣 𝑖 ( 𝑢 ) ( 𝑡 ) ) ≥ 0 ,
which contradicts (31). Hence ∂ 𝑏 𝑓 𝑡 ( 𝑞 ) = ∅ for all 𝑡 ≥ 𝑇 .
Although Examples 10 and 11 appear negative at first sight (the 𝐵 -subdifferential at 𝑞 is empty), they actually uncover a genuinely non-Euclidean effect on negatively curved Hadamard manifolds. Even for geodesically convex 𝐶 1 summands (e.g., Busemann or distance terms), the Euclidean heuristic
∂ 𝑏 ( 𝑓 + 𝑔 ) ( 𝑞 ) = ∂ 𝑏 𝑓 ( 𝑞 ) + ∂ 𝑏 𝑔 ( 𝑞 )
may fail so severely that ∂ 𝑏 ( 𝑓 + 𝑔 ) ( 𝑞 ) = ∅ while each ∂ 𝑏 𝑓 ( 𝑞 ) and ∂ 𝑏 𝑔 ( 𝑞 ) is nonempty. In flat spaces, the linear structure guarantees robust sum rules for convex subdifferentials without any “alignment” requirement; curvature breaks precisely this feature. The examples therefore delineate the limits of Euclidean intuition for 𝐵 -subdifferential calculus and, at the same time, point to the additional hypothesis that restores positive results in curved settings, namely, the existence of a common supporting direction at 𝑞 in the sense of Definition 13.
5Projection onto horospheres in Hadamard manifolds
In this section, we present the geometric foundations underlying the projection step in the Busemann proximal point algorithm. We begin by recalling the definitions of horospheres and horoballs on Hadamard manifolds, which arise as level and sublevel sets of Busemann functions and generalize the notions of hyperplanes and halfspaces in Euclidean space. We then derive a closed-form expression for the projection onto a horosphere, an essential component of our algorithm, and highlight its correspondence with the classical Euclidean case.
Definition 17.
Let 𝐵 𝑞 , 𝑣 be the Busemann function associated with 𝑞 ∈ 𝕄 and 𝑣 ∈ 𝑇 𝑞 𝕄 ∖ { 0 } . For a given 𝑐 ∈ ℝ , the horosphere centered at the asymptotic direction determined by the geodesic ray 𝛾 ( 𝑡 ) = exp 𝑞 ( 𝑡 𝑣 ) is the level set
𝒮 𝑞 , 𝑣 ( 𝑐 ) ≔ { 𝑥 ∈ 𝕄 : 𝐵 𝑞 , 𝑣 ( 𝑥 ) = 𝑐 } .
The corresponding horoball is the sublevel set
ℋ 𝑞 , 𝑣 ( 𝑐 ) ≔ { 𝑥 ∈ 𝕄 : 𝐵 𝑞 , 𝑣 ( 𝑥 ) ≤ 𝑐 } .
The following lemma shows that a geodesically convex function with unit-norm gradient increases linearly along its gradient flow, implying alignment between the gradient vector field and the geodesic velocity.
Lemma 18.
Let 𝐹 : 𝕄 → ℝ be a 𝐶 1 geodesically convex function, and suppose that its gradient vector field satisfies ‖ grad 𝐹 ( 𝑥 ) ‖ = 1 for all 𝑥 ∈ 𝕄 . Fix a point 𝑝 ∈ 𝕄 , and consider the geodesic
𝛾 ( 𝑡 ) := exp 𝑝 ( 𝑡 grad 𝐹 ( 𝑝 ) ) , 𝑡 ∈ ℝ .
(33)
Then the function 𝐹 increases linearly along 𝛾 , that is, 𝐹 ( 𝛾 ( 𝑡 ) ) = 𝐹 ( 𝑝 ) + 𝑡 for all 𝑡 ∈ ℝ . As a consequence, for all 𝑡 ∈ ℝ , we have
⟨ grad 𝐹 ( 𝛾 ( 𝑡 ) ) , 𝛾 ′ ( 𝑡 ) ⟩ = 1 ,
(34)
and hence the gradient vector field is aligned with the tangent vector of the geodesic in (33), namely grad 𝐹 ( 𝛾 ( 𝑡 ) ) = 𝛾 ′ ( 𝑡 ) for all 𝑡 ∈ ℝ .
Proof.
Set 𝜓 ( 𝑡 ) := 𝐹 ( 𝛾 ( 𝑡 ) ) . By geodesic convexity of 𝐹 , the function 𝜓 is convex on ℝ . By the chain rule, we have
𝜓 ′ ( 𝑡 ) = ⟨ grad 𝐹 ( 𝛾 ( 𝑡 ) ) , 𝛾 ′ ( 𝑡 ) ⟩ , 𝑡 ∈ ℝ .
(35)
Since 𝛾 ′ ( 0 ) = grad 𝐹 ( 𝑝 ) and ‖ grad 𝐹 ( 𝑝 ) ‖ = 1 , we have 𝜓 ′ ( 0 ) = 1 . Convexity of 𝜓 then yields
𝜓 ( 𝑡 ) ≥ 𝜓 ( 0 ) + 𝑡 𝜓 ′ ( 0 ) = 𝐹 ( 𝑝 ) + 𝑡 , 𝑡 ∈ ℝ .
(36)
On the other hand, geodesics have constant speed, hence ‖ 𝛾 ′ ( 𝑡 ) ‖ = ‖ 𝛾 ′ ( 0 ) ‖ = ‖ grad 𝐹 ( 𝑝 ) ‖ = 1 for all 𝑡 ∈ ℝ . Using ‖ grad 𝐹 ( ⋅ ) ‖ = 1 and Cauchy–Schwarz in (35) gives
𝜓 ′ ( 𝑡 ) = ⟨ grad 𝐹 ( 𝛾 ( 𝑡 ) ) , 𝛾 ′ ( 𝑡 ) ⟩ ≤ 1 , 𝑡 ∈ ℝ .
Integrating, we obtain 𝜓 ( 𝑡 ) = 𝐹 ( 𝛾 ( 𝑡 ) ) ≤ 𝐹 ( 𝑝 ) + 𝑡 for all 𝑡 ≥ 0 ; for 𝑡 < 0 , the bound 𝜓 ′ ≤ 1 forces the average slope of 𝜓 on [ 𝑡 , 0 ] to be at most 1 , while (36) forces it to be at least 1 . Together with (36), this implies 𝜓 ( 𝑡 ) = 𝐹 ( 𝑝 ) + 𝑡 for all 𝑡 ∈ ℝ , proving the linear growth. Consequently, 𝜓 ′ ( 𝑡 ) = 1 for all 𝑡 ∈ ℝ , and (35) gives (34). Since both vectors in (34) have unit norm, it follows that grad 𝐹 ( 𝛾 ( 𝑡 ) ) = 𝛾 ′ ( 𝑡 ) for all 𝑡 ∈ ℝ . ∎
An explicit formula for the distance to a horosphere in the hyperbolic setting appears in [21]. The next lemma provides a model-independent projection formula on general Hadamard manifolds, included here for completeness. The proof follows the gradient flow of the Busemann function and clarifies the link with classical Euclidean orthogonal projection, used repeatedly below.
Lemma 19.
Let 𝑞 ∈ 𝕄 and 𝑣 ∈ 𝑇 𝑞 𝕄 ∖ { 0 } . Let 𝑝 ∈ 𝕄 and assume that 𝑝 ∉ ℋ 𝑞 , 𝑣 ( 𝑐 ) , i.e., 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 > 0 . Then the projection of 𝑝 onto the horosphere 𝒮 𝑞 , 𝑣 ( 𝑐 ) is given by
𝒫 𝒮 𝑞 , 𝑣 ( 𝑐 ) ( 𝑝 ) = exp 𝑝 ( − ( 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 ) grad ( 𝐵 𝑞 , 𝑣 ) ( 𝑝 ) ) .
(37)
As a consequence, 𝑑 ( 𝑝 , ℋ 𝑞 , 𝑣 ( 𝑐 ) ) = 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) = 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 .
Proof.
It follows from Lemma 7 that the Busemann function 𝐵 𝑞 , 𝑣 is smooth, its gradient vector field satisfies ‖ grad ( 𝐵 𝑞 , 𝑣 ) ( 𝑥 ) ‖ = 1 for all 𝑥 ∈ 𝕄 , and, in addition, 𝐵 𝑞 , 𝑣 is Lipschitz continuous with constant 𝐿 = 1 , i.e., it satisfies (6). In particular, for any 𝑦 ∈ 𝒮 𝑞 , 𝑣 ( 𝑐 ) we have 𝐵 𝑞 , 𝑣 ( 𝑦 ) = 𝑐 . Since, by continuity of 𝐵 𝑞 , 𝑣 , the horosphere 𝒮 𝑞 , 𝑣 ( 𝑐 ) is the boundary of ℋ 𝑞 , 𝑣 ( 𝑐 ) , and 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 > 0 , inequality (6) yields
𝛽 := 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 ≤ min 𝑦 ∈ ℋ 𝑞 , 𝑣 ( 𝑐 ) 𝑑 ( 𝑝 , 𝑦 ) ≤ 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) .
(38)
Define the geodesic 𝛾 : ℝ → 𝕄 by 𝛾 ( 𝑡 ) := exp 𝑝 ( 𝑡 grad 𝐵 𝑞 , 𝑣 ( 𝑝 ) ) . Since ‖ grad 𝐵 𝑞 , 𝑣 ( 𝑥 ) ‖ = 1 for all 𝑥 ∈ 𝕄 , applying Lemma 18 with 𝐹 := 𝐵 𝑞 , 𝑣 we obtain the identity 𝐵 𝑞 , 𝑣 ( 𝛾 ( 𝑡 ) ) = 𝐵 𝑞 , 𝑣 ( 𝑝 ) + 𝑡 . Setting 𝑡 = − 𝛽 gives
𝐵 𝑞 , 𝑣 ( 𝛾 ( − 𝛽 ) ) = 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝛽 = 𝑐 ;
thus 𝛾 ( − 𝛽 ) ∈ 𝒮 𝑞 , 𝑣 ( 𝑐 ) and, taking into account that ‖ grad ( 𝐵 𝑞 , 𝑣 ) ( 𝑥 ) ‖ = 1 , the geodesic segment from 𝑝 to 𝛾 ( − 𝛽 ) has length 𝛽 . Therefore, 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) ≤ 𝛽 . Combining this with (38) gives 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) = 𝛽 . Thus, the point 𝛾 ( − 𝛽 ) realizes the minimum geodesic distance from 𝑝 to the horosphere. Hence, the projection is explicitly given by
𝒫 𝒮 𝑞 , 𝑣 ( 𝑐 ) ( 𝑝 ) = 𝛾 ( − 𝛽 ) = exp 𝑝 ( − ( 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 ) grad 𝐵 𝑞 , 𝑣 ( 𝑝 ) ) .
This minimizer is unique: since 𝑝 lies outside the horoball ℋ 𝑞 , 𝑣 ( 𝑐 ) , which is closed and geodesically convex as a sublevel set of a convex function, the projection of 𝑝 onto ℋ 𝑞 , 𝑣 ( 𝑐 ) is unique and lies on its boundary 𝒮 𝑞 , 𝑣 ( 𝑐 ) .
We proceed to prove the last equalities. Since 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 > 0 and ‖ grad ( 𝐵 𝑞 , 𝑣 ) ( 𝑥 ) ‖ = 1 , using (37) we obtain 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) = 𝐵 𝑞 , 𝑣 ( 𝑝 ) − 𝑐 . Since 𝒮 𝑞 , 𝑣 ( 𝑐 ) ⊂ ℋ 𝑞 , 𝑣 ( 𝑐 ) and, for 𝐵 𝑞 , 𝑣 ( 𝑝 ) > 𝑐 , the closest point in the horoball lies on its boundary, we obtain 𝑑 ( 𝑝 , ℋ 𝑞 , 𝑣 ( 𝑐 ) ) = 𝑑 ( 𝑝 , 𝒮 𝑞 , 𝑣 ( 𝑐 ) ) . This completes the proof. ∎
Next, we show that Lemma 19 aligns perfectly with the classical Euclidean picture and, in ℝ 𝑛 , reduces to the standard orthogonal projection onto an affine hyperplane.
Example 12.
Let the Hadamard manifold 𝕄 be the Euclidean space ℝ 𝑛 with the standard inner product. Then exp 𝑝 ( 𝑤 ) = 𝑝 + 𝑤 , log 𝑝 ( 𝑞 ) = 𝑞 − 𝑝 , and 𝑑 ( 𝑝 , 𝑞 ) = ‖ 𝑝 − 𝑞 ‖ . In addition, by using Example 1, we conclude that
𝐵 𝑞 , 𝑣 ( 𝑝 ) = − ( 1 / ‖ 𝑣 ‖ ) ⟨ 𝑣 , 𝑝 − 𝑞 ⟩ , grad ( 𝐵 𝑞 , 𝑣 ) ( 𝑝 ) = − 𝑣 / ‖ 𝑣 ‖ .
(39)
Using Definition 17 together with (39), the horosphere of level 𝑐 ∈ ℝ is the Euclidean hyperplane orthogonal to 𝑣 :
𝒮 𝑞 , 𝑣 ( 𝑐 ) = { 𝑥 ∈ ℝ 𝑛 : 𝐵 𝑞 , 𝑣 ( 𝑥 ) = 𝑐 } = { 𝑥 ∈ ℝ 𝑛 : ⟨ 𝑣 , 𝑥 − 𝑞 ⟩ = − 𝑐 ‖ 𝑣 ‖ } .
(40)
Applying formula (37) in Lemma 19 with (39), the projection of 𝑝 onto 𝒮 𝑞 , 𝑣 ( 𝑐 ) is given by
𝒫 𝒮 𝑞 , 𝑣 ( 𝑐 ) ( 𝑝 ) = 𝑝 − ( − ( 1 / ‖ 𝑣 ‖ ) ⟨ 𝑣 , 𝑝 − 𝑞 ⟩ − 𝑐 ) ( − 𝑣 / ‖ 𝑣 ‖ ) = 𝑝 − ( ( ⟨ 𝑣 , 𝑝 − 𝑞 ⟩ + 𝑐 ‖ 𝑣 ‖ ) / ‖ 𝑣 ‖^{ 2 } ) 𝑣 .
(41)
Therefore, the formula for 𝒫 𝒮 𝑞 , 𝑣 ( 𝑐 ) ( 𝑝 ) in (41) coincides with the classical orthogonal projection onto the affine hyperplane { 𝑥 : ⟨ 𝑣 , 𝑥 ⟩ = ⟨ 𝑣 , 𝑞 ⟩ − 𝑐 ‖ 𝑣 ‖ } . Hence, in the Euclidean case, the general Riemannian projection formula of Lemma 19 reduces to the standard orthogonal projection, confirming consistency with the familiar theory in ℝ 𝑛 .
Now, motivated by related constructions where horospherical fronts induce hinge-type penalties and geodesically convex decision boundaries, see, e.g., [21], we construct a class of functions that satisfy Definition 13.
Example 13.
Fix 𝑞 0 ∈ 𝕄 and 𝑣 0 ∈ 𝑇 𝑞 0 𝕄 ∖ { 0 } , and let 𝐵 𝑞 0 , 𝑣 0 be the associated Busemann function. For levels 𝑐 1 , … , 𝑐 𝑚 ∈ ℝ , weights 𝑤 𝑗 ≥ 0 , and constants 𝑎 𝑗 ∈ ℝ , define the horoballs
ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ≔ { 𝑥 ∈ 𝕄 : 𝐵 𝑞 0 , 𝑣 0 ( 𝑥 ) ≤ 𝑐 𝑗 } , 𝑗 = 1 , … , 𝑚 .
First note that, for every 𝑗 and every 𝑝 ∈ 𝕄 , we have
𝑑 ( 𝑝 , ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ) = 𝜓 𝑗 ( 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) ) , 𝜓 𝑗 ( 𝑡 ) := max { 0 , 𝑡 − 𝑐 𝑗 } .
(42)
Indeed, if 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) ≤ 𝑐 𝑗 , then 𝑝 ∈ ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) and 𝑑 ( 𝑝 , ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ) = 0 = max { 0 , 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) − 𝑐 𝑗 } . Assume now 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) > 𝑐 𝑗 . Let 𝒮 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) := { 𝑥 ∈ 𝕄 : 𝐵 𝑞 0 , 𝑣 0 ( 𝑥 ) = 𝑐 𝑗 } be the horosphere of level 𝑐 𝑗 . By Lemma 19, we have
𝑑 ( 𝑝 , ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ) = 𝑑 ( 𝑝 , 𝒮 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ) = 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) − 𝑐 𝑗 = max { 0 , 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) − 𝑐 𝑗 } .
This completes the proof of (42). For each 𝑗 , using (42) define
𝑔 𝑗 ( 𝑝 ) ≔ 𝑤 𝑗 𝑑 ( 𝑝 , ℋ 𝑞 0 , 𝑣 0 ( 𝑐 𝑗 ) ) + 𝑎 𝑗 = 𝑤 𝑗 𝜓 𝑗 ( 𝐵 𝑞 0 , 𝑣 0 ( 𝑝 ) ) + 𝑎 𝑗 .
(43)
Then, arguing as in Example 8, we conclude that the class { 𝑔 1 , … , 𝑔 𝑚 } satisfies Definition 13.
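The hinge identity (42) is easy to confirm in the Euclidean case, where the horoball { 𝐵 ≤ 𝑐 } is the halfspace ⟨ 𝑣 0 , 𝑥 − 𝑞 0 ⟩ ≥ − 𝑐 ‖ 𝑣 0 ‖ and the distance to it is the standard point-to-halfspace distance (the random data below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
q0, v0 = rng.normal(size=3), rng.normal(size=3)
B = lambda x: -np.dot(v0, x - q0) / np.linalg.norm(v0)   # Euclidean Busemann (39)

def dist_to_horoball(p, c):
    # in R^n the horoball {B <= c} is the halfspace <v0, x - q0> >= -c||v0||;
    # the distance to it is the standard point-to-halfspace distance
    slack = np.dot(v0, p - q0) + c * np.linalg.norm(v0)
    return max(0.0, -slack) / np.linalg.norm(v0)

for c in [-1.0, 0.0, 2.0]:
    for p in rng.normal(scale=3.0, size=(50, 3)):
        # identity (42): distance equals the hinge psi(B(p)) = max{0, B(p) - c}
        assert abs(dist_to_horoball(p, c) - max(0.0, B(p) - c)) < 1e-9
```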
6The Busemann hybrid projection-proximal point algorithm
In this section, we introduce a variant of the hybrid projection-proximal point algorithm, as proposed in [43], adapted to the setting of Hadamard manifolds for solving problem (1). Before presenting our main approach, let us briefly recall the classical proximal point algorithm in the Hadamard manifold context, as studied in [22], which is defined as follows:
Step 0.
Take a sequence of positive parameters { 𝜇 𝑘 } 𝑘 ∈ ℕ ⊂ ( 0 , + ∞ ) and an initial point 𝑝 0 ∈ 𝕄 . Set 𝑘 ← 0 .
Step 1.
Given the current iterate 𝑝 𝑘 ∈ 𝕄 , compute the next iterate 𝑝 𝑘 + 1 ∈ 𝕄 such that
𝜇 𝑘 log 𝑝 𝑘 + 1 𝑝 𝑘 ∈ ∂ 𝑓 ( 𝑝 𝑘 + 1 ) .
Step 2.
If 𝑝 𝑘 + 1 = 𝑝 𝑘 , then stop and return 𝑝 𝑘 ; otherwise, set 𝑘 ← 𝑘 + 1 and go to Step 1.
Algorithm 1 Exact proximal point algorithm
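In the Euclidean case, log 𝑝 𝑘 + 1 𝑝 𝑘 = 𝑝 𝑘 − 𝑝 𝑘 + 1 , so Step 1 asks for 𝜇 𝑘 ( 𝑝 𝑘 − 𝑝 𝑘 + 1 ) ∈ ∂ 𝑓 ( 𝑝 𝑘 + 1 ) , which is the classical proximal step. The following minimal sketch illustrates this with the illustrative choice 𝑓 = ‖ ⋅ ‖ 1 , whose proximal operator is soft-thresholding, and a constant parameter 𝜇 𝑘 (both our assumptions, not from the paper).

```python
import numpy as np

# Step 1 in R^n: mu_k (p_k - p_{k+1}) in subdiff f(p_{k+1}),
# i.e. p_{k+1} = prox_{f/mu_k}(p_k); for f(x) = ||x||_1 this is soft-thresholding.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

p = np.array([3.0, -2.0, 0.4])
mu = 2.0                              # mu_k held constant for the sketch
for _ in range(30):
    p = soft_threshold(p, 1.0 / mu)   # exact proximal point iteration

assert np.allclose(p, 0.0)            # iterates reach the minimizer of ||.||_1
```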
Next, we introduce the Busemann hybrid proximal point algorithm on Hadamard manifolds for solving problem (1). This algorithm combines elements of the classical proximal point framework with the geometry of horospheres induced by Busemann functions, thereby allowing iterates to evolve along geodesically meaningful directions. The algorithm is formally described below:
Step 0.
Choose 𝑝 0 ∈ 𝕄 and two parameters 𝜎 , 𝜌 ∈ [ 0 , 1 ) ; set the iteration counter 𝑘 ← 0 .
Step 1.
Take 𝜇 𝑘 > 0 and compute a triple ( 𝑞 𝑘 , 𝑣 𝑘 , 𝜀 𝑘 ) with 𝑞 𝑘 ∈ 𝕄 and 𝑣 𝑘 , 𝜀 𝑘 ∈ 𝑇 𝑞 𝑘 𝕄 satisfying the conditions
0 = 𝑣 𝑘 − 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 + 𝜀 𝑘 , 𝑣 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) ,
(44)
with the relative error 𝜀 𝑘 satisfying
‖ 𝜀 𝑘 ‖ ≤ 𝜎 max { 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) , ‖ 𝑣 𝑘 ‖ } ,
(45)
Stopping test: if 𝑣 𝑘 = 0 or 𝑞 𝑘 = 𝑝 𝑘 , then stop and return 𝑝 𝑘 .
Step 2.
Define the horosphere
𝒮 𝑞 𝑘 , − 𝑣 𝑘 = { 𝑝 ∈ 𝕄 : 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ) = 0 } ,
and set the next iterate by projecting 𝑝 𝑘 onto 𝒮 𝑞 𝑘 , − 𝑣 𝑘 :
𝑝 𝑘 + 1 := 𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) .
Step 3.
Set 𝑘 ← 𝑘 + 1 and go to Step 1.
Algorithm 2 Busemann hybrid projection-proximal point algorithm (BHPPM)
We begin by discussing the stopping criterion and its implications for Algorithm 2. Suppose the algorithm terminates at some iteration 𝑘 ∈ ℕ . Two cases may arise: either 𝑣 𝑘 = 0 or 𝑞 𝑘 = 𝑝 𝑘 . In the first case, the inclusion in (44) reduces to 0 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) , implying by Proposition 11 that 𝑞 𝑘 is a solution to problem (1). In the second case, 𝑞 𝑘 = 𝑝 𝑘 implies 𝑣 𝑘 = − 𝜀 𝑘 from (44). Since the error then satisfies ‖ 𝜀 𝑘 ‖ ≤ 𝜎 ‖ 𝑣 𝑘 ‖ with 0 ≤ 𝜎 < 1 , it follows that 𝑣 𝑘 = 0 , and we again conclude that 𝑞 𝑘 solves the problem. Thus, if the algorithm terminates, it returns a solution to problem (1). Based on this discussion, we assume without loss of generality that the algorithm generates an infinite sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ throughout the remainder of the paper. We also assume throughout the analysis that the solution set of problem (1) is nonempty, i.e.,
𝑆 ∗ := arg min 𝑝 ∈ 𝕄 𝑓 ( 𝑝 ) ≠ ∅ .
We also note that BHPPM merges the proximal point framework with geometric projections onto horospheres defined by Busemann functions. At each iteration, a subgradient condition involving a proximity term and an error tolerance is imposed, with inexactness controlled by a parameter 𝜎 ∈ [ 0 , 1 ) . The key feature of BHPPM lies in its update rule: instead of using standard proximal mappings, the current iterate is projected onto a horosphere determined by a subgradient direction. This approach incorporates the intrinsic geometry of the manifold into the iteration process. When 𝜎 = 0 , the algorithm becomes exact and coincides with the classical proximal point algorithm on Hadamard manifolds, as will be shown below. The main idea behind the algorithm is that the horosphere used in the projection step separates the current iterate from the solution set, guiding the sequence toward optimality. To illustrate this, assume that 𝑝 ∗ ∈ 𝑆 ∗ and that 𝜎 = 0 . In this case, since 𝑣 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) , it follows from Definition 10 that
𝑓 ( 𝑝 ∗ ) ≥ 𝑓 ( 𝑞 𝑘 ) + ‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ∗ ) .
Since the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is infinite, we must have 𝑣 𝑘 ≠ 0 , and therefore we conclude that
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ∗ ) ≤ 0 .
On the other hand, since 𝜎 = 0 , the error term 𝜀 𝑘 vanishes, and from (44) it follows that
𝑣 𝑘 := 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) ,
which implies that 𝑝 𝑘 = exp 𝑞 𝑘 ( 𝑣 𝑘 / 𝜇 𝑘 ) . Thus, taking into account that 𝑣 𝑘 ≠ 0 and considering that 𝜇 𝑘 > 0 , by applying Lemma 6 we obtain
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) = 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( exp 𝑞 𝑘 ( 𝑣 𝑘 / 𝜇 𝑘 ) ) = ‖ 𝑣 𝑘 ‖ / 𝜇 𝑘 > 0 .
Consequently, the horosphere 𝒮 𝑞 𝑘 , − 𝑣 𝑘 separates the current iterate 𝑝 𝑘 from the solution 𝑝 ∗ . This geometric separation motivates the projection step onto the horosphere 𝒮 𝑞 𝑘 , − 𝑣 𝑘 in the algorithm and guarantees descent toward the solution set.
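The separation property just described can be checked numerically in the flat model 𝕄 = ℝⁿ, where 𝐵 𝑞 , − 𝑣 ( 𝑝 ) = ⟨ 𝑝 − 𝑞 , 𝑣 / ‖ 𝑣 ‖ ⟩. The quadratic objective and the parameter values below are illustrative choices of ours.

```python
import numpy as np

# Numerical check of the separation property in the flat model M = R^n.
# We use the illustrative objective f(p) = 0.5*||p||^2, with minimizer
# p* = 0; its exact proximal point from p with parameter mu is
# q = mu/(1 + mu) * p, and v = grad f(q) = q equals mu*(p - q).

def busemann(p, q, v):
    return np.dot(p - q, v / np.linalg.norm(v))

mu = 2.0
p_k = np.array([3.0, -4.0])        # current iterate
q_k = mu / (1.0 + mu) * p_k        # exact proximal point
v_k = q_k                          # v_k = grad f(q_k) = mu*(p_k - q_k)
p_star = np.zeros(2)               # solution of min f

# Solution side: B_{q_k,-v_k}(p_star) <= 0, so p_star lies in the horoball.
b_star = busemann(p_star, q_k, v_k)
# Iterate side: B_{q_k,-v_k}(p_k) = ||v_k||/mu > 0, so p_k is strictly outside.
b_iter = busemann(p_k, q_k, v_k)
print(b_star, b_iter, np.linalg.norm(v_k) / mu)
```

The horosphere thus strictly separates 𝑝 𝑘 from 𝑝 ∗ in this toy instance, as in the general argument above.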
Before proceeding with the analysis of Algorithm 2, we show that, in the exact case ( 𝜎 = 0 ), Algorithm 2 reduces to Algorithm 1.
Remark 4.
Let { 𝑝 𝑘 } 𝑘 ∈ ℕ ⊂ 𝕄 be the sequence generated by Algorithm 2, and assume that 𝑝 𝑘 ≠ 𝑞 𝑘 . If the inexactness parameter is set to 𝜎 = 0 , then the algorithm satisfies
𝑣 𝑘 = 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) , 𝑝 𝑘 + 1 = 𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) , ∀ 𝑘 ∈ ℕ ,
(46)
where
𝒮 𝑞 𝑘 , − 𝑣 𝑘 := { 𝑝 ∈ 𝕄 : 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ) = 0 } ,
is the horosphere of level zero associated with the Busemann function 𝐵 𝑞 𝑘 , − 𝑣 𝑘 , and 𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 denotes the metric projection onto this horosphere. To show that Algorithm 2 reduces to Algorithm 1 in this case, it suffices to prove that 𝑞 𝑘 = 𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) , i.e., 𝑝 𝑘 + 1 = 𝑞 𝑘 . For that, define
𝜌 := 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) = ‖ log 𝑞 𝑘 𝑝 𝑘 ‖ > 0 , 𝑣 ^ 𝑘 := 𝑣 𝑘 / ‖ 𝑣 𝑘 ‖ .
Then, from the equality in (46) we have
log 𝑞 𝑘 𝑝 𝑘 = 𝜌 𝑣 ^ 𝑘 , ‖ 𝑣 𝑘 ‖ = 𝜇 𝑘 𝜌 ,
(47)
which implies 𝑝 𝑘 = exp 𝑞 𝑘 ( 𝜌 𝑣 ^ 𝑘 ) , i.e., 𝑝 𝑘 lies along the geodesic ray 𝑡 ↦ exp 𝑞 𝑘 ( 𝑡 𝑣 ^ 𝑘 ) at time 𝑡 = 𝜌 . By Lemma 6, this yields
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) = 𝜌 .
Moreover, Lemma 5 ensures that the Busemann function 𝐵 𝑞 𝑘 , − 𝑣 𝑘 is continuously differentiable, and ‖ grad ( − 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ) ( 𝑝 𝑘 ) ‖ = 1 . Thus, by Lemma 19, the metric projection of 𝑝 𝑘 onto the horosphere 𝒮 𝑞 𝑘 , − 𝑣 𝑘 is given by
𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) = exp 𝑝 𝑘 ( 𝜌 grad ( − 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ) ( 𝑝 𝑘 ) ) .
(48)
On the other hand, since 𝑝 𝑘 = exp 𝑞 𝑘 ( 𝜌 𝑣 ^ 𝑘 ) , we conclude from identity (7) in Lemma 5 that
grad 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) = 𝑃 𝑞 𝑘 𝑝 𝑘 𝑣 ^ 𝑘 .
Combining this with (47) and (48), and using that 𝑃 𝑞 𝑘 𝑝 𝑘 log 𝑞 𝑘 𝑝 𝑘 = − log 𝑝 𝑘 𝑞 𝑘 , we obtain that
𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) = exp 𝑝 𝑘 ( − 𝜌 𝑃 𝑞 𝑘 𝑝 𝑘 𝑣 ^ 𝑘 ) = exp 𝑝 𝑘 ( − 𝑃 𝑞 𝑘 𝑝 𝑘 log 𝑞 𝑘 𝑝 𝑘 ) = exp 𝑝 𝑘 ( log 𝑝 𝑘 𝑞 𝑘 ) = 𝑞 𝑘 ,
which proves that 𝑝 𝑘 + 1 = 𝑞 𝑘 . Finally, since horospheres are geodesically convex in Hadamard manifolds, the metric projection onto them is uniquely defined, see Proposition 3. Therefore, the update in inclusion (46) becomes 𝜇 𝑘 log 𝑝 𝑘 + 1 𝑝 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑝 𝑘 + 1 ) . Since Proposition 12 implies that ∂ 𝑏 𝑓 ( 𝑝 𝑘 + 1 ) ⊂ ∂ 𝑓 ( 𝑝 𝑘 + 1 ) , we conclude that
𝜇 𝑘 log 𝑝 𝑘 + 1 𝑝 𝑘 ∈ ∂ 𝑓 ( 𝑝 𝑘 + 1 ) ,
which matches Step 1 of Algorithm 1. Hence, in the exact case ( 𝜎 = 0 ), Algorithm 2 reduces to Algorithm 1.
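As a sanity check on the computation in Remark 4, consider the flat specialization 𝕄 = ℝⁿ, where parallel transport is the identity and the Busemann function is affine; this worked case is our own addition for illustration, not part of the original remark.

```latex
% Flat case M = R^n: log_q p = p - q, exp_p u = p + u, P_{q p} = id,
% and B_{q,-v}(p) = <p - q, v/||v||>, so the horosphere is a hyperplane.
\[
  \rho := d(q_k,p_k) = \|p_k - q_k\| , \qquad
  \hat v_k = \frac{v_k}{\|v_k\|} = \frac{p_k - q_k}{\|p_k - q_k\|} ,
\]
\[
  \mathcal{P}_{\mathcal{S}_{q_k,-v_k}}(p_k)
  = p_k - B_{q_k,-v_k}(p_k)\,\hat v_k
  = p_k - \rho\,\hat v_k
  = q_k ,
\]
% so the projection of p_k onto the horosphere is exactly the proximal
% point q_k, matching the conclusion p_{k+1} = q_k.
```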
6.1Convergence Analysis
In this section, we analyze the convergence properties of the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ , assuming it is infinite. We begin by proving a descent lemma that ensures the squared distance from each iterate to the solution set decreases along the sequence. This descent behavior is fundamental to our convergence analysis and can be regarded as a Riemannian counterpart of the classical estimate in linear spaces, as presented in [43, Lemma 2.1].
Lemma 20.
Let 𝕄 be a Hadamard manifold, 𝑝 , 𝑞 ∈ 𝕄 and 𝑣 ∈ 𝑇 𝑞 𝕄 ∖ { 0 } . Assume that 𝑝 ∗ ∈ 𝑆 ∗ and 𝑣 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) . Let 𝒮 𝑞 , − 𝑣 := { 𝑥 ∈ 𝕄 : 𝐵 𝑞 , − 𝑣 ( 𝑥 ) = 0 } be a horosphere. Then, there holds
𝑑 2 ( 𝒫 𝒮 𝑞 , − 𝑣 ( 𝑝 ) , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 , 𝑝 ∗ ) − ( 𝐵 𝑞 , − 𝑣 ( 𝑝 ) ) 2 , ∀ 𝑝 ∉ ℋ 𝑞 , − 𝑣 .
Proof.
Since 𝑣 ∈ ∂ 𝑏 𝑓 ( 𝑞 ) , from Definition 10 it follows that 𝑓 ( 𝑝 ∗ ) ≥ 𝑓 ( 𝑞 ) + ‖ 𝑣 ‖ 𝐵 𝑞 , − 𝑣 ( 𝑝 ∗ ) . Thus, considering that 𝑣 ≠ 0 , we conclude that 𝐵 𝑞 , − 𝑣 ( 𝑝 ∗ ) ≤ 0 . Hence, 𝑝 ∗ ∈ ℋ 𝑞 , − 𝑣 . On the other hand, for a given 𝑧 ∈ 𝕄 , it follows from Lemma 2 that
𝑑 2 ( 𝑧 , 𝑝 ) + 𝑑 2 ( 𝑧 , 𝑝 ∗ ) − 2 ⟨ log 𝑧 𝑝 , log 𝑧 𝑝 ∗ ⟩ ≤ 𝑑 2 ( 𝑝 , 𝑝 ∗ ) .
(49)
Since ℋ 𝑞 , − 𝑣 is geodesically convex, the metric projection onto ℋ 𝑞 , − 𝑣 is well-defined and unique. Considering that the horosphere 𝒮 𝑞 , − 𝑣 is the boundary of the horoball ℋ 𝑞 , − 𝑣 , for a given 𝑝 ∉ ℋ 𝑞 , − 𝑣 , we have 𝒫 ℋ 𝑞 , − 𝑣 ( 𝑝 ) = 𝒫 𝒮 𝑞 , − 𝑣 ( 𝑝 ) . Thus, letting 𝑧 := 𝒫 𝒮 𝑞 , − 𝑣 ( 𝑝 ) and taking into account that 𝑝 ∗ ∈ ℋ 𝑞 , − 𝑣 , by applying Proposition 3 we obtain that ⟨ log 𝑧 𝑝 , log 𝑧 𝑝 ∗ ⟩ ≤ 0 . Combining this inequality with (49), we conclude that
𝑑 2 ( 𝑧 , 𝑝 ) + 𝑑 2 ( 𝑧 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 , 𝑝 ∗ ) .
(50)
Furthermore, Lemma 19 shows that the projection of 𝑝 onto the horosphere 𝒮 𝑞 , − 𝑣 is given by
𝑧 = 𝒫 𝒮 𝑞 , − 𝑣 ( 𝑝 ) = exp 𝑝 ( 𝐵 𝑞 , − 𝑣 ( 𝑝 ) grad ( − 𝐵 𝑞 , − 𝑣 ) ( 𝑝 ) ) .
Taking into account that the gradient of the Busemann function satisfies ‖ grad ( 𝐵 𝑞 , − 𝑣 ) ( 𝑝 ) ‖ = 1 and 𝐵 𝑞 , − 𝑣 ( 𝑝 ) > 0 , we obtain that 𝑑 ( 𝑧 , 𝑝 ) = ‖ 𝐵 𝑞 , − 𝑣 ( 𝑝 ) grad ( − 𝐵 𝑞 , − 𝑣 ) ( 𝑝 ) ‖ = 𝐵 𝑞 , − 𝑣 ( 𝑝 ) . Squaring both sides yields 𝑑 2 ( 𝑧 , 𝑝 ) = ( 𝐵 𝑞 , − 𝑣 ( 𝑝 ) ) 2 , and substituting into (50) completes the proof. ∎
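The descent inequality of Lemma 20 can be verified numerically in the flat model 𝕄 = ℝⁿ, where the horosphere projection is a hyperplane projection; the sampled data below are our own illustrative instances, not part of the lemma.

```python
import numpy as np

# Check of Lemma 20 in the flat model M = R^n: for z = P_{S_{q,-v}}(p)
# one should have  d^2(z, p*) <= d^2(p, p*) - B_{q,-v}(p)^2
# whenever B_{q,-v}(p) > 0 (p outside the horoball) and B_{q,-v}(p*) <= 0
# (p* inside it). Samples failing those side conditions are skipped.

rng = np.random.default_rng(0)

def busemann(p, q, v):
    return np.dot(p - q, v / np.linalg.norm(v))

def project(p, q, v):
    vhat = v / np.linalg.norm(v)
    return p - busemann(p, q, v) * vhat

for _ in range(1000):
    q, v, p, p_star = (rng.standard_normal(3) for _ in range(4))
    if busemann(p, q, v) <= 0 or busemann(p_star, q, v) > 0:
        continue  # need p outside the horoball and p* inside it
    z = project(p, q, v)
    lhs = np.sum((z - p_star) ** 2)
    rhs = np.sum((p - p_star) ** 2) - busemann(p, q, v) ** 2
    assert lhs <= rhs + 1e-10
print("descent inequality verified on sampled instances")
```

In the flat case the gap between the two sides is exactly 2 𝐵 𝑞 , − 𝑣 ( 𝑝 ) | 𝐵 𝑞 , − 𝑣 ( 𝑝 ∗ ) |, which is nonnegative under the side conditions.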
We are now prepared to establish our main global convergence result for Algorithm 2. The theorem below describes the asymptotic behavior of the sequence generated by Algorithm 2.
Theorem 21.
Let { 𝑝 𝑘 } 𝑘 ∈ ℕ be any sequence generated by Algorithm 2 and 𝑝 ∗ ∈ 𝑆 ∗ . Then, the following inequality holds:
𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − ( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) , ∀ 𝑘 ∈ ℕ .
(51)
As a consequence, the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is bounded, and the sequence { 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) } 𝑘 ∈ ℕ converges to zero. In addition, if ∑_{𝑘=0}^{+∞} 𝜇_𝑘^{−2} = + ∞ , then there exists a subsequence of { 𝑣 𝑘 } 𝑘 ∈ ℕ which converges to zero, and the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ converges to a solution of problem (1).
Proof.
Let 𝑝 ∗ ∈ 𝑆 ∗ be an arbitrary solution. Since 𝑣 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) , from Definition 10 it follows that 𝑓 ( 𝑝 ∗ ) ≥ 𝑓 ( 𝑞 𝑘 ) + ‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ∗ ) , for all 𝑘 ∈ ℕ . Since the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is infinite, we must have 𝑣 𝑘 ≠ 0 , and therefore we conclude that
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 ∗ ) ≤ 0 , ∀ 𝑘 ∈ ℕ .
(52)
We now prove that 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) > 0 . To this end, we consider the two possible cases for the relative error 𝜀 𝑘 in (45). We begin by considering the case in which the error satisfies
‖ 𝜀 𝑘 ‖ ≤ 𝜎 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) .
(53)
Using the equality in (44), we have 𝑣 𝑘 = 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 − 𝜀 𝑘 . Hence, applying Lemma 9 and then the Cauchy–Schwarz inequality, and taking into account that ‖ log 𝑞 𝑘 𝑝 𝑘 ‖ = 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) , we obtain
‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ⟨ 𝑣 𝑘 , log 𝑞 𝑘 𝑝 𝑘 ⟩ ≥ 𝜇 𝑘 𝑑 2 ( 𝑞 𝑘 , 𝑝 𝑘 ) − ‖ 𝜀 𝑘 ‖ 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) .
Now, by using (53) we have
‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ 𝜇 𝑘 ( 1 − 𝜎 ) 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) .
(54)
On the other hand, the equality in (44) implies that 𝜇 𝑘 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) = ‖ 𝑣 𝑘 + 𝜀 𝑘 ‖ ≥ ‖ 𝑣 𝑘 ‖ − ‖ 𝜀 𝑘 ‖ . Thus, from (53) it follows that 𝜇 𝑘 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≥ ‖ 𝑣 𝑘 ‖ − 𝜎 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) , which implies
𝜇 𝑘 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≥ ( 1 / ( 1 + 𝜎 ) ) ‖ 𝑣 𝑘 ‖ .
(55)
Given that the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is infinite, it follows that 𝑣 𝑘 ≠ 0 and 𝑞 𝑘 ≠ 𝑝 𝑘 . Hence, combining (55) with (54), we obtain that
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) > 0 .
(56)
We proceed to analyze the second bound imposed on the relative error 𝜀 𝑘 , given by
‖ 𝜀 𝑘 ‖ ≤ 𝜎 ‖ 𝑣 𝑘 ‖ .
(57)
Applying Lemma 9 and rewriting the equality in (44) as 𝑣 𝑘 + 𝜀 𝑘 = 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 , we have
‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ⟨ 𝑣 𝑘 , log 𝑞 𝑘 𝑝 𝑘 ⟩ = ( 1 / 𝜇 𝑘 ) ⟨ 𝑣 𝑘 , 𝑣 𝑘 + 𝜀 𝑘 ⟩ .
Using the estimate ⟨ 𝑣 𝑘 , 𝑣 𝑘 + 𝜀 𝑘 ⟩ ≥ ‖ 𝑣 𝑘 ‖ 2 − ‖ 𝑣 𝑘 ‖ ‖ 𝜀 𝑘 ‖ , along with the error bound from (57), we obtain the following expression
‖ 𝑣 𝑘 ‖ 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ( 1 / 𝜇 𝑘 ) ( ‖ 𝑣 𝑘 ‖ 2 − ‖ 𝑣 𝑘 ‖ ‖ 𝜀 𝑘 ‖ ) ≥ ( ( 1 − 𝜎 ) / 𝜇 𝑘 ) ‖ 𝑣 𝑘 ‖ 2 .
(58)
In the second case (57), and recalling that 𝑣 𝑘 ≠ 0 , inequality (58) implies
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ( ( 1 − 𝜎 ) / 𝜇 𝑘 ) ‖ 𝑣 𝑘 ‖ .
(59)
On the other hand, using again (44) and taking into account the error bound (57), we obtain
𝜇 𝑘 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) = ‖ 𝑣 𝑘 + 𝜀 𝑘 ‖ ≤ ‖ 𝑣 𝑘 ‖ + ‖ 𝜀 𝑘 ‖ ≤ ( 1 + 𝜎 ) ‖ 𝑣 𝑘 ‖ ,
which in turn implies the lower bound
‖ 𝑣 𝑘 ‖ ≥ ( 𝜇 𝑘 / ( 1 + 𝜎 ) ) 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) .
Finally, combining the last inequality with (59), we deduce that (56) also holds in this case. Since we are assuming 𝑝 𝑘 ≠ 𝑞 𝑘 , it follows in particular that 𝑝 𝑘 ∉ ℋ 𝑞 𝑘 , − 𝑣 𝑘 , for all 𝑘 ∈ ℕ . On the other hand, from (52), we know that 𝑝 ∗ ∈ ℋ 𝑞 𝑘 , − 𝑣 𝑘 for all 𝑘 ∈ ℕ . Hence, considering that 𝑝 𝑘 + 1 := 𝒫 𝒮 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) , we can invoke Lemma 20 to conclude that the following inequality holds
𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − ( 𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ) 2 , ∀ 𝑘 ∈ ℕ .
(60)
Combining (60) with (56), we obtain the desired estimate (51). As a consequence, (51) shows that the sequence { 𝑑 ( 𝑝 𝑘 , 𝑝 ∗ ) } 𝑘 ∈ ℕ is monotonically non-increasing and therefore bounded. Thus, the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is also bounded. Applying (51) once more, we deduce that
( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) , ∀ 𝑘 ∈ ℕ .
Summing the above inequality from 𝑘 = 0 to 𝑘 = 𝑁 − 1 , we obtain
( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 ∑_{𝑘=0}^{𝑁−1} 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ 𝑑 2 ( 𝑝 0 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑁 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 0 , 𝑝 ∗ ) .
This estimate implies that the sequence { 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) } 𝑘 ∈ ℕ converges to zero, which completes the first part of the proof.
We now turn to the final part of the theorem. To this end, we assume that ∑_{𝑘=0}^{+∞} 𝜇_𝑘^{−2} = + ∞ . We again consider the two possible cases for the relative error 𝜀 𝑘 in (45). First, suppose that (53) holds. By combining inequalities (54) and (55), we obtain
𝐵 𝑞 𝑘 , − 𝑣 𝑘 ( 𝑝 𝑘 ) ≥ ( ( 1 − 𝜎 ) / ( 𝜇 𝑘 ( 1 + 𝜎 ) 2 ) ) ‖ 𝑣 𝑘 ‖ .
(61)
Next, assume that the second condition (57) is satisfied. Dividing the right-hand side of (59) by ( 1 + 𝜎 ) 2 , we see that the same bound (61) holds in this case as well. Substituting this estimate into inequality (60), we arrive at
𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − ( ( 1 − 𝜎 ) / ( 𝜇 𝑘 ( 1 + 𝜎 ) 2 ) ) 2 ‖ 𝑣 𝑘 ‖ 2 .
(62)
To prove that the sequence { 𝑣 𝑘 } 𝑘 ∈ ℕ admits a subsequence converging to zero, suppose by contradiction that lim inf 𝑘 → ∞ ‖ 𝑣 𝑘 ‖ = 2 𝛿 > 0 . Then there exists 𝑘 0 ∈ ℕ such that ‖ 𝑣 𝑘 ‖ ≥ 𝛿 for all 𝑘 ≥ 𝑘 0 . Using (62), this implies that
𝛿 2 ( ( 1 − 𝜎 ) 2 / ( 1 + 𝜎 ) 4 ) ∑_{𝑗=𝑘 0}^{𝑘−1} 𝜇_𝑗^{−2} = ∑_{𝑗=𝑘 0}^{𝑘−1} ( ( 1 − 𝜎 ) / ( 𝜇 𝑗 ( 1 + 𝜎 ) 2 ) ) 2 𝛿 2 ≤ 𝑑 2 ( 𝑝 𝑘 0 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 𝑘 0 , 𝑝 ∗ ) .
Since the left-hand side tends to + ∞ as 𝑘 goes to + ∞ , we reach a contradiction. Thus, we conclude that lim inf 𝑘 → ∞ ‖ 𝑣 𝑘 ‖ = 0 , which implies that the sequence { 𝑣 𝑘 } 𝑘 ∈ ℕ admits a subsequence converging to zero. This establishes the first part of the final statement of the theorem.
Since the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is bounded, we proceed to prove that it has an accumulation point that is a solution of problem (1). Let { 𝑣 𝑘 ℓ } ℓ ∈ ℕ be a subsequence with lim ℓ → + ∞ 𝑣 𝑘 ℓ = 0 . Since the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ is bounded, we may assume, without introducing new indices, that { 𝑝 𝑘 ℓ } ℓ ∈ ℕ converges to some point 𝑝 ^ ∈ 𝕄 . By the first part of the proof, we have lim ℓ → + ∞ 𝑑 ( 𝑝 𝑘 ℓ , 𝑞 𝑘 ℓ ) = 0 , which implies that lim ℓ → + ∞ 𝑞 𝑘 ℓ = 𝑝 ^ as well. Since (44) implies that 𝑣 𝑘 ℓ ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ℓ ) and Proposition 12 implies that ∂ 𝑏 𝑓 ( 𝑞 ) ⊂ ∂ 𝑓 ( 𝑞 ) , we conclude that 𝑣 𝑘 ℓ ∈ ∂ 𝑓 ( 𝑞 𝑘 ℓ ) . Thus,
𝑓 ( 𝑝 ) ≥ 𝑓 ( 𝑞 𝑘 ℓ ) + ⟨ 𝑣 𝑘 ℓ , log 𝑞 𝑘 ℓ 𝑝 ⟩ , ∀ 𝑝 ∈ 𝕄 .
Letting ℓ go to + ∞ , and using that lim ℓ → + ∞ 𝑞 𝑘 ℓ = 𝑝 ^ and lim ℓ → + ∞ 𝑣 𝑘 ℓ = 0 , we obtain
𝑓 ( 𝑝 ) ≥ 𝑓 ( 𝑝 ^ ) + ⟨ 0 , log 𝑝 ^ 𝑝 ⟩ , ∀ 𝑝 ∈ 𝕄 ,
which implies 0 ∈ ∂ 𝑓 ( 𝑝 ^ ) , that is, 𝑝 ^ is a solution to problem (1). Finally, we prove the convergence of the entire sequence. It suffices to show that it has a unique accumulation point. Let 𝑝 ~ ∈ 𝕄 be another accumulation point of the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ . Then, there exist two subsequences { 𝑝 𝑘 𝑖 } 𝑖 ∈ ℕ and { 𝑝 𝑘 𝑗 } 𝑗 ∈ ℕ such that
lim 𝑖 → + ∞ 𝑝 𝑘 𝑖 = 𝑝 ^ and lim 𝑗 → + ∞ 𝑝 𝑘 𝑗 = 𝑝 ~ .
Since inequality (62) holds for every solution, in particular it holds for 𝑝 ∗ = 𝑝 ^ . Hence, it follows from (62) that the sequence { 𝑑 ( 𝑝 𝑘 , 𝑝 ^ ) } 𝑘 ∈ ℕ is non-increasing and bounded below, and therefore convergent. Furthermore, the subsequence { 𝑑 ( 𝑝 𝑘 𝑖 , 𝑝 ^ ) } 𝑖 ∈ ℕ converges to zero, implying that the full sequence { 𝑑 ( 𝑝 𝑘 , 𝑝 ^ ) } 𝑘 ∈ ℕ also converges to zero. Consequently,
𝑑 ( 𝑝 ~ , 𝑝 ^ ) = lim 𝑗 → + ∞ 𝑑 ( 𝑝 𝑘 𝑗 , 𝑝 ^ ) = 0 ,
and thus 𝑝 ~ = 𝑝 ^ . Therefore, all accumulation points of the sequence { 𝑝 𝑘 } 𝑘 ∈ ℕ coincide with 𝑝 ^ , and the entire sequence converges to 𝑝 ^ . Since 𝑝 ^ was previously shown to be a solution of problem (1), the proof is complete. ∎
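The conclusions of Theorem 21 can be observed numerically in the flat model 𝕄 = ℝ², again with an illustrative quadratic objective and a closed-form proximal subsolver of our own choosing (this is not the paper's experiment).

```python
import numpy as np

# Illustration of Theorem 21 in the flat model M = R^2 with the
# illustrative objective f(p) = 0.5*||p - a||^2, minimized at p* = a.
# The exact proximal point from p with parameter mu is q = (a + mu*p)/(1+mu),
# and in the exact case the horosphere projection returns q.

a = np.array([1.0, 2.0])            # solution p*
mu = 1.0                            # constant proximal parameter
p = np.array([10.0, -5.0])

dist_to_sol, dist_pq = [], []
for k in range(50):
    q = (a + mu * p) / (1.0 + mu)   # exact proximal subproblem solution
    v = mu * (p - q)                # v_k = mu * log_{q_k} p_k
    vhat = v / np.linalg.norm(v)
    p_next = p - np.dot(p - q, vhat) * vhat   # horosphere projection
    dist_to_sol.append(np.linalg.norm(p - a))
    dist_pq.append(np.linalg.norm(p - q))
    p = p_next

# Fejer-type monotonicity: d(p_k, p*) non-increasing; d(p_k, q_k) -> 0.
print(dist_pq[0], dist_pq[-1])
```

Both predicted behaviors, monotone decrease of the distance to the solution and vanishing of 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ), are visible in this run.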
We now establish a nonasymptotic iteration complexity bound for the Busemann hybrid proximal point algorithm. The result below shows that the sequence generated by Algorithm 2 achieves a sublinear rate of convergence with respect to the Busemann residual and the norm of approximate subgradients.
Theorem 22.
Let { 𝑝 𝑘 } 𝑘 ∈ ℕ be any sequence generated by Algorithm 2 with inexactness parameter 𝜎 ∈ [ 0 , 1 ) . Then, for an arbitrary solution 𝑝 ∗ ∈ 𝑆 ∗ and for any given 𝑁 ≥ 1 there holds
min 0 ≤ 𝑘 ≤ 𝑁 − 1 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ ( ( 1 + 𝜎 ) / ( 1 − 𝜎 ) ) 𝑑 ( 𝑝 0 , 𝑝 ∗ ) ( 1 / √ 𝑁 ) .
(63)
In addition, if 𝜇 𝑘 ≤ 𝜇 ¯ for every 𝑘 , then
min 0 ≤ 𝑘 ≤ 𝑁 − 1 ‖ 𝑣 𝑘 ‖ ≤ 𝜇 ¯ ( ( 1 + 𝜎 ) / ( 1 − 𝜎 ) ) 𝑑 ( 𝑝 0 , 𝑝 ∗ ) ( 1 / √ 𝑁 ) .
(64)
Proof.
The proof is based on the descent estimate obtained in Theorem 21, namely
( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) , ∀ 𝑘 ∈ ℕ .
Summing the last inequality from 𝑘 = 0 to 𝑘 = 𝑁 − 1 gives
( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 ∑_{𝑘=0}^{𝑁−1} 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ ∑_{𝑘=0}^{𝑁−1} ( 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) ) .
The right-hand side is a telescoping sum:
∑_{𝑘=0}^{𝑁−1} ( 𝑑 2 ( 𝑝 𝑘 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑘 + 1 , 𝑝 ∗ ) ) = 𝑑 2 ( 𝑝 0 , 𝑝 ∗ ) − 𝑑 2 ( 𝑝 𝑁 , 𝑝 ∗ ) ≤ 𝑑 2 ( 𝑝 0 , 𝑝 ∗ ) ,
since squared distances are non-negative. Therefore, we conclude that
( ( 1 − 𝜎 ) / ( 1 + 𝜎 ) ) 2 ∑_{𝑘=0}^{𝑁−1} 𝑑 2 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ 𝑑 2 ( 𝑝 0 , 𝑝 ∗ ) ,
and this inequality gives the bound stated in (63). From the inexact optimality condition in Step 1 of Algorithm 2, we have 𝑣 𝑘 − 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 + 𝜀 𝑘 = 0 and 𝑣 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) ; together with the relative-error rule (45), it holds that ‖ 𝜀 𝑘 ‖ ≤ 𝜎 ‖ 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 ‖ = 𝜎 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) . Then,
‖ 𝑣 𝑘 ‖ = ‖ 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 − 𝜀 𝑘 ‖ ≤ 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) + 𝜎 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) = ( 1 + 𝜎 ) 𝜇 𝑘 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) .
Using that 𝜇 𝑘 ≤ 𝜇 ¯ for all 𝑘 , it follows that min 0 ≤ 𝑘 ≤ 𝑁 − 1 ‖ 𝑣 𝑘 ‖ ≤ ( 1 + 𝜎 ) 𝜇 ¯ min 0 ≤ 𝑘 ≤ 𝑁 − 1 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) . Thus, by using the bound obtained in (63) we obtain (64), and this completes the proof. ∎
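The sublinear bound (63) can be checked on a concrete run in the flat model 𝕄 = ℝ², with 𝜎 = 0 so that the bound reads min 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) ≤ 𝑑 ( 𝑝 0 , 𝑝 ∗ ) / √ 𝑁. The quadratic objective is the same illustrative choice as before, not a case treated in the paper.

```python
import numpy as np

# Numerical check of the complexity bound (63) with sigma = 0 in the flat
# model M = R^2, using the illustrative quadratic f(p) = 0.5*||p - a||^2
# with minimizer p* = a. The exact BHPPM step equals the proximal point
# q = (a + mu*p)/(1 + mu).

a = np.array([1.0, 2.0])
mu, p = 1.0, np.array([10.0, -5.0])
d0, gaps = np.linalg.norm(p - a), []

for N in range(1, 51):
    q = (a + mu * p) / (1.0 + mu)
    gaps.append(np.linalg.norm(p - q))
    p = q                            # exact BHPPM update
    assert min(gaps) <= d0 / np.sqrt(N) + 1e-12
print("bound (63) holds for N = 1, ..., 50")
```

On this instance the residuals 𝑑 ( 𝑝 𝑘 , 𝑞 𝑘 ) actually decay geometrically, so the 1 / √ 𝑁 bound is very loose, which is consistent with its worst-case nature.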
6.2The proximal subproblem
In this section, we present the subroutine ApproxTriple, shown in Algorithm 3, which serves as the computational procedure for Step 1 of the Busemann hybrid projection-proximal point algorithm (BHPPM) stated in Algorithm 2, using the gradient or subgradient algorithm on Hadamard manifolds. Its purpose is to compute an approximate triple ( 𝑞 𝑘 , 𝑣 𝑘 , 𝜀 𝑘 ) for the regularized subproblem
min 𝑝 ∈ 𝕄 𝑓 ( 𝑝 ) + 𝜇 𝑘 2 𝑑 2 ( 𝑝 , 𝑝 𝑘 ) ,
(65)
subject to the inexact optimality condition
0 = 𝑣 𝑘 − 𝜇 𝑘 log 𝑞 𝑘 𝑝 𝑘 + 𝜀 𝑘 , 𝑣 𝑘 ∈ ∂ 𝑏 𝑓 ( 𝑞 𝑘 ) ,
together with the relative error condition
‖ 𝜀 𝑘 ‖ ≤ 𝜎 max { 𝜇 𝑘 𝑑 ( 𝑞 𝑘 , 𝑝 𝑘 ) , ‖ 𝑣 𝑘 ‖ } .
To begin, observe that the objective function of (65) is strongly convex and hence coercive, see [22]. It follows that (65) admits a unique (exact) solution and, therefore, by Proposition 11, the objective function is 𝐵 -subdifferentiable at that solution. In practice, we solve (65) inexactly to balance numerical effort and convergence guarantees, allowing early termination of the inner iterations without degrading the overall convergence rate of BHPPM. Routine strategies based on the Riemannian subgradient method introduced in [23] ensure that the returned triple satisfies both the inclusion constraint and the relative error criterion imposed by the outer algorithm. The algorithm is stated next.
Algorithm 3 Inner routine for Step 1: ApproxTriple ( 𝑞 𝑘 , 𝑣 𝑘 , 𝜀 𝑘 ) Input:
Current outer iterate 𝑝 𝑘 ∈ 𝕄 , penalty parameter 𝜇 𝑘 > 0 , relative-error factor 0 < 𝜎 < 1 , maximum inner iterations 𝑇 max .
Step 0.
Set 𝑧 0 ← 𝑝 𝑘 , iteration counter ℓ ← 0 .
Step 1.
Compute a Busemann subgradient 𝑔 ℓ ∈ ∂ 𝑏 𝑓 ( 𝑧 ℓ ) . Set residual 𝑟 ℓ := 𝜇 𝑘 log 𝑧 ℓ 𝑝 𝑘 − 𝑔 ℓ .
Stopping test:
If
‖ 𝑟 ℓ ‖ ≤ 𝜎 max { 𝜇 𝑘 𝑑 ( 𝑧 ℓ , 𝑝 𝑘 ) , ‖ 𝑔 ℓ ‖ } or ℓ = 𝑇 max ,
then stop and return 𝑞 𝑘 := 𝑧 ℓ , 𝑣 𝑘 := 𝑔 ℓ , 𝜀 𝑘 := 𝑟 ℓ .
Step 2.
Compute a step size 𝛼 ℓ > 0 (e.g. an exogenous step size), set 𝑑 ℓ := 𝑔 ℓ − 𝜇 𝑘 log 𝑧 ℓ 𝑝 𝑘 and
𝑧 ℓ + 1 := exp 𝑧 ℓ ( − 𝛼 ℓ 𝑑 ℓ ) .
Step 3.
Set ℓ ← ℓ + 1 and go to Step 1.
Algorithm 3 describes a subgradient-type routine for computing an inexact solution to the proximal subproblem at each iteration of BHPPM. The search point is updated by moving along a direction composed of a subgradient of the objective function corrected by the proximal term 𝜇 𝑘 log 𝑧 ℓ 𝑝 𝑘 . A residual vector is computed at each step to check whether the current point satisfies the relative error criterion prescribed by the inexact inclusion condition of the outer algorithm. If the residual is sufficiently small relative to the subgradient and proximity terms, the procedure terminates and returns a triple ( 𝑞 𝑘 , 𝑣 𝑘 , 𝜀 𝑘 ) satisfying both the optimality inclusion and the relative error bound required in Step 1 of BHPPM; the cap 𝑇 max serves as a safeguard against excessively long inner loops. The flexibility of this routine allows it to handle both smooth and nonsmooth objective functions. The step size can be computed via Armijo-type backtracking or be set exogenously.
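A compact rendering of ApproxTriple in the flat model 𝕄 = ℝⁿ (log 𝑧 𝑝 = 𝑝 − 𝑧, exp 𝑧 𝑣 = 𝑧 + 𝑣) is sketched below. The nonsmooth test objective 𝑓 ( 𝑝 ) = ‖ 𝑝 ‖ ₁, the constant step size, and the function names are illustrative assumptions of ours, not prescriptions from the paper.

```python
import numpy as np

# Sketch of the inner routine ApproxTriple in the flat model M = R^n.
# For the illustrative objective f(p) = ||p||_1, any sign vector of z
# is a subgradient. A small constant step size stands in for the
# exogenous step-size rule of Step 2.

def approx_triple(f_subgrad, p_k, mu_k, sigma=0.5, alpha=0.05, t_max=10_000):
    z = p_k.copy()
    for _ in range(t_max):
        g = f_subgrad(z)                     # g_l in ∂_b f(z_l)
        r = mu_k * (p_k - z) - g             # residual r_l
        tol = sigma * max(mu_k * np.linalg.norm(z - p_k), np.linalg.norm(g))
        if np.linalg.norm(r) <= tol:         # relative-error stopping test
            break
        d = g - mu_k * (p_k - z)             # direction d_l = -r_l
        z = z - alpha * d                    # z_{l+1} = exp_{z_l}(-alpha*d_l)
    return z, g, r                           # (q_k, v_k, eps_k)

l1_subgrad = lambda z: np.sign(z)            # a subgradient of ||z||_1
q, v, eps = approx_triple(l1_subgrad, p_k=np.array([2.0, -3.0]), mu_k=1.0)
# By construction of the residual, the returned triple satisfies
# v - mu*log_q p + eps = 0, and the relative-error bound holds whenever
# the loop exits through the stopping test rather than the T_max cap.
print(q, v, eps)
```

The returned triple is exactly what Step 1 of Algorithm 2 consumes: an approximate proximal point, a Busemann subgradient at it, and the residual playing the role of 𝜀 𝑘.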
7Conclusions
We have proposed a novel variant of the proximal point algorithm for convex optimization on Hadamard manifolds, replacing classical Euclidean hyperplane projections with geometrically intrinsic projections onto horospheres defined via Busemann functions. By using a generalized notion of Busemann subdifferential and establishing convergence under inexact subgradient conditions, we demonstrated the robustness and effectiveness of the method for a broad class of convex functions on Hadamard manifolds. Importantly, the framework developed here is not tied to a smooth Riemannian structure. The key ingredients, Busemann functions, horospheres, and subdifferentials, depend only on the geodesic structure and on the convexity of the squared distance; consequently, the ideas and results extend naturally to metric Hadamard spaces, see [24]. In particular, the algorithm and its analysis should carry over to CAT(0) spaces. As the classical proximal point method was extended from Euclidean spaces to Hadamard manifolds [22] and then to general Hadamard spaces [3], the same path of generalization can be pursued for the Busemann hybrid projection proximal point algorithm introduced here. A further challenge is to develop a cyclic proximal point scheme driven by horosphere projections and to analyze its convergence and rates in light of proximal results for Hadamard spaces [4]. These observations suggest that the geometric tools and proximal methodology introduced here may serve as a foundation for broader developments in convex optimization on non-Euclidean and non-smooth spaces. Promising directions for future research include the derivation of explicit complexity bounds under weaker regularity assumptions, the extension to structured or constrained problems, and the exploration of duality theory and primal-dual algorithms in the horospherical setting.
A natural extension would be to adapt the present hybrid projection–proximal scheme to the computation of zeros of (possibly set-valued) monotone vector fields on Hadamard manifolds. This would require an intrinsic notion of Busemann monotonicity for fields, e.g., formulated via Busemann pairings and horospherical sublevel sets, and the development of the associated Busemann resolvent. We also view this as a promising direction for future work.
References
[1] M. Arnaudon and F. Nielsen (2013). On approximating the Riemannian 1-center. Comput. Geom. 46(1), pp. 93–104.
[2] M. G. Atigh, M. Keller-Ressel, and P. Mettes (2021). Hyperbolic Busemann learning with ideal prototypes. Adv. Neural Inf. Process. Syst. 34, pp. 103–115.
[3] M. Bačák (2013). The proximal point algorithm in metric spaces. Israel J. Math. 194(2), pp. 689–701.
[4] M. Bačák (2014). Computing medians and means in Hadamard spaces. SIAM J. Optim. 24(3), pp. 1542–1566.
[5] W. Ballmann, M. Gromov, and V. Schroeder (1985). Manifolds of Nonpositive Curvature. Progr. Math., Vol. 61, Birkhäuser.
[6] E. E. A. Batista, G. d. C. Bento, and O. P. Ferreira (2016). Enlargement of monotone vector fields and an inexact proximal point method for variational inequalities in Hadamard manifolds. J. Optim. Theory Appl. 170(3), pp. 916–931.
[7] R. Benedetti and C. Petronio (1992). Lectures on Hyperbolic Geometry. Universitext, Springer-Verlag, Berlin.
[8] G. C. Bento and J. X. Cruz Neto (2014). Finite termination of the proximal point method for convex functions on Hadamard manifolds. Optimization 63(9), pp. 1281–1288.
[9] G. C. Bento, O. P. Ferreira, and P. R. Oliveira (2015). Proximal point method for a special class of nonconvex functions on Hadamard manifolds. Optimization 64(2), pp. 289–319.
[10] G. C. Bento, J. X. da Cruz Neto, and P. R. Oliveira (2016). A new approach to the proximal point method: convergence on general Riemannian manifolds. J. Optim. Theory Appl. 168(3), pp. 743–755.
[11] G. d. C. Bento, J. Cruz Neto, and Í. D. L. Melo (2023). Fenchel conjugate via Busemann function on Hadamard manifolds. Appl. Math. Optim. 88(3), Paper No. 83.
[12] P. Blanchard, D. J. Higham, and N. J. Higham (2021). Accurately computing the log-sum-exp and softmax functions. IMA J. Numer. Anal. 41(4), pp. 2311–2330.
[13] N. Boumal (2023). An Introduction to Optimization on Smooth Manifolds. Cambridge University Press, Cambridge.
[14] M. R. Bridson and A. Haefliger (1999). Metric Spaces of Non-positive Curvature. Grundlehren der mathematischen Wissenschaften, Vol. 319, Springer-Verlag, Berlin.
[15] H. Busemann (1955). The Geometry of Geodesics. Pure Appl. Math., Vol. 6, Academic Press, New York.
[16] P. Chaipunya and P. Kumam (2017). On the proximal point method in Hadamard spaces. Optimization 66(10), pp. 1647–1665.
[17] P. Cholamjiak, A. A. N. Abdou, and Y. J. Cho (2015). Proximal point algorithms involving fixed points of nonexpansive mappings in CAT(0) spaces. Fixed Point Theory Appl. 2015:227.
[18] M. Collins, R. E. Schapire, and Y. Singer (2002). Logistic regression, AdaBoost and Bregman distances. Machine Learning 48(1), pp. 253–285.
[19] C. Criscitiello and J. Kim (2025). Horospherically convex optimization on Hadamard manifolds. Part I: analysis and algorithms. arXiv:2505.16970.
[20] A. Cuntavepanit and W. Phuengrattana (2018). On solving the minimization problem and the fixed-point problem for a finite family of non-expansive mappings in CAT(0) spaces. Optim. Methods Softw. 33(2), pp. 311–321.
[21] X. Fan, C.-H. Yang, and B. Vemuri (2023). Horospherical decision boundaries for large margin classification in hyperbolic space. Adv. Neural Inf. Process. Syst. 36, pp. 11194–11204.
[22] O. P. Ferreira and P. R. Oliveira (2002). Proximal point algorithm on Riemannian manifolds. Optimization 51(2), pp. 257–270.
[23] O. P. Ferreira, M. S. Louzeiro, and L. F. Prudente (2019). Iteration-complexity of the subgradient method on Riemannian manifolds with lower bounded curvature. Optimization 68(4), pp. 713–729.
[24] A. Goodwin, A. S. Lewis, G. López-Acedo, and A. Nicolae (2024). A subgradient splitting algorithm for optimization on nonpositively curved metric spaces. arXiv:2412.06730.
[25] H. Hirai (2023). Convex analysis on Hadamard spaces and scaling problems. Found. Comput. Math., pp. 1–38.
[26] T. Kajimura and Y. Kimura (2019). Resolvents of convex functions in complete geodesic metric spaces with negative curvature. J. Fixed Point Theory Appl. 21(1), Art. 32.
[27] M. Keller-Ressel (2020). A theory of hyperbolic prototype learning. arXiv:2010.07744.
[28] A. Kristály, C. Li, G. López-Acedo, and A. Nicolae (2016). What do ‘convexities’ imply on Hadamard manifolds? J. Optim. Theory Appl. 170(3), pp. 1068–1074.
[29] K. Lerkchaiyaphum and W. Phuengrattana (2018). Iterative approaches to solving convex minimization problems and fixed point problems in complete CAT(0) spaces. Numer. Algorithms 77(3), pp. 727–740.
[30] L. Leuştean, A. Nicolae, and A. Sipoş (2018). An abstract proximal point algorithm. J. Global Optim. 72(3), pp. 553–577.
[31] A. S. Lewis, G. López-Acedo, and A. Nicolae (2024). Horoballs and the subgradient method. arXiv:2403.15749.
[32] C. Li, G. López, and V. Martín-Márquez (2009). Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. (2) 79(3), pp. 663–683.
[33] C. Li, B. S. Mordukhovich, J. Wang, and J. Yao (2011). Weak sharp minima on Riemannian manifolds. SIAM J. Optim. 21(4), pp. 1523–1560.
[34] C. Li, B. S. Mordukhovich, J. Wang, and J. Yao (2011). Weak sharp minima on Riemannian manifolds. SIAM J. Optim. 21(4), pp. 1523–1560.
[35] C. Li and J. Yao (2012). Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4), pp. 2486–2514.
[36] B. Martinet (1970). Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Opér. 4, pp. 154–159.
[37] F. Nielsen and K. Sun (2016). Guaranteed bounds on information-theoretic measures of univariate mixtures using piecewise log-sum-exp inequalities. Entropy 18(12), Paper No. 442.
[38] N. Pakkaranang, P. Kumam, and Y. J. Cho (2018). Proximal point algorithms for solving convex minimization problem and common fixed points problem of asymptotically quasi-nonexpansive mappings in CAT(0) spaces with convergence analysis. Numer. Algorithms 78(3), pp. 827–845.
[39] W. Phuengrattana, N. Onjai-uea, and P. Cholamjiak (2018). Modified proximal point algorithms for solving constrained minimization and fixed point problems in complete CAT(0) spaces. Mediterr. J. Math. 15(3), Art. 97.
[40] J. G. Ratcliffe (2019). Foundations of Hyperbolic Manifolds. Third edition, Graduate Texts in Mathematics, Vol. 149, Springer, Cham.
[41] R. T. Rockafellar (1976). Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), pp. 877–898.
[42] T. Sakai (1996). Riemannian Geometry. Translations of Mathematical Monographs, Vol. 149, American Mathematical Society, Providence, RI.
[43] M. V. Solodov and B. F. Svaiter (1999). A hybrid projection-proximal point algorithm. J. Convex Anal. 6(1), pp. 59–70.
[44] P. N. Suganthan, L. Kong, V. Snášel, V. Ojha, and H. A. H. Z. Aly (2025). Euclidean and Poincaré space ensemble XGBoost. Information Fusion 115, 102746.
[45] C. Udrişte (1994). Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and its Applications, Vol. 297, Kluwer Academic Publishers, Dordrecht.
[46] G. C. Ugwunnadi, A. R. Khan, and M. Abbas (2018). A hybrid proximal point algorithm for finding minimizers and fixed points in CAT(0) spaces. J. Fixed Point Theory Appl. 20(2), Art. 82.
[47] J. H. Wang, G. López, V. Martín-Márquez, and C. Li (2010). Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146(3), pp. 691–708.
[48] J. Wang, C. Li, G. López, and J. Yao (2016). Proximal point algorithms on Hadamard manifolds: linear convergence and finite termination. SIAM J. Optim. 26(4), pp. 2696–2729.
[49] X. Wang, G. López, C. Li, and J. Yao (2019). Equilibrium problems on Riemannian manifolds with applications. J. Math. Anal. Appl. 473(2), pp. 866–891.