Title: Scalable Frank-Wolfe on Generalized Self-concordant Functions via Simple Steps
URL Source: https://arxiv.org/html/2105.13913
arXiv:2105.13913v8 [math.OC] 08 Apr 2024 (License: arXiv.org perpetual non-exclusive license)

Scalable Frank-Wolfe on Generalized Self-concordant Functions via Simple Steps

Alejandro Carderera (alejandro.carderera@gatech.edu), Department of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA
Mathieu Besançon (besancon@zib.de), Zuse Institute Berlin, Germany, and Univ. Grenoble Alpes, Inria, LIG, Grenoble, France
Sebastian Pokutta (pokutta@zib.de), Institute of Mathematics, Technische Universität Berlin and Zuse Institute Berlin, Germany

Abstract
Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy $\gamma_t = 2/(t+2)$, obtaining a $\mathcal{O}(1/t)$ convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where $t$ is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters of previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
1 Introduction
Constrained convex optimization is the cornerstone of many machine learning problems. We consider such problems, formulated as:
$$\min_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x}), \qquad (1.1)$$
where $f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a generalized self-concordant function and $\mathcal{X} \subseteq \mathbb{R}^n$ is a compact convex set. When computing projections onto the feasible region, as required by, e.g., projected gradient descent, is prohibitively expensive, Frank-Wolfe (FW) [Frank & Wolfe, 1956] algorithms (a.k.a. Conditional Gradients (CG) [Levitin & Polyak, 1966]) are the algorithms of choice, relying on a Linear Minimization Oracle (LMO) at each iteration to solve Problem (1.1). The analysis of their convergence often relies on the assumption that the gradient is Lipschitz-continuous. This assumption does not necessarily hold for generalized self-concordant functions, an important class of functions whose growth can be unbounded.
1.1 Related work
In the classical analysis of Newton's method, when the Hessian of $f$ is assumed to be Lipschitz continuous and the function is strongly convex, one arrives at a convergence rate for the algorithm that depends on the Euclidean structure of $\mathbb{R}^n$, despite the fact that the algorithm is affine-invariant. This motivated the introduction of self-concordant functions in Nesterov & Nemirovskii [1994], functions for which the third derivative is bounded by the second-order derivative, with which one can obtain an affine-invariant convergence rate for the aforementioned algorithm. More importantly, many of the barrier functions used in interior-point methods are self-concordant, which extends the use of polynomial-time interior-point methods to many settings of interest.
Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow to prove for many statistical estimation settings [Marteau-Ferey et al., 2019, Ostrovskii & Bach, 2021]. The original definition of self-concordance has been expanded and generalized since its inception, as many objective functions of interest have self-concordant-like properties without satisfying the strict definition of self-concordance. For example, the logistic loss function used in logistic regression is not strictly self-concordant, but it fits into a class of pseudo-self-concordant functions, which allows one to obtain similar properties and bounds as those obtained for self-concordant functions [Bach, 2010]. This was also the case in Ostrovskii & Bach [2021] and Tran-Dinh et al. [2015], in which more general properties of these pseudo-self-concordant functions were established. This was fully formalized in Sun & Tran-Dinh [2019], in which the concept of generalized self-concordant functions was introduced, along with key bounds, properties, and variants of Newton methods for the unconstrained setting which make use of this property.
Most algorithms that aim to solve Problem (1.1) assume access to second-order information, as this often allows the algorithms to make monotonic progress, remain inside the domain of $f$, and, often, converge quadratically when close enough to the optimum. Recently, several lines of work have focused on using Frank-Wolfe algorithm variants to solve these types of problems in the projection-free setting, for example constructing second-order approximations to a self-concordant $f$ using first- and second-order information, and minimizing these approximations over $\mathcal{X}$ using the Frank-Wolfe algorithm [Liu et al., 2020]. Other approaches, such as the ones presented in Dvurechensky et al. [2020] (later extended in Dvurechensky et al. [2022]), apply the Frank-Wolfe algorithm to a generalized self-concordant $f$, using first- and second-order information about the function to guarantee that the step sizes are such that the iterates do not leave the domain of $f$ and monotonic progress is made. An additional Frank-Wolfe variant in that work, in the spirit of Garber & Hazan [2016], utilizes first- and second-order information about $f$, along with a Local Linear Optimization Oracle for $\mathcal{X}$, to obtain a linear convergence rate in primal gap over polytopes given in inequality description. The authors in Dvurechensky et al. [2022] also present an additional Frank-Wolfe variant which does not use second-order information, and uses the backtracking line search of Pedregosa et al. [2020] to estimate local smoothness parameters at a given iterate. Other specialized Frank-Wolfe algorithms have been developed for specific problems involving generalized self-concordant functions, such as the Frank-Wolfe variant developed for marginal inference with concave maximization [Krishnan et al., 2015], the variant developed in Zhao & Freund [2023] for $\theta$-homogeneous barrier functions, or the application to phase retrieval in Odor et al. [2016], where the Frank-Wolfe algorithm is shown to converge on a self-concordant non-Lipschitz-smooth objective.
1.2 Contribution
The contributions of this paper are detailed below and summarized in Table 1.
Simple FW variant for generalized self-concordant functions
We show that a small variation of the original Frank-Wolfe algorithm [Frank & Wolfe, 1956] with an open-loop step size of the form $\gamma_t = 2/(t+2)$, where $t$ is the iteration count, is all that is needed to achieve a convergence rate of $\mathcal{O}(1/t)$ in primal gap; this also answers an open question posed in Dvurechensky et al. [2022]. Our variation ensures monotonic progress while employing an open-loop strategy which, together with the iterates being convex combinations, ensures that we do not leave the domain of $f$. In contrast to other methods that depend on either a line search or second-order information, our variant uses only a linear minimization oracle, zeroth-order and first-order information, and a domain oracle for $f(\mathbf{x})$. The assumption of the latter oracle is very mild and was also implicitly made in several of the algorithms presented in Dvurechensky et al. [2022]. As such, our iterations are much cheaper than those in previous work, while essentially achieving the same convergence rates for Problem (1.1).
Moreover, our variant relying on the open-loop step size $\gamma_t = 2/(t+2)$ allows us to establish a $\mathcal{O}(1/t)$ convergence rate for the Frank-Wolfe gap, is agnostic, i.e., does not need to estimate local smoothness parameters, and is parameter-free, leading to convergence rates and oracle complexities that are independent of any tuning parameters.
| Algorithm | Primal gap | FW gap | Reference | 1st-order? | LS free? | Requirements |
| --- | --- | --- | --- | --- | --- | --- |
| FW-GSC | $\mathcal{O}(1/\epsilon)$ | — | [Dvurechensky et al., 2022, Alg. 2] | ✗ | ✓ | SOO |
| LBTFW-GSC | $\mathcal{O}(1/\epsilon)$ | — | [Dvurechensky et al., 2022, Alg. 3] | ✓ | ✗ | ZOO, DO |
| MBTFW-GSC | $\mathcal{O}(1/\epsilon)$ | — | [Dvurechensky et al., 2022, Alg. 5] | ✗ | ✗ | ZOO, SOO, DO |
| FW-LLOO | $\mathcal{O}(\log 1/\epsilon)$ | — | [Dvurechensky et al., 2022, Alg. 7] | ✗ | ✓ | $P(\mathcal{X})$, LLOO, SOO |
| ASFW-GSC | $\mathcal{O}(\log 1/\epsilon)$ | — | [Dvurechensky et al., 2022, Alg. 8] | ✗ | ✓ | $P(\mathcal{X})$, SOO |
| M-FW | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | This work | ✓ | ✓ | ZOO, DO |
| B-{AFW/BPCG} | $\mathcal{O}(\log 1/\epsilon)$ | $\mathcal{O}(\log 1/\epsilon)$ | This work | ✓ | ✗ | $P(\mathcal{X})$, ZOO, DO |

Table 1: Number of iterations needed to achieve an $\epsilon$-optimal solution for Problem (1.1). We denote line search by LS, zeroth-order oracle by ZOO, second-order oracle by SOO, domain oracle by DO, local linear optimization oracle by LLOO, and the assumption that $\mathcal{X}$ is polyhedral by $P(\mathcal{X})$. The oracles listed under the Requirements column are the additional oracles required, other than the first-order oracle (FOO) and the linear minimization oracle (LMO) which all algorithms use.

Faster rates in common special cases
We also obtain improved convergence rates when the optimum is contained in the interior of $\mathcal{X} \cap \mathrm{dom}(f)$, or when the set $\mathcal{X}$ is uniformly or strongly convex, using the backtracking line search of Pedregosa et al. [2020]. We also show that the Away-step Frank-Wolfe [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and the Blended Pairwise Conditional Gradient [Tsuji et al., 2022] algorithms can use the aforementioned line search to achieve linear rates over polytopes. For clarity we want to stress that any linear rate over polytopes has to depend also on the ambient dimension of the polytope; this applies to our linear rates and those in Table 1 established elsewhere (see Diakonikolas et al. [2020]). In contrast, the $\mathcal{O}(1/\epsilon)$ rates are dimension-independent.
Numerical experiments
We provide numerical experiments that showcase the performance of the algorithms on generalized self-concordant objectives to complement the theoretical results. In particular, they highlight that the simple step size strategy we propose is competitive with and sometimes outperforms other variants on many instances.
After publication of our initial draft, in a revision of their original work, Dvurechensky et al. [2022] added an analysis of the Away-step Frank-Wolfe algorithm which is complementary to ours (considering a slightly different setup and regimes) and was conducted independently; we have updated the tables to include these additional results.
1.3 Preliminaries and Notation
We denote the domain of $f$ as $\mathrm{dom}(f) \stackrel{\mathrm{def}}{=} \{\mathbf{x} \in \mathbb{R}^n : f(\mathbf{x}) < +\infty\}$ and the (potentially non-unique) minimizer of Problem (1.1) by $\mathbf{x}^*$. Moreover, we denote the primal gap and the Frank-Wolfe gap at $\mathbf{x} \in \mathcal{X} \cap \mathrm{dom}(f)$ as $h(\mathbf{x}) \stackrel{\mathrm{def}}{=} f(\mathbf{x}) - f(\mathbf{x}^*)$ and $g(\mathbf{x}) \stackrel{\mathrm{def}}{=} \max_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{v} \rangle$, respectively. We use $\|\cdot\|$, $\|\cdot\|_{H}$, and $\langle \cdot, \cdot \rangle$ to denote the Euclidean norm, the matrix norm induced by a symmetric positive definite matrix $H \in \mathbb{R}^{n \times n}$, and the Euclidean inner product, respectively. We denote the diameter of $\mathcal{X}$ as $D \stackrel{\mathrm{def}}{=} \max_{\mathbf{x}, \mathbf{y} \in \mathcal{X}} \|\mathbf{x} - \mathbf{y}\|$. Given a non-empty set $\mathcal{X} \subseteq \mathbb{R}^n$ we refer to its boundary as $\mathrm{Bd}(\mathcal{X})$ and to its interior as $\mathrm{Int}(\mathcal{X})$. We use $\Delta_n$ to denote the probability simplex of dimension $n$. Given a compact convex set $\mathcal{K} \subseteq \mathrm{dom}(f)$ we denote:

$$L_f^{\mathcal{K}} \stackrel{\mathrm{def}}{=} \max_{\mathbf{u} \in \mathcal{K},\, \mathbf{d} \in \mathbb{R}^n} \frac{\|\mathbf{d}\|_{\nabla^2 f(\mathbf{u})}^2}{\|\mathbf{d}\|_2^2}, \qquad \mu_f^{\mathcal{K}} \stackrel{\mathrm{def}}{=} \min_{\mathbf{u} \in \mathcal{K},\, \mathbf{d} \in \mathbb{R}^n} \frac{\|\mathbf{d}\|_{\nabla^2 f(\mathbf{u})}^2}{\|\mathbf{d}\|_2^2}.$$
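For a fixed $\mathbf{u}$, the ratio in these definitions is a Rayleigh quotient of the Hessian, so $L_f^{\mathcal{K}}$ and $\mu_f^{\mathcal{K}}$ are the largest and smallest Hessian eigenvalues over $\mathcal{K}$. A minimal numerical sketch (not from the paper; the log-barrier instance, the set $\mathcal{K} = [1/2, 2]^2$, and the sampling scheme are illustrative assumptions):

```python
import numpy as np

def hessian_log_barrier(u):
    """Hessian of f(u) = -sum(log(u_i)) on the positive orthant: diag(1/u_i^2)."""
    return np.diag(1.0 / u**2)

def estimate_smoothness_bounds(hessian, samples):
    """Estimate L_f^K and mu_f^K over a sample of points from K.

    For fixed u, ||d||_{grad^2 f(u)}^2 / ||d||_2^2 is a Rayleigh quotient of the
    Hessian, so its max/min over d are the extreme eigenvalues of grad^2 f(u).
    """
    eigenvalues = [np.linalg.eigvalsh(hessian(u)) for u in samples]
    L = max(e[-1] for e in eigenvalues)   # eigvalsh returns ascending order
    mu = min(e[0] for e in eigenvalues)
    return L, mu

rng = np.random.default_rng(0)
samples = rng.uniform(0.5, 2.0, size=(200, 2))  # points drawn from K = [1/2, 2]^2
L, mu = estimate_smoothness_bounds(hessian_log_barrier, samples)
# For this Hessian the eigenvalues 1/u_i^2 lie in [1/4, 4] on K.
```

Sampling only lower-bounds $L_f^{\mathcal{K}}$ and upper-bounds $\mu_f^{\mathcal{K}}$; exact values require maximizing over all of $\mathcal{K}$.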
We assume access to:
1. Domain Oracle (DO): Given $\mathbf{x} \in \mathcal{X}$, return whether $\mathbf{x} \in \mathrm{dom}(f)$.
2. Zeroth-Order Oracle (ZOO): Given $\mathbf{x} \in \mathrm{dom}(f)$, return $f(\mathbf{x})$.
3. First-Order Oracle (FOO): Given $\mathbf{x} \in \mathrm{dom}(f)$, return $\nabla f(\mathbf{x})$.
4. Linear Minimization Oracle (LMO): Given $\mathbf{d} \in \mathbb{R}^n$, return $\mathrm{argmin}_{\mathbf{x} \in \mathcal{X}} \langle \mathbf{x}, \mathbf{d} \rangle$.
The FOO and LMO oracles are standard in the FW literature. The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity. Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability of the DO oracle: following the definition of $\mathrm{dom}(f)$, one could simply evaluate $f$ at $\mathbf{x}$ and assert $f(\mathbf{x}) < +\infty$, thereby combining the DO and ZOO oracles into one oracle. However, in many cases testing the membership $\mathbf{x} \in \mathrm{dom}(f)$ is computationally less demanding than the function evaluation.
Remark 1.1.
Requiring access to a zeroth-order and a domain oracle is a mild assumption that was also implicitly made in one of the three FW variants presented in Dvurechensky et al. [2022] when computing the step size according to the strategy from Pedregosa et al. [2020]; see Line 5 in Algorithm 4. The remaining two variants ensure that $\mathbf{x} \in \mathrm{dom}(f)$ by using second-order information about $f$, which we explicitly do not rely on.
The following example motivates the use of Frank-Wolfe algorithms in the context of generalized self-concordant functions. We present more examples in the computational results.
Example 1.2 (Intersection of a convex set with a polytope).
Consider Problem (1.1) where $\mathcal{X} = \mathcal{P} \cap \mathcal{C}$, $\mathcal{P}$ is a polytope over which we can minimize a linear function efficiently, and $\mathcal{C}$ is a convex compact set for which one can easily build a barrier function.
Figure 1: Minimizing $f(\mathbf{x})$ over $\mathcal{P} \cap \mathcal{C}$, versus minimizing the sum of $f(\mathbf{x})$ and $\Phi_{\mathcal{C}}(\mathbf{x})$ over $\mathcal{P}$ for two different penalty values $\mu'$ and $\mu$ such that $\mu' \gg \mu$.
Solving a linear optimization problem over $\mathcal{X}$ may be extremely expensive. In light of this, we can incorporate $\mathcal{C}$ into the problem through the use of a barrier penalty in the objective function, minimizing instead $f(\mathbf{x}) + \mu \Phi_{\mathcal{C}}(\mathbf{x})$, where $\Phi_{\mathcal{C}}(\mathbf{x})$ is a log-barrier function for $\mathcal{C}$ and $\mu$ is a parameter controlling the penalization. This reformulation is illustrated in Figure 1. Note that if the original objective function is generalized self-concordant, so is the new objective function (see Proposition 1 in Sun & Tran-Dinh [2019]). We assume that computing the gradient of $f(\mathbf{x}) + \mu \Phi_{\mathcal{C}}(\mathbf{x})$ is roughly as expensive as computing the gradient of $f(\mathbf{x})$, and that solving an LP over $\mathcal{P}$ is inexpensive relative to solving an LP over $\mathcal{P} \cap \mathcal{C}$. The $\mu$ parameter can be driven down to $0$ after a solution converges, in a warm-starting procedure similar to interior-point methods, ensuring convergence to the true optimum.

An additional advantage of this transformation of the problem is the solution structure. Running Frank-Wolfe on the set $\mathcal{P} \cap \mathcal{C}$ can select a large number of extremal points from $\mathrm{Bd}(\mathcal{C})$ if $\mathcal{C}$ is non-polyhedral. In contrast, $\mathcal{P}$ has a finite number of vertices, a small subset of which will be selected throughout the optimization procedure. The same solution as that of the original problem can thus be constructed as a convex combination of a small number of vertices of $\mathcal{P}$, improving sparsity and interpretability in many applications.
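To make the reformulation concrete, here is a minimal numerical sketch (the quadratic objective and the unit-ball set $\mathcal{C}$ are illustrative assumptions; the polytope $\mathcal{P}$ and its LMO are omitted). It builds the penalized objective $f(\mathbf{x}) + \mu \Phi_{\mathcal{C}}(\mathbf{x})$ and its gradient, and shows that driving $\mu$ toward $0$ recovers $f$ on $\mathrm{Int}(\mathcal{C})$:

```python
import numpy as np

def f(x):
    """Illustrative smooth objective (a stand-in for the problem's f)."""
    return 0.5 * np.dot(x, x) - x[0]

def grad_f(x):
    e0 = np.zeros_like(x)
    e0[0] = 1.0
    return x - e0

def log_barrier(x):
    """Log-barrier for C = {x : ||x||_2 <= 1}; +inf outside its domain."""
    slack = 1.0 - np.dot(x, x)
    return -np.log(slack) if slack > 0 else np.inf

def grad_log_barrier(x):
    return 2.0 * x / (1.0 - np.dot(x, x))

def penalized(x, mu):
    """Objective of the reformulated problem: min over P of f + mu * Phi_C."""
    return f(x) + mu * log_barrier(x)

def grad_penalized(x, mu):
    return grad_f(x) + mu * grad_log_barrier(x)

x = np.array([0.5, 0.0])                                # a point in Int(C)
vals = [penalized(x, mu) for mu in (1.0, 0.1, 0.01)]    # mu driven toward 0
```

As $\mu$ decreases, `penalized(x, mu)` approaches `f(x)` at any interior point of $\mathcal{C}$, mirroring the warm-starting procedure described above.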
The following definition formalizes the setting of Problem (1.1).
Definition 1.3 (Generalized self-concordant function).
Let $f \in C^3(\mathrm{dom}(f))$ be a closed convex function with $\mathrm{dom}(f) \subseteq \mathbb{R}^n$ open. Then $f$ is $(M, \nu)$-generalized self-concordant if:

$$\left|\langle \mathrm{D}^3 f(\mathbf{x})[\mathbf{w}]\, \mathbf{u}, \mathbf{u} \rangle\right| \le M \|\mathbf{u}\|_{\nabla^2 f(\mathbf{x})}^2 \|\mathbf{w}\|_{\nabla^2 f(\mathbf{x})}^{\nu - 2} \|\mathbf{w}\|_2^{3 - \nu},$$

for any $\mathbf{x} \in \mathrm{dom}(f)$ and $\mathbf{u}, \mathbf{w} \in \mathbb{R}^n$, where

$$\mathrm{D}^3 f(\mathbf{x})[\mathbf{w}] \stackrel{\mathrm{def}}{=} \lim_{\alpha \to 0} \alpha^{-1}\left(\nabla^2 f(\mathbf{x} + \alpha \mathbf{w}) - \nabla^2 f(\mathbf{x})\right).$$
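As a concrete sanity check of this definition (our example, not from the paper), consider the univariate function $f(x) = -\ln x$ with $\mathrm{dom}(f) = (0, \infty)$, the classical self-concordant case $(M, \nu) = (2, 3)$. Here $\nabla^2 f(x) = f''(x)$ and $\mathrm{D}^3 f(x)[w] = f'''(x)\, w$ reduce to scalars:

```latex
f''(x) = \frac{1}{x^2}, \qquad f'''(x) = -\frac{2}{x^3}
\;\Longrightarrow\;
\bigl|\langle \mathrm{D}^3 f(x)[w]\, u, u \rangle\bigr| = |f'''(x)|\,|w|\,u^2 = \frac{2\,|w|\,u^2}{x^3},
\qquad
M\,\|u\|_{\nabla^2 f(x)}^2\,\|w\|_{\nabla^2 f(x)}^{\nu-2}\,\|w\|_2^{3-\nu}
= M \cdot \frac{u^2}{x^2} \cdot \frac{|w|}{x}
= \frac{M\,u^2\,|w|}{x^3},
```

so for $\nu = 3$ the defining inequality holds with equality exactly when $M = 2$, recovering standard self-concordance.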
2 Frank-Wolfe Convergence Guarantees
We establish convergence rates for a Frank-Wolfe variant with an open-loop step size strategy for generalized self-concordant functions. The Monotonic Frank-Wolfe (M-FW) algorithm presented in Algorithm 1 is a rather simple, but powerful modification of the standard Frank-Wolfe algorithm, with the only difference that before taking a step, we verify whether $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$, and if so, we check whether moving to the next iterate provides primal progress.
Algorithm 1 Monotonic Frank-Wolfe (M-FW)
1: Point $\mathbf{x}_0 \in \mathcal{X} \cap \mathrm{dom}(f)$, function $f$
2: for $t = 0$ to … do
3:   $\mathbf{v}_t \leftarrow \mathrm{argmin}_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$,  $\gamma_t \leftarrow 2/(t+2)$
4:   $\mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)$
5:   if $\mathbf{x}_{t+1} \notin \mathrm{dom}(f)$ or $f(\mathbf{x}_{t+1}) \ge f(\mathbf{x}_t)$ then
6:     $\mathbf{x}_{t+1} \leftarrow \mathbf{x}_t$
7:   end if
8: end for
Note that the open-loop step size rule $\gamma_t = 2/(t+2)$ does not guarantee monotonic primal progress for the vanilla Frank-Wolfe algorithm in general. If either of these two checks fails, we simply do not move: the algorithm sets $\mathbf{x}_{t+1} = \mathbf{x}_t$ in Line 6 of Algorithm 1. As customary, we assume short-circuit evaluation of the logical conditions in Algorithm 1, i.e., if the first condition in Line 5 is true, then the second condition is not even checked, and the algorithm directly goes to Line 6. This minor modification of the vanilla Frank-Wolfe algorithm enables us to use the monotonicity of the iterates in the proofs to come, at the expense of at most one extra function evaluation per iteration. Note that if we set $\mathbf{x}_{t+1} = \mathbf{x}_t$, we do not need to call the FOO or LMO oracle at iteration $t+1$, as we can simply reuse $\nabla f(\mathbf{x}_t)$ and $\mathbf{v}_t$. This effectively means that between successive iterations in which we search for an acceptable value of $\gamma_t$, we only need to call the zeroth-order and domain oracles.
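To make the mechanics of Algorithm 1 concrete, the following is a minimal sketch on a toy instance; the instance ($f(\mathbf{x}) = -\sum_i \ln x_i$ over the probability simplex, whose minimizer is the uniform point) and all numerical choices are illustrative assumptions, not taken from the paper. Note how the very first candidate step ($\gamma_0 = 1$) lands on a vertex outside $\mathrm{dom}(f)$ and is rejected by the check in Line 5:

```python
import numpy as np

def f(x):
    """f(x) = -sum(log(x_i)): generalized self-concordant, dom(f) = positive orthant."""
    return -np.sum(np.log(x)) if np.all(x > 0) else np.inf

def grad_f(x):
    return -1.0 / x

def lmo_simplex(d):
    """LMO over the probability simplex: the vertex e_i with the smallest d_i."""
    v = np.zeros_like(d)
    v[np.argmin(d)] = 1.0
    return v

def monotonic_fw(x0, num_steps):
    """Algorithm 1 (M-FW): vanilla FW with gamma_t = 2/(t+2), rejecting any step
    that leaves dom(f) or fails to strictly decrease f (Lines 5-6)."""
    x, fx = x0.copy(), f(x0)
    history = [fx]
    for t in range(num_steps):
        v = lmo_simplex(grad_f(x))            # FOO + LMO call
        gamma = 2.0 / (t + 2.0)
        candidate = x + gamma * (v - x)
        f_candidate = f(candidate)            # DO and ZOO combined in one evaluation
        if np.isfinite(f_candidate) and f_candidate < fx:
            x, fx = candidate, f_candidate    # accept the step
        history.append(fx)                    # else x_{t+1} = x_t (Line 6)
    return x, history

x0 = np.array([0.7, 0.2, 0.1])
x_final, hist = monotonic_fw(x0, 5000)        # minimizer: (1/3, 1/3, 1/3)
```

At $t = 0$ the candidate is a simplex vertex with zero coordinates, so $f$ evaluates to $+\infty$ and the step is rejected; all accepted iterates remain strictly positive convex combinations, and the recorded objective values are nonincreasing by construction.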
In order to establish the main convergence results for the algorithm, we lower bound the progress per iteration with the help of Proposition 2.1.
Proposition 2.1 (Proposition 10, Sun & Tran-Dinh [2019]).
Given an $(M, \nu)$-generalized self-concordant function, then for $\nu \ge 2$ we have that:

$$f(\mathbf{y}) - f(\mathbf{x}) - \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \le \omega_\nu\left(d_\nu(\mathbf{x}, \mathbf{y})\right) \|\mathbf{y} - \mathbf{x}\|_{\nabla^2 f(\mathbf{x})}^2, \qquad (2.1)$$

where the inequality holds if and only if $d_\nu(\mathbf{x}, \mathbf{y}) < 1$ for $\nu > 2$, and we have that:

$$d_\nu(\mathbf{x}, \mathbf{y}) \stackrel{\mathrm{def}}{=} \begin{cases} M \|\mathbf{y} - \mathbf{x}\|_2 & \text{if } \nu = 2, \\[4pt] \left(\frac{\nu}{2} - 1\right) M \|\mathbf{y} - \mathbf{x}\|_2^{3 - \nu} \|\mathbf{y} - \mathbf{x}\|_{\nabla^2 f(\mathbf{x})}^{\nu - 2} & \text{if } \nu > 2, \end{cases} \qquad (2.2)$$

where:

$$\omega_\nu(\tau) \stackrel{\mathrm{def}}{=} \begin{cases} \dfrac{e^\tau - \tau - 1}{\tau^2} & \text{if } \nu = 2, \\[6pt] \dfrac{-\tau - \ln(1 - \tau)}{\tau^2} & \text{if } \nu = 3, \\[6pt] \dfrac{(1 - \tau)\ln(1 - \tau) + \tau}{\tau^2} & \text{if } \nu = 4, \\[6pt] \left(\dfrac{\nu - 2}{4 - \nu}\right) \dfrac{1}{\tau} \left[\dfrac{\nu - 2}{2(3 - \nu)\tau} \left((1 - \tau)^{\frac{2(3 - \nu)}{2 - \nu}} - 1\right) - 1\right] & \text{otherwise.} \end{cases}$$
The inequality shown in Eq. (2.1) is very similar to the one we would obtain if the gradient of $f$ were Lipschitz continuous. However, while Lipschitz continuity of the gradient leads to an inequality that holds globally for all $\mathbf{x}, \mathbf{y} \in \mathrm{dom}(f)$, the inequality in (2.1) only holds for $d_\nu(\mathbf{x}, \mathbf{y}) < 1$. Moreover, there are two other important differences: the norm used in (2.1) is the norm defined by the Hessian at $\mathbf{x}$ instead of the $\ell_2$ norm, and the term multiplying the norm is $\omega_\nu(d_\nu(\mathbf{x}, \mathbf{y}))$ instead of $1/2$. We deal with the latter issue by bounding $\omega_\nu(d_\nu(\mathbf{x}, \mathbf{y}))$ by a constant that depends on $\nu$ for any $\mathbf{x}, \mathbf{y} \in \mathrm{dom}(f)$ such that $d_\nu(\mathbf{x}, \mathbf{y}) \le 1/2$, as shown in Remark 2.2.
Remark 2.2.
As $\mathrm{d}\omega_\nu(\tau)/\mathrm{d}\tau > 0$ for $0 < \tau < 1$ and $\nu \ge 2$, we have that $\omega_\nu(\tau) \le \omega_\nu(1/2)$ for $\tau \le 1/2$.
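A small numerical check of Remark 2.2 (a sketch; the grid and tolerances are arbitrary choices of ours): implementing $\omega_\nu$ as transcribed from Proposition 2.1 and verifying that it is nondecreasing on $(0, 1/2]$, hence bounded by $\omega_\nu(1/2)$ there, for several values of $\nu$:

```python
import math

def omega(nu, tau):
    """omega_nu(tau) from Proposition 2.1, valid for 0 < tau < 1."""
    if nu == 2:
        return (math.exp(tau) - tau - 1.0) / tau**2
    if nu == 3:
        return (-tau - math.log(1.0 - tau)) / tau**2
    if nu == 4:
        return ((1.0 - tau) * math.log(1.0 - tau) + tau) / tau**2
    # generic case: nu not in {2, 3, 4}
    lead = (nu - 2.0) / (4.0 - nu)
    expo = 2.0 * (3.0 - nu) / (2.0 - nu)
    bracket = (nu - 2.0) / (2.0 * (3.0 - nu) * tau) * ((1.0 - tau) ** expo - 1.0) - 1.0
    return lead * bracket / tau

# All branches approach the smooth-case constant 1/2 as tau -> 0, and
# increase with tau; evaluate on a grid in (0, 1/2].
grid = [0.05 * k for k in range(1, 11)]
curves = {nu: [omega(nu, t) for t in grid] for nu in (2, 2.5, 3, 3.5, 4)}
```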
Due to the fact that we use a simple step size $\gamma_t = 2/(t+2)$, that we make monotonic progress, and that we ensure the iterates remain inside $\mathrm{dom}(f)$, careful accounting allows us to bound the number of iterations until $d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) \le 1/2$. Before formalizing the convergence rate, we first review a lemma needed in the proof.
Lemma 2.3 (Proposition 7, Sun & Tran-Dinh [2019]).
Let $f$ be a generalized self-concordant function with $\nu > 2$. If $d_\nu(\mathbf{x}, \mathbf{y}) < 1$ and $\mathbf{x} \in \mathrm{dom}(f)$, then $\mathbf{y} \in \mathrm{dom}(f)$. For the case $\nu = 2$ we have that $\mathrm{dom}(f) = \mathbb{R}^n$.
Putting all these things together allows us to obtain a convergence rate for Algorithm 1.
Theorem 2.4.
Suppose $\mathcal{X}$ is a compact convex set and $f$ is an $(M, \nu)$-generalized self-concordant function with $\nu \ge 2$, and define the compact set

$$\mathcal{L}_0 \stackrel{\mathrm{def}}{=} \{\mathbf{x} \in \mathrm{dom}(f) \cap \mathcal{X} \mid f(\mathbf{x}) \le f(\mathbf{x}_0)\}.$$

Then the Monotonic Frank-Wolfe algorithm (Algorithm 1) satisfies:

$$h(\mathbf{x}_t) \le \frac{4(T_\nu + 1)}{t + 1} \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\} \qquad (2.3)$$

for $t \ge T_\nu$, where $T_\nu$ is defined as:

$$T_\nu \stackrel{\mathrm{def}}{=} \begin{cases} \left\lceil 4 M D \right\rceil - 2 & \text{if } \nu = 2, \\[4pt] \left\lceil 2 M D \left(L_f^{\mathcal{L}_0}\right)^{\nu/2 - 1} (\nu - 2) \right\rceil - 2 & \text{otherwise.} \end{cases} \qquad (2.4)$$

Otherwise it holds that $h(\mathbf{x}_t) \le h(\mathbf{x}_0)$ for $t < T_\nu$.
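A small helper computing the threshold $T_\nu$ as written in Eq. (2.4); the numerical inputs below (and the clamping at zero, for when the expression is negative) are illustrative assumptions of ours:

```python
import math

def iteration_threshold(M, nu, D, L_upper):
    """T_nu from Eq. (2.4): for t >= T_nu, the step gamma_t = 2/(t+2) keeps
    d_nu(x_t, x_t + gamma_t (v_t - x_t)) <= 1/2.  L_upper stands for an upper
    bound on L_f over the initial level set L_0.  Clamped at 0 for convenience."""
    if nu == 2:
        return max(math.ceil(4.0 * M * D) - 2, 0)
    return max(math.ceil(2.0 * M * D * L_upper ** (nu / 2.0 - 1.0) * (nu - 2.0)) - 2, 0)

# Example: a (2, 3)-generalized self-concordant f with diameter sqrt(2)
# and level-set smoothness bound 9 (made-up numbers for illustration).
T = iteration_threshold(M=2.0, nu=3.0, D=math.sqrt(2.0), L_upper=9.0)
```

Note that $T_\nu$ grows with $M$, $D$, and (for $\nu > 2$) with the level-set smoothness bound, but is independent of the target accuracy $\epsilon$.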
Proof.
As the algorithm makes monotonic progress and the iterates satisfy $\mathbf{x}_t \in \mathcal{X}$, we have $\mathbf{x}_t \in \mathcal{L}_0$ for all $t \ge 0$. As the smoothness parameter of $f$ is bounded over $\mathcal{L}_0$, we have from the properties of smooth functions that the bound $\|\mathbf{d}\|_{\nabla^2 f(\mathbf{x}_t)}^2 / \|\mathbf{d}\|^2 \le L_f^{\mathcal{L}_0}$ holds for any $\mathbf{d} \in \mathbb{R}^n$. Particularizing to $\mathbf{d} = \mathbf{x}_t - \mathbf{v}_t$ and noting that $\|\mathbf{x}_t - \mathbf{v}_t\| \le D$ leads to $\|\mathbf{x}_t - \mathbf{v}_t\|_{\nabla^2 f(\mathbf{x}_t)}^2 \le L_f^{\mathcal{L}_0} D^2$. We then define $T_\nu$ as in (2.4). Using the definition in (2.2), we have that for $t \ge T_\nu$, $d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) \le 1/2$. This fact, along with the fact that $\mathbf{x}_t \in \mathrm{dom}(f)$ (by monotonicity), allows us to claim that $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$, by application of Lemma 2.3. This means that the non-zero step size $\gamma_t$ will ensure that $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$ in Line 5 of Algorithm 1. Moreover, it allows us to use the bound between the function values at the points $\mathbf{x}_t$ and $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)$ in (2.1) of Proposition 2.1, which holds for $d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) < 1$. With this we can estimate the primal progress we can guarantee for $t \ge T_\nu$ if we move from $\mathbf{x}_t$ to $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)$:

$$\begin{aligned} h(\mathbf{x}_{t+1}) &= h(\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) \\ &\le h(\mathbf{x}_t) - \gamma_t g(\mathbf{x}_t) + \gamma_t^2\, \omega_\nu\!\left(d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t))\right) \|\mathbf{v}_t - \mathbf{x}_t\|_{\nabla^2 f(\mathbf{x}_t)}^2 \\ &\le h(\mathbf{x}_t)(1 - \gamma_t) + \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2), \end{aligned}$$

where the second inequality follows from the upper bound on the primal gap via the Frank-Wolfe gap $g(\mathbf{x}_t)$, the application of Remark 2.2 (as for $t \ge T_\nu$ we have $d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) \le 1/2$), and the fact that $\mathbf{x}_t \in \mathcal{L}_0$ for all $t \ge 0$. With the previous chain of inequalities we can bound the primal progress for $t \ge T_\nu$ as:

$$h(\mathbf{x}_t) - h(\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) \ge \gamma_t h(\mathbf{x}_t) - \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2). \qquad (2.5)$$

From these facts we can prove the convergence rate shown in (2.3) by induction. The base case $t = T_\nu$ holds trivially since, by monotonicity, $h(\mathbf{x}_{T_\nu}) \le h(\mathbf{x}_0)$. Assuming the claim is true for some $t \ge T_\nu$, we distinguish two cases.

Case $\gamma_t h(\mathbf{x}_t) - \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) > 0$: Plugging this inequality into (2.5) shows that the step size $\gamma_t$ guarantees primal progress, that is, $h(\mathbf{x}_t) > h(\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t))$, and so we know that we will not go into Line 6 of Algorithm 1, and we have that $h(\mathbf{x}_{t+1}) = h(\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t))$. Thus, using the induction hypothesis and plugging the expression $\gamma_t = 2/(t+2)$ into (2.5), we have:

$$\begin{aligned} h(\mathbf{x}_{t+1}) &\le 4 \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\} \left(\frac{(T_\nu + 1)\, t}{(t+1)(t+2)} + \frac{1}{(t+2)^2}\right) \\ &\le \frac{4(T_\nu + 1)}{t + 2} \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\}, \end{aligned}$$

where we use that $(T_\nu + 1)\, t/(t+1) + 1/(t+2) \le T_\nu + 1$ for all $t \ge 0$ and any $T_\nu \ge 0$.

Case $\gamma_t h(\mathbf{x}_t) - \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \le 0$: In this case, we cannot guarantee from (2.5) that the step size $\gamma_t$ provides primal progress, and so we cannot tell whether the step will be accepted, giving $\mathbf{x}_{t+1} = \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)$, or whether we will simply have $\mathbf{x}_{t+1} = \mathbf{x}_t$, that is, we may go into Line 6 of Algorithm 1. Nevertheless, reorganizing the expression $\gamma_t h(\mathbf{x}_t) - \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \le 0$ and using monotonicity, we have:

$$h(\mathbf{x}_{t+1}) \le h(\mathbf{x}_t) \le \frac{2}{t+2} L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \le \frac{4(T_\nu + 1)}{t + 2} \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\},$$

where the last inequality holds as $2 \le 4(T_\nu + 1)$ for any $T_\nu \ge 0$. ∎
One of the quantities used in the proof of Theorem 2.4 is $L_f^{\mathcal{L}_0}$; note that the function $f$ is $L_f^{\mathcal{L}_0}$-smooth over $\mathcal{L}_0$. One could wonder why we have bothered to use the bound on the Bregman divergence in Proposition 2.1 for an $(M, \nu)$-generalized self-concordant function, instead of simply using the bounds from the $L_f^{\mathcal{L}_0}$-smoothness of $f$ over $\mathcal{L}_0$. The reason is that the upper bound on the Bregman divergence in Proposition 2.1 applies for any $\mathbf{x}, \mathbf{y} \in \mathrm{dom}(f)$ such that $d_\nu(\mathbf{x}, \mathbf{y}) < 1$, and we can easily bound the number of iterations $T_\nu$ it takes for the step size $\gamma_t = 2/(t+2)$ to verify both $\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$ and $d_\nu(\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) < 1$ for $t \ge T_\nu$. However, in order to apply the bound on the Bregman divergence from $L_f^{\mathcal{L}_0}$-smoothness we would need $\mathbf{x}_t, \mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathcal{L}_0$, and while it is easy to show by monotonicity that $\mathbf{x}_t \in \mathcal{L}_0$, there is no straightforward way to prove that for some $\tilde{T}_\nu$ we have $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathcal{L}_0$ for all $t \ge \tilde{T}_\nu$, i.e., that from some point onward a step with a non-zero step size is taken (that is, we do not go into Line 6 of Algorithm 1) that guarantees primal progress.
Remark 2.5.
In the case where $\nu = 2$ we can easily bound the primal gap $h(\mathbf{x}_1)$, as in this setting $\mathrm{dom}(f) = \mathbb{R}^n$, which leads to $h(\mathbf{x}_1) \le L_f^{\mathcal{X}} D^2$ from (2.5), regardless of whether we set $\mathbf{x}_1 = \mathbf{x}_0$ or $\mathbf{x}_1 = \mathbf{v}_0$. Moreover, as the upper bound on the Bregman divergence holds for $\nu = 2$ regardless of the value of $d_2(\mathbf{x}, \mathbf{y})$, we can modify the proof of Theorem 2.4 to obtain a convergence rate of the form:

$$h(\mathbf{x}_t) \le \frac{2}{t + 1} L_f^{\mathcal{X}} D^2 \omega_2(M D) \quad \forall t \ge 1,$$

which is reminiscent of the $\mathcal{O}(L_f^{\mathcal{X}} D^2 / t)$ rate of the original Frank-Wolfe algorithm for the smooth and convex case.
Furthermore, with this simple step size we can also prove a convergence rate for the Frank-Wolfe gap, as shown in Theorem 2.6. More specifically, the minimum of the Frank-Wolfe gap over the run of the algorithm converges at a rate of ๐ช โข ( 1 / ๐ก ) . The idea of the proof is very similar to the one in Jaggi [2013]. In a nutshell, as the primal progress per iteration is directly related to the step size times the Frank-Wolfe gap, we know that the Frank-Wolfe gap cannot remain indefinitely above a given value, as otherwise we would obtain a large amount of primal progress, which would make the primal gap become negative. This is formalized in Theorem 2.6.
Theorem 2.6.
Suppose $\mathcal{X}$ is a compact convex set and $f$ is an $(M, \nu)$-generalized self-concordant function with $\nu \ge 2$. Then if the Monotonic Frank-Wolfe algorithm (Algorithm 1) is run for $T \ge T_\nu + 6$ iterations, we will have that:

$$\min_{1 \le t \le T} g(\mathbf{x}_t) \le \mathcal{O}(1/T),$$

where $T_\nu$ is defined as:
$$T_\nu \stackrel{\mathrm{def}}{=} \begin{cases} \left\lceil 4 M D \right\rceil - 2 & \text{if } \nu = 2, \\[4pt] \left\lceil 2 M D \left(L_f^{\mathcal{L}_0}\right)^{\nu/2 - 1} (\nu - 2) \right\rceil - 2 & \text{otherwise.} \end{cases} \qquad (2.6)$$

Proof.
In order to prove the claim, we focus on the iterations $t$ such that:

$$T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le t \le T - 2, \qquad (2.7)$$

where $T_\nu$ is defined in (2.6). Note that as we assume that $T \ge T_\nu + 6$, we know that $T_\nu \le T_\nu + \lceil (T - T_\nu)/3 \rceil - 2$, and so for iterations $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le t \le T - 2$ we know that $d_\nu(\mathbf{x}_t, \mathbf{x}_{t+1}) \le 1/2$, and so:

$$h(\mathbf{x}_{t+1}) \le h(\mathbf{x}_t) - \gamma_t g(\mathbf{x}_t) + \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2). \qquad (2.8)$$
In a very similar fashion as in the proof of Theorem 2.4, we divide the proof into two cases.

Case $-\gamma_t g(\mathbf{x}_t) + \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \ge 0$ for some $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le t \le T - 2$: Reordering the inequality above, we therefore know that there exists a $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le K \le T - 2$ such that:

$$g(\mathbf{x}_K) \le \frac{2}{2 + K} L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \le \frac{2}{T_\nu + \lceil (T - T_\nu)/3 \rceil} L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \le \frac{6}{2 T_\nu + T} L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2),$$

where the second inequality follows from the fact that $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le K$. This leads to $\min_{1 \le t \le T} g(\mathbf{x}_t) \le g(\mathbf{x}_K) \le \frac{6}{2 T_\nu + T} L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)$.

Case $-\gamma_t g(\mathbf{x}_t) + \gamma_t^2 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) < 0$ for all $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le t \le T - 2$: Using the inequality above and plugging into (2.8) allows us to conclude that all steps with $T_\nu + \lceil (T - T_\nu)/3 \rceil - 2 \le t \le T - 2$ will produce primal progress using the step size $\gamma_t$, and so, as we know that $\mathbf{x}_{t+1} \in \mathrm{dom}(f)$ by Lemma 2.3, for all these $t$ we will take a non-zero step size determined by $\gamma_t$, as $\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$ and $f(\mathbf{x}_t + \gamma_t(\mathbf{v}_t - \mathbf{x}_t)) < f(\mathbf{x}_t)$ in Line 5 of Algorithm 1. Consequently, summing up (2.8) from $t_{\min} \stackrel{\mathrm{def}}{=} T_\nu + \lceil (T - T_\nu)/3 \rceil - 2$ to $t_{\max} \stackrel{\mathrm{def}}{=} T - 2$ we have that:

$$\begin{aligned} h(\mathbf{x}_{t_{\max}+1}) &\le h(\mathbf{x}_{t_{\min}}) - \sum_{t = t_{\min}}^{t_{\max}} \gamma_t g(\mathbf{x}_t) + L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \sum_{t = t_{\min}}^{t_{\max}} \gamma_t^2 & (2.9) \\ &\le h(\mathbf{x}_{t_{\min}}) - 2 \min_{t_{\min} \le t \le t_{\max}} g(\mathbf{x}_t) \sum_{t = t_{\min}}^{t_{\max}} \frac{1}{2 + t} + 4 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2) \sum_{t = t_{\min}}^{t_{\max}} \frac{1}{(2 + t)^2} & (2.10) \\ &\le h(\mathbf{x}_{t_{\min}}) - 2 \min_{1 \le t \le T} g(\mathbf{x}_t)\, \frac{t_{\max} - t_{\min} + 1}{2 + t_{\max}} + 4 L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\, \frac{t_{\max} - t_{\min} + 1}{(2 + t_{\min})^2} & (2.11) \\ &\le 4 \left(\frac{T_\nu + 1}{t_{\min} + 1} + \frac{t_{\max} - t_{\min} + 1}{(2 + t_{\min})^2}\right) \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\} & (2.12) \\ &\quad - 2 \min_{1 \le t \le T} g(\mathbf{x}_t)\, \frac{t_{\max} - t_{\min} + 1}{2 + t_{\max}}. & (2.13) \end{aligned}$$
Note that (2.10) stems from the fact that $\min_{t_{\min} \le t \le t_{\max}} g(\mathbf{x}_t) \le g(\mathbf{x}_t)$ for any $t_{\min} \le t \le t_{\max}$, and from plugging in $\gamma_t = 2/(2 + t)$; (2.11) follows from the fact that $-1/(2 + t) \le -1/(2 + t_{\max})$ and $1/(2 + t) \le 1/(2 + t_{\min})$ for all $t_{\min} \le t \le t_{\max}$. The last inequality, shown in (2.12) and (2.13), arises from plugging in the upper bound on the primal gap $h(\mathbf{x}_{t_{\min}})$ from Theorem 2.4 and collecting terms. Plugging in the specific values of $t_{\max}$ and $t_{\min}$ leads to:
$$\begin{aligned} h(\mathbf{x}_{T-1}) &\le 12 \left(\frac{T_\nu + 1}{2 T_\nu + T - 3} + \frac{2 T - 2 T_\nu + 3}{(2 T_\nu + T)^2}\right) \max\left\{h(\mathbf{x}_0),\, L_f^{\mathcal{L}_0} D^2 \omega_\nu(1/2)\right\} & (2.14) \\ &\quad - \frac{2}{3} \min_{1 \le t \le T} g(\mathbf{x}_t)\, \frac{T - T_\nu}{T}. & (2.15) \end{aligned}$$
We establish our claim using proof by contradiction. Assuming that:
min 1 โค ๐ก โค ๐ โก ๐ โข ( ๐ฑ ๐ก )
18 โข ๐ ๐ โ ๐ ๐ โข ( ๐ ๐ + 1 2 โข ๐ ๐ + ๐ โ 3 + 2 โข ๐ โ 2 โข ๐ ๐ + 3 ( 2 โข ๐ ๐ + ๐ ) 2 ) โข max โก { โ โข ( ๐ฑ 0 ) , ๐ฟ ๐ โ 0 โข ๐ท 2 โข ๐ ๐ โข ( 1 / 2 ) }
results, together with the bound from (2.15) in โ โข ( ๐ฑ ๐ โ 1 ) < 0 , which is the desired contradiction, as the primal gap cannot be negative. Therefore we must have that:
$$\min_{1 \leq t \leq N} g(\mathbf{x}_t) \leq \frac{18 N}{N - N_\delta} \left( \frac{N_\delta + 1}{2 N_\delta + N - 3} + \frac{2N - 2N_\delta + 3}{(2 N_\delta + N)^2} \right) \max\{ h(\mathbf{x}_0), L_{\mathcal{L}_0} D^2 \varphi_\nu(1/2) \} = \mathcal{O}(1/N).$$

This completes the proof. ∎
Remark 2.7.
Note that the Monotonic Frank-Wolfe algorithm (Algorithm 1) performs at most one ZOO, FOO, DO, and LMO call per iteration. This means that Theorems 2.4 and 2.6 effectively bound the number of ZOO, FOO, DO, and LMO calls needed to achieve a target primal gap or Frank-Wolfe gap accuracy $\epsilon$ as a function of $N_\delta$ and $\epsilon$; note that $N_\delta$ is independent of $\epsilon$. This is an important difference with respect to existing bounds: the existing Frank-Wolfe-style first-order algorithms for generalized self-concordant functions in the literature that utilize various types of line search may perform more than one ZOO or DO call per iteration inside the line search. The convergence bounds of those algorithms, stated in terms of iteration count, are therefore only informative about the number of FOO and LMO calls needed to reach a target primal gap accuracy, and do not directly bound the number of ZOO or DO calls needed. Bounding the latter two quantities typically requires additional technical tools. For example, for the backtracking line search of Pedregosa et al. [2020], one can use [Pedregosa et al., 2020, Theorem 1, Appendix C], or a slightly modified version of [Nesterov, 2013, Lemma 4], to bound the number of ZOO or DO calls needed to find an $\epsilon$-optimal solution. Note that these bounds depend on user-defined initialization or tuning parameters.
Remark 2.8.
In practice, a halving strategy for the step size is preferable when implementing the Monotonic Frank-Wolfe algorithm, as opposed to the step-size rule shown in Algorithm 1. This halving strategy, shown in Algorithm 2, handles the case in which a large number of consecutive step sizes $\gamma_t$ are rejected, either because $\mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t) \notin \mathrm{dom}(f)$ or because $f(\mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)) > f(\mathbf{x}_t)$, and avoids potentially calling the zeroth-order or domain oracle a large number of times in such cases. The halving strategy in Algorithm 2 results in a step size that is at most a factor of 2 smaller than the largest step size that would have been accepted by the original strategy, i.e., one ensuring both $\mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t) \in \mathrm{dom}(f)$ and $f(\mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)) \leq f(\mathbf{x}_t)$ in the standard Monotonic Frank-Wolfe algorithm (Algorithm 1). However, the number of zeroth-order and domain oracle calls needed to find such a step size is only logarithmic for the variant with halving in Algorithm 2, compared to the number needed for the variant without halving in Algorithm 1. Note that the convergence properties established throughout the paper for the Monotonic Frank-Wolfe algorithm in Algorithm 1 also hold for the variant in Algorithm 2, with the only difference that we lose a very small constant factor (at most a factor of 2 in the standard case) in the convergence rate.
Algorithm 2 Halving M-FW

1: Point $\mathbf{x}_0 \in \mathcal{X} \cap \mathrm{dom}(f)$, function $f$
2: Iterates $\mathbf{x}_1, \dots \in \mathcal{X}$
3: $n_{-1} \leftarrow 0$
4: for $t = 0$ to … do
5: $\quad \mathbf{v}_t \leftarrow \operatorname{argmin}_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
6: $\quad n_t \leftarrow n_{t-1}$
7: $\quad \gamma_t \leftarrow 2^{1 - n_t} / (t + 2)$
8: $\quad \mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)$
9: $\quad$ while $\mathbf{x}_{t+1} \notin \mathrm{dom}(f)$ or $f(\mathbf{x}_{t+1}) > f(\mathbf{x}_t)$ do
10: $\quad\quad n_t \leftarrow n_t + 1$
11: $\quad\quad \gamma_t \leftarrow 2^{1 - n_t} / (t + 2)$
12: $\quad\quad \mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)$
13: $\quad$ end while
14: end for
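The halving rule above can be sketched in a few lines of NumPy. The objective, feasible region, and all names below are our own illustrative choices (a log-barrier objective over the probability simplex), not part of the paper:

```python
import numpy as np

def halving_mfw(f, grad, lmo, in_dom, x0, iters=5000, max_halvings=60):
    """Sketch of Algorithm 2: open-loop FW with a halving monotonicity safeguard."""
    x, n = x0.copy(), 0          # n carries the halving counter across iterations
    traj = [f(x)]
    for t in range(iters):
        d = lmo(grad(x)) - x     # Frank-Wolfe direction toward an LMO vertex
        nt = n
        gamma = 2.0 ** (1 - nt) / (t + 2)
        y = x + gamma * d
        # halve the step until the trial point is feasible and monotone
        while (not in_dom(y) or f(y) > f(x)) and nt < n + max_halvings:
            nt += 1
            gamma = 2.0 ** (1 - nt) / (t + 2)
            y = x + gamma * d
        if in_dom(y) and f(y) <= f(x):
            x, n = y, nt
        traj.append(f(x))
    return x, traj

# Toy instance: minimize the (generalized self-concordant) barrier -sum(log x_i)
# over the probability simplex; by symmetry the minimizer is the uniform point.
f = lambda x: -np.log(x).sum()
grad = lambda x: -1.0 / x
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # vertices are the unit vectors
in_dom = lambda x: bool(np.all(x > 0))
x, traj = halving_mfw(f, grad, lmo, in_dom, np.array([0.6, 0.3, 0.1]))
```

At $t = 0$ the full step $\gamma_0 = 1$ lands on a vertex outside $\mathrm{dom}(f)$ and is immediately halved, which is exactly the behavior the halving counter is designed to handle.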
In Table 2 we provide a detailed complexity comparison between the Monotonic Frank-Wolfe (M-FW) algorithm (Algorithm 1), and other comparable algorithms in the literature.
| Algorithm | SOO calls | FOO calls | ZOO calls | LMO calls | DO calls |
| --- | --- | --- | --- | --- | --- |
| FW-GSC [Dvurechensky et al., 2022, Alg. 2] | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | - | $\mathcal{O}(1/\epsilon)$ | - |
| LBTFW-GSC‡ [Dvurechensky et al., 2022, Alg. 3] | - | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ |
| MBTFW-GSC‡ [Dvurechensky et al., 2022, Alg. 5] | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ |
| M-FW† [This work] | - | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ | $\mathcal{O}(1/\epsilon)$ |

Table 2: Complexity comparison: number of oracle calls needed to reach a solution with $h(\mathbf{x})$ below $\epsilon$ for Problem 1.1. We use the superscript † to indicate that the same complexities hold for reaching an $\epsilon$-optimal solution in $g(\mathbf{x})$. The superscript ‡ is used to indicate that constants in the convergence bounds depend on user-defined inputs; the other algorithms are parameter-free.
We note that the LBTFW-GSC algorithm from Dvurechensky et al. [2022] is in essence the Frank-Wolfe algorithm with a modified version of the backtracking line search of Pedregosa et al. [2020]. In the next section, we provide improved convergence guarantees for various cases of interest for this algorithm, which we refer to as the Frank-Wolfe algorithm with Backtrack (B-FW) for simplicity.
2.1 Improved convergence guarantees
We will now establish improved convergence rates for various special cases. We focus on two settings: in the first, we assume that $\mathbf{x}^* \in \mathrm{Int}(\mathcal{X} \cap \mathrm{dom}(f))$ (Section 2.1.1), and in the second we assume that $\mathcal{X}$ is strongly or uniformly convex (Section 2.1.2). The algorithm in this section is a slightly modified Frank-Wolfe algorithm with the adaptive line search technique of Pedregosa et al. [2020] (shown for reference in Algorithms 3 and 4). This is the same algorithm used in Dvurechensky et al. [2022]; however, we show improved convergence rates in several settings of interest. Note that the adaptive line search of Pedregosa et al. [2020] requires user-defined inputs or parameters, which means that the algorithms in this section are not parameter-free. The parameter $M$ of Algorithm 4 is a local estimate of the Lipschitz constant of the gradient of $f$; the stopping condition defining an admissible step size requires the realized function decrease to be at least the one predicted by the quadratic model built from the Lipschitz estimate $M$ and the gradient, which in turn ensures monotonicity.
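The adaptive step size just described can be sketched as follows. The parameter names `tau` and `eta` and the toy objective are our illustrative choices rather than the notation of any existing implementation, and the sketch omits the safeguards a production version would need:

```python
import numpy as np

def backtrack(f, grad_x, x, d, L_prev, gamma_max, in_dom=lambda z: True,
              tau=2.0, eta=0.9):
    """Sketch of a backtracking step size in the spirit of Pedregosa et al. (2020):
    the local Lipschitz estimate M is multiplied by tau until the decrease
    predicted by the quadratic model is actually achieved."""
    M = max(eta * L_prev, 1e-12)
    inner = float(np.dot(grad_x, d))       # <grad f(x), d>, negative for descent
    dn2 = float(np.dot(d, d))
    fx = f(x)
    gamma = min(-inner / (M * dn2), gamma_max)
    while (not in_dom(x + gamma * d)
           or f(x + gamma * d) - fx > 0.5 * M * gamma ** 2 * dn2 + gamma * inner):
        M *= tau
        gamma = min(-inner / (M * dn2), gamma_max)
    return gamma, M

# Usage on a toy smooth objective (true gradient Lipschitz constant L = 1):
f = lambda z: 0.5 * float(np.dot(z, z))
x = np.array([1.0, 0.0])
g = x.copy()                               # gradient of f at x
gamma, M = backtrack(f, g, x, -x, 1.0, 1.0)
```

Because the estimate only ever grows by factors of `tau` on failure, it never overshoots `tau` times the true smoothness constant, which is the source of the $\tilde{L} = \max\{\tau L, L_{-1}\}$ bound used repeatedly below.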
Algorithm 3 B-FW

1: Point $\mathbf{x}_0 \in \mathcal{X} \cap \mathrm{dom}(f)$, function $f$, initial smoothness estimate $L_{-1}$
2: Iterates $\mathbf{x}_1, \dots \in \mathcal{X}$
3: for $t = 0$ to … do
4: $\quad \mathbf{v}_t \leftarrow \operatorname{argmin}_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
5: $\quad \gamma_t, L_t \leftarrow \mathrm{Backtrack}(f, \mathbf{x}_t, \mathbf{v}_t - \mathbf{x}_t, L_{t-1}, 1)$
6: $\quad \mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)$
7: end for

Algorithm 4 $\mathrm{Backtrack}(f, \mathbf{x}, \mathbf{d}, L_{t-1}, \gamma_{\max})$ (line search of Pedregosa et al. [2020])

1: Point $\mathbf{x} \in \mathcal{X} \cap \mathrm{dom}(f)$, direction $\mathbf{d} \in \mathbb{R}^n$, function $f$, estimate $L_{t-1}$, maximum step size $\gamma_{\max}$
2: Step size $\gamma$, estimate $M$
3: Choose $\tau > 1$, $\eta \leq 1$ and $M \in [\eta L_{t-1}, L_{t-1}]$
4: $\gamma \leftarrow \min\{ -\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle / (M \|\mathbf{d}\|^2), \gamma_{\max} \}$
5: while $\mathbf{x} + \gamma \mathbf{d} \notin \mathrm{dom}(f)$ or $f(\mathbf{x} + \gamma \mathbf{d}) - f(\mathbf{x}) > \frac{M \gamma^2}{2} \|\mathbf{d}\|^2 + \gamma \langle \nabla f(\mathbf{x}), \mathbf{d} \rangle$ do
6: $\quad M \leftarrow \tau M$
7: $\quad \gamma \leftarrow \min\{ -\langle \nabla f(\mathbf{x}), \mathbf{d} \rangle / (M \|\mathbf{d}\|^2), \gamma_{\max} \}$
8: end while

2.1.1 Optimum contained in the interior
We first focus on the assumption that $\mathbf{x}^* \in \mathrm{Int}(\mathcal{X} \cap \mathrm{dom}(f))$, obtaining improved rates when we use the FW algorithm coupled with the adaptive step-size strategy of Pedregosa et al. [2020] (see Algorithm 4). This assumption is reasonable if, for example, $\mathrm{Int}(\mathcal{X}) \subseteq \mathrm{dom}(f)$ while $\mathrm{Bd}(\mathcal{X}) \not\subseteq \mathrm{dom}(f)$. That is to say, we will have $\mathbf{x}^* \in \mathrm{Int}(\mathcal{X} \cap \mathrm{dom}(f))$ if, for example, we use logarithmic barrier functions to encode a set of constraints and $\mathrm{dom}(f)$ is a proper subset of $\mathcal{X}$; in this case the optimum is guaranteed to lie in $\mathrm{Int}(\mathcal{X} \cap \mathrm{dom}(f))$.

The analysis in this case is reminiscent of the one in the seminal work of Guélat & Marcotte [1986]. Note that we can upper bound the value of $L_t$ for $t \geq 0$ by $\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, where $\tau > 1$ is the backtracking parameter and $L_{-1}$ is the initial smoothness estimate in Algorithm 4. Before proving the main theoretical results of this section, we first review some auxiliary results that allow us to prove linear convergence in this setting.
Proposition 2.9 (Proposition 3, Sun & Tran-Dinh [2019]).
Let $f$ be generalized self-concordant with $\nu \geq 2$ and suppose $\mathrm{dom}(f)$ does not contain any straight line. Then the Hessian $\nabla^2 f(\mathbf{x})$ is non-degenerate at all points $\mathbf{x} \in \mathrm{dom}(f)$.
Note that the assumption that dom โข ( ๐ ) does not contain any straight line is without loss of generality as we can simply modify the function outside of our compact convex feasible region so that it holds.
Proposition 2.10 (Proposition 2.16, Braun et al. [2022]).
If there exists an $r > 0$ such that $\mathcal{B}(\mathbf{x}^*, r) \subseteq \mathcal{X} \cap \mathrm{dom}(f)$, then for all $\mathbf{x} \in \mathcal{X} \cap \mathrm{dom}(f)$ we have that:

$$\frac{g(\mathbf{x})}{\|\mathbf{x} - \mathbf{v}\|} \geq \frac{r}{D} \|\nabla f(\mathbf{x})\| \geq \frac{r}{D} \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{x}^* \rangle}{\|\mathbf{x} - \mathbf{x}^*\|},$$

where $\mathbf{v} = \operatorname{argmin}_{\mathbf{y} \in \mathcal{X}} \langle \nabla f(\mathbf{x}), \mathbf{y} \rangle$ and $g(\mathbf{x})$ is the Frank-Wolfe gap.
With these tools at hand, we show that the Frank-Wolfe algorithm with the backtracking step-size strategy converges at a linear rate.
Theorem 2.11.
Let $f$ be an $(M, \nu)$ generalized self-concordant function with $\nu \geq 2$, let $\mathrm{dom}(f)$ not contain any straight line, and define the compact set

$$\mathcal{L}_0 := \{ \mathbf{x} \in \mathrm{dom}(f) \cap \mathcal{X} \mid f(\mathbf{x}) \leq f(\mathbf{x}_0) \}.$$

Furthermore, let $r > 0$ be the largest value such that $\mathcal{B}(\mathbf{x}^*, r) \subseteq \mathcal{X} \cap \mathrm{dom}(f)$. Then the Frank-Wolfe algorithm (Algorithm 3) with the backtracking strategy of Pedregosa et al. [2020] results in a linear primal gap convergence rate of the form:

$$h(\mathbf{x}_t) \leq h(\mathbf{x}_0) \left( 1 - \frac{\mu_{\mathcal{L}_0}}{2 \tilde{L}} \left( \frac{r}{D} \right)^2 \right)^t,$$

for $t \geq 1$, where $\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, $\tau > 1$ is the backtracking parameter, and $L_{-1}$ is the initial smoothness estimate in Algorithm 4.
Proof.
As the backtracking line search makes monotonic primal progress, we know that $\mathbf{x}_t \in \mathcal{L}_0$ for all $t \geq 0$. Since $\mathrm{dom}(f)$ does not contain any straight line by assumption, we know from Proposition 2.9 that the Hessian is non-degenerate at all $\mathbf{x} \in \mathrm{dom}(f)$, and therefore $\mu_{\mathcal{L}_0} > 0$. This allows us to claim that for any $\mathbf{x}, \mathbf{y} \in \mathcal{L}_0$:

$$f(\mathbf{x}) - f(\mathbf{y}) - \langle \nabla f(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle \geq \frac{\mu_{\mathcal{L}_0}}{2} \|\mathbf{x} - \mathbf{y}\|^2. \tag{2.16}$$

The backtracking line search in Algorithm 4 outputs either $\gamma_t = 1$ or $\gamma_t < 1$. In either case, Algorithm 4 finds and outputs a smoothness estimate $L_t$ and a step size $\gamma_t$ such that, for $\mathbf{x}_{t+1} = \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t)$, we have:

$$f(\mathbf{x}_{t+1}) - f(\mathbf{x}_t) \leq \frac{L_t \gamma_t^2}{2} \|\mathbf{x}_t - \mathbf{v}_t\|^2 - \gamma_t g(\mathbf{x}_t). \tag{2.17}$$

In the case where $\gamma_t = 1$, we know by observing Line 7 of Algorithm 4 that $g(\mathbf{x}_t) \geq L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2$, and so plugging into (2.17) we arrive at $h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t)/2$. In the case where $\gamma_t = g(\mathbf{x}_t)/(L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2) < 1$, we have $g(\mathbf{x}_t) < L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2$, which leads to $h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) - g(\mathbf{x}_t)^2 / (2 L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2)$ when plugging the expression for the step size into the progress bound (2.17). In this last case, where $\gamma_t < 1$, we have the following contraction for the primal gap:

$$h(\mathbf{x}_t) - h(\mathbf{x}_{t+1}) \geq \frac{g(\mathbf{x}_t)^2}{2 L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2} \geq \frac{r^2}{D^2} \frac{\|\nabla f(\mathbf{x}_t)\|^2}{2 L_t} \geq \frac{\mu_{\mathcal{L}_0}}{\tilde{L}} \frac{r^2}{D^2} h(\mathbf{x}_t),$$

where the second inequality uses the inequality between the central and leftmost terms in Proposition 2.10, and the last inequality stems from the bound $h(\mathbf{x}_t) \leq \|\nabla f(\mathbf{x}_t)\|^2 / (2 \mu_{\mathcal{L}_0})$ for $\mu_{\mathcal{L}_0}$-strongly convex functions. Putting the above bounds together, we have that:

$$h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) \left( 1 - \frac{1}{2} \min\left\{ 1, \frac{\mu_{\mathcal{L}_0}}{\tilde{L}} \left( \frac{r}{D} \right)^2 \right\} \right) \leq h(\mathbf{x}_t) \left( 1 - \frac{\mu_{\mathcal{L}_0}}{2 \tilde{L}} \left( \frac{r}{D} \right)^2 \right),$$

which completes the proof. ∎
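The linear rate of Theorem 2.11 is easy to observe numerically. The instance below is our own illustrative choice (a separable log-barrier-type objective over a box whose optimum $\mathbf{x}^* = 0$ lies in the interior of the domain, with backtracking parameters $\tau = 2$, $\eta = 1$):

```python
import numpy as np

# f(x) = -sum(log(1 - x_i^2)) is generalized self-concordant with dom(f) = (-1,1)^n;
# over the box [-1/2, 1/2]^n the minimizer x* = 0 is interior to X and dom(f).
f = lambda x: -np.log(1.0 - x ** 2).sum()
grad = lambda x: 2.0 * x / (1.0 - x ** 2)
lmo = lambda g: -0.5 * np.sign(g)            # vertex of the box
in_dom = lambda x: bool(np.all(np.abs(x) < 1.0))

x = np.full(4, 0.4)
L = 1.0                                       # running smoothness estimate
for t in range(300):
    d = lmo(grad(x)) - x
    inner = float(grad(x) @ d)
    if inner >= -1e-14:                       # Frank-Wolfe gap essentially zero
        break
    dn2 = float(d @ d)
    gamma = min(-inner / (L * dn2), 1.0)      # backtracking with tau = 2, eta = 1
    while (not in_dom(x + gamma * d)
           or f(x + gamma * d) - f(x) > 0.5 * L * gamma ** 2 * dn2 + gamma * inner):
        L *= 2.0
        gamma = min(-inner / (L * dn2), 1.0)
    x = x + gamma * d
primal_gap = f(x)                             # f(x*) = f(0) = 0
```

In this run the primal gap contracts by a roughly constant factor per iteration, matching the geometric decay predicted by the theorem.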
The previous bound depends on the largest positive $r$ such that $\mathcal{B}(\mathbf{x}^*, r) \subseteq \mathcal{X} \cap \mathrm{dom}(f)$, which can be arbitrarily small. Note also that the previous proof uses the lower bound on the Bregman divergence given by the $\mu_{\mathcal{L}_0}$-strong convexity of the function over $\mathcal{L}_0$ to obtain linear convergence. This bound is local, as $\mu_{\mathcal{L}_0}$-strong convexity holds only inside $\mathcal{L}_0$, and it is only of use because the step size of Algorithm 4 automatically ensures that if $\mathbf{x}_t \in \mathcal{L}_0$ and $\mathbf{d}_t$ is a descent direction, then $\mathbf{x}_t + \gamma_t \mathbf{d}_t \in \mathcal{L}_0$. This is in contrast with Algorithm 1, in which the step size $\gamma_t = 2/(2+t)$ does not automatically ensure monotonicity in primal gap; there, monotonicity had to be enforced by setting $\mathbf{x}_{t+1} = \mathbf{x}_t$ whenever $f(\mathbf{x}_t + \gamma_t \mathbf{d}_t) > f(\mathbf{x}_t)$, where $\mathbf{d}_t = \mathbf{v}_t - \mathbf{x}_t$. If we had instead used the lower bound on the Bregman divergence from [Sun & Tran-Dinh, 2019, Proposition 10], which states that:

$$f(\mathbf{y}) - f(\mathbf{x}) - \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \geq \omega_\nu(-d_\nu(\mathbf{x}, \mathbf{y})) \|\mathbf{y} - \mathbf{x}\|^2_{\nabla^2 f(\mathbf{x})},$$

for any $\mathbf{x}, \mathbf{y} \in \mathrm{dom}(f)$ and any $\nu \geq 2$, we would have arrived at a bound that holds over all of $\mathrm{dom}(f)$. However, in order to arrive at a usable bound, armed only with the knowledge that the Hessian is non-degenerate if $\mathrm{dom}(f)$ does not contain any straight line, and that $\mathbf{x}, \mathbf{y} \in \mathcal{L}_0$, we would have had to write:

$$\omega_\nu(-d_\nu(\mathbf{x}, \mathbf{y})) \|\mathbf{x} - \mathbf{y}\|^2_{\nabla^2 f(\mathbf{y})} \geq \mu_{\mathcal{L}_0} \omega_\nu(-d_\nu(\mathbf{x}, \mathbf{y})) \|\mathbf{x} - \mathbf{y}\|^2,$$

where the inequality follows from the definition of $\mu_{\mathcal{L}_0}$. It is easy to see that, as $d\omega_\nu(\tau)/d\tau > 0$ by Remark 2.2, we have $1/2 = \omega_\nu(0) \geq \omega_\nu(-d_\nu(\mathbf{x}, \mathbf{y}))$. This results in the bound:

$$f(\mathbf{y}) - f(\mathbf{x}) - \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \geq \mu_{\mathcal{L}_0} \omega_\nu(-d_\nu(\mathbf{x}, \mathbf{y})) \|\mathbf{x} - \mathbf{y}\|^2. \tag{2.18}$$

When we compare the bound obtained from local strong convexity in (2.16) with the one obtained directly from generalized self-concordance in (2.18), we can see that the former is tighter than the latter, albeit local. For this reason, we have used the former bound in the proof of Theorem 2.11.
2.1.2 Strongly convex or uniformly convex sets
Next, we recall the definition of uniformly convex sets used in Kerdreux et al. [2021], which will allow us to obtain improved convergence rates for the FW algorithm over uniformly convex feasible regions. To prove convergence rates for the case where the feasible region is $(\alpha, q)$-uniformly convex, we first review the definition of $(\alpha, q)$-uniform convexity of a set (Definition 2.12), as well as a useful lemma that allows us to go from per-iteration contractions to convergence rates.
Definition 2.12 ($(\alpha, q)$-uniformly convex set).
Given two positive numbers $\alpha$ and $q$, we say the set $\mathcal{X} \subseteq \mathbb{R}^n$ is $(\alpha, q)$-uniformly convex with respect to a norm $\|\cdot\|$ if for any $\mathbf{x}, \mathbf{y} \in \mathcal{X}$, any $0 \leq \gamma \leq 1$, and any $\mathbf{z} \in \mathbb{R}^n$ with $\|\mathbf{z}\| = 1$, we have that:

$$\mathbf{y} + \gamma (\mathbf{x} - \mathbf{y}) + \gamma (1 - \gamma) \alpha \|\mathbf{x} - \mathbf{y}\|^q \mathbf{z} \in \mathcal{X}.$$
The previous definition allows us to obtain a scaling inequality very similar to the one shown in Proposition 2.10, which is key to proving the following convergence rates, and can be implicitly found in Kerdreux et al. [2021] and Garber & Hazan [2016].
Proposition 2.13.
Let $\mathcal{X} \subseteq \mathbb{R}^n$ be $(\alpha, q)$-uniformly convex. Then for all $\mathbf{x} \in \mathcal{X}$:

$$\frac{g(\mathbf{x})}{\|\mathbf{x} - \mathbf{v}\|^q} \geq \alpha \|\nabla f(\mathbf{x})\|,$$

where $\mathbf{v} = \operatorname{argmin}_{\mathbf{u} \in \mathcal{X}} \langle \nabla f(\mathbf{x}), \mathbf{u} \rangle$ and $g(\mathbf{x})$ is the Frank-Wolfe gap.
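As a concrete check of this scaling inequality, recall that the $\ell_2$ ball of radius $R$ is $(1/(2R), 2)$-uniformly convex; for this set the inequality can be verified directly. The random vectors below merely stand in for gradients $\nabla f(\mathbf{x})$ and are our own illustrative setup:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0
alpha = 1.0 / (2.0 * R)   # the l2 ball of radius R is (1/(2R), 2)-uniformly convex

slacks = []
for _ in range(100):
    x = rng.normal(size=5)
    x *= rng.uniform(0.0, R) / np.linalg.norm(x)    # feasible point, ||x|| <= R
    g = rng.normal(size=5)                           # stands in for grad f(x)
    v = -R * g / np.linalg.norm(g)                   # LMO output over the ball
    fw_gap = float(g @ (x - v))
    # Proposition 2.13 with q = 2: g(x) >= alpha * ||grad f(x)|| * ||x - v||^2
    slacks.append(fw_gap - alpha * np.linalg.norm(g) * np.linalg.norm(x - v) ** 2)
min_slack = min(slacks)
```

A short computation shows the slack equals $\|\nabla f(\mathbf{x})\| (R^2 - \|\mathbf{x}\|^2)/(2R) \geq 0$, so the inequality is tight exactly on the boundary of the ball.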
The next lemma is an extension of the one used in [Kerdreux et al., 2021, Lemma A.1] (see also Temlyakov [2015]), and allows us to go from per-iteration contractions to convergence rates.
Lemma 2.14.
Let $\{h_t\}_t$ denote a sequence of nonnegative numbers, and let $C_0$, $C_1$, $C_2$, and $\alpha$ be positive numbers such that $C_1 < 1$, $h_1 \leq C_0$, and $h_t - h_{t+1} \geq h_t \min\{ C_1, C_2 h_t^\alpha \}$ for $t \geq 1$. Then:

$$h_t \leq \begin{cases} C_0 (1 - C_1)^{t-1} & \text{if } 1 \leq t \leq t_0, \\ \dfrac{(C_1/C_2)^{1/\alpha}}{\left( 1 + C_1 \alpha (t - t_0) \right)^{1/\alpha}} = \mathcal{O}(t^{-1/\alpha}) & \text{otherwise}, \end{cases}$$

where

$$t_0 := \max\left\{ 1, \left\lceil \log_{1 - C_1}\left( \frac{(C_1/C_2)^{1/\alpha}}{C_0} \right) \right\rceil \right\}.$$
This allows us to convert the per-iteration contractions to convergence rates.
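The mechanics of Lemma 2.14 can be sanity-checked numerically: simulate the worst-case recursion $h_{t+1} = h_t (1 - \min\{C_1, C_2 h_t^\alpha\})$ allowed by the hypothesis and verify that the two-regime bound dominates it. The constants below are arbitrary choices of ours:

```python
import math

def rate_bound(t, C0, C1, C2, alpha):
    """Two-regime upper bound of Lemma 2.14."""
    t0 = max(1, math.ceil(math.log((C1 / C2) ** (1.0 / alpha) / C0, 1.0 - C1)))
    if t <= t0:
        return C0 * (1.0 - C1) ** (t - 1)
    return (C1 / C2) ** (1.0 / alpha) / (1.0 + C1 * alpha * (t - t0)) ** (1.0 / alpha)

C0, C1, C2, alpha = 1.0, 0.5, 0.25, 0.5
h, ok = C0, True
for t in range(1, 200):
    ok = ok and (h <= rate_bound(t, C0, C1, C2, alpha) + 1e-12)
    h = h * (1.0 - min(C1, C2 * h ** alpha))   # worst case allowed by the lemma
```

With these constants the sequence and the bound share the same $\mathcal{O}(t^{-1/\alpha}) = \mathcal{O}(t^{-2})$ leading behavior, so the check also illustrates that the rate of the lemma cannot be improved in general.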
Theorem 2.15.
Suppose $\mathcal{X}$ is a compact $(\alpha, q)$-uniformly convex set and $f$ is an $(M, \nu)$ generalized self-concordant function with $\nu \geq 2$. Furthermore, assume that $\min_{\mathbf{x} \in \mathcal{X}} \|\nabla f(\mathbf{x})\| \geq c > 0$. Then the Frank-Wolfe algorithm with Backtrack (Algorithm 3) results in a convergence:

$$h_t \leq \begin{cases} h(\mathbf{x}_0) \left( 1 - \dfrac{1}{2} \min\left\{ 1, \dfrac{\alpha c}{\tilde{L}} \right\} \right)^t & \text{if } q = 2, \\[1ex] \dfrac{h(\mathbf{x}_0)}{2^t} & \text{if } q > 2,\ 1 \leq t \leq t_0, \\[1ex] \dfrac{\left( \tilde{L}^q / (\alpha c)^2 \right)^{1/(q-2)}}{\left( 1 + (q - 2)(t - t_0)/(2q) \right)^{q/(q-2)}} = \mathcal{O}\left( t^{-q/(q-2)} \right) & \text{if } q > 2,\ t > t_0, \end{cases}$$

for $t \geq 1$, where:

$$t_0 = \max\left\{ 1, \left\lceil \log_{1/2}\left( \frac{\left( \tilde{L}^q / (\alpha c)^2 \right)^{1/(q-2)}}{h(\mathbf{x}_0)} \right) \right\rceil \right\},$$

$\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, where $\tau > 1$ is the backtracking parameter, $L_{-1}$ is the initial smoothness estimate in Algorithm 4, and

$$L_{\mathcal{L}_0} = \max_{\mathbf{u} \in \mathcal{L}_0,\ \mathbf{z} \in \mathbb{R}^n} \|\mathbf{z}\|^2_{\nabla^2 f(\mathbf{u})} / \|\mathbf{z}\|_2^2.$$
Proof.
At iteration $t$, the backtracking line search finds, through successive function evaluations, an $L_t > 0$ such that:

$$h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) - \gamma_t g(\mathbf{x}_t) + \frac{L_t \gamma_t^2}{2} \|\mathbf{x}_t - \mathbf{v}_t\|^2.$$

The $\gamma_t$ that minimizes the right-hand side of the previous inequality, subject to $\gamma_t \leq 1$, is:

$$\gamma_t = \min\{ 1, g(\mathbf{x}_t) / (L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2) \},$$

which is the step size ultimately taken by the algorithm at iteration $t$. Note that $\gamma_t = 1$ means that $g(\mathbf{x}_t) \geq L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2$, which when plugged into the inequality above leads to $h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t)/2$. Conversely, for $\gamma_t < 1$ we have $h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) - g(\mathbf{x}_t)^2 / (2 L_t \|\mathbf{x}_t - \mathbf{v}_t\|^2)$. Focusing on this case and using the bounds $g(\mathbf{x}_t) \geq h(\mathbf{x}_t)$ and $g(\mathbf{x}_t) \geq \alpha \|\nabla f(\mathbf{x}_t)\| \|\mathbf{x}_t - \mathbf{v}_t\|^q$ from Proposition 2.13 leads to:

$$h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) - \frac{h(\mathbf{x}_t)^{2 - 2/q} (\alpha \|\nabla f(\mathbf{x}_t)\|)^{2/q}}{2 L_t} \tag{2.19}$$

$$\leq h(\mathbf{x}_t) - \frac{h(\mathbf{x}_t)^{2 - 2/q} (\alpha c)^{2/q}}{2 \tilde{L}}, \tag{2.20}$$

where the last inequality simply comes from the lower bound on the gradient norm and the fact that $L_t \leq \tilde{L}$, for $\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, where $\tau > 1$ is the backtracking parameter and $L_{-1}$ is the initial smoothness estimate in Algorithm 4. Reordering this expression and putting the two cases together, we have that:

$$h(\mathbf{x}_t) - h(\mathbf{x}_{t+1}) \geq h(\mathbf{x}_t) \min\left\{ \frac{1}{2}, \frac{(\alpha c)^{2/q}}{2 \tilde{L}} h(\mathbf{x}_t)^{1 - 2/q} \right\}.$$
For the case where $q = 2$ we get a linear contraction in primal gap. Using Lemma 2.14 to go from a contraction to a convergence rate for $q > 2$, we have that:

$$h_t \leq \begin{cases} h(\mathbf{x}_0) \left( 1 - \dfrac{1}{2} \min\left\{ 1, \dfrac{\alpha c}{\tilde{L}} \right\} \right)^t & \text{if } q = 2, \\[1ex] \dfrac{h(\mathbf{x}_0)}{2^t} & \text{if } q > 2,\ 1 \leq t \leq t_0, \\[1ex] \dfrac{\left( \tilde{L}^q / (\alpha c)^2 \right)^{1/(q-2)}}{\left( 1 + (q - 2)(t - t_0)/(2q) \right)^{q/(q-2)}} = \mathcal{O}\left( t^{-q/(q-2)} \right) & \text{if } q > 2,\ t > t_0, \end{cases}$$

for $t \geq 1$, where:

$$t_0 = \max\left\{ 1, \left\lceil \log_{1/2}\left( \frac{\left( \tilde{L}^q / (\alpha c)^2 \right)^{1/(q-2)}}{h(\mathbf{x}_0)} \right) \right\rceil \right\},$$

which completes the proof. ∎
However, in the general case, we cannot assume that the norm of the gradient is bounded away from zero over $\mathcal{X}$. We deal with the general case using local strong convexity in Theorem 2.16.
Theorem 2.16.
Suppose $\mathcal{X}$ is a compact $(\alpha, q)$-uniformly convex set and $f$ is an $(M, \nu)$ generalized self-concordant function with $\nu \geq 2$ whose domain does not contain any straight line. Then the Frank-Wolfe algorithm with Backtrack (Algorithm 3) results in a convergence:

$$h_t \leq \begin{cases} \dfrac{h(\mathbf{x}_0)}{2^t} & \text{if } 1 \leq t \leq t_0, \\[1ex] \dfrac{\left( \tilde{L}^q / (\alpha^2 \mu_{\mathcal{L}_0}) \right)^{1/(q-1)}}{\left( 1 + (q - 1)(t - t_0)/(2q) \right)^{q/(q-1)}} = \mathcal{O}\left( t^{-q/(q-1)} \right) & \text{if } t > t_0, \end{cases}$$

for $t \geq 1$, where:

$$\mathcal{L}_0 = \{ \mathbf{x} \in \mathrm{dom}(f) \cap \mathcal{X} \mid f(\mathbf{x}) \leq f(\mathbf{x}_0) \},$$

$$t_0 = \max\left\{ 1, \left\lceil \log_{1/2}\left( \frac{\left( \tilde{L}^q / (\alpha^2 \mu_{\mathcal{L}_0}) \right)^{1/(q-1)}}{h(\mathbf{x}_0)} \right) \right\rceil \right\},$$

and $\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, where $\tau > 1$ is the backtracking parameter and $L_{-1}$ is the initial smoothness estimate in Algorithm 4.
Proof.
As the algorithm makes monotonic primal progress, we have $\mathbf{x}_t \in \mathcal{L}_0$ for all $t \geq 0$. The proof proceeds very similarly to the previous one, except that we now have to bound $\|\nabla f(\mathbf{x}_t)\|$ using $\mu_{\mathcal{L}_0}$-strong convexity for points $\mathbf{x}_t, \mathbf{x}_t + \gamma_t (\mathbf{v}_t - \mathbf{x}_t) \in \mathcal{L}_0$. Continuing from (2.19) for the case where $\gamma_t < 1$, and using the fact that, denoting by $f_u$ the unconstrained minimum of $f$, we have $h(\mathbf{x}_t) \leq f(\mathbf{x}_t) - f_u \leq \|\nabla f(\mathbf{x}_t)\|^2 / (2 \mu_{\mathcal{L}_0})$, we obtain:

$$h(\mathbf{x}_{t+1}) \leq h(\mathbf{x}_t) - \frac{h(\mathbf{x}_t)^{2 - 2/q} (\alpha \|\nabla f(\mathbf{x}_t)\|)^{2/q}}{2 L_t} \leq h(\mathbf{x}_t) - \frac{h(\mathbf{x}_t)^{2 - 1/q} \alpha^{2/q} (\mu_{\mathcal{L}_0})^{1/q} 2^{1/q - 1}}{\tilde{L}},$$

where we have also used the bound $L_t \leq \tilde{L}$ in the last inequality. Together with the case where $\gamma_t = 1$, which is unchanged from the previous proofs, this leads us to a contraction of the form:

$$h(\mathbf{x}_t) - h(\mathbf{x}_{t+1}) \geq h(\mathbf{x}_t) \min\left\{ \frac{1}{2}, \frac{\alpha^{2/q} (\mu_{\mathcal{L}_0})^{1/q} 2^{1/q - 1}}{\tilde{L}} h(\mathbf{x}_t)^{1 - 1/q} \right\}.$$
Using again Lemma 2.14 to go from a contraction to a convergence rate for $q \geq 2$, we have that:

$$h_t \leq \begin{cases} \dfrac{h(\mathbf{x}_0)}{2^t} & \text{if } 1 \leq t \leq t_0, \\[1ex] \dfrac{\left( \tilde{L}^q / (\alpha^2 \mu_{\mathcal{L}_0}) \right)^{1/(q-1)}}{\left( 1 + (q - 1)(t - t_0)/(2q) \right)^{q/(q-1)}} = \mathcal{O}\left( t^{-q/(q-1)} \right) & \text{if } t > t_0, \end{cases}$$

for $t \geq 1$, where:

$$t_0 = \max\left\{ 1, \left\lceil \log_{1/2}\left( \frac{\left( \tilde{L}^q / (\alpha^2 \mu_{\mathcal{L}_0}) \right)^{1/(q-1)}}{h(\mathbf{x}_0)} \right) \right\rceil \right\},$$

which completes the proof. ∎
In Table 3 we provide an oracle complexity breakdown for the Frank-Wolfe algorithm with Backtrack (B-FW), also referred to as LBTFW-GSC in Dvurechensky et al. [2022], when minimizing over an $(\alpha, q)$-uniformly convex set.
| Algorithm | Assumptions | Oracle calls | Reference |
| --- | --- | --- | --- |
| B-FW/LBTFW-GSC‡ | $\mathbf{x}^* \in \mathrm{Int}(\mathcal{X} \cap \mathrm{dom}(f))$ | $\mathcal{O}(\log 1/\epsilon)$ | This work |
| B-FW/LBTFW-GSC‡ | $\min_{\mathbf{x} \in \mathcal{X}} \|\nabla f(\mathbf{x})\| > 0$, $q = 2$ | $\mathcal{O}(\log 1/\epsilon)$ | This work |
| B-FW/LBTFW-GSC‡ | $\min_{\mathbf{x} \in \mathcal{X}} \|\nabla f(\mathbf{x})\| > 0$, $q > 2$ | $\mathcal{O}(\epsilon^{-(q-2)/q})$ | This work |
| B-FW/LBTFW-GSC‡ | No straight lines in $\mathrm{dom}(f)$ | $\mathcal{O}(\epsilon^{-(q-1)/q})$ | This work |

Table 3: Complexity comparison for B-FW (Algorithm 3) when minimizing over an $(\alpha, q)$-uniformly convex set: number of iterations needed to reach an $\epsilon$-optimal solution in $h(\mathbf{x})$ for Problem 1.1 in several cases of interest. We use the superscript ‡ to indicate that constants in the convergence bounds depend on user-defined inputs. Oracle calls refer simultaneously to FOO, ZOO, LMO, and DO calls.

3 Away-step and Blended Pairwise Conditional Gradients
When the domain $\mathcal{X}$ is a polytope, one can obtain linear convergence in primal gap for a generalized self-concordant function using the well-known Away-step Frank-Wolfe (AFW) algorithm [Guélat & Marcotte, 1986, Lacoste-Julien & Jaggi, 2015], shown in Algorithm 5, and the more recent Blended Pairwise Conditional Gradients (BPCG) algorithm [Tsuji et al., 2022], both with the adaptive step size of Pedregosa et al. [2020]. We use $\mathcal{S}_t$ to denote the active set at iteration $t$, that is, the set of vertices of the polytope that gives rise to $\mathbf{x}_t$ as a convex combination with positive weights.

For AFW, the algorithm performs what is known as a Frank-Wolfe step in Line 7 of Algorithm 5 if the Frank-Wolfe gap $g(\mathbf{x}_t)$ is greater than the away gap $\langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle$, and an away step in Line 9 of Algorithm 5 otherwise. Similarly, BPCG performs a Frank-Wolfe step in Line 8 of Algorithm 6 if the Frank-Wolfe gap is greater than the pairwise gap $\langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{s}_t \rangle$. For simplicity of exposition, we make both algorithms start with a vertex of $\mathcal{X}$ in $\mathrm{dom}(f)$. Although this could be too restrictive for some applications (e.g., when the function includes a barrier of the polytope), it is easy to warm-start the active set with an initial combination of vertices.
Algorithm 5 Away-step Frank-Wolfe (B-AFW) with the step size of Pedregosa et al. [2020]

1: Vertex $\mathbf{x}_0 \in \mathrm{dom}(f)$ of $\mathcal{X}$, function $f$, initial smoothness estimate $L_{-1}$
2: $\mathcal{S}_0 \leftarrow \{\mathbf{x}_0\}$, $\lambda_0 \leftarrow \{1\}$
3: for $t = 0$ to … do
4: $\quad \mathbf{v}_t \leftarrow \operatorname{argmin}_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
5: $\quad \mathbf{a}_t \leftarrow \operatorname{argmax}_{\mathbf{v} \in \mathcal{S}_t} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
6: $\quad$ if $\langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle \geq \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle$ then
7: $\quad\quad \mathbf{d}_t \leftarrow \mathbf{v}_t - \mathbf{x}_t$, $\gamma_{\max} \leftarrow 1$
8: $\quad$ else
9: $\quad\quad \mathbf{d}_t \leftarrow \mathbf{x}_t - \mathbf{a}_t$, $\gamma_{\max} \leftarrow \lambda_t(\mathbf{a}_t) / (1 - \lambda_t(\mathbf{a}_t))$
10: $\quad$ end if
11: $\quad \gamma_t, L_t \leftarrow \mathrm{Backtrack}(f, \mathbf{x}_t, \mathbf{d}_t, L_{t-1}, \gamma_{\max})$
12: $\quad \mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t \mathbf{d}_t$
13: $\quad$ Update $\mathcal{S}_t$ and $\lambda_t$ to $\mathcal{S}_{t+1}$ and $\lambda_{t+1}$
14: end for

Algorithm 6 Blended Pairwise Conditional Gradients (B-BPCG) with the step size of Pedregosa et al. [2020]

1: Vertex $\mathbf{x}_0 \in \mathrm{dom}(f)$ of $\mathcal{X}$, function $f$, initial smoothness estimate $L_{-1}$
2: $\mathcal{S}_0 \leftarrow \{\mathbf{x}_0\}$, $\lambda_0 \leftarrow \{1\}$
3: for $t = 0$ to … do
4: $\quad \mathbf{v}_t \leftarrow \operatorname{argmin}_{\mathbf{v} \in \mathcal{X}} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
5: $\quad \mathbf{a}_t \leftarrow \operatorname{argmax}_{\mathbf{v} \in \mathcal{S}_t} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
6: $\quad \mathbf{s}_t \leftarrow \operatorname{argmin}_{\mathbf{v} \in \mathcal{S}_t} \langle \nabla f(\mathbf{x}_t), \mathbf{v} \rangle$
7: $\quad$ if $\langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle \geq \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{s}_t \rangle$ then
8: $\quad\quad \mathbf{d}_t \leftarrow \mathbf{v}_t - \mathbf{x}_t$, $\gamma_{\max} \leftarrow 1$
9: $\quad$ else
10: $\quad\quad \mathbf{d}_t \leftarrow \mathbf{s}_t - \mathbf{a}_t$, $\gamma_{\max} \leftarrow \lambda_t(\mathbf{a}_t)$
11: $\quad$ end if
12: $\quad \gamma_t, L_t \leftarrow \mathrm{Backtrack}(f, \mathbf{x}_t, \mathbf{d}_t, L_{t-1}, \gamma_{\max})$
13: $\quad \mathbf{x}_{t+1} \leftarrow \mathbf{x}_t + \gamma_t \mathbf{d}_t$
14: $\quad$ Update $\mathcal{S}_t$ and $\lambda_t$ to $\mathcal{S}_{t+1}$ and $\lambda_{t+1}$
15: end for
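To make Algorithm 5 concrete, here is a compact NumPy sketch of B-AFW over the probability simplex. The quadratic objective, the tolerances, and all names are our illustrative choices; in particular we use a smooth quadratic rather than a generalized self-concordant function and omit the domain oracle, since this objective is finite everywhere:

```python
import numpy as np

def b_afw(f, grad, n, iters=300, tau=2.0):
    """Sketch of Algorithm 5 (away-step FW with backtracking) on the simplex.
    The active set is the support of the weight vector lam over the vertices e_i."""
    lam = np.zeros(n)
    lam[0] = 1.0                                  # start at the vertex e_0
    x, L = lam.copy(), 1.0
    for _ in range(iters):
        g = grad(x)
        i_fw = int(np.argmin(g))                  # global Frank-Wolfe vertex
        active = np.flatnonzero(lam > 1e-12)
        i_aw = int(active[np.argmax(g[active])])  # away vertex
        fw_gap = float(g @ x - g[i_fw])
        aw_gap = float(g[i_aw] - g @ x)
        if max(fw_gap, aw_gap) <= 1e-12:
            break
        if fw_gap >= aw_gap:                      # Frank-Wolfe step (Line 7)
            d, gmax, is_fw = np.eye(n)[i_fw] - x, 1.0, True
        else:                                     # away step (Line 9)
            d = x - np.eye(n)[i_aw]
            gmax = lam[i_aw] / (1.0 - lam[i_aw]) if lam[i_aw] < 1.0 else np.inf
            is_fw = False
        inner, dn2 = float(g @ d), float(d @ d)
        gamma = min(-inner / (L * dn2), gmax)     # Backtrack with eta = 1
        while f(x + gamma * d) - f(x) > 0.5 * L * gamma ** 2 * dn2 + gamma * inner:
            L *= tau
            gamma = min(-inner / (L * dn2), gmax)
        if is_fw:
            lam *= 1.0 - gamma
            lam[i_fw] += gamma
        else:                                     # drop step when gamma = gmax
            lam *= 1.0 + gamma
            lam[i_aw] -= gamma
        x = lam.copy()
    return x

# Toy instance: b already lies in the simplex, so f* = 0 at x* = b.
b = np.array([0.5, 0.3, 0.2])
f = lambda z: 0.5 * float(np.sum((z - b) ** 2))
grad = lambda z: z - b
x = b_afw(f, grad, 3)
```

Maintaining the weights explicitly is what makes the away vertex and the maximum away step $\gamma_{\max} = \lambda_t(\mathbf{a}_t)/(1 - \lambda_t(\mathbf{a}_t))$ cheap to compute; a drop step sends $\lambda_t(\mathbf{a}_t)$ exactly to zero.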
Both proofs of linear convergence follow closely from Pedregosa et al. [2020] and Lacoste-Julien & Jaggi [2015], but leverage generalized self-concordance instead of smoothness and strong convexity. One of the key inequalities used in the proofs is a scaling inequality from Lacoste-Julien & Jaggi [2015], very similar to the ones shown in Proposition 2.10 and Proposition 2.13, which we state next:
Proposition 3.1.
Let $\mathcal{X} \subseteq \mathbb{R}^n$ be a polytope, and denote by $\mathcal{S}$ the set of vertices of the polytope $\mathcal{X}$ that gives rise to $\mathbf{x} \in \mathcal{X}$ as a convex combination with positive weights. Then for all $\mathbf{y} \in \mathcal{X}$:

$$\langle \nabla f(\mathbf{x}), \mathbf{a} - \mathbf{v} \rangle \geq \delta \frac{\langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{y} \rangle}{\|\mathbf{x} - \mathbf{y}\|},$$

where $\mathbf{v} = \operatorname{argmin}_{\mathbf{u} \in \mathcal{X}} \langle \nabla f(\mathbf{x}), \mathbf{u} \rangle$, $\mathbf{a} = \operatorname{argmax}_{\mathbf{u} \in \mathcal{S}} \langle \nabla f(\mathbf{x}), \mathbf{u} \rangle$, and $\delta > 0$ is the pyramidal width of $\mathcal{X}$.
Theorem 3.2.
Suppose $\mathcal{X}$ is a polytope and $f$ is an $(M, \nu)$ generalized self-concordant function with $\nu \geq 2$ whose domain does not contain any straight line. Then both AFW and BPCG with Backtrack achieve a convergence rate:

$$h(\mathbf{x}_t) \leq h(\mathbf{x}_0) \left( 1 - \frac{\mu_{\mathcal{L}_0}}{4 \tilde{L}} \left( \frac{\delta}{D} \right)^2 \right)^{\lfloor (t-1)/2 \rfloor},$$

where $\delta$ is the pyramidal width of the polytope $\mathcal{X}$, $\tilde{L} := \max\{\tau L_{\mathcal{L}_0}, L_{-1}\}$, $\tau > 1$ is the backtracking parameter, and $L_{-1}$ is the initial smoothness estimate in Algorithm 4.
Proof.
Proceeding very similarly to the proof of Theorem 2.11: as the backtracking line search makes monotonic primal progress, we know that $\mathbf{x}_t \in \mathcal{L}_0$ for all $t \geq 0$. As the function is $\mu_{\mathcal{L}_0}$-strongly convex over $\mathcal{L}_0$, we can use the appropriate strong-convexity inequalities in the progress bounds. Using this property, together with the scaling inequality of Proposition 3.1, results in:

$$h(\mathbf{x}_t) = f(\mathbf{x}_t) - f(\mathbf{x}^*) \leq \frac{\langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{x}^* \rangle^2}{2 \mu_{\mathcal{L}_0} \|\mathbf{x}_t - \mathbf{x}^*\|^2} \leq \frac{\langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{v}_t \rangle^2}{2 \mu_{\mathcal{L}_0} \delta^2}. \tag{3.1}$$

The first inequality comes from the $\mu_{\mathcal{L}_0}$-strong convexity over $\mathcal{L}_0$ (see, e.g., [Braun et al., 2022, Lemma 2.13]), and the second inequality comes from applying Proposition 3.1 with $\mathbf{y} = \mathbf{x}^*$. For AFW, we can expand the numerator of the bound in (3.1):

$$f(\mathbf{x}_t) - f(\mathbf{x}^*) \leq \frac{\left( \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle + \langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle \right)^2}{2 \mu_{\mathcal{L}_0} \delta^2}. \tag{3.2}$$

Note that if the Frank-Wolfe step is chosen in Line 7, then:

$$-\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle = \langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle \geq \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle,$$

otherwise, if an away step is chosen in Line 9, then:

$$\langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle < \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{x}_t \rangle = -\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle.$$

In both cases, we have that:

$$f(\mathbf{x}_t) - f(\mathbf{x}^*) \leq \frac{2 \langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle^2}{\mu_{\mathcal{L}_0} \delta^2}. \tag{3.3}$$

For BPCG, we can directly exploit [Tsuji et al., 2022, Lemma 3.5], which establishes the following bound at every iteration of the algorithm:

$$-2 \langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle \geq \langle \nabla f(\mathbf{x}_t), \mathbf{a}_t - \mathbf{v}_t \rangle,$$

resulting in the same inequality (3.3). Note that, using a similar reasoning, as $\langle \nabla f(\mathbf{x}_t), \mathbf{x}_t - \mathbf{v}_t \rangle = g(\mathbf{x}_t)$, in both cases it holds that:

$$h(\mathbf{x}_t) \leq g(\mathbf{x}_t) \leq -\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle. \tag{3.4}$$
As in the preceding proofs, the backtracking line search in Algorithm 4 outputs either $\gamma_t = \gamma_{\max}$ or $\gamma_t < \gamma_{\max}$. In either case, for both AFW and BPCG, and regardless of the type of step taken, Algorithm 4 finds and outputs a smoothness estimate $L_t$ and a step size $\gamma_t$ such that:

$$h(\mathbf{x}_{t+1}) - h(\mathbf{x}_t) \leq \frac{L_t \gamma_t^2}{2} \|\mathbf{d}_t\|^2 + \gamma_t \langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle. \tag{3.5}$$

As before, we have two cases depending on whether the step size $\gamma_t$ is maximal. If $\gamma_t = \gamma_{\max}$, we know by observing Line 7 of Algorithm 4 that:

$$-\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle \geq \gamma_{\max} L_t \|\mathbf{d}_t\|^2,$$

which combined with (3.5) results in:

$$h(\mathbf{x}_{t+1}) - h(\mathbf{x}_t) \leq \langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle \frac{\gamma_{\max}}{2}.$$

In the case where $\gamma_t < \gamma_{\max}$, we have:

$$-\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle < \gamma_{\max} L_t \|\mathbf{d}_t\|^2, \qquad \gamma_t = -\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle / (L_t \|\mathbf{d}_t\|^2).$$

Plugging the expression for $\gamma_t$ into (3.5) yields:

$$h(\mathbf{x}_{t+1}) - h(\mathbf{x}_t) \leq -\langle \nabla f(\mathbf{x}_t), \mathbf{d}_t \rangle^2 / (2 L_t \|\mathbf{d}_t\|^2).$$
In any case, we can rewrite (3.5) as:
โ โข ( ๐ฑ ๐ก ) โ โ โข ( ๐ฑ ๐ก + 1 )
โฅ min โก { โ โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ ๐ก โฉ โข ๐พ max 2 , โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ ๐ก โฉ 2 2 โข ๐ฟ ๐ก โข โ ๐ ๐ก โ 2 } .
(3.6)
We can now use the inequality in (3.3) to bound the second term in the minimization component of (3.6), and (3.4) to bound the first term. This leads to:
โ โข ( ๐ฑ ๐ก ) โ โ โข ( ๐ฑ ๐ก + 1 )
โฅ โ โข ( ๐ฑ ๐ก ) โข min โก { ๐พ max 2 , ๐ ๐ โ 0 โข ๐ฟ 2 4 โข ๐ฟ ๐ก โข โ ๐ ๐ก โ 2 }
(3.7)
โฅ โ โข ( ๐ฑ ๐ก ) โข min โก { ๐พ max 2 , ๐ ๐ โ 0 โข ๐ฟ 2 4 โข ๐ฟ ~ โข ๐ท 2 } ,
(3.8)
where in the last inequality we use โ ๐ ๐ก โ โค ๐ท and ๐ฟ ๐ก โค ๐ฟ ~ for all ๐ก . It remains to bound ๐พ max away from zero to obtain the linear convergence bound. For Frank-Wolfe steps, we immediately have ๐พ max = 1 , but for away or pairwise steps there is no straightforward way of bounding ๐พ max away from zero. One of the key insights from Lacoste-Julien & Jaggi [2015] is that instead of bounding ๐พ max away from zero for all steps up to iteration ๐ก , we can bound the number of away steps with step size ๐พ ๐ก = ๐พ max up to iteration ๐ก ; these are steps that reduce the cardinality of the active set ๐ฎ ๐ก and satisfy โ โข ( ๐ฑ ๐ก + 1 ) โค โ โข ( ๐ฑ ๐ก ) . The same argument is used in Tsuji et al. [2022] to prove the convergence of BPCG. This leads us to consider only the progress provided by the remaining steps, which are Frank-Wolfe steps, and away steps (for AFW) or pairwise steps (for BPCG) with ๐พ ๐ก < ๐พ max . Out of ๐ก steps, at most half could have been away steps with ๐พ ๐ก = ๐พ max , as we cannot drop more vertices from the active set than the number of vertices we could have picked up with Frank-Wolfe steps. For the remaining โ ( ๐ก โ 1 ) / 2 โ steps, we know that:
โ โข ( ๐ฑ ๐ก ) โ โ โข ( ๐ฑ ๐ก + 1 ) โฅ โ โข ( ๐ฑ ๐ก ) โข ๐ ๐ โ 0 โข ๐ฟ 2 4 โข ๐ฟ ~ โข ๐ท 2 .
Therefore, we have that the primal gap satisfies:
โ โข ( ๐ฑ ๐ก ) โค โ โข ( ๐ฑ 0 ) โข ( 1 โ ๐ ๐ โ 0 โข ๐ฟ 2 4 โข ๐ฟ ~ โข ๐ท 2 ) โ ( ๐ก โ 1 ) / 2 โ .
โ
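To make the case analysis above concrete, the backtracking rule of Pedregosa et al. [2020] can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names are ours, with ๐ and ๐ playing the roles of the scaling parameters described in the experiments section.

```python
import numpy as np

def backtracking_step(f, x, g, d, gamma_max, L_prev, tau=2.0, eta=0.9):
    """Find (gamma_t, L_t) satisfying the sufficient-decrease condition
    f(x + gamma d) <= f(x) + gamma <g, d> + (L_t gamma^2 / 2) ||d||^2,
    with gamma the minimizer of the quadratic model, clipped at gamma_max."""
    norm_d2 = float(np.dot(d, d))
    gd = float(np.dot(g, d))  # <grad f(x), d>; negative for a descent direction
    L = eta * L_prev          # optimistic initial smoothness estimate
    while True:
        gamma = min(gamma_max, -gd / (L * norm_d2))
        if f(x + gamma * d) <= f(x) + gamma * gd + 0.5 * L * gamma**2 * norm_d2:
            return gamma, L
        L *= tau              # condition failed: increase the smoothness estimate
```

When the clip is inactive, the returned step is exactly โ โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ ๐ก โฉ / ( ๐ฟ ๐ก โข โ ๐ ๐ก โ 2 ) , matching the second case in the proof above.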
We can use the proof of convergence in primal gap to prove linear convergence in Frank-Wolfe gap. To do so, we recall a quantity formally defined in Kerdreux et al. [2019] and already used implicitly in Lacoste-Julien & Jaggi [2015]:
๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) := max ๐ฎ โ ๐ฎ ๐ก , ๐ฏ โ ๐ณ โก โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฎ โ ๐ฏ โฉ = max ๐ฎ โ ๐ฎ ๐ก โก โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฎ โ ๐ฑ ๐ก โฉ + max ๐ฏ โ ๐ณ โก โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฑ ๐ก โ ๐ฏ โฉ = max ๐ฎ โ ๐ฎ ๐ก โก โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฎ โ ๐ฑ ๐ก โฉ + ๐ โข ( ๐ฑ ๐ก ) .
Note that ๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) provides an upper bound on the Frank-Wolfe gap, as the first term in the definition, the so-called away gap, is nonnegative.
Theorem 3.3.
Suppose ๐ณ is a polytope and ๐ is a ( ๐ , ๐ ) generalized self-concordant function with ๐ โฅ 2 for which the domain does not contain any straight line. Then, AFW and BPCG with Backtrack both contract the Frank-Wolfe gap linearly, i.e., min 1 โค ๐ก โค ๐ โก ๐ โข ( ๐ฑ ๐ก ) โค ๐ after ๐ = ๐ช โข ( log โก 1 / ๐ ) iterations.
Proof.
We observed in the proof of Theorem 3.2 that regardless of the type of step chosen in AFW and BPCG, the following holds:
โ 2 โข โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ ๐ก โฉ โฅ โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฑ ๐ก โ ๐ฏ ๐ก โฉ + โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ ๐ก โ ๐ฑ ๐ก โฉ = ๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) .
On the other hand, we also have that โ โข ( ๐ฑ ๐ก ) โ โ โข ( ๐ฑ ๐ก + 1 ) โค โ โข ( ๐ฑ ๐ก ) . Plugging these bounds into the left-hand and right-hand sides of (3.6) in Theorem 3.2, and using the fact that โ ๐ ๐ก โ โค ๐ท , we have that:
min โก { ๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) โข ๐พ max 4 , ๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) 2 8 โข ๐ฟ ๐ก โข ๐ท 2 } โค โ โข ( ๐ฑ ๐ก ) โค โ โข ( ๐ฑ 0 ) โข ( 1 โ ๐ ๐ โ 0 4 โข ๐ฟ ~ โข ( ๐ฟ ๐ท ) 2 ) โ ( ๐ก โ 1 ) / 2 โ ,
where the second inequality follows from the convergence bound on the primal gap from Theorem 3.2. Considering only the steps that are not away steps with ๐พ ๐ก = ๐พ max , as in the proof of Theorem 3.2, leads to:
๐ โข ( ๐ฑ ๐ก ) โค ๐ค โข ( ๐ฑ ๐ก , ๐ฎ ๐ก ) โค 4 โข โ โข ( ๐ฑ 0 ) โข max โก { 1 , ๐ฟ ~ โข ๐ท 2 2 โข โ โข ( ๐ฑ 0 ) } โข ( 1 โ ๐ ๐ โ 0 4 โข ๐ฟ ~ โข ( ๐ฟ ๐ท ) 2 ) โ ( ๐ก โ 1 ) / 4 โ .
โ
In Table 4 we provide a detailed complexity comparison between the Backtracking AFW (B-AFW) Algorithm 5, and other comparable algorithms in the literature.
| Algorithm | SOO calls | FOO calls | ZOO calls | LMO calls | DO calls |
| --- | --- | --- | --- | --- | --- |
| FW-LLOO [Dvurechensky et al., 2022, Alg. 7] | ๐ช โข ( log โก 1 / ๐ ) | ๐ช โข ( log โก 1 / ๐ ) | โ | ๐ช โข ( log โก 1 / ๐ ) * | โ |
| ASFW-GSC [Dvurechensky et al., 2022, Alg. 8] | ๐ช โข ( log โก 1 / ๐ ) | ๐ช โข ( log โก 1 / ๐ ) | โ | ๐ช โข ( log โก 1 / ๐ ) | โ |
| B-AFW/B-BPCG โ โก | โ | ๐ช โข ( log โก 1 / ๐ ) | ๐ช โข ( log โก 1 / ๐ ) | ๐ช โข ( log โก 1 / ๐ ) | ๐ช โข ( log โก 1 / ๐ ) |

Table 4: Complexity comparison: Number of iterations needed to reach a solution with โ โข ( ๐ฑ ) below ๐ for Problem 1.1 for Frank-Wolfe-type algorithms in the literature. The asterisk on FW-LLOO highlights that its oracle differs from the standard LMO procedure. The complexities shown for the FW-LLOO, ASFW-GSC, and B-AFW algorithms apply only to polyhedral domains; the former two additionally require an explicit polyhedral representation of the domain (see Assumption 3 in Dvurechensky et al. [2022]), whereas the latter only requires an LMO. The requirement of an explicit polyhedral representation may be limiting, for instance for the matching polytope over non-bipartite graphs, as the size of the polyhedral representation in this case depends exponentially on the number of nodes of the graph [Rothvoร, 2017]. We use the superscript โ to indicate that the same complexities hold when reaching an ๐ -optimal solution in ๐ โข ( ๐ฑ ) , and the superscript โก to indicate that constants in the convergence bounds depend on user-defined inputs.

4 Computational experiments
We showcase the performance of the M-FW algorithm, the second-order step size and the LLOO algorithm from Dvurechensky et al. [2022] (denoted by GSC-FW and LLOO in the figures) and the Frank-Wolfe and the Away-step Frank-Wolfe algorithm with the backtracking stepsize of Pedregosa et al. [2020], denoted by B-FW and B-AFW respectively. We ran all experiments on a server with 8 Intel Xeon 3.50GHz CPUs and 32GB RAM in single-threaded mode in Julia 1.6.0 with the FrankWolfe.jl package [Besanรงon et al., 2022]. The data sets used in the problem instances can be found in Carderera et al. [2021], the code used for the experiments can be found in Carderera et al.. When running the adaptive step size from Pedregosa et al. [2020], the only parameter that we need to set is the initial smoothness estimate ๐ฟ โ 1 . We use the initialization proposed in Pedregosa et al. [2020], namely:
๐ฟ โ 1 = โ โ ๐ โข ( ๐ฑ 0 ) โ โ ๐ โข ( ๐ฑ 0 + ๐ โข ( ๐ฏ 0 โ ๐ฑ 0 ) ) โ / ( ๐ โข โ ๐ฏ 0 โ ๐ฑ 0 โ ) ,
with ๐ set to 10 โ 3 . The scaling parameters ๐ = 2 and ๐ = 0.9 are left at their default values, as proposed in Pedregosa et al. [2020] and also used in Dvurechensky et al. [2022].
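The initialization above amounts to a one-sided finite-difference estimate of the local Lipschitz constant of the gradient along the first Frank-Wolfe direction. A minimal sketch (the function name is ours):

```python
import numpy as np

def initial_smoothness_estimate(grad, x0, v0, eps=1e-3):
    """L_{-1} = ||grad f(x0) - grad f(x0 + eps (v0 - x0))|| / (eps ||v0 - x0||),
    i.e. a finite-difference probe of gradient variation toward the first vertex."""
    d = v0 - x0
    return float(np.linalg.norm(grad(x0) - grad(x0 + eps * d))
                 / (eps * np.linalg.norm(d)))
```

For a quadratic with Hessian ๐ฟ โข ๐ผ , the estimate recovers ๐ฟ exactly, independently of eps.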
We also use the vanilla FW algorithm, denoted by FW, which is simply Algorithm 1 without Lines 5 and 6, using the traditional ๐พ ๐ก = 2 / ( ๐ก + 2 ) open-loop step size rule. Note that there are no formal convergence guarantees for this algorithm when applied to Problem (1.1). All figures show the evolution of โ โข ( ๐ฑ ๐ก ) and ๐ โข ( ๐ฑ ๐ก ) against ๐ก and time on a log-log scale. As in Dvurechensky et al. [2022], we implemented the LLOO-based variant only for the portfolio optimization instance over ฮ ๐ ; for the other examples, the oracle was not implemented due to the need to estimate non-trivial parameters.
As can be seen in all experiments, the Monotonic Frank-Wolfe algorithm is very competitive, outperforming previously proposed variants both in progress per iteration and in time. The only other algorithm that is sometimes faster is the Away-step Frank-Wolfe variant, which however depends on an active set and can therefore incur up to quadratic time and memory overhead, potentially rendering the method unattractive for very large-scale settings.
Portfolio optimization. We consider the portfolio problem with logarithmic returns ๐ โข ( ๐ฑ ) = โ โ ๐ก = 1 ๐ log โก ( โจ ๐ซ ๐ก , ๐ฑ โฉ ) , where ๐ denotes the number of periods and ๐ณ = ฮ ๐ . The results are shown in Figure 2 with all methods, and in Figure 3 on larger instances with first-order methods only. We use the revenue data ๐ซ ๐ก from Dvurechensky et al. [2022] and add instances generated in a similar fashion from independent Normal random entries with dimension 1000, 2000, and 5000, and from a Log-normal distribution with ( ๐ = 0.0 , ๐ = 0.5 ) .
Figure 2: Portfolio optimization: Convergence of โ โข ( ๐ฑ ๐ก ) and ๐ โข ( ๐ฑ ๐ก ) vs. ๐ก and wall-clock time, ๐ = 1000 .
Figure 3: Portfolio optimization: Convergence of โ โข ( ๐ฑ ๐ก ) and ๐ โข ( ๐ฑ ๐ก ) vs. ๐ก and wall-clock time. (a) ๐ = 2000 ; (b) ๐ = 5000 .
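The portfolio objective and the LMO over the simplex are simple to state; the sketch below (our naming, illustrative only) shows both pieces:

```python
import numpy as np

def portfolio_objective(R, x):
    """f(x) = -sum_t log(<r_t, x>) for a T x n matrix R of period revenues;
    x must keep every <r_t, x> positive, i.e. lie in the interior of dom f."""
    return float(-np.sum(np.log(R @ x)))

def simplex_lmo(gradient):
    """LMO over the probability simplex: the vertex e_i with i = argmin_i grad_i
    minimizes the linear function <gradient, v> over the simplex."""
    v = np.zeros_like(gradient)
    v[np.argmin(gradient)] = 1.0
    return v
```

Because the vertices of ฮ ๐ are the standard basis vectors, the LMO costs a single pass over the gradient.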
Logistic regression. One of the motivating examples for the development of a theory of generalized self-concordant functions is the logistic loss, as it does not match the definition of a standard self-concordant function but shares many of its characteristics. We consider a design matrix with rows ๐ ๐ โ โ ๐ for 1 โค ๐ โค ๐ and a vector ๐ฒ โ { โ 1 , 1 } ๐ , and formulate a logistic regression problem with elastic net regularization, in a similar fashion to Liu et al. [2020], with ๐ โข ( ๐ฑ ) = 1 / ๐ โข โ ๐ = 1 ๐ log โก ( 1 + exp โก ( โ ๐ฆ ๐ โข โจ ๐ฑ , ๐ ๐ โฉ ) ) + ๐ / 2 โข โ ๐ฑ โ 2 , and ๐ณ is the โ 1 ball of radius ๐ . The results can be seen in Figure 4.
Figure 4: Logistic regression: Convergence of โ โข ( ๐ฑ ๐ก ) and ๐ โข ( ๐ฑ ๐ก ) vs. ๐ก and wall-clock time for instances of the LIBSVM dataset. (a) a4a: ( ๐ , ๐ ) = ( 4781 , 121 ) ; (b) a8a: ( ๐ , ๐ ) = ( 22696 , 123 ) .
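A minimal sketch of the objective above and of the LMO over the โ 1 ball (our naming; the stable logaddexp evaluation is an implementation choice, not part of the paper):

```python
import numpy as np

def logistic_objective(A, y, x, mu):
    """f(x) = (1/m) sum_i log(1 + exp(-y_i <x, a_i>)) + (mu/2) ||x||^2,
    evaluated stably via log(1 + e^z) = logaddexp(0, z)."""
    z = -y * (A @ x)
    return float(np.mean(np.logaddexp(0.0, z)) + 0.5 * mu * np.dot(x, x))

def l1_ball_lmo(gradient, tau):
    """LMO over the l1 ball of radius tau: the vertex +-tau e_i at the
    coordinate of largest |gradient_i|, signed to oppose the gradient."""
    i = int(np.argmax(np.abs(gradient)))
    v = np.zeros_like(gradient)
    v[i] = -tau * np.sign(gradient[i])
    return v
```

As with the simplex, the โ 1 -ball LMO returns a 1-sparse vertex, which keeps the iterates sparse.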
Birkhoff polytope. The applications considered so far all have in common a computationally inexpensive LMO that returns highly sparse vertices. To complement the results, we consider a logistic regression problem over the Birkhoff polytope, where the LMO is implemented with the Hungarian algorithm and is not as inexpensive as in the other examples. We use a quadratic regularization parameter ๐ = 100 / ๐ , where ๐ is the number of samples. The results are presented in Figure 5.
Figure 5: Birkhoff polytope: Convergence of โ โข ( ๐ฑ ๐ก ) and ๐ โข ( ๐ฑ ๐ก ) vs. ๐ก and wall-clock time on a2a: ( ๐ , ๐ ) = ( 2265 , 114 ) .
Monotonic step size: the numerical case
The computational experiments highlighted that the Monotonic Frank-Wolfe algorithm performs well in terms of iteration count and time against other Frank-Wolfe and Away-step Frank-Wolfe variants. Another advantage of a simple step-size computation procedure is its numerical stability. On some instances, an ill-conditioned gradient can lead to a plateau in the primal and/or dual progress. Even worse, some step-size strategies do not guarantee monotonicity and can result in the primal value increasing over some iterations. The numerical issue that causes this phenomenon is illustrated by running the methods of the FrankWolfe.jl package on the same instance using 64-bit floating-point numbers and Julia BigFloat types (which support arbitrary-precision arithmetic, removing numerical issues).
Figure 6: Ill-conditioned portfolio optimization problem in dimension ๐ = 2000 . (a) 64-bit floating point; (b) arbitrary precision using BigFloat.
In Fig. 6, we compare the primal and dual gap progress of the different algorithms on a portfolio instance. In the finite-precision execution, we observe a plateau of the dual gap for both M-FW and B-AFW. The primal value, however, worsens after the iteration at which B-AFW reaches its dual gap plateau. In contrast, M-FW reaches a plateau in both primal and dual gap at a certain iteration. Note that the primal value at the point where the plateau is hit is already below the square root of the 64-bit machine precision. In arbitrary-precision arithmetic, instead of reaching a plateau or deteriorating, B-AFW closes the dual gap below the tolerance and terminates before the other methods. Although this observation (made on several instances of the portfolio optimization problem) only concerns ill-conditioned problems, it suggests that M-FW may be a good candidate for a numerically robust default implementation of Frank-Wolfe algorithms.
5A stateless simple step variant
The simple step-size strategy presented in Algorithm 2 ensures monotonicity and domain-respecting iterates by maintaining a “memory” ๐ ๐ก , the number of halvings performed so far. The number of halvings needed to reach an accepted step is bounded, but the corresponding factor 2 ๐ ๐ก is carried over to all following iterations, which may slow down progress. We propose an alternative step size that still ensures the monotonicity and domain-preserving properties but does not carry over information from one iteration to the next.
Algorithm 7 Stateless Monotonic Frank-Wolfe
1: Point ๐ฑ 0 โ ๐ณ โฉ dom โข ( ๐ ) , function ๐
2: Iterates ๐ฑ 1 , โฆ โ ๐ณ
3: for ๐ก = 0 to โฆ do
4: ๐ฏ ๐ก โ argmin ๐ฏ โ ๐ณ โก โจ โ ๐ โข ( ๐ฑ ๐ก ) , ๐ฏ โฉ
5: ๐พ ๐ก โ 2 / ( ๐ก + 2 )
6: ๐ฑ ๐ก + 1 โ ๐ฑ ๐ก + ๐พ ๐ก โข ( ๐ฏ ๐ก โ ๐ฑ ๐ก )
7: while ๐ฑ ๐ก + 1 โ dom โข ( ๐ ) or ๐ โข ( ๐ฑ ๐ก + 1 ) > ๐ โข ( ๐ฑ ๐ก ) do
8: ๐พ ๐ก โ ๐พ ๐ก / 2
9: ๐ฑ ๐ก + 1 โ ๐ฑ ๐ก + ๐พ ๐ก โข ( ๐ฏ ๐ก โ ๐ฑ ๐ก )
10: end while
11: end for
Note that Algorithm 7, presented above, is stateless, since it is equivalent to Algorithm 2 with ๐ ๐ก reset to zero between outer iterations. This resetting also implies that the per-iteration convergence rate of the stateless step is at least as good as that of the simple step, at the potential cost of a bounded number of halvings, with associated ZOO and DO calls, at each iteration. Finally, we point out that the stateless step-size strategy can be viewed as a particular instance of a backtracking line search in which the initial step-size estimate is the agnostic step size 2 / ( ๐ก + 2 ) .
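One iteration of Algorithm 7 can be sketched as follows. The in_domain callback and the max_halvings safeguard are our additions for illustration; the algorithm itself relies on the Frank-Wolfe direction being a descent direction to guarantee termination of the halving loop.

```python
import numpy as np

def stateless_monotonic_step(f, in_domain, x, v, t, max_halvings=64):
    """Start from the agnostic step size 2/(t+2) and halve it until the new
    iterate lies in dom(f) and the objective value does not increase."""
    gamma = 2.0 / (t + 2)
    x_new = x + gamma * (v - x)
    for _ in range(max_halvings):
        if in_domain(x_new) and f(x_new) <= f(x):
            break
        gamma /= 2.0
        x_new = x + gamma * (v - x)
    return x_new, gamma
```

No state crosses iterations: the next call starts again from 2 / ( ๐ก + 2 ) , unlike the simple step, which keeps the accumulated halving factor.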
We compare the two strategies on a random, badly conditioned problem with objective function:
๐ฅ โค โข ๐ โข ๐ฅ + โจ ๐ , ๐ฅ โฉ + ๐ โข ๐ โ + โข ( ๐ฅ ) ,
where ๐ is a symmetric positive definite matrix with log-normally distributed eigenvalues and ๐ โ + โข ( โ ) is the log-barrier function of the positive orthant. We optimize instances of this function over the โ 1 -norm ball in dimensions 30 and 1000 . The results are shown in Figure 7. On both instances, the simple-step progress is slowed down, and even appears stalled, compared to the stateless version: many halvings are performed in the early iterations of the simple step size, and the resulting factor penalizes progress over the whole run. The stateless step size does not suffer from this problem; however, because halvings must be redone at multiple iterations, its per-iteration cost is about three times that of the simple step size. Future work will consider additional restart conditions, not only on ๐ ๐ก of Algorithm 2 but also on the base step-size strategy employed, similar to Kerdreux et al. [2019].
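One way to generate such a badly conditioned quadratic term is to rotate a log-normal spectrum by a random orthogonal basis. This construction and its names are ours, given only as a plausible sketch of the instance description above:

```python
import numpy as np

def random_lognormal_spd(n, sigma=2.0, seed=0):
    """Symmetric positive-definite matrix M = Q diag(lam) Q^T with
    log-normally distributed eigenvalues lam; larger sigma spreads the
    spectrum and thus worsens the condition number."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal Q
    eigenvalues = rng.lognormal(mean=0.0, sigma=sigma, size=n)
    return (q * eigenvalues) @ q.T  # scales column j of Q by eigenvalues[j]
```

The log-barrier term then supplies the generalized self-concordant part of the objective, keeping the iterates in the positive orthant.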
Figure 7: Stateless step size: comparison of the stateless and simple steps on badly conditioned problems. (a) ๐ = 30 ; (b) ๐ = 1000 .
Conclusion
We introduced FW variants based on the open-loop step size ๐พ ๐ก = 2 / ( ๐ก + 2 ) that obtain a ๐ช โข ( 1 / ๐ก ) convergence rate for generalized self-concordant functions in terms of primal and Frank-Wolfe gaps. The algorithm requires neither second-order information nor line searches, and allows us to bound the number of zeroth-order, first-order, domain, and linear minimization oracle calls needed to reach a target accuracy. We also showed improved convergence rates for several variants in various cases of interest, and proved that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG [Tsuji et al., 2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] achieve linear convergence rates over polytopes when minimizing generalized self-concordant functions.
Acknowledgements
Research reported in this paper was partially supported through the Research Campus Modal funded by the German Federal Ministry of Education and Research (fund numbers 05M14ZAM, 05M20ZBM) and the Deutsche Forschungsgemeinschaft (DFG) through the DFG Cluster of Excellence MATH+. We would like to thank the anonymous reviewers for their suggestions and comments.
References Bach [2010] โ Bach, F.Self-concordant analysis for logistic regression.Electronic Journal of Statistics, 4:384โ414, 2010. Besanรงon et al. [2022] โ Besanรงon, M., Carderera, A., and Pokutta, S.FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and conditional gradients.INFORMS Journal on Computing, 34(5):2611โ2620, 2022. Braun et al. [2022] โ Braun, G., Carderera, A., Combettes, C. W., Hassani, H., Karbasi, A., Mokhtari, A., and Pokutta, S.Conditional gradient methods.arXiv preprint arXiv:2211.14103, 2022. [4] โ Carderera, A., Besanรงon, M., and Pokutta, S.Frank-wolfe for generalized self-concordant functions - code repository.https://github.com/ZIB-IOL/fw-generalized-selfconcordant. Carderera et al. [2021] โ Carderera, A., Besanรงon, M., and Pokutta, S.Frank-Wolfe for Generalized Self-Concordant Functions - Problem Instances, May 2021.URL https://doi.org/10.5281/zenodo.4836009. Diakonikolas et al. [2020] โ Diakonikolas, J., Carderera, A., and Pokutta, S.Locally accelerated conditional gradients.In Proceedings of the 23th International Conference on Artificial Intelligence and Statistics, pp. 1737โ1747. PMLR, 2020. Dvurechensky et al. [2020] โ Dvurechensky, P., Ostroukhov, P., Safin, K., Shtern, S., and Staudigl, M.Self-concordant analysis of Frank-Wolfe algorithms.In Proceedings of the 37th International Conference on Machine Learning, pp. 2814โ2824. PMLR, 2020. Dvurechensky et al. [2022] โ Dvurechensky, P., Safin, K., Shtern, S., and Staudigl, M.Generalized self-concordant analysis of FrankโWolfe algorithms.Mathematical Programming, pp. 1โ69, 2022. Frank & Wolfe [1956] โ Frank, M. and Wolfe, P.An algorithm for quadratic programming.Naval research logistics quarterly, 3(1-2):95โ110, 1956. Garber & Hazan [2016] โ Garber, D. 
and Hazan, E.A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization.SIAM Journal on Optimization, 26(3):1493โ1528, 2016. Guรฉlat & Marcotte [1986] โ Guรฉlat, J. and Marcotte, P.Some comments on Wolfeโs โaway stepโ.Mathematical Programming, 35(1):110โ119, 1986. Jaggi [2013] โ Jaggi, M.Revisiting Frank-Wolfe: Projection-free sparse convex optimization.In Proceedings of the 30th International Conference on Machine Learning, pp. 427โ435. PMLR, 2013. Kerdreux et al. [2019] โ Kerdreux, T., dโAspremont, A., and Pokutta, S.Restarting Frank-Wolfe.In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pp. 1275โ1283. PMLR, 2019. Kerdreux et al. [2021] โ Kerdreux, T., dโAspremont, A., and Pokutta, S.Projection-free optimization on uniformly convex sets.In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, pp. 19โ27. PMLR, 2021. Krishnan et al. [2015] โ Krishnan, R. G., Lacoste-Julien, S., and Sontag, D.Barrier Frank-Wolfe for Marginal Inference.In Proceedings of the 28th Conference in Neural Information Processing Systems. PMLR, 2015. Lacoste-Julien & Jaggi [2015] โ Lacoste-Julien, S. and Jaggi, M.On the global linear convergence of Frank-Wolfe optimization variants.In Proceedings of the 29th Conference on Neural Information Processing Systems, pp. 566โ575. PMLR, 2015. Levitin & Polyak [1966] โ Levitin, E. S. and Polyak, B. T.Constrained minimization methods.USSR Computational Mathematics and Mathematical Physics, 6(5):1โ50, 1966. Liu et al. [2020] โ Liu, D., Cevher, V., and Tran-Dinh, Q.A newton frankโwolfe method for constrained self-concordant minimization.Journal of Global Optimization, pp. 1โ27, 2020. Marteau-Ferey et al. 
[2019] โ Marteau-Ferey, U., Ostrovskii, D., Bach, F., and Rudi, A.Beyond least-squares: Fast rates for regularized empirical risk minimization through self-concordance.In Proceedings of the 32nd Conference on Learning Theory, pp. 2294โ2340. PMLR, 2019. Nesterov [2013] โ Nesterov, Y.Gradient methods for minimizing composite functions.Mathematical Programming, 140(1):125โ161, 2013. Nesterov & Nemirovskii [1994] โ Nesterov, Y. and Nemirovskii, A.Interior-point polynomial algorithms in convex programming.SIAM, 1994. Odor et al. [2016] โ Odor, G., Li, Y.-H., Yurtsever, A., Hsieh, Y.-P., Tran-Dinh, Q., El Halabi, M., and Cevher, V.Frank-wolfe works for non-lipschitz continuous gradient objectives: scalable poisson phase retrieval.In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6230โ6234. Ieee, 2016. Ostrovskii & Bach [2021] โ Ostrovskii, D. M. and Bach, F.Finite-sample analysis of M-estimators using self-concordance.Electronic Journal of Statistics, 15(1):326โ391, 2021. Pedregosa et al. [2020] โ Pedregosa, F., Negiar, G., Askari, A., and Jaggi, M.Linearly convergent FrankโWolfe with backtracking line-search.In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics. PMLR, 2020. Rothvoร [2017] โ Rothvoร, T.The matching polytope has exponential extension complexity.Journal of the ACM (JACM), 64(6):1โ19, 2017. Sun & Tran-Dinh [2019] โ Sun, T. and Tran-Dinh, Q.Generalized self-concordant functions: a recipe for Newton-type methods.Mathematical Programming, 178(1):145โ213, 2019. Temlyakov [2015] โ Temlyakov, V.Greedy approximation in convex optimization.Constructive Approximation, 41(2):269โ296, 2015. Tran-Dinh et al. [2015] โ Tran-Dinh, Q., Li, Y.-H., and Cevher, V.Composite convex minimization involving self-concordant-like cost functions.In Modelling, Computation and Optimization in Information Systems and Management Sciences, pp. 155โ168. Springer, 2015. Tsuji et al. [2022] โ Tsuji, K. 
K., Tanaka, K., and Pokutta, S. Pairwise conditional gradients without swap steps and sparser kernel herding. In International Conference on Machine Learning, pp. 21864โ21883. PMLR, 2022. Wolfe [1970] โ Wolfe, P. Convergence theory in nonlinear programming. In Integer and Nonlinear Programming, pp. 1โ36. North-Holland, Amsterdam, 1970. Zhao & Freund [2023] โ Zhao, R. and Freund, R. M. Analysis of the FrankโWolfe method for convex composite optimization involving a logarithmically-homogeneous barrier. Mathematical Programming, 199(1-2):123โ163, 2023.