Title: Optimal Sets and Solution Paths of ReLU Networks

URL Source: https://arxiv.org/html/2306.00119


License: arXiv.org perpetual non-exclusive license
arXiv:2306.00119v2 [cs.LG] 19 Jan 2024

Optimal Sets and Solution Paths of ReLU Networks

Aaron Mishkin, Mert Pilanci

Abstract

We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.

Machine Learning, ICML

1 Introduction

Neural networks have transformed machine learning. Despite their success, little is known about the global optima for typical non-convex training problems, the solution path of regularized networks, or how to prune networks without degrading the model fit. This is in stark contrast to generalized linear models with $\ell_2$ or $\ell_1$ penalties; for example, it is well-known that the lasso (Tibshirani, 1996) has a piece-wise linear path (Osborne et al., 2000; Efron et al., 2004), a polyhedral solution set (Tibshirani, 2013), and admits efficient algorithms for computing minimal solutions (Tibshirani, 2013). In this paper, we close the gap by studying neural networks through the lens of convex reformulations.

Figure 1: Convex vs. non-convex solution spaces for two-layer ReLU networks. We plot the first feature of three different neurons; the non-convex parameterization maps the compact polytope of solutions for the convex parameterization into a curved manifold.

One of the main challenges of neural networks is non-convexity. For non-convex problems, stationarity of the training objective does not imply optimality of the network weights and so, to the best of our knowledge, no work has been able to derive an analytical expression for the optimal set. Convex reformulations provide a solution by rewriting the non-convex optimization problem as a convex program in a lifted parameter space (Pilanci & Ergen, 2020). We focus on the convex reformulation for two-layer networks with ReLU activation functions and weight decay regularization. The resulting problem is related to the group lasso (Yuan & Lin, 2006) and induces neuron sparsity in the network.

Let ๐‘ โˆˆ โ„ ๐‘› ร— ๐‘‘ be a data matrix and ๐‘ฆ โˆˆ โ„ ๐‘› associated targets. The prediction function for two-layer ReLU networks is

๐‘“ ๐‘Š 1 , ๐‘ค 2 โข ( ๐‘ )

โˆ‘ ๐‘–

1 ๐‘š ( ๐‘ โข ๐‘Š 1 โข ๐‘– ) + โข ๐‘ค 2 โข ๐‘– ,

where ๐‘Š 1 โˆˆ โ„ ๐‘š ร— ๐‘‘ , ๐‘ค 2 โˆˆ โ„ ๐‘š are the weights of the first and second layers, ๐‘š is the number of hidden units, and ( โ‹… ) +

max โก { โ‹… , 0 } is the ReLU activation. Fitting ๐‘“ ๐‘Š 1 , ๐‘ค 2 with convex loss ๐ฟ and weight decay ( โ„“ 2 ) regularization leads to the standard non-convex optimization problem:

min ๐‘Š 1 , ๐‘ค 2 โก ๐ฟ โข ( ๐‘“ ๐‘Š 1 , ๐‘ค 2 โข ( ๐‘ ) , ๐‘ฆ ) + ๐œ† 2 โข ( โ€– ๐‘Š 1 โ€– ๐น 2 + โ€– ๐‘ค 2 โ€– 2 2 ) โŸ ๐‘… โข ( ๐‘Š 1 , ๐‘ค 2 ) .

(1)

The regularization path or solution function of this training problem is the mapping between the regularization parameter ๐œ† and the set of optimal model weights,

๐’ช * โข ( ๐œ† )

arg โข min ๐‘Š 1 , ๐‘ค 2 ๐ฟ โข ( ๐‘“ ๐‘Š 1 , ๐‘ค 2 โข ( ๐‘ ) , ๐‘ฆ ) + ๐œ† 2 โข ๐‘… โข ( ๐‘Š 1 , ๐‘ค 2 ) .

(2)

In general, the optimal neural network is not unique and $\mathcal{O}^*(\lambda)$ will be set-valued. Indeed, there are always at least $m!$ solutions since any permutation of the hidden units yields an identical model. We call the solution to a ReLU training problem p-unique when it is unique up to splits and permutations.
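The permutation symmetry behind the $m!$ count is easy to verify numerically. The sketch below (our own NumPy illustration with a squared-error loss; all function and variable names are ours) evaluates the regularized objective of Equation 1 before and after permuting the hidden units.

```python
import numpy as np

def relu_objective(Z, y, W1, w2, lam):
    """Objective of Eq. (1) with squared-error loss plus weight decay.

    W1 stores one hidden unit per row, so f(Z) = sum_i (Z W1[i])_+ w2[i].
    """
    preds = np.maximum(Z @ W1.T, 0.0) @ w2           # (Z W_{1i})_+ w_{2i}, summed over i
    loss = 0.5 * np.sum((preds - y) ** 2)
    reg = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(w2 ** 2))
    return loss + reg

rng = np.random.default_rng(0)
Z = rng.standard_normal((10, 3))
y = rng.standard_normal(10)
W1, w2 = rng.standard_normal((4, 3)), rng.standard_normal(4)

# Permuting hidden units (rows of W1 together with entries of w2)
# leaves both the prediction and the regularizer unchanged.
perm = rng.permutation(4)
obj = relu_objective(Z, y, W1, w2, lam=0.1)
obj_perm = relu_objective(Z, y, W1[perm], w2[perm], lam=0.1)
```

Since the objective is invariant under every such permutation, any optimum immediately yields $m!$ optima.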

We study ๐’ช * by re-writing Equation 1 as an instance of the constrained group lasso (CGL). We make the following contributions by analyzing CGL:

• We derive an analytical expression for the solution set of two-layer ReLU networks and criteria for solutions to be p-unique (unique up to permutations/splits).

• We extend this characterization to show that the set of stationary points of a two-layer ReLU model is exactly

$$ \mathcal{C}_\lambda = \Big\{ (W_1, w_2) : \tilde{\mathcal{D}} \subseteq \mathcal{D}_Z, \;\; f_{W_1, w_2}(Z) = \hat{y}_{\tilde{\mathcal{D}}}, \;\; W_{1i} = (\alpha_i / \lambda)^{1/2} \, v_i(\tilde{\mathcal{D}}), \;\; w_{2i} = \xi_i (\alpha_i \lambda)^{1/2}, \;\; \alpha_i \geq 0, \;\; i \in [m] \setminus \mathcal{S}_\lambda \implies \alpha_i = 0 \Big\}, \tag{3} $$

where $\tilde{\mathcal{D}}$ is a set of sub-sampled activation patterns, $\hat{y}_{\tilde{\mathcal{D}}}$ is the unique optimal model fit using those patterns, and the $v_i(\tilde{\mathcal{D}})$ are uniquely given by optimal parameters for the dual of the convex reformulation. See Figure 1.

• We provide an optimal pruning algorithm that can be used to compute minimal models (the smallest-width neural networks which are optimal for a given dataset and regularization parameter) and an intuitive extension for pruning beyond minimal models.

• We prove that the regularization path of ReLU networks is discontinuous in general and establish sufficient conditions for the path to be closed/continuous.

• We give a simple algorithm for computing the unique ReLU network corresponding to the min-norm model in the convex lifting and, under additional constraint qualifications, develop differential sensitivity results for minimal ReLU networks.

In many cases, we obtain strictly stronger results for gated ReLU networks (Fiat et al., 2019), which correspond directly to an unconstrained group lasso problem (Mishkin et al., 2022). In particular, we give new sufficient conditions for (i) the group lasso to be unique, (ii) global continuity of the group lasso model fit, and (iii) weak differentiability of the solution function for gated ReLU networks.

The paper is structured as follows: we cover related work in Section 1.1 and introduce notation in Section 1.2. Then we provide details for convex reformulations of neural networks in Section 2. Section 3 analyzes CGL and Section 4 interprets these results in the specific context of two-layer ReLU networks. Section 5 concludes with experiments.

1.1 Related Work

The Lasso and Group Lasso: Our work is most similar to Hastie et al. (2007), who consider homotopy methods, and Tibshirani (2013), who characterize the lasso solution set. Limited results exist beyond the lasso. Tibshirani & Taylor (2011) analyze the generalized lasso, while Yuan & Lin (2006) show the group lasso is piece-wise linear when $X$ is orthogonal. Roth & Fischer (2008) partially characterize the group lasso solution set, while Vaiter et al. (2012) derive stability results and the degrees-of-freedom.

Convex Reformulations: Convex reformulations for neural networks have rapidly advanced since Pilanci & Ergen (2020); convolutions (Ergen & Pilanci, 2021b; Gupta et al., 2021), vector-outputs (Sahiner et al., 2021), batch-norm (Ergen et al., 2021), and deeper networks (Ergen & Pilanci, 2021a) have all been explored.

Neural Network Solution Sets: Characterizations of solution sets are largely empirical. Mode connectivity has been studied extensively (Garipov et al., 2018; Draxler et al., 2018), and Nguyen (2019) and Kuditipudi et al. (2019) attempt to explain it theoretically. Sensitivity is connected to differentiable optimization layers (Agrawal et al., 2019) and hypergradient descent (Baydin et al., 2017). We refer to Blalock et al. (2020) for an overview of pruning.

1.2 Notation

We use lower-case $a$ to denote vectors and upper-case $A$ for matrices. For $d \in \mathbb{N}$, $[d] = \{1, \ldots, d\}$. Calligraphic letters $\mathcal{C}$ denote sets. For a block of indices $b_i \subseteq [d]$, we write $A_{b_i}$ for the sub-matrix of columns indexed by $b_i$. Similarly, $a_{b_i}$ is the sub-vector indexed by $b_i$. If $\mathcal{M}$ is a collection of blocks, then $A_{\mathcal{M}}$ is the sub-matrix and $a_{\mathcal{M}}$ the sub-vector with columns/elements indexed by blocks in the collection. Finally, $|\mathcal{M}|$ is the cardinality of the union of blocks in $\mathcal{M}$.

2 Convex Reformulations

Now we introduce background on convex reformulations. Convex reformulations re-write Equation 1 as a convex program by enumerating the activation patterns a single neuron in the hidden layer can take on for fixed $Z$:

$$ \mathcal{D}_Z = \{ D = \operatorname{diag}(\mathbb{1}(Z u \geq 0)) : u \in \mathbb{R}^d \}. $$

This set grows as $|\mathcal{D}_Z| \leq O(r (n/r)^r)$, where $r := \operatorname{rank}(Z)$ (Pilanci & Ergen, 2020). Each "activation pattern" $D_i \in \mathcal{D}_Z$ is associated with a convex cone,

$$ \mathcal{K}_i = \{ u \in \mathbb{R}^d : (2 D_i - I) Z u \succeq 0 \}. $$

If $u \in \mathcal{K}_i$, then $u$ matches $D_i$, meaning $D_i Z u = (Z u)_+$. For any subset $\tilde{\mathcal{D}} \subseteq \mathcal{D}_Z$, the convex reformulation is

min ๐‘ฃ , ๐‘ค

๐ฟ โข ( โˆ‘ ๐ท ๐‘– โˆˆ ๐’Ÿ ~ ๐ท ๐‘– โข ๐‘ โข ( ๐‘ฃ ๐‘– โˆ’ ๐‘ข ๐‘– ) , ๐‘ฆ ) + ๐œ† โข โˆ‘ ๐ท ๐‘– โˆˆ ๐’Ÿ ~ โ€– ๐‘ฃ ๐‘– โ€– 2 + โ€– ๐‘ข ๐‘– โ€– 2

(4)

s.t. ๐‘ฃ ๐‘– , ๐‘ข ๐‘– โˆˆ ๐’ฆ ๐‘– .

Pilanci & Ergen (2020) prove that this program and Equation 1 are equivalent in the following sense: if $\tilde{\mathcal{D}} = \mathcal{D}_Z$ and $m \geq m^*$ for some $m^* \leq n + 1$, then the two programs have the same optimal value, and every solution to the convex program can be mapped to a solution of the non-convex training problem and vice-versa. Given a solution $(v^*, u^*)$, optimal weights for the ReLU problem are given by

๐‘Š 1 โข ๐‘–

๐‘ฃ ๐‘– * / โ€– ๐‘ฃ ๐‘– * โ€– , ๐‘ค 2 โข ๐‘–

โ€– ๐‘ฃ ๐‘– * โ€–

(5)

๐‘Š 1 โข ๐‘—

๐‘ข ๐‘– * / โ€– ๐‘ข ๐‘– * โ€– , ๐‘ค 2 โข ๐‘—

โˆ’ โ€– ๐‘ข ๐‘– * โ€– ,

where we use the convention that 0 / 0

0 .
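A minimal sketch of this mapping is below (our own NumPy code; the helper name is ours). It converts a convex-program solution $(v^*, u^*)$ into ReLU weights using the square-root scaling of Equation 5 and drops zero blocks via the $0/0 = 0$ convention. The scaling balances the layers: each neuron satisfies $\|W_{1i}\|_2^2 = w_{2i}^2$, which is the configuration favored by weight decay.

```python
import numpy as np

def convex_to_relu(v_star, u_star):
    """Map a convex solution (v*, u*) to ReLU weights via Eq. (5).

    Each nonzero v_i yields a neuron (v_i / ||v_i||^{1/2}, ||v_i||^{1/2});
    each nonzero u_i yields (u_i / ||u_i||^{1/2}, -||u_i||^{1/2}).
    Zero blocks are skipped (the 0/0 = 0 convention)."""
    W1_rows, w2 = [], []
    for vecs, sign in [(v_star, 1.0), (u_star, -1.0)]:
        for v_i in vecs:
            norm = np.linalg.norm(v_i)
            if norm > 0:
                W1_rows.append(v_i / np.sqrt(norm))
                w2.append(sign * np.sqrt(norm))
    return np.array(W1_rows), np.array(w2)

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 4))       # three active positive blocks
u = np.zeros((3, 4))                  # no active negative blocks
W1, w2 = convex_to_relu(v, u)
# Balanced layers: ||W1_i||^2 == w2_i^2 for every recovered neuron.
```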

In practice, learning with $\mathcal{D}_Z$ is intractable except when the data are low rank. Mishkin et al. (2022) provide refined conditions on $\tilde{\mathcal{D}}$ which are sufficient for Equation 4 to be equivalent to the non-convex problem, while Wang et al. (2021) show that the minimum of every sub-sampled convex program is a stationary point of the ReLU training problem.

2.1 Gated ReLU Networks

An alternative is the gated ReLU activation function,

$$ \phi_g(Z, u) = \operatorname{diag}(\mathbb{1}(Z g \geq 0)) \, Z u, $$

where $g \in \mathbb{R}^d$ is a "gate" vector, which is also optimized. The gated ReLU activation modifies the ReLU activation to decouple the thresholding operator from the neuron weights. Two-layer gated ReLU networks predict as follows:

$$ h_{W_1, w_2}(Z) = \sum_{i=1}^{m} \phi_{g_i}(Z, W_{1i}) \, w_{2i}. \tag{6} $$

Mishkin et al. (2022) show that this gated ReLU neural network has the convex reformulation

$$ \min_{w} \; L\Big( \sum_{D_i \in \tilde{\mathcal{D}}} D_i Z w_i, \, y \Big) + \lambda \sum_{D_i \in \tilde{\mathcal{D}}} \|w_i\|_2, \tag{7} $$

where decoupling the activations from the neuron weights allows $u_i, v_i \in \mathcal{K}_i$ to be merged. The solution mapping for $w^*$ and conditions for the convex program to be equivalent to Equation 6 are similar to the ReLU case.
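The decoupling is easy to see in code. In the sketch below (our own NumPy illustration; names are ours), merging each neuron's weight and second-layer scale into $w_i := W_{1i} w_{2i}$ turns the gated ReLU prediction of Equation 6 into the linear-in-parameters model fit $\sum_i D_i Z w_i$ of Equation 7, since the gates, and hence the patterns $D_i$, are fixed.

```python
import numpy as np

def gated_relu_predict(Z, gates, W1, w2):
    """Two-layer gated ReLU prediction h(Z) from Eq. (6)."""
    out = np.zeros(Z.shape[0])
    for g_i, u_i, a_i in zip(gates, W1, w2):
        D_i = (Z @ g_i >= 0).astype(float)     # diagonal entries of D_i
        out += D_i * (Z @ u_i) * a_i           # phi_{g_i}(Z, u_i) * w_{2i}
    return out

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 3))
gates = rng.standard_normal((4, 3))
W1, w2 = rng.standard_normal((4, 3)), rng.standard_normal(4)

# Merge weights: with w_i = W1[i] * w2[i], the network output equals
# the convex model fit sum_i D_i Z w_i, which is linear in the w_i.
convex_fit = sum(((Z @ g >= 0).astype(float)) * (Z @ (u * a))
                 for g, u, a in zip(gates, W1, w2))
```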

3 The Constrained Group Lasso

In this section, we develop properties of CGL, a generalized linear model which captures both the convex ReLU and convex gated ReLU programs. Let $\mathcal{B} = \{b_1, \ldots, b_m\}$ be a disjoint partition of the feature indices $[d]$. Given a regularization parameter $\lambda \geq 0$, CGL solves the program:

$$ p^*(\lambda) = \min_{w} \; F_\lambda(w) := \frac{1}{2} \|X w - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \quad \text{s.t. } K_{b_i}^\top w_{b_i} \leq 0 \text{ for all } b_i \in \mathcal{B}, \tag{8} $$

where $K_{b_i} \in \mathbb{R}^{|b_i| \times a_{b_i}}$. Solutions to Equation 8 are block sparse when $\lambda$ is sufficiently large, meaning $w_{b_i} = 0$ for a subset of blocks. This is similar to the feature sparsity given by the lasso, to which CGL naturally reduces when $b_i = \{i\}$ and $K_{b_i} = 0$ for each $b_i \in \mathcal{B}$. Although we consider squared-error, our results generalize to strictly convex losses; see Appendix C for comments.

The convex reformulations introduced in the previous section are instances of CGL using the matrix $X = [D_1 Z \;\, \ldots \;\, D_p Z]$, where $p = |\tilde{\mathcal{D}}|$. For gated ReLU models, $K_{b_i} = 0$, while ReLU models set $K_{b_i} = -Z^\top (2 D_i - I)$. For both problems, block sparsity from the group $\ell_1$ penalty induces neuron sparsity in the final solution.

Our goal is to characterize the solution function of CGL,

$$ \mathcal{W}^*(\lambda) := \arg\min_{w :\, K_{b_i}^\top w_{b_i} \leq 0} F_\lambda(w). $$

For a general data matrix, $F_\lambda$ is not strictly convex and CGL may admit multiple solutions; these correspond to networks which are not related by permutation. As such, $\mathcal{W}^*$ is a point-to-set map and we must use a criterion to define a function; for instance, the min-norm solution mapping

$$ w^*(\lambda) = \arg\min \{ \|w\|_2 : w \in \mathcal{W}^*(\lambda) \} $$

defines a function for all $\lambda \geq 0$.

Now we introduce notation that will be used throughout this section. Let $\hat{y}(\lambda) = X w$ for $w \in \mathcal{W}^*(\lambda)$ denote the optimal model fit, which is the same for any choice of optimal $w$ (Lemma A.1). Similarly, define the optimal residual $r(\lambda) := y - \hat{y}(\lambda)$ and $c_{b_i}(\lambda) := X_{b_i}^\top r(\lambda)$ as the correlation vector for block $b_i$. We write $c \in \mathbb{R}^d$ for the concatenation of these block-vectors. Finally, let $\rho_{b_i}$ be the dual parameters for the constraint $K_{b_i}^\top w_{b_i} \leq 0$, $\rho$ their concatenation, and $K$ the block-diagonal matrix with blocks given by $K_{b_i}$.

The Lagrangian associated with Equation 8 is

$$ \mathcal{L}(w, \rho) = \frac{1}{2} \|X w - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 + \langle K \rho, w \rangle. \tag{9} $$

The constraints are linear, so strong duality holds whenever the problem is feasible, and the necessary and sufficient conditions for a primal-dual pair $(w, \rho)$ to be optimal are the KKT conditions:

$$ X_{b_i}^\top (X w - y) + K_{b_i} \rho_{b_i} + s_{b_i} = 0, \tag{10} $$
$$ K_{b_i}^\top w_{b_i} \leq 0, $$
$$ [\rho_{b_i}]_j \cdot [K_{b_i}]_j^\top w_{b_i} = 0 \quad \forall j \in [a_{b_i}], $$
$$ \rho_{b_i} \geq 0, $$

where $s_{b_i} \in \partial \lambda \|w_{b_i}\|_2$. Since the KKT conditions hold for every combination of optimal primal-dual pairs (Boyd & Vandenberghe, 2014), we always use the min-norm dual optimal parameter $\rho^*$ with no loss of generality. To simplify notation, we define $v_{b_i} := c_{b_i} - K_{b_i} \rho_{b_i}^*$. In what follows, all proofs are deferred to Appendix A.

3.1 Describing the Optimal Set

Stationarity of the Lagrangian implies that the equicorrelation set

$$ \mathcal{E}_\lambda = \{ b_i \in \mathcal{B} : \|v_{b_i}\|_2 = \lambda \} $$

contains all blocks which may be active for fixed $\lambda$. That is, the active set $\mathcal{A}_\lambda(w) = \{ b_i : w_{b_i} \neq 0 \}$ satisfies $\mathcal{A}_\lambda(w) \subseteq \mathcal{E}_\lambda$ for every $w \in \mathcal{W}^*(\lambda)$. However, not every block in $\mathcal{E}_\lambda$ need be non-zero in some solution. Thus, we define

$$ \mathcal{S}_\lambda = \{ b_i \in \mathcal{B} : \exists\, w \in \mathcal{W}^*(\lambda),\ w_{b_i} \neq 0 \}, $$

which is the set of blocks supported by some solution.

Our first result combines the KKT conditions with uniqueness of $\hat{y}(\lambda)$ to characterize the solution set for fixed $\lambda > 0$.

Proposition. Fix $\lambda > 0$. The optimal set for the CGL problem is given by

$$ \mathcal{W}^*(\lambda) = \big\{ w \in \mathbb{R}^d : \forall b_i \in \mathcal{S}_\lambda,\ w_{b_i} = \alpha_{b_i} v_{b_i},\ \alpha_{b_i} \geq 0; \quad \forall b_j \in \mathcal{B} \setminus \mathcal{S}_\lambda,\ w_{b_j} = 0; \quad X w = \hat{y} \big\}. \tag{11} $$

Since this characterization is implicit due to the dependence on $\mathcal{S}_\lambda$, we also give an alternative and explicit construction in Proposition A.2, which shows that when $K = 0$ we may replace $\mathcal{S}_\lambda$ with $\mathcal{E}_\lambda$. We prefer the characterization above to Proposition A.2 since it better mirrors this simpler setting. However, Proposition A.2 can be substituted wherever desired.

Now that we know the "shape" of the solution set, it is possible to obtain simple conditions for the existence of a unique solution. As an immediate consequence of the characterization above, the solution set varies only along directions in $\operatorname{Null}(X_{\mathcal{E}_\lambda})$.

Corollary 3.1.

If $w, w' \in \mathcal{W}^*(\lambda)$ and $z' = w - w'$, then

$$ z'_{\mathcal{E}_\lambda} \in \mathcal{N}_\lambda := \operatorname{Null}(X_{\mathcal{E}_\lambda}) \cap \{ z_{\mathcal{E}_\lambda} : \forall b_i \in \mathcal{E}_\lambda,\ z_{b_i} = \alpha_{b_i} v_{b_i} \}. $$

As a result, the group lasso solution is unique if $\mathcal{N}_\lambda = \{0\}$.

Corollary 3.1 extends a similar result for the lasso to CGL (Tibshirani, 2013, Eq. 9) and implies the solution is unique for all $\lambda \geq 0$ if the columns of $X$ are linearly independent. The corollary also provides a simple check for primal uniqueness given a primal-dual solution pair.

Lemma. Fix $\lambda > 0$. The solution to the CGL problem is unique if and only if $\{X_{b_i} v_{b_i}\}_{b_i \in \mathcal{S}_\lambda}$ are linearly independent.

Note that a dual solution $\rho$ is necessary to compute $v$ in general. By uniformizing over $v_{b_i}$, we obtain a stronger condition that can be checked whenever $\mathcal{E}_\lambda$ is known, yet is still weaker than linear independence of the columns of $X$.
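The certificate in the lemma is a rank computation once a solution is in hand. The sketch below (our own NumPy code for the unconstrained case $K = 0$, where $v_{b_i} = c_{b_i} = X_{b_i}^\top r$; we also use the active set of the given solution as a stand-in for $\mathcal{S}_\lambda$, an assumption that holds when the two coincide) stacks the vectors $X_{b_i} v_{b_i}$ and tests linear independence.

```python
import numpy as np

def is_unique_solution(X, blocks, w, y, tol=1e-8):
    """Uniqueness certificate for K = 0: the solution is unique iff
    {X_{b_i} v_{b_i}} over supported blocks are linearly independent,
    where v_{b_i} = X_{b_i}^T (y - X w) is the block correlation."""
    r = y - X @ w
    cols = []
    for b in blocks:
        if np.linalg.norm(w[b]) > tol:        # active block (proxy for S_lambda)
            v_b = X[:, b].T @ r
            cols.append(X[:, b] @ v_b)
    if not cols:
        return True                           # only the zero solution is supported
    M = np.column_stack(cols)
    return bool(np.linalg.matrix_rank(M, tol=tol) == M.shape[1])

# A tiny sanity check with one active block.
X = np.eye(4)
blocks = [[0, 1], [2, 3]]
w = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([2.0, 0.0, 0.0, 0.0])
```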

Corollary 3.2.

If the columns of $X_{\mathcal{E}_\lambda}$ are linearly independent, then the CGL problem has a unique solution.

Finally, we consider the special case when there are no constraints and $K = 0$. In this setting, $v_{b_i} = c_{b_i}$ (the dual parameters are trivially zero) and we can provide a global condition which is much weaker than linear independence.

Proposition (Group General Position). Suppose that for every $\mathcal{E} \subseteq \mathcal{B}$ with $|\mathcal{E}| \leq n + 1$, there do not exist unit vectors $z_{b_i} \in \mathbb{R}^{|b_i|}$ such that for any $b_j \in \mathcal{E}$,

$$ X_{b_j} z_{b_j} \in \operatorname{affine}(\{ X_{b_i} z_{b_i} : b_i \in \mathcal{E} \setminus b_j \}). $$

Then the group lasso solution is unique for all $\lambda > 0$.

We call this uniqueness condition group general position (GGP) because it naturally extends general position to groups of vectors. General position is itself an extension of affine independence and is sufficient for the lasso solution to be unique (Tibshirani, 2013). GGP is strictly weaker than linear independence of the columns of $X$, but neither implies nor is implied by general position (Proposition A.3).

3.2 Computing Dual Optimal Parameters

The main difficulty of Corollary 3.1 is that knowledge of a dual optimal parameter is required to check if a unique solution exists. A dual optimal parameter is also required to fully leverage our characterization of the optimal set. As such, we now turn to computing optimal dual parameters.

We give one Lagrange dual problem for CGL in Lemma A.4. A nice feature of this dual problem is that $v_{b_i}$ attains an alternative interpretation as a dual variable. However, evaluating the dual requires computing $(X^\top X)^+$, which may be difficult even if $X$ is structured, as in the convex ReLU program. Instead, we focus on computing $\rho$ given a primal solution.

Let ๐‘ค โˆˆ ๐’ฒ * โข ( ๐œ† ) . If ๐‘ค ๐‘ ๐‘– โ‰  0 , then KKT conditions imply

๐พ ๐‘ ๐‘– โข ๐œŒ ๐‘ ๐‘–

๐‘ ๐‘ ๐‘– โˆ’ ๐œ† โข ๐‘ค ๐‘ ๐‘– โ€– ๐‘ค ๐‘ ๐‘– โ€– 2 ,

(12)

so that the โ€œdual fitโ€ ๐‘‘ ^ ๐‘ ๐‘–

๐พ ๐‘ ๐‘– โข ๐œŒ ๐‘ ๐‘– is easily computed. Recovering the dual parameter is a linear feasibility problem:

๐œŒ ๐‘ ๐‘– โˆˆ { ๐œŒ ๐‘ ๐‘– โ‰ฅ 0 : ๐พ ๐‘ ๐‘– โข ๐œŒ ๐‘ ๐‘–

๐‘‘ ^ ๐‘ ๐‘– } .

(13)

If ๐‘ค ๐‘ ๐‘–

0 , then complementary slackness is trivially satisfied and we compute the min-norm dual parameter by solving the following program:

min โก { โ€– ๐œŒ ๐‘ ๐‘– โ€– 2 : โ€– ๐‘ ๐‘ ๐‘– โˆ’ ๐พ ๐‘ ๐‘– โข ๐œŒ ๐‘ ๐‘– โ€– 2 โ‰ค ๐œ† , ๐œŒ ๐‘ ๐‘– โ‰ฅ 0 } .

(14)

In general, however, we only need some dual optimal parameter for our results to hold; thus, is is typically easier to find ๐œŒ by solving the following non-negative regression:

๐œŒ ๐‘ ๐‘–

arg โข min { โˆฅ ๐พ ๐‘ ๐‘– ๐œŒ ๐‘ ๐‘– โˆ’ ๐‘ ๐‘ ๐‘– โˆฅ 2 2 : ๐œŒ ๐‘ ๐‘– โ‰ฅ 0 } .

(15)

See Proposition A.5 for details.
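The non-negative regression in Equation 15 is a standard NNLS problem. The sketch below (our own dependency-free projected-gradient solver; `scipy.optimize.nnls` would serve the same role) recovers a dual parameter from a block matrix $K_{b_i}$ and correlation vector $c_{b_i}$.

```python
import numpy as np

def nnls_pg(K, c, iters=5000, tol=1e-10):
    """Solve min_{rho >= 0} ||K rho - c||_2^2 (Eq. 15) by projected
    gradient descent with step 1/L, L the gradient's Lipschitz constant."""
    L = np.linalg.norm(K, 2) ** 2 + 1e-12
    rho = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ rho - c)
        rho_new = np.maximum(rho - grad / L, 0.0)   # project onto rho >= 0
        if np.linalg.norm(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return rho

# Recover a dual parameter for a consistent system K rho = c.
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 3))
rho_true = np.abs(rng.standard_normal(3))
c = K @ rho_true
rho = nnls_pg(K, c)
```

Because a non-negative solution exists by construction here, the residual $\|K\rho - c\|_2$ is driven to (numerical) zero, matching the feasibility problem of Equation 13.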

3.3 Minimal Solutions and Optimal Pruning

Often we want the most parsimonious solution, i.e. the one using the fewest feature groups. We say a primal solution $w$ is minimal if there does not exist $w' \in \mathcal{W}^*(\lambda)$ such that $\mathcal{A}_\lambda(w') \subsetneq \mathcal{A}_\lambda(w)$. Building on the previous section, we start with a characterization of minimal solutions.

Proposition. For $\lambda > 0$, $w \in \mathcal{W}^*(\lambda)$ is minimal if and only if the vectors $\{X_{b_i} w_{b_i}\}_{b_i \in \mathcal{A}(w)}$ are linearly independent.

Linear independence of $\{X_{b_i} w_{b_i}\}_{b_i \in \mathcal{A}(w)}$ also identifies $w$ as a vertex of $\mathcal{W}^*(\lambda)$ (Bertsekas, 2009), meaning minimal models are exactly the extreme points of the optimal set. Combining this characterization with our condition for uniqueness of a solution (Corollary 3.1) shows that minimal solutions are the only solutions on their support.

Corollary 3.3.

Suppose $w$ is a minimal solution. Then $w$ is the unique solution with support $\mathcal{A}_\lambda(w)$.

If $X$ satisfies group dependence (Definition A.7), then all minimal solutions have the same number of active blocks.

Proposition. Let $\mathcal{V} = \operatorname{Span}(\{X_{b_i} \bar{w}_{b_i}\})$ for $\bar{w} \in \mathcal{W}^*(\lambda)$. If $X$ satisfies group dependence, then every minimal solution has $c = \dim(\mathcal{V})$ active blocks.

Algorithm 1: Optimal Solution Pruning

Input: data matrix $X$, solution $w$.
$k \leftarrow 0$; $w^k \leftarrow w$.
While $\exists\, \beta \neq 0$ s.t. $\sum_{b_i \in \mathcal{A}_\lambda(w^k)} \beta_{b_i} X_{b_i} w^k_{b_i} = 0$:
  $b_{i_k} \leftarrow \arg\max_{b_i} \{ |\beta_{b_i}| : b_i \in \mathcal{A}_\lambda(w^k) \}$
  $t_k \leftarrow 1 / |\beta_{b_{i_k}}|$
  $w^{k+1}_{b_i} \leftarrow w^k_{b_i} (1 - t_k \beta_{b_i})$ for each $b_i \in \mathcal{A}_\lambda(w^k)$
  $k \leftarrow k + 1$
Output: final weights $w^k$.

Algorithm 1 gives a procedure which, starting from any optimal solution $w$, computes an optimal model with minimal support in $O(n^3 l + n d)$ time, where $l$ is the number of active blocks in $w$ (see Proposition A.6). Moreover, if the blocks of $X$ satisfy a regularity condition, then the starting model has no effect on the cardinality of the minimal solution found. Our algorithm can also be used to verify a minimal solution, since if $w$ is minimal then it is unique on its support and Algorithm 1 must return $w$ immediately. This procedure also implies the existence of at least one minimal solution.
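A sketch of Algorithm 1 is below (our own NumPy implementation; names are ours, and we obtain the null-space combination $\beta$ from an SVD rather than an abstract existence check). Each pass finds $\beta$ with $\sum_{b_i} \beta_{b_i} X_{b_i} w_{b_i} = 0$ and rescales blocks so that one block is zeroed while the model fit $Xw$ is preserved; the loop stops when the active fitted directions are linearly independent, i.e. when the solution is minimal.

```python
import numpy as np

def prune_to_minimal(X, blocks, w, tol=1e-10):
    """Prune an optimal CGL solution to minimal support (Algorithm 1 sketch).

    Each iteration zeroes one active block along a null-space direction
    of the stacked fitted vectors A = [X_{b_i} w_{b_i}], so X w is unchanged."""
    w = w.copy()
    while True:
        active = [b for b in blocks if np.linalg.norm(w[b]) > tol]
        if not active:
            return w
        A = np.column_stack([X[:, b] @ w[b] for b in active])
        if np.linalg.matrix_rank(A, tol=tol) == A.shape[1]:
            return w                      # columns independent: w is minimal
        beta = np.linalg.svd(A)[2][-1]    # null-space direction, A beta ~ 0
        j = int(np.argmax(np.abs(beta)))
        t = 1.0 / beta[j]                 # step sized (and signed) to zero block j
        for b, beta_b in zip(active, beta):
            w[b] = w[b] * (1.0 - t * beta_b)

# Two blocks contributing the same fitted direction: one can be pruned.
a = np.array([1.0, 2.0, 0.0])
X = np.column_stack([a, a])
blocks = [[0], [1]]
w = np.array([1.0, 1.0])
w_min = prune_to_minimal(X, blocks, w)
```

Since the largest $|\beta_{b_i}|$ is selected, every other block is rescaled by a factor in $[0, 2]$, matching the step rule $t_k = 1/|\beta_{b_{i_k}}|$ in the algorithm.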

Corollary 3.4.

There exists $w \in \mathcal{W}^*(\lambda)$ for which the vectors $\{X_{b_i} w_{b_i}(\lambda) : b_i \in \mathcal{A}(w)\}$ are linearly independent.

Corollary 3.4 will be a useful tool later when we study sensitivity of the model fit to perturbations in $y$ and $\lambda$.

A disadvantage of Algorithm 1 is that it cannot continue beyond a minimal solution. However, minimal models may still be quite large. We can perform approximate pruning in such cases using the least-squares fit to approximate $\beta$,

$$ \tilde{\beta} = \arg\min \|A \beta - X_{b_j} w_{b_j}\|_2^2, $$

where $A = [X_{b_i} w_{b_i}]_{\mathcal{A}_\lambda \setminus b_j}$ and $b_j \in \mathcal{A}_\lambda$ is chosen randomly. Using $\tilde{\beta}$ in Algorithm 1 is optimal when $\{X_{b_i} w_{b_i}\}_{\mathcal{A}_\lambda}$ are dependent and chooses the update parameters to minimize degradation of the model fit otherwise.

3.4 Continuity of the Solution Path

A major concern when learning with regularizers is how to tune the parameter $\lambda$. Typical strategies like grid-search on the (cross) validation loss are effective only if the solution function satisfies basic continuity properties. For example, if $\mathcal{W}^*$ is single-valued but discontinuous in $\lambda$, then the sample complexity of grid-search can be made arbitrarily poor by "hiding" the optimal $\lambda$ in a discontinuity (Nesterov et al., 2018, Sec. 1.1). In this section, we justify grid-search for CGL by proving several continuity properties of the solution function, particularly when the solution is unique. We start with basic definitions of continuity for point-to-set maps.

Definition 3.5 (Closed).

$T : \mathcal{X} \to 2^{\mathcal{Z}}$ is closed if $\{x_k\} \subset \mathcal{X}$, $x_k \to \bar{x}$ and $z_k \in T(x_k)$, $z_k \to \bar{z}$ implies $\bar{z} \in T(\bar{x})$.

Definition 3.6 (Open).

$T : \mathcal{X} \to 2^{\mathcal{Z}}$ is open if $\{x_k\} \subset \mathcal{X}$, $x_k \to \bar{x}$ and $\bar{z} \in T(\bar{x})$ implies there exist $k' \in \mathbb{N}$ and $z_k \in T(x_k)$ for $k \geq k'$ such that $z_k \to \bar{z}$.

We say that $T$ is continuous if it is both closed and open. If $T(x)$ is a singleton for all $x \in \mathcal{X}$, then openness and closedness are equivalent and imply continuity. We start with (functional) continuity of the optimal objective.

Proposition. $\lambda \mapsto p^*(\lambda)$ is continuous for all $\lambda \geq 0$.

While standard sensitivity results imply that $\mathcal{W}^*$ is closed, unfortunately openness is not possible in the general setting.

Proposition. While $\mathcal{W}^*$ is closed on $\mathbb{R}_+$, it is open if and only if $X$ is full column rank. However, if the solution is unique on $\Lambda \subset \mathbb{R}_+$, then $\mathcal{W}^*$ is open at every $\lambda \in \Lambda$.

As a corollary, $\mathcal{W}^*$ is open on $\mathbb{R}_+$ if and only if $X$ is full column rank. Continuity of $\mathcal{W}^*$ is impossible in general because, as Hogan (1973) shows, openness is a local stability property; since $\mathcal{W}^*(0)$ is unbounded, many "unstable" solutions exist at $\lambda = 0$ which are not limit points of other solutions. Continuity of the unique solution path is an immediate corollary of this proposition.

Corollary 3.7.

If the CGL solution is unique on an interval $\Lambda \subset \mathbb{R}_+$, then it is also continuous on $\Lambda$.

In particular, if $K = 0$ and GGP holds, then the group lasso solution is continuous for all $\lambda > 0$. We can strengthen our continuity results when $K_{b_i} = 0$ in another way: by analyzing the dual of the group lasso problem, we extend continuity from $p^*$ to the optimal model fit.

Proposition. If $K = 0$, then $\hat{y}(\lambda)$ is continuous on $\mathbb{R}_+$ and the penalty $\sum_{b_i \in \mathcal{B}} \|w_{b_i}(\lambda)\|_2$ is continuous for $\lambda > 0$.

3.5 The Min-Norm Path

Now we turn our attention to the min-norm solution path. Min-norm solutions are typically used in under-determined problems since the norm of the solution is connected to generalization (Neyshabur et al., 2015; Gunasekar et al., 2017). Furthermore, the min-norm solution is a function of $\lambda$, unlike $\mathcal{W}^*$. Throughout this section, $\mathcal{A}^*_\lambda = \mathcal{A}_\lambda(w^*)$ denotes the active set of the min-norm solution.

Unfortunately, studying the min-norm path immediately encounters a surprising difficulty: as opposed to least-squares problems or the lasso (see Tibshirani (2013)), the min-norm solution may not lie in the row space of the active set.

Proposition. Suppose $K_{b_i} = 0$. There exist $(X, y)$ and $\lambda > 0$ such that $w^*_{\mathcal{A}^*_\lambda}(\lambda) \notin \operatorname{Row}(X_{\mathcal{A}^*_\lambda})$.

Since the min-norm solution is not given by projecting onto $\operatorname{Row}(X_{\mathcal{A}^*_\lambda})$, how can we compute and study it? Again, our characterization of the solution set provides a way forward.

Proposition. Let $\lambda > 0$ and consider the program:

$$ \alpha^* = \arg\min_{\alpha \geq 0} \|\alpha\|_2^2 \quad \text{s.t. } \sum_{b_i \in \mathcal{S}_\lambda} \alpha_{b_i} X_{b_i} v_{b_i} = \hat{y}. \tag{16} $$

Then the min-norm solution is given by $w^*_{b_i} = \alpha^*_{b_i} v_{b_i}$.

Equation 16 is a quadratic program (QP) that can be solved with off-the-shelf software like cvxpy (Diamond & Boyd, 2016). If $|\mathcal{B}|$ and $d$ are large, this QP may be too expensive to handle directly. In such situations, we propose to solve the following elastic-net-type problem:

min ๐‘ค

1 2 โข โ€– ๐‘‹ โข ๐‘ค โˆ’ ๐‘ฆ โ€– 2 2 + ๐œ† โข โˆ‘ ๐‘ ๐‘– โˆˆ โ„ฌ โ€– ๐‘ค ๐‘ ๐‘– โ€– 2 + ๐›ฟ 2 โข โ€– ๐‘ค โ€– 2 2

(17)

s.t. ๐พ ๐‘ ๐‘– โŠค โข ๐‘ค ๐‘ ๐‘– โ‰ค 0 โข  for all  โข ๐‘ ๐‘– โˆˆ โ„ฌ .

This $\ell_2$-penalized CGL problem is equivalent to CGL with a modified dataset $(\tilde{X}, \tilde{y})$ (Lemma A.13). Since the optimization problem is strongly convex ($\tilde{X}$ is full column rank), the continuity results of the previous section imply that the solution $w_\delta(\lambda)$ is continuous for all $\lambda \geq 0$. Moreover, as $\delta \to 0$, the penalized solution converges to the min-norm solution of CGL.

Proposition. The solution to the $\ell_2$-penalized problem converges to the min-norm solution as $\delta \to 0$. That is,

$$ \lim_{\delta \to 0} w_\delta(\lambda) = w^*(\lambda). $$

Uniqueness and continuity of the solution path for the penalized CGL problem mean we may prefer to solve Problem (17) with small $\delta > 0$ when tuning $\lambda$. The proposition above guarantees that the bias induced by $\delta$ will be small, and a polishing step with $\delta = 0$ can always be used. Finally, non-zero $\delta$ ensures the objective is strongly convex, meaning we can use linearly convergent methods to solve the problem.
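The $\delta \to 0$ limit can be illustrated in the simplest special case: with no constraints and no group penalty ($K = 0$, $\lambda = 0$), the $\delta$-path is the ridge path, which converges to the minimum-norm interpolant $X^+ y$. The NumPy sketch below (ours; an illustration of the limiting behavior, not of the full CGL solver) tracks this convergence on an under-determined system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))      # under-determined: infinitely many solutions
y = rng.standard_normal(4)

# The min-norm solution, computed directly via the pseudo-inverse.
w_min = np.linalg.pinv(X) @ y

# The delta-penalized solutions approach w_min as delta -> 0.
for delta in [1e-1, 1e-3, 1e-6]:
    w_delta = np.linalg.solve(X.T @ X + delta * np.eye(8), X.T @ y)
```

In the full CGL setting the same polishing logic applies: solve Problem (17) with a small $\delta$ for a unique, continuous path, then optionally re-solve with $\delta = 0$ to remove the residual bias.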

3.6 Sensitivity

Now we move on to the sensitivity of a solution $w \in \mathcal{W}^*(\lambda, y)$ to perturbations, either in $\lambda$ or the targets $y$. The main tools for measuring such perturbations are gradients, for example $\nabla_\lambda w(\lambda, y)$. However, since the solution path of the group lasso is non-smooth, we must cope with the fact that gradients are not available everywhere.

We show that the gradients of minimal solutions exist almost everywhere under additional constraint qualifications (CQs). We do so by considering a reduced problem and showing that the solution to this reduced problem is exactly $w_{\mathcal{A}_\lambda}$. Define the reduced problem as follows:

$$\min_{w_{\mathcal{A}_\lambda}} \; \frac{1}{2}\|X_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda} - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{A}_\lambda} \|w_{b_i}\|_2 \quad \text{s.t.} \quad K_{\mathcal{A}_\lambda}^\top w_{\mathcal{A}_\lambda} \leq 0. \tag{18}$$

If ๐’œ ๐œ† โข ( ๐‘ค ) is the support of a minimal solution, then ๐‘ค is the only solution with support ๐’œ ๐œ† and Equation 18 can be used to compute the unique active weights. {restatable}propositionconstraintReduced Let ๐‘ค โˆˆ ๐’ฒ * โข ( ๐œ† , ๐‘ฆ ) be minimal. The active blocks ๐‘ค ๐’œ ๐œ† are the unique solution to Problem (18). We use this fact to obtain a local solution function for CGL using the implicit function theorem. Given a solution ๐‘ค , let

$$\mathcal{B}(w) = \bigcup_{b_i \in \mathcal{A}_\lambda} \left\{ j \in [a_{b_i}] : [K_{b_i}]_j^\top w_{b_i} = 0 \right\}$$

be the active constraints. We now need two classical CQs.

Definition 3.8 (LICQ).

A point $w \in \mathcal{W}^*(\lambda, y)$ satisfies the linear independence CQ if $\{[K]_j : j \in \mathcal{B}(w)\}$ are linearly independent.

Definition 3.9 (SCS).

A primal solution $w \in \mathcal{W}^*(\lambda)$ satisfies strict complementary slackness if there exists a dual optimal parameter $\rho$ such that $[\rho]_j > 0$ for every $j \in \mathcal{B}$.

Now we can state our main differential sensitivity result.

Proposition (localSolFn). Let $w \in \mathcal{W}^*(\bar{\lambda}, \bar{y})$ be minimal and suppose $w$ satisfies LICQ on the active set $\mathcal{A}_\lambda$ and SCS on the equicorrelation set $\mathcal{E}_\lambda$. Then $w$ has a locally continuous solution function $(\lambda, y) \mapsto w(\lambda, y)$. Moreover, if

$$D = \begin{bmatrix} X_{\mathcal{A}_\lambda}^\top X_{\mathcal{A}_\lambda} + M(\bar{w}) & K_{\mathcal{A}_\lambda} \\ \bar{\rho}_{\mathcal{A}_\lambda} \odot K_{\mathcal{A}_\lambda} & \operatorname{diag}(K_{\mathcal{A}_\lambda}^\top \bar{w}_{\mathcal{A}_\lambda}) \end{bmatrix},$$

where $\odot$ is the element-wise product, $u_{b_i} = w_{b_i} / \|w_{b_i}\|_2$, $u$ is the concatenation of these vectors, and $M$ is the block-diagonal projection matrix in Equation 26, then the Jacobians of $w(\bar{\lambda}, \bar{y})$ with respect to $\lambda$ and $y$ are given as follows:

$$\nabla_\lambda w(\bar{\lambda}, \bar{y}) = -[D^{-1}]_{\mathcal{A}_\lambda} u_{\mathcal{A}_\lambda}, \qquad \nabla_y w(\bar{\lambda}, \bar{y}) = [D^{-1}]_{\mathcal{A}_\lambda} X_{\mathcal{A}_\lambda}^\top,$$

where $[D^{-1}]_{\mathcal{A}_\lambda}$ is the $|\mathcal{A}_\lambda| \times |\mathcal{A}_\lambda|$-dimensional leading principal submatrix of $D^{-1}$.

4 Specialization to Neural Networks

Algorithm 2 Approximate ReLU Pruning

Input: data matrix $Z$, weights $W_1, w_2$, score function $s$.
$m \leftarrow |\mathcal{A}_\lambda(W_1)|$
$(W_1^0, w_2^0) \leftarrow (W_1, w_2)$
$q_i^0 \leftarrow (Z W_{1i}^0)_+ \, w_{2i}^0$
for $k = 0$ to $m - 1$ do
  $j_k \leftarrow \arg\min_{i \in \mathcal{A}_\lambda(W_1^k)} s(W_{1i}^k)$
  $\beta^k \leftarrow \arg\min_{\beta} \big\| \textstyle\sum_{i \neq j_k} \beta_i q_i^k - q_{j_k}^k \big\|_2^2$
  $i_k \leftarrow \arg\max_i \{ |\beta_i| : i \in \mathcal{A}_\lambda(W_1^k) \}$
  $t_k \leftarrow 1 / |\beta_{i_k}|$
  $(W_{1i}^{k+1}, w_{2i}^{k+1}) \leftarrow (W_{1i}^k, w_{2i}^k) \cdot (1 - t_k \beta_i)^{1/2}$
  $q_i^{k+1} \leftarrow q_i^k \cdot (1 - t_k \beta_i)$
end for
Output: final weights $W_1^k, w_2^k$

Now we specialize our results for CGL to two-layer neural networks with ReLU or gated ReLU activations. We state and prove our results for ReLU networks, but they are easily adapted to gated ReLUs. We start by interpreting conditions for uniqueness in the context of non-convex ReLU models and then move on to discussing optimal pruning for ReLU networks and continuity properties of the solution function. Proofs are deferred to Appendix B.

Optimal Sets and Uniqueness: Combining the mapping between solutions for the convex reformulation and the original non-convex training problem (Equation 5) and Section 3.1 immediately allows us to characterize the solution set for the full ReLU problem. Theorem (reluSolFn). Suppose $m \geq m^*$ and $\tilde{\mathcal{D}} = \mathcal{D}_Z$ (no sub-sampling), with $p = |\mathcal{D}_Z|$. Then the optimal set for the ReLU problem, up to permutation/splitting symmetries, is

$$\mathcal{O}_\lambda = \Big\{ (W_1, w_2) : f_{W_1, w_2}(Z) = \hat{y}, \; W_{1i} = (\alpha_i/\lambda)^{1/2} v_i, \; w_{2i} = \xi_i (\alpha_i \lambda)^{1/2}, \; \alpha_i \geq 0, \; i \in [2p] \setminus \mathcal{S}_\lambda \Rightarrow \alpha_i = 0 \Big\}. \tag{19}$$

Eq. (19) abuses notation by using $\mathcal{S}_\lambda$ as a subset of the neuron indices $\{1, \ldots, 2p\}$, where $\{1, \ldots, p\}$ index the positive neurons, for which $\xi_i = 1$ (corresponding to blocks $D_i Z$), and $\{p+1, \ldots, 2p\}$ index the negative neurons (corresponding to $-D_i Z$), for which $\xi_i = -1$. Figure 1 plots the first feature of three neurons as they vary over this solution set. The mapping from convex to non-convex parameterization transforms the polytope of solutions into a curved manifold.
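The convex-to-non-convex mapping in Eq. (19) can be sketched directly; here $\alpha$, $V$, and $\xi$ are hypothetical convex-side quantities, with each $v_i$ placed on the equicorrelation sphere $\|v_i\|_2 = \lambda$:

```python
import numpy as np

def convex_to_relu(alpha, V, xi, lam):
    """Map a convex solution to ReLU weights per Eq. (19):
    W_{1i} = (alpha_i / lam)^{1/2} v_i and w_{2i} = xi_i (alpha_i lam)^{1/2}."""
    W1 = np.sqrt(alpha / lam)[None, :] * V  # columns are the neurons W_{1i}
    w2 = xi * np.sqrt(alpha * lam)
    return W1, w2
```

Because $\|v_i\|_2 = \lambda$ on the equicorrelation set, every mapped neuron is balanced: $\|W_{1i}\|_2 = |w_{2i}| = (\alpha_i \lambda)^{1/2}$.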

Choosing a sub-sampled set of patterns $\tilde{\mathcal{D}} \subset \mathcal{D}_Z$ corresponds to finding a stationary point of the non-convex training problem (Wang et al., 2021). Using this fact with Algorithm 2 finally justifies the description of all stationary points of the ReLU problem given in the introduction. Proposition (stationaryPoints). The set of stationary points of two-layer ReLU networks up to permutation/splitting symmetries is

$$\mathcal{C}_\lambda = \Big\{ (W_1, w_2) : \tilde{\mathcal{D}} \subseteq \mathcal{D}_Z, \; f_{W_1, w_2}(Z) = \hat{y}_{\tilde{\mathcal{D}}}, \; W_{1i} = (\alpha_i/\lambda)^{1/2} v_i(\tilde{\mathcal{D}}), \; w_{2i} = \xi_i (\alpha_i \lambda)^{1/2}, \; \alpha_i \geq 0, \; i \in [2|\tilde{\mathcal{D}}|] \setminus \mathcal{S}_\lambda \implies \alpha_i = 0 \Big\},$$

where $\tilde{\mathcal{D}}$ are sub-sampled activation patterns, $\hat{y}_{\tilde{\mathcal{D}}}$ is the optimal model fit using those patterns, and $v_i(\tilde{\mathcal{D}}) = c_{b_i}(\tilde{\mathcal{D}}) - K_{b_i} \rho_{b_i}(\tilde{\mathcal{D}})$ is determined by the fit and the dual parameters.

We note that since deeper networks are also related to CGL through convex reformulations (Ergen & Pilanci, 2021a), Algorithm 2 may also be applied beyond two layers.

Recall that the solution set for a two-layer ReLU network is typically not unique due to model symmetries. However, if the convex solution is unique, then the non-convex ReLU training problem is p-unique. Combining this with our results for CGL gives the following sufficient conditions.

Proposition (pUnique). Let $\lambda > 0$ and suppose that the convex ReLU problem has a unique solution. Then the ReLU model solution is p-unique. In particular, if $\{D_i Z v_{b_i}\}_{\mathcal{E}_\lambda}$ are linearly independent, then the non-convex solution is p-unique.

For gated ReLU networks, it is also sufficient to check the blocks $[D_i X]_{D_i \in \tilde{\mathcal{D}}}$ to see if they satisfy GGP. By looking at the structure of $D_i X$, we give simple sufficient conditions for sub-sampled convex ReLU programs to be p-unique.

Proposition (reluUnique). Let $\lambda > 0$ and $p = |\tilde{\mathcal{D}}|$. Suppose $Z$ follows a continuous probability distribution and $\operatorname{nnz}(D_i) \geq p \cdot d$ for every $D_i \in \tilde{\mathcal{D}}$. If $\mathcal{E}_\lambda$ does not contain two blocks with the same activation pattern, then the sub-sampled convex ReLU program has a p-unique solution almost surely.

Algorithm 2 requires $n$ to be much greater than $d$ to be useful due to the trivial bound $\operatorname{nnz}(D_i) \leq n$. In practice, the condition on activation patterns can be enforced by constraining $v_i = 0$ or $w_i = 0$ for each activation pattern before solving the convex reformulation.
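The condition that no two blocks share an activation pattern is easy to check numerically; a hypothetical helper (our names, not the paper's):

```python
import numpy as np

def has_duplicate_patterns(Z, W1):
    """Return True if two neurons induce the same pattern D_i = diag(1[Z W_{1i} > 0])."""
    patterns = Z @ W1 > 0            # n x m boolean matrix, one column per neuron
    unique = np.unique(patterns, axis=1)
    return unique.shape[1] < patterns.shape[1]
```

Two neurons whose first-layer weights lie in the same activation cone trigger the check, while sign-distinct neurons do not.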

Table 1: Tuning neural networks by searching over the optimal set. We fit two-layer ReLU networks on the training set and compute the minimum $\ell_2$-norm solution (Min L2). Then we tune by finding an extreme point approximating the maximum $\ell_2$-norm solution (EP), minimizing validation MSE over the optimal set (V-MSE), and minimizing test MSE over the optimal set (T-MSE). Results show median test accuracy; Max Diff. reports the difference between the best and worst models found. Exploring the optimal set reveals a huge disparity in the performance of optimal networks, with the generalization gap exceeding 20 points on four datasets.

| Dataset | Min L2 | EP | V-MSE | T-MSE | Max Diff. |
| --- | --- | --- | --- | --- | --- |
| fertility | 0.66 | 0.69 | 0.65 | 0.64 | 0.05 |
| heart-hung. | 0.75 | 0.75 | 0.71 | 0.85 | 0.14 |
| mammogr. | 0.77 | 0.77 | 0.57 | 0.78 | 0.21 |
| monks-1 | 0.67 | 0.66 | 0.49 | 0.57 | 0.17 |
| planning | 0.53 | 0.52 | 0.53 | 0.7 | 0.17 |
| spectf | 0.64 | 0.64 | 0.56 | 0.58 | 0.08 |
| horse-colic | 0.75 | 0.59 | 0.74 | 0.85 | 0.26 |
| ilpd-indian | 0.59 | 0.59 | 0.53 | 0.72 | 0.19 |
| parkinsons | 0.74 | 0.74 | 0.65 | 0.88 | 0.23 |
| pima | 0.68 | 0.68 | 0.68 | 0.87 | 0.2 |

Figure 2: Pruning neurons from two-layer ReLU networks on binary classification tasks from the UCI repository. We compare our theory-inspired approach (Optimal/LS) against removing the neuron with the smallest $\ell_2$ norm (Neuron Magnitude), removing the neuron with the smallest weighted gradient norm (Gradient Magnitude), and random pruning (Random). For Optimal/LS, we use Algorithm 2, which begins with optimal pruning and then switches to a least-squares heuristic. We plot test accuracy against the number of active neurons. Optimal/LS dominates the baseline methods on every dataset and even improves test accuracy on breast-cancer and fertility.

Pruning: If $(v, u)$ is a minimal solution to the convex reformulation, then the corresponding ReLU network is the p-unique model using only those activation patterns (Propositions 3.3 and 2). Thus, Algorithm 1 can be used to prune any solution to obtain a neuron-sparse neural network achieving the optimal training objective. Algorithm 2 specializes our pruning algorithm to the ReLU problem and extends it to support approximate pruning. Note that the resulting procedure is completely independent of the convex reformulation. The complexity of this method is as follows.
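The "prune, then refit" idea behind approximate pruning can be sketched in a few lines; the magnitude score and names here are illustrative, not the paper's exact procedure:

```python
import numpy as np

def prune_neuron_ls(Z, W1, w2):
    """Drop the lowest-scoring neuron, then refit the second layer by least
    squares on the remaining activations so the output change is minimized."""
    A = np.maximum(Z @ W1, 0.0)                 # activations (Z W_{1i})_+
    y_fit = A @ w2                              # current model output
    scores = np.linalg.norm(W1, axis=0) * np.abs(w2)
    j = int(np.argmin(scores))                  # neuron to remove
    keep = [i for i in range(W1.shape[1]) if i != j]
    w2_new, *_ = np.linalg.lstsq(A[:, keep], y_fit, rcond=None)
    return W1[:, keep], w2_new
```

If the removed neuron is an exact duplicate of a remaining one, the least-squares refit reproduces the original outputs exactly.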

Proposition (minimalComplexity). Suppose $r = \operatorname{rank}(X)$. Then an optimal and minimal ReLU network with at most $m^* \leq n$ non-zero neurons can be computed in $O(d^3 r^3 (n/r)^{3r})$ time.

As a consequence, the complexity of computing an optimal and minimal ReLU network is fully polynomial when $r$ is bounded. We also have a more sensitive statement for the minimal width: if $(W_1^*, w_2^*)$ are optimal weights for the ReLU model, then $m^*$ is at most the dimension of $\operatorname{Span}\{(X W_{1i}^*)_+\}_i$, and exactly this dimension under group dependence (Corollary 3.3). We experiment with pruning ReLU networks using this approach in Section 5 and show that it is more effective than naive pruning strategies.

Continuity: First we give a negative result for singular networks, that is, models where $m < m^*$ and no convex reformulation exists. In this setting, the solution map can be made to behave arbitrarily poorly. Proposition (oneNeuron). There exists $(Z, y)$ for which $\mathcal{O}^*$ is not open, nor is the model fit $f_{W_1, w_2}(Z)$ continuous in $\lambda$. Combined with the next result, Figure 2 indicates that the threshold $m^*$ may be crucial for continuity to extend to the non-convex parameterization.

Corollary 4.1.

Suppose $m \geq m^*$. Then the optimal model fit for two-layer gated ReLU networks is continuous at all $\lambda \geq 0$. Similarly, if the (gated) ReLU solution is p-unique on an open interval $\Lambda$, then the regularization path is also continuous on $\Lambda$ up to permutations of the weights.

Together, Corollary 4.1 and Algorithm 2 give concrete conditions for the model fit and regularization path of a sub-sampled problem to be continuous.

Min-Norm Solutions: In Section 3.5, we examined minimum $\ell_2$-norm solutions to CGL. However, all optimal ReLU models have the same $\ell_2$-norm when $\lambda > 0$. Minimizing the Euclidean norm of solutions to the convex reformulation instead minimizes the sum of norms to the fourth power. Lemma (minNormNN). The minimum $\ell_2$-norm solution to the convex reformulation of a (gated) ReLU model corresponds to the optimal neural network which minimizes

$$r(W_1, w_2) = \sum_{i=1}^m \|W_{1i}\|_2^4 + \|w_{2i}\|_2^4,$$

and admits no neuron-merge symmetries.

As a result, we can compute the $r$-minimal optimal ReLU network by solving Problem (16). If $\mathcal{S}_\lambda$ is unknown, then using $\mathcal{A}_\lambda(w)$ for some solution $w$ as an approximation gives the $r$-minimal network using a subset of those activations.
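Under the mapping of Eq. (19), with every $v_i$ on the equicorrelation sphere ($\|v_i\|_2 = \lambda$), the network penalty $r$ and the squared convex $\ell_2$-norm agree up to a factor of two: $r(W_1, w_2) = 2\|w\|_2^2$. A small numerical check, using synthetic values of our own construction:

```python
import numpy as np

lam = 0.7
d, m = 4, 3
rng = np.random.default_rng(4)
V = rng.standard_normal((d, m))
V = lam * V / np.linalg.norm(V, axis=0)   # enforce ||v_i||_2 = lam
alpha = np.array([0.3, 1.2, 0.0])
xi = np.array([1.0, -1.0, 1.0])

# Non-convex weights per Eq. (19) and the corresponding convex blocks w_{b_i} = alpha_i v_i.
W1 = np.sqrt(alpha / lam)[None, :] * V
w2 = xi * np.sqrt(alpha * lam)
w_blocks = alpha[None, :] * V

r = np.sum(np.linalg.norm(W1, axis=0) ** 4) + np.sum(np.abs(w2) ** 4)
convex_sq_norm = np.sum(w_blocks ** 2)
```

Each neuron contributes $\|W_{1i}\|_2^4 + |w_{2i}|^4 = 2\alpha_i^2\lambda^2$, while the convex block contributes $\alpha_i^2\lambda^2$ to $\|w\|_2^2$, so minimizing the convex $\ell_2$-norm is exactly minimizing $r$.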

Sensitivity: Definition 3.9 extends similar results for the group lasso by Vaiter et al. (2012) to CGL using standard CQs. Since $K$ is block-diagonal, LICQ will be satisfied whenever the rows of $Z$ are linearly independent. SCS is more challenging; while the classical theorem of Goldman & Tucker (2016) establishes that SCS is satisfied for linear programs, it is known that SCS can fail for general cone programs (Tunçel & Wolkowicz, 2012). As such, SCS must be checked on a per-problem basis in general.

In the context of gated ReLU problems, $K = 0$ and there is no requirement for SCS/LICQ. Minimal models $w(\lambda, y)$ are weakly differentiable, which Vaiter et al. (2012) use to compute the degrees of freedom of $w$ via Stein's Lemma (Stein, 1981). It is straightforward to extend this calculation to the gated ReLU weights using the chain rule, which can then be used to calculate Stein's unbiased risk estimator.

5 Experiments

Through convex reformulations, we have characterized the optimal sets of ReLU networks, computed minimal networks, and provided sensitivity results. Our goal in this section is to illustrate the power of our framework for analyzing ReLU networks and developing new algorithms.

Tuning: We first consider a tuning task on 10 binary classification datasets from the UCI repository (Dua & Graff, 2017). For each dataset, we do a train/validation/test split, fit a two-layer ReLU model on the training set, and then compute the minimum $\ell_2$-norm model. We use this to explore the optimal set in three ways: (i) we compute an extreme point that (approximately) maximizes the model's $\ell_2$-norm; (ii) we minimize the validation MSE over $\mathcal{W}^*(\lambda)$; (iii) we minimize test MSE over $\mathcal{W}^*(\lambda)$. These procedures select for different optimal models, have no effect on the training objective, and are only possible because we know $\mathcal{W}^*$.

The results are summarized in Table 1. We see that optimal models can perform very differently at test time despite having exactly the same training error and model norm. Indeed, 9/10 datasets show at least a 10 percent gap between the best and worst models and 4/10 have a gap exceeding 20 percent accuracy. We conclude that the training objective is badly under-determined even for shallow neural networks, implying that implicit regularization is critical in practice. See Appendix D for results on additional datasets.

Figure 3: Pruning neurons from two-layer ReLU networks on two binary classification tasks drawn from the CIFAR-10 dataset. We compare our method (Optimal/LS) against baselines; see Figure 2 for details. Our approach, which makes use of a weight correction after pruning, outperforms every baseline.

Pruning: We also consider several neuron pruning tasks. We use two-layer ReLU networks and start pruning from the model given by optimizing the convex reformulation. We compare four strategies: (i) pruning neurons optimally using Algorithm 2 until $\{(X W_{1i})_+\}_{\mathcal{A}_\lambda}$ are linearly independent and then approximately using least-squares fits; (ii) removing the neuron with the smallest magnitude $\|W_{1i} \cdot w_{2i}\|$; (iii) removing the neuron with the smallest weighted gradient; and (iv) random pruning.

Figure 2 shows the test performance of these methods for five UCI datasets. Our theory-based pruning method has better test performance than the baselines on every dataset considered; on hill-valley, the gap between our approach and magnitude-based pruning is approximately 40%. Figure 3 presents similar results for two binary tasks taken from the CIFAR-10 dataset (Krizhevsky et al., 2009). We provide experiments on additional datasets, including MNIST (LeCun et al., 1998), and experimental details in Appendix D.

6 Conclusion

We study the structure and properties of solution sets for shallow neural networks with (gated) ReLU activations. Unlike previous work, we avoid non-convexity of neural networks by studying the constrained group lasso, a generalized linear model which unifies the convex reformulations of both ReLU and gated ReLU networks. We derive analytical expressions for the optima and all stationary points of the training objective for two-layer ReLU networks. Building on this characterization, we develop conditions for the optimal neural network to be p-unique, an algorithm for optimal pruning of neural networks, and sensitivity results. We demonstrate the utility of our framework in experiments on MNIST, CIFAR-10, and the UCI datasets.

There is still much work to do in this area. For example, we conjecture that the min-norm CGL solution, which corresponds to the network minimizing a fourth-power penalty, always has a continuous regularization path. More generally, it remains to extend our characterization of the solution set to deeper networks and vector-output models.

Acknowledgements

This work was supported by the National Science Foundation (NSF), Grants ECCS-2037304, DMS-2134248; by the NSF CAREER Award, Grant CCF-2236829; by the U.S. Army Research Office Early Career Award, Grant W911NF-21-1-0242; by the Stanford Precourt Institute; by the Stanford Research Computing Center; and by the ACCESSโ€”AI Chip Center for Emerging Smart Systems through InnoHK, Hong Kong, SAR. Aaron Mishkin was supported by NSF Grant DGE-1656518 and by NSERC Grant PGSD3-547242-2020. We thank Frederik Kunstner and Victor Sanches Portella for many insightful discussions.

References

Agrawal, A., Amos, B., Barratt, S., Boyd, S., Diamond, S., and Kolter, J. Z. Differentiable convex optimization layers. Advances in Neural Information Processing Systems, 32, 2019.
ApS, M. Mosek optimizer API for python. Version, 9(17):6–4, 2022.
Baydin, A. G., Cornish, R., Rubio, D. M., Schmidt, M., and Wood, F. Online learning rate adaptation with hypergradient descent. arXiv preprint arXiv:1703.04782, 2017.
Berge, C. Topological Spaces: including a treatment of multi-valued functions, vector spaces, and convexity. Courier Corporation, 1997.
Bertsekas, D. Convex Optimization Theory, volume 1. Athena Scientific, 2009.
Blalock, D. W., Ortiz, J. J. G., Frankle, J., and Guttag, J. V. What is the state of neural network pruning? In Proceedings of Machine Learning and Systems 2020 (MLSys 2020), Austin, TX, USA, 2020.
Boyd, S. P. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2014.
Clarke, F. H. Optimization and Nonsmooth Analysis. SIAM, 1990.
Delgado, M. F., Cernadas, E., Barro, S., and Amorim, D. G. Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15(1):3133–3181, 2014.
Diamond, S. and Boyd, S. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F. A. Essentially no barriers in neural network energy landscape. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), volume 80 of Proceedings of Machine Learning Research, pp. 1308–1317. PMLR, 2018.
Dua, D. and Graff, C. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. Least angle regression. The Annals of Statistics, 32(2):407–499, 2004.
Ergen, T. and Pilanci, M. Global optimality beyond two layers: Training deep ReLU networks via convex programs. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), volume 139 of Proceedings of Machine Learning Research, pp. 2993–3003. PMLR, 2021a.
Ergen, T. and Pilanci, M. Implicit convex regularizers of CNN architectures: Convex optimization of two- and three-layer networks in polynomial time. In International Conference on Learning Representations (ICLR 2021), 2021b.
Ergen, T., Sahiner, A., Ozturkler, B., Pauly, J. M., Mardani, M., and Pilanci, M. Demystifying batch normalization in ReLU networks: Equivalent convex optimization models and implicit regularization. In International Conference on Learning Representations, 2021.
Fiacco, A. V. and Ishizuka, Y. Sensitivity and stability analysis for nonlinear programming. Annals of Operations Research, 27(1):215–235, 1990.
Fiat, J., Malach, E., and Shalev-Shwartz, S. Decoupling gating from linearity. arXiv preprint arXiv:1906.05032, 2019.
Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D. P., and Wilson, A. G. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pp. 8803–8812, 2018.
Goldman, A. J. and Tucker, A. W. Theory of linear programming. In Linear Inequalities and Related Systems (AM-38), volume 38, pp. 53–98. Princeton University Press, 2016.
Gunasekar, S., Woodworth, B. E., Bhojanapalli, S., Neyshabur, B., and Srebro, N. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems 30 (NeurIPS 2017), pp. 6151–6159, 2017.
Gupta, V., Bartan, B., Ergen, T., and Pilanci, M. Exact and relaxed convex formulations for shallow neural autoregressive models. In International Conference on Acoustics, Speech, and Signal Processing, 2021.
Hastie, T., Taylor, J., Tibshirani, R., and Walther, G. Forward stagewise regression and the monotone lasso. Electronic Journal of Statistics, 1:1–29, 2007.
Hogan, W. W. Point-to-set maps in mathematical programming. SIAM Review, 15(3):591–603, 1973.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
Kuditipudi, R., Wang, X., Lee, H., Zhang, Y., Li, Z., Hu, W., Ge, R., and Arora, S. Explaining landscape connectivity of low-cost solutions for multilayer nets. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pp. 14574–14583, 2019.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Mishkin, A., Sahiner, A., and Pilanci, M. Fast convex optimization for two-layer ReLU networks: Equivalent model classes and cone decompositions. In International Conference on Machine Learning (ICML 2022), volume 162 of Proceedings of Machine Learning Research, pp. 15770–15816. PMLR, 2022.
Nesterov, Y. et al. Lectures on Convex Optimization, volume 137. Springer, 2018.
Neyshabur, B., Tomioka, R., and Srebro, N. In search of the real inductive bias: On the role of implicit regularization in deep learning. In 3rd International Conference on Learning Representations (ICLR 2015), Workshop Track Proceedings, 2015.
Nguyen, Q. On connected sublevel sets in deep learning. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), volume 97 of Proceedings of Machine Learning Research, pp. 4790–4799. PMLR, 2019.
Osborne, M. R., Presnell, B., and Turlach, B. A. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20(3):389–403, 2000.
Pilanci, M. and Ergen, T. Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020), volume 119 of Proceedings of Machine Learning Research, pp. 7695–7705. PMLR, 2020.
Robinson, S. M. and Day, R. H. A sufficient condition for continuity of optimal sets in mathematical programming. Journal of Mathematical Analysis and Applications, 45(2):506–511, 1974.
Roth, V. and Fischer, B. The group-lasso for generalized linear models: uniqueness of solutions and efficient algorithms. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML 2008), pp. 848–855. ACM, 2008.
Sahiner, A., Ergen, T., Pauly, J. M., and Pilanci, M. Vector-output ReLU neural network problems are copositive programs: Convex analysis of two layer networks and polynomial-time algorithms. In 9th International Conference on Learning Representations (ICLR 2021). OpenReview.net, 2021.
Stein, C. M. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, pp. 1135–1151, 1981.
Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
Tibshirani, R. J. The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456–1490, 2013.
Tibshirani, R. J. and Taylor, J. The solution path of the generalized lasso. The Annals of Statistics, 39(3):1335–1371, 2011.
Tunçel, L. and Wolkowicz, H. Strong duality and minimal representations for cone optimization. Computational Optimization and Applications, 53(2):619–648, 2012.
Vaiter, S., Deledalle, C., Peyré, G., Fadili, J., and Dossal, C. The degrees of freedom of the group lasso for a general design. CoRR, abs/1212.6478, 2012.
Wang, Y., Lacotte, J., and Pilanci, M. The hidden convex optimization landscape of regularized two-layer ReLU networks: an exact characterization of optimal solutions. In International Conference on Learning Representations, 2021.
Yuan, M. and Lin, Y. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.

Appendix A Constrained Group Lasso: Proofs

A.1 Describing the Optimal Set

Lemma A.1.

The model fit is the same for all optimal solutions to the CGL problem. That is,

$$Xw = Xw' \ \text{ for all } w, w' \in \mathcal{W}^*(\lambda).$$

As a consequence, the sum of group norms is also constant when $\lambda > 0$.

Proof.

This follows in a similar fashion to the classic result for the group lasso. Let $w, w' \in \mathcal{W}^*(\lambda)$, $\bar{w} = \frac{1}{2}w + \frac{1}{2}w'$, and suppose $p^*$ is the optimal value of the constrained program. By convexity, we have

$$\frac{1}{2}\|X\bar{w} - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|\bar{w}_{b_i}\|_2 \leq \frac{1}{2}p^* + \frac{1}{2}p^* = p^*,$$

where the inequality is strict if $Xw \neq Xw'$ by strong convexity of $f(u) = \|u - y\|_2^2$. Since $w, w'$ are both feasible, $\bar{w}$ is also feasible, and clearly $\bar{w}$ cannot obtain an objective value less than $p^*$. Thus, $Xw = Xw'$ must hold.

To see the second part of the result, observe that for any $w \in \mathcal{W}^*(\lambda)$,

$$\lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 = p^* - \frac{1}{2}\|\hat{y}(\lambda) - y\|_2^2$$

is also constant over $\mathcal{W}^*(\lambda)$. ∎

Proposition (solFnCGL).

Proof.

Fix $\lambda > 0$ and let $w \in \mathcal{W}^*(\lambda)$. If $w_{b_i} \neq 0$, then the KKT conditions require

$$\lambda \frac{w_{b_i}}{\|w_{b_i}\|_2} = v_{b_i} \implies w_{b_i} = \alpha_{b_i} v_{b_i},$$

for $\alpha_{b_i} > 0$. If $w_{b_i} = 0$, then $w_{b_i} = \alpha_{b_i} v_{b_i}$ holds trivially for $\alpha_{b_i} = 0$. Since $w$ is optimal, it must satisfy

$$Xw = \hat{y},$$

by Lemma A.1. Finally, $\mathcal{A}_\lambda(w) \subseteq \mathcal{S}_\lambda$, so that $w$ satisfies the characterization.

For the reverse direction, we start by defining

$$\mathcal{X} = \left\{ w \in \mathbb{R}^d : \forall b_i \in \mathcal{S}_\lambda, \; w_{b_i} = \alpha_{b_i} v_{b_i}, \; \alpha_{b_i} \geq 0; \;\; \forall b_j \in \mathcal{B} \setminus \mathcal{S}_\lambda, \; w_{b_j} = 0; \;\; Xw = \hat{y} \right\}.$$

Take any $w' \in \mathcal{X}$. If $w'_{b_i} \neq 0$, then $b_i \in \mathcal{E}_\lambda$ and $\|v_{b_i}\|_2 = \lambda$, so that we may write

$$w'_{b_i} = \alpha_{b_i} v_{b_i} \implies \frac{w'_{b_i}}{\|w'_{b_i}\|_2} = \frac{\alpha_{b_i} v_{b_i}}{\|\alpha_{b_i} v_{b_i}\|_2} \implies \frac{w'_{b_i}}{\|w'_{b_i}\|_2} = \frac{v_{b_i}}{\lambda} \implies \lambda \frac{w'_{b_i}}{\|w'_{b_i}\|_2} = v_{b_i}.$$

That is,

$$X_{b_i}^\top (Xw - y) + \lambda \frac{w'_{b_i}}{\|w'_{b_i}\|_2} + K_{b_i} \rho_{b_i}^* = 0,$$

which is exactly stationarity of the Lagrangian.

If ๐‘ค ๐‘ ๐‘– โ€ฒ

0 , then ๐‘‹ โข ๐‘ค โ€ฒ

๐‘ฆ ^ implies

โ€– ๐‘‹ ๐‘ ๐‘– โŠค โข ( ๐‘ฆ โˆ’ ๐‘‹ โข ๐‘ค โ€ฒ ) + ๐พ โข ๐œŒ ๐‘ ๐‘– * โ€– 2

โ€– ๐‘‹ ๐‘ ๐‘– โŠค โข ( ๐‘ฆ โˆ’ ๐‘‹ โข ๐‘ค ) + ๐พ โข ๐œŒ ๐‘ ๐‘– * โ€– 2 โ‰ค ๐œ† ,

for some ๐‘ค โˆˆ ๐’ฒ * โข ( ๐œ† ) , which also implies the Lagrangian is stationary.

Now we show optimality of $w'$ by checking feasibility and complementary slackness. If $w'_{b_i} \neq 0$, then $w'_{b_i} = \frac{\alpha'_{b_i}}{\alpha_{b_i}} w_{b_i}$ for some optimal solution $w$ with $\alpha_{b_i} > 0$. This follows since $w'_{b_i} \neq 0$ implies $b_i \in \mathcal{S}_\lambda$. Thus,

$$K_{b_i}^\top w'_{b_i} = \frac{\alpha'_{b_i}}{\alpha_{b_i}} K_{b_i}^\top w_{b_i} \leq 0,$$

by feasibility of $w_{b_i}$. Similarly, we find that complementary slackness is satisfied as follows:

$$[\rho_{b_i}]_j \cdot [K_{b_i}]_j^\top w'_{b_i} = \frac{\alpha'_{b_i}}{\alpha_{b_i}} [\rho_{b_i}]_j \cdot [K_{b_i}]_j^\top w_{b_i} = 0.$$

If ๐‘ค ๐‘ ๐‘–

0 , then both feasibility and complementary slackness are trivial. Since ( ๐‘ค โ€ฒ , ๐œŒ * ) are feasible and ๐œŒ * is dual optimal, we conclude the KKT conditions are satisfied and thus ๐‘ค โ€ฒ โˆˆ ๐’ฒ * โข ( ๐œ† ) . This completes the proof. โˆŽ

Proposition A.2.

Fix $\lambda > 0$. The optimal set of the CGL problem is also given by

$$\mathcal{W}^*(\lambda) = \Big\{ w \in \mathbb{R}^d : \forall b_i \in \mathcal{E}_\lambda,\ w_{b_i} = \alpha_{b_i} v_{b_i},\ \alpha_{b_i} \geq 0;\ \forall b_j \in \mathcal{B} \setminus \mathcal{E}_\lambda,\ w_{b_j} = 0;\ X w = \hat{y};\ K^\top w \leq 0;\ \langle \rho^*, K^\top w \rangle = 0 \Big\}. \quad (20)$$

Proof.

Fix $\lambda > 0$ and let $w \in \mathcal{W}^*(\lambda)$. If $w_{b_i} \neq 0$, then the KKT conditions require

$$\lambda \frac{w_{b_i}}{\|w_{b_i}\|_2} = v_{b_i} \implies w_{b_i} = \alpha_{b_i} v_{b_i},$$

for $\alpha_{b_i} > 0$. If $w_{b_i} = 0$, then $w_{b_i} = \alpha_{b_i} v_{b_i}$ holds trivially for $\alpha_{b_i} = 0$. Since $w$ is optimal, it must satisfy

$$X w = \hat{y},$$

by Lemma A.1. Finally, $\langle \rho^*, K^\top w \rangle = 0$ and $w$ is feasible by the KKT conditions, so that $w$ satisfies the characterization.

For the reverse direction, we start by defining

$$\mathcal{X} = \Big\{ w \in \mathbb{R}^d : \forall b_i \in \mathcal{E}_\lambda,\ w_{b_i} = \alpha_{b_i} v_{b_i},\ \alpha_{b_i} \geq 0;\ \forall b_j \in \mathcal{B} \setminus \mathcal{E}_\lambda,\ w_{b_j} = 0;\ X w = \hat{y};\ K^\top w \leq 0;\ \langle \rho^*, K^\top w \rangle = 0 \Big\}.$$

Take any $w' \in \mathcal{X}$. If $w'_{b_i} \neq 0$, then $b_i \in \mathcal{E}_\lambda$ and $\|v_{b_i}\|_2 = \lambda$, so that we may write

$$w'_{b_i} = \alpha_{b_i} v_{b_i} \implies \frac{w'_{b_i}}{\|w'_{b_i}\|_2} = \frac{\alpha_{b_i} v_{b_i}}{\|\alpha_{b_i} v_{b_i}\|_2} = \frac{v_{b_i}}{\lambda} \implies \lambda \frac{w'_{b_i}}{\|w'_{b_i}\|_2} = v_{b_i}.$$

That is,

$$X_{b_i}^\top (X w' - y) + \lambda \frac{w'_{b_i}}{\|w'_{b_i}\|_2} + K_{b_i} \rho^*_{b_i} = 0,$$

which is exactly stationarity of the Lagrangian.

If $w'_{b_i} = 0$, then

$$X w' = \hat{y} \implies \|X_{b_i}^\top (y - X w') + K_{b_i} \rho^*_{b_i}\|_2 \leq \lambda,$$

which also implies the Lagrangian is stationary.

Since $\langle \rho^*, K^\top w' \rangle = 0$ and $K^\top w' \leq 0$, i.e. $w'$ is feasible, it is clear that complementary slackness must also hold. We conclude that $(w', \rho^*)$ is primal-dual optimal by the KKT conditions, and the proof is complete. ∎

\uniqueCGL

Proof.

Suppose by way of contradiction that $w, w' \in \mathcal{W}^*(\lambda)$ with $w \neq w'$. By Section 3.1, we have

$$0 = X(w - w') = \sum_{b_i \in \mathcal{S}_\lambda} X_{b_i}(w_{b_i} - w'_{b_i}) = \sum_{b_i \in \mathcal{S}_\lambda} (\alpha_{b_i} - \alpha'_{b_i}) X_{b_i} v_{b_i},$$

which implies that the vectors $\{X_{b_i} v_{b_i}\}_{\mathcal{S}_\lambda}$ are linearly dependent.

Necessity follows from Algorithm 1, which shows that, given a solution $w(\lambda) \in \mathcal{W}^*(\lambda)$, linear dependence of $\{X_{b_i} w_{b_i}(\lambda) : b_i \in \mathcal{A}(w(\lambda))\}$ implies the existence of at least one additional solution. ∎

\GGP

Proof.

Suppose the group lasso solution is not unique. Then, Corollary 3.1 implies

$$\mathcal{N}_\lambda = \mathrm{Null}(X_{\mathcal{E}_\lambda}) \cap \{z : z_{b_i} = \alpha_{b_i} c_{b_i},\ b_i \in \mathcal{E}_\lambda\}$$

is non-empty. That is, there exist $\alpha_{b_i} \geq 0$ such that

$$X_{b_j} c_{b_j} = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \alpha_{b_i} X_{b_i} c_{b_i}.$$

Taking inner products on both sides with the residual $r$,

$$r^\top X_{b_j} c_{b_j} = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \alpha_{b_i} r^\top X_{b_i} c_{b_i} \implies c_{b_j}^\top c_{b_j} = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \alpha_{b_i} c_{b_i}^\top c_{b_i} \implies \lambda^2 = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \alpha_{b_i} \lambda^2 \implies 1 = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \alpha_{b_i}.$$

Thus, we deduce that

$$X_{b_j} c_{b_j} = \sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \beta_{b_i} X_{b_i} c_{b_i}, \quad (21)$$

where $\sum_{b_i \in \mathcal{E}_\lambda \setminus b_j} \beta_{b_i} = 1$, meaning $X_{b_j} c_{b_j} \in \mathrm{affine}(X_{b_i} c_{b_i} : b_i \in \mathcal{E}_\lambda \setminus b_j)$. Now, suppose that $|\mathcal{E}_\lambda| > n + 1$. Then $\{X_{b_i} c_{b_i} : b_i \in \mathcal{E}_\lambda \setminus b_j\}$ are linearly dependent and, by eliminating dependent vectors $X_{b_i} c_{b_i}$, we can repeat the argument above with a subset $\mathcal{E}'$ of at most $n + 1$ blocks. Noting $\|c_{b_i}\|_2 = \lambda$ for each $b_i \in \mathcal{E}_\lambda$ and rescaling both sides of Equation 21 by $\lambda$ implies the existence of unit vectors $z_{b_i}$ which contradict GGP. This completes the proof. ∎

Proposition A.3.

Group general position does not imply the columns of ๐‘‹ are in general position. Similarly, general position of the columns of ๐‘‹ does not imply group general position.

Proof.

Let $x_1, \ldots, x_d \in \mathbb{R}^{d-1}$. Consider the simple case with two groups: $b_1 = \{1\}$ and $b_2 = \{2, \ldots, d\}$. Group general position is violated if there exists a unit vector $z_{b_2}$ such that

$$x_1 = X_{b_2} z_{b_2} \iff x_1 \in X_{b_2} B_{d-1},$$

where $B_{d-1} = \{z \in \mathbb{R}^{d-1} : \|z\|_2 \leq 1\}$. In contrast, general position is violated if

$$x_1 \in \mathrm{affine}(x_2, \ldots, x_d) \iff x_1 \in X_{b_2}\{z : \langle z, \mathbf{1} \rangle = 1\}.$$

Taking $X_{b_2} = I$, it is easy to see that group general position can hold when general position is violated, and vice-versa. ∎
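For concreteness, here is a numeric instance of this construction (our illustration, not from the paper) with $X_{b_2} = I$ and $d = 3$: the point $x_1$ lies on the affine set $\{z : \langle z, \mathbf{1}\rangle = 1\}$, so general position is violated, but it lies outside the unit ball, so group general position still holds.

```python
import numpy as np

# Blocks b1 = {1}, b2 = {2, ..., d} with X_{b2} = I (here d - 1 = 2).
X_b2 = np.eye(2)

# Choose x1 on the affine hyperplane <z, 1> = 1 but outside the unit ball.
z = np.array([2.0, -1.0])
x1 = X_b2 @ z

# General position is violated: x1 = X_{b2} z with <z, 1> = 1.
assert np.isclose(z.sum(), 1.0)

# Group general position is not: x1 = X_{b2} z' forces z' = z,
# and ||z||_2 = sqrt(5) > 1, so no unit-norm representation exists.
assert np.linalg.norm(z) > 1.0

print(np.linalg.norm(z))
```

Swapping the roles (a unit vector off the affine hyperplane) gives the reverse separation.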

A.2 Computing Dual Optimal Parameters

Lemma A.4.

One Lagrange dual of CGL is the following:

$$\max_{\eta, \rho} \; -\frac{1}{2}(\eta + K\rho - X^\top y)^\top (X^\top X)^+ (\eta + K\rho - X^\top y) + \frac{1}{2}\|y\|_2^2 \quad (22)$$
$$\text{s.t.} \;\; \eta + K\rho \in \mathrm{Row}(X), \;\; \|\eta_{b_i}\|_2 \leq \lambda \;\; \forall b_i \in \mathcal{B},$$

where $\eta_{b_i} = c_{b_i} - K_{b_i}\rho_{b_i}$ shows that the vectors $v_{b_i}$ are, in fact, dual variables. Moreover, if $K = 0$, then $\rho^* = 0$ and $\eta$ has the unique solution $\eta_{b_i} = X_{b_i}^\top(y - Xw) = c_{b_i}$. That is, the dual parameters are the (unique) block correlation vectors.

Proof.

We re-write the group lasso problem as follows:

$$\min_{w, z} \; \frac{1}{2}\|Xz - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \quad \text{s.t.} \;\; z = w, \;\; K_{b_i}^\top z_{b_i} \leq 0.$$

The Lagrangian for this problem is

$$\mathcal{L}(w, z, \eta, \rho) = \frac{1}{2}\|Xz - y\|_2^2 + \langle \eta, z - w \rangle + \langle \rho, K^\top z \rangle + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2$$
$$= \frac{1}{2}\|Xz - y\|_2^2 + \langle \eta + K\rho, z \rangle - \langle \eta, w \rangle + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2.$$

Minimizing over $z$, we find that stationarity implies

$$X^\top(y - Xz) = \eta + K\rho,$$

so that $\eta + K\rho \in \mathrm{Row}(X)$. Solving this system, we find

$$X^\top X z = X^\top y - \eta - K\rho \implies z = (X^\top X)^+[X^\top y - \eta - K\rho] + c,$$

where $c \in \mathrm{Null}(X)$. Let us minimize over $w$ similarly. The Lagrangian decouples block-wise in $w$, so that we must solve

$$\min_{w_{b_i}} \; \lambda \|w_{b_i}\|_2 - \langle \eta_{b_i}, w_{b_i} \rangle,$$

for each $b_i \in \mathcal{B}$. The minimum value is the (negative) Fenchel conjugate of $\lambda\|w_{b_i}\|_2$ evaluated at $\eta_{b_i}$; that is,

$$\min_{w_{b_i}} \; \lambda \|w_{b_i}\|_2 - \langle \eta_{b_i}, w_{b_i} \rangle = -\mathbb{1}(\|\eta_{b_i}\|_2 \leq \lambda).$$

Combining this with the expression for $z$, we obtain

$$\mathcal{L}(c, \eta, \rho) = -\frac{1}{2}(\eta + K\rho - X^\top y)^\top (X^\top X)^+ (\eta + K\rho - X^\top y) + \langle \eta + K\rho, c \rangle - \sum_{b_i \in \mathcal{B}} \mathbb{1}(\|\eta_{b_i}\|_2 \leq \lambda)$$
$$= -\frac{1}{2}(\eta + K\rho - X^\top y)^\top (X^\top X)^+ (\eta + K\rho - X^\top y) - \sum_{b_i \in \mathcal{B}} \mathbb{1}(\|\eta_{b_i}\|_2 \leq \lambda),$$

where the second equality follows since $\eta + K\rho \in \mathrm{Row}(X)$ and $c \in \mathrm{Null}(X)$ are orthogonal. (Alternatively, one can observe that the dual problem is unbounded below whenever $\langle c, \eta + K\rho \rangle \neq 0$.) Thus, the dual problem is equal to

$$\max_{\eta, \rho} \; -\frac{1}{2}(\eta + K\rho - X^\top y)^\top (X^\top X)^+ (\eta + K\rho - X^\top y) - \sum_{b_i \in \mathcal{B}} \mathbb{1}(\|\eta_{b_i}\|_2 \leq \lambda) - \mathbb{1}(\eta + K\rho \in \mathrm{Row}(X)),$$

which completes the derivation.

Recalling $z = w$ for any primal-dual optimal pair and

$$X^\top(y - Xz) = \eta + K\rho,$$

shows that $\eta_{b_i} = c_{b_i} - K_{b_i}\rho_{b_i}$ as claimed. Moreover, if $K = 0$, then we may assume without loss of generality that the corresponding dual vectors $\rho_{b_i}$ are zero. In this case, $\eta_{b_i} = c_{b_i}$ and, since $c_{b_i}$ is unique, the dual solution must also be unique. ∎
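When $K = 0$, the identity $\eta_{b_i} = c_{b_i} = X_{b_i}^\top(y - Xw)$ can be checked numerically. The sketch below is our own illustration: it assumes an identity design, for which the group lasso solution is given by block soft-thresholding (a standard closed form not derived in this paper), and verifies that the resulting correlation vectors satisfy the dual constraints.

```python
import numpy as np

# Plain group lasso (K = 0) with X = I and two blocks. For an identity
# design, the solution is block soft-thresholding (assumed closed form):
#   w_b = max(0, 1 - lam / ||y_b||_2) * y_b.
lam = 1.0
blocks = [slice(0, 2), slice(2, 4)]
y = np.array([3.0, 4.0, 0.3, 0.4])  # ||y_b1|| = 5 > lam, ||y_b2|| = 0.5 < lam

w = np.zeros_like(y)
for b in blocks:
    norm = np.linalg.norm(y[b])
    w[b] = max(0.0, 1.0 - lam / norm) * y[b]

# Dual parameters: eta_b = X_b^T (y - X w) = y_b - w_b since X = I.
eta = y - w

for b in blocks:
    # Every block satisfies the dual constraint ||eta_b||_2 <= lam.
    assert np.linalg.norm(eta[b]) <= lam + 1e-12
    if np.linalg.norm(w[b]) > 0:
        # Active blocks attain the bound: eta_b = lam * w_b / ||w_b||_2.
        assert np.allclose(eta[b], lam * w[b] / np.linalg.norm(w[b]))

print(w)
```

The inactive block keeps $\eta_{b_2} = y_{b_2}$ with $\|\eta_{b_2}\|_2 < \lambda$, matching the strict-inequality case of the KKT conditions.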

Proposition A.5.

Let $\lambda > 0$ and $w \in \mathcal{W}^*(\lambda)$. If $w_{b_i} = 0$, then any solution to Equation 15 is dual optimal for block $b_i$.

Proof.

Let $\rho_{b_i}$ be a solution to Equation 15. Since $w$ is optimal and strong duality holds, there exists some min-norm dual optimal vector $\rho^*$. Moreover, $\rho^*$ satisfies $\rho^*_{b_i} \geq 0$ and

$$\|K_{b_i}\rho_{b_i} - c_{b_i}\|_2^2 \leq \|K_{b_i}\rho^*_{b_i} - c_{b_i}\|_2^2 \leq \lambda^2,$$

so $\rho_{b_i}$ is feasible and satisfies stationarity of the Lagrangian. Finally, because $w_{b_i} = 0$, complementary slackness,

$$[\rho_{b_i}]_j \cdot [K_{b_i}]_j^\top w_{b_i} = 0,$$

is verified for every $j \in [a_{b_i}]$. Since the KKT conditions are sufficient for primal-dual optimality, we conclude that $\rho_{b_i}$ is dual optimal. This completes the proof. ∎

A.3 Minimal Solutions and Optimal Pruning

\minimalSols

Proof.

Let $w \in \mathcal{W}^*(\lambda)$ and assume that the vectors $\{X_{b_i} w_{b_i}\}_{\mathcal{A}(w)}$ are linearly independent. By way of contradiction, assume there exists $w' \in \mathcal{W}^*(\lambda)$ with strictly smaller support. By Section 3.1, we have

$$w'_{b_i} = \beta_{b_i} w_{b_i},$$

for some $\beta_{b_i} \geq 0$. This holds for each $b_i \in \mathcal{A}_\lambda(w)$ (with $\beta_{b_i} = 0$ when $b_i \in \mathcal{A}_\lambda(w) \setminus \mathcal{A}_\lambda(w')$), so that

$$X w = X w' \implies \sum_{b_i \in \mathcal{A}_\lambda(w)} (1 - \beta_{b_i}) X_{b_i} w_{b_i} = 0,$$

which is a contradiction.

For the reverse direction, assume that $w$ is minimal, but that $\{X_{b_i} w_{b_i}\}_{\mathcal{A}(w)}$ are not linearly independent. Then the correctness of Algorithm 1 (see Proposition A.6) implies $w$ is not minimal. ∎

Proposition A.6.

Algorithm 1 returns a minimal solution to the constrained group lasso problem in at most $O(n^3 l + nd)$ time, where $l$ is the number of non-zero groups in the initial solution.

Proof.

Correctness: Let $w \in \mathcal{W}^*(\lambda)$ and let $\mathcal{A}$ be the associated active set. If $w^0 = w$ is minimal, then Section 3.3 implies $\{X_{b_i} w_{b_i}\}_{\mathcal{A}}$ are linearly independent and the algorithm returns a minimal solution.

Let $k \geq 0$ and suppose $w^k$ is not minimal. Then there exist weights $\beta_{b_i}$ such that

$$\sum_{b_i \in \mathcal{A}} \beta_{b_i} X_{b_i} w^k_{b_i} = 0.$$

Let $w^t_{b_i} = (1 - t\beta_{b_i}) w^k_{b_i}$ and let $t_k$ be as defined in the algorithm. By construction, $t_k$ is the smallest-magnitude $t$ such that $(1 - t\beta_{b_i}) = 0$ for some $b_i \in \mathcal{A}$. We assume without loss of generality that $t_k > 0$.

Fix $0 < t < t_k$. Let us show that $w^t$ is a solution to the constrained problem. Firstly, we have

$$X w^t = X w^k - t \sum_{b_i \in \mathcal{A}} \beta_{b_i} X_{b_i} w^k_{b_i} = X w^k,$$

showing that the model fit is preserved. Moreover,

$$w^t_{b_i} = (1 - t\beta_{b_i}) w^k_{b_i} = (1 - t\beta_{b_i}) \alpha_{b_i} v_{b_i},$$

where $(1 - t\beta_{b_i})\alpha_{b_i} \geq 0$ by the choice of $t$. We conclude that $w^t$ is optimal by Section 3.1.

By construction,

$$\lim_{t \uparrow t_k} w^t = w^{k+1}.$$

Since $w^t$ is an optimal solution, it has the (unique) optimal squared error and sum of group norms. Taking limits as $t \uparrow t_k$, we see that

$$X w^{k+1} = X w^k, \quad \sum_{b_i \in \mathcal{B}} \|w^{k+1}_{b_i}\|_2 = \sum_{b_i \in \mathcal{B}} \|w^k_{b_i}\|_2, \quad K^\top w^{k+1} \leq 0,$$

which implies that $w^{k+1}$ obtains the optimal objective value and is feasible. Thus $w^{k+1}$ is also a solution. Finally, $\mathcal{A}(w^{k+1})$ is strictly smaller than $\mathcal{A}(w^k)$, as required.

Arguing by induction now implies that Algorithm 1 returns a minimal solution in a finite number of steps.

Complexity: First, observe that we can pre-compute the block-wise model fits $q_{b_i} = X_{b_i} w_{b_i}$ before running the algorithm at a cost of at most $O(nd)$. At each iteration of the algorithm, we must do two things: (i) compute a non-trivial solution to a homogeneous linear system and (ii) update the weights of the model fits. For (i), any set of $n + 1$ of the $q_{b_i}$ vectors is linearly dependent, so we can compute a non-trivial solution to the homogeneous system using the SVD in at most $O(n^3)$ operations. For (ii), updating at most $n$ of the $\beta_{b_i}$'s requires $O(n)$ time. Since the algorithm runs for at most $l$ iterations, we obtain a final complexity of $O(n^3 l + nd)$, as claimed. ∎

Definition A.7.

We say that $X$ satisfies group dependence if for every choice of vectors $z_{b_i} \in \mathbb{R}^{|b_i|}$ and partition $\mathcal{D} \cup \mathcal{I} = \mathcal{B}$, $\mathcal{D} \cap \mathcal{I} = \emptyset$, such that

$$g_{\mathcal{D}} := \sum_{b_j \in \mathcal{D}} X_{b_j} z_{b_j} \in \mathrm{Span}(\{X_{b_i} z_{b_i} : b_i \in \mathcal{I}\}),$$

it holds that

$$\mathrm{Span}(\{X_{b_j} z_{b_j} : b_j \in \mathcal{D}\}) \subseteq \mathrm{Span}(\{X_{b_i} z_{b_i} : b_i \in \mathcal{I}\}).$$

In other words, linear dependence of $g_{\mathcal{D}}$ on $\{X_{b_i} z_{b_i} : b_i \in \mathcal{I}\}$ implies linear dependence of $X_{b_j} z_{b_j}$ on $\{X_{b_i} z_{b_i} : b_i \in \mathcal{I}\}$ for every $b_j \in \mathcal{D}$.

Lemma A.8.

If $X$ satisfies group dependence, then each step of Algorithm 1 preserves the span of $\{X_{b_i} w^k_{b_i}\}$. That is, only linearly dependent vectors are removed.

Proof.

Let $\mathcal{V} = \mathrm{Span}(\{X_{b_i} w_{b_i}\})$ be the span of the initial solution. Since $w^0 = w$ by definition, the base case holds trivially.

Now, suppose $\mathrm{Span}(\{X_{b_i} w^k_{b_i}\}) = \mathcal{V}$ and let $\mathcal{D}_k = \mathcal{A}_\lambda(w^k) \setminus \mathcal{A}_\lambda(w^{k+1})$. Every $b_j \in \mathcal{D}_k$ is eliminated at step $k$, meaning

$$\sum_{b_j \in \mathcal{D}_k} X_{b_j} w^k_{b_j} \in \mathrm{Span}(\{X_{b_i} w^k_{b_i} : b_i \in \mathcal{A}_\lambda(w^{k+1})\}).$$

Group dependence now implies that for every $b_j \in \mathcal{D}_k$ there exists $\zeta^j$ such that

$$X_{b_j} w^k_{b_j} = \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \zeta^j_{b_i} X_{b_i} w^k_{b_i}.$$

Let $v \in \mathcal{V}$ and observe that

$$v = \sum_{b_i \in \mathcal{A}_\lambda(w^k)} \alpha_{b_i} X_{b_i} w^k_{b_i} = \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \alpha_{b_i} X_{b_i} w^k_{b_i} + \sum_{b_j \in \mathcal{D}_k} \alpha_{b_j} X_{b_j} w^k_{b_j}$$
$$= \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \alpha_{b_i} X_{b_i} w^k_{b_i} + \sum_{b_j \in \mathcal{D}_k} \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \alpha_{b_j} \zeta^j_{b_i} X_{b_i} w^k_{b_i}$$
$$= \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \Big( \alpha_{b_i} + \Big[\sum_{b_j \in \mathcal{D}_k} \alpha_{b_j} \zeta^j_{b_i}\Big] \Big) X_{b_i} w^k_{b_i} = \sum_{b_i \in \mathcal{A}_\lambda(w^{k+1})} \frac{\alpha_{b_i} + \big[\sum_{b_j \in \mathcal{D}_k} \alpha_{b_j} \zeta^j_{b_i}\big]}{1 - t_k \beta_{b_i}} X_{b_i} w^{k+1}_{b_i}.$$

Thus, $\mathrm{Span}(\{X_{b_i} w^{k+1}_{b_i}\}) = \mathcal{V}$. Arguing by induction completes the proof. ∎

Lemma A.9.

Let $\mathcal{X} = \{x_1, \ldots, x_k\}$ be a set of linearly dependent vectors. Every linearly independent subset of $\mathcal{X}$ obtained by iteratively removing linearly dependent vectors has the same cardinality.

Proof.

Let $\mathcal{Y}, \mathcal{Y}' \subset \mathcal{X}$ be linearly independent subsets obtained by pruning linearly dependent vectors from $\mathcal{X}$, and assume $|\mathcal{Y}'| < |\mathcal{Y}|$. Since $\mathcal{Y}$ and $\mathcal{Y}'$ are obtained by pruning only linearly dependent vectors, it must be that

$$\mathrm{Span}(\mathcal{Y}') = \mathrm{Span}(\mathcal{Y}) = \mathrm{Span}(\mathcal{X}).$$

Let $p = \dim(\mathrm{Span}(\mathcal{X}))$. Only a set of $p$ linearly independent vectors can span $\mathrm{Span}(\mathcal{X})$; thus, $|\mathcal{Y}| = p$ must hold and $\mathcal{Y}'$ cannot span $\mathrm{Span}(\mathcal{X})$. This is a contradiction. We conclude $|\mathcal{Y}'| = |\mathcal{Y}|$ as claimed. ∎
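Lemma A.9 is easy to spot-check numerically. The sketch below is our own illustration (`greedy_prune` is a hypothetical helper, not from the paper): it removes dependent vectors in every possible preference order and confirms that all terminal independent subsets share the cardinality $\mathrm{rank}(X)$.

```python
import numpy as np
from itertools import permutations

def greedy_prune(vecs, order):
    """Remove vectors (in the given preference order) that lie in the
    span of the remaining ones, until the set is linearly independent."""
    keep = list(range(len(vecs)))
    changed = True
    while changed:
        changed = False
        for i in order:
            if i in keep and len(keep) > 1:
                rest = [vecs[j] for j in keep if j != i]
                # Vector i is dependent iff removing it preserves the rank.
                full_rank = np.linalg.matrix_rank(np.column_stack([vecs[j] for j in keep]))
                if np.linalg.matrix_rank(np.column_stack(rest)) == full_rank:
                    keep.remove(i)
                    changed = True
    return keep

# A dependent set in R^3 with rank 2; try all removal orders.
vecs = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]),
        np.array([1.0, 1, 0]), np.array([2.0, -1, 0])]
rank = np.linalg.matrix_rank(np.column_stack(vecs))
sizes = {len(greedy_prune(vecs, order)) for order in permutations(range(4))}
assert sizes == {rank}
print(rank)
```

Different orders keep different subsets, but every terminal subset has exactly `rank` elements, as the lemma asserts.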

Lemma A.10.

There exists a solution to CGL with support exactly $\mathcal{S}_\lambda$.

Proof.

By definition, for every $b_i \in \mathcal{S}_\lambda$ there exists $w \in \mathcal{W}^*(\lambda)$ such that $w_{b_i} \neq 0$. Taking a convex combination of these solutions yields $w'$ with support exactly $\mathcal{S}_\lambda$. Since $\mathcal{W}^*(\lambda)$ is convex, $w'$ is also a solution. This completes the proof. ∎

Lemma A.11.

Let $w$ be a minimal solution and $\bar{w}$ be a solution with support $\mathcal{S}_\lambda$, which exists by Lemma A.10. Then, $\bar{w}$ can be pruned in one step to obtain $w$.

Proof.

Suppose $\bar{w}$ is minimal. Then $\{X_{b_i}\bar{w}_{b_i}\}_{\mathcal{S}_\lambda}$ are linearly independent, which implies $\{X_{b_i}v_{b_i}\}_{\mathcal{S}_\lambda}$ are also linearly independent. We conclude that $\bar{w}$ is the unique solution to CGL by Corollary 3.1, and the claim holds trivially.

Suppose $\bar{w}$ is not minimal. Since $\mathcal{A}_\lambda(w) \subset \mathcal{S}_\lambda$, $\mathrm{Span}(\{X_{b_i}w_{b_i}\}) \subseteq \mathrm{Span}(\{X_{b_i}\bar{w}_{b_i}\})$, so that every vector $X_{b_i}w_{b_i}$ can be written as a linear combination of vectors in $\{X_{b_i}\bar{w}_{b_i}\}$. Thus, we find that

$$\sum_{b_i \in \mathcal{A}_\lambda(\bar{w})} X_{b_i}\bar{w}_{b_i} = \hat{y} = \sum_{b_i \in \mathcal{A}_\lambda(w)} X_{b_i}w_{b_i} \implies \sum_{b_i \in \mathcal{S}_\lambda \setminus \mathcal{A}_\lambda(w)} X_{b_i}\bar{w}_{b_i} + \sum_{b_i \in \mathcal{A}_\lambda(w)} \beta_{b_i} X_{b_i}\bar{w}_{b_i} = 0,$$

where $\beta_{b_i} = 1 - \frac{\alpha_{b_i}}{\bar{\alpha}_{b_i}}$ for $w_{b_i} = \alpha_{b_i} v_{b_i}$, $\bar{w}_{b_i} = \bar{\alpha}_{b_i} v_{b_i}$. Thus, $\{X_{b_i}\bar{w}_{b_i}\}$ are linearly dependent and it is possible to prune the solution.

Now we show that we can, in fact, prune all vectors in $\mathcal{S}_\lambda \setminus \mathcal{A}_\lambda(w)$ in one pruning step. First, observe that $\frac{\alpha_{b_i}}{\bar{\alpha}_{b_i}} > 0$, so that $\beta_{b_i} < 1$ for every $b_i \in \mathcal{A}_\lambda(w)$. Following the proof of Proposition A.6, define

$$w^t_{b_i} = (1 - t\beta_{b_i}) \bar{w}_{b_i} = (1 - t\beta_{b_i}) \bar{\alpha}_{b_i} v_{b_i},$$

for $0 < t < 1$, taking $\beta_{b_i} = 1$ for $b_i \in \mathcal{S}_\lambda \setminus \mathcal{A}_\lambda(w)$. Since $\beta_{b_i} \leq 1$ for every $b_i \in \mathcal{S}_\lambda$, it is straightforward to deduce that $w^t$ is optimal by Section 3.1. Arguing as in Proposition A.6, we can show $w^1$ is also optimal. It remains only to notice that $w^1_{b_i} = 0$ for $b_i \in \mathcal{S}_\lambda \setminus \mathcal{A}_\lambda(w)$ and $w^1_{b_i} = w_{b_i}$ for $b_i \in \mathcal{A}_\lambda(w)$. Thus, the pruning algorithm can move from $\bar{w}$ to $w$ in one step. ∎

\minimalSolCharacterization

Proof.

Suppose $w$ and $w'$ are two minimal solutions. Both can be obtained by pruning the maximal solution with support $\mathcal{S}_\lambda$ (Lemma A.11). Thus, $\{X_{b_i}w_{b_i}\}$ and $\{X_{b_i}w'_{b_i}\}$ have the same span by Lemma A.8. Lemma A.9 now implies $w$ and $w'$ have the same number of active blocks. This number must be $p$, since otherwise there would be a linearly dependent vector in $\{X_{b_i}w_{b_i}\}_{\mathcal{A}_\lambda(w)}$ and $w$ would not be minimal. This completes the proof. ∎

A.4 Continuity of the Solution Path

\boundedSolutionsCGL Every solution to the constrained group lasso problem is bounded by an absolute constant independent of $\lambda$. Specifically, every $w \in \mathcal{W}^*(\lambda)$ satisfies

$$\sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \leq \sum_{b_i \in \mathcal{B}} \|\bar{w}_{b_i}\|_2,$$

where $\bar{w} \in \mathcal{W}^*(0)$ is the least-squares solution with minimum $\ell_2$-norm.

Proof.

Let $h(w) = \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2$, and define $W_g$ to be the set of least-squares solutions with minimum group norm,

$$W_g = \arg\min \{h(w) : w \in \mathcal{W}^*(0)\}.$$

Let $w_g \in W_g$, and suppose that $h(w_g) < h(w(\lambda))$ for some $\lambda > 0$ and $w(\lambda) \in \mathcal{W}^*(\lambda)$. Since

$$\frac{1}{2}\|X w_g - y\|_2^2 \leq \frac{1}{2}\|X w(\lambda) - y\|_2^2,$$

we deduce

$$\frac{1}{2}\|X w_g - y\|_2^2 + \lambda h(w_g) < \frac{1}{2}\|X w(\lambda) - y\|_2^2 + \lambda h(w(\lambda)),$$

which is a contradiction. So $h(w(\lambda)) \leq h(w_g)$ for all $\lambda > 0$. Observing that $h(w_g) \leq h(\bar{w})$, since $\bar{w}$ may not be in $W_g$, gives the result. Since $h(\bar{w})$ is independent of $\lambda$, we conclude that $\mathcal{W}^*(\lambda)$ is bounded independently of $\lambda$. ∎

\valueContinuityCGL

Proof.

Define the joint objective function

$$f(w, \lambda) = \frac{1}{2}\|Xw - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2.$$

Clearly $f(w, \lambda)$ is jointly continuous in $w$ and $\lambda$. By Section A.4, minimization of $f(w, \lambda)$ subject to $K_{b_i}^\top w_{b_i} \leq 0$ is equivalent to the constrained minimization problem

$$p^*(\lambda) = \min_w \; f(w, \lambda) \quad \text{s.t.} \;\; \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \leq C, \;\; K_{b_i}^\top w_{b_i} \leq 0,$$

where $C$ is a finite absolute constant. Note that this expression is also valid when $\lambda = 0$, as the min-norm solution to the unregularized least-squares problem obeys the constraint.

Thus, $p^*$ is the value of a continuous optimization problem over a continuous (here constant) compact constraint set, and the classical result of Berge (1997) (see also Hogan (1973, Theorem 7)) implies $p^*$ is continuous. ∎

\mapContinuityCGL

Proof.

Joint continuity of the objective

$$f(w, \lambda) = \frac{1}{2}\|Xw - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2,$$

combined with continuity of the (constant) constraint set, allows us to apply Robinson & Day (1974, Theorem 1) to obtain that $\mathcal{W}^*$ is upper semi-continuous. Since $\mathcal{W}^*$ is convex and bounded, it is compact. It is thus also uniformly compact, and upper semi-continuity is equivalent to closedness (Hogan, 1973, Theorem 3). We conclude that $\mathcal{W}^*$ is closed as claimed.

If $X$ has full column rank, then the constrained group lasso solution is unique for all $\lambda \geq 0$. The solution map is a singleton on $\mathbb{R}_+$, and closedness and openness are equivalent properties for singleton maps. Since we have already shown the map is closed, it must also be open. An identical argument shows that the solution map is open on any interval over which the solution is unique.

Now we show the reverse implication by proving the contrapositive. Assume $X$ is not full column-rank and suppose $K_{b_i} = 0$ for each $b_i \in \mathcal{B}$. The solution map at $\lambda = 0$ is the solution set of the least-squares problem,

$$\min_w \; \frac{1}{2}\|Xw - y\|_2^2,$$

which is known to be $\mathcal{W}^*(0) = \{w^*(0) + z : z \in \mathrm{Null}(X)\}$. While $\mathcal{W}^*(0)$ is unbounded, it holds that $\mathcal{W}^*(\lambda) \subset C$ for some bounded set $C$ and every $\lambda > 0$ (Section A.4). As a result, there exist uncountably many solutions in $\mathcal{W}^*(0)$ which are not limit points of solutions in $\mathcal{W}^*(\lambda_k)$ as $\lambda_k \to 0$. In other words, the solution map is not open at $0$. ∎

\modelFitContinuity

Proof.

Consider the dual problem from Lemma A.4,

$$\max_\eta \; -\frac{1}{2}(\eta - X^\top y)^\top (X^\top X)^+ (\eta - X^\top y) + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.} \;\; \eta \in \mathrm{Row}(X), \;\; \|\eta_{b_i}\|_2 \leq \lambda \;\; \forall b_i \in \mathcal{B}.$$

The objective function is a concave quadratic and continuous in $\eta$. The constraint set is

$$\mathcal{C}(\lambda) = \{\eta \in \mathrm{Row}(X) : \|\eta_{b_i}\|_2 \leq \lambda \;\; \forall b_i \in \mathcal{B}\}.$$

Let us show that $\mathcal{C}$ is continuous.

Let $\lambda_k \geq 0$, $\lambda_k \to \bar{\lambda}$, and $\eta_k \in \mathcal{C}(\lambda_k)$ such that $\eta_k \to \bar{\eta}$. Since $\eta_k \in \mathrm{Row}(X)$, we have $\bar{\eta} \in \mathrm{Row}(X)$. Moreover,

$$\|[\eta_k]_{b_i}\|_2 \leq \lambda_k,$$

so that taking limits on both sides implies $\|\bar{\eta}_{b_i}\|_2 \leq \bar{\lambda}$. Thus, $\bar{\eta} \in \mathcal{C}(\bar{\lambda})$, showing that $\mathcal{C}$ is closed.

Hogan (1973, Theorem 12) states that $\mathcal{C}(\lambda)$ is open at $\bar{\lambda}$ if, for each $b_i \in \mathcal{B}$, $g_{b_i}(\lambda, \eta) = \|\eta_{b_i}\|_2 - \lambda$ is continuous on $\bar{\lambda} \times \mathcal{C}(\lambda)$, convex in $\eta$ for fixed $\lambda$, and there exists $\bar{\eta}$ such that $g_{b_i}(\lambda, \bar{\eta}) < 0$.

Let us check these conditions. First, observe that $g_{b_i}$ is continuous and convex in $\eta$ for any choice of $\lambda$. Taking $\bar{\eta} = 0$, we find

$$g_{b_i}(\lambda, 0) = -\lambda < 0,$$

as long as $\lambda > 0$. Thus, $\mathcal{C}(\lambda)$ is open at each $\lambda > 0$.

At $\lambda = 0$, $\mathcal{C}(\lambda) = \{0\}$. Since $\mathcal{C}$ is closed everywhere, we conclude it is also open at $\lambda = 0$. Putting these results together proves that $\mathcal{C}$ is continuous.

Recall that the dual solution satisfies $\eta^*_{b_i} = c_{b_i}$, so that it is always unique. Combining this fact with Robinson & Day (1974, Theorem 1) implies that $\lambda \mapsto \eta^*(\lambda)$ is a continuous function. Since we have

$$\eta^*(\lambda) = c(\lambda) = X^\top(y - X w^*(\lambda)),$$

it must be that the model fit $\hat{y}(\lambda) = X w^*(\lambda)$ is continuous as well (any discontinuities must lie in $\mathrm{Null}(X)$, but $\hat{y}$ is orthogonal to $\mathrm{Null}(X)$).

Finally, using Definition 3.6,

$$p^*(\lambda) - \frac{1}{2}\|\hat{y}(\lambda) - y\|_2^2 = \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2$$

is a difference of continuous functions and thus continuous. Writing

$$g(\lambda) = \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 = \Big[\lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2\Big] \cdot \Big[\frac{1}{\lambda}\Big]$$

as the product of continuous functions shows that $g(\lambda)$ is continuous at every $\lambda > 0$. ∎

A.5 The Min-Norm Path

Proposition A.12.

Consider the min group-norm interpolation problem,

$$\min_w \; \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \quad \text{s.t.} \;\; Xw = y.$$

There exist $X, y$ such that the minimum-group-norm solution to this problem is unique and not in $\mathrm{Row}(X_{\mathcal{A}^*_\lambda})$.

Proof.

We provide a counterexample where the solution is not in the row space of the active set. Choose

$$X = \begin{bmatrix} 1 & 2 & 0 \\ 1 & 0 & 2 \end{bmatrix}, \quad y = \begin{bmatrix} 1 \\ 1 \end{bmatrix},$$

with block structure $b_1 = \{1, 2\}$ and $b_2 = \{3\}$, i.e. the first two columns form one block and the last column the other.

Clearly a solution using only $b_2$ cannot interpolate the data, so the active set must be $\{b_1, b_2\}$ or $\{b_1\}$. If the active set is $\{b_1\}$, then the minimum-norm interpolating solution can only be $w = [1 \;\; 0 \;\; 0]^\top$, which has group norm $1$.

Now, consider the case where the active set is $\{b_1, b_2\}$. The interpolating solution in $\mathrm{Row}(X)$ satisfies the following system:

$$\begin{bmatrix} 1 & 2 & 0 \\ 1 & 0 & 2 \end{bmatrix} \left( \alpha \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \implies \begin{bmatrix} 5\alpha + \beta \\ \alpha + 5\beta \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

Solving for $\alpha$ and $\beta$ yields $5\alpha = 1 - \beta$ and $24\beta = 4$, which implies $\beta = 1/6$ and $\alpha = 1/6$. The optimal $w^*$ is thus

$$w^* = \begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix},$$

and the group norm of $w^*$ is

$$\sum_{b_i \in \mathcal{B}} \|w^*_{b_i}\|_2 = \sqrt{2/9} + 1/3 = (1 + \sqrt{2})/3 < 1.$$

Thus, we can discount the interpolating solution using only $b_1$.

Now let us see if we can reduce the norm by including directions in $\mathrm{Null}(X)$. The null space is orthogonal to both rows of $X$, from which we conclude

$$\mathrm{Null}(X) = \{\gamma z : z = [-2/3 \;\; 1/3 \;\; 1/3]^\top, \; \gamma \in \mathbb{R}\}.$$

Any vector $w' = w^* + \gamma z$ is an interpolating solution, so it only remains to check whether some choice of $\gamma$ decreases the group norm. Assuming $\gamma \geq -1$,

$$\sum_{b_i \in \mathcal{B}} \|w^*_{b_i}\|_2 \leq \sum_{b_i \in \mathcal{B}} \|w'_{b_i}\|_2$$
$$\iff \frac{1 + \sqrt{2}}{3} \leq \sqrt{\Big(\frac{1}{3} - \frac{2\gamma}{3}\Big)^2 + \Big(\frac{1}{3} + \frac{\gamma}{3}\Big)^2} + \Big|\frac{1 + \gamma}{3}\Big|$$
$$\iff \frac{1 + \sqrt{2}}{3} - \frac{1 + \gamma}{3} \leq \sqrt{\Big(\frac{1}{3} - \frac{2\gamma}{3}\Big)^2 + \Big(\frac{1}{3} + \frac{\gamma}{3}\Big)^2} \quad \text{(since } \gamma \geq -1\text{)}$$
$$\iff \Big(\frac{1 + \sqrt{2}}{3} - \frac{1 + \gamma}{3}\Big)^2 \leq \Big(\frac{1}{3} - \frac{2\gamma}{3}\Big)^2 + \Big(\frac{1}{3} + \frac{\gamma}{3}\Big)^2$$
$$\iff (\sqrt{2} - \gamma)^2 \leq (1 - 2\gamma)^2 + (1 + \gamma)^2.$$

The left-hand side satisfies

$$(\sqrt{2} - \gamma)^2 = 2 - 2\sqrt{2}\gamma + \gamma^2,$$

while the right-hand side is

$$(1 - 2\gamma)^2 + (1 + \gamma)^2 = 1 - 4\gamma + 4\gamma^2 + 1 + 2\gamma + \gamma^2 = 2 - 2\gamma + 5\gamma^2.$$

As a result,

$$\sum_{b_i \in \mathcal{B}} \|w^*_{b_i}\|_2 \leq \sum_{b_i \in \mathcal{B}} \|w'_{b_i}\|_2 \iff 2 - 2\gamma + 5\gamma^2 - 2 - \gamma^2 + 2\sqrt{2}\gamma \geq 0 \iff 2(\sqrt{2} - 1)\gamma + 4\gamma^2 \geq 0.$$

However, it is easy to check that this fails for $\gamma \in ((1 - \sqrt{2})/2, 0)$. So the minimum-group-norm interpolating solution is not in $\mathrm{Row}(X_{\mathcal{A}^*_\lambda})$.

Now let's show that the minimum-group-norm solution is unique. This holds if the group norm as a function of $\gamma$,

$$g(\gamma) = \sqrt{\left(\tfrac{1}{3} - \tfrac{2\gamma}{3}\right)^2 + \left(\tfrac{1}{3} + \tfrac{\gamma}{3}\right)^2} + \left| \frac{1 + \gamma}{3} \right|,$$

is strictly convex. The function $h_2(\gamma) = |\tfrac{1 + \gamma}{3}|$ is convex, so we need only show that

$$h_1(\gamma) = \sqrt{\left(\tfrac{1}{3} - \tfrac{2\gamma}{3}\right)^2 + \left(\tfrac{1}{3} + \tfrac{\gamma}{3}\right)^2} = \tfrac{1}{3}\sqrt{2 - 2\gamma + 5\gamma^2}$$

is strictly convex. Taking the second derivative, we find

$$h_1''(\gamma) = \frac{3}{(2 - 2\gamma + 5\gamma^2)^{3/2}} > 0$$

for all $\gamma$. This completes the proof. ∎
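The arithmetic in this counter-example is easy to check numerically. The sketch below (plain Python, no dependencies; the value $\gamma = -0.1$ is an arbitrary choice inside $((1-\sqrt{2})/2, 0)$) verifies that $w^*$ interpolates, that its group norm is $(1+\sqrt{2})/3$, and that perturbing along the null-space direction strictly decreases the group norm:

```python
import math

# Data from the counter-example: blocks b1 = {1, 2}, b2 = {3}.
X = [[1.0, 2.0, 0.0],
     [1.0, 0.0, 2.0]]
y = [1.0, 1.0]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def group_norm(w):
    # ||w_{b1}||_2 + ||w_{b2}||_2 = sqrt(w1^2 + w2^2) + |w3|
    return math.hypot(w[0], w[1]) + abs(w[2])

# Row-space interpolant: alpha = beta = 1/6 gives w* = (1/3, 1/3, 1/3).
w_star = [1/3, 1/3, 1/3]
assert all(abs(p - t) < 1e-12 for p, t in zip(matvec(X, w_star), y))
assert abs(group_norm(w_star) - (1 + math.sqrt(2)) / 3) < 1e-12

# Null-space direction z; X z = 0.
z = [-2/3, 1/3, 1/3]
assert all(abs(v) < 1e-12 for v in matvec(X, z))

# Perturb with gamma in ((1 - sqrt(2))/2, 0): still interpolates, smaller norm.
gamma = -0.1
w_prime = [ws + gamma * zi for ws, zi in zip(w_star, z)]
assert all(abs(p - t) < 1e-12 for p, t in zip(matvec(X, w_prime), y))
assert group_norm(w_prime) < group_norm(w_star)
```

With $\gamma = -0.1$ the perturbed point is $(0.4, 0.3, 0.3)$, with group norm $0.8 < (1+\sqrt{2})/3 \approx 0.805$, so the row-space interpolant is not minimal.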

Lemma A.13.

Let $W_g$ be the set of least squares solutions with minimum group norm. That is,

$$W_g = \arg\min_w \left\{ \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 : X^\top X w = X^\top y \right\}.$$

Then every limit point of the min-norm group lasso solution as $\lambda \to 0$ lies in $W_g$.

Proof.

Write $h(w) = \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2$ for the group norm. Let $\lambda_k \to 0$ and observe that $w^*(\lambda_k)$ has at least one limit point since it is bounded (Section A.4). Since $\|s_{b_i}(\lambda_k)\|_2 \leq \lambda_k$, we see that $\lim_k \|s_{b_i}(\lambda_k)\|_2 = 0$ and thus $\lim_k s_{b_i}(\lambda_k) = 0$.

First-order optimality conditions imply

$$(X^\top X) w^*(\lambda) = X^\top y - s(\lambda), \qquad (23)$$

which, taking limits on both sides, gives

$$\lim_k (X^\top X) w^*(\lambda_k) = X^\top y. \qquad (24)$$

That is, every limit point $\bar{w}$ of $w^*(\lambda_k)$ is a least squares solution. Moreover, for any $w_g \in W_g$, optimality of $w^*(\lambda_k)$ gives $\frac{1}{2}\|Xw^*(\lambda_k) - y\|_2^2 + \lambda_k h(w^*(\lambda_k)) \leq \frac{1}{2}\|Xw_g - y\|_2^2 + \lambda_k h(w_g)$; since $w_g$ attains the least-squares optimum, this yields $h(w^*(\lambda_k)) \leq h(w_g)$, and continuity of $h$ gives $h(\bar{w}) \leq h(w_g)$. We conclude that $\bar{w} \in W_g$. ∎

\rowCE

Proof.

Consider the setting of Proposition A.12, with

$$X = \begin{bmatrix} 1 & 2 & 0 \\ 1 & 0 & 2 \end{bmatrix}, \qquad y = \begin{bmatrix} 1 \\ 1 \end{bmatrix},$$

where the blocks are $b_1 = \{1, 2\}$ and $b_2 = \{3\}$. We have shown that the min group norm interpolant is unique, is supported on $b_1$ and $b_2$, and does not lie in $\mathrm{Row}(X)$. Let $w_g$ be this solution.

Let $\lambda_k \downarrow 0$. By Lemma A.13, every limit point of $w^*(\lambda_k)$ is equal to $w_g$. Thus, $\lim_k w^*(\lambda_k)$ exists and is exactly $w_g$. Moreover, $w^*(\lambda_k)$ must be supported on $b_1$ and $b_2$ for all $k$ sufficiently large.

Decomposing $w_g = a + b$ and $w^*(\lambda_k) = r_k + n_k$, where $a, r_k \in \mathrm{Row}(X)$ and $b, n_k \in \mathrm{Null}(X)$, we see that

$$\|w_g - w^*(\lambda_k)\|_2^2 = \|a - r_k\|_2^2 + \|b - n_k\|_2^2 \to 0,$$

implying that $n_k \neq 0$ for sufficiently large $k$ (since $b \neq 0$). In other words, the min-norm solution to the group lasso problem fails to fall in $\mathrm{Row}(X)$ for some $\lambda > 0$. ∎

\minNormProgram

Proof.

Let $w \in \mathcal{W}^*(\lambda)$. By Section 3.1, $w_{b_i} = \alpha_{b_i} v_{b_i}$ where $\alpha_{b_i} \geq 0$. Moreover, $\alpha_{b_i} = 0$ for every $b_i \in \mathcal{B} \setminus \mathcal{S}_\lambda$. As a result,

$$\|w\|_2^2 = \|w_{\mathcal{E}_\lambda}\|_2^2 = \sum_{b_i \in \mathcal{S}_\lambda} \|\alpha_{b_i} v_{b_i}\|_2^2 = \lambda^2 \|\alpha\|_2^2,$$

where the last equality follows from $b_i \in \mathcal{S}_\lambda \implies b_i \in \mathcal{E}_\lambda$, which enforces $\|v_{b_i}\|_2 = \lambda$.

Now suppose $\alpha^*$ is optimal for the cone program and let $w \in \mathbb{R}^d$ be such that $w_{b_i} = \alpha^*_{b_i} v_{b_i}$. By construction, $\alpha^*_{b_i} = 0$ for all $b_i \in \mathcal{B} \setminus \mathcal{S}_\lambda$ (or it could not be optimal), so that $w_{b_i} = 0$ for all $b_i \in \mathcal{B} \setminus \mathcal{S}_\lambda$. Moreover,

$$Xw = \sum_{b_i \in \mathcal{S}_\lambda} \alpha^*_{b_i} X_{b_i} v_{b_i} = \hat{y}.$$

Thus, $w$ solves

$$\arg\min_w \|w\|_2^2 \;\; \text{s.t.} \;\; \forall b_i \in \mathcal{S}_\lambda: w_{b_i} = \alpha_{b_i} v_{b_i}, \; \alpha_{b_i} \geq 0; \quad \forall b_j \in \mathcal{B} \setminus \mathcal{S}_\lambda: w_{b_j} = 0; \quad Xw = \hat{y}.$$

Invoking Section 3.1 now proves that $w$ is the min-norm solution. ∎

Lemma (l2Reformulation). The $\ell_2$-penalized group lasso problem in Equation 17 is equivalent to the following CGL problem:

$$\min_w \; \frac{1}{2}\|\tilde{X} w - \tilde{y}\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \quad \text{s.t.} \quad K_{b_i}^\top w_{b_i} \leq 0 \;\text{ for all } b_i \in \mathcal{B}, \qquad (25)$$

where we have defined the extended data matrix and targets

$$\tilde{X} = \begin{bmatrix} X \\ \sqrt{\delta}\, I \end{bmatrix}, \qquad \tilde{y} = \begin{bmatrix} y \\ 0 \end{bmatrix}.$$

Moreover, $\tilde{X}$ is full column-rank and thus the CGL solution is unique.

Proof.

It is straightforward to show the equivalence by direct calculation. For any $w \in \mathbb{R}^d$,

$$\frac{1}{2}\|\tilde{X} w - \tilde{y}\|_2^2 = \frac{1}{2}\|Xw - y\|_2^2 + \frac{1}{2}\|\sqrt{\delta}\, w - 0\|_2^2 = \frac{1}{2}\|Xw - y\|_2^2 + \frac{\delta}{2}\|w\|_2^2.$$

Substituting this identity into Equation 25 establishes the equivalence.

It is clear by inspection that $\tilde{X}$ is full column-rank. Then $\mathrm{Null}(\tilde{X}_{\mathcal{C}}) = \{0\}$ for all $\mathcal{C} \subset \mathcal{B}$ and the solution is unique by Section 3.1. ∎
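This identity is easy to verify numerically. The sketch below (numpy; the data, $\delta = 0.25$, and $w$ are arbitrary illustrative choices, and it assumes the extended matrix $\tilde{X} = [X; \sqrt{\delta} I]$, $\tilde{y} = [y; 0]$ as above) checks that stacking $\sqrt{\delta}\, I$ under $X$ reproduces the $\ell_2$ penalty exactly and always yields a full-column-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, delta = 5, 3, 0.25
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Extended data matrix and targets: X_tilde = [X; sqrt(delta) I], y_tilde = [y; 0].
X_tilde = np.vstack([X, np.sqrt(delta) * np.eye(d)])
y_tilde = np.concatenate([y, np.zeros(d)])

# The augmented least-squares loss equals the l2-penalized loss for any w.
w = rng.standard_normal(d)
lhs = 0.5 * np.linalg.norm(X_tilde @ w - y_tilde) ** 2
rhs = 0.5 * np.linalg.norm(X @ w - y) ** 2 + 0.5 * delta * np.linalg.norm(w) ** 2
assert abs(lhs - rhs) < 1e-10

# X_tilde always has full column rank, even when X does not.
assert np.linalg.matrix_rank(X_tilde) == d
```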

\penConvergence

Proof.

First, we show that $\|w_\delta(\lambda)\|_2 \leq \|w^*(\lambda)\|_2$. Suppose by way of contradiction that $\|w_\delta(\lambda)\|_2 > \|w^*(\lambda)\|_2$ for some $\delta > 0$. Since

$$\frac{1}{2}\|Xw^*(\lambda) - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w^*_{b_i}(\lambda)\|_2 = \min_{w : K_{b_i} w_{b_i} \leq 0} \frac{1}{2}\|Xw - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \leq \frac{1}{2}\|Xw_\delta(\lambda) - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|[w_\delta(\lambda)]_{b_i}\|_2,$$

we deduce

$$\frac{1}{2}\|Xw^*(\lambda) - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w^*_{b_i}(\lambda)\|_2 + \frac{\delta}{2}\|w^*(\lambda)\|_2^2 < \frac{1}{2}\|Xw_\delta(\lambda) - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|[w_\delta(\lambda)]_{b_i}\|_2 + \frac{\delta}{2}\|w_\delta(\lambda)\|_2^2,$$

which contradicts optimality of $w_\delta(\lambda)$. So $\|w_\delta(\lambda)\|_2 \leq \|w^*(\lambda)\|_2$ for all $\delta > 0$. As a result, the sequence $\{w_{\delta_k}(\lambda)\}$, where $\delta_k \downarrow 0$, is bounded and admits at least one convergent subsequence. Let $\bar{w}(\lambda)$ be the limit point associated with one such subsequence; clearly $\|\bar{w}(\lambda)\|_2 \leq \|w^*(\lambda)\|_2$.

Let's show that $\bar{w}(\lambda)$ is a solution to the group lasso problem by checking the KKT conditions. Suppose $\lambda > 0$. Stationarity of the Lagrangian is

$$X^\top(Xw_{\delta_k}(\lambda) - y) + K\rho_{\delta_k}(\lambda) + s_{\delta_k}(\lambda) + \delta_k w_{\delta_k}(\lambda) = 0,$$

where $s^{\delta_k}_{b_i}(\lambda) \in \partial \lambda \|w^{\delta_k}_{b_i}\|_2$. Since $\|s^{\delta_k}_{b_i}(\lambda)\|_2 \leq \lambda$ and $w_{\delta_k}(\lambda)$ is bounded, clearly $K\rho_{\delta_k}(\lambda)$ is also bounded.

Dropping to a subsequence if necessary, let $\lim_k w_{\delta_k}(\lambda) = \bar{w}$ and $\lim_k K_{b_i}\rho^{\delta_k}_{b_i} = \bar{z}_{b_i}$. Define

$$R_{1/n} = \left\{\rho_{b_i} : \|K_{b_i}\rho_{b_i} - \bar{z}_{b_i}\|_\infty \leq \tfrac{1}{n}, \; \rho_{b_i} \geq 0\right\}.$$

Each set $R_{1/n}$ is polyhedral and thus retractive. Moreover, for each $n \in \mathbb{N}$, there exists $k$ such that

$$\|K_{b_i}\rho^{\delta_k}_{b_i} - \bar{z}_{b_i}\|_\infty \leq 1/n,$$

since $K_{b_i}\rho^{\delta_k}_{b_i} \to \bar{z}_{b_i}$. Recalling $\rho^{\delta_k}_{b_i} \geq 0$ shows that $R_{1/n}$ is non-empty. The limit of a sequence of nested, non-empty, retractive sets is also non-empty (Bertsekas, 2009, Proposition 1.4.10). Moreover, since the limit is exactly

$$\bar{R} = \left\{\rho_{b_i} : K_{b_i}\rho_{b_i} = \bar{z}_{b_i}, \; \rho_{b_i} \geq 0\right\},$$

we deduce that there exists $\bar{\rho} \geq 0$ such that $K_{b_i}\bar{\rho}_{b_i} = \bar{z}_{b_i}$.

Taking limits on either side of the stationarity condition, we find

$$X_{b_i}^\top(y - X\bar{w}(\lambda)) - K_{b_i}\bar{\rho}(\lambda) = \bar{s}_{b_i},$$

where the limit point $\bar{s}_{b_i}$ satisfies $\|\bar{s}_{b_i}\| \leq \lambda$. If $b_i \in \mathcal{B} \setminus \mathcal{A}_\lambda(\bar{w}(\lambda))$, then $\bar{w}_{b_i}$ satisfies stationarity.

Let $b_i \in \mathcal{A}_\lambda(\bar{w}(\lambda))$. Since $w_{\delta_k}(\lambda) \to \bar{w}(\lambda)$, $\|w^{\delta_k}_{b_i} - \bar{w}_{b_i}\|_2 \to 0$ and it must happen that $w^{\delta_k}_{b_i} \neq 0$ for all $k$ sufficiently large. That is, $\mathcal{A}(w_{\delta_k}(\lambda)) \supseteq \mathcal{A}(\bar{w})$ for all $k \geq k'$. Using $b_i \in \mathcal{A}(w_{\delta_k}(\lambda))$ provides a closed-form expression for $s^{\delta_k}_{b_i}$:

$$\lim_k s^{\delta_k}(\lambda) = \lambda \lim_k \frac{w^{\delta_k}_{b_i}(\lambda)}{\|w^{\delta_k}_{b_i}(\lambda)\|_2} = \lambda \frac{\bar{w}_{b_i}(\lambda)}{\|\bar{w}_{b_i}(\lambda)\|_2},$$

which shows that $\bar{s}_{b_i}$ is a subgradient of $\lambda\|w_{b_i}\|_2$. We conclude that the Lagrangian is stationary in $\bar{w}_{b_i}$ as well.

Let us check the remainder of the KKT conditions. For feasibility, it is straightforward to observe that

$$K_{b_i}^\top w^{\delta_k}_{b_i}(\lambda) \leq 0 \;\; \forall k \implies K_{b_i}^\top \bar{w}_{b_i}(\lambda) \leq 0.$$

Similarly,

$$\langle \rho^{\delta_k}_{b_i}, K_{b_i}^\top w^{\delta_k}_{b_i} \rangle = 0 \;\; \forall k \implies \langle \bar{\rho}_{b_i}, K_{b_i}^\top \bar{w}_{b_i} \rangle = 0,$$

which, combined with $\bar{\rho} \geq 0$, is sufficient to establish complementary slackness. We have shown that the subsequential limits $(\bar{w}, \bar{\rho})$ satisfy the KKT conditions, and thus $\bar{w}$ is a solution to the constrained group lasso problem.

Since the min-norm solution is unique and $\|\bar{w}(\lambda)\|_2 \leq \|w^*(\lambda)\|_2$, it must be that $\bar{w}(\lambda) = w^*(\lambda)$. Noting that this holds for every limit point implies $\lim_{\delta \downarrow 0} w_\delta(\lambda)$ exists and is $w^*(\lambda)$. This completes the proof for $\lambda > 0$.

If $\lambda = 0$, then the proposition follows similarly with the additional observation that $s_{\delta_k}(0) = 0$ for all $k$. ∎

A.6 Sensitivity

\constraintReduced

Proof.

Let $w$ be as in the theorem statement. We start by showing that $w$ attains the optimal objective value for the reduced problem:

$$\begin{aligned}
\frac{1}{2}\|X_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda} - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{A}_\lambda} \|w_{b_i}\|_2 &= \min_{w : K_{b_i} w_{b_i} \leq 0} \frac{1}{2}\|Xw - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \\
&\leq \min_{w_{\mathcal{A}_\lambda} : K_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda} \leq 0} \frac{1}{2}\|X_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda} - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{A}_\lambda} \|w_{b_i}\|_2 \\
&\leq \frac{1}{2}\|X_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda} - y\|_2^2 + \lambda \sum_{b_i \in \mathcal{A}_\lambda} \|w_{b_i}\|_2,
\end{aligned}$$

where the last inequality makes explicit use of feasibility of $w_{\mathcal{A}_\lambda}$. Since $w_{\mathcal{A}_\lambda}$ is feasible for the reduced problem and attains the minimum objective value, it must be optimal. Note that it is straightforward to check that the active blocks of the min-norm dual parameter $\rho^*_{\mathcal{A}_\lambda}$ are dual optimal for the reduced problem.

Now, let $w'_{\mathcal{A}_\lambda}$ be an optimal solution to the reduced problem. Since $\rho^*_{\mathcal{A}_\lambda}$ is dual optimal for the reduced problem, Section 3.1 implies

$$w'_{b_i} = \alpha'_{b_i} v_{b_i},$$

for every $b_i \in \mathcal{A}_\lambda$, with $\alpha'_{b_i} \geq 0$. Since $X_{\mathcal{A}_\lambda} w'_{\mathcal{A}_\lambda} = X_{\mathcal{A}_\lambda} w_{\mathcal{A}_\lambda}$, we deduce

$$\sum_{b_i \in \mathcal{A}_\lambda} (\alpha_{b_i} - \alpha'_{b_i}) X_{b_i} v_{b_i} = 0,$$

which contradicts minimality of $w$ unless $\alpha_{b_i} = \alpha'_{b_i}$. That is, $w'_{\mathcal{A}_\lambda} = w_{\mathcal{A}_\lambda}$. We conclude that the reduced problem provides the unique minimal solution with support $\mathcal{A}_\lambda$. ∎

Lemma A.14.

Let $w \in \mathcal{W}^*(\lambda)$ be minimal. Then $w$ is a second-order stationary point of the reduced problem (Equation 18).

Proof.

Define $M(w)$ to be the block-diagonal projection matrix given by

$$M(w)_{b_i} = \frac{1}{\|w_{b_i}\|_2}\left(I - \frac{w_{b_i}}{\|w_{b_i}\|_2}\frac{w_{b_i}^\top}{\|w_{b_i}\|_2}\right). \qquad (26)$$

The Hessian of the Lagrangian of the reduced problem with respect to $w$ is exactly

$$\nabla^2_w \mathcal{L}(w_{\mathcal{A}_\lambda}, \rho_{\mathcal{A}_\lambda}) = X_{\mathcal{A}_\lambda}^\top X_{\mathcal{A}_\lambda} + \lambda M_{\mathcal{A}_\lambda}(w).$$

A sufficient condition for $w$ to be second-order stationary is that this Hessian is positive-definite. We now show this fact holds.

Clearly $\nabla^2_w \mathcal{L}(w_{\mathcal{A}_\lambda}, \rho_{\mathcal{A}_\lambda})$ is positive semi-definite, as it is the sum of a PSD projection matrix and a Gram matrix, which is always PSD. Let $\bar{w} \in \mathbb{R}^{|\mathcal{A}_\lambda|}$ be such that $\bar{w} \neq 0$. Suppose that

$$0 = \bar{w}^\top \nabla^2_w \mathcal{L}(w_{\mathcal{A}_\lambda}, \rho_{\mathcal{A}_\lambda}) \bar{w} = \bar{w}^\top X_{\mathcal{A}_\lambda}^\top X_{\mathcal{A}_\lambda} \bar{w} + \lambda \bar{w}^\top M(w) \bar{w} = \|X_{\mathcal{A}_\lambda} \bar{w}\|_2^2 + \lambda \bar{w}^\top M(w) \bar{w}.$$

Since $M(w)$ is PSD, it must hold that

$$\bar{w}^\top M(w) \bar{w} = 0,$$

which is true if and only if $\bar{w}_{b_i} = \beta_{b_i} w_{b_i}$, $\beta_{b_i} \in \mathbb{R}$, for each $b_i \in \mathcal{A}_\lambda$. The supposition also forces $\|X_{\mathcal{A}_\lambda} \bar{w}\|_2^2 = 0$, so that

$$X_{\mathcal{A}_\lambda} \bar{w} = \sum_{b_i \in \mathcal{A}_\lambda} \beta_{b_i} \alpha_{b_i} X_{b_i} v_{b_i} = 0,$$

which contradicts minimality of $w$ unless $\beta_{b_i} = 0$ for each $b_i \in \mathcal{A}_\lambda$. But then $\bar{w} = 0$, a contradiction, and we conclude that the Hessian is positive-definite as desired. ∎
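The projection structure of $M(w)$ in Equation 26 is straightforward to verify numerically. The sketch below (numpy, arbitrary random data) checks that each block of $M(w)$ annihilates $w_{b_i}$, is PSD, and is strictly positive on directions with a component orthogonal to $w_{b_i}$, which is what drives the positive-definiteness argument above:

```python
import numpy as np

rng = np.random.default_rng(1)

def M_block(w_b):
    """One block of the projection matrix from Equation 26."""
    norm = np.linalg.norm(w_b)
    u = w_b / norm
    return (np.eye(len(w_b)) - np.outer(u, u)) / norm

w_b = rng.standard_normal(4)
M = M_block(w_b)

# M(w)_bi annihilates w_bi: the quadratic form vanishes on collinear directions.
assert abs(w_b @ M @ w_b) < 1e-10

# M(w)_bi is positive semi-definite ...
eigs = np.linalg.eigvalsh(M)
assert eigs.min() > -1e-10

# ... and strictly positive on any component orthogonal to w_bi.
v = rng.standard_normal(4)
v_orth = v - (v @ w_b) / (w_b @ w_b) * w_b
assert v_orth @ M @ v_orth > 1e-8
```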

\localSolFn

Proof.

Recall from Equation 18 that $w_{\mathcal{A}_\lambda}$ is the unique solution to the reduced group lasso problem. In fact, as we show in Lemma A.14, $w_{\mathcal{A}_\lambda}$ is a second-order stationary point for the reduced problem. Now, combining this fact with LICQ and SCS and using standard results on differential sensitivity from optimization theory (see, e.g., Fiacco & Ishizuka (1990, Theorem 5.1) and the references therein), we obtain the following:

For $(\lambda, y)$ in a neighborhood of $(\bar{\lambda}, \bar{y})$, there exists a unique once continuously differentiable function

$$\tilde{l}(\lambda, y) = \begin{bmatrix} \tilde{h}(\lambda, y) \\ \tilde{g}(\lambda, y) \end{bmatrix}$$

such that $\tilde{h}(\bar{\lambda}, \bar{y}) = w_{\mathcal{A}_\lambda}$, $\tilde{g}(\bar{\lambda}, \bar{y}) = \rho^*_{\mathcal{A}_\lambda}$, and $\tilde{l}(\lambda, y)$ is a primal-dual solution to the reduced problem.

Now we show that $\tilde{l}$ can be extended from the reduced problem to obtain a local solution function for the constrained group lasso. Define $h(\lambda, y)$ such that $h_{\mathcal{A}_\lambda}(\lambda, y) = \tilde{h}(\lambda, y)$ and $h_{\mathcal{B} \setminus \mathcal{A}_\lambda}(\lambda, y) = 0$. We shall show how to extend $g$ shortly. For $b_i \in \mathcal{A}_\lambda$, the pair $(h_{b_i}(\lambda, y), g_{b_i}(\lambda, y))$ verifies the KKT conditions (which are separable over blocks) since it verifies them for the reduced problem. So, we need only consider $b_i \in \mathcal{B} \setminus \mathcal{A}_\lambda$.

First, consider $b_i \in \mathcal{B} \setminus \mathcal{E}_\lambda$. In this case, we have

$$\|X_{b_i}^\top(\bar{y} - X h(\bar{\lambda}, \bar{y})) + K_{b_i} \rho_{b_i}(\bar{\lambda}, \bar{y})\|_2 < \bar{\lambda}.$$

Since this inequality is strict and

$$z(\bar{\lambda}, \bar{y}) = X_{b_i}^\top(\bar{y} - X h(\bar{\lambda}, \bar{y}))$$

is continuous in $(\bar{\lambda}, \bar{y})$, there exists a neighborhood of $(\bar{\lambda}, \bar{y})$ on which

$$\|z(\lambda, y) + K_{b_i} \rho_{b_i}(\bar{\lambda}, \bar{y})\|_2 \leq \lambda.$$

Since $\rho_{b_i}(\bar{\lambda}, \bar{y}) \geq 0$ and $w_{b_i} = 0$, dual feasibility and complementary slackness hold. We conclude that the extension $g_{b_i}(\lambda, y) = \rho_{b_i}(\bar{\lambda}, \bar{y})$ satisfies the KKT conditions on this neighborhood.

Now suppose $b_i \in \mathcal{E}_\lambda(\bar{\lambda}, \bar{y}) \setminus \mathcal{A}_\lambda$. If

$$\|X_{b_i}^\top(y - X h(\lambda, y))\|_2 \leq \lambda,$$

then taking $g_{b_i}(y, \lambda) = 0$ satisfies the KKT conditions. Otherwise, observe that

$$\|X_{b_i}^\top(\bar{y} - X h(\bar{\lambda}, \bar{y})) + K_{b_i} \rho_{b_i}(\bar{\lambda}, \bar{y})\|_2 = \bar{\lambda}$$

must hold for some dual parameter $\rho_{b_i}(\bar{\lambda}, \bar{y})$ by the KKT conditions. Moreover, SCS implies that we can choose the dual parameter to satisfy

$$\rho_{b_i}(\bar{\lambda}, \bar{y}) > 0,$$

since $K_{b_i}^\top w_{b_i}(\bar{\lambda}, \bar{y}) = 0$. Finally, because

$$X_{b_i}^\top(y - X h(\lambda, y))$$

is a continuous function of $(y, \lambda)$, taking $(\lambda, y)$ sufficiently close to $(\bar{\lambda}, \bar{y})$ implies there exists $\rho_{b_i}(\lambda, y) \geq 0$ such that

$$\|X_{b_i}^\top(y - X h(\lambda, y)) + K_{b_i} \rho_{b_i}(\lambda, y)\|_2 \leq \lambda.$$

Now we choose our extension to be $g_{b_i}(\lambda, y) = \rho_{b_i}(\lambda, y)$, so that $(h_{b_i}, g_{b_i})$ satisfies stationarity of the Lagrangian as well. Since $g_{b_i}$ is feasible and $h_{b_i}$ is the zero function, primal feasibility, dual feasibility, and complementary slackness also hold.

Since $l = (g, h)$ satisfies the KKT conditions in a local neighborhood of $(\bar{\lambda}, \bar{y})$, it is exactly a local solution function. Moreover, since $g_{\mathcal{B} \setminus \mathcal{A}}(\lambda, y) = 0$ over this neighborhood, it is easy to see that the gradient for parameter blocks in $\mathcal{B} \setminus \mathcal{A}$ is $0$. For the active blocks, Fiacco & Ishizuka (1990, Theorem 5.1) implies that the gradients are given as follows.

Recall from Lemma A.14 that $M_{\mathcal{A}_\lambda}$ is a block-diagonal projection matrix with blocks given by

$$M(\bar{w})_{b_i} = \frac{1}{\|\bar{w}_{b_i}\|_2}\left(I - \frac{\bar{w}_{b_i}}{\|\bar{w}_{b_i}\|_2}\frac{\bar{w}_{b_i}^\top}{\|\bar{w}_{b_i}\|_2}\right).$$

Then, the Jacobian of $\nabla_w \mathcal{L}(\bar{w}_{\mathcal{A}_\lambda}, \bar{\rho}_{\mathcal{A}_\lambda})$ for the reduced problem with respect to the primal-dual parameters is given by

$$D = \begin{bmatrix} X_{\mathcal{A}_\lambda}^\top X_{\mathcal{A}_\lambda} + \lambda M(\bar{w}) & K_{\mathcal{A}_\lambda} \\ \bar{\rho}_{\mathcal{A}_\lambda} \odot K_{\mathcal{A}_\lambda} & \mathrm{diag}(K_{\mathcal{A}_\lambda}^\top \bar{w}_{\mathcal{A}_\lambda}) \end{bmatrix}.$$

It also holds that $D$ is invertible. Finally, let $u_{b_i} = \bar{w}_{b_i} / \|\bar{w}_{b_i}\|_2$ and let $u$ be the concatenation of these vectors. We are now able to write the Jacobians of $w(y, \lambda)$ with respect to $y$ and $\lambda$ as follows:

$$\nabla_\lambda w(\bar{\lambda}, \bar{y}) = -[D^{-1}]_{\mathcal{A}_\lambda} u_{\mathcal{A}_\lambda}, \qquad \nabla_y w(\bar{\lambda}, \bar{y}) = [D^{-1}]_{\mathcal{A}_\lambda} X_{\mathcal{A}_\lambda}^\top,$$

where $[D^{-1}]_{\mathcal{A}_\lambda}$ is the $|\mathcal{A}_\lambda| \times |\mathcal{A}_\lambda|$-dimensional leading principal submatrix of $D^{-1}$. ∎

Appendix B Specialization: Proofs

Lemma B.1.

Let $(W_1, w_2)$ and $(W_1', w_2')$ be two solutions to the non-convex ReLU training problem. If for every $i \in [m]$ it holds that

$$W_{1i} w_{2i} = W_{1i}' w_{2i}',$$

and $\mathrm{sign}(w_{2i}) = \mathrm{sign}(w_{2i}')$, then $W_1 = W_1'$ and $w_2 = w_2'$. That is, the solutions are the same.

Proof.

The ReLU prediction function $f_{W_1, w_2}$ is invariant to scalings of the form

$$\bar{W}_{1i} = \alpha W_{1i}, \qquad \bar{w}_{2i} = w_{2i} / \alpha,$$

where $\alpha > 0$. Using this, we deduce that both solutions must satisfy the following equations:

$$1 = \arg\min_\alpha \; \alpha^2 \|W_{1i}\|_2^2 + \|w_{2i}\|_2^2 / \alpha^2, \qquad 1 = \arg\min_\alpha \; \alpha^2 \|W_{1i}'\|_2^2 + \|w_{2i}'\|_2^2 / \alpha^2,$$

which in turn implies that

$$\|W_{1i}\|_2 = \|w_{2i}\|_2, \qquad \|W_{1i}'\|_2 = \|w_{2i}'\|_2.$$

We deduce

$$\|W_{1i}\|_2^2 = \|W_{1i}\|_2 \|w_{2i}\|_2 = \|W_{1i} w_{2i}\|_2 = \|W_{1i}' w_{2i}'\|_2 = \|W_{1i}'\|_2 \|w_{2i}'\|_2 = \|W_{1i}'\|_2^2,$$

where we have used the fact that $\|w_{2i}\|_2 = |w_{2i}|$. This implies $W_{1i} = W_{1i}'$, since $W_{1i}$ and $W_{1i}'$ are collinear, meaning $w_{2i} = w_{2i}'$ must also hold. ∎

Lemma B.2.

Let $(W_1, w_2)$ and $(\tilde{W}_1, \tilde{w}_2)$ be two solutions to the non-convex ReLU training problem. If $(W_1, w_2)$ and $(\tilde{W}_1, \tilde{w}_2)$ map to the same solution in the convex reformulation, then they are equal up to permutations or splits/merges of neurons with collinear weights.

Proof.

Let $(W_1, w_2)$ be a solution to the non-convex ReLU training problem. For each neuron, associate $W_{1i}$ with the sparsest activation pattern $D_i$ to which $W_{1i}$ conforms. That is, given $W_{1i}$, we associate it with the following pattern:

$$D_i = \arg\min \left\{ \mathrm{nnz}(D_j) : D_j \in \mathcal{D}_X, \; (2D_j - I) X W_{1i} \geq 0 \right\}.$$

If $W_{1i}$ and $W_{1j}$ are associated with the same pattern $D_i$ and $\mathrm{sign}(w_{2i}) = \mathrm{sign}(w_{2j})$, then Mishkin et al. (2022) prove that $W_{1i}$ and $W_{1j}$ are collinear, i.e., $W_{1i} = \alpha W_{1j}$ for some $\alpha > 0$. They also prove that the model obtained by merging these collinear neurons as

$$W_{1i}' = \frac{W_{1i}|w_{2i}| + W_{1j}|w_{2j}|}{\|W_{1i}|w_{2i}| + W_{1j}|w_{2j}|\|_2^{1/2}}, \qquad w_{2i}' = \mathrm{sign}(w_{2i}) \, \|W_{1i}|w_{2i}| + W_{1j}|w_{2j}|\|_2^{1/2},$$

with $W_{1j}' = 0$ and $w_{2j}' = 0$, is also an optimal solution to the non-convex ReLU problem. We term this the merge/split symmetry.
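The merge formula can be checked numerically. In the sketch below (numpy; the network sizes, the shared direction `u`, and the weights are arbitrary illustrative choices), merging two collinear, same-sign neurons leaves the predictions of the two-layer ReLU network unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

def predict(X, W1, w2):
    # Two-layer ReLU network: f(X) = relu(X W1) w2.
    return np.maximum(X @ W1, 0.0) @ w2

X = rng.standard_normal((6, 3))

# Two collinear first-layer neurons with same-sign second-layer weights.
u = rng.standard_normal(3)
W1 = np.column_stack([1.5 * u, 0.7 * u])
w2 = np.array([0.4, 1.1])

# Merge per the formula: W1' = v / ||v||^{1/2}, w2' = sign(w2_i) ||v||^{1/2},
# where v = W1_i |w2_i| + W1_j |w2_j|.
v = W1[:, 0] * abs(w2[0]) + W1[:, 1] * abs(w2[1])
W1_merged = (v / np.sqrt(np.linalg.norm(v)))[:, None]
w2_merged = np.array([np.sign(w2[0]) * np.sqrt(np.linalg.norm(v))])

# The merged single-neuron network computes the same function.
assert np.allclose(predict(X, W1, w2), predict(X, W1_merged, w2_merged))
```

Because both neurons share an activation pattern, their contributions add as positive multiples of the same ReLU feature, which is exactly what the merged neuron reproduces.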

By recursively merging all collinear neurons assigned to the same activation pattern for which $\mathrm{sign}(w_{2i}) = \mathrm{sign}(w_{2j})$ holds, we eventually obtain an optimal reduced model $(W_1', w_2')$ such that no two non-zero neurons share an activation pattern $D_i$ and have second-layer weights with the same sign. We may then place an ordering on the neurons by ordering the $D_i$ matrices from $1$ to $p$. This removes all permutation symmetries.

Since every signed neuron in the non-convex model now corresponds to a unique pattern $D_i$, the mapping to the convex program is given by

$$v_i = \begin{cases} W_{1i}' w_{2i}' & \text{if } w_{2i}' \geq 0 \\ 0 & \text{otherwise,} \end{cases} \qquad u_i = \begin{cases} -W_{1i}' w_{2i}' & \text{if } w_{2i}' < 0 \\ 0 & \text{otherwise.} \end{cases}$$

Applying this procedure to $(W_1, w_2)$ and $(\tilde{W}_1, \tilde{w}_2)$ and recalling that they map to the same solution implies

$$W_{1i}' w_{2i}' = \tilde{W}_{1i}' \tilde{w}_{2i}'$$

and $\mathrm{sign}(w_{2i}') = \mathrm{sign}(\tilde{w}_{2i}')$ for every $i$. Invoking Lemma B.1 is sufficient to prove that $(W_1', w_2')$ and $(\tilde{W}_1', \tilde{w}_2')$ are identical.

To obtain this result, we (i) merged all collinear neurons with the same sign in the second layer and then (ii) sorted the neurons according to their activation patterns. Thus, it must be that $(W_1, w_2)$ and $(\tilde{W}_1, \tilde{w}_2)$ differ only by (i) splitting of neurons into collinear neurons and (ii) the ordering (i.e., permutation) of the neurons. This completes the proof. ∎

\reluSolFn

Proof.

Let the proposed optimal set be the completion of

$$\mathcal{U}_\lambda = \left\{ (W_1, w_2) : f_{W_1, w_2}(X) = \hat{y}, \; W_{1i} = (\alpha_i / \lambda)^{1/2} v_i, \; w_{2i} = \xi_i (\alpha_i \lambda)^{1/2}, \; \alpha_i \geq 0, \; i \in [2p] \setminus \mathcal{S}_\lambda \Rightarrow \alpha_i = 0 \right\}$$

over all permutations of the neurons and splits/merges of collinear neurons. Let $(W_1, w_2) \in \mathcal{U}_\lambda$. Inverting Equation 5, we compute a candidate solution $u$ to the convex reformulation as

$$u_i = W_{1i} w_{2i} = \alpha_i v_i,$$

for some $\alpha_i \geq 0$, where we assumed without loss of generality that $w_{2i} \geq 0$ (if $\xi_i = -1$, then we simply map to a "negative" neuron in the convex reformulation). Moreover, $\alpha_i = 0$ if $i \in [2p] \setminus \mathcal{S}_\lambda$. Since the solution mapping preserves the prediction of the neural network, we also have $Xu = \hat{y}$, which is enough to guarantee that $u$ is a solution to the convex reformulation using Section 3.1. Since $u$ is a solution, we conclude that $(W_1, w_2)$ must be optimal for the non-convex ReLU problem.

For the reverse inclusion, let $(W_1, w_2) \in \mathcal{O}_\lambda$ and suppose no permutation or merge/split symmetry of $(W_1, w_2)$ is in $\mathcal{U}_\lambda$. While $(W_1, w_2) \notin \mathcal{U}_\lambda$, it still maps to some solution to the convex reformulation, call it $u$. As $\mathcal{U}_\lambda$ is obtained by mapping every solution to the convex reformulation (i.e., every $u \in \mathcal{W}^*_\lambda$) to the non-convex parameterization, it must be that $u$ also maps to some $(\tilde{W}_1, \tilde{w}_2)$ in $\mathcal{U}_\lambda$. Lemma B.2 now shows that $(W_1, w_2)$ and $(\tilde{W}_1, \tilde{w}_2)$ are identical up to permutations and splits/merges of collinear neurons. But this implies $(W_1, w_2) \in \mathcal{U}_\lambda$ up to permutation and merge/split symmetries, which is a contradiction. We conclude that the optimal set is the completion of $\mathcal{U}_\lambda$ as claimed. ∎

\stationaryPoints

Proof.

The proof is almost immediate.

Given any sub-sampled set of activation patterns $\tilde{\mathcal{D}} \subset \mathcal{D}_X$, Wang et al. (2021, Theorem 3) prove that the solutions to the sub-sampled convex program are Clarke stationary points (Clarke, 1990) of the non-convex ReLU optimization problem in Equation 1, and vice-versa. Using the expression for the CGL solution set in Section 3.1, which applies to sub-sampled convex reformulations as well as the full program, we obtain a version of Algorithm 2 for stationary points. That is, every model $(W_1, w_2)$ in

$$\mathcal{C}_\lambda(\tilde{\mathcal{D}}) = \left\{ (W_1, w_2) : f_{W_1, w_2}(X) = \hat{y}_{\tilde{\mathcal{D}}}, \; W_{1i} = (\alpha_i / \lambda)^{1/2} v_i(\tilde{\mathcal{D}}), \; w_{2i} = \xi_i (\alpha_i \lambda)^{1/2}, \; \alpha_i \geq 0, \; i \in [m] \setminus \mathcal{S}_\lambda \implies \alpha_i = 0 \right\}$$

is a stationary point of the non-convex ReLU program. Taking the union over all sub-sampled sets of activation patterns gives $\mathcal{C}_\lambda$, which is guaranteed to contain every stationary point of the non-convex objective. This completes the proof. ∎

\pUnique

Proof.

Since there is only one solution to the convex reformulation, all solutions to the non-convex training problem must map to that solution. Lemma B.2 now implies that the solution map for the non-convex problem is p-unique, i.e. unique up to permutations and splits/merges of collinear neurons. โˆŽ

\reluUnique

Proof.

We assume without loss of generality that $\mathcal{E}_\lambda$ consists of the first $|\mathcal{E}_\lambda|$ indices. By Corollary 3.1, the constrained group lasso admits a unique solution if and only if the vectors

$$\bigcup_{i \in \mathcal{E}_\lambda} \{ D_i X v_{b_i} \}$$

are linearly independent. We now show that this fact holds under the proposed sufficient condition by proving the stronger fact that $\bigcup_{i \in \mathcal{E}_\lambda} \{ [D_i X]_j : j \in [d] \}$ are linearly independent with probability one, where $[D_i X]_j$ is the $j$-th column of $D_i X$. Since $\mathrm{nnz}(D_i) \geq d\,|\mathcal{E}_\lambda|$ and $X$ has a continuous probability distribution, it holds that $[D_i X]_j$ has at least $d\,|\mathcal{E}_\lambda|$ non-zero entries with probability one. Let

$$\mathcal{S}_{ij} = \mathrm{Span}\left( \bigcup_{i' \in \mathcal{E}_\lambda} \{ [D_{i'} X]_{j'} : j' \in [d] \} \setminus [D_i X]_j \right),$$

and observe that $\dim(\mathcal{S}_{ij}) \leq d\,|\mathcal{E}_\lambda| - 1$. As a result, the conditional probability that $[D_i X]_j$ falls in this subspace satisfies

$$\Pr\left( [D_i X]_j \in \mathcal{S}_{ij} \,\middle|\, \bigcup_{i' \in \mathcal{E}_\lambda} \{ [D_{i'} X]_{j'} : j' \in [d] \} \setminus [D_i X]_j \right) = 0.$$

Taking expectations over the remaining vectors of $X$ implies

$$\Pr\left( [D_i X]_j \in \mathcal{S}_{ij} \right) = 0.$$

Finally, using a union bound over $i, j$ implies that $\bigcup_{i \in \mathcal{E}_\lambda} \{ [D_i X]_j : j \in [d] \}$ are linearly independent almost surely. ∎

\minimalComplexity

Proof.

The proof follows directly from existing results.

Recall from Pilanci & Ergen (2020) that there are at most

$$p \in O\left( r (n/r)^{r} \right)$$

activation patterns in the convex reformulation and that the complexity of computing an optimal ReLU model using a standard interior-point solver is $O(d^3 r^3 (n/r)^{3r})$.

We know from Proposition A.6 that the complexity of pruning an optimal neural network with at most $2p$ neurons is $O(n^3 p + nd)$. Combining these complexities, we find that the cost of optimization dominates, and the overall complexity of computing an optimal and minimal neural network grows as $O(d^3 r^3 (n/r)^{3r})$.

Finally, the bound on the number of active neurons follows from the fact that $\dim \mathrm{Span}(\{(X W^*_{1i})_+\}_i) \leq n$. This completes the proof. ∎

\minNormNN

Proof.

Let $(u, v)$ be an optimal solution to the convex reformulation. Pilanci & Ergen (2020) show that an optimal solution to the original two-layer ReLU optimization problem is given by setting

$$W_{1i} = \frac{u_i}{\|u_i\|_2^{1/2}}, \qquad w_{2i} = \|u_i\|_2^{1/2},$$

and

$$W_{1j} = \frac{v_j}{\|v_j\|_2^{1/2}}, \qquad w_{2j} = -\|v_j\|_2^{1/2},$$

where we define $0/0 = 0$. Then, the $r$-value of any such solution can be calculated as

$$\begin{aligned}
r(W_1, w_2) &= \sum_{i=1}^m \|W_{1i}\|_2^4 + |w_{2i}|^4 \\
&= \sum_{u_i \neq 0} \left\|\frac{u_i}{\|u_i\|_2^{1/2}}\right\|_2^4 + \left|\|u_i\|_2^{1/2}\right|^4 + \sum_{v_i \neq 0} \left\|\frac{v_i}{\|v_i\|_2^{1/2}}\right\|_2^4 + \left|\|v_i\|_2^{1/2}\right|^4 \\
&= \sum_{u_i \neq 0} \|u_i\|_2^2 + \|u_i\|_2^2 + \sum_{v_i \neq 0} \|v_i\|_2^2 + \|v_i\|_2^2 \\
&= 2\|u\|_2^2 + 2\|v\|_2^2.
\end{aligned}$$

That is, $r(W_1, w_2)$ is a monotone transformation of the Euclidean norm of $(u, v)$. Moreover, all optimal ReLU networks, up to permutation and neuron-split symmetries, can be obtained by applying this mapping to solutions of the convex reformulation. We conclude that the minimum $r$-valued optimal ReLU network, before any such symmetries are applied, is given by the minimum $\ell_2$-norm solution to the convex reformulation.
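The identity $r(W_1, w_2) = 2\|u\|_2^2 + 2\|v\|_2^2$ can be verified numerically for the mapping above (a quick sketch with synthetic vectors, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 5

# synthetic convex-reformulation solutions (illustrative values)
u = rng.standard_normal((m, d))
v = rng.standard_normal((m, d))

r_value = 0.0
for z, sign in [(u, 1.0), (v, -1.0)]:
    for z_i in z:
        norm = np.linalg.norm(z_i)
        W1_i = z_i / np.sqrt(norm)   # W_1i = z_i / ||z_i||^{1/2}
        w2_i = sign * np.sqrt(norm)  # w_2i = +/- ||z_i||^{1/2}
        r_value += np.linalg.norm(W1_i) ** 4 + abs(w2_i) ** 4

expected = 2 * np.linalg.norm(u) ** 2 + 2 * np.linalg.norm(v) ** 2
print(np.isclose(r_value, expected))  # True
```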

Now, permutation symmetries clearly have no effect on the $r$-value. However, splitting neurons into multiple collinear neurons can, in fact, reduce $r$. For example, consider splitting a neuron $(W_{1i}, w_{2i})$ into $(W'_{1i}, w'_{2i})$ and $(W'_{1j}, w'_{2j})$ such that $W'_{1i}$ and $W'_{1j}$ are collinear, $\mathrm{sign}(w'_{2i}) = \mathrm{sign}(w'_{2j})$, and

$$W_{1i} = \frac{W'_{1i} w'_{2i} + W'_{1j} w'_{2j}}{\|W'_{1i} w'_{2i} + W'_{1j} w'_{2j}\|_2^{1/2}}, \qquad w_{2i} = \mathrm{sign}(w_{2i}) \|W'_{1i} w'_{2i} + W'_{1j} w'_{2j}\|_2^{1/2},$$

and $\|W'_{1i}\|_2 = |w'_{2i}|$, $\|W'_{1j}\|_2 = |w'_{2j}|$. This operation is known to preserve optimality (Mishkin et al., 2022). However, the conditions on the split neurons also imply

$$\begin{aligned}
\|W_{1i}\|_2^4 + |w_{2i}|^4 &= 2 \|W'_{1i} w'_{2i} + W'_{1j} w'_{2j}\|_2^2 \\
&= 2\|W'_{1i} w'_{2i}\|_2^2 + 2\|W'_{1j} w'_{2j}\|_2^2 + 4 \langle W'_{1i} w'_{2i}, W'_{1j} w'_{2j} \rangle \\
&> 2\|W'_{1i} w'_{2i}\|_2^2 + 2\|W'_{1j} w'_{2j}\|_2^2 \\
&= \|W'_{1i}\|_2^4 + |w'_{2i}|^4 + \|W'_{1j}\|_2^4 + |w'_{2j}|^4,
\end{aligned}$$

where we have used the positive collinearity of $W'_{1i} w'_{2i}$ and $W'_{1j} w'_{2j}$. As a result, the $r$-value of a solution can be decreased by splitting neurons. Thus, the minimum-norm solution to the convex reformulation is only the minimum $r$-value optimal ReLU network out of all networks which admit no neuron-merge symmetries. ∎
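This split/merge calculation can be illustrated concretely (a sketch under assumed values, not code from the paper): two positively-collinear, sign-matched, balanced neurons produce the same network output as the merged neuron, but with a strictly smaller $r$-value.

```python
import numpy as np

u = np.array([1.0, 0.0])  # shared direction (assumption)
a, b = 1.5, 0.7           # balanced neuron scales: ||W'_1i||_2 = |w'_2i|
W1i_p, w2i_p = a * u, a   # first split neuron
W1j_p, w2j_p = b * u, b   # second split neuron

# merged neuron, following the merge formula above
s = W1i_p * w2i_p + W1j_p * w2j_p
W1_merged = s / np.sqrt(np.linalg.norm(s))
w2_merged = np.sign(w2i_p) * np.sqrt(np.linalg.norm(s))

X = np.random.default_rng(2).standard_normal((6, 2))
out_split = np.maximum(X @ W1i_p, 0) * w2i_p + np.maximum(X @ W1j_p, 0) * w2j_p
out_merged = np.maximum(X @ W1_merged, 0) * w2_merged

r_split = (np.linalg.norm(W1i_p) ** 4 + w2i_p ** 4
           + np.linalg.norm(W1j_p) ** 4 + w2j_p ** 4)
r_merged = np.linalg.norm(W1_merged) ** 4 + abs(w2_merged) ** 4

print(np.allclose(out_split, out_merged), r_merged > r_split)  # True True
```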

\oneNeuron

Proof.

Mishkin et al. (2022) show that Equation 1 has the same global optimal value as

$$\min_{v, \gamma_i \in \{-1, 1\}} \frac{1}{2} \left\| \sum_{i=1}^{p} (X v_i)_+ \gamma_i - y \right\|_2^2 + \lambda \sum_{i=1}^{p} \|v_i\|_2. \qquad (27)$$

Moreover, the objective-preserving mapping $W_{1i} = v_i / \|v_i\|_2^{1/2}$, $w_{2i} = \gamma_i \|v_i\|_2^{1/2}$ can be used to obtain an optimal ReLU network from a solution to Equation 27. We proceed by analyzing this equivalent problem and then use the mapping to return to the original non-convex formulation.

Consider the case $p = 1$. Let $X, y$ consist of two training points, $(x_1, y_1) = (-100, 1)$ and $(x_2, y_2) = (1, 10)$. In what follows, we drop the subscripts on $v$ and $\gamma$ since $p = 1$, and we consider a one-dimensional problem. The optimization problem of interest is

$$\min_{v, \gamma} \frac{1}{2} \left[ ((x_1 v)_+ \gamma - y_1)^2 + ((x_2 v)_+ \gamma - y_2)^2 \right] + \lambda |v|.$$

Since $x_1 < 0$ and $x_2 > 0$, we can re-write this optimization problem as

$$\min_{v, \gamma} \frac{1}{2} \left[ \mathbb{1}_{v \leq 0} \left( (x_1 v \gamma - y_1)^2 + y_2^2 \right) + \mathbb{1}_{v > 0} \left( (x_2 v \gamma - y_2)^2 + y_1^2 \right) \right] + \lambda |v|.$$

By inspection, we see that $\gamma = 1$ is optimal in both cases, leading to the following simplified expression:

$$\min_{v} \frac{1}{2} \left[ \mathbb{1}_{v \leq 0} \left( (x_1 v - y_1)^2 + y_2^2 \right) + \mathbb{1}_{v > 0} \left( (x_2 v - y_2)^2 + y_1^2 \right) \right] + \lambda |v|.$$

This is a piece-wise continuous (but non-smooth) quadratic with a breakpoint at $v = 0$. We determine the solution to this minimization problem via a case analysis.

Case 1: $v^* = 0$. Then, the optimal objective is trivially $f(v^*) = y_1^2 + y_2^2 = 101$.

Case 2: $v^* < 0$. First-order optimality conditions are

$$x_1 (x_1 v_-^* - y_1) - \lambda = 0 \implies v_-^* = \frac{y_1 \cdot x_1 + \lambda}{x_1^2},$$

which is valid only if $\lambda < |y_1 \cdot x_1| = 100$. The minimum objective value is then

$$\begin{aligned}
\frac{1}{2} \left( (x_1 v_-^* - y_1)^2 + y_2^2 \right) - \lambda v_-^* &= \left( \left( x_1 \frac{y_1 \cdot x_1 + \lambda}{x_1^2} - y_1 \right)^2 + y_2^2 \right) - \lambda \left( \frac{y_1 \cdot x_1 + \lambda}{x_1^2} \right) \\
&= \frac{\lambda^2}{x_1^2} + y_2^2 - \lambda \left( \frac{y_1 \cdot x_1 + \lambda}{x_1^2} \right) \\
&= -\frac{\lambda y_1}{x_1} + y_2^2 = \frac{\lambda}{100} + 100.
\end{aligned}$$

Case 3: $v^* > 0$. Similarly to the previous case, we obtain

$$x_2 (x_2 v_+^* - y_2) + \lambda = 0 \implies v_+^* = \frac{y_2 \cdot x_2 - \lambda}{x_2^2},$$

which is valid only if $\lambda < |y_2 \cdot x_2| = 10$. In this case, the minimum objective is

$$\begin{aligned}
\frac{1}{2} \left( (x_2 v_+^* - y_2)^2 + y_1^2 \right) + \lambda v_+^* &= \left( \left( x_2 \frac{y_2 \cdot x_2 - \lambda}{x_2^2} - y_2 \right)^2 + y_1^2 \right) + \lambda \left( \frac{y_2 \cdot x_2 - \lambda}{x_2^2} \right) \\
&= \frac{\lambda^2}{x_2^2} + y_1^2 + \lambda \left( \frac{y_2 \cdot x_2 - \lambda}{x_2^2} \right) \\
&= \frac{\lambda y_2}{x_2} + y_1^2 = 10 \lambda + 1.
\end{aligned}$$

To combine the cases, observe that

$$\begin{aligned}
f(v_+^*) - f(v_-^*) &= \frac{\lambda y_2}{x_2} + y_1^2 - \left[ -\frac{\lambda y_1}{x_1} + y_2^2 \right] \\
&= \lambda \left( \frac{x_1 y_2 + x_2 y_1}{x_1 x_2} \right) + y_1^2 - y_2^2 \\
&= \lambda \left( \frac{-100(10) + 1(1)}{-100(1)} \right) + 1 - 100 \\
&= 9.99 \lambda - 99.
\end{aligned}$$

We deduce that $f(v_+^*) - f(v_-^*) \geq 0$ (and thus $v^* \leq 0$) whenever $\lambda \geq \frac{99}{9.99} \approx 10$, and $f(v_+^*) - f(v_-^*) < 0$ otherwise. In this latter case, $v^* \geq 0$.

Taking $\lambda = 10$, we find

$$f(v_-^*) = 100 + 0.1 = 100.1 < 101 = y_1^2 + y_2^2 = f(0),$$

so that $v_-^*$ is optimal and $v_-^* < 0$. Moreover, $v_-^*$ is strictly increasing as a function of $\lambda$ for all $\lambda \geq \frac{99}{9.99}$, so that $v_-^*$ is optimal and strictly negative on the interval $[\frac{99}{9.99}, 10]$.

Now, consider $\lambda = \frac{99}{9.99} - \epsilon$ to see that

$$f(v_+^*) = \frac{990}{9.99} + 1 - 10\epsilon < 101 = y_1^2 + y_2^2 = f(0),$$

for all $\epsilon > 0$. We deduce that $v_+^*$ is optimal for all $\epsilon > 0$, and thus $v_+^*$ is optimal and strictly positive on the interval $[0, \frac{99}{9.99}]$.

To summarize, the solution function for this problem is as follows:

$$\mathcal{W}^*(\lambda) = \begin{cases}
\left\{ \dfrac{\lambda}{100^2} - 0.01 \right\} & \text{if } \lambda > \dfrac{99}{9.99}, \\[6pt]
\left\{ \dfrac{\lambda}{100^2} - 0.01, \; 0.1 - \dfrac{\lambda}{100} \right\} & \text{if } \lambda = \dfrac{99}{9.99}, \\[6pt]
\left\{ 0.1 - \dfrac{\lambda}{100} \right\} & \text{if } \lambda < \dfrac{99}{9.99}.
\end{cases}$$

This point-to-set map is clearly not open: for every sequence $\lambda_k \uparrow \frac{99}{9.99}$, $v_k \in \mathcal{W}^*(\lambda_k)$ implies $\lim_k v_k \neq \frac{\lambda}{100^2} - 0.01$. Moreover, a similar result holds for limits from above. Finally, it is clear by inspection that the optimal model fit is not unique at $\lambda = \frac{99}{9.99}$, cannot be continuous in the functional sense, and, since it is not open, also fails to be continuous in the sense of point-to-set maps. ∎
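The discontinuity in the solution path is easy to see numerically (a sketch, not code from the paper): the minimizer jumps from a positive to a negative value as $\lambda$ crosses a critical threshold. The grid and $\lambda$ values below are assumptions, and the exact breakpoint depends on how the loss is scaled, but the jump itself is robust.

```python
import numpy as np

x1, y1 = -100.0, 1.0
x2, y2 = 1.0, 10.0

def best_v(lam, grid=np.linspace(-1.0, 6.0, 700001)):
    # one-neuron objective with gamma = 1, as in the simplified problem
    f = 0.5 * ((np.maximum(x1 * grid, 0) - y1) ** 2
               + (np.maximum(x2 * grid, 0) - y2) ** 2) + lam * np.abs(grid)
    return grid[np.argmin(f)]

v_small = best_v(5.0)   # small regularization: positive minimizer
v_large = best_v(9.9)   # larger regularization: negative minimizer
print(v_small > 0.5, v_large < 0.0)  # True True
```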

Appendix C: Extension to General Losses

In this section, we briefly discuss how to extend our results to general loss functions. Although we use the least-squares error throughout our derivations, this can be generalized to a smooth and strictly convex loss function $L : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ without difficulty. To do so, consider the more general problem,

$$p^*(\lambda) = \min_{w} \; F_\lambda(w) := \frac{1}{2} L(Xw, y) + \lambda \sum_{b_i \in \mathcal{B}} \|w_{b_i}\|_2 \quad \text{s.t.} \quad K_{b_i}^\top w_{b_i} \leq 0 \;\text{ for all } b_i \in \mathcal{B}. \qquad (28)$$

If $L$ is strictly convex, then uniqueness of the optimal model fit $\hat{y}(\lambda) = Xw$ follows from a straightforward adaptation of Lemma A.1. Indeed, the only property of the squared error used in this lemma is strict convexity.

Since the model fit is constant on $\mathcal{W}^*$ and $L$ is both smooth and strictly convex, the gradient $\nabla_w L(Xw, y)$, which is given by

$$X^\top \nabla_{\hat{y}} L(\hat{y}, y),$$

must also be constant over $\mathcal{W}^*$. Thus, it is straightforward to replace the correlation vector $c_{b_i} = X_{b_i}^\top (y - Xw)$ with $X_{b_i}^\top \nabla_{\hat{y}} L(\hat{y}, y)$ throughout our derivations.

We form a Lagrange dual problem for CGL in only one continuity-type result: Corollary 3.7 uses the Lagrange dual to show that the correlation vector $c_{b_i}$ is the unique solution to a convex optimization program, and applies standard sensitivity results to obtain continuity of $\hat{y}$. In the same fashion, $X_{b_i}^\top \nabla_{\hat{y}} L(\hat{y}, y)$ is the unique solution to a Lagrange dual problem in which the dual objective uses the convex conjugate $L^*$, rather than the dual of the quadratic penalty. If $\nabla_{\hat{y}} L(\hat{y}, y)$ is continuous in $\hat{y}$, then this is sufficient to deduce continuity of the model fit using the same argument.
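For instance, with the squared loss, $\nabla_{\hat{y}} L(\hat{y}, y) = \hat{y} - y$, so the generalized quantity $X_{b_i}^\top \nabla_{\hat{y}} L(\hat{y}, y)$ is just the negated correlation vector. A small check under assumed shapes (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 5
X_b = rng.standard_normal((n, d))  # one block of features (illustrative)
w_b = rng.standard_normal(d)
y = rng.standard_normal(n)

y_hat = X_b @ w_b
c_b = X_b.T @ (y - y_hat)          # correlation vector for the squared loss
grad_version = X_b.T @ (y_hat - y)  # X^T grad of L(y_hat, y) = 0.5 ||y_hat - y||^2

print(np.allclose(c_b, -grad_version))  # True
```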

Appendix D: Additional Experiments

Figure 4: Pruning neurons on five datasets from the UCI repository. This figure extends Figure 2 with training accuracy in addition to the test accuracies shown in the main paper.

Figure 5: Pruning neurons on five additional datasets from the UCI repository. See Figure 2 for details. Our method (Optimal/LS) preserves test accuracy for longer than the baseline methods, leading to compact models with better generalization.

In this section we provide additional experimental results as well as the necessary details to replicate our pruning experiments. Code to replicate all of our experiments is provided at https://github.com/pilancilab/relu_optimal_sets.

D.1 Additional Results

Figure 6: Pruning experiments on binary classification tasks from the CIFAR-10 dataset. This figure reproduces results from Figure 3 with training accuracies added and also includes results for an additional task, cats vs dogs, not presented in the main paper.

Figure 7: Pruning experiments on three binary classification tasks taken from MNIST.

Table 2: Tuning neural networks by searching over the optimal set. We fit two-layer ReLU networks on the training set and compute the minimum $\ell_2$-norm solution (Min L2). Then we tune by finding an extreme point approximating the maximum $\ell_2$-norm solution (EP), minimizing validation MSE over the optimal set (V-MSE), and minimizing test MSE over the optimal set (T-MSE). Max Diff. reports the difference between the best and worst models found. For each method we give the median and interquartile range as median (lower/upper).

| Dataset | Min L2 | EP | V-MSE | T-MSE | Max Diff. |
|---|---|---|---|---|---|
| blood | 0.72 (0.72/0.74) | 0.72 (0.72/0.74) | 0.62 (0.61/0.62) | 0.7 (0.68/0.71) | 0.1 (0.11/0.12) |
| breast-cancer | 0.64 (0.61/0.65) | 0.64 (0.61/0.65) | 0.61 (0.6/0.64) | 0.71 (0.66/0.71) | 0.1 (0.06/0.08) |
| fertility | 0.66 (0.62/0.7) | 0.69 (0.62/0.69) | 0.65 (0.64/0.7) | 0.64 (0.57/0.64) | 0.05 (0.06/0.06) |
| heart-hungarian | 0.75 (0.7/0.77) | 0.75 (0.7/0.77) | 0.71 (0.56/0.72) | 0.85 (0.82/0.86) | 0.14 (0.26/0.14) |
| hepatitis | 0.75 (0.74/0.78) | 0.75 (0.74/0.78) | 0.73 (0.69/0.75) | 0.77 (0.77/0.9) | 0.05 (0.08/0.15) |
| hill-valley | 0.64 (0.64/0.65) | 0.65 (0.64/0.65) | 0.64 (0.64/0.67) | 0.64 (0.64/0.65) | 0.0 (0.0/0.01) |
| mammographic | 0.77 (0.77/0.77) | 0.77 (0.77/0.77) | 0.57 (0.56/0.62) | 0.78 (0.78/0.8) | 0.21 (0.22/0.18) |
| monks-1 | 0.67 (0.64/0.71) | 0.66 (0.64/0.71) | 0.49 (0.48/0.51) | 0.57 (0.51/0.61) | 0.17 (0.15/0.2) |
| planning | 0.53 (0.51/0.61) | 0.52 (0.51/0.61) | 0.53 (0.52/0.53) | 0.7 (0.68/0.74) | 0.17 (0.17/0.21) |
| spectf | 0.64 (0.62/0.7) | 0.64 (0.62/0.7) | 0.56 (0.53/0.56) | 0.58 (0.56/0.66) | 0.08 (0.09/0.14) |
| horse-colic | 0.75 (0.75/0.76) | 0.59 (0.57/0.61) | 0.74 (0.73/0.75) | 0.85 (0.85/0.85) | 0.26 (0.27/0.24) |
| ilpd-indian-liver | 0.59 (0.57/0.6) | 0.59 (0.57/0.6) | 0.53 (0.53/0.57) | 0.72 (0.7/0.73) | 0.19 (0.17/0.17) |
| parkinsons | 0.74 (0.72/0.74) | 0.74 (0.72/0.74) | 0.65 (0.65/0.74) | 0.88 (0.86/0.9) | 0.23 (0.21/0.16) |
| pima | 0.68 (0.66/0.68) | 0.68 (0.66/0.68) | 0.68 (0.64/0.7) | 0.87 (0.86/0.88) | 0.2 (0.22/0.19) |
| tic-tac-toe | 0.98 (0.98/0.98) | 0.76 (0.69/0.8) | 0.98 (0.98/0.99) | 1.0 (1.0/1.0) | 0.24 (0.31/0.2) |
| statlog-heart | 0.71 (0.7/0.73) | 0.71 (0.7/0.73) | 0.7 (0.67/0.73) | 0.84 (0.83/0.86) | 0.14 (0.17/0.13) |
| ionosphere | 0.85 (0.83/0.86) | 0.76 (0.73/0.76) | 0.84 (0.84/0.84) | 0.88 (0.88/0.89) | 0.12 (0.15/0.12) |

Tuning: Table 2 shows results for our tuning task on 7 additional datasets, as well as the 10 given in Table 1. We report the median and interquartile range of test accuracy for each method. We observe results similar to those presented in the main text. Only one dataset (hill-valley) shows no variation in test accuracy as we explore the optimal set.

Pruning: Figure 4 shows train and test accuracy for our optimal/least-squares pruning method, as well as magnitude/gradient-based pruning and random pruning, on the same five datasets from the UCI repository as presented in Figure 2. Our approach shows significantly less decay in train accuracy as neurons are pruned; this matches the intuition of the least-squares heuristic for pruning, which selects the coefficients $\beta$ to best preserve the model fit.

We observe that both our pruning method and pruning by neuron/gradient norm show very similar training accuracy until most of the neurons have been pruned. While this behavior is expected from our theory-based approach, it is somewhat surprising that pruning by neuron/gradient-norm also maintains train accuracy nearly as well. This behavior suggests that there are many neurons with very small norm which can be eliminated without significantly affecting the model prediction.

Figure 5 presents results for neuron pruning on five additional datasets from the UCI repository, while Figure 7 shows results for three binary classification tasks taken from the MNIST dataset. The trends are generally the same as in Figure 4, with our approach (Optimal/LS) outperforming the baselines. Finally, Figure 6 extends the results on CIFAR-10 given in Figure 3 with one additional task and with training accuracies.

D.2 Experimental Details

Now we give the details necessary to reproduce our experiments. Our experiments use the pre-processed versions of the UCI datasets provided by Delgado et al. (2014), but we do not use their evaluation procedure, as it is known to have data leakage.

D.2.1 Tuning

We select 17 binary classification datasets from the UCI repository. For each dataset we use a random 60/20/20 split of the data into train, validation, and test sets. We use the commercial interior-point method MOSEK (ApS, 2022), through the interface provided by CVXPY (Diamond & Boyd, 2016), to compute the initial model which is then tuned. We modify the tolerances of this method to use $\tau = 10^{-8}$ for measuring both primal convergence and violation of the constraints. For each dataset, we use fixed $\lambda = 0.001$ and a maximum of 1000 neurons. To compute the minimum $\ell_2$-norm optimal model, we use MOSEK and the optimization problem given in Section 3.5.

To approximate the maximum $\ell_2$-norm model, we solve the following program:

$$\alpha^* = \arg\max_{\alpha \geq 0} \sum_{b_i \in \mathcal{S}_\lambda} \alpha_{b_i} \quad \text{s.t.} \quad \sum_{b_i \in \mathcal{S}_\lambda} \alpha_{b_i} X_{b_i} v_{b_i} = \hat{y}.$$

This is a linear program which is straightforward to solve using interior-point methods. Moreover, we have

$$\|w^*\|_2^2 = \lambda^2 \sum_{b_i} (\alpha^*_{b_i})^2,$$

so that $\sum_{b_i} \alpha_{b_i}$ acts as an approximation, where we recall that $\alpha_{b_i} \geq 0$ necessarily.
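The linear program above can be sketched with `scipy.optimize.linprog` on synthetic data constructed to be feasible (the shapes and data are assumptions for illustration, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, k = 12, 4  # k active blocks (illustrative)

# columns of A play the role of X_{b_i} v_{b_i}; build a feasible y_hat
A = rng.standard_normal((n, k))
alpha_true = rng.uniform(0.5, 2.0, size=k)
y_hat = A @ alpha_true

# maximize sum(alpha) s.t. A alpha = y_hat, alpha >= 0
# (linprog minimizes, so negate the objective)
res = linprog(c=-np.ones(k), A_eq=A, b_eq=y_hat, bounds=[(0, None)] * k)

print(res.status, np.allclose(A @ res.x, y_hat), bool(np.all(res.x >= -1e-9)))
```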

To tune each model with respect to the validation/test MSE, we solve the following optimization problem:

$$\min_{w} \left\{ \frac{1}{2} \|\tilde{X} w - \tilde{y}\|_2^2 : w \in \mathcal{W}^*(\lambda) \right\},$$

with respect to the parameters of the convex formulation. Here, $(\tilde{X}, \tilde{y})$ is either the validation or test set. We repeat each experiment five times with different random splits of the data and random resamplings of 500 activation patterns $D_i$. This guarantees that each non-convex network has at most 1000 neurons after optimization, although it may have fewer due to the sparsity-inducing penalty.

D.2.2 Pruning

Methods: We use the augmented Lagrangian method of Mishkin et al. (2022) to compute the starting model, which is then pruned. We modify the tolerances of this method to use $\tau = 10^{-8}$ for measuring both primal convergence and violation of the constraints.

Pruning by neuron magnitude is straightforward: we sort the neurons by $\|W_{1i} w_{2i}\|_2$, which measures the total magnitude of the neuron, and then drop the smallest one. For pruning by gradient norm, we compute $G_{1i} = \nabla_{W_{1i}} L(f_{W_1, w_2}(X), y)$ and $g_{2i} = \nabla_{w_{2i}} L(f_{W_1, w_2}(X), y)$ and then score each neuron as

$$s_i = \|W_{1i} \cdot G_{1i} \, w_{2i} \, g_{2i}\|_2,$$

where $\cdot$ indicates the element-wise product. The neuron with the smallest score is zeroed. This is consistent with existing implementations of pruning by gradient norm (Blalock et al., 2020) and attempts to measure the variation of a linearization of the loss in neuron $i$. For Random, we simply select a neuron uniformly at random.
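The two baseline scoring rules can be sketched as follows (shapes and data are assumptions; the gradients are stand-in random arrays rather than gradients of a real loss):

```python
import numpy as np

rng = np.random.default_rng(6)
m, d = 5, 3
W1 = rng.standard_normal((m, d))   # first-layer weights, one row per neuron
w2 = rng.standard_normal(m)        # second-layer weights
G1 = rng.standard_normal((m, d))   # stand-in for gradients w.r.t. W_1i
g2 = rng.standard_normal(m)        # stand-in for gradients w.r.t. w_2i

# magnitude pruning: score ||W_1i w_2i||_2
magnitude_scores = np.linalg.norm(W1 * w2[:, None], axis=1)
# gradient-norm pruning: score ||W_1i . G_1i w_2i g_2i||_2 ("." elementwise)
gradient_scores = np.linalg.norm((W1 * G1) * (w2 * g2)[:, None], axis=1)

prune_by_magnitude = int(np.argmin(magnitude_scores))
prune_by_gradient = int(np.argmin(gradient_scores))
print(prune_by_magnitude, prune_by_gradient)  # indices of the neurons to zero
```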

For Optimal/LS, we start by using Algorithm 1 to prune until no linear dependence exists amongst the neuron fits $D_i X W_{1i}$. At this point, we choose $\beta$ to minimize the squared error in the training fit. We choose the neuron to prune by selecting the index that minimizes the residual in the least-squares fit,

$$i_k = \arg\min_{j} \min_{\beta} \frac{1}{2} \left\| \sum_{i \neq j} \beta_i D_i X w_i - D_j X w_j \right\|_2.$$

This produced the best results in all of our experiments, although $i_k$ can also be selected using neuron magnitude or any other rule from the literature on structured pruning.
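The selection rule above can be sketched with `numpy.linalg.lstsq` (synthetic fits and shapes are assumptions; each column stands for one neuron fit $D_j X w_j$):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 30, 6
F = rng.standard_normal((n, m))           # neuron fits as columns
F[:, 0] = 0.5 * F[:, 1] - 0.25 * F[:, 2]  # introduce a linear dependence

residuals = []
for j in range(m):
    others = np.delete(F, j, axis=1)
    beta, *_ = np.linalg.lstsq(others, F[:, j], rcond=None)
    residuals.append(np.linalg.norm(others @ beta - F[:, j]))

# i_k picks a neuron involved in the dependence: its fit is exactly
# reconstructable from the remaining neurons (residual ~ 0).
i_k = int(np.argmin(residuals))
print(residuals[i_k] < 1e-8)  # True
```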

UCI Datasets: We select 10 moderately-sized binary classification datasets from the UCI repository. For each dataset we use a random 50/50 split of the data into train and test sets, fixed $\lambda = 0.01$, and sample 25 activation patterns $D_i$. This results in a maximum of 50 neurons in each final model; since the datasets are low-dimensional, randomly sampling activation patterns typically results in fewer than 50 neurons. All results are repeated for five different random splits and we plot the median and interquartile ranges of the results.

MNIST and CIFAR-10: We select three binary classification tasks from each dataset such that no task shares a target with another task. For each dataset we use a random 50/50 split of the data into train and test sets. For MNIST we use $\lambda = 0.01$, while for CIFAR-10 we use $\lambda = 0.05$. We sample 50 activation patterns $D_i$ for each task, which produces a maximum of 100 neurons in each final model. All results are repeated for five different random splits and we plot the median and interquartile ranges of the results.
