Stochastic Mirror Descent for Large-Scale Sparse Recovery
Abstract
In this paper we discuss an application of Stochastic Approximation to the statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to solving a penalized stochastic optimization problem at each stage of a multistage algorithm, each problem being solved to a prescribed accuracy by the non-Euclidean Composite Stochastic Mirror Descent (CSMD) algorithm. Assuming that the problem objective is smooth and quadratically minorated and that the stochastic perturbations are sub-Gaussian, our analysis prescribes the method parameters which ensure fast convergence of the estimation error (the radius of a confidence ball of a given norm around the approximate solution). This convergence is linear during the first "preliminary" phase of the routine and sublinear during the second "asymptotic" phase.
We consider an application of the proposed approach to the sparse Generalized Linear Regression problem. In this setting, we show that the proposed algorithm attains the optimal convergence rate of the estimation error under weak assumptions on the regressor distribution. We also present a numerical study illustrating the performance of the algorithm on high-dimensional simulated data.
1 Introduction
Our original motivation is the well-known problem of (generalized) linear high-dimensional regression with random design. Formally, consider a dataset of $N$ points $(\phi_i,\eta_i)$, $i\in\{1,\ldots,N\}$, where $\phi_i\in\mathbf{R}^n$ are (random) features and $\eta_i\in\mathbf{R}$ are observations, linked by the following equation:
$$\eta_i=\mathfrak{r}(\phi_i^T x_*)+\sigma\xi_i,\quad i\in[N]:=\{1,\ldots,N\},\tag{1}$$
where $\xi_i\in\mathbf{R}$ are i.i.d. observation noises. The standard objective is to recover from the dataset the unknown parameter $x_*\in\mathbf{R}^n$ of the Generalized Linear Regression model (1), which is assumed to belong to a given closed convex set $X$ and to be $s$-sparse, i.e., to have at most $s\ll n$ non-vanishing entries.
As mentioned before, we consider random design, where the $\phi_i$ are i.i.d. random variables, so that the estimation of $x_*$ can be recast as the following generic Stochastic Optimization problem:
$$g_*=\min_{x\in X}g(x),\quad\text{where}\quad g(x)=\mathbf{E}\big\{G\big(x,(\phi,\eta)\big)\big\},\quad G(x,(\phi,\eta))=\mathfrak{s}(\phi^T x)-\phi^T x\,\eta,\tag{2}$$
with $\mathfrak{s}(\cdot)$ any primitive of $\mathfrak{r}(\cdot)$, i.e., $\mathfrak{r}(t)=\mathfrak{s}'(t)$. The equivalence between the original and the stochastic optimization problems comes from the fact that $x_*$ is a critical point of $g(\cdot)$, i.e., $\nabla g(x_*)=0$, since, under mild assumptions, $\nabla g(x)=\mathbf{E}\{\phi[\mathfrak{r}(\phi^T x)-\mathfrak{r}(\phi^T x_*)]\}$. Hence, as soon as $g$ has a unique minimizer (say, when $g$ is strongly convex over $X$), the solutions of both problems coincide.
As a consequence, we shall focus on the generic problem (2), which has already been widely studied. For instance, given an observation sample $(\phi_i,\eta_i)$, $i\in[N]$, one may build a Sample Average Approximation (SAA) of the objective $g(x)$,
$$\widehat{g}_N(x)=\frac{1}{N}\sum_{i=1}^{N}G(x,(\phi_i,\eta_i))=\frac{1}{N}\sum_{i=1}^{N}\big[\mathfrak{s}(\phi_i^T x)-\phi_i^T x\,\eta_i\big]\tag{3}$$
and then solve the resulting problem of minimizing $\widehat{g}_N(x)$ over sparse $x$'s. The celebrated $\ell_1$-norm minimization approach allows one to reduce this problem to convex optimization. We will provide a new algorithm adapted to this high-dimensional setting and instantiate it on the original problem (1).
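To make the SAA objective (3) and its gradient concrete, here is a minimal numerical sketch in the linear case $\mathfrak{r}(t)=t$ (so $\mathfrak{s}(t)=t^2/2$); the function names are ours and purely illustrative.

```python
import numpy as np

def saa_objective(Phi, eta, x):
    # g_N(x) = (1/N) * sum_i [ s(phi_i^T x) - (phi_i^T x) * eta_i ],  s(t) = t^2 / 2
    z = Phi @ x
    return float(np.mean(0.5 * z**2 - z * eta))

def saa_gradient(Phi, eta, x):
    # grad g_N(x) = (1/N) * sum_i phi_i * ( r(phi_i^T x) - eta_i ),  r(t) = s'(t) = t
    z = Phi @ x
    return Phi.T @ (z - eta) / len(eta)
```

A finite-difference check confirms that the two functions are consistent with each other, mirroring the relation $\mathfrak{r}=\mathfrak{s}'$.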
Existing approaches and related works.
Sparse recovery by the Lasso and the Dantzig Selector has been extensively studied [11, 8, 5, 46, 10, 9]. This approach computes a solution $\widehat{x}_N$ to the $\ell_1$-penalized problem $\min_x \widehat{g}_N(x)+\lambda\|x\|_1$, where $\lambda\geq 0$ is the algorithm parameter [35]. It delivers "good solutions" with high probability for sparsity level $s$ as large as $O\big(\frac{N\kappa_\Sigma}{\ln n}\big)$, as soon as the random regressors $\phi_i$ are drawn independently from a normal distribution with a covariance matrix $\Sigma$ such that $\kappa_\Sigma I\preceq\Sigma\preceq\rho\kappa_\Sigma I$ for some $\kappa_\Sigma>0$, $\rho\geq 1$ (we write $A\preceq B$ for two symmetric matrices $A$ and $B$ if $B-A\succeq 0$, i.e., $B-A$ is positive semidefinite).
However, computing this solution may be challenging in a very high-dimensional setting: even popular iterative algorithms, such as coordinate descent, loop over a large number of variables. To mitigate this, randomized algorithms [3, 22] and screening rules and working sets [19, 30, 34] may be used to reduce the size of the optimization problem at hand, while iterative thresholding [1, 7, 20, 16, 33] is a "direct" approach to enhance sparsity of the solution.
Another approach relies on Stochastic Approximation (SA). Since $\nabla G(x,(\phi_i,\eta_i))=\phi_i(\mathfrak{r}(\phi_i^T x)-\eta_i)$ is an unbiased estimate of $\nabla g(x)$, the iterative Stochastic Gradient Descent (SGD) algorithm may be used to build approximate solutions.
Unfortunately, unless the regressors $\phi$ are sparse or possess a special structure, standard SA leads to accuracy bounds for sparse recovery proportional to the dimension $n$, which are essentially useless in the high-dimensional setting.
This motivates non-Euclidean SA procedures, such as Stochastic Mirror Descent (SMD) [37], whose application to sparse recovery enjoys almost dimension-free convergence and has been well studied in the literature.
For instance, under bounded regressors and sub-Gaussian noise, SMD reaches a "slow rate" of sparse recovery of the type $g(\widehat{x}_N)-g_*=O\big(\sigma\sqrt{s\ln(n)/N}\big)$, where $\widehat{x}_N$ is the approximate solution after $N$ iterations [44, 45]. Multistage routines may be used to improve the error estimates of SA under strong or uniform convexity assumptions [27, 29, 18]. However, these assumptions do not always hold; in sparse Generalized Linear Regression, for instance, they are replaced by Restricted Strong Convexity conditions. Multistage procedures [2, 17] based on standard SMD algorithms [24, 38] then control the $\ell_2$-error $\|\widehat{x}_N-x_*\|_2$ at the rate $O\big(\frac{\sigma}{\kappa_\Sigma}\sqrt{\frac{s\ln n}{N}}\big)$ with high probability. This is the best "asymptotic" rate attainable when solving (2).
However, those algorithms have two major limitations: the number of iterations they need to reach a given accuracy is proportional to the initial error $R=\|x_*-x_0\|_1$, and, for sparse linear regression, the sparsity level $s$ must be of order $O\big(\kappa_\Sigma\sqrt{N/\ln n}\big)$. These limits may be seen as a consequence of dealing with a non-smooth objective $g(x)$.
Although it slightly restricts the scope of the corresponding algorithms, we shall consider smooth objectives and algorithms for minimizing composite objectives (cf. [25, 32, 39]) to mitigate the aforementioned drawbacks of the multistage algorithms from [2, 17].
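The unbiasedness of the stochastic gradient $\phi_i(\mathfrak{r}(\phi_i^T x)-\eta_i)$ invoked above can be checked numerically. The sketch below uses an illustrative setup of our own: the linear link $\mathfrak{r}(t)=t$ with standard normal regressors, so that $\Sigma=I$ and $\nabla g(x)=\Sigma(x-x_*)=x-x_*$.

```python
import numpy as np

# Monte-Carlo sanity check: the average of phi_i * (r(phi_i^T x) - eta_i)
# approaches grad g(x). Setup (ours, not the paper's): r(t) = t, phi ~ N(0, I),
# eta = phi^T x_* + sigma * xi, hence grad g(x) = x - x_*.
rng = np.random.default_rng(1)
n, sigma, N = 5, 0.1, 200_000
x_star = np.array([1.0, 0.0, 0.0, -2.0, 0.0])   # an s-sparse ground truth
x = rng.normal(size=n)                           # an arbitrary query point

Phi = rng.normal(size=(N, n))                    # i.i.d. regressors phi_i
eta = Phi @ x_star + sigma * rng.normal(size=N)  # observations, model (1)
stoch_grads = Phi * ((Phi @ x) - eta)[:, None]   # phi_i * (r(phi_i^T x) - eta_i)
avg_grad = stoch_grads.mean(axis=0)
exact_grad = x - x_star                          # Sigma * (x - x_*) with Sigma = I
```

With $N=2\cdot 10^5$ samples the Monte-Carlo average matches the exact gradient to a couple of decimal places.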
Principal contributions.
We provide a refined analysis of Composite Stochastic Mirror Descent (CSMD) algorithms for computing sparse solutions to Stochastic Optimization problems, leveraging smoothness of the objective. This leads to a new "aggressive" choice of parameters in a multistage algorithm with significantly improved performance compared to that of [2]. We summarize below some properties of the proposed procedure for problem (2).
Each stage of the algorithm is a specific CSMD recursion; the stages fall into two phases. During the first (preliminary) phase, the estimation error decreases linearly, with exponent proportional to $\frac{\kappa_\Sigma}{s\ln n}$. When the error reaches the value $O\big(\frac{\sigma s}{\sqrt{\kappa_\Sigma}}\big)$, the second (asymptotic) phase begins; its stages contain an exponentially increasing number of iterations per stage, and the estimation error then decreases as $O\big(\frac{\sigma s}{\kappa_\Sigma}\sqrt{\frac{\ln n}{N}}\big)$, where $N$ is the total iteration count.
Organization and notation
The remainder of the paper is organized as follows. In Section 2, we state the general problem and present the multistage optimization routine together with the study of its basic properties. Then, in Section 3, we discuss the properties of the method and the conditions under which it leads to "small error" solutions of sparse GLR estimation problems. Finally, a small simulation study illustrating the numerical performance of the proposed routines on a high-dimensional GLR estimation problem is presented in Section 3.3.
In the following, $E$ is a Euclidean space and $\|\cdot\|$ is a norm on $E$; we denote by $\|\cdot\|_*$ the conjugate norm (i.e., $\|x\|_*=\sup_{\|y\|\leq 1}\langle y,x\rangle$).
Given a positive semidefinite matrix $\Sigma\in\mathbf{S}_n$, for $x\in\mathbf{R}^n$ we denote $\|x\|_\Sigma=\sqrt{x^T\Sigma x}$, and for any matrix $Q$ we denote $\|Q\|_\infty=\max_{ij}|[Q]_{ij}|$.
We use the generic notation $c$ and $C$ for absolute constants; the shortcut notation $a\lesssim b$ ($a\gtrsim b$) means that the ratio $a/b$ (ratio $b/a$) is bounded by an absolute constant; the symbols $\bigvee$, $\bigwedge$ and the notation $(\cdot)_+$ refer, respectively, to "maximum of", "minimum of", and "positive part".
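The conjugate-norm definition above can be illustrated with a tiny computation. The paper keeps $\|\cdot\|$ generic; the choice $\|\cdot\|=\|\cdot\|_1$ below is ours, for which the unit ball is the cross-polytope and the conjugate norm is $\|\cdot\|_\infty$.

```python
import numpy as np

def conj_norm(x, ball_vertices):
    # ||x||_* = sup_{||y|| <= 1} <y, x>; when the unit ball of ||.|| is a
    # polytope, a linear functional attains its supremum at a vertex,
    # so scanning the vertices suffices.
    return max(float(v @ x) for v in ball_vertices)

n = 3
# vertices of the l1 unit ball: the signed coordinate vectors +/- e_j
l1_vertices = [s * e for e in np.eye(n) for s in (1.0, -1.0)]
```

For instance, for $x=(3,-1,2)$ the supremum over the $\ell_1$ ball equals $\|x\|_\infty = 3$.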
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
This section is dedicated to the formulation of the generic stochastic optimization problem and to the description and analysis of the generic algorithm.
2.1 Problem statement
Let $X$ be a closed convex subset of a Euclidean space $E$ and $(\Omega,P)$ a probability space. We consider a mapping $G:X\times\Omega\to\mathbf{R}$ such that, for all $\omega\in\Omega$, $G(\cdot,\omega)$ is convex on $X$ and smooth, meaning that $\nabla G(\cdot,\omega)$ is Lipschitz continuous on $X$ with a.s. bounded Lipschitz constant:
$$\forall x,x'\in X,\quad \|\nabla G(x,\omega)-\nabla G(x',\omega)\|\leq\mathcal{L}(\omega)\|x-x'\|,\qquad \mathcal{L}(\omega)\leq\nu\quad a.s.\tag{4}$$
We define $g(x):=\mathbf{E}\{G(x,\omega)\}$, where $\mathbf{E}\{\cdot\}$ stands for the expectation with respect to $\omega$ drawn from $P$. We assume that the mapping $g(\cdot)$ is finite, convex, and differentiable on $X$, and we aim at solving the following stochastic optimization problem,
$$\min_{x\in X}\big[g(x)=\mathbf{E}\{G(x,\omega)\}\big],\tag{5}$$
assuming it admits an $s$-sparse optimal solution $x_*$ for some sparsity structure.
To solve this problem, a stochastic oracle can be queried: given a point $x\in X$ at input, it generates an $\omega\in\Omega$ from $P$ and outputs $G(x,\omega)$ and $\nabla G(x,\omega):=\nabla_x G(x,\omega)$ (with a slight abuse of notation). We assume that the oracle is unbiased, i.e.,
$$\mathbf{E}\{\nabla G(x,\omega)\}=\nabla g(x),\qquad\forall x\in X.$$
To streamline the presentation, we assume, as is often the case in applications of the stochastic optimization problem (5), that $x_*$ is unconditional, i.e., $\nabla g(x_*)=0$, or, stated otherwise, $\mathbf{E}\{\nabla G(x_*,\omega)\}=0$; we also suppose the sub-Gaussianity of $\nabla G(x_*,\omega)$, namely that, for some $\sigma_*<\infty$,
$$\mathbf{E}\Big\{\exp\big(\|\nabla G(x_*,\omega)\|_*^2/\sigma_*^2\big)\Big\}\leq\exp(1).\tag{6}$$
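A stochastic oracle of the kind just described can be sketched in code. The class below is an illustrative instantiation of ours, not the paper's, using the linear GLR case $\mathfrak{r}(t)=t$, $\mathfrak{s}(t)=t^2/2$ from (2); note that at $x=x_*$ the returned gradient reduces to $-\sigma\xi\phi$, whose average over many queries is close to zero, consistent with $\mathbf{E}\{\nabla G(x_*,\omega)\}=0$.

```python
import numpy as np

class GLROracle:
    # Illustrative unbiased stochastic first-order oracle for problem (5),
    # specialized (our choice) to the linear GLR case r(t) = t, s(t) = t^2 / 2.
    def __init__(self, x_star, sigma, seed=0):
        self.x_star = x_star
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def query(self, x):
        # Draw omega = (phi, eta) from P, return G(x, omega) and grad G(x, omega).
        phi = self.rng.normal(size=x.size)
        eta = phi @ self.x_star + self.sigma * self.rng.normal()
        z = phi @ x
        return 0.5 * z**2 - z * eta, phi * (z - eta)
```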
2.2 Composite Stochastic Mirror Descent algorithm
As mentioned in the introduction, (stochastic) optimization over the set of sparse solutions can be carried out through "composite" techniques. We take a similar approach here, transforming the generic problem (5) into the following composite Stochastic Optimization problem, adapted to some norm $\|\cdot\|$ and parameterized by $\kappa\geq 0$:
$$\min_{x\in X}\Big[F_\kappa(x):=\tfrac{1}{2}g(x)+\kappa\|x\|=\tfrac{1}{2}\mathbf{E}\{G(x,\omega)\}+\kappa\|x\|\Big].\tag{7}$$
The purpose of this section is to derive a new (proximal) algorithm. We first provide the necessary background and notation.
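For concreteness, the composite objective (7) can be written down directly; in the sketch below, `g` stands in for the (expected) smooth part, and our illustrative choice of $\|\cdot\|$ is the sparsity-inducing $\ell_1$ norm (the paper keeps the norm generic).

```python
import numpy as np

def composite_objective(g, x, kappa):
    # F_kappa(x) = (1/2) * g(x) + kappa * ||x||, cf. (7), with ||.|| = ||.||_1
    # (our illustrative choice of norm).
    return 0.5 * g(x) + kappa * float(np.abs(x).sum())
```

For example, with $g(x)=\|x\|_2^2$, $x=(1,-2)$ and $\kappa=0.5$ one gets $F_\kappa(x)=\tfrac12\cdot 5+0.5\cdot 3=4$.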
Proximal setup, Bregman divergences and Proximal mapping.
Let $B$ be the unit ball of the norm $\|\cdot\|$ and let $\theta:B\to\mathbf{R}$ be a distance-generating function (d.-g.f.) of $B$, i.e., a continuously differentiable convex function which is strongly convex with respect to the norm $\|\cdot\|$:
$$\langle\nabla\theta(x)-\nabla\theta(x'),x-x'\rangle\geq\|x-x'\|^2,\quad\forall x,x'\in X.$$
We assume w.l.o.g. that $\theta(x)\geq\theta(0)=0$ and denote $\Theta=\max_{\|z\|\leq 1}\theta(z)$.
We now introduce a local and renormalized version of the d.-g.f. $\theta$.
| 448
|
9
|
Definition 2.1
For any $x_0\in X$, let $X_R(x_0):=\{z\in X:\|z-x_0\|\leq R\}$ be the ball of radius $R$ around $x_0$. It is equipped with the d.-g.f. $\vartheta^R_{x_0}(z):=R^2\theta\left((z-x_0)/R\right)$.
Note that $\vartheta^R_{x_0}(z)$ is strongly convex on $X_R(x_0)$ with modulus 1, $\vartheta^R_{x_0}(x_0)=0$, and $\vartheta^R_{x_0}(z)\leq\Theta R^2$.
Definition 2.2
Given $x_0\in X$ and $R>0$, the Bregman divergence $V$ associated to $\vartheta$ is defined by
\[
V_{x_0}(x,z)=\vartheta^R_{x_0}(z)-\vartheta^R_{x_0}(x)-\langle\nabla\vartheta^R_{x_0}(x),z-x\rangle,\quad x,z\in X.
\]
We can now define the composite proximal mapping on $X_R(x_0)$ [39, 40] with respect to some convex and continuous mapping $h:X\to\mathbf{R}$.
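A small sketch of the rescaled d.-g.f. and its Bregman divergence, again under the assumed Euclidean choice $\theta(u)=\frac{1}{2}\|u\|_2^2$, for which $V_{x_0}(x,z)$ reduces to $\frac{1}{2}\|z-x\|_2^2$ independently of $x_0$ and $R$:

```python
import numpy as np

# Rescaled d.-g.f. vartheta(z) = R^2 * theta((z - x0)/R) with the (assumed)
# Euclidean theta(u) = 0.5 ||u||^2, so vartheta(z) = 0.5 ||z - x0||^2 and
# the associated Bregman divergence V_{x0}(x, z) = 0.5 ||z - x||^2.
def vartheta(z, x0, R):
    u = (z - x0) / R
    return R ** 2 * 0.5 * np.dot(u, u)

def grad_vartheta(z, x0, R):
    return z - x0  # chain rule: R^2 * (1/R) * grad theta((z - x0)/R)

def bregman(x, z, x0, R):
    return (vartheta(z, x0, R) - vartheta(x, x0, R)
            - np.dot(grad_vartheta(x, x0, R), z - x))

x0 = np.zeros(4)
x, z = np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])
# For the Euclidean d.-g.f. the divergence is 0.5 ||z - x||^2 = 1.0 here
assert np.isclose(bregman(x, z, x0, R=2.0), 0.5 * np.linalg.norm(z - x) ** 2)
```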
Definition 2.3
The composite proximal mapping with respect to $h$ and $x$ is defined by
\[
\mathrm{Prox}_{h,x_0}(\zeta,x) := \operatorname*{arg\,min}_{z\in X_R(x_0)}\big\{\langle\zeta,z\rangle+h(z)+V_{x_0}(x,z)\big\}
= \operatorname*{arg\,min}_{z\in X_R(x_0)}\big\{\langle\zeta-\nabla\vartheta^R_{x_0}(x),z\rangle+h(z)+\vartheta^R_{x_0}(z)\big\}. \tag{8}
\]
Setups in which (8) can be efficiently solved to high accuracy and $\Theta$ is "not too large" will be called "prox-friendly" (we refer to [27, 36, 40]). We now introduce the main building block of our algorithm, the Composite Stochastic Mirror Descent.
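In the classical prox-friendly case of the Euclidean divergence with $h(z)=\kappa\|z\|_1$, the mapping (8) has a closed form: ignoring the ball constraint $X_R(x_0)$ (assuming $R$ is large enough that it is inactive), the minimizer of $\langle\zeta,z\rangle+\kappa\|z\|_1+\frac{1}{2}\|z-x\|_2^2$ is coordinate-wise soft thresholding of $x-\zeta$ at level $\kappa$:

```python
import numpy as np

# Composite proximal mapping (8) in the (assumed) Euclidean setup with
# h(z) = kappa * ||z||_1 and the ball constraint inactive: the solution
# is soft thresholding, which sets small coordinates exactly to zero --
# this is what promotes sparsity of the iterates.
def prox_l1(zeta, x, kappa):
    v = x - zeta
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

x = np.array([1.0, -0.2, 0.0, 3.0])
zeta = np.zeros(4)
out = prox_l1(zeta, x, kappa=0.5)
assert np.allclose(out, [0.5, 0.0, 0.0, 2.5])  # small entries are zeroed
```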
Composite Stochastic Mirror Descent algorithm.
Given a sequence of positive step sizes $\gamma_i>0$, the Composite Stochastic Mirror Descent (CSMD) is defined by the recursion
\[
x_i = \mathrm{Prox}_{\gamma_i h,x_0}\big(\gamma_{i-1}\nabla G(x_{i-1},\omega_i),x_{i-1}\big),\quad x_0\in X. \tag{9}
\]
After $m$ steps of CSMD, the final output (approximate solution) is $\widehat{x}_m$ defined by
\[
\widehat{x}_m=\frac{\sum_{i=0}^{m-1}\gamma_i x_i}{\sum_{i=0}^{m-1}\gamma_i}. \tag{10}
\]
For any integer $L\in\mathbf{N}$, we can also define the $L$-minibatch CSMD. Let $\omega_i^{(L)}=[\omega_i^1,\ldots,\omega_i^L]$ be i.i.d. realizations of $\omega_i$. The associated (average) stochastic gradient is then simply
\[
H\big(x_{i-1},\omega^{(L)}_i\big)=\frac{1}{L}\sum_{\ell=1}^L\nabla G(x_{i-1},\omega^\ell_i),
\]
which yields the $L$-minibatch CSMD recursion
\[
x_i^{(L)} = \mathrm{Prox}_{\gamma_i h,x_0}\Big(\gamma_{i-1}H\big(x_{i-1},\omega^{(L)}_i\big),x_{i-1}^{(L)}\Big),\quad x_0\in X, \tag{11}
\]
with its approximate solution $\widehat{x}_m^{(L)}=\sum_{i=0}^{m-1}\gamma_i x_i^{(L)}\big/\sum_{i=0}^{m-1}\gamma_i$ after $m$ iterations.
From now on, we set $h(x)=\kappa\|x\|$.
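The recursion (9)-(10) can be sketched on a toy sparse least-squares problem, with $G(x,(\phi,\eta))=\frac{1}{2}(\phi^Tx-\eta)^2$, constant step size, the Euclidean $\ell_1$ prox of the previous example, and a simple rescaling in place of the exact $\ell_1$-ball projection; all parameter values below are illustrative, not the theoretically prescribed ones.

```python
import numpy as np

# Sketch of the CSMD recursion (9)-(10) on a toy sparse least-squares
# problem: stochastic gradient phi * (phi^T x - eta), soft-thresholding
# prox for h(z) = kappa ||z||_1, and a crude rescaling (not the exact
# projection) to keep ||x - x0||_1 <= R.
rng = np.random.default_rng(1)
n, s, m = 50, 3, 2000
x_star = np.zeros(n); x_star[:s] = 1.0

def stoch_grad(x):
    phi = rng.standard_normal(n)
    eta = phi @ x_star + 0.1 * rng.standard_normal()
    return phi * (phi @ x - eta)

def prox_step(x, grad, gamma, kappa, x0, R):
    v = x - gamma * grad
    z = np.sign(v) * np.maximum(np.abs(v) - gamma * kappa, 0.0)  # soft threshold
    d = np.linalg.norm(z - x0, 1)
    return x0 + (z - x0) * min(1.0, R / d) if d > 0 else z

x0, R, gamma, kappa = np.zeros(n), 2.0 * s, 0.01, 0.05
x, x_avg = x0.copy(), np.zeros(n)
for i in range(m):
    x = prox_step(x, stoch_grad(x), gamma, kappa, x0, R)
    x_avg += x / m  # constant steps: (10) reduces to a plain average
assert np.linalg.norm(x_avg - x_star, 1) < np.linalg.norm(x0 - x_star, 1)
```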
Proposition 2.1
If step-sizes are constant, i.e., $\gamma_i\equiv\gamma\leq(4\nu)^{-1}$, $i=0,1,\ldots$, and the initial point $x_0\in X$ is such that $x_*\in X_R(x_0)$, then for any $t\gtrsim\sqrt{1+\ln m}$, with probability at least $1-4e^{-t}$,
\[
F_\kappa(\widehat{x}_m)-F_\kappa(x_*)\lesssim m^{-1}\big[\gamma^{-1}R^2(\Theta+t)+\kappa R+\gamma\sigma^2_*(m+t)\big], \tag{12}
\]
and the approximate solution $\widehat{x}_m^{(L)}$ of the $L$-minibatch CSMD satisfies
\[
F_\kappa(\widehat{x}_m^{(L)})-F_\kappa(x_*)\lesssim m^{-1}\big[\gamma^{-1}R^2(\Theta+t)+\kappa R+\gamma\sigma^2_*\Theta L^{-1}(m+t)\big]. \tag{13}
\]
For the sake of clarity and conciseness, we denote by CSMD($x_0,\gamma,\kappa,R,m,L$) the approximate solution $\widehat{x}^{(L)}_m$ computed after $m$ iterations of the $L$-minibatch CSMD algorithm with initial point $x_0$, step-size $\gamma$, and radius $R$, using recursion (11).
2.3 Main contribution: a multistage adaptive algorithm
Our approach to finding a sparse solution to the original stochastic optimization problem (5) consists in solving a sequence of auxiliary composite problems (7), with the sequence of parameters ($\kappa$, $x_0$, $R$) defined recursively. For the latter, we need to infer the quality of an approximate solution to (5). To this end, we introduce the following Reduced Strong Convexity (RSC) assumption, which is satisfied in the motivating example (for the sake of fluency, it is discussed in the appendix):
Assumption [RSC]
There exist $\delta>0$ and $\rho<\infty$ such that for any feasible solution $\widehat{x}\in X$ to the composite problem (7) satisfying, with probability at least $1-\varepsilon$,
\[
F_\kappa(\widehat{x})-F_\kappa(x_*)\leq\upsilon,
\]
it holds, with probability at least $1-\varepsilon$, that
\[
\|\widehat{x}-x_*\|\leq\delta\left[\rho s\kappa+\upsilon\kappa^{-1}\right]. \tag{14}
\]
Given the problem parameters $s,\nu,\delta,\rho,\kappa,R$ and some initial point $x_0\in X$ such that $x_*\in X_R(x_0)$, Algorithm 1 works in stages. Each stage is a run of the CSMD algorithm with a properly set penalty parameter $\kappa$. More precisely, at stage $k+1$, given the approximate solution $\widehat{x}^k_m$ of stage $k$, a new instance of CSMD is initialized on $X_{R_{k+1}}(x^{k+1}_0)$ with $x^{k+1}_0=\widehat{x}^k_m$ and $R_{k+1}=R_k/2$.
Furthermore, these stages are divided into two phases, which we refer to as preliminary and asymptotic:
Preliminary phase: the step-size $\gamma$ and the number of CSMD iterations per stage are fixed; the error of the approximate solutions converges linearly with the total number of calls to the stochastic oracle. This phase terminates when the error of the approximate solution becomes independent of the initial error of the algorithm; then the asymptotic phase begins.
Asymptotic phase: the step-size decreases and the stage length increases linearly; the solution converges sublinearly, at the "standard" rate $O(N^{-1/2})$, where $N$ is the total number of oracle calls. When the expensive proximal computation (8) results in a high numerical cost of the iterative algorithm, minibatches are used to keep the number of iterations per stage fixed.
In the algorithm description, $\overline{K}_1$ and $\overline{K}_2\asymp 1+\log(\frac{N}{m_0})$ stand for the respective maximal numbers of stages of the two phases of the method; here $m_0\asymp s\rho\nu\delta^2(\Theta+t)$ is the length of the stages of the first (preliminary) phase. The pseudo-code for the variant of the asymptotic phase with minibatches is given in Algorithm 2.
The following theorem states the main result of this paper, an upper bound on the precision of the estimator computed by our multistage method.
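The two-phase stage structure can be summarized schematically as follows. The `csmd` argument stands in for a call to CSMD($x_0,\gamma,\kappa,R,m,L$); `csmd_stub` below is a purely hypothetical placeholder, and the real method also re-tunes the penalty $\kappa$ at each stage, which is omitted here.

```python
# Schematic of the multistage scheme: each stage restarts CSMD from the
# previous output with the radius halved; the preliminary phase keeps
# (gamma, m) fixed, the asymptotic phase halves the step and doubles the
# stage length.  Parameter updates are schematic, not the prescribed ones.
def multistage(csmd, x0, R, gamma0, kappa0, m0, K1, K2):
    x, R_k, gamma, kappa, m = x0, R, gamma0, kappa0, m0
    for _ in range(K1):          # preliminary phase: linear convergence
        x = csmd(x, gamma, kappa, R_k, m)
        R_k /= 2
    for _ in range(K2):          # asymptotic phase: O(N^{-1/2}) rate
        gamma, m = gamma / 2, 2 * m
        x = csmd(x, gamma, kappa, R_k, m)
        R_k /= 2
    return x

def csmd_stub(x, gamma, kappa, R, m):
    # dummy stand-in for CSMD(x, gamma, kappa, R, m, L): halves the error
    return 0.5 * x

x_out = multistage(csmd_stub, x0=8.0, R=1.0, gamma0=0.1, kappa0=0.1,
                   m0=10, K1=2, K2=2)
assert x_out == 0.5  # four stages, each halving the (scalar) error
```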
Theorem 2.1
Assume that the total sample budget satisfies $N\geq m_0$, so that at least one stage of the preliminary phase of Algorithm 1 is completed. Then for $t\gtrsim\sqrt{\ln N}$ the approximate solution $\widehat{x}_N$ of Algorithm 1 satisfies, with probability at least $1-C(\overline{K}_1+\overline{K}_2)e^{-t}$,
\[
\|\widehat{x}_N-x_*\|\lesssim R\exp\left\{-\frac{c}{\delta^2\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+\delta^2\rho\sigma_* s\sqrt{\frac{\Theta+t}{N}}.
\]
The corresponding solution $\widehat{x}^{(b)}_N$ of the minibatch Algorithm 2 satisfies, with probability at least $1-C(\overline{K}_1+\widetilde{K}_2)e^{-t}$,
\[
\|\widehat{x}^{(b)}_N-x_*\|\lesssim R\exp\left\{-\frac{c}{\delta^2\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+\delta^2\rho\sigma_* s\sqrt{\frac{\Theta(\Theta+t)}{N}},
\]
where $\widetilde{K}_2\asymp 1+\ln\big(\frac{N}{\Theta m_0}\big)$ is the bound on the number of stages of the asymptotic phase of the minibatch algorithm.
Remark 2.1
Along with the oracle computation, the proximal computation implemented at each iteration of the algorithm is an important part of the computational cost of the method. It becomes even more important during the asymptotic phase, when the number of iterations per stage increases exponentially fast with the stage count and may result in poor real-time convergence. The interest of the minibatch implementation of the second phase of the algorithm lies in drastically reducing the number of iterations per asymptotic stage. The price to be paid is an extra factor $\sqrt{\Theta}$ that could also theoretically hinder convergence. However, in the problems of interest (sparse and group-sparse recovery, low-rank matrix recovery) $\Theta$ is logarithmic in the problem dimension. Furthermore, in our numerical experiments we did not observe any accuracy degradation when using the minibatch variant of the method.
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
We now consider again the original problem of recovery of an $s$-sparse signal $x_*\in X\subset\mathbf{R}^n$ from random observations defined by
\[
\eta_i=\mathfrak{r}(\phi_i^T x_*)+\sigma\xi_i,\quad i=1,2,\ldots,N, \tag{15}
\]
where $\mathfrak{r}:\mathbf{R}\to\mathbf{R}$ is a non-decreasing and continuous "activation function", and $\phi_i\in\mathbf{R}^n$ and $\xi_i\in\mathbf{R}$ are mutually independent.
We assume that the $\xi_i$ are sub-Gaussian, i.e., $\mathbf{E}\{e^{\xi_i^2}\}\leq\exp(1)$, while the regressors $\phi_i$ are bounded, i.e., $\|\phi_i\|_\infty\leq\overline{\nu}$. We also denote $\Sigma=\mathbf{E}\{\phi_i\phi_i^T\}$, with $\Sigma\succeq\kappa_\Sigma I$ for some $\kappa_\Sigma>0$, and $\|\Sigma_j\|_\infty\leq\upsilon<\infty$.
We will apply the machinery developed in Section 2 with respect to
\[
g(x)=\mathbf{E}\big\{\mathfrak{s}(\phi^T x)-x^T\phi\eta\big\},
\]
where $\mathfrak{r}(t)=\nabla\mathfrak{s}(t)$ for some convex and continuously differentiable $\mathfrak{s}$, applied with the norm $\|\cdot\|=\|\cdot\|_1$ (hence $\|\cdot\|_*=\|\cdot\|_\infty$), from some initial point $x_0\in X$ such that $\|x_*-x_0\|_1\leq R$. It remains to prove that the various assumptions of Section 2 are satisfied.
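A minimal simulation of the observation model (15), assuming the linear activation $\mathfrak{r}(t)=t$ and Rademacher ($\pm 1$) regressors, which trivially satisfy the boundedness assumption with $\overline{\nu}=1$ and $\Sigma=I$; the choices are illustrative only.

```python
import numpy as np

# Simulating observations (15) for a sparse GLM with (assumed) activation
# r(t) = t and Rademacher regressors: ||phi_i||_inf <= 1 and Sigma = I,
# so the boundedness and eigenvalue assumptions hold by construction.
rng = np.random.default_rng(2)
n, s, N, sigma = 100, 5, 1000, 0.1
x_star = np.zeros(n)
x_star[rng.choice(n, s, replace=False)] = 1.0   # s-sparse signal

Phi = rng.choice([-1.0, 1.0], size=(N, n))      # bounded regressors
xi = rng.standard_normal(N)                     # sub-Gaussian noise
eta = Phi @ x_star + sigma * xi                 # observations (15), r(t) = t

assert Phi.shape == (N, n) and eta.shape == (N,)
assert np.abs(Phi).max() <= 1.0                 # ||phi_i||_inf <= nu_bar = 1
```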
Proposition 3.1
Assume that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous and $\underline{r}$-strongly monotone (i.e., $|\mathfrak{r}(t)-\mathfrak{r}(t')|\geq\underline{r}|t-t'|$, which implies that $\mathfrak{s}$ is $\underline{r}$-strongly convex). Then
1. [Smoothness] $G(\cdot,\omega)$ is $\mathcal{L}(\omega)$-smooth with $\mathcal{L}(\omega)\leq\overline{r}\,\overline{\nu}^2$.
2. [Quadratic minoration] $g$ satisfies
\[
g(x)-g(x_*)\geq\tfrac{1}{2}\underline{r}\|x-x_*\|^2_\Sigma. \tag{16}
\]
3. [Reduced Strong Convexity] Assumption [RSC] holds with $\delta=1$ and $\rho=(\kappa_\Sigma\underline{r})^{-1}$.
4. [Sub-Gaussianity] $\nabla G(x_*,\omega_i)$ is $\sigma^2\overline{\nu}^2$-sub-Gaussian.
The proof is postponed to the appendix. The last point is a consequence of a generalization of the Restricted Eigenvalue property [5], which we detail below (as it gives insight into why Proposition 3.1 holds).
This condition, which we state and call $\mathbf{Q}(\lambda,\psi)$ in Lemma 3.1 below, is reminiscent of [26] and of the corresponding assumptions of [41, 14].
Lemma 3.1. Let $\lambda>0$ and $0<\psi\leq1$, and suppose that for all subsets $I\subset\{1,\dots,n\}$ of cardinality at most $s$ the following property is verified:
$$\forall z\in{\mathbf{R}}^n\quad\|z_I\|_1\leq\sqrt{\frac{s}{\lambda}}\,\|z\|_\Sigma+\tfrac{1}{2}(1-\psi)\|z\|_1, \tag{$\mathbf{Q}(\lambda,\psi)$}$$
where $z_I$ is obtained from $z$ by zeroing all components with indices $i\notin I$.
If $g(\cdot)$ satisfies the quadratic minoration condition, i.e., for some $\mu>0$,
$$g(x)-g(x_*)\geq\tfrac{1}{2}\mu\|x-x_*\|_\Sigma^2, \tag{17}$$
and $\widehat{x}$ is an admissible solution to (7) satisfying, with probability at least $1-\varepsilon$,
$$F_\kappa(\widehat{x})\leq F_\kappa(x_*)+\upsilon,$$
then, with probability at least $1-\varepsilon$,
$$\|\widehat{x}-x_*\|_1\leq\frac{s\kappa}{\lambda\mu\psi}+\frac{\upsilon}{\kappa\psi}. \tag{18}$$
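Condition $\mathbf{Q}(\lambda,\psi)$ is easy to check numerically on examples. Below is a small sanity check (our own illustrative script, not part of the paper's code): when $\Sigma\succeq\kappa I$, the Cauchy–Schwarz inequality $\|z_I\|_1\le\sqrt{s}\,\|z_I\|_2\le\sqrt{s/\kappa}\,\|z\|_\Sigma$ shows that $\mathbf{Q}(\kappa,1)$ always holds, and the script verifies this on random instances.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_condition_holds(Sigma, z, I, lam, psi):
    """Check the inequality of Q(lam, psi) for one vector z and one support I:
    ||z_I||_1 <= sqrt(|I|/lam) ||z||_Sigma + (1 - psi)/2 ||z||_1."""
    zI = np.zeros_like(z)
    zI[I] = z[I]
    lhs = np.abs(zI).sum()
    z_Sigma = np.sqrt(z @ Sigma @ z)
    rhs = np.sqrt(len(I) / lam) * z_Sigma + 0.5 * (1 - psi) * np.abs(z).sum()
    return lhs <= rhs + 1e-10

n, s = 50, 5
A = rng.standard_normal((n, n))
Sigma = A @ A.T + 0.5 * np.eye(n)      # Sigma >= kappa * I with kappa >= 0.5
kappa = np.linalg.eigvalsh(Sigma).min()

# With lam = kappa and psi = 1, Q(lam, psi) follows from Cauchy-Schwarz:
# ||z_I||_1 <= sqrt(s) ||z||_2 <= sqrt(s / kappa) ||z||_Sigma.
ok = all(
    q_condition_holds(Sigma, rng.standard_normal(n),
                      rng.choice(n, size=s, replace=False), kappa, 1.0)
    for _ in range(1000)
)
print(ok)
```

Note that this only exercises the easy regime $\lambda=\kappa$, $\psi=1$; the interest of $\mathbf{Q}(\lambda,\psi)$ is that it may hold with much larger $\lambda$, or with degenerate $\Sigma$, as discussed in Remark 3.1.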
Remark 3.1. Condition $\mathbf{Q}(\lambda,\psi)$ generalizes the classical Restricted Eigenvalue (RE) property [5] and the Compatibility Condition [46], and is the most relaxed condition under which classical bounds for the error of $\ell_1$-recovery routines have been established. Validity of $\mathbf{Q}(\lambda,\psi)$ with some $\lambda>0$ is necessary for $\Sigma$ to possess the celebrated null-space property [13],
$$\exists\psi>0:\;\max_{I,\,|I|\leq s}\|z_I\|_1\leq\tfrac{1}{2}(1-\psi)\|z\|_1\;\;\forall z\in\mathrm{Ker}(\Sigma),$$
which is necessary and sufficient for the $s$-goodness of $\Sigma$ (i.e., $\widehat{x}\in\mathop{\mathrm{Argmin}}_u\{\|u\|:\;\Sigma u=\Sigma x_*\}$ recovers exactly every $s$-sparse signal $x_*$ in the noiseless case).
When $\Sigma$ possesses the null-space property, $\mathbf{Q}(\lambda,\psi)$ may hold for $\Sigma$ with a nontrivial kernel; this is typically the case for random matrices [41, 42], such as rank-deficient Wishart matrices. When $\Sigma$ is a regular matrix, condition $\mathbf{Q}(\lambda,\psi)$ may also hold with a constant $\lambda$ much larger than the minimal eigenvalue of $\Sigma$, provided the eigenspace corresponding to small eigenvalues of $\Sigma$ does not contain vectors $z$ with $\|z_I\|_1>\tfrac{1}{2}(1-\psi)\|z\|_1$.
Remarks. In the case of linear regression, where $\mathfrak{r}(t)=t$, it holds that
$$g(x)={\mathbf{E}}\big\{\tfrac{1}{2}(\phi^Tx)^2-x^T\phi\eta\big\}=\tfrac{1}{2}{\mathbf{E}}\big\{(\phi^T(x_*-x))^2-(\phi^Tx_*)^2\big\}=\tfrac{1}{2}(x-x_*)^T\Sigma(x-x_*)-\tfrac{1}{2}x_*^T\Sigma x_*=\tfrac{1}{2}\|x-x_*\|_\Sigma^2-\tfrac{1}{2}\|x_*\|_\Sigma^2$$
and
$$\nabla G(x,\omega)=\phi\phi^T(x-x_*)-\sigma\xi\phi.$$
In this case,
$$\mathcal{L}(\omega)\leq\|\phi\phi^T\|_\infty\leq\overline{\nu}^2.$$
Note that the quadratic minoration bound (16) for $g(x)-g(x_*)$ is often overly pessimistic. Indeed, consider for instance a Gaussian regressor $\phi\sim\mathcal{N}(0,\Sigma)$ (such regressors are not a.s. bounded; we consider this example for illustration purposes only) and the activation $\mathfrak{r}$ defined, for some $0\leq\alpha\leq1$ (with the convention $0/0=0$), by
$$\mathfrak{r}(t)=\begin{cases}t,&|t|\leq1,\\ \mathrm{sign}(t)\,[\alpha^{-1}(|t|^\alpha-1)+1],&|t|>1.\end{cases} \tag{21}$$
Passing from $\phi$ to $\varphi=\Sigma^{-1/2}\phi$ and from $x$ to $z=\Sigma^{1/2}x$, and using the decomposition
$$\varphi=\frac{zz^T}{\|z\|_2^2}\varphi+\underbrace{\left(I-\frac{zz^T}{\|z\|_2^2}\right)\varphi}_{=:\chi}$$
with $\frac{zz^T}{\|z\|_2^2}\varphi$ and $\chi$ independent, we obtain
$$H(x)={\mathbf{E}}\{\phi\,\mathfrak{r}(\phi^Tx)\}={\mathbf{E}}\left\{\frac{zz^T}{\|z\|_2^2}\varphi\,\mathfrak{r}(\varphi^Tz)\right\}=\frac{z}{\|z\|_2}{\mathbf{E}}\left\{\varsigma\,\mathfrak{r}(\varsigma\|z\|_2)\right\}=\frac{\Sigma^{1/2}x}{\|x\|_\Sigma}{\mathbf{E}}\left\{\varsigma\,\mathfrak{r}(\varsigma\|x\|_\Sigma)\right\}$$
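In the linear case the objects above take a particularly simple form. The following sketch (our own notation, not the authors' code) implements the activation (21) and checks empirically that the stochastic gradient $\nabla G(x,\omega)=\phi\phi^T(x-x_*)-\sigma\xi\phi$ has zero mean at the minimizer $x_*$.

```python
import numpy as np

def r_alpha(t, alpha):
    """Activation (21): identity on [-1, 1], slower (power-alpha) growth outside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, t,
                    np.sign(t) * ((np.abs(t) ** alpha - 1.0) / alpha + 1.0))

rng = np.random.default_rng(1)
n, sigma, m = 20, 0.1, 200000
x_star = rng.standard_normal(n)

phi = rng.standard_normal((m, n))   # regressors, one per row
xi = rng.standard_normal(m)         # observation noise
# At x = x_*, the term phi phi^T (x - x_*) vanishes, leaving -sigma * xi * phi,
# whose expectation is zero since xi is independent of phi with zero mean.
grads = -sigma * xi[:, None] * phi
mean_grad_at_star = grads.mean(axis=0)
print(np.abs(mean_grad_at_star).max())   # ~ sigma / sqrt(m), close to 0
```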
where $\varsigma\sim\mathcal{N}(0,1)$. Thus, $H(x)$ is proportional to $\Sigma^{1/2}x/\|x\|_\Sigma$ with coefficient
$$h\big(\|x\|_\Sigma\big)={\mathbf{E}}\left\{\varsigma\,\mathfrak{r}(\varsigma\|x\|_\Sigma)\right\}.$$
Figure 1 represents the mapping $h$ for different values of $\alpha$ (on the left), along with the corresponding mapping $H$ on a $\|\cdot\|_\Sigma$-ball of radius $r$ centered at the origin (on the right).
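The coefficient $h$ is easy to estimate by Monte Carlo. The sketch below (illustrative, with $\alpha=1/2$) shows that $h(r)\approx r$ for small $r$, while for large $r$ and $\alpha<1$ the growth of $h$ is sublinear.

```python
import numpy as np

def r_alpha(t, alpha):
    """Activation (21) for alpha > 0."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, t,
                    np.sign(t) * ((np.abs(t) ** alpha - 1.0) / alpha + 1.0))

def h(r, alpha, n_samples=400000, seed=2):
    """Monte Carlo estimate of h(r) = E[zeta * r_alpha(zeta * r)], zeta ~ N(0,1)."""
    zeta = np.random.default_rng(seed).standard_normal(n_samples)
    return float(np.mean(zeta * r_alpha(zeta * r, alpha)))

# For small r the activation acts as the identity, so h(r) is close to r;
# for large r the estimate falls well below r (sublinear growth).
print(h(0.1, 0.5), h(10.0, 0.5))
```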
3.2 Stochastic Mirror Descent algorithm

In this section, we describe the statistical properties of the approximate solutions of Algorithm 1 when applied to the sparse recovery problem. We use the following distance-generating function of the $\ell_1$-ball of ${\mathbf{R}}^n$ (cf. [27, Section 5.7.1]):
$$\theta(x)=\frac{c}{p}\|x\|_p^p,\quad p=\begin{cases}2,&n=2,\\ 1+\frac{1}{\ln n},&n\geq3,\end{cases}\quad c=\begin{cases}2,&n=2,\\ e\ln n,&n\geq3.\end{cases} \tag{26}$$
It immediately follows that $\theta$ is strongly convex with modulus 1 w.r.t. the norm $\|\cdot\|_1$ on its unit ball, and that $\Theta\leq e\ln n$. In particular, Theorem 2.1 entails the following statement.
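As a sanity check, the distance-generating function (26) is straightforward to implement. The following illustrative snippet (our notation) evaluates $\theta$ for $n\ge3$ and verifies the bound $\Theta\le e\ln n$ on a point of the $\ell_1$ unit sphere, where it follows from $\|x\|_p\le\|x\|_1$ for $p>1$.

```python
import numpy as np

def theta(x, n):
    """Distance-generating function (26) for the l1 ball in R^n (case n >= 3)."""
    p = 1.0 + 1.0 / np.log(n)
    c = np.e * np.log(n)
    return (c / p) * np.sum(np.abs(x) ** p)

rng = np.random.default_rng(3)
n = 1000

# On the l1 unit ball, ||x||_p <= ||x||_1 <= 1 for p > 1, hence
# theta(x) <= c / p <= e ln(n) =: Theta.
Theta = np.e * np.log(n)
x = rng.standard_normal(n)
x /= np.abs(x).sum()               # normalize onto the l1 unit sphere
print(theta(x, n), Theta)
```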
Proposition 3.2. For $t\gtrsim\sqrt{\ln N}$, assuming the sample budget is large enough, i.e., $N\geq m_0$ (so that at least one stage of the preliminary phase of Algorithm 1 is completed), the approximate solution $\widehat{x}_N$ satisfies, with probability at least $1-Ce^{-t}\ln N$,
$$\|\widehat{x}_N-x_*\|_1\lesssim R\exp\left\{-c\,\frac{\underline{r}\kappa_\Sigma}{\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\ln n+t)}\right\}+\frac{\sigma\overline{\nu}s}{\underline{r}\kappa_\Sigma}\sqrt{\frac{\ln n+t}{N}}. \tag{27}$$
The corresponding solution $\widehat{x}_N^{(b)}$ of the minibatch variant of the algorithm satisfies, with probability at least $1-Ce^{-t}\ln N$,
$$\|\widehat{x}_N^{(b)}-x_*\|_1\lesssim R\exp\left\{-c\,\frac{\underline{r}\kappa_\Sigma}{\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\ln n+t)}\right\}+\frac{\sigma\overline{\nu}s}{\underline{r}\kappa_\Sigma}\sqrt{\frac{\ln n\,(\ln n+t)}{N}}.$$
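To build intuition for the two regimes in (27), the following toy script (with purely hypothetical constants of our choosing) evaluates the geometric term and the $O(\sqrt{(\ln n+t)/N})$ term of the bound and reports which one dominates for a range of budgets $N$.

```python
import numpy as np

# Toy evaluation of the two terms of bound (27) with hypothetical constants:
# a fast geometric ("preliminary") phase followed by a slow
# O(sqrt((ln n + t) / N)) ("asymptotic") phase.
R, c = 1.0, 1.0
r_lo, r_hi = 1.0, 2.0          # underline-r and overline-r (hypothetical)
kappa, nu, sigma = 1.0, 1.0, 0.01
s, n, t = 10, 10**5, 5.0

def bound(N):
    lin = R * np.exp(-c * (r_lo * kappa) / (r_hi * nu**2)
                     * N / (s * (np.log(n) + t)))
    asy = sigma * nu * s / (r_lo * kappa) * np.sqrt((np.log(n) + t) / N)
    return lin, asy

for N in [10**3, 10**4, 10**5, 10**6]:
    lin, asy = bound(N)
    print(N, lin, asy, "preliminary" if lin > asy else "asymptotic")
```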
Remark 3.2. The bounds on the $\ell_1$-norm of the error $\widehat{x}_N-x_*$ (or $\widehat{x}_N^{(b)}-x_*$) established in Proposition 3.2 allow us to quantify the prediction error $g(\widehat{x}_N)-g(x_*)$ (and $g(\widehat{x}_N^{(b)})-g(x_*)$), and also lead to bounds on $\|\widehat{x}_N-x_*\|_\Sigma$ and $\|\widehat{x}_N-x_*\|_2$ (respectively, on $\|\widehat{x}_N^{(b)}-x_*\|_\Sigma$ and $\|\widehat{x}_N^{(b)}-x_*\|_2$).
For instance, Proposition 2.1 in the present setting implies the following bound on the prediction error after $N$ steps of the algorithm:
$$g(\widehat{x}_N)-g(x_*)\lesssim\frac{R^2\kappa_\Sigma\underline{r}}{s}\exp\left\{-\frac{c\kappa_\Sigma\underline{r}}{\delta^2\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\Theta+t)}\right\}+\frac{\sigma^2\overline{\nu}^2s(\Theta+t)}{\kappa_\Sigma\underline{r}N}$$
with probability at least $1-C\ln N\,e^{-t}$. We conclude by (16) that
$$\|\widehat{x}_N-x_*\|_2^2\leq\kappa_\Sigma^{-1}\|\widehat{x}_N-x_*\|^2_\Sigma\leq2\kappa_\Sigma^{-1}\underline{r}^{-1}[g(\widehat{x}_N)-g(x_*)]\lesssim\frac{R^2}{s}\exp\left\{-\frac{c\kappa_\Sigma\underline{r}}{\delta^2\overline{r}\,\overline{\nu}^2}\,\frac{N}{s(\Theta+t)}\right\}+\frac{\sigma^2\overline{\nu}^2s(\Theta+t)}{\kappa^2_\Sigma\underline{r}^2N}.$$
In other words, the error $\|\widehat{x}_N-x_*\|_2$ converges geometrically to the "asymptotic rate" $\frac{\sigma\overline{\nu}}{\kappa_\Sigma\underline{r}}\sqrt{\frac{s(\Theta+t)}{N}}$, which is the standard rate established in this setting (cf. [1, 5, 35]).
Remark 3.3. The proposed approach also allows us to address the situation in which the regressors are not a.s. bounded. For instance, consider the case of random regressors with i.i.d. sub-Gaussian entries such that
$$\forall j\leq n,\quad{\mathbf{E}}\left[\exp\left(\tfrac{[\phi_i]_j^2}{\varkappa^2}\right)\right]\leq1.$$
Using the fact that the maximum of the uniform norms $\|\phi_i\|_\infty$, $1\leq i\leq m$, concentrates around $\varkappa\sqrt{\ln mn}$, together with the independence of the noises $\xi_i$ from the $\phi_i$, the smoothness and sub-Gaussianity assumptions of Proposition 3.2 can be stated conditionally on the event $\left\{\omega:\;\max_{i\leq m}\|\phi_i\|_\infty^2\lesssim\varkappa^2(\ln[mn]+t)\right\}$, which has probability at least $1-e^{-t}$. Replacing the bound on the uniform norm of the regressors with $\varkappa^2(\ln[mn]+t)$ in the definition of the algorithm parameters and combining with an appropriate deviation inequality for martingales (cf., e.g., [4]), one arrives at a bound on the error $\|\widehat{x}_N-x_*\|_1$ of Algorithm 1 similar to (27) of Proposition 3.2, with $\overline{\nu}$ replaced by $\varkappa\sqrt{\ln[mn]+t}$.
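The concentration of $\max_{i\le m}\|\phi_i\|_\infty$ around $\varkappa\sqrt{\ln mn}$ is easy to observe numerically. The following illustrative Monte Carlo check (standard Gaussian entries; the constant $\sqrt{2}$ and lower-order terms are specific to this toy setup) compares the realized maximum with $\sqrt{2\ln(mn)}$.

```python
import numpy as np

rng = np.random.default_rng(4)

# For m regressors in R^n with i.i.d. N(0, 1) entries, the maximum of the
# m * n absolute entries concentrates around sqrt(2 ln(mn)).
m, n = 1000, 100
phi = rng.standard_normal((m, n))
max_sup = float(np.abs(phi).max())      # max_i ||phi_i||_inf
predicted = float(np.sqrt(2 * np.log(m * n)))
print(max_sup, predicted)
```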
In this section, we present results of a small simulation study illustrating the theoretical part of the previous section.222The reader is invited to check Section C of the supplementary material for more experimental results. We consider the GLR model (15) with activation function (21) where α=1/2𝛼12\alpha=1/2.
In our simulations, x∗subscript𝑥x_{*} is an s𝑠s-sparse vector with s𝑠s nonvanishing components sampled independently from the standard s𝑠s-dimensional Gaussian distribution; regressors ϕisubscriptitalic-ϕ𝑖\phi_{i} are sampled from a multivariate Gaussian distribution ϕ∼𝒩(0,Σ)similar-toitalic-ϕ𝒩0Σ\phi\sim\mathcal{N}(0,\Sigma), where ΣΣ\Sigma is a diagonal covariance matrix with diagonal entries σ1≤…≤σnsubscript𝜎1…subscript𝜎𝑛\sigma_{1}\leq...\leq\sigma_{n}. In Figure 2 we report on the experiment in which we compare the performance of the CSMD-SR algorithm from Section 2.3 to that of four other methods. The contenders are (1) “vanilla” non-Euclidean SMD algorithm constrained to the ℓ1subscriptℓ1\ell_{1}-ball equipped with the distance generating function (26), (2) composite non-Euclidean dual averaging algorithm (p𝑝p-Norm RDA) from [47], (3) multistage SMD-SR of [23], and (4) “vanilla” Euclidean SGD.
The regularization parameter of the ℓ1subscriptℓ1\ell_{1} penalty in (2) is set to the theoretically optimal value λ=2σ2log(n)/T𝜆2𝜎2𝑛𝑇\lambda=2\sigma\sqrt{2\log(n)/T}. The corresponding dimension of the parameter space is n=500000𝑛500000n=500000, the sparsity level of the optimal point x∗subscript𝑥x_{*} is s=200𝑠200s=200, and the “total budget” of oracle calls is N=250000𝑁250000N=250000; we use the identity regressor covariance matrix (Σ=InΣsubscript𝐼𝑛\Sigma=I_{n}) and σ∈{0.001,0.1}𝜎0.0010.1\sigma\in\{0.001,0.1\}.
To reduce computation time we use minibatch versions of the multistage algorithms (CSMD-SR and algorithm (3)); the data used to compute the stochastic gradient realizations $\nabla G(x_i,\omega)=\phi(\mathfrak{r}(\phi^{T}x_i)-\eta)$ at the current search point $x_i$ are generated "on the fly."
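The minibatch gradient just described can be sketched as follows; the toy numbers use the identity map in place of $\mathfrak{r}$ purely for checkability:

```python
def minibatch_grad(x, batch, activation):
    """Average of stochastic gradients grad G(x, omega) = phi * (r(phi^T x) - eta)
    over a minibatch of (phi, eta) pairs."""
    g = [0.0] * len(x)
    for phi, eta in batch:
        residual = activation(sum(p * v for p, v in zip(phi, x))) - eta
        for j, p in enumerate(phi):
            g[j] += p * residual
    return [v / len(batch) for v in g]

# toy check: phi = (1, 0, 2), x = (1, 1, 1), eta = 1, identity activation:
# phi^T x = 3, residual = 2, gradient = (2, 0, 4)
g = minibatch_grad([1.0, 1.0, 1.0], [([1.0, 0.0, 2.0], 1.0)], lambda t: t)
```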
We repeat each simulation 20 times and plot the median of the error $\|\widehat{x}_i-x_*\|_1$, along with its first and last deciles, at each iteration of the algorithm against the number of oracle calls. The proposed method outperforms the other algorithms, which struggle to reach the regime where the stochastic noise is dominant.
In the second experiment reported here, we study the behavior of the multistage algorithm derived from Algorithm 2 in which, instead of using independent data samples, we reuse the same data at each stage of the method. In Figure 3 we compare the CSMD-SR algorithm with this data-recycling variant, which is of interest because it quickly reaches the noise-dominated regime while using a limited amount of samples. In the first simulation of this comparison, we consider a linear regression problem with parameter dimension $n=100\,000$ and sparsity level $s=75$ of the optimal solution; in the second, we consider the GLR model (15) with activation function $\mathfrak{r}_{1/10}(t)$. We choose $\Sigma=I_n$ and $\sigma=0.001$, and run 14 (preliminary) stages of the algorithm with $m_0=3500$ in the first simulation and $m_0=4500$ in the second. We believe that the results speak for themselves.
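Schematically, the recycling variant only changes which samples each stage consumes. In the sketch below, `stage_solver` is a hypothetical stand-in for one restarted CSMD stage, and the toy solver merely averages the iterate toward the batch mean:

```python
def multistage(stage_solver, x0, data, n_stages, recycle):
    """Run n_stages stages; with recycle=True every stage reuses the whole
    sample, otherwise each stage consumes a fresh slice of the data."""
    x, m0 = x0, len(data) // n_stages
    for k in range(n_stages):
        batch = data if recycle else data[k * m0:(k + 1) * m0]
        x = stage_solver(x, batch)   # one stage, restarted at the current x
    return x

mean = lambda b: sum(b) / len(b)
solver = lambda x, b: 0.5 * (x + mean(b))   # toy "stage", not CSMD itself
out_fresh = multistage(solver, 0.0, [1.0] * 8, 4, recycle=False)
out_recycled = multistage(solver, 0.0, [1.0] * 8, 4, recycle=True)
```

On this toy instance both variants coincide; the point of Figure 3 is that for the actual CSMD stages the recycling variant trades statistical independence for sample economy.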
Acknowledgements
This work was supported by the Multidisciplinary Institute in Artificial Intelligence MIAI @ Grenoble Alpes (ANR-19-P3IA-0003), the "Investissements d'avenir" program (ANR20-CE23-0007-01), the FAIRPLAY project, LabEx Ecodec (ANR11-LABX-0047), and ANR-19-CE23-0026. The authors would also like to acknowledge the CRITEO AI Lab for supporting this work.
Appendix A Proofs

We use the notation $\mathbf{E}_i$ for the conditional expectation given $x_0$ and $\omega_1,\ldots,\omega_i$.
A.1 Proof of Proposition 2.1

The result of Proposition 2.1 is an immediate consequence of the following statement.
Proposition A.1

Let
$$f(x)=\tfrac{1}{2}g(x)+h(x),\quad x\in X.$$
In the situation of Section 2.2, let $\gamma_i\le(4\nu)^{-1}$ for all $i=0,1,\ldots$, and let $\widehat{x}_m$ be defined in (10), where the $x_i$ are the iterations (9). Then for any $t\ge 2\sqrt{2+\ln m}$ there is $\overline{\Omega}_m\subset\Omega$ such that $\mathrm{Prob}(\overline{\Omega}_m)\ge 1-4e^{-t}$ and for all $\omega^m=[\omega_1,\ldots,\omega_m]\in\overline{\Omega}_m$,
$$\begin{aligned}
\Big(\sum_{i=0}^{m-1}\gamma_i\Big)[f(\widehat{x}_m)-f(x_*)]
&\le \sum_{i=0}^{m-1}\Big[\tfrac{1}{2}\gamma_i\langle\nabla g(x_i),x_i-x_*\rangle+\gamma_{i+1}(h(x_{i+1})-h(x_*))\Big]\\
&\le V(x_0,x_*)+\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)]\\
&\quad+V(x_0,x_*)+15tR^2+\sigma_*^2\Big[7\sum_{i=0}^{m-1}\gamma_i^2+24t\,\overline{\gamma}^2\Big].
\end{aligned}\tag{28}$$
In particular, when using the constant stepsize strategy with $\gamma_i\equiv\gamma$, $0<\gamma\le(4\nu)^{-1}$, one has
$$\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]
\le\frac{V(x_0,x_*)+15tR^2}{\gamma m}+\frac{h(x_0)-h(x_m)}{m}+\gamma\sigma_*^2\Big(7+\frac{24t}{m}\Big).\tag{29}$$
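A standard way to read the constant-stepsize bound (29), stated here as a sketch rather than part of the proposition (and ignoring the $h(x_0)-h(x_m)$ and $24t/m$ terms), is to balance the $1/(\gamma m)$ and $\gamma$ terms in $\gamma$:

```latex
% Balancing (V(x_0,x_*)+15tR^2)/(\gamma m) against 7\gamma\sigma_*^2 suggests
\gamma \;=\; \min\left\{\frac{1}{4\nu},\;
  \sqrt{\frac{V(x_0,x_*)+15\,tR^2}{7\,\sigma_*^2\,m}}\,\right\},
\qquad\text{giving}\qquad
\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]
\;\lesssim\;
\frac{\nu\big(V(x_0,x_*)+tR^2\big)}{m}
\;+\;\sigma_*\sqrt{\frac{V(x_0,x_*)+tR^2}{m}}.
```

The first term mirrors the fast "preliminary phase" behavior and the second the $O(m^{-1/2})$ "asymptotic phase" discussed in the introduction.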
Proof.

Denote $H_i=\nabla G(x_{i-1},\omega_i)$. In the sequel, we use the shortcut notation $\vartheta(z)$ and $V(x,z)$ for $\vartheta_{x_0}^R(z)$ and $V_{x_0}(x,z)$ when the exact values of $x_0$ and $R$ are clear from the context.
$1^o$. From the definition of $x_i$ and of the composite prox-mapping (8) (cf. Lemma A.1 of [40]), we conclude that there is $\eta_i\in\partial h(x_i)$ such that
$$\langle\gamma_{i-1}H_i+\gamma_i\eta_i+\nabla\vartheta(x_i)-\nabla\vartheta(x_{i-1}),\,z-x_i\rangle\ge 0\quad\forall z\in\mathcal{X},$$
implying, as usual [12], that for all $z\in\mathcal{X}$
$$\langle\gamma_{i-1}H_i+\gamma_i\eta_i,\,x_i-z\rangle\le V(x_{i-1},z)-V(x_i,z)-V(x_{i-1},x_i).$$
In particular,
$$\begin{aligned}
\gamma_{i-1}\langle H_i,x_{i-1}-x_*\rangle+\gamma_i\langle\eta_i,x_i-x_*\rangle
&\le V(x_{i-1},x_*)-V(x_i,x_*)-V(x_{i-1},x_i)+\gamma_{i-1}\langle H_i,x_{i-1}-x_i\rangle\\
&\le V(x_{i-1},x_*)-V(x_i,x_*)+\tfrac{1}{2}\gamma_{i-1}^2\|H_i\|_*^2.
\end{aligned}$$
Observe that due to the Lipschitz continuity of $\nabla G(\cdot,\omega)$ one has
$$\nu\langle\nabla G(x,\omega)-\nabla G(x',\omega),x-x'\rangle\ge\|\nabla G(x,\omega)-\nabla G(x',\omega)\|_*^2\quad\forall x,x'\in\mathcal{X},\tag{30}$$
so that
$$\begin{aligned}
\|\nabla G(x,\omega)\|_*^2
&\le 2\|\nabla G(x,\omega)-\nabla G(x_*,\omega)\|_*^2+2\|\nabla G(x_*,\omega)\|_*^2\\
&\le 2\nu\langle\nabla G(x,\omega)-\nabla G(x_*,\omega),x-x_*\rangle+2\|\nabla G(x_*,\omega)\|_*^2\\
&=2\nu\langle\nabla G(x,\omega),x-x_*\rangle-2\nu\langle\nabla G(x_*,\omega),x-x_*\rangle+2\|\nabla G(x_*,\omega)\|_*^2,
\end{aligned}$$
so that
$$\gamma_{i-1}\langle H_i,x_{i-1}-x_*\rangle+\gamma_i\langle\eta_i,x_i-x_*\rangle
\le V(x_{i-1},x_*)-V(x_i,x_*)+\gamma_{i-1}^2\big[\nu\langle H_i,x_{i-1}-x_*\rangle-\nu\zeta_i+\tau_i\big],$$
where $\zeta_i=\langle\nabla G(x_*,\omega_i),x_{i-1}-x_*\rangle$ and $\tau_i=\|\nabla G(x_*,\omega_i)\|_*^2$.
As a result, by convexity of $h$ we have for $\gamma_i\le(4\nu)^{-1}$
$$\begin{aligned}
\tfrac{3}{4}\gamma_{i-1}\langle\nabla g(x_{i-1}),x_{i-1}-x_*\rangle+\gamma_i[h(x_i)-h(x_*)]
&\le(\gamma_{i-1}-\gamma_{i-1}^2\nu)\langle\nabla g(x_{i-1}),x_{i-1}-x_*\rangle+\gamma_i\langle\eta_i,x_i-x_*\rangle\\
&\le V(x_{i-1},x_*)-V(x_i,x_*)+(\gamma_{i-1}-\gamma_{i-1}^2\nu)\langle\xi_i,x_{i-1}-x_*\rangle+\gamma_{i-1}^2[\tau_i-\nu\zeta_i],
\end{aligned}$$
where we put $\xi_i=H_i-\nabla g(x_{i-1})$.
Summing from $i=1$ to $m$, we obtain
$$\begin{aligned}
\sum_{i=1}^{m}\gamma_{i-1}\Big(\tfrac{3}{4}\langle\nabla g(x_{i-1}),x_{i-1}-x_*\rangle+[h(x_{i-1})-h(x_*)]\Big)
&\le V(x_0,x_*)+\underbrace{\sum_{i=1}^{m}\big[\gamma_{i-1}^2(\tau_i-\nu\zeta_i)+\gamma_{i-1}(1-\gamma_{i-1}\nu)\langle\xi_i,x_{i-1}-x_*\rangle\big]}_{=:R_m}\\
&\quad+\gamma_0[h(x_0)-h(x_*)]-\gamma_m[h(x_m)-h(x_*)].
\end{aligned}\tag{31}$$
$2^o$. We have
$$\begin{aligned}
\gamma_{i-1}\langle\xi_i,x_{i-1}-x_*\rangle
&=\gamma_{i-1}\overbrace{\langle[\nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i)]-\nabla g(x_{i-1}),\,x_{i-1}-x_*\rangle}^{\upsilon_i}
+\gamma_{i-1}\langle\nabla G(x_*,\omega_i),x_{i-1}-x_*\rangle\\
&=\gamma_{i-1}[\upsilon_i+\zeta_i],
\end{aligned}$$
so that
$$R_m=\sum_{i=1}^{m}\gamma_{i-1}^2\tau_i+\sum_{i=1}^{m}(\gamma_{i-1}-\gamma_{i-1}^2\nu)\upsilon_i+\sum_{i=1}^{m}(\gamma_{i-1}-2\nu\gamma_{i-1}^2)\zeta_i=:r_m^{(1)}+r_m^{(2)}+r_m^{(3)}.\tag{32}$$
Note that $r_m^{(3)}$ is a sub-Gaussian martingale. Indeed, one has $\mathbf{E}_{i-1}\{\zeta_i\}=0$ a.s. (we use the notation $\mathbf{E}_{i-1}$ for the conditional expectation given $x_0,\omega_1,\ldots,\omega_{i-1}$), and
$$|\zeta_i|\le\|x_{i-1}-x_*\|\,\|\nabla G(x_*,\omega_i)\|_*,$$
so that by the sub-Gaussian hypothesis (6), $\mathbf{E}_{i-1}\big\{\exp\big(\zeta_i^2/\nu_*^2\big)\big\}\le\exp(1)$ with $\nu_*^2=4R^2\sigma_*^2$.
As a result (cf. the proof of Proposition 4.2 in [28]), for all $t$
$$\mathbf{E}_{i-1}\big\{e^{t\zeta_i}\big\}\le\exp\big(t\,\mathbf{E}_{i-1}\{\zeta_i\}+\tfrac{3}{4}t^2\nu_*^2\big)=\exp\big(3t^2R^2\sigma_*^2\big),$$
and applying (37a) to $S_m=r_m^{(3)}$ with
$$r_m=6R^2\sigma_*^2\sum_{i=0}^{m-1}(\gamma_i-2\nu\gamma_i^2)^2\le 6R^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2,$$
we conclude that there is $\Omega_m^{(3)}$ with $\mathrm{Prob}(\Omega_m^{(3)})\ge 1-e^{-t}$ such that for all $\omega^m\in\Omega_m^{(3)}$
| 1,951
|
Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
2o.
We have
, 1 = γi−1⟨ξi,xi−1−x∗⟩subscript𝛾𝑖1subscript𝜉𝑖subscript𝑥𝑖1subscript𝑥\displaystyle\gamma_{i-1}{\langle}\xi_{i},x_{i-1}-x_{*}{\rangle}. , 2 = =\displaystyle=. , 3 = γi−1⟨[∇G(xi−1,ωi)−∇G(x∗,ωi)]−∇g(xi−1),xi−1−x∗⟩⏞υisubscript𝛾𝑖1superscript⏞delimited-[]∇𝐺subscript𝑥𝑖1subscript𝜔𝑖∇𝐺subscript𝑥subscript𝜔𝑖∇𝑔subscript𝑥𝑖1subscript𝑥𝑖1subscript𝑥subscript𝜐𝑖\displaystyle\gamma_{i-1}\overbrace{{\langle}[\nabla G(x_{i-1},\omega_{i})-\nabla G(x_{*},\omega_{i})]-\nabla g(x_{i-1}),x_{i-1}-x_{*}{\rangle}}^{\upsilon_{i}}. , 4 = . , 1 = . , 2 = . , 3 = +γi−1⟨∇G(x∗,ωi),xi−1−x∗⟩subscript𝛾𝑖1∇𝐺subscript𝑥subscript𝜔𝑖subscript𝑥𝑖1subscript𝑥\displaystyle+\gamma_{i-1}{\langle}\nabla G(x_{*},\omega_{i}),x_{i-1}-x_{*}{\rangle}. , 4 = . , 1 = . , 2 = =\displaystyle=. , 3 = γi−1[υi+ζi],subscript𝛾𝑖1delimited-[]subscript𝜐𝑖subscript𝜁𝑖\displaystyle\gamma_{i-1}[\upsilon_{i}+\zeta_{i}],. , 4 =
so that
, 1 = Rm=∑i=1mγi−12τi+∑i=1m(γi−1−γi−12ν)υi+∑i=1m(γi−1−2νγi−12)ζi=:rm(1)+rm(2)+rm(3).\displaystyle R_{m}={\sum_{i=1}^{m}\gamma_{i-1}^{2}\tau_{i}}+{\sum_{i=1}^{m}(\gamma_{i-1}-\gamma^{2}_{i-1}\nu)\upsilon_{i}}+{\sum_{i=1}^{m}(\gamma_{i-1}-2\nu\gamma_{i-1}^{2})\zeta_{i}}=:r^{(1)}_{m}+r^{(2)}_{m}+r^{(3)}_{m}.. , 2 = . , 3 = (32)
Note that rm(3)subscriptsuperscript𝑟3𝑚r^{(3)}_{m} is a sub-Gaussian martingale. Indeed, one has 𝐄i−1{ζi}=0subscript𝐄𝑖1subscript𝜁𝑖0{\mathbf{E}}_{i-1}\{\zeta_{i}\}=0 a.s.,333We use notation 𝐄i−1subscript𝐄𝑖1{\mathbf{E}}_{i-1} for the conditional expectation given x0,ω1,…,ωi−1subscript𝑥0subscript𝜔1…subscript𝜔𝑖1x_{0},\omega_{1},...,\omega_{i-1}. and
, 1 = |ζi|≤‖xi−1−x∗‖‖∇G(x∗,ω)‖∗,subscript𝜁𝑖normsubscript𝑥𝑖1subscript𝑥subscriptnorm∇𝐺subscript𝑥𝜔|\zeta_{i}|\leq\|x_{i-1}-x_{*}\|\,\|\nabla G(x_{*},\omega)\|_{*},. , 2 =
so that by the sub-Gaussian hypothesis (6), 𝐄i−1{exp(ζi24R2σ∗2⏟ν∗2)}≤exp(1)subscript𝐄𝑖1subscript⏟subscriptsuperscript𝜁2𝑖4superscript𝑅2superscriptsubscript𝜎2superscriptsubscript𝜈21{\mathbf{E}}_{i-1}\Big{\{}\exp\Big{(}\underbrace{\zeta^{2}_{i}\over 4R^{2}\sigma_{*}^{2}}_{\nu_{*}^{2}}\Big{)}\Big{\}}\leq\exp(1).
As a result (cf. the proof of Proposition 4.2 in [28]),
, 1 = ∀t𝐄i−1{etζi}≤exp(t𝐄i−1{ζi}+34t2ν∗2)=exp(3t2R2σ∗2),for-all𝑡subscript𝐄𝑖1superscript𝑒𝑡subscript𝜁𝑖𝑡subscript𝐄𝑖1subscript𝜁𝑖34superscript𝑡2subscriptsuperscript𝜈23superscript𝑡2superscript𝑅2subscriptsuperscript𝜎2\forall t\quad{\mathbf{E}}_{i-1}\left\{e^{t\zeta_{i}}\right\}\leq\exp\left(t{\mathbf{E}}_{i-1}\{\zeta_{i}\}+{\tfrac{3}{4}t^{2}\nu^{2}_{*}}\right)=\exp\left({3t^{2}R^{2}\sigma^{2}_{*}}\right),. , 2 =
and applying (37a) to Sm=rm(3)subscript𝑆𝑚superscriptsubscript𝑟𝑚3S_{m}=r_{m}^{(3)} with
, 1 = rm=6R2σ∗2∑i=0m−1(γi−2νγi2)2≤6R2σ∗2∑i=0m−1γi2subscript𝑟𝑚6superscript𝑅2superscriptsubscript𝜎2superscriptsubscript𝑖0𝑚1superscriptsubscript𝛾𝑖2𝜈superscriptsubscript𝛾𝑖226superscript𝑅2superscriptsubscript𝜎2superscriptsubscript𝑖0𝑚1subscriptsuperscript𝛾2𝑖r_{m}=6R^{2}\sigma_{*}^{2}\sum_{i=0}^{m-1}(\gamma_{i}-2\nu\gamma_{i}^{2})^{2}\leq{6}R^{2}\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma^{2}_{i}. , 2 =
we conclude that there is $\Omega^{(3)}_{m}$ with $\mathrm{Prob}(\Omega^{(3)}_{m})\geq 1-e^{-t}$ such that for all $\omega^{m}\in\Omega^{(3)}_{m}$
\[
r^{(3)}_{m}\leq 2\sqrt{3tR^{2}\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}}\leq 3tR^{2}+3\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}.
\tag{33}
\]
Next, again by (6), due to the Jensen inequality, $\mathbf{E}_{i-1}\{\tau_{i}\}\leq\sigma_{*}^{2}$, and
\[
\mathbf{E}_{i-1}\big\{\exp\big(t\|\nabla G(x_{*},\omega_{i})\|_{*}\big)\big\}\leq\exp\big(t\,\mathbf{E}_{i-1}\{\|\nabla G(x_{*},\omega_{i})\|_{*}\}+\tfrac{3}{4}t^{2}\sigma_{*}^{2}\big)\leq\exp\big(t\sigma_{*}+\tfrac{3}{4}t^{2}\sigma_{*}^{2}\big).
\]
Thus, when setting
\[
\mu_{i}=\gamma_{i-1}\sigma_{*},\qquad s_{i}^{2}=\tfrac{3}{2}\gamma_{i-1}^{2}\sigma_{*}^{2},\qquad \overline{s}=\max_{i}s_{i},
\]
$M_{m}=r^{(1)}_{m}$, and $v_{m}+h_{m}=\tfrac{21}{4}\sigma_{*}^{4}\sum_{i=0}^{m-1}\gamma_{i}^{4}$,
and applying the bound (37b) of Lemma A.1 we obtain
\[
r^{(1)}_{m}\leq 3\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}+\underbrace{\sqrt{21t\sigma_{*}^{4}\sum_{i=0}^{m-1}\gamma_{i}^{4}}}_{=:\Delta^{(1)}_{m}}+3t\overline{\gamma}^{2}\sigma_{*}^{2}
\]
for $\overline{\gamma}=\max_{i}\gamma_{i}$ and $\omega^{m}\in\Omega^{(1)}_{m}$, where $\Omega^{(1)}_{m}$ is of probability at least $1-e^{-t}$. Because
\[
\overline{\gamma}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}\geq\sum_{i=0}^{m-1}\gamma_{i}^{4},
\]
whenever $\sqrt{21t\sigma_{*}^{4}\sum_{i=0}^{m-1}\gamma_{i}^{4}}\geq\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}$, one has $21t\overline{\gamma}^{2}\geq\sum_{i=0}^{m-1}\gamma_{i}^{2}$ and
\[
21t\sum_{i=0}^{m-1}\gamma_{i}^{4}\leq 21t\overline{\gamma}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}\leq(21t\overline{\gamma}^{2})^{2}.
\]
Thus,
\[
\Delta^{(1)}_{m}\leq\min\Big[21t\sigma_{*}^{2}\overline{\gamma}^{2},\,\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}\Big]\leq 21t\sigma_{*}^{2}\overline{\gamma}^{2}+\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2},
\]
and
\[
r^{(1)}_{m}\leq\sigma_{*}^{2}\Big[4\sum_{i=0}^{m-1}\gamma_{i}^{2}+24t\overline{\gamma}^{2}\Big]
\tag{34}
\]
for $\omega^{m}\in\Omega^{(1)}_{m}$.
Finally, by the Lipschitz continuity of $\nabla G$ (cf. (30)), when taking the expectation w.r.t. the distribution of $\omega_{i}$ we get
\begin{align*}
\mathbf{E}_{i-1}\{\upsilon_{i}^{2}\}&\leq 4R^{2}\,\mathbf{E}_{i-1}\big\{\|\nabla G(x_{i-1},\omega_{i})-\nabla G(x_{*},\omega_{i})\|_{*}^{2}\big\}\\
&\leq 4R^{2}\nu\,\mathbf{E}_{i-1}\big\{\langle\nabla G(x_{i-1},\omega_{i})-\nabla G(x_{*},\omega_{i}),x_{i-1}-x_{*}\rangle\big\}=4R^{2}\nu\langle\nabla g(x_{i-1}),x_{i-1}-x_{*}\rangle.
\end{align*}
On the other hand, one also has $|\upsilon_{i}|\leq 2\nu\|x_{i-1}-x_{i}\|^{2}\leq 8\nu R^{2}$. We can now apply Lemma A.2 with $\sigma_{i}^{2}=4\gamma_{i-1}^{2}R^{2}\nu\langle\nabla g(x_{i-1}),x_{i-1}-x_{*}\rangle$ to conclude that for $t\geq 2\sqrt{2+\ln m}$
\[
r_{m}^{(2)}\leq 4\underbrace{\sqrt{tR^{2}\nu\sum_{i=0}^{m-1}\gamma_{i}^{2}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle}}_{=:\Delta^{(2)}_{m}}+16t\nu R^{2}\overline{\gamma}
\]
for all $\omega^{m}\in\Omega^{(2)}_{m}$ such that $\mathrm{Prob}(\Omega^{(2)}_{m})\geq 1-2e^{-t}$. Note that
\[
\Delta_{m}^{(2)}\leq 2tR^{2}+\tfrac{1}{4}\nu\sum_{i=0}^{m-1}\gamma_{i}^{2}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle,
\]
and $\gamma_{i}\leq(4\nu)^{-1}$, so that
\[
r_{m}^{(2)}\leq\nu\sum_{i=0}^{m-1}\gamma_{i}^{2}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle+12tR^{2}\leq\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_{i}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle+12tR^{2}
\tag{35}
\]
for $\omega^{m}\in\Omega^{(2)}_{m}$.
3o.
When substituting bounds (33)–(35) into (32) we obtain
\begin{align*}
R_{m}&\leq\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_{i}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle+12tR^{2}+\sigma_{*}^{2}\Big[4\sum_{i=0}^{m-1}\gamma_{i}^{2}+24t\overline{\gamma}^{2}\Big]+2\sqrt{3tR^{2}\sigma_{*}^{2}\sum_{i=0}^{m-1}\gamma_{i}^{2}}\\
&\leq\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_{i}\langle\nabla g(x_{i}),x_{i}-x_{*}\rangle+15tR^{2}+\sigma_{*}^{2}\Big[7\sum_{i=0}^{m-1}\gamma_{i}^{2}+24t\overline{\gamma}^{2}\Big]
\end{align*}
for all $\omega^{m}\in\overline{\Omega}_{m}=\bigcap_{i=1}^{3}\Omega^{(i)}_{m}$ with $\mathrm{Prob}(\overline{\Omega}_{m})\geq 1-4e^{-t}$ and $t\geq 2\sqrt{2+\ln m}$.
When substituting the latter bound into (31) and utilizing the convexity of $g$ and $h$, we arrive at
\begin{align*}
\Big(\sum_{i=0}^{m-1}\gamma_{i}\Big)\Big(\tfrac{1}{2}[g(\widehat{x}_{m})-g(x_{*})]+[h(\widehat{x}_{m})-h(x_{*})]\Big)&\leq\sum_{i=0}^{m-1}\gamma_{i}\Big(\tfrac{1}{2}[g(x_{i})-g(x_{*})]+[h(x_{i})-h(x_{*})]\Big)\\
&\leq\sum_{i=1}^{m}\gamma_{i-1}\Big(\tfrac{1}{2}\langle\nabla g(x_{i-1}),x_{i-1}-x_{*}\rangle+[h(x_{i-1})-h(x_{*})]\Big)\\
&\leq V(x_{0},x_{*})+15tR^{2}+\sigma_{*}^{2}\Big[7\sum_{i=0}^{m-1}\gamma_{i}^{2}+24t\overline{\gamma}^{2}\Big]\\
&\qquad+\gamma_{0}[h(x_{0})-h(x_{*})]-\gamma_{m}[h(x_{m})-h(x_{*})].
\end{align*}
In particular, for constant stepsizes $\gamma_{i}\equiv\gamma$ we get
\[
\tfrac{1}{2}[g(\widehat{x}_{m})-g(x_{*})]+[h(\widehat{x}_{m})-h(x_{*})]\leq\frac{V(x_{0},x_{*})+15tR^{2}}{\gamma m}+\frac{h(x_{0})-h(x_{m})}{m}+\gamma\sigma_{*}^{2}\Big(7+\frac{24t}{m}\Big).
\]
This implies the first statement of the proposition.
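As a side remark (this balancing is not stated in the source at this point, but is the standard way to read the constant-stepsize bound): ignoring the $h$- and $t/m$-terms, the right-hand side above is of order $V(x_{0},x_{*})/(\gamma m)+7\gamma\sigma_{*}^{2}$, and a sketch of the trade-off, under the constraint $\gamma\leq(4\nu)^{-1}$ used earlier in the proof, is

```latex
\gamma=\min\Big\{\frac{1}{4\nu},\ \sqrt{\frac{V(x_{0},x_{*})}{7m\sigma_{*}^{2}}}\Big\}
\quad\Longrightarrow\quad
\frac{V(x_{0},x_{*})}{\gamma m}+7\gamma\sigma_{*}^{2}
\;\lesssim\;\frac{4\nu V(x_{0},x_{*})}{m}+2\sigma_{*}\sqrt{\frac{7\,V(x_{0},x_{*})}{m}},
```

i.e., the usual $O(1/m)$ optimization term plus an $O(\sigma_{*}/\sqrt{m})$ stochastic term.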
5o.
To prove the bound for the minibatch solution $\widehat{x}_{m}^{(L)}=\big(\sum_{i=0}^{m-1}\gamma_{i}\big)^{-1}\sum_{i=0}^{m-1}\gamma_{i}x_{i}^{(L)}$, it suffices to note that the minibatch gradient observation $H(x,\omega^{(L)})$ is Lipschitz continuous with Lipschitz constant $\nu$, and that $H(x_{*},\omega^{(L)})$ is sub-Gaussian with parameter $\sigma_{*}^{2}$ replaced with $\overline{\sigma}^{2}_{*,L}\lesssim\Theta\sigma_{*}^{2}/L$; see Lemma A.3. $\Box$
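The variance-reduction effect invoked here, i.e. the sub-Gaussian parameter of an $L$-sample average shrinking roughly like $1/L$, can be sanity-checked numerically. The sketch below uses i.i.d. Gaussian noise as a stand-in for the gradient observation noise; it is an illustration only, not the paper's $H(\cdot,\cdot)$:

```python
import numpy as np

# Averaging L i.i.d. noise terms shrinks the noise variance by a factor L:
# Var(mean of L samples) = sigma^2 / L, mirroring sigma_bar_{*,L}^2 ~ sigma_*^2 / L.
rng = np.random.default_rng(2)
sigma, L, trials = 3.0, 64, 100_000

noise = rng.normal(0.0, sigma, size=(trials, L))
batch_mean = noise.mean(axis=1)        # minibatch average per trial
var_single = noise[:, 0].var()         # variance of a single observation
var_batch = batch_mean.var()           # variance of the L-sample average

print(var_single, var_batch, sigma**2 / L)
```

With the seed above, `var_batch` matches $\sigma^{2}/L$ up to Monte-Carlo error, while `var_single` stays near $\sigma^{2}$.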
A.2 Deviation inequalities
Let us assume that $(\xi_{i},\mathcal{F}_{i})_{i=1,2,\dots}$ is a sequence of sub-Gaussian random variables satisfying (here, same as above, $\mathbf{E}_{i-1}$ denotes the expectation conditional on $\mathcal{F}_{i-1}$)
\[
\mathbf{E}_{i-1}\big\{e^{t\xi_{i}}\big\}\leq e^{t\mu_{i}+t^{2}s_{i}^{2}/2}\quad\text{a.s.}
\tag{36}
\]
for some nonrandom $\mu_{i}$, $s_{i}$ with $s_{i}\leq\overline{s}$.
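For illustration (this check is not part of the statement), a Gaussian variable satisfies (36) with equality, which the following Monte-Carlo sketch verifies numerically; the values of $\mu$, $s$, $t$ are arbitrary choices:

```python
import numpy as np

# Monte-Carlo check of the sub-Gaussian moment bound (36) for a Gaussian:
# for xi ~ N(mu, s^2), E{exp(t*xi)} equals exp(t*mu + t^2 s^2 / 2) exactly.
rng = np.random.default_rng(0)
mu, s, t = 1.0, 2.0, 0.5                     # arbitrary illustration values
xi = rng.normal(mu, s, size=1_000_000)

empirical = np.exp(t * xi).mean()            # Monte-Carlo estimate of E{exp(t*xi)}
bound = np.exp(t * mu + t**2 * s**2 / 2)     # right-hand side of (36)

print(empirical, bound)
```

For these values both numbers are close to $e\approx 2.718$, with the gap of the order of the Monte-Carlo error.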
We denote $S_{n}=\sum_{i=1}^{n}(\xi_{i}-\mu_{i})$, $r_{n}=\sum_{i=1}^{n}s_{i}^{2}$, $v_{n}=\sum_{i=1}^{n}s_{i}^{4}$, $M_{n}=\sum_{i=1}^{n}\big[\xi_{i}^{2}-(s_{i}^{2}+\mu_{i}^{2})\big]$, and $h_{n}=\sum_{i=1}^{n}2\mu_{i}^{2}s_{i}^{2}$. The following well-known result is provided for the reader's convenience.
Lemma A.1.
For all $x>0$ one has
\begin{align}
\mathrm{Prob}\big\{S_{n}\geq\sqrt{2xr_{n}}\big\}&\leq e^{-x},\tag{37a}\\
\mathrm{Prob}\big\{M_{n}\geq 2\sqrt{x(v_{n}+h_{n})}+2x\overline{s}^{2}\big\}&\leq e^{-x}.\tag{37b}
\end{align}
Proof.
The inequality (37a) is straightforward. To prove (37b), note that for $t<\tfrac{1}{2}\overline{s}^{-2}$ and $\eta\sim\mathcal{N}(0,1)$ independent of $\xi_{0},\dots,\xi_{n}$, we have
\begin{align*}
\mathbf{E}_{i-1}\big\{e^{t\xi_{i}^{2}}\big\}&=\mathbf{E}_{i-1}\big\{\mathbf{E}_{\eta}\big\{e^{\sqrt{2t}\xi_{i}\eta}\big\}\big\}=\mathbf{E}_{\eta}\big\{\mathbf{E}_{i-1}\big\{e^{\sqrt{2t}\xi_{i}\eta}\big\}\big\}\\
&\leq\mathbf{E}_{\eta}\big\{\exp\big(\sqrt{2t}\eta\mu_{i}+t\eta^{2}s_{i}^{2}\big)\big\}=(1-2ts_{i}^{2})^{-1/2}\exp\Big(\frac{t\mu_{i}^{2}}{1-2ts_{i}^{2}}\Big)\quad\text{a.s.},
\end{align*}
and because, cf. [31, Lemma 1],
\[
-\tfrac{1}{2}\ln(1-2ts_{i}^{2})+\frac{t\mu_{i}^{2}}{1-2ts_{i}^{2}}-t(s_{i}^{2}+\mu_{i}^{2})\leq\frac{t^{2}s_{i}^{2}(s_{i}^{2}+2\mu_{i}^{2})}{1-2ts_{i}^{2}}\leq\frac{t^{2}s_{i}^{2}(s_{i}^{2}+2\mu_{i}^{2})}{1-2t\overline{s}^{2}},
\]
one has, for $t<\tfrac{1}{2}\overline{s}^{-2}$,
\[
\mathbf{E}\big\{e^{tM_{n}}\big\}\leq\exp\Big(\frac{t^{2}(v_{n}+h_{n})}{1-2t\overline{s}^{2}}\Big).
\]
By Lemma 8 of [6], this implies that
\[
\mathrm{Prob}\big\{M_{n}\geq 2\sqrt{x(v_{n}+h_{n})}+2x\overline{s}^{2}\big\}\leq e^{-x}
\]
for all $x>0$. $\Box$
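As a quick numerical sanity check (illustration only, not part of the proof), one can verify (37b) by Monte-Carlo for Gaussian $\xi_{i}\sim\mathcal{N}(\mu_{i},s_{i}^{2})$, which satisfy (36) with equality; the parameter choices below are arbitrary:

```python
import numpy as np

# Monte-Carlo check of (37b) for Gaussian xi_i ~ N(mu_i, s_i^2):
# Prob{ M_n >= 2*sqrt(x*(v_n + h_n)) + 2*x*s_bar^2 } <= exp(-x),
# where M_n = sum_i [xi_i^2 - (s_i^2 + mu_i^2)].
rng = np.random.default_rng(3)
n, trials, x = 20, 200_000, 2.0
mu = rng.uniform(-1, 1, size=n)       # nonrandom means (arbitrary)
s = rng.uniform(0.5, 2.0, size=n)     # nonrandom scales (arbitrary)
s_bar = s.max()
v_n = np.sum(s**4)
h_n = np.sum(2 * mu**2 * s**2)

xi = rng.normal(mu, s, size=(trials, n))
M_n = (xi**2 - (s**2 + mu**2)).sum(axis=1)
threshold = 2 * np.sqrt(x * (v_n + h_n)) + 2 * x * s_bar**2
tail = np.mean(M_n >= threshold)      # empirical tail probability

print(tail, np.exp(-x))
```

The empirical tail stays well below $e^{-x}\approx 0.135$, as the lemma predicts.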
Now, suppose that $\zeta_{i}$, $i=1,2,\dots$, is a sequence of random variables satisfying
\[
\mathbf{E}_{i-1}\{\zeta_{i}\}=\mu_{i},\quad\mathbf{E}_{i-1}\{\zeta_{i}^{2}\}\leq\sigma_{i}^{2},\quad|\zeta_{i}|\leq 1\quad\text{a.s.}
\tag{38}
\]
| 1,825
|
Stochastic Mirror Descent for Large-Scale Sparse Recovery

Appendix A Proofs

A.2 Deviation inequalities

Proof. The inequality (37a) is straightforward. To prove (37b), note that for $t<\frac{1}{2}\overline{s}^{-2}$ and $\eta\sim\mathcal{N}(0,1)$ independent of $\xi_0,\dots,\xi_n$, we have
\begin{align*}
{\mathbf{E}}_{i-1}\big\{e^{t\xi_i^2}\big\}
&={\mathbf{E}}_{i-1}\big\{{\mathbf{E}}_{\eta}\big\{e^{\sqrt{2t}\,\xi_i\eta}\big\}\big\}
={\mathbf{E}}_{\eta}\big\{{\mathbf{E}}_{i-1}\big\{e^{\sqrt{2t}\,\xi_i\eta}\big\}\big\}\\
&\leq{\mathbf{E}}_{\eta}\big\{\exp\big\{\sqrt{2t}\,\eta\mu_i+t\eta^2s_i^2\big\}\big\}
=(1-2ts_i^2)^{-1/2}\exp\Big\{\frac{t\mu_i^2}{1-2ts_i^2}\Big\}\quad\text{a.s.},
\end{align*}
and because, cf. [31, Lemma 1],
\[
-\tfrac{1}{2}\ln(1-2ts_i^2)+\frac{t\mu_i^2}{1-2ts_i^2}-t(s_i^2+\mu_i^2)
\leq\frac{t^2s_i^2(s_i^2+2\mu_i^2)}{1-2ts_i^2}
\leq\frac{t^2s_i^2(s_i^2+2\mu_i^2)}{1-2t\overline{s}^2},
\]
one has, for $t<\tfrac{1}{2}\overline{s}^{-2}$,
\[
{\mathbf{E}}\big\{e^{tM_n}\big\}\leq\exp\Big\{\frac{t^2(v_n+h_n)}{1-2t\overline{s}^2}\Big\}.
\]
By Lemma 8 of [6], this implies that
\[
\hbox{Prob}\Big\{M_n\geq 2\sqrt{x(v_n+h_n)}+2x\overline{s}^2\Big\}\leq e^{-x}
\]
for all $x>0$. $\Box$
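The Gaussian linearization used at the start of this proof rests on the identity ${\mathbf{E}}_\eta\{e^{a\eta}\}=e^{a^2/2}$ for $\eta\sim\mathcal{N}(0,1)$, so that ${\mathbf{E}}_\eta\{e^{\sqrt{2t}\,\xi\eta}\}=e^{t\xi^2}$. A quick stdlib-only numerical check of this identity (the grid size, integration span, and test values are arbitrary choices, not from the paper):

```python
import math

def gauss_mgf(a, intervals=4000, span=10.0):
    # Trapezoidal approximation of E[exp(a*eta)] for eta ~ N(0,1),
    # integrating exp(a*x) * phi(x) over [-span, span].
    h = 2.0 * span / intervals
    total = 0.0
    for i in range(intervals + 1):
        x = -span + i * h
        w = 0.5 if i in (0, intervals) else 1.0
        total += w * math.exp(a * x - 0.5 * x * x)
    return total * h / math.sqrt(2.0 * math.pi)

# E_eta{exp(sqrt(2t) * xi * eta)} should equal exp(t * xi^2).
for t, xi in [(0.1, 1.0), (0.25, 2.0), (0.05, 3.0)]:
    lhs = gauss_mgf(math.sqrt(2.0 * t) * xi)
    rhs = math.exp(t * xi * xi)
    assert abs(lhs - rhs) / rhs < 1e-6, (t, xi, lhs, rhs)
print("Gaussian linearization identity verified")
```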
Now, suppose that $\zeta_i$, $i=1,2,\dots$, is a sequence of random variables satisfying
\[
{\mathbf{E}}_{i-1}\{\zeta_i\}=\mu_i,\quad {\mathbf{E}}_{i-1}\{\zeta_i^2\}\leq\sigma_i^2,\quad |\zeta_i|\leq 1\;\;\mathrm{a.s.}
\tag{38}
\]
Denote $M_n=\sum_{i=1}^n[\zeta_i-\mu_i]$ and $q_n=\sum_{i=1}^n\sigma_i^2$. Note that $q_n\leq n$.
Lemma A.2

Let $x\geq 1$; one has
\[
\hbox{Prob}\big\{M_n\geq\sqrt{2xq_n}+x\big\}\leq\Big[e\Big(2x\ln\Big[\frac{9n}{2x}\Big]+1\Big)+1\Big]e^{-x}.
\]
In particular, for $x\geq 4\sqrt{2+\ln n}$ one has
\[
\hbox{Prob}\big\{M_n\geq\sqrt{2xq_n}+x\big\}\leq 2e^{-x/2}.
\]

Proof. In the premise of the lemma, applying Bernstein's inequality for martingales [4, 15] we obtain, for all $x>0$ and $u>0$,
\[
\hbox{Prob}\Big\{M_n\geq\sqrt{2xu}+\frac{x}{3},\;q_n\leq u\Big\}\leq e^{-x}.
\]
We conclude that
\[
\hbox{Prob}\Big\{M_n\geq x,\;q_n\leq\frac{2x}{9}\Big\}\leq e^{-x},
\]
and for any $u>0$
\[
\hbox{Prob}\Big\{M_n\geq\sqrt{2(x+1)q_n}+\frac{x}{3},\;u\leq q_n\leq u(1+1/x)\Big\}\leq e^{-x},
\]
so that
\[
\delta_n(x;u):=\hbox{Prob}\Big\{M_n\geq\sqrt{2xq_n}+\frac{x}{3},\;u\leq q_n\leq u(1+1/x)\Big\}\leq e^{-x+1}.
\]
Let now $u_0=2x/9$, $u_j=\min\{n,(1+1/x)^ju_0\}$, $j=0,\dots,J$, with
\[
J=\big\lfloor\ln[n/u_0]\,\ln^{-1}[1+1/x]\big\rfloor.
\]
Note that $\ln[1+1/x]\geq 1/(2x)$ for $x\geq 1$, so that
\[
J\leq\ln[n/u_0]\,\ln^{-1}[1+1/x]+1\leq 2x\ln[n/u_0]+1.
\]
On the other hand,
\begin{align*}
\hbox{Prob}\big\{M_n\geq\sqrt{2xq_n}+x\big\}
&\leq e^{-x}+\sum_{j=1}^J\delta_n(x;u_j)\leq e^{-x}+Je^{-x+1}\\
&\leq\Big[e\Big(2x\ln\Big[\frac{9n}{2x}\Big]+1\Big)+1\Big]e^{-x}.
\end{align*}
Finally, we verify explicitly that for $x\geq 4\sqrt{2+\ln n}$ one has
\[
\Big[e\Big(2x\ln\Big[\frac{9n}{2x}\Big]+1\Big)+1\Big]e^{-x/2}\leq 2,
\]
implying that for such $x$
\[
\hbox{Prob}\big\{M_n\geq\sqrt{2xq_n}+x\big\}\leq 2e^{-x/2}.\qquad\Box
\]
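The "explicit verification" invoked in the last step of this proof can be spot-checked numerically; a minimal stdlib-only sketch (the grid of $n$ values and the multiples of the threshold are arbitrary choices):

```python
import math

def explicit_bound(x, n):
    # [e(2x ln(9n/(2x)) + 1) + 1] * exp(-x/2): the left-hand side of the
    # explicit inequality to be verified for x >= 4*sqrt(2 + ln n).
    return (math.e * (2.0 * x * math.log(9.0 * n / (2.0 * x)) + 1.0) + 1.0) * math.exp(-x / 2.0)

for n in [2, 10, 100, 10**4, 10**6, 10**9]:
    x0 = 4.0 * math.sqrt(2.0 + math.log(n))   # threshold of the lemma
    for x in [x0, 1.5 * x0, 2.0 * x0, 5.0 * x0]:
        assert explicit_bound(x, n) <= 2.0, (n, x, explicit_bound(x, n))
print("explicit bound <= 2 on the tested grid")
```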
Let $(\xi_i)_{i=1,\dots}$ be a sequence of independent random vectors in ${\mathbf{R}}^n$ such that
\[
{\mathbf{E}}_{i-1}\Big\{\exp\Big(\frac{\|\xi_i\|_*^2}{s^2}\Big)\Big\}\leq\exp(1),
\]
and let $\eta=\sum_{i=1}^m\xi_i$, $m\in{\mathbf{Z}}_+$. We are interested in the "sub-Gaussian characteristics" of the random variables $\zeta={\langle}u,\eta{\rangle}$, for some $u\in{\mathbf{R}}^n$ with $\|u\|\leq R$, and $\tau=\|\eta\|_*$.

Because ${\mathbf{E}}\{{\langle}u,\xi_i{\rangle}\}=0$ and $|{\langle}u,\xi_i{\rangle}|\leq\|u\|\,\|\xi_i\|_*$, for all $t$ one has (cf., e.g., Proposition 4.2 of [28])
\[
{\mathbf{E}}\big\{e^{t{\langle}u,\eta{\rangle}}\big\}=\prod_{i=1}^m{\mathbf{E}}\big\{e^{t{\langle}u,\xi_i{\rangle}}\big\}\leq\prod_{i=1}^m\exp\big(\tfrac{3}{4}t^2s^2\big)=\exp\big(\tfrac{3}{4}mt^2s^2\big).
\]
Let $\xi_\ell$, $\ell=1,2,\dots$, be a sequence of independent random vectors $\xi_\ell\in E$ such that ${\mathbf{E}}\{\xi_\ell\}=0$ and ${\mathbf{E}}\big\{e^{\|\xi_\ell\|_*^2/s^2}\big\}\leq\exp(1)$. Denote $\eta_j=\sum_{\ell=1}^j\xi_\ell$. We have the following result.
Lemma A.3

\[
\forall L\in{\mathbf{Z}}_+\quad{\mathbf{E}}\Big\{\exp\Big(\frac{\|\eta_L\|_*^2}{10\Theta s^2L}\Big)\Big\}\leq\exp(1),
\tag{39}
\]
where $\Theta=\max_{\|z\|\leq 1}\theta(z)$ for the d.-g.f. $\theta$ of the unit ball of the norm $\|\cdot\|$ in $E$, as defined in Section 2.2.

Proof. For $\eta\in E$, let $\pi(\eta)=\sup_{\|z\|\leq 1}[{\langle}\eta,z{\rangle}-\theta(z)]$. Observe that for all $\beta>0$,
\[
\|\eta_L\|_*=\sup_{\|z\|\leq 1}{\langle}\eta_L,z{\rangle}\leq\max_{\|z\|\leq 1}\beta\theta(z)+\beta\pi(\eta_L/\beta)\leq\beta\Theta+\beta\pi\Big(\frac{\eta_L}{\beta}\Big).
\tag{40}
\]
On the other hand, we know (cf. [38, Lemma 1]) that $\pi$ is smooth with $\|\nabla\pi\|\leq 1$, and $\nabla\pi$ is Lipschitz-continuous w.r.t. $\|\cdot\|_*$, i.e.,
\[
\|\nabla\pi(z)-\nabla\pi(z')\|\leq\|z-z'\|_*\quad\forall z,z'\in E.
\]
As a consequence of the Lipschitz continuity of $\pi$, denoting $\pi_\beta(\eta)=\beta\pi\big(\frac{\eta}{\beta}\big)$, we have
\[
\pi_\beta(\eta_{j-1}+\xi_j)-\pi_\beta(\eta_{j-1})\leq\|\xi_j\|_*,
\]
so that ${\mathbf{E}}\big\{\exp\big([\pi_\beta(\eta_j)-\pi_\beta(\eta_{j-1})]^2/s^2\big)\big\}\leq\exp(1)$. Furthermore,
\[
\pi_\beta(\eta_{j-1}+\xi_j)\leq\pi_\beta(\eta_{j-1})+{\langle}\nabla\pi_\beta(\eta_{j-1}),\xi_j/\beta{\rangle}+\|\xi_j\|_*^2/\beta,
\]
and, because $\eta_{j-1}$ does not depend on $\xi_j$ and ${\mathbf{E}}\{\|\xi_j\|_*^2\}\leq s^2$, we get
\[
{\mathbf{E}}_{j-1}\{\pi_\beta(\eta_j)-\pi_\beta(\eta_{j-1})\}\leq s^2/\beta.
\]
By [28, Proposition 4.2] we conclude that the random variables $\delta_j=\pi_\beta(\eta_j)-\pi_\beta(\eta_{j-1})$ satisfy, for all $t\geq 0$,
\[
{\mathbf{E}}_{j-1}\big\{e^{t\delta_j}\big\}\leq\exp\big(ts^2\beta^{-1}+\tfrac{3}{4}t^2s^2\big).
\]
Consequently,
\[
{\mathbf{E}}\big\{e^{t\pi_\beta(\eta_L)}\big\}\leq{\mathbf{E}}\big\{e^{t\pi_\beta(\eta_{L-1})}\big\}\exp\big(ts^2\beta^{-1}+\tfrac{3}{4}t^2s^2\big)\leq\exp\big(ts^2L\beta^{-1}+\tfrac{3}{4}t^2s^2L\big).
\]
Substituting the latter bound into (40), we obtain for $\beta^2=s^2L/\Theta$
\[
{\mathbf{E}}\big\{e^{t\|\eta_L\|_*}\big\}\leq\exp\big(ts\sqrt{\Theta L}+\tfrac{3}{4}t^2s^2L\big)\quad\forall t\geq 0.
\tag{41}
\]
To complete the proof of the lemma, it remains to show that (41) implies (39). This is straightforward. Indeed, for $\chi\sim{\cal N}(0,1)$, $\alpha>0$ and $\zeta=\|\eta_L\|_*$ one has
\begin{align*}
{\mathbf{E}}\big\{e^{\alpha\zeta^2}\big\}
&={\mathbf{E}}\big\{{\mathbf{E}}_\chi\big(e^{\sqrt{2\alpha}\zeta\chi}\big)\big\}={\mathbf{E}}_\chi\big\{{\mathbf{E}}\big\{e^{\sqrt{2\alpha}\zeta\chi}\big\}\big\}\\
&\leq{\mathbf{E}}_\chi\Big\{\exp\Big(\sqrt{2\alpha\Theta L}\,s\chi+\tfrac{3}{2}\alpha Ls^2\chi^2\Big)\Big\}=(1-3\alpha Ls^2)^{-1/2}\exp\Big\{\frac{\alpha\Theta Ls^2}{1-3\alpha Ls^2}\Big\}.
\end{align*}
Setting $\alpha=(10\Theta s^2L)^{-1}$, we conclude that
\[
{\mathbf{E}}\big\{e^{\alpha\zeta^2}\big\}\leq\exp(1)
\]
due to $\Theta\geq 1/2$. $\Box$
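The last step can be double-checked numerically: with $\alpha=(10\Theta s^2L)^{-1}$ the product $\alpha Ls^2$ equals $1/(10\Theta)$, so the quantity $(1-3\alpha Ls^2)^{-1/2}\exp\{\alpha\Theta Ls^2/(1-3\alpha Ls^2)\}$ depends on $\Theta$ alone. A minimal sketch (the grid of $\Theta$ values is an arbitrary choice):

```python
import math

def mgf_bound(theta):
    # With alpha = (10 * Theta * s^2 * L)^{-1} we have alpha*L*s^2 = 1/(10*Theta);
    # evaluate (1 - 3*alpha*L*s^2)^{-1/2} * exp(alpha*Theta*L*s^2 / (1 - 3*alpha*L*s^2)).
    u = 1.0 / (10.0 * theta)          # u = alpha * L * s^2
    return (1.0 - 3.0 * u) ** (-0.5) * math.exp(theta * u / (1.0 - 3.0 * u))

for theta in [0.5, 0.75, 1.0, 2.0, 10.0, 100.0]:
    assert mgf_bound(theta) <= math.e, (theta, mgf_bound(theta))
print("bound <= exp(1) for all tested Theta >= 1/2")
```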
A.3 Proof of Theorem 2.1

We start with analysing the behaviour of the approximate solution $\widehat{x}^k_{m_0}$ at the stages of the preliminary phase of the procedure.
Lemma A.4

Let $m_0=\lceil 64\delta^2\rho\nu s(4\Theta+60t)\rceil$ (here $\lceil a\rceil$ stands for the smallest integer greater than or equal to $a$), $\gamma=(4\nu)^{-1}$, and let $t$ satisfy $t\geq 4\sqrt{2+\log(m_0)}$. Suppose that $R\geq 2\delta\sigma_*\sqrt{6\rho s/\nu}$, that the initial condition $x_0$ of Algorithms 1 and 2 satisfies $\|x_0-x_*\|\leq R$, and that at stage $k$ of the preliminary phase we choose
\[
\kappa_k=R_{k-1}\sqrt{\frac{\nu(4\Theta+60t)}{\rho sm_0}},
\tag{42}
\]
where $(R_k)_{k\geq 0}$ is defined recursively:
\[
R_{k+1}=\tfrac{1}{2}R_k+\frac{16\sigma_*^2\delta^2\rho s}{\nu R_k},\quad R_0=R.
\]
Then the approximate solution $\widehat{x}^k_{m_0}$ at the end of the $k$th stage of the CSMD-SR algorithm satisfies, with probability $\geq 1-4ke^{-t}$,
\[
\|\widehat{x}^k_{m_0}-x_*\|\leq R_k\leq 2^{-k}R+4\sigma_*\delta\sqrt{2\rho s/\nu}.
\tag{43}
\]
In particular, the estimate $\widehat{x}^{\overline{K}_1}_{m_0}$ after $\overline{K}_1=\Big\lceil\tfrac{1}{2}\log_2\Big(\frac{R^2\nu}{32\sigma_*^2\delta^2\rho s}\Big)\Big\rceil$ stages satisfies, with probability at least $1-4\overline{K}_1e^{-t}$,
\[
\|\widehat{x}^{\overline{K}_1}_{m_0}-x_*\|\leq 8\sigma_*\delta\sqrt{2\rho s/\nu}.
\tag{44}
\]
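Before turning to the proof, the closed-form bound (43) on the recursion for $R_k$ can be sanity-checked numerically. A minimal sketch with arbitrary illustrative parameter values (none taken from the paper):

```python
import math

# Arbitrary illustrative parameter values (not from the paper).
sigma, delta, rho, s, nu = 1.0, 1.0, 2.0, 3.0, 0.5
c = 16.0 * sigma**2 * delta**2 * rho * s / nu               # constant in the recursion
R = 2.0 * delta * sigma * math.sqrt(6.0 * rho * s / nu)     # smallest admissible R
tail = 4.0 * sigma * delta * math.sqrt(2.0 * rho * s / nu)  # additive term in (43)

Rk = R
for k in range(1, 30):
    Rk = Rk / 2.0 + c / Rk  # R_{k+1} = R_k/2 + 16*sigma_*^2*delta^2*rho*s/(nu*R_k)
    assert Rk <= 2.0**(-k) * R + tail + 1e-12, (k, Rk)
print("R_k <= 2^{-k} R + 4*sigma*delta*sqrt(2*rho*s/nu) for all tested k")
```

By the AM-GM inequality the recursion map $r\mapsto r/2+c/r$ is bounded below by its fixed point $\sqrt{2c}=4\sigma_*\delta\sqrt{2\rho s/\nu}$, which is why the iterates settle at exactly the additive term of (43).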
Proof of the lemma.

1$^o$. Note that the initial point $x_0$ satisfies $x_0\in X_R(x_*)$. Suppose that the initial point $x^k_0=\widehat{x}^{k-1}_{m_0}$ of the $k$th stage of the method satisfies $x^k_0\in X_{R_{k-1}}(x_*)$ with probability $1-4(k-1)e^{-t}$. In other words, there is a set ${\cal B}_{k-1}\subset\Omega$, $\hbox{Prob}({\cal B}_{k-1})\geq 1-4(k-1)e^{-t}$, such that for all $\overline{\omega}^{k-1}=[\omega_1;\dots;\omega_{m_0(k-1)}]\in{\cal B}_{k-1}$ one has $x^k_0\in X_{R_{k-1}}(x_*)$. Let us show that upon termination of the $k$th stage, $\widehat{x}^k_{m_0}$ satisfies $\|\widehat{x}^k_{m_0}-x_*\|\leq R_k$ with probability $1-4ke^{-t}$.

By Proposition A.1 (with $h(x)=\kappa_k\|x\|$) we conclude that for some $\overline{\Omega}_k\subset\Omega$, $\hbox{Prob}(\overline{\Omega}_k)\geq 1-4e^{-t}$, the solution $\widehat{x}^k_{m_0}$ after $m_0$ iterations of the stage satisfies, for all $\omega^k=[\omega_{(k-1)m_0+1},\dots,\omega_{km_0}]\in\overline{\Omega}_k$,
\[
F(\widehat{x}^k_{m_0})-F(x_*)\leq\frac{1}{m_0}\big(\nu R^2_{k-1}(4\Theta+60t)+\kappa_kR_{k-1}\big)+\frac{\sigma_*^2}{\nu}\Big(\frac{7}{4}+\frac{6t}{m_0}\Big).
\]
When using the relationship (14) of Assumption [RSC] we now get
\[
\|\widehat{x}^k_{m_0}-x_*\|\leq\delta\left[\rho s\kappa_k+\frac{R_{k-1}}{m_0}+\frac{\nu R^2_{k-1}}{\kappa_km_0}\big(4\Theta+60t\big)+\frac{\sigma_*^2}{\nu\kappa_k}\Big(\frac{7}{4}+\frac{6t}{m_0}\Big)\right].
\tag{45}
\]
Note that $\kappa_k$ as defined in (42) satisfies $\kappa_k\leq R_{k-1}(8\delta\rho s)^{-1}$, while $\kappa_k m_0\geq 8\delta(4\Theta+60t)R_{k-1}\nu$. Because $m_0\geq 3840t$ (due to $\rho\nu\geq 1$ and $\delta\geq 1$), one also has
\[
\left(\tfrac{7}{4}+\tfrac{6t}{m_0}\right)\kappa_k^{-1}<16\delta\rho s/R_{k-1}.
\]
Substituting the above bounds into (45) we obtain
\[
\|{\widehat{x}}^{k}_{m_0}-x_*\|\leq \delta R_{k-1}\left(\tfrac{1}{4\delta}+\tfrac{1}{m_0}\right)+\frac{16\delta^2\rho s\sigma_*^2}{R_{k-1}\nu}\leq \tfrac{1}{2}R_{k-1}+\frac{16\delta^2\rho s\sigma_*^2}{R_{k-1}\nu}=R_k. \tag{46}
\]
We conclude that ${\widehat{x}}^{k}_{m_0}\in X_{R_k}(x_*)$ for all ${\overline{\omega}}^k\in{\cal B}_k={\cal B}_{k-1}\cap{\overline{\Omega}}_k$, and
\[
\hbox{Prob}({\cal B}_k)\geq \hbox{Prob}({\cal B}_{k-1})-\hbox{Prob}({\overline{\Omega}}_k^c)\geq 1-4ke^{-t}.
\]
$2^o$. Let now $a=16\delta^2\rho s\sigma_*^2/\nu$, and let us study the behaviour of the sequence
\[
R_k=\frac{R_{k-1}}{2}+\frac{a}{R_{k-1}}=:f(R_{k-1}),\quad R_0=R\geq\sqrt{2a}.
\]
The function $f$ admits a fixed point at $R=\sqrt{2a}$, which is also the minimum of $f$, so $R_k\geq\sqrt{2a}$ for all $k$. Thus,
\[
d_k:=R_k-\sqrt{2a}=\frac{R_{k-1}-\sqrt{2a}}{2}+\frac{2a-\sqrt{2a}\,R_{k-1}}{2R_{k-1}}\leq \tfrac{1}{2}d_{k-1}\leq 2^{-k}d_0\leq 2^{-k}(R-\sqrt{2a}).
\]
We deduce that $R_k\leq 2^{-k}R_0+\sqrt{2a}$, which is (43). Finally, after running $\overline{K}_1$ stages of the preliminary phase, the estimate ${\widehat{x}}^{\overline{K}_1}_{m_0}$ satisfies
\[
\|{\widehat{x}}_{m_0}^{\overline{K}_1}-x_*\|\leq 8\delta\sigma_*\sqrt{2\rho s/\nu}. \qquad\Box
\]
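The contraction argument above is easy to check numerically. Below is a minimal sketch (the function name and the values of $a$ and $R_0$ are illustrative, not from the paper) that iterates $R_k = R_{k-1}/2 + a/R_{k-1}$ and verifies that $d_k = R_k - \sqrt{2a}$ is at least halved at every stage, which yields the bound $R_k \le 2^{-k}R_0 + \sqrt{2a}$ of (43):

```python
import math

def run_stages(R0, a, K):
    """Iterate R_k = R_{k-1}/2 + a/R_{k-1} and return the trajectory."""
    R = [R0]
    for _ in range(K):
        R.append(R[-1] / 2 + a / R[-1])
    return R

a, R0 = 0.5, 10.0            # illustrative values with R0 >= sqrt(2a)
R = run_stages(R0, a, K=20)
fp = math.sqrt(2 * a)        # fixed point sqrt(2a) of the recursion

# d_k = R_k - sqrt(2a) is halved at each stage ...
for k in range(1, 21):
    assert R[k] - fp <= (R[k - 1] - fp) / 2 + 1e-12
# ... so R_k <= 2^{-k} R_0 + sqrt(2a), i.e. bound (43)
assert all(R[k] <= 2 ** -k * R0 + fp + 1e-12 for k in range(21))
```

Near the fixed point the map is in fact quadratically contracting ($f'(\sqrt{2a})=0$), so the trajectory reaches $\sqrt{2a}$ much faster than the worst-case halving used in the proof.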
We turn next to the analysis of the asymptotic phase of Algorithm 2. We assume that the preliminary phase of the algorithm has been completed.
Lemma A.5
Let $t$ be such that $t\geq 4\sqrt{2+\log m_1}$, with $m_1=\lceil 81\delta^2\rho s\nu(4\Theta+60t)\rceil$, $\gamma=(4\nu)^{-1}$, and let $\ell_k=\lceil 10\times 4^{k-1}\Theta\rceil$. We set
\[
\kappa_k=r_{k-1}\sqrt{\frac{\nu(4\Theta+60t)}{\rho s\,m_1}},\quad r_k=2^{-k}r_0,\quad r_0=8\delta\sigma_*\sqrt{2\rho s/\nu}.
\]
Then the approximate solution ${\widehat{x}}^{k}_{m_1}$ produced by Algorithm 2 at the end of the $k$th stage of the asymptotic phase satisfies, with probability $\geq 1-4(\overline{K}_1+k)e^{-t}$, $\|{\widehat{x}}^{k}_{m_1}-x_*\|\leq r_k$, implying that
\[
\|{\widehat{x}}^{k}_{m_1}-x_*\|\lesssim \delta^2\sigma_*\rho s\sqrt{\frac{\Theta(\Theta+t)}{N_k}}, \tag{47}
\]
where $N_k=m_1\sum_{i=1}^k\ell_i$ is the total number of oracle calls over $k$ asymptotic stages.
Proof of the lemma.
Upon termination of the preliminary phase, the initial point $x_0={\widehat{x}}_{m_0}^{\overline{K}_1}$ of the asymptotic phase satisfies (44) with probability at least $1-4\overline{K}_1e^{-t}$.
We are to show that for all $k\geq 1$, with probability at least $1-4(\overline{K}_1+k)e^{-t}$,
\[
\|{\widehat{x}}^{k}_{m_1}-x_*\|\leq r_k=2^{-k}r_0,\quad r_0=8\delta\sigma_*\sqrt{2\rho s/\nu}.
\]
The claim is obviously true for $k=0$. Let us suppose that it holds at stage $k-1\geq 0$, and prove that it also holds at stage $k$.
To this end we reproduce the argument used in the proof of Lemma A.4, taking into account that now $\ell_k$ observations are averaged at each iteration of the CSMD algorithm. Recall (cf. Lemma A.3) that this amounts to replacing the sub-Gaussian parameter $\sigma_*^2$ with ${\overline{\sigma}}_*^2=10\Theta\sigma_*^2/\ell_k$.
Applying the result of Proposition A.1 and the bound (14), we conclude (cf. (45)) that, with probability $1-(\overline{K}_1+k)e^{-t}$,
\[
\|{\widehat{x}}^{k}_{m_1}-x_*\|\leq \delta\left[\rho s\kappa_k+\frac{r_{k-1}}{m_1}+\frac{\nu r^2_{k-1}}{\kappa_k m_1}\left(4\Theta+60t\right)+\frac{10\Theta\sigma_*^2}{\nu\kappa_k\ell_k}\left(\tfrac{7}{4}+\tfrac{6t}{m_1}\right)\right].
\]
By simple algebra, we obtain the following analogue of (46):
\[
\|{\widehat{x}}^{k}_{m_1}-x_*\|<\delta r_{k-1}\left(\tfrac{2}{9\delta}+\tfrac{1}{m_1}\right)+10\,\frac{4^{-k+1}\delta^2\rho s\sigma_*^2}{r_{k-1}\nu}<\tfrac{r_{k-1}}{4}+\tfrac{r_{k-1}}{4}=r_k.
\]
Observe that upon completion of the $k$th stage we have used
\[
N_k=m_1\sum_{i=1}^k\ell_i<3m_1\Theta\sum_{j=1}^k 4^{j-1}\leq 4^k\Theta m_1
\]
observations of the asymptotic phase. As a consequence, $4^{-k}<\Theta m_1/N_k$ and
\[
r_k=2^{-k}r_0\lesssim \delta^2\sigma_*\sqrt{\frac{\Theta(\Theta+t)s\nu\rho}{N_k}}. \qquad\Box
\]
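The stage schedule of Lemma A.5 (halve the target radius $r_k$ while quadrupling the minibatch size $\ell_k$) is what makes the attained accuracy scale like $N_k^{-1/2}$ in the total number of oracle calls. A small numerical sketch with illustrative values of $\Theta$, $m_1$ and $r_0$ (the function name and parameter values are ours, not the paper's):

```python
import math

def asymptotic_schedule(theta, m1, r0, K):
    """Stage k uses minibatch size l_k = ceil(10 * 4**(k-1) * theta);
    the target radius halves per stage: r_k = 2**-k * r0."""
    N, rows = 0, []
    for k in range(1, K + 1):
        l_k = math.ceil(10 * 4 ** (k - 1) * theta)
        N += m1 * l_k                  # N_k = m1 * sum_i l_i  (oracle calls)
        rows.append((k, 2 ** -k * r0, N))
    return rows

rows = asymptotic_schedule(theta=2.0, m1=100, r0=1.0, K=8)
# r_k * sqrt(N_k) stays bounded: halving the radius costs ~4x more observations
vals = [r * math.sqrt(N) for _, r, N in rows]
assert max(vals) / min(vals) < 1.5
```

The boundedness of $r_k\sqrt{N_k}$ is exactly the $r_k\lesssim\sqrt{1/N_k}$ scaling used to deduce (47).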
Assuming that the preliminary phase of Algorithm 1 was completed, we now consider the asymptotic phase of the algorithm.
Lemma A.6
Let $t\geq 4\sqrt{2+\log m_k}$, $m_k=\left\lceil 4^{k+4}(4\Theta+60t)\delta^2\rho s\nu\right\rceil$,
\[
\gamma^k=\frac{r_{k-1}}{2\sigma_*}\sqrt{\frac{4\Theta+60t}{2m_k}},\quad \kappa_k^2=\frac{5\sigma_* r_{k-1}}{\rho s}\sqrt{\frac{4\Theta+60t}{m_k}}, \tag{48}
\]
where
\[
r_k:=2^{-k}r_0,\quad r_0=8\delta\sigma_*\sqrt{2\rho s/\nu}.
\]
Then the approximate solution ${\widehat{x}}^{k}_{m_k}$ upon termination of the $k$th asymptotic stage satisfies, with probability $\geq 1-4(\overline{K}_1+k)e^{-t}$,
\[
\|{\widehat{x}}_{m_k}^{k}-x_*\|\leq 2^{-k}r_0\lesssim 2^{-k}\sigma_*\delta\sqrt{\rho s\nu^{-1}}\lesssim \delta^2\sigma_*\rho s\sqrt{\frac{\Theta+t}{N_k}}, \tag{50}
\]
where $N_k=\sum_{j=1}^k m_j$ is the total iteration count of $k$ stages of the asymptotic phase.
Proof of the lemma.
We are to show that for all $k\geq 0$, $\|{\widehat{x}}^{k}_{m_k}-x_*\|\leq r_k$ holds with probability $\geq 1-4(\overline{K}_1+k)e^{-t}$. By Lemma A.4, the claim is true for $k=0$ (at the start of the asymptotic phase, the initial point $x_0={\widehat{x}}^{\overline{K}_1}_{m_0}$ satisfies the bound (44)).
We now assume it holds for $k-1\geq 0$; our objective is to carry out the recursive step $k-1\to k$ of the proof.
First, observe that the choice of $\gamma^k$ in (48) satisfies $\gamma^k\leq(4\nu)^{-1}$, $k=1,\ldots$, so that Proposition A.1 can be applied. From the result of the proposition and the bound (14) we conclude (cf. (45)) that, with probability $1-(\overline{K}_1+k)e^{-t}$,
\[
\|{\widehat{x}}^{k}_{m_k}-x_*\|\leq \delta\left[\rho s\kappa_k+\frac{r_{k-1}}{m_k}+\frac{r^2_{k-1}\left(4\Theta+60t\right)}{\gamma^k\kappa_k m_k}+\frac{8\gamma^k\sigma_*^2}{\kappa_k}\right].
\]
Substituting the value of $\gamma^k$ from (48) we obtain
\[
\|{\widehat{x}}^{k}_{m_k}-x_*\|\leq \delta\left[\rho s\kappa_k+\frac{r_{k-1}}{m_k}+\frac{4\sigma_* r_{k-1}}{\kappa_k}\sqrt{\frac{2(4\Theta+60t)}{m_k}}\right],
\]
which, by the choice of $\kappa_k$ in (48), results in
\[
\|{\widehat{x}}^{k}_{m_k}-x_*\|^2\leq 2\delta^2\left[10\rho s\sigma_* r_{k-1}\sqrt{\frac{4\Theta+60t}{m_k}}+\frac{r^2_{k-1}}{m_k^2}\right]\leq \frac{r_{k-1}^2}{4}=r_k^2.
\]
It remains to note that the total number $N_k=\sum_{j=1}^k m_j$ of iterations during $k$ stages of the asymptotic phase satisfies $N_k\lesssim 4^k(\Theta+t)\delta^2\rho s\nu$, whence $2^{-k}\lesssim \delta\sqrt{\frac{(\Theta+t)\rho s\nu}{N_k}}$, which along with the definition of $r_0$ implies (50). $\Box$
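In the variant of Lemma A.6 the stage length $m_k$ itself grows geometrically, so $N_k$ is dominated by the last stage and $2^{-k}\asymp\sqrt{1/N_k}$. A numerical check of this final step, with illustrative parameter values (names and values are ours, not prescribed by the paper):

```python
import math

def m_stage(k, theta, t, delta, rho, s, nu):
    """Stage length m_k = ceil(4**(k+4) * (4*Theta + 60*t) * delta**2 * rho * s * nu)."""
    return math.ceil(4 ** (k + 4) * (4 * theta + 60 * t) * delta**2 * rho * s * nu)

# illustrative parameter values
theta, t, delta, rho, s, nu = 2.0, 10.0, 1.0, 1.0, 5, 1.0

N, vals = 0, []
for k in range(1, 11):
    N += m_stage(k, theta, t, delta, rho, s, nu)   # N_k = sum_{j<=k} m_j
    vals.append(2 ** -k * math.sqrt(N))            # should stabilize with k

# N_k ~ 4^k (Theta + t) delta^2 rho s nu (geometric sum), hence
# 2^{-k} ~ sqrt(1/N_k): the products 2^{-k} sqrt(N_k) stay within a
# constant factor of each other across stages
assert max(vals) / min(vals) < 1.2
```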
Proof of Theorem 2.1.
We can now complete the proof of the theorem. Let us prove the accuracy bound of the theorem for the minibatch variant of the procedure.
Assume first that the total observation budget $N$ is such that only the preliminary phase of the procedure is implemented.
This is the case when either $m_0\overline{K}_1\geq N$, or $m_0\overline{K}_1<N$ and $m_0\overline{K}_1+m_1\ell_1>N$.
The output ${\widehat{x}}_N$ of the algorithm is then the last update of the preliminary phase, and by Lemma A.4 it satisfies $\|{\widehat{x}}_N-x_*\|\leq R2^{-k}$, where $k$ is the number of completed stages.
In the case $m_0\overline{K}_1\geq N$ this implies (recall that $N\geq m_0$) that $k\geq cN/m_0$ and, with probability $\geq 1-4ke^{-t}$,
\[
\|{\widehat{x}}_N-x_*\|\lesssim R\exp\left\{-\frac{c'N}{\delta^2\rho s\nu(\Theta+t)}\right\}. \tag{51}
\]
On the other hand, when $m_0\overline{K}_1<N<m_0\overline{K}_1+m_1\ell_1$, by the definition of $m_1$ and $\ell_1$ one has $N\leq Cm_0\overline{K}_1$, so that bound (51) still holds in this case.
Now consider the case where at least one asymptotic stage has been completed.
When $m_0\overline{K}_1>\frac{N}{2}$ we still have $N\leq Cm_0\overline{K}_1$, so that bound (51) holds for the approximate solution ${\widehat{x}}^{(b)}_N$ at the end of the asymptotic stage. Otherwise, the number of oracle calls $N_k$ of the asymptotic stages satisfies $N_k\geq N/2$, and by (47) this implies that, with probability $\geq 1-4(\overline{K}_1+\overline{K}_2)e^{-t}$,
\[
\|{\widehat{x}}^{(b)}_N-x_*\|\lesssim \delta^2\sigma_*\rho s\sqrt{\frac{\Theta(\Theta+t)}{N}}.
\]
To summarize, in both cases the bound of Theorem 2.1 holds with probability at least $1-4(\overline{K}_1+\overline{K}_2)e^{-t}$.
The proof of the accuracy bound for the ``standard'' solution ${\widehat{x}}_N$ is completely analogous, making use of the bound (50) of Lemma A.6 instead of (47). $\Box$
| 1,387
|
Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Proof of Theorem 2.1.
We can now complete the proof of the theorem.
Let us prove the accuracy bound of the theorem for the minibatch variant of the procedure.
Assume first that the total observation budget $N$ is such that only the preliminary phase of the procedure is implemented.
This is the case when either $m_0\overline{K}_1\ge N$, or $m_0\overline{K}_1<N$ and $m_0\overline{K}_1+m_1\ell_1>N$.
The output $\widehat{x}_N$ of the algorithm is then the last update of the preliminary phase, and by Lemma A.4 it satisfies
\[
\|\widehat{x}_N-x_*\|\le R2^{-k}
\]
where $k$ is the number of completed stages.
In the case $m_0\overline{K}_1\ge N$ this clearly implies (recall that $N\ge m_0$) that $k\ge cN/m_0$ and, with probability $\ge 1-4ke^{-t}$,
\begin{equation}
\|\widehat{x}_N-x_*\|\lesssim R\exp\left\{-\frac{c'N}{\delta^2\rho s\nu(\Theta+t)}\right\}. \tag{51}
\end{equation}
On the other hand, when $m_0\overline{K}_1<N<m_0\overline{K}_1+m_1\ell_1$, by definition of $m_1$ and $\ell_1$ one has $N\le Cm_0\overline{K}_1$, so that the bound (51) still holds in this case.
Now, consider the case where at least one asymptotic stage has been completed.
When $m_0\overline{K}_1>\frac{N}{2}$ we still have $N\le Cm_0\overline{K}_1$, so that the bound (51) holds for the approximate solution $\widehat{x}^{(b)}_N$ at the end of the asymptotic stage. Otherwise, the number of oracle calls $N_k$ of the asymptotic stages satisfies $N_k\ge N/2$, and by (47) this implies that with probability $\ge 1-4(\overline{K}_1+\overline{K}_2)e^{-t}$,
\[
\|\widehat{x}^{(b)}_N-x_*\|\lesssim \delta^2\sigma_*\rho s\sqrt{\frac{\Theta(\Theta+t)}{N}}.
\]
To summarize, in both cases the bound of Theorem 2.1 holds with probability at least $1-4(\overline{K}_1+\overline{K}_2)e^{-t}$.
The proof of the accuracy bound for the ``standard'' solution $\widehat{x}_N$ is completely analogous, using the bound (50) of Lemma A.6 instead of (47). $\Box$
Remark A.1
Theorem 2.1 as stated in Section 2.3 does not say anything about convergence of $g(\widehat{x}_N)$ to $g(x_*)$. Such information can be easily extracted from the proof of the theorem.
Indeed, observe that at the end of a stage of the method one has, with probability $1-Cke^{-t}$,
\[
F_{\kappa_k}(\widehat{x}^k)-F_{\kappa_k}\le \upsilon_k,
\]
or
\[
g(\widehat{x}^k)-g(x_*)\le \upsilon_k+\kappa_k(\|\widehat{x}^k\|-\|x_*\|)\le \upsilon_k+\kappa_k\|\widehat{x}^k-x_*\|
\]
where $\widehat{x}^k$ is the approximate solution at the end of stage $k$. On the other hand, at the end of the $k$th stage of the preliminary phase one has
$\|\widehat{x}^k-x_*\|\le R_k\le 2^{-k}R$, with
$\kappa_k\lesssim R_k(\delta\rho s)^{-1}\le 2^{-k}R(\delta\rho s)^{-1}$ and $\upsilon_k\lesssim \frac{4^{-k}R^2}{\delta^2\rho s}$,
implying that
\[
g(\widehat{x}^k)-g(x_*)\lesssim \upsilon_k+\frac{R_k^2}{\delta^2\rho s}\lesssim (\delta^{-2}+\delta^{-1})\frac{R^2}{\rho s}\exp\left\{-\frac{c}{\delta\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}
\]
where $N$ is the current iteration count. Furthermore, at the end of the $k$th asymptotic stage one has, with probability $1-(\overline{K}_1+k)e^{-t}$,
\[
\|\widehat{x}^k-x_*\|\le R_k\lesssim \delta^2\sigma_*\rho s\sqrt{\frac{\Theta+t}{m_k}},
\]
while
$\kappa_k\asymp 2^{-k}\delta\sigma_*(\rho\nu s)^{-1/2}\lesssim \delta\sigma_*\sqrt{\frac{\Theta+t}{m_k}}$ and $\upsilon_k\lesssim \delta^2\sigma_*^2\rho s(\Theta+t)/m_k$.
As a result, the corresponding $\widehat{x}^k$ satisfies
\[
g(\widehat{x}^k)-g(x_*)\le \upsilon_k+\kappa_k\|\widehat{x}^k-x_*\|\lesssim (\delta^2+\delta^3)\rho\sigma_*^2 s\,\frac{\Theta+t}{m_k}.
\]
Putting the above bounds together, and assuming that at least one stage of the algorithm was completed, we arrive at the following bound after $N$ steps:
\begin{equation}
g(\widehat{x}_N)-g(x_*)\lesssim (\delta^{-2}+\delta^{-1})\frac{R^2}{\rho s}\exp\left\{-\frac{c}{\delta^2\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+(\delta^2+\delta^3)\rho\sigma_*^2 s\,\frac{\Theta+t}{N} \tag{52}
\end{equation}
with probability $1-(\overline{K}_1+\overline{K}_2)e^{-t}$.
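The two-regime behaviour of (52), a linearly converging term dominating for small $N$ and a sublinear $O(1/N)$ term dominating for large $N$, is easy to see numerically. A minimal sketch, with purely illustrative constants $A$, $B$, $c$ that are not the constants of the theorem:

```python
import math

# Two terms of a bound of the shape (52): a fast, linearly converging
# term A*exp(-c*N) and a slow, sublinear term B/N. All constants below
# are illustrative placeholders, not the constants of Theorem 2.1.
A, B, c = 10.0, 5.0, 1e-3

def bound(N):
    return A * math.exp(-c * N) + B / N

# For small N the exponential term dominates; past a crossover the
# O(1/N) term governs the rate, mirroring the preliminary/asymptotic
# phase transition of the multistage routine.
for N in (10**2, 10**3, 10**4, 10**5):
    print(N, A * math.exp(-c * N) > B / N)
```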
A.4 Proof of Proposition 3.1
$1^o$. Recall that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous, i.e., for all $t,t'\in{\mathbf{R}}^m$,
\[
|\mathfrak{r}(t)-\mathfrak{r}(t')|\le \overline{r}|t-t'|.
\]
As a result, for all $x,x'\in X$,
\[
\|\phi_i[\mathfrak{r}(\phi_i^Tx)-\mathfrak{r}(\phi_i^Tx')]\|_\infty \le \overline{r}\|\phi_i\|_\infty|\phi_i^T(x-x')| \le \overline{r}\|\phi_i\|^2_\infty\|x-x'\|_1 \le \overline{r}\,\overline{\nu}^2\|x-x'\|_1,
\]
so that $\nabla G(x,\omega)=\phi[\mathfrak{r}(\phi^Tx)-\eta]$ is Lipschitz continuous w.r.t. the $\ell_1$-norm with Lipschitz constant ${\cal L}(\omega)\le \overline{r}\,\overline{\nu}^2$.
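This chain of inequalities is easy to sanity-check numerically. A toy sketch, assuming $\mathfrak{r}=\tanh$ (which is $1$-Lipschitz, so $\overline{r}=1$) and a bounded regressor $\phi$; the names below are ours, chosen for illustration only:

```python
import numpy as np

# Toy numerical check (not part of the proof) of the bound
#   ||phi [r(phi^T x) - r(phi^T x')]||_inf <= rbar * nubar^2 * ||x - x'||_1
# with r = tanh (rbar = 1) and nubar = ||phi||_inf.
rng = np.random.default_rng(0)
n = 50
rbar = 1.0                       # Lipschitz constant of tanh
phi = rng.uniform(-1, 1, n)      # one fixed regressor draw
nubar = np.max(np.abs(phi))      # ||phi||_inf

def grad_diff(x, xp):
    # phi * [r(phi^T x) - r(phi^T x')], the gradient increment
    return phi * (np.tanh(phi @ x) - np.tanh(phi @ xp))

ok = True
for _ in range(200):
    x, xp = rng.normal(size=n), rng.normal(size=n)
    lhs = np.max(np.abs(grad_diff(x, xp)))
    rhs = rbar * nubar**2 * np.sum(np.abs(x - xp))
    ok = ok and (lhs <= rhs + 1e-12)
print(ok)
```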
$2^o$. Due to strong monotonicity of $\mathfrak{r}$,
\begin{align*}
g(x)-g(x_*) &= \int_0^1 \nabla g(x_*+t(x-x_*))^T(x-x_*)\,dt\\
&= \int_0^1 {\mathbf{E}}\Big\{\phi[\mathfrak{r}(\phi^T(x_*+t(x-x_*)))-\mathfrak{r}(\phi^Tx_*)]\Big\}^T(x-x_*)\,dt\\
&\ge \int_0^1 \underline{r}\,{\mathbf{E}}\big\{(\phi^T(x-x_*))^2\big\}\,t\,dt = \tfrac{1}{2}\underline{r}\,\|x-x_*\|^2_\Sigma,
\end{align*}
which is (16).
$3^o$. The sub-Gaussianity in the ``batchless'' case readily follows from $\nabla G(x_*,\omega_i)=\sigma\phi_i\xi_i$ with
$\|\phi_i\xi_i\|_\infty \le \|\phi_i\|_\infty|\xi_i| \le \overline{\nu}\|\xi_i\|_2$ and
\[
{\mathbf{E}}\left\{\exp\left(\frac{\|\nabla G(x_*,\omega_i)\|_\infty^2}{\sigma^2\overline{\nu}^2}\right)\right\}\le e
\]
due to ${\mathbf{E}}\big\{e^{\xi_i^2}\big\}\le \exp(1)$. Because the variation $\Theta$ of the d.-g.f. $\theta$, as defined in (26), is bounded by $C\ln n$, by Lemma A.3 we conclude that the batch observation
\[
H\big(x_*,\omega_i^{(L)}\big)=\frac{1}{L}\sum_{\ell=1}^L \nabla G(x_*,\omega_i^\ell)=\frac{1}{L}\sum_{\ell=1}^L \sigma\phi_i^\ell\xi_i^\ell
\]
is sub-Gaussian with parameter $\lesssim \sigma^2\overline{\nu}^2\ln n$.
$4^o$. In the situation of Section 3.1, $\Sigma$ is positive definite, $\Sigma\succeq\kappa_\Sigma I$ with $\kappa_\Sigma>0$, and condition ${\mathbf{Q}}(\lambda,\psi)$ is satisfied with $\lambda=\kappa_\Sigma$ and $\psi=1$.
Because the quadratic minoration condition (17) for $g$ is verified with $\mu\ge \underline{r}$ due to (16), applying the result of Lemma 3.1 we conclude that Assumption [RSC] holds with $\delta=1$ and $\rho=(\kappa_\Sigma\underline{r})^{-1}$.\footnote{We refer to Section B.2 and Lemma B.1 for the proof of Lemma 3.1.} $\Box$
Appendix B Properties of sparsity structures
B.1 Sparsity structures
The scope of the results of Section 2 is much broader than ``vanilla'' sparsity optimization. Here we discuss the general notion of a sparsity structure, which provides a proper application framework for these results.
In what follows we assume to be given a sparsity structure [21] on $E$: a family ${\cal P}$ of projector mappings $P=P^2$ on $E$ such that
A1.1
every $P\in{\cal P}$ is assigned a linear map $\overline{P}$ on $E$ such that $P\overline{P}=0$, and a nonnegative weight $\pi(P)$;
A1.2
whenever $P\in{\cal P}$ and $f,g\in E$ are such that $\|f\|_*\le 1$, $\|g\|_*\le 1$, it holds
\[
\|P^*f+\overline{P}^*g\|_*\le 1,
\]
where, for a linear map $Q:E\to F$, $Q^*:F\to E$ denotes the conjugate mapping.
Following [21], we refer to such a collection of the just introduced entities as a sparsity structure on $E$.
For a nonnegative real $s$ we set
\[
{\cal P}_s=\{P\in{\cal P}:\pi(P)\le s\}.
\]
Given $s\ge 0$, we call $x\in E$ $s$-sparse if there exists $P\in{\cal P}_s$ such that $Px=x$.
Typically, one is interested in the following ``standard examples'':
1.
``Vanilla (usual)'' sparsity: in this case $E={\mathbf{R}}^n$ with the standard inner product, ${\cal P}$ is comprised of the projectors on all coordinate subspaces of ${\mathbf{R}}^n$, $\pi(P)={\rm rank}(P)$, and $\|\cdot\|=\|\cdot\|_1$.
2.
Group sparsity: $E={\mathbf{R}}^n$, and we partition the set $\{1,\dots,n\}$ of indices into $K$ nonoverlapping subsets $I_1,\dots,I_K$, so that to every $x\in{\mathbf{R}}^n$ we associate blocks $x^k$ with corresponding indices in $I_k$, $k=1,\dots,K$. Now ${\cal P}$ is comprised of the projectors $P=P_I$ onto the subspaces $E_I=\{[x^1,\dots,x^K]\in{\mathbf{R}}^n:\ x^k=0\ \forall k\notin I\}$ associated with subsets $I$ of the index set $\{1,\dots,K\}$. We set $\pi(P_I)={\rm card}\,I$, and define $\|x\|=\sum_{k=1}^K\|x^k\|_2$, the block $\ell_1/\ell_2$-norm.
3.
Low rank structure: in this example $E={\mathbf{R}}^{p\times q}$ with, for the sake of definiteness, $p\ge q$, and the Frobenius inner product. Here ${\cal P}$ is the set of mappings $P(x)=P_\ell xP_r$ where $P_\ell$ and $P_r$ are, respectively, $p\times p$ and $q\times q$ orthoprojectors, $\overline{P}(x)=(I-P_\ell)x(I-P_r)$, and
$\|\cdot\|$ is the nuclear norm $\|x\|=\sum_{i=1}^q\sigma_i(x)$, where $\sigma_1(x)\ge\sigma_2(x)\ge\dots\ge\sigma_q(x)$ are the singular values of $x$; $\|\cdot\|_*$ is the spectral norm, so that $\|x\|_*=\sigma_1(x)$, and $\pi(P)=\max[{\rm rank}(P_\ell),{\rm rank}(P_r)]$.
In this case, for $\|f\|_*\le 1$ and $\|g\|_*\le 1$ one has
\[
\|P^*(f)\|_*=\|P_\ell fP_r\|_*\le 1,\qquad \|\overline{P}^*(g)\|_*=\|(I-P_\ell)g(I-P_r)\|_*\le 1,
\]
and because the images and the orthogonal complements to the kernels of $P$ and $\overline{P}$ are orthogonal to each other, $\|P^*(f)+\overline{P}^*(g)\|_*\le 1$.
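The norms attached to these three structures can be computed in a few lines. A toy sketch (ours, with arbitrary dimensions, purely for illustration):

```python
import numpy as np

# Norms of the three standard sparsity structures, on random data.
rng = np.random.default_rng(0)

# 1. Vanilla sparsity on R^n: ||x|| is the l1-norm.
x = rng.normal(size=12)
l1 = np.sum(np.abs(x))

# 2. Group sparsity: partition the 12 indices into K = 4 blocks of
#    size 3; ||x|| is the sum of l2-norms of the blocks (block l1/l2).
blocks = x.reshape(4, 3)
block_l1_l2 = np.sum(np.linalg.norm(blocks, axis=1))

# 3. Low-rank structure on R^{p x q}: ||x|| is the nuclear norm
#    (sum of singular values); ||x||_* is the spectral norm.
M = rng.normal(size=(5, 4))
svals = np.linalg.svd(M, compute_uv=False)
nuclear, spectral = np.sum(svals), svals[0]

# Sanity checks: the block l1/l2-norm sits between the l2- and
# l1-norms, and the spectral norm never exceeds the nuclear norm.
print(np.linalg.norm(x) <= block_l1_l2 <= l1, spectral <= nuclear)
```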
B.2 Condition ${\mathbf{Q}}(\lambda,\psi)$
We say that a positive semidefinite mapping $\Sigma:E\to E$ satisfies condition ${\mathbf{Q}}(\lambda,\psi)$ for a given $s\in{\mathbf{Z}}_+$ if for some $\psi,\lambda>0$ and all $P\in{\cal P}_s$ and $z\in E$,
\begin{equation}
\|Pz\|\le \sqrt{s/\lambda}\,\|z\|_\Sigma+\|\overline{P}z\|-\psi\|z\|. \tag{53}
\end{equation}
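For vanilla sparsity and $\Sigma=\kappa I$, a conservative choice of constants makes (53) easy to verify numerically: with $\psi=1$ and $\overline{P}=I-P$, the condition reduces to $2\|Pz\|_1\le\sqrt{s/\lambda}\,\|z\|_\Sigma$, which holds whenever $\lambda\le\kappa/4$ since $\|Pz\|_1\le\sqrt{s}\,\|z\|_2$. A toy Monte Carlo check under these assumptions (the choice $\lambda=\kappa/4$ is ours and is more conservative than the constants used in the paper):

```python
import numpy as np
from itertools import combinations

# Toy Monte Carlo check (ours) of condition Q(lambda, psi), eq. (53),
# for vanilla sparsity (||.|| = l1) and Sigma = kappa * I, with the
# provably sufficient (conservative) choice lambda = kappa/4, psi = 1.
rng = np.random.default_rng(0)
n, s, kappa = 8, 3, 2.0
lam, psi = kappa / 4.0, 1.0

def holds(z, support):
    # P projects onto the coordinates in `support`; Pbar = I - P.
    Pz = np.zeros(n)
    Pz[list(support)] = z[list(support)]
    Pbar_z = z - Pz
    z_Sigma = np.sqrt(kappa) * np.linalg.norm(z)  # ||z||_Sigma, Sigma = kappa*I
    lhs = np.sum(np.abs(Pz))
    rhs = (np.sqrt(s / lam) * z_Sigma
           + np.sum(np.abs(Pbar_z)) - psi * np.sum(np.abs(z)))
    return lhs <= rhs + 1e-9

# Exhaust all supports of cardinality s, several random z per support.
ok = all(holds(rng.normal(size=n), supp)
         for supp in combinations(range(n), s)
         for _ in range(5))
print(ok)
```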
Lemma B.1
Suppose that $x_*$ is an optimal solution to (5) such that $\|(I-P)x_*\|\le\delta$ for some $P\in{\cal P}_s$, and that condition ${\mathbf{Q}}(\lambda,\psi)$ is satisfied.
Furthermore, assume that objective g𝑔g of (5) satisfies the following minoration condition
, 1 = g(x)−g(x∗)≥μ(‖x−x∗‖Σ)𝑔𝑥𝑔subscript𝑥𝜇subscriptnorm𝑥subscript𝑥Σg(x)-g(x_{*})\geq\mu\big{(}\|x-x_{*}\|_{\Sigma}\big{)}. , 2 =
where μ(⋅)𝜇⋅\mu(\cdot) is monotone increasing and convex.
Then a feasible solution x^∈𝒳^𝑥𝒳\widehat{x}\in\mathcal{X} to (7) such that
, 1 = Prob{Fκ(x^)−Fk(x∗)≤υ}≥1−ϵ.Probsubscript𝐹𝜅^𝑥subscript𝐹𝑘subscript𝑥𝜐1italic-ϵ\displaystyle\hbox{\rm Prob}\left\{F_{\kappa}(\widehat{x})-F_{k}(x_{*})\leq\upsilon\right\}\geq 1-\epsilon.. , 2 =
satisfies, with probability at least 1−ϵ1italic-ϵ1-\epsilon,
, 1 = ‖x^−x∗‖≤μ∗(κs/λ)+υκψ+2δψnorm^𝑥subscript𝑥superscript𝜇𝜅𝑠𝜆𝜐𝜅𝜓2𝛿𝜓\displaystyle\|\widehat{x}-x_{*}\|\leq{\mu^{*}\left(\kappa\sqrt{s/\lambda}\right)+\upsilon\over\kappa\psi}+{2\delta\over\psi}. , 2 = . , 3 = (54)
where μ∗:𝐑+→𝐑+:superscript𝜇→subscript𝐑subscript𝐑\mu^{*}:\,{\mathbf{R}}_{+}\to{\mathbf{R}}_{+} is conjugate to μ(⋅)𝜇⋅\mu(\cdot), μ∗(t)=supu≥0[tu−μ(u)]superscript𝜇𝑡subscriptsupremum𝑢0delimited-[]𝑡𝑢𝜇𝑢\mu^{*}(t)=\sup_{u\geq 0}[tu-\mu(u)].
| 752
|
Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.2 Condition 𝐐(λ,ψ)𝐐𝜆𝜓{\mathbf{Q}}(\lambda,\psi)
Lemma B.1
Suppose that x∗subscript𝑥x_{*} is an optimal solution to (5) such that for some P∈𝒫s𝑃subscript𝒫𝑠P\in{\cal P}_{s}, ‖(I−P)x∗‖≤δnorm𝐼𝑃subscript𝑥𝛿\|(I-P)x_{*}\|\leq\delta, and that condition 𝐐(λ,ψ)𝐐𝜆𝜓{\mathbf{Q}}(\lambda,\psi) is satisfied.
Furthermore, assume that objective g𝑔g of (5) satisfies the following minoration condition
, 1 = g(x)−g(x∗)≥μ(‖x−x∗‖Σ)𝑔𝑥𝑔subscript𝑥𝜇subscriptnorm𝑥subscript𝑥Σg(x)-g(x_{*})\geq\mu\big{(}\|x-x_{*}\|_{\Sigma}\big{)}. , 2 =
where μ(⋅)𝜇⋅\mu(\cdot) is monotone increasing and convex.
Then a feasible solution x^∈𝒳^𝑥𝒳\widehat{x}\in\mathcal{X} to (7) such that
, 1 = Prob{Fκ(x^)−Fk(x∗)≤υ}≥1−ϵ.Probsubscript𝐹𝜅^𝑥subscript𝐹𝑘subscript𝑥𝜐1italic-ϵ\displaystyle\hbox{\rm Prob}\left\{F_{\kappa}(\widehat{x})-F_{k}(x_{*})\leq\upsilon\right\}\geq 1-\epsilon.. , 2 =
satisfies, with probability at least 1−ϵ1italic-ϵ1-\epsilon,
, 1 = ‖x^−x∗‖≤μ∗(κs/λ)+υκψ+2δψnorm^𝑥subscript𝑥superscript𝜇𝜅𝑠𝜆𝜐𝜅𝜓2𝛿𝜓\displaystyle\|\widehat{x}-x_{*}\|\leq{\mu^{*}\left(\kappa\sqrt{s/\lambda}\right)+\upsilon\over\kappa\psi}+{2\delta\over\psi}. , 2 = . , 3 = (54)
where μ∗:𝐑+→𝐑+:superscript𝜇→subscript𝐑subscript𝐑\mu^{*}:\,{\mathbf{R}}_{+}\to{\mathbf{R}}_{+} is conjugate to μ(⋅)𝜇⋅\mu(\cdot), μ∗(t)=supu≥0[tu−μ(u)]superscript𝜇𝑡subscriptsupremum𝑢0delimited-[]𝑡𝑢𝜇𝑢\mu^{*}(t)=\sup_{u\geq 0}[tu-\mu(u)].
Proof. Setting $z=\widehat{x}-x_{*}$, one has
\[
\|\widehat{x}\|=\|x_{*}+z\|=\|Px_{*}+(I-P)x_{*}+z\|\geq\|Px_{*}+z\|-\|(I-P)x_{*}\|\geq\|Px_{*}\|+\|\overline{P}z\|-\|Pz\|-\delta,
\]
where we used the relation
\[
\|Px_{*}+z\|\geq\|Px_{*}\|-\|Pz\|+\|\overline{P}z\|
\]
(cf. Lemma 3.1 of [21] applied to $w=Px_{*}$). Using condition $\mathbf{Q}(\lambda,\psi)$ we obtain
\[
\|\widehat{x}\|\geq\|Px_{*}\|-\sqrt{s/\lambda}\,\|z\|_{\Sigma}+\psi\|z\|-\delta,
\]
so that $F_{\kappa}(\widehat{x})\leq F_{\kappa}(x_{*})+\upsilon$ implies
\begin{align*}
\kappa\left(\|Px_{*}\|+\psi\|z\|-\delta\right)&\leq\tfrac{1}{2}[g(x_{*})-g(\widehat{x})]+\kappa\sqrt{s/\lambda}\,\|z\|_{\Sigma}+\kappa\|x_{*}\|+\upsilon\\
&\leq-\tfrac{1}{2}\mu(\|z\|_{\Sigma})+\kappa\sqrt{s/\lambda}\,\|z\|_{\Sigma}+\kappa\|x_{*}\|+\upsilon\\
&\leq\tfrac{1}{2}\mu^{*}(2\kappa\sqrt{s/\lambda})+\kappa\|x_{*}\|+\upsilon,
\end{align*}
and, due to $\|x_{*}\|-\|Px_{*}\|\leq\|(I-P)x_{*}\|\leq\delta$, we conclude that
\[
\kappa\psi\|z\|\leq\tfrac{1}{2}\mu^{*}(2\kappa\sqrt{s/\lambda})+2\kappa\delta+\upsilon. \qquad\Box
\]
Note that when $\mu(u)=\tfrac{\mu}{2}u^{2}$, one has $\mu^{*}(t)=\tfrac{1}{2\mu}t^{2}$, and in the case of $\|\cdot\|=\|\cdot\|_{1}$, with probability $1-\epsilon$,
\[
\|\widehat{x}-x_{*}\|_{1}\leq\frac{s\kappa}{\mu\lambda\psi}+\frac{\upsilon}{\kappa\psi}+\frac{2\delta}{\psi}.
\]
This, in particular, implies bound (18) of Lemma 3.1.
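To spell out the quadratic specialization, here is the one-line conjugate computation behind it (a routine calculus step, not an addition to the statement):

```latex
\mu^{*}(t)=\sup_{u\geq 0}\Big[tu-\frac{\mu}{2}u^{2}\Big]
          =\frac{t^{2}}{2\mu}\quad(\text{attained at }u=t/\mu),
\qquad\text{hence}\qquad
\tfrac{1}{2}\mu^{*}\big(2\kappa\sqrt{s/\lambda}\big)=\frac{\kappa^{2}s}{\mu\lambda},
```

and dividing by $\kappa\psi$ yields the first term $\frac{s\kappa}{\mu\lambda\psi}$ of the $\ell_1$ bound above.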
Remark B.1

We discuss implications of condition $\mathbf{Q}(\lambda,\psi)$ and of the result of Lemma B.1 for "usual" sparsity in Section 3 of the paper. Now, let us consider the case of low-rank sparsity. Let $z\in\mathbf{R}^{p\times q}$ with $p\geq q$ for the sake of definiteness. In this case, $\|\cdot\|$ is the nuclear norm, and we put $P(z)=P_{\ell}zP_{r}$ where $P_{\ell}$ and $P_{r}$ are orthoprojectors of rank $s\leq q$ such that $\|(I-P)(x_{*})\|=\|x_{*}-P_{\ell}x_{*}P_{r}\|\leq\delta$. (E.g., choose $P_{\ell}$ and $P_{r}$ as the left and right projectors on the spaces generated by the $s$ principal left and right singular vectors of $x_{*}$, so that $\|x_{*}-P_{\ell}x_{*}P_{r}\|=\|(I-P_{\ell})x_{*}(I-P_{r})\|=\sum_{i=s+1}^{q}\sigma_{i}\leq\delta$.)
Furthermore, for a $p\times q$ matrix $z$ let us put
\[
\sigma^{(k)}(z)=\sum_{i=1}^{k}\sigma_{i}(z),\quad 1\leq k\leq q.
\]
With the sparsity parameter $s$ a nonnegative integer,
\[
\forall(z\in\mathbf{R}^{p\times q},\,P\in\mathcal{P}_{s}):\quad \|P(z)\|\leq\sigma^{(s)}(z),\quad \|\overline{P}(z)\|\geq\|z\|-\sigma^{(2s)}(z),
\]
and we conclude that in the present situation the condition
\[
\sigma^{(s)}(z)+\sigma^{(2s)}(z)\leq\sqrt{s/\lambda}\,\|z\|_{\Sigma}+(1-\psi)\|z\| \tag{55}
\]
is sufficient for the validity of $\mathbf{Q}(\lambda,\psi)$. As a result, condition (55) with $\psi>0$ is sufficient for applicability of the bound of Lemma B.1. It may also be compared to the necessary and sufficient condition of "$s$-goodness of $\Sigma$" in [43]:
\[
\exists\psi>0:\;2\sigma^{(s)}(z)\leq(1-\psi)\|z\|\quad\forall z\in\mathrm{Ker}(\Sigma).
\]
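A hedged NumPy sketch of the first displayed property, $\|P(z)\|\leq\sigma^{(s)}(z)$ for the nuclear norm: since $P_{\ell}zP_{r}$ has rank at most $s$ and projections do not increase singular values, its nuclear norm is dominated by the sum of the $s$ largest singular values of $z$. The dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthoprojector(d, s, rng):
    # Random rank-s orthoprojector via QR decomposition.
    q, _ = np.linalg.qr(rng.standard_normal((d, s)))
    return q @ q.T

p, q, s = 15, 10, 3
for _ in range(200):
    z = rng.standard_normal((p, q))
    Pl, Pr = orthoprojector(p, s, rng), orthoprojector(q, s, rng)
    sigma = np.linalg.svd(z, compute_uv=False)          # singular values, descending
    nuclear_Pz = np.linalg.svd(Pl @ z @ Pr, compute_uv=False).sum()
    assert nuclear_Pz <= sigma[:s].sum() + 1e-9         # ||P(z)|| <= sigma^(s)(z)
print("||P(z)||_* <= sigma^(s)(z) held on all draws")
```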
Appendix C Supplementary numerical experiments

This section complements the numerical results appearing in the main body of the paper. We consider the setting of Section 3.3, the sparse recovery problem from observations of the GLR model (15).
In the experiments below, we consider the choice (21) of the activation function $\mathfrak{r}_{\alpha}(t)$ with values $\alpha=1$ and $\alpha=1/10$; the value $\alpha=1$ corresponds to linear regression with $\mathfrak{r}(t)=t$, whereas for $\alpha=0.1$ the activation has a flatter curve, with the modulus of strong convexity on $|t|\leq r$ decreasing rapidly with $r$. As before, in our experiments the dimension of the parameter space is $n=500\,000$ and the sparsity level of the optimal point $x_{*}$ is $s=100$; we use the minibatch Algorithm 2 with a maximal number of oracle calls $N=250\,000$.
In Figures 4 and 5 we report results for $\kappa_{\Sigma}\in\{0.1,1\}$ and $\sigma\in\{0.001,0.1\}$; the simulations are repeated 10 times, and we trace the median of the estimation error $\|\widehat{x}_{i}-x_{*}\|_{1}$, along with its first and last deciles, against the number of oracle calls.
In our experiments, multistage algorithms exhibit linear convergence on initial iterations. Surprisingly, the "standard" (non-Euclidean) SMD also converges fast in the "preliminary" regime. This may be explained by the fact that the iterate $x_{i}$ of the SMD, obtained by the "usual" proximal mapping $\mathrm{Prox}(\gamma_{i-1}\nabla G(x_{i-1},\omega_{i}),x_{i-1})$, is computed as a solution to an optimization problem with "penalty" $\theta(x)=c\|x\|_{p}^{p}$, $p=1+1/\ln n$, which results in a "natural" sparsification of $x_{i}$. As iterations progress, such "sparsification" becomes insufficient, and the multistage routine eventually outperforms the SMD.
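To illustrate the non-Euclidean geometry behind this sparsification effect, here is a hedged sketch of a plain (non-composite) mirror-descent step with the standard $\ell_p$ distance-generating function $\psi(x)=\frac{1}{2}\|x\|_p^2$, $p=1+1/\ln n$, whose gradient map and its inverse have well-known closed forms; this is a textbook SMD step for illustration, not the paper's exact CSMD update with the composite penalty.

```python
import numpy as np

def grad_psi(x, p):
    # Gradient of psi(x) = 0.5 * ||x||_p^2:
    # (grad psi)(x)_i = ||x||_p^{2-p} * |x_i|^{p-1} * sign(x_i)
    nrm = np.linalg.norm(x, p)
    return nrm ** (2 - p) * np.abs(x) ** (p - 1) * np.sign(x)

def grad_psi_star(y, p):
    # Inverse map: gradient of the conjugate 0.5 * ||y||_q^2, with 1/p + 1/q = 1.
    q = p / (p - 1)
    nrm = np.linalg.norm(y, q)
    return nrm ** (2 - q) * np.abs(y) ** (q - 1) * np.sign(y)

def smd_step(x, g, gamma, p):
    # One mirror descent step: map to the dual, take a gradient step, map back.
    return grad_psi_star(grad_psi(x, p) - gamma * g, p)

n = 1000
p = 1 + 1 / np.log(n)
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
x_new = smd_step(x, rng.standard_normal(n), gamma=0.1, p=p)
# Sanity check: the primal and dual maps invert each other.
assert np.allclose(grad_psi_star(grad_psi(x, p), p), x, atol=1e-8)
```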
Implementing the method for a "flatter" nonlinear activation $\mathfrak{r}(t)$ or an increased condition number of the regressor covariance matrix $\Sigma$ requires increasing the length $m_{0}$ of the stages of the algorithm.
Algorithm 1 CSMD-SR
Algorithm 2 Asymptotic phase of CSMD-SR with minibatch
Figure 1: For the activation function $\mathfrak{r}$ in (21) and $\alpha\in\{0,0.01,0.1,0.25,1\}$; left plot: mappings $h$; right plot: moduli of strong monotonicity of the mappings $H$ on $\{x:\|x\|_{\Sigma}\leq r\}$ as functions of $r$.
Figure 2: CSMD-SR and "vanilla" SMD in the Generalized Linear Regression problem: $\ell_{1}$ error as a function of the number of oracle calls.
Figure 3: Preliminary stages of the CSMD-SR and its variant with data recycling: linear regression experiment (left pane), GLR with activation $\mathfrak{r}_{1/10}(t)$ (right pane).
(a) $\kappa_{\Sigma}=1$, $\sigma=0.1$, $m_{0}=5000$
(a) $\kappa_{\Sigma}=1$, $\sigma=0.1$, $m_{0}=8000$
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series

Abstract

Many real-world datasets, such as those from healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series with irregular and discrete observations. We first present a multi-marginal Doob's $h$-transform to construct a continuous dynamical system conditioned on these irregular observations. Following this, we introduce a variational inference algorithm with a tight evidence lower bound (ELBO), leveraging stochastic optimal control (SOC) theory to approximate the intractable Doob's $h$-transform and simulate the conditioned dynamics. To improve efficiency and scalability during both training and inference, ACSSM employs amortized inference to decouple representation learning from the latent dynamics. Additionally, it incorporates a simulation-free latent dynamics framework and a transformer-based data assimilation scheme, facilitating parallel inference of the latent states and ELBO computation. Through empirical evaluations across a variety of real-world datasets, ACSSM demonstrates superior performance in tasks such as classification, regression, interpolation, and extrapolation, while maintaining computational efficiency.
1 Introduction

State space models (SSMs) are widely used for modeling time series data (Fraccaro et al., 2017; Rangapuram et al., 2018; de Bézenac et al., 2020) but typically assume evenly spaced observations. However, many real-world datasets, such as those from healthcare (Silva et al., 2012) and climate (Menne et al., 2015), are collected as irregular time series, which poses challenges for accurate modeling. This has motivated neural differential equations (Chen et al., 2018; Rubanova et al., 2019; Li et al., 2020; Kidger et al., 2020; Zeng et al., 2023) for continuous-time dynamical modeling of time series. Moreover, by incorporating stochastic dynamical models into SSMs, applications of continuous-dynamical state space models (CD-SSMs) (Jazwinski, 2007) with neural networks have emerged, allowing for more flexible modeling (Schirmer et al., 2022; Ansari et al., 2023). However, these typically rely on Bayesian recursion to approximate the desired latent posterior distribution (the filtering/smoothing distribution), which involves sequential prediction and updating, resulting in computational costs that scale with the length of the observations (Särkkä & García-Fernández, 2020).
Instead of relying on Bayesian recursion, we propose the Amortized Control of continuous State Space Model (ACSSM), a variational inference (VI) algorithm that directly approximates the posterior distribution using a measure-theoretic formulation, enabling simulation of continuous trajectories from such distributions. To achieve this, we introduce a Feynman-Kac model (Del Moral, 2011; Chopin et al., 2020), which facilitates a thorough sequential analysis. Within this formulation, we propose a multi-marginal Doob's $h$-transform that provides conditional latent dynamics, described by stochastic differential equations (SDEs), conditioned on an irregularly sampled set of discrete observations. It extends the previous Doob's $h$-transform (Chopin et al., 2023) by incorporating multi-marginal constraints instead of a single terminal constraint.
Since estimation of the Doob's $h$-transform is infeasible in general, we utilize the theory of stochastic optimal control (SOC) (Fleming & Soner, 2006; Carmona, 2016) to simulate the conditioned SDEs by approximating the Doob's $h$-transform. This allows us to propose a tight evidence lower bound (ELBO) for the aforementioned VI algorithm by establishing a fundamental connection between the partial differential equations (PDEs) associated with Doob's $h$-transform and SOC. In the Sequential Monte Carlo (SMC) literature, Doob's $h$-transform is analogous to the twist function (Guarniero et al., 2017). In this context, Heng et al. (2020) proposed an algorithm focused on approximating the twisted transition kernel instead of the underlying dynamics. Recently, a concurrent study (Lu & Wang, 2024) presented a continuous-time expansion. However, both studies primarily concentrate on approximation methods rather than practical real-world applications.
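As background intuition for Doob's $h$-transform with a single terminal constraint, the classical example is conditioning a Brownian motion to hit a point $y$ at time $T$: the $h$-transform adds the drift $u(t,x)=(y-x)/(T-t)$, yielding a Brownian bridge. The sketch below (illustrative values, not the paper's model) simulates this conditioned SDE with Euler-Maruyama and checks that the path is pinned near $y$ at the terminal time.

```python
import numpy as np

def conditioned_bm(y, T=1.0, dt=1e-3, x0=0.0, seed=0):
    """Euler-Maruyama for dX_t = (y - X_t)/(T - t) dt + dW_t,
    i.e. Brownian motion conditioned (via Doob's h-transform) on X_T = y."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = x0
    for k in range(n):
        t = k * dt
        drift = (y - x) / (T - t)            # h-transform drift for the terminal pin
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
    return x

end = conditioned_bm(y=2.0)
assert abs(end - 2.0) < 0.5                  # terminal value lands near y
```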
In practical situations, the computation of the ELBO for a VI algorithm may be impractical due to the instability and high memory demands associated with gradient computation of the approximated stochastic dynamics over the entire sequence interval (Liu et al., 2024; Park et al., 2024). To address this issue, we propose two efficient modeling approaches:
1) We establish amortized inference by introducing an auxiliary variable in the latent space, generated by a neural network encoder-decoder. This separates the learning of representations of high-dimensional time series from generative modeling in the latent dynamics (Fraccaro et al., 2017), offering a more flexible parameterization. Moreover, it allows inference of the posterior distribution for a novel time-series sequence without relying on Bayesian recursion, by incorporating the learned control function.
2) We use the simulation-free property, which allows for parallel computation of latent marginal distributions over a given interval. Moreover, we explore a more flexible linear approximation of the drift function in controlled SDEs to enhance the efficiency of the proposed controlled dynamics.
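To make the "simulation-free" idea concrete: for a linear SDE $dX_t=aX_t\,dt+s\,dW_t$ the Gaussian marginal at any time is available in closed form, so marginals at all (irregular) observation times can be evaluated in one vectorized call rather than by sequential simulation. A minimal scalar sketch, with illustrative parameters that are not ACSSM's latent dynamics:

```python
import numpy as np

def linear_sde_marginals(ts, a, s, m0=1.0, v0=0.0):
    """Closed-form Gaussian marginals of dX = a X dt + s dW with X_0 ~ N(m0, v0):
    mean(t) = m0 e^{a t},  var(t) = v0 e^{2 a t} + s^2 (e^{2 a t} - 1) / (2 a).
    Vectorized over all query times at once -- no sequential simulation."""
    ts = np.asarray(ts, dtype=float)
    mean = m0 * np.exp(a * ts)
    var = v0 * np.exp(2 * a * ts) + s**2 * (np.exp(2 * a * ts) - 1) / (2 * a)
    return mean, var

# Irregularly spaced observation times, handled in one parallel call.
obs_times = np.array([0.0, 0.13, 0.5, 0.51, 2.7])
mean, var = linear_sde_marginals(obs_times, a=-0.8, s=0.4)
assert np.isclose(mean[0], 1.0) and np.isclose(var[0], 0.0)
assert np.all(var[1:] > 0)
```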
We evaluated ACSSM on several time-series tasks across various real-world datasets. Our experiments show that ACSSM consistently outperforms existing baseline models on each task, demonstrating its effectiveness in capturing the underlying dynamics of irregular time series. Additionally, ACSSM achieves significant computational efficiency, enabling faster training than dynamics-based models that rely on numerical simulation.
We summarize our contributions as follows:
•
We extend the theory of Doob's $h$-transform, previously studied with a single marginal constraint, to the multi-marginal case. This establishes the existence of a class of conditioned SDEs that depend on future observations, whose solutions yield the true posterior path measure within the framework of CD-SSM.
•
We reformulate the simulation of conditioned SDEs as a SOC problem to approximate the impractical Doob's $h$-transform. By leveraging the connection between SOC theory and Doob's $h$-transform, we propose a variational inference algorithm with a tight ELBO.
•
For practical real-world applications, we introduce an efficient and scalable modeling approach that enables parallelization of latent dynamics simulation and ELBO computation.
•
We demonstrate superior performance across various real-world irregularly sampled time-series tasks, including per-time-point classification, regression, and sequence interpolation and extrapolation, all with computational efficiency.
| 1,207
|
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
1 Introduction
State space models (SSMs) are widely used for modeling time series data (Fraccaro et al., 2017; Rangapuram et al., 2018; de Bézenac et al., 2020) but typically assume evenly spaced observations. However, many real-world datasets, such as healthcare (Silva et al., 2012), climate (Menne et al., 2015), are often collected as irregular time series, which poses challenges for accurate modeling. It provokes the neural differential equations (Chen et al., 2018; Rubanova et al., 2019; Li et al., 2020; Kidger et al., 2020; Zeng et al., 2023) for a continuous-time dynamical modeling of a time-series. Moreover, by incorporating stochastic dynamical models with SSM models, the application of continuous-dynamical state space models (CD-SSM) (Jazwinski, 2007) with neural networks have emerged which allows for more flexible modeling (Schirmer et al., 2022; Ansari et al., 2023). However, they typically rely on the Bayesian recursion to approximate the desired latent posterior distribution (filtering/smoothing distribution) involves sequential prediction and updating, resulting in computational costs that scale with the length of observations (Särkkä & García-Fernández, 2020).
Instead of relying on Bayesian recursion, we propose an Amortized Control of continuous State Space Model (ACSSM), a variational inference (VI) algorithm that directly approximates the posterior distribution using a measure-theoretic formulation, enable a simulation of continuous trajectories from such distributions. To achieve this, we introduce a Feynman-Kac model (Del Moral, 2011; Chopin et al., 2020), which facilitates a thorough sequential analysis. Within this formulation, we propose a multi-marginal Doob’s hℎh-transform to provide conditional latent dynamics described by stochastic differential equations (SDEs) which conditioned on a irregularly sampled, set of discrete observations. It extends the previous Doob’s hℎh-transform (Chopin et al., 2023) by incorporating multi-marginal constraints instead of a single terminal constraint.
Considering that the estimation of the Doob’s hℎh-transform is infeasible in general, we utilize the theory of stochastic optimal control (SOC) (Fleming & Soner, 2006; Carmona, 2016) to simulate the conditioned SDEs by approximating the Doob’s hℎh-transform. It allow us to propose a tight evidence lower bound (ELBO) for the aforementioned VI algorithm by establishing a fundamental connection between the partial differential equations (PDEs) associated with Doob’s hℎh-transform and SOC. In the Sequential Monte Carlo (SMC) literatures, Doob’s hℎh-transform is analogous to the twist-function (Guarniero et al., 2017). In this context, Heng et al. (2020) proposed an algorithm focused on approximating the twisted transition kernel instead of the underlying dynamics. Recently, a concurrent study Lu & Wang (2024) presented a continuous-time expansion. However, both studies primarily concentrate on approximation methods rather than practical real-world applications.
In practical situations, the computation of ELBO for a VI algorithm might impractical due to the instability and high memory demands associated with gradient computation of the approximated stochastic dynamics over the entire sequence interval (Liu et al., 2024; Park et al., 2024). To address this issue, we propose two efficient modeling approaches:
1) We establish amortized inference by introducing an auxiliary variable to the latent space, generated by a neural network encoder-decoder. It separates the learning of representations in high-dimensional time-series from generative modeling in the latent dynamics (Fraccaro et al., 2017), offering a more flexible parameterization. Moreover, it allows inference of the posterior distribution for a novel time-series sequence without relying on Bayesian recursion by incorporating the learned control function.
2) We use the simulation-free property, which allows for parallel computation of latent marginal distributions over a given interval. Moreover, we explore a more flexible linear approximation of the drift function in controlled SDEs to enhance the efficiency of the proposed controlled dynamics.
We evaluated ACSSM on several time-series tasks across various real-world datasets. Our experiments show that ACSSM consistently outperforms existing baseline models on each task, demonstrating its effectiveness in capturing the underlying dynamics of irregular time series. Additionally, ACSSM achieves significant computational efficiency, enabling faster training than dynamics-based models that rely on numerical simulations.
We summarize our contributions as follows:
• We extend the theory of Doob’s $h$-transform, previously studied with a single marginal constraint, to the multi-marginal case. This establishes the existence of a class of conditioned SDEs that depend on future observations, whose solutions induce the true posterior path measure within the framework of the CD-SSM.
• We reformulate the simulation of conditioned SDEs as an SOC problem that approximates the otherwise intractable Doob’s $h$-transform. By leveraging the connection between SOC theory and Doob’s $h$-transform, we propose a variational inference algorithm with a tight ELBO.
• For practical real-world applications, we introduce an efficient and scalable modeling approach that enables parallelization of latent dynamics simulation and ELBO computation.
• We demonstrate superior performance across various real-world irregularly sampled time-series tasks, including per-time-point classification, regression, and sequence interpolation and extrapolation, all with computational efficiency.
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
1 Introduction
Notation
Throughout this paper, we denote path measures by $\mathbb{P}^{(\cdot)}$, defined on the space of continuous functions $\Omega = C([0,T], \mathbb{R}^d)$. For a path measure $\mathbb{P}$, we write the conditional expectation as $\mathbb{E}^{t,\mathbf{x}}_{\mathbb{P}}[\cdot] = \mathbb{E}_{\mathbb{P}}[\cdot \,|\, \mathbf{X}_t = \mathbf{x}]$. The stochastic process corresponding to $\mathbb{P}^{(\cdot)}$ is denoted $\mathbf{X}^{(\cdot)}$, and its time-marginal distribution at time $t \in [0,T]$ is the push-forward measure $\mu^{(\cdot)}_t := (\mathbf{X}^{(\cdot)}_t)_{\#} \mathbb{P}^{(\cdot)}$ with marginal density $\mathbf{p}^{(\cdot)}_t$. This marginal density is the Radon-Nikodym derivative $d\mu^{(\cdot)}_t(\mathbf{x}) = \mathbf{p}^{(\cdot)}_t(\mathbf{x})\, d\mathcal{L}(\mathbf{x})$, where $\mathcal{L}$ denotes the Lebesgue measure. Additionally, for a function $\mathcal{V}: [0,T] \times \mathbb{R}^d \to \mathbb{R}$, we write the first and second derivatives with respect to $\mathbf{x} \in \mathbb{R}^d$ as $\nabla_{\mathbf{x}} \mathcal{V}$ and $\nabla_{\mathbf{xx}} \mathcal{V}$, respectively, and the derivative with respect to time $t \in [0,T]$ as $\partial_t \mathcal{V}$. For a sequence of functions $\{\mathcal{V}_i\}_{i \in [1:k]}$, we abbreviate $\mathcal{V}_{i,t} := \mathcal{V}_i(t, \mathbf{x})$ and write $[1:k] = \{1, \cdots, k\}$.
Finally, the Kullback-Leibler (KL) divergence between two probability measures $\mu$ and $\nu$ is defined as $D_{\mathrm{KL}}(\mu | \nu) = \int_{\mathbb{R}^d} \log \frac{d\mu}{d\nu}(\mathbf{x})\, d\mu(\mathbf{x})$ when $\mu$ is absolutely continuous with respect to $\nu$, and $D_{\mathrm{KL}}(\mu | \nu) = +\infty$ otherwise.
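As a quick numerical illustration of this definition (an illustrative sketch, not part of the paper’s method), the Monte Carlo estimate $D_{\mathrm{KL}}(\mu|\nu) \approx \frac{1}{n}\sum_j \log\frac{d\mu}{d\nu}(\mathbf{x}_j)$ with $\mathbf{x}_j \sim \mu$ can be checked against the closed form for one-dimensional Gaussians; all names and parameters below are assumptions of the example.

```python
import math
import random

def kl_gauss(m0, s0, m1, s1):
    # Closed-form KL( N(m0, s0^2) || N(m1, s1^2) ).
    return math.log(s1 / s0) + (s0 ** 2 + (m0 - m1) ** 2) / (2 * s1 ** 2) - 0.5

def kl_monte_carlo(m0, s0, m1, s1, n=200_000, seed=0):
    # D_KL(mu|nu) = E_mu[ log(dmu/dnu) ], estimated with samples drawn from mu.
    rng = random.Random(seed)
    def logpdf(x, m, s):
        return -0.5 * math.log(2 * math.pi * s * s) - (x - m) ** 2 / (2 * s * s)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(m0, s0)
        total += logpdf(x, m0, s0) - logpdf(x, m1, s1)
    return total / n

exact = kl_gauss(0.0, 1.0, 1.0, 2.0)
estimate = kl_monte_carlo(0.0, 1.0, 1.0, 2.0)
```

With these parameters the exact value is $\log 2 + 1/4 - 1/2 \approx 0.443$, and the sample average converges to it at the usual $O(n^{-1/2})$ rate.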
2 Preliminaries
Continuous-Discrete State Space Model
Consider a set of time stamps (regular or irregular) $\{t_i\}_{i=0}^{k}$ over an interval $\mathcal{T} = [0,T]$, i.e., $0 = t_0 \leq \cdots \leq t_k = T$. The CD-SSM assumes that a continuous-time Markov state trajectory $\mathbf{X}_{0:T}$ in the latent space $\mathbb{R}^d$ is given as the solution of the SDE:
(Prior State)   $d\mathbf{X}_t = b(t, \mathbf{X}_t)\,dt + d\mathbf{W}_t$,   (1)
where $\mathbf{X}_0 \sim \mu_0$ and $\{\mathbf{W}_t\}_{t \in [0,T]}$ is an $\mathbb{R}^d$-valued Wiener process independent of $\mu_0$. Since $\mathbf{X}_t$ is a Markov process, the time evolution of the marginal distribution $\mu_t$ is governed by a transition density, which is the solution of the Fokker-Planck equation associated with $\mathbf{X}_t$. This allows us to define a path measure $\mathbb{P}$ that represents the weak solutions of the SDE in (1) over the interval $[0,T]$ (see Appendix A.2 for details).
For a measurement model $g_i(\mathbf{y}_{t_i} | \mathbf{X}_{t_i})$, we consider the case where we have access only to realizations of the observation process at the discrete time stamps $\{t_i\}_{i \in [1:k]}$, i.e., $\mathbf{y}_{t_i} \sim g_i(\mathbf{y}_{t_i} | \mathbf{X}_{t_i})$ for all $i \in [1:k]$. In this paper, our goal is to infer the class of SDEs that induce the filtering/smoothing path measure $\mathbb{P}^{\star} := \mathbb{P}^{\star}(\cdot \,|\, \mathcal{H}_{t_k})$, the conditional distribution over the interval $[0,T]$ for a given $\mathbb{P}$ and a set of observations up to time $t_k$, $\mathcal{H}_{t_k} = \{\mathbf{y}_{t_i} \,|\, i \leq k\}$:
(Posterior Dist.)   $d\mathbb{P}^{\star}(\mathbf{X}_{0:T} | \mathcal{H}_{t_k}) = \frac{1}{\mathbf{Z}(\mathcal{H}_{t_k})} \prod_{i=1}^{k} g_i(\mathbf{y}_{t_i} | \mathbf{X}_{t_i})\, d\mathbb{P}(\mathbf{X}_{0:T})$,   (2)
where the normalizing constant $\mathbf{Z}(\mathcal{H}_{t_k}) = \mathbb{E}_{\mathbb{P}}\left[\prod_{i=1}^{k} g_i(\mathbf{y}_{t_i} | \mathbf{X}_{t_i})\right]$ serves as the observation likelihood. The path-measure formulation of the posterior distribution in (2) is referred to as a Feynman-Kac model; see (Del Moral, 2011; Chopin et al., 2020) for a more comprehensive treatment.
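To make the construction concrete, the following sketch (not from the paper; the Ornstein-Uhlenbeck drift, noise scale, and observation values are toy assumptions) discretizes the prior SDE in (1) with Euler-Maruyama and estimates the observation likelihood $\mathbf{Z}(\mathcal{H}_{t_k})$ by averaging the Feynman-Kac weights $\prod_i g_i$ over prior trajectories.

```python
import math
import random

rng = random.Random(0)

def simulate_prior(b, x0, T, n_steps):
    """Euler-Maruyama discretization of dX_t = b(t, X_t) dt + dW_t, as in (1)."""
    dt = T / n_steps
    path = [x0]
    for k in range(n_steps):
        t, x = k * dt, path[-1]
        path.append(x + b(t, x) * dt + rng.gauss(0.0, math.sqrt(dt)))
    return path

def log_g(y, x, noise_std=0.5):
    # Gaussian measurement density g_i(y_{t_i} | x_{t_i}) (toy choice).
    return -0.5 * math.log(2 * math.pi * noise_std ** 2) - (y - x) ** 2 / (2 * noise_std ** 2)

# Estimate Z(H_{t_k}) = E_P[ prod_i g_i(y_{t_i} | X_{t_i}) ] by averaging
# Feynman-Kac weights over prior trajectories.
T, n_steps = 1.0, 100
obs = {50: 0.3, 100: -0.1}        # step index -> toy observation value
b = lambda t, x: -x               # toy Ornstein-Uhlenbeck drift (assumption)

weights = []
for _ in range(2000):
    path = simulate_prior(b, 0.0, T, n_steps)
    logw = sum(log_g(y, path[idx]) for idx, y in obs.items())
    weights.append(math.exp(logw))
Z_hat = sum(weights) / len(weights)
```

This forward-sampling estimator is the quantity that SMC methods stabilize with resampling; its variance grows quickly with the number of observations, which is one motivation for conditioning the dynamics themselves as in Section 3.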
3 Controlled Continuous-Discrete State Space Model
In this section, we introduce our proposed model, ACSSM. First, we present the multi-marginal Doob’s $h$-transform, outlining the continuous dynamics for $\mathbb{P}^{\star}$, in Section 3.1. Then, in Section 3.2, we frame the VI problem of approximating $\mathbb{P}^{\star}$ using SOC. To support scalable real-world applications, we discuss efficient modeling and amortized inference in Section 3.3. (Proofs and detailed derivations are provided in Appendix A.)
3.1 Multi-Marginal Doob’s $h$-transform
Before applying VI to approximate the posterior distribution $\mathbb{P}^{\star}$ via SOC, we first show that there exists a class of SDEs whose solutions induce a path measure equal to $\mathbb{P}^{\star}$ in (2). This formulation provides valuable insight for defining an appropriate objective function for the SOC problem in the next section. To this end, we first define a sequence of normalized potential functions $\{f_i\}_{i \in [1:k]}$, where each $f_i: \mathbb{R}^d \to \mathbb{R}_+$, for all $i \in [1:k]$, is given by
$f_i(\mathbf{y}_{t_i} | \mathbf{x}_{t_i}) = \frac{g_i(\mathbf{y}_{t_i} | \mathbf{x}_{t_i})}{\mathbf{L}_i(g_i)}$,   (3)
where $\mathbf{L}_i(g_i) = \int_{\mathbb{R}^d} g_i(\mathbf{y}_{t_i} | \mathbf{x}_{t_i})\, d\mathbb{P}(\mathbf{x}_{0:T})$, for all $i \in [1:k]$, is the normalization constant. Observe that the potential functions $\{f_i\}_{i \in [1:k]}$ defined in (3) satisfy the normalizing property $\mathbb{E}_{\mathbb{P}}[\prod_{i=1}^{k} f_i(\mathbf{y}_{t_i} | \mathbf{x}_{t_i})] = 1$ and, from (2), $d\mathbb{P}^{\star}(\mathbf{x}_{0:T} | \mathcal{H}_{t_k}) = \prod_{i=1}^{k} f_i(\mathbf{y}_{t_i} | \mathbf{x}_{t_i})\, d\mathbb{P}(\mathbf{x}_{0:T})$. Now, with the choice of reference measure $\mathbb{P}$ induced by the Markov process in (1), we can define conditioned SDEs, conditioned on $\mathcal{H}_{t_k}$, that induce the desired path measure $\mathbb{P}^{\star}$. Note that this is an extension of the original Doob’s $h$-transform (Doob, 1957) to multiple marginal constraints. Below, we summarize the relevant result.
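A minimal numeric check of the normalization in (3) (illustrative code, not the paper’s implementation): replacing the integral that defines $\mathbf{L}_i$ by a sample average over prior draws makes each self-normalized potential average to exactly one under those same draws.

```python
import math
import random

rng = random.Random(1)

def g(y, x, s=0.5):
    # Unnormalized Gaussian potential g_i(y_{t_i} | x_{t_i}) (toy choice).
    return math.exp(-((y - x) ** 2) / (2 * s * s))

# Prior draws of X_{t_i} at two observation times, and toy observations.
samples = [[rng.gauss(0.0, 1.0) for _ in range(10_000)] for _ in range(2)]
ys = [0.4, -0.2]

# L_i(g_i) = E_P[ g_i(y_{t_i} | X_{t_i}) ]  ->  f_i = g_i / L_i, as in (3).
f = []
for xs_i, y in zip(samples, ys):
    L = sum(g(y, x) for x in xs_i) / len(xs_i)
    f.append([g(y, x) / L for x in xs_i])

# Each self-normalized potential averages to one under the prior draws.
means = [sum(fi) / len(fi) for fi in f]
```

The same self-normalization is what importance-sampling and SMC implementations perform in practice, since the constants $\mathbf{L}_i$ are rarely available in closed form.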
Theorem 3.1 (Multi-marginal Doob’s $h$-transform).
Let us define a sequence of functions $\{h_i\}_{i \in [1:k]}$, where each $h_i: [t_{i-1}, t_i) \times \mathbb{R}^d \to \mathbb{R}_+$, for all $i \in [1:k]$, is the conditional expectation $h_i(t, \mathbf{x}_t) := \mathbb{E}_{\mathbb{P}}\left[\prod_{j \geq i}^{k} f_j(\mathbf{y}_{t_j} | \mathbf{X}_{t_j}) \,\middle|\, \mathbf{X}_t = \mathbf{x}_t\right]$,
where $\{f_i\}_{i \in [1:k]}$ is defined in (3). Now, we define a function $h: [0,T] \times \mathbb{R}^d \to \mathbb{R}_+$ by combining the functions $\{h_i\}_{i \in [1:k]}$:
$h(t, \mathbf{x}) := \sum_{i=1}^{k} h_i(t, \mathbf{x})\, \mathbf{1}_{[t_{i-1}, t_i)}(t)$.   (4)
Then, with the initial condition $\mu^{\star}_0(d\mathbf{x}_0) = h_1(t_0, \mathbf{x}_0)\, \mu_0(d\mathbf{x}_0)$, the solution of the following conditioned SDE induces the posterior path measure $\mathbb{P}^{\star}$ in (2):
(Conditioned State)   $d\mathbf{X}^{\star}_t = \left[b(t, \mathbf{X}^{\star}_t) + \nabla_{\mathbf{x}} \log h(t, \mathbf{X}^{\star}_t)\right] dt + d\mathbf{W}_t$.   (5)
Theorem 3.1 demonstrates that we can obtain sample trajectories from $\mathbb{P}^{\star}$ in (2) by simulating the dynamics in (5). However, estimating the functions $\{h_i\}_{i \in [1:k]}$ requires both estimating the sequence of normalization constants $\{\mathbf{L}_i\}_{i \in [1:k]}$ and computing the conditional expectations, which is infeasible in general. For these reasons, we propose a VI algorithm to approximate the functions $\{h_i\}_{i \in [1:k]}$, and we derive its variational bound by exploiting the theory of SOC.
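The cost of working with the $h_i$ directly can be seen in a naive nested Monte Carlo estimator (an illustrative sketch; the Ornstein-Uhlenbeck drift, observation layout, and Gaussian potentials are assumptions, and the potentials are used up to the constants $\mathbf{L}_j$ for simplicity): every single query of $h(t, \mathbf{x})$ requires simulating a fresh batch of prior trajectories forward from $(t, \mathbf{x})$.

```python
import math
import random

rng = random.Random(2)

def h_estimate(t_idx, x, obs, n_steps, dt, n_paths=500):
    """Naive Monte Carlo estimate of h(t, x) = E_P[ prod_{j>=i} f_j | X_t = x ]:
    simulate prior trajectories dX = -X dt + dW forward from (t, x) via
    Euler-Maruyama and average the product of Gaussian potentials."""
    def log_pot(y, z, s=0.5):
        return -0.5 * math.log(2 * math.pi * s * s) - (y - z) ** 2 / (2 * s * s)
    total = 0.0
    for _ in range(n_paths):
        z, logw = x, 0.0
        for k in range(t_idx, n_steps):
            z += -z * dt + rng.gauss(0.0, math.sqrt(dt))
            if k + 1 in obs:                 # potential at each observation step
                logw += log_pot(obs[k + 1], z)
        total += math.exp(logw)
    return total / n_paths

obs = {50: 0.3, 100: -0.1}                   # step index -> toy observation
h_near = h_estimate(0, 0.3, obs, n_steps=100, dt=0.01)   # start near the data
h_far = h_estimate(0, 5.0, obs, n_steps=100, dt=0.01)    # start far from it
```

States that agree with the future observations receive much larger $h$-values, which is exactly the information the drift correction $\nabla_{\mathbf{x}} \log h$ in (5) injects; repeating this simulation at every $(t, \mathbf{x})$ visited by (5) is what makes the direct approach intractable.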
3.2 Stochastic Optimal Control
SOC (Fleming & Soner, 2006; Carmona, 2016) is a mathematical framework for finding control policies that achieve a given objective. In this paper, we define the following control-affine SDE, which adjusts the prior dynamics in (1) with a Markov control $\alpha: [0,T] \times \mathbb{R}^d \to \mathbb{R}^d$:
(Controlled State)   $d\mathbf{X}^{\alpha}_t = \left[b(t, \mathbf{X}^{\alpha}_t) + \alpha(t, \mathbf{X}^{\alpha}_t)\right] dt + d\tilde{\mathbf{W}}_t$,   (6)
where $\mathbf{X}^{\alpha}_0 \sim \mu_0$. We refer to the SDE in (6) as the controlled SDE. We can expect that, for a well-defined set of admissible controls $\alpha \in \mathbb{A}$, the class of controlled SDEs (6) encompasses the SDE in (5). This implies that the desired path measure $\mathbb{P}^{\star}$ can be attained through the SOC formulation. In general, the goal of SOC is to find the optimal control policy $\alpha^{\star}$ that minimizes a given cost function $\mathcal{J}(t, \mathbf{x}_t, \alpha)$, i.e., $\alpha^{\star}(t, \mathbf{x}_t) = \arg\min_{\alpha \in \mathbb{A}} \mathcal{J}(t, \mathbf{x}_t, \alpha)$, and to determine the value function $\mathcal{V}(t, \mathbf{x}_t) = \min_{\alpha \in \mathbb{A}} \mathcal{J}(t, \mathbf{x}_t, \alpha)$,
where $\mathcal{V}(t, \mathbf{x}_t) := \mathcal{J}(t, \mathbf{x}_t, \alpha^{\star}) \leq \mathcal{J}(t, \mathbf{x}_t, \alpha)$ holds for any $\alpha \in \mathbb{A}$. Below, we demonstrate how, with a carefully chosen cost function, the theory of SOC establishes a connection between the two classes of SDEs (5) and (6). This connection enables the development of a variational inference algorithm with a tight evidence lower bound (ELBO). To this end, we consider the following cost function:
$\mathcal{J}(t, \mathbf{x}_t, \alpha) = \mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_t^T \frac{1}{2} \lVert \alpha(s, \mathbf{X}^{\alpha}_s) \rVert^2\, ds - \sum_{i:\{t \leq t_i\}} \log f_i(\mathbf{y}_{t_i} | \mathbf{X}^{\alpha}_{t_i}) \,\middle|\, \mathbf{X}^{\alpha}_t = \mathbf{x}_t\right]$.   (7)
Then, the value function 𝒱𝒱\mathcal{V} for (7) satisfies the dynamic programming principle (Carmona, 2016):
Theorem 3.2. Let us consider a sequence of left-continuous functions $\{\mathcal{V}_i\}_{i \in [1:k+1]}$, where each $\mathcal{V}_i \in C^{1,2}([t_{i-1}, t_i) \times \mathbb{R}^d)$ is given by
$\mathcal{V}_i(t, \mathbf{x}_t) := \min_{\alpha \in \mathbb{A}} \mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_{t_{i-1}}^{t_i} \frac{1}{2} \lVert \alpha_s \rVert^2\, ds - \log f_i(\mathbf{y}_{t_i} | \mathbf{X}^{\alpha}_{t_i}) + \mathcal{V}_{i+1}(t_i, \mathbf{X}^{\alpha}_{t_i}) \,\middle|\, \mathbf{X}_t = \mathbf{x}_t\right]$,   (8)
for all $i \in [1:k]$, with $\mathcal{V}_{k+1} = 0$. Then, for any $0 \leq t \leq u \leq T$, the value function $\mathcal{V}$ for the cost function in (7) satisfies the following recursion:
, 1 = 𝒱(t,𝐱t)=minα∈𝔸𝔼ℙα[∫ttI(u)12∥αs∥2𝑑s−∑i:{t≤ti≤tI(u)}logfi+𝒱I(u)+1(tI(u),𝐗tI(u)α)|𝐗tα=𝐱t],𝒱𝑡subscript𝐱𝑡subscript𝛼𝔸subscript𝔼superscriptℙ𝛼delimited-[]superscriptsubscript𝑡subscript𝑡𝐼𝑢12superscriptdelimited-∥∥subscript𝛼𝑠2differential-d𝑠subscript:𝑖𝑡subscript𝑡𝑖subscript𝑡𝐼𝑢subscript𝑓𝑖conditionalsubscript𝒱𝐼𝑢1subscript𝑡𝐼𝑢subscriptsuperscript𝐗𝛼subscript𝑡𝐼𝑢subscriptsuperscript𝐗𝛼𝑡subscript𝐱𝑡\mathcal{V}(t,\mathbf{x}_{t})=\min_{\alpha\in\mathbb{A}}\mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_{t}^{t_{I(u)}}\frac{1}{2}\left\lVert\alpha_{s}\right\rVert^{2}ds-\sum_{i:\{t\leq t_{i}\leq t_{I(u)}\}}\log f_{i}+\mathcal{V}_{I(u)+1}(t_{I(u)},\mathbf{X}^{\alpha}_{t_{I(u)}})|\mathbf{X}^{\alpha}_{t}=\mathbf{x}_{t}\right],. , 2 = . , 3 = (9)
with the indexing function I(u)=max{i∈[1:k]|ti≤u}I(u)=\max\{i\in[1:k]|t_{i}\leq u\} and fi=fi(𝐲ti|𝐗tiα)subscript𝑓𝑖subscript𝑓𝑖conditionalsubscript𝐲subscript𝑡𝑖subscriptsuperscript𝐗𝛼subscript𝑡𝑖f_{i}=f_{i}(\mathbf{y}_{t_{i}}|\mathbf{X}^{\alpha}_{t_{i}})
The value functions presented in Theorem 3.2 suggest that the objective of the optimal control policy α𝛼\alpha for the interval [ti−1,ti)subscript𝑡𝑖1subscript𝑡𝑖[t_{i-1},t_{i}) is not just to minimize the negative log-potential −logfi(𝐲ti|⋅)subscript𝑓𝑖conditionalsubscript𝐲subscript𝑡𝑖⋅-\log f_{i}(\mathbf{y}_{t_{i}}|\cdot) for the immediate observation 𝐲tisubscript𝐲subscript𝑡𝑖\mathbf{y}_{t_{i}}. Instead, it also involves considering future costs {𝒱j}j∈[i+1:k]subscriptsubscript𝒱𝑗𝑗delimited-[]:𝑖1𝑘\{\mathcal{V}_{j}\}_{j\in[i+1:k]} and the corresponding future observations {𝐲tj}j∈[i+1:k]subscriptsubscript𝐲subscript𝑡𝑗𝑗delimited-[]:𝑖1𝑘\{\mathbf{y}_{t_{j}}\}_{j\in[i+1:k]}, since 𝒱isubscript𝒱𝑖\mathcal{V}_{i} follows a recursive structure. Since our goal is to approximate ℙ⋆superscriptℙ⋆\mathbb{P}^{\star} in (5), it is natural that the optimal control policy α⋆superscript𝛼⋆\alpha^{\star} should reflect the future observations {𝐲tj}j∈[i+1:k]subscriptsubscript𝐲subscript𝑡𝑗𝑗delimited-[]:𝑖1𝑘\{\mathbf{y}_{t_{j}}\}_{j\in[i+1:k]}, as the hℎh-function inherently does. Next, we will derive the explicit form of the optimal control policy.
| 1,866
|
Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Theorem 3.2 (Dynamic Programming Principle).
Consider a sequence of left-continuous functions $\{\mathcal{V}_{i}\}_{i\in[1:k+1]}$, where each $\mathcal{V}_{i}\in C^{1,2}([t_{i-1},t_{i})\times\mathbb{R}^{d})$ is defined by
$$\mathcal{V}_{i}(t,\mathbf{x}_{t}):=\min_{\alpha\in\mathbb{A}}\mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_{t_{i-1}}^{t_{i}}\frac{1}{2}\left\lVert\alpha_{s}\right\rVert^{2}ds-\log f_{i}(\mathbf{y}_{t_{i}}|\mathbf{X}^{\alpha}_{t_{i}})+\mathcal{V}_{i+1}(t_{i},\mathbf{X}^{\alpha}_{t_{i}})\;\Big|\;\mathbf{X}_{t}=\mathbf{x}_{t}\right],\qquad(8)$$
for all $i\in[1:k]$, with $\mathcal{V}_{k+1}=0$. Then, for any $0\leq t\leq u\leq T$, the value function $\mathcal{V}$ for the cost function in (7) satisfies the recursion
$$\mathcal{V}(t,\mathbf{x}_{t})=\min_{\alpha\in\mathbb{A}}\mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_{t}^{t_{I(u)}}\frac{1}{2}\left\lVert\alpha_{s}\right\rVert^{2}ds-\sum_{i:\,t\leq t_{i}\leq t_{I(u)}}\log f_{i}+\mathcal{V}_{I(u)+1}(t_{I(u)},\mathbf{X}^{\alpha}_{t_{I(u)}})\;\Big|\;\mathbf{X}^{\alpha}_{t}=\mathbf{x}_{t}\right],\qquad(9)$$
with the indexing function $I(u)=\max\{i\in[1:k]\,|\,t_{i}\leq u\}$ and the shorthand $f_{i}=f_{i}(\mathbf{y}_{t_{i}}|\mathbf{X}^{\alpha}_{t_{i}})$.
The value functions in Theorem 3.2 show that the objective of the optimal control policy $\alpha$ on the interval $[t_{i-1},t_{i})$ is not merely to minimize the negative log-potential $-\log f_{i}(\mathbf{y}_{t_{i}}|\cdot)$ of the immediate observation $\mathbf{y}_{t_{i}}$: because $\mathcal{V}_{i}$ is defined recursively, it also accounts for the future costs $\{\mathcal{V}_{j}\}_{j\in[i+1:k]}$ and the corresponding future observations $\{\mathbf{y}_{t_{j}}\}_{j\in[i+1:k]}$. Since our goal is to approximate $\mathbb{P}^{\star}$ in (5), it is natural that the optimal control policy $\alpha^{\star}$ should reflect the future observations, just as the $h$-function inherently does. Next, we derive the explicit form of the optimal control policy.
Theorem 3.3 (Verification Theorem).
Suppose there exists a sequence of left-continuous functions $\mathcal{V}_{i}(t,\mathbf{x})\in C^{1,2}([t_{i-1},t_{i})\times\mathbb{R}^{d})$, for all $i\in[1:k]$, satisfying the following Hamilton-Jacobi-Bellman (HJB) equation:
$$\partial_{t}\mathcal{V}_{i,t}+\mathcal{A}_{t}\mathcal{V}_{i,t}+\min_{\alpha\in\mathbb{A}}\left[(\nabla_{\mathbf{x}}\mathcal{V}_{i,t})^{\top}\alpha_{i,t}+\frac{1}{2}\left\lVert\alpha_{i,t}\right\rVert^{2}\right]=0,\quad t_{i-1}\leq t<t_{i},\qquad(10)$$
$$\mathcal{V}_{i}(t_{i},\mathbf{x})=-\log f_{i}(\mathbf{y}_{t_{i}}|\mathbf{x})+\mathcal{V}_{i+1}(t_{i},\mathbf{x}),\quad t=t_{i},\quad\forall i\in[1:k],\qquad(11)$$
where the minimum is attained at $\alpha^{\star}_{i}(t,\mathbf{x})=-\nabla_{\mathbf{x}}\mathcal{V}_{i}(t,\mathbf{x})$. Now, define a function $\alpha:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}$ by patching together the piecewise optimal controls $\{\alpha^{\star}_{i}\}_{i\in[1:k]}$,
$$\alpha^{\star}(t,\mathbf{x}):=\sum_{i=1}^{k}\alpha^{\star}_{i}(t,\mathbf{x})\,\mathbf{1}_{[t_{i-1},t_{i})}(t).\qquad(12)$$
Then $\mathcal{V}(t,\mathbf{x}_{t})=\mathcal{J}(t,\mathbf{x}_{t},\alpha^{\star})\leq\mathcal{J}(t,\mathbf{x}_{t},\alpha)$ holds for any $(t,\mathbf{x}_{t})\in[0,T]\times\mathbb{R}^{d}$ and $\alpha\in\mathbb{A}$.
Note that the optimal control (12) for the cost function (7) shares a similar structure with (4). Since the theory of PDEs establishes a fundamental link between various classes of PDEs and SDEs (Richter & Berner, 2022; Berner et al., 2024), it reveals the inherent connection between (12) and (4), thereby enabling us to simulate the conditional SDE (5) in an alternative way.
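The minimization inside (10) can be carried out in closed form by completing the square; the following short derivation (a standard argument, included for completeness) makes the optimal control and the resulting nonlinear PDE explicit:

```latex
\min_{\alpha}\Big[(\nabla_{\mathbf{x}}\mathcal{V}_{i,t})^{\top}\alpha+\tfrac{1}{2}\lVert\alpha\rVert^{2}\Big]
=\min_{\alpha}\Big[\tfrac{1}{2}\big\lVert\alpha+\nabla_{\mathbf{x}}\mathcal{V}_{i,t}\big\rVert^{2}
-\tfrac{1}{2}\lVert\nabla_{\mathbf{x}}\mathcal{V}_{i,t}\rVert^{2}\Big]
=-\tfrac{1}{2}\lVert\nabla_{\mathbf{x}}\mathcal{V}_{i,t}\rVert^{2},
```

attained at $\alpha^{\star}_{i,t}=-\nabla_{\mathbf{x}}\mathcal{V}_{i,t}$, so (10) reduces to the nonlinear PDE $\partial_{t}\mathcal{V}_{i,t}+\mathcal{A}_{t}\mathcal{V}_{i,t}-\tfrac{1}{2}\lVert\nabla_{\mathbf{x}}\mathcal{V}_{i,t}\rVert^{2}=0$.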
Lemma 3.4 (Hopf-Cole Transformation).
The $h$-function satisfies the following linear PDE:
$$\partial_{t}h_{i,t}+\mathcal{A}_{t}h_{i,t}=0,\quad t_{i-1}\leq t<t_{i},\qquad(13)$$
$$h_{i}(t_{i},\mathbf{x})=f_{i}(\mathbf{y}_{t_{i}}|\mathbf{x})\,h_{i+1}(t_{i},\mathbf{x}),\quad t=t_{i},\quad\forall i\in[1:k].\qquad(14)$$
Moreover, under the logarithmic transformation $\mathcal{V}=-\log h$, $\mathcal{V}$ satisfies the HJB equation in (10)-(11).
According to Lemma 3.4, the solution of the linear PDE in (13)-(14) is the negative exponential of the solution of the HJB equation in (10)-(11), i.e., $h=e^{-\mathcal{V}}$. This leads to the following corollary:
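To see why the transformation works, substitute $h=e^{-\mathcal{V}}$ into (13); the sketch below assumes, for concreteness, a generator of the form $\mathcal{A}_{t}=b_{t}\cdot\nabla_{\mathbf{x}}+\tfrac{1}{2}\Delta_{\mathbf{x}}$ (unit diffusion):

```latex
\partial_{t}h=-e^{-\mathcal{V}}\partial_{t}\mathcal{V},\qquad
\nabla_{\mathbf{x}}h=-e^{-\mathcal{V}}\nabla_{\mathbf{x}}\mathcal{V},\qquad
\Delta_{\mathbf{x}}h=e^{-\mathcal{V}}\big(\lVert\nabla_{\mathbf{x}}\mathcal{V}\rVert^{2}-\Delta_{\mathbf{x}}\mathcal{V}\big),
\\[4pt]
0=\partial_{t}h+\mathcal{A}_{t}h
=-e^{-\mathcal{V}}\Big(\partial_{t}\mathcal{V}+b_{t}\cdot\nabla_{\mathbf{x}}\mathcal{V}
+\tfrac{1}{2}\Delta_{\mathbf{x}}\mathcal{V}-\tfrac{1}{2}\lVert\nabla_{\mathbf{x}}\mathcal{V}\rVert^{2}\Big).
```

Since $e^{-\mathcal{V}}>0$, the bracket must vanish, which is exactly the HJB equation (10) with the optimal control substituted in; the boundary condition (14) likewise maps to (11) under $-\log$.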
Corollary 3.5 (Optimal Control).
The optimal control $\alpha^{\star}$ induced by the cost function (7) with dynamics (6) satisfies $\alpha^{\star}=\nabla_{\mathbf{x}}\log h$. In other words, we can simulate the conditional SDE in (5) by simulating the controlled SDE (6) with the optimal control $\alpha^{\star}$.
Corollary 3.5 states that the Markov processes induced by $\alpha^{\star}$ and by $\nabla_{\mathbf{x}}\log h$ are equivalent under the same initial condition. However, while the path measure $\mathbb{P}^{\alpha}$ is induced by the controlled dynamics in (6) with initial condition $\mu_{0}$, the conditioned dynamics (5) require the initial condition $\mu_{0}^{\star}$ to induce the desired path measure $\mathbb{P}^{\star}$. In other words, even with the optimal control $\alpha^{\star}$, the discrepancy between $\mu_{0}$ and $\mu^{\star}_{0}$ remains, keeping $\mathbb{P}^{\star}$ and $\mathbb{P}^{\alpha^{\star}}$ misaligned. Fortunately, a surrogate cost function can be derived from the variational representation under the KL-divergence, allowing us to find the optimal control while minimizing this discrepancy.
Theorem 3.6 (Tight Variational Bound).
Assume that the path measure $\mathbb{P}^{\alpha}$ induced by (6) satisfies $D_{\text{KL}}(\mathbb{P}^{\alpha}|\mathbb{P}^{\star})<\infty$ for every $\alpha\in\mathbb{A}$. Then, for the cost function $\mathcal{J}$ in (7) and $\mu^{\star}_{0}$ in (5), it holds that:
$$D_{\text{KL}}(\mathbb{P}^{\alpha}|\mathbb{P}^{\star})=D_{\text{KL}}(\mu_{0}|\mu^{\star}_{0})+\mathbb{E}_{\mathbf{x}_{0}\sim\mu_{0}}\left[\mathcal{J}(0,\mathbf{x}_{0},\alpha)\right]=\mathcal{L}(\alpha)+\log\mathbf{Z}(\mathcal{H}_{t_{k}})\geq 0,\qquad(15)$$
where the objective function $\mathcal{L}(\alpha)$ (a negative ELBO) is given by:
$$\mathcal{L}(\alpha)=\mathbb{E}_{\mathbb{P}^{\alpha}}\left[\int_{0}^{T}\frac{1}{2}\left\lVert\alpha(s,\mathbf{X}^{\alpha}_{s})\right\rVert^{2}ds-\sum_{i=1}^{k}\log g_{i}(\mathbf{y}_{t_{i}}|\mathbf{X}^{\alpha}_{t_{i}})\right]\geq-\log\mathbf{Z}(\mathcal{H}_{t_{k}}).\qquad(16)$$
Moreover, assume that $\mathcal{L}(\alpha)$ has a global minimum $\alpha^{\star}=\operatorname{arg\,min}_{\alpha\in\mathbb{A}}\mathcal{L}(\alpha)$. Then equality holds in (16), i.e., $\mathcal{L}(\alpha^{\star})=-\log\mathbf{Z}(\mathcal{H}_{t_{k}})$, and $\mu^{\star}_{0}=\mu_{0}$ almost everywhere with respect to $\mu_{0}$.
Theorem 3.6 shows that the optimal control $\alpha^{\star}=\operatorname{arg\,min}_{\alpha}\mathcal{L}(\alpha)$ provides a tight variational bound for the likelihood functions $\{g_{i}\}_{i\in[1:k]}$ and the prior path measure $\mathbb{P}$ induced by (1). Furthermore, $\alpha^{\star}$ ensures that $\mu_{0}^{\star}=\mu_{0}$, so simulating the controlled SDE in (6) with $\alpha^{\star}$ and initial condition $\mu_{0}$ generates trajectories from the posterior path measure $\mathbb{P}^{\star}$ in (2).
In practice, the optimal control $\alpha^{\star}$ can be approximated by a highly flexible neural network serving as a function approximator (i.e., $\alpha:=\alpha(\cdot\,;\theta)$), optimized through gradient-based methods (Li et al., 2020; Zhang & Chen, 2022; Vargas et al., 2023).
However, gradient-based optimization requires backpropagating through the simulated diffusion process over the interval $[0,T]$ to estimate the objective function (16) for neural network training. This approach can become slow, unstable, and memory-intensive as the time horizon or the dimension of the latent space grows (Iakovlev et al., 2023; Park et al., 2024). It also contrasts with the philosophy of many recent generative models (Ho et al., 2020; Song et al., 2020), which decompose the generative process and solve the sub-problems jointly. Additionally, inference requires numerical solvers such as Euler-Maruyama (Kloeden & Platen, 2013), which can be time-consuming for long time series. This motivated us to propose the efficient and scalable modeling approach for real-world applications described in the next section.
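To make the training objective concrete, the following minimal sketch (not the paper's implementation; the one-dimensional OU prior, Gaussian likelihoods, and the two hand-picked controls are illustrative assumptions) estimates the negative ELBO (16) by Euler-Maruyama simulation and Monte Carlo averaging:

```python
import numpy as np

def neg_elbo_estimate(alpha, b, obs_times, obs_vals, log_g, T=1.0,
                      n_paths=2000, n_steps=200, x0=0.0, seed=0):
    """Monte Carlo estimate of (16): E[ int_0^T 0.5*alpha^2 ds
    - sum_i log g_i(y_{t_i} | X_{t_i}) ] along Euler-Maruyama paths
    of the controlled SDE dX = [b(t, X) + alpha(t, X)] dt + dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    cost = np.zeros(n_paths)
    obs_idx = {int(round(ti / dt)): j for j, ti in enumerate(obs_times)}
    for k in range(n_steps):
        a = alpha(k * dt, x)
        cost += 0.5 * a**2 * dt                       # control energy
        x = x + (b(k * dt, x) + a) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        if k + 1 in obs_idx:                          # an observation time t_i
            cost -= log_g(obs_vals[obs_idx[k + 1]], x)
    return cost.mean()

# toy one-dimensional model (illustrative choices, not from the paper)
b = lambda t, x: -x                                   # OU prior drift
log_g = lambda y, x: -0.5 * (y - x)**2 - 0.5 * np.log(2 * np.pi)
alpha_off = lambda t, x: np.zeros_like(x)             # uncontrolled
alpha_on = lambda t, x: 1.0 - x                       # steers paths towards y = 1
L_off = neg_elbo_estimate(alpha_off, b, [0.5, 1.0], [1.0, 1.0], log_g)
L_on = neg_elbo_estimate(alpha_on, b, [0.5, 1.0], [1.0, 1.0], log_g)
```

A control that steers trajectories toward the observations trades a small control-energy cost for a larger likelihood gain, so its estimated $\mathcal{L}$ comes out lower than the uncontrolled one; in an actual model $\alpha$ would be a neural network and this estimate would be differentiated through the simulation, which is exactly the cost discussed above.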
3.3 Efficient Modeling of the Latent System
The linear approximation of a general drift function yields simulation-free dynamics and significantly enhances scalability while maintaining high-quality performance (Deng et al., 2024). Motivated by this property, we investigate a class of linear SDEs to improve the efficiency of the proposed controlled dynamics while ensuring superior performance compared to other baselines. We introduce the following affine linear SDE:
$$d\mathbf{X}_{t}=\left[-\mathbf{A}\mathbf{X}_{t}+\alpha\right]dt+d\mathbf{W}_{t},\quad\text{where}\quad\mu_{0}\sim\mathcal{N}(\mathbf{m}_{0},\mathbf{\Sigma}_{0}),\qquad(17)$$
with a matrix $\mathbf{A}\in\mathbb{R}^{d\times d}$ and a vector $\alpha\in\mathbb{R}^{d}$. The solution $\mathbf{X}_{t}$ of (17) has a closed-form Gaussian distribution for any $t\in[0,T]$, whose mean $\mathbf{m}_{t}$ and covariance $\mathbf{\Sigma}_{t}$ can be computed explicitly by solving the ODEs (Särkkä & Solin, 2019):
$$\mathbf{m}_{t}=e^{-\mathbf{A}t}\mathbf{m}_{0}-\mathbf{A}^{-1}(e^{-\mathbf{A}t}-\mathbf{I})\alpha,\qquad(18)$$
$$\mathbf{\Sigma}_{t}=e^{-\mathbf{A}t}\mathbf{\Sigma}_{0}e^{-\mathbf{A}^{\top}t}+\int_{0}^{t}e^{-\mathbf{A}(t-s)}e^{-\mathbf{A}^{\top}(t-s)}ds.\qquad(19)$$
However, computing the moments $\mathbf{m}_{t}$ and $\mathbf{\Sigma}_{t}$ in (18)-(19) involves matrix exponentials, matrix inversions, and numerical integration. These operations can be computationally intensive, especially for large matrices or when high precision is required. The computations can be simplified by restricting $\mathbf{A}$ to be diagonal or symmetric positive semi-definite (SPD).
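As a sanity check on (18), the sketch below (with an arbitrary SPD matrix and horizon; illustrative, not from the paper) evaluates the closed-form mean through the eigen-decomposition of $\mathbf{A}$ and compares it against a fine Euler integration of the mean ODE $d\mathbf{m}_{t}/dt=-\mathbf{A}\mathbf{m}_{t}+\alpha$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
B = rng.standard_normal((d, d))
A = B @ B.T / d + np.eye(d)        # an SPD drift matrix (illustrative)
alpha = rng.standard_normal(d)
m0 = rng.standard_normal(d)
t = 0.7

# closed form (18) via the eigen-decomposition A = E diag(lam) E^T
lam, E = np.linalg.eigh(A)
expmAt = E @ np.diag(np.exp(-lam * t)) @ E.T          # e^{-A t}
m_closed = expmAt @ m0 - np.linalg.solve(A, (expmAt - np.eye(d)) @ alpha)

# reference: Euler integration of the mean ODE dm/dt = -A m + alpha
n = 50000
dt = t / n
m = m0.copy()
for _ in range(n):
    m = m + (-A @ m + alpha) * dt
```

The two results agree up to the Euler discretization error, while the closed form costs a single eigen-decomposition instead of tens of thousands of matrix-vector products.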
Remark 3.7 (Diagonalization).
Since an SPD matrix $\mathbf{A}$ admits the eigen-decomposition $\mathbf{A}=\mathbf{E}\mathbf{D}\mathbf{E}^{\top}$ with orthonormal $\mathbf{E}\in\mathbb{R}^{d\times d}$ and diagonal $\mathbf{D}$, the process $\mathbf{X}_{t}$ expressed in the standard basis can be transformed into a process $\hat{\mathbf{X}}_{t}$ with a diagonalized drift. In the space spanned by the eigen-basis $\mathbf{E}$, the dynamics in (1) can be rewritten as:
$$d\hat{\mathbf{X}}_{t}=\left[-\mathbf{D}\hat{\mathbf{X}}_{t}+\hat{\alpha}\right]dt+d\hat{\mathbf{W}}_{t},\quad\text{where}\quad\hat{\mu}_{0}\sim\mathcal{N}(\hat{\mathbf{m}}_{0},\hat{\mathbf{\Sigma}}_{0}),\qquad(20)$$
with $\hat{\mathbf{X}}_{t}=\mathbf{E}^{\top}\mathbf{X}_{t}$, $\hat{\alpha}=\mathbf{E}^{\top}\alpha$, $\hat{\mathbf{W}}_{t}=\mathbf{E}^{\top}\mathbf{W}_{t}$, $\hat{\mathbf{m}}_{t}=\mathbf{E}^{\top}\mathbf{m}_{t}$ and $\hat{\mathbf{\Sigma}}_{t}=\mathbf{E}^{\top}\mathbf{\Sigma}_{t}\mathbf{E}$. Note that $\hat{\mathbf{W}}_{t}\stackrel{d}{=}\mathbf{W}_{t}$ for any $t\in[0,T]$ due to the orthonormality of $\mathbf{E}$, so $\hat{\mathbf{W}}_{t}$ can be regarded as a standard Wiener process. Since $\mathbf{D}=\text{diag}(\bm{\lambda})$, where $\bm{\lambda}=\{\lambda_{1},\cdots,\lambda_{d}\}$ and $\lambda_{i}\geq 0$ for all $i\in[1{:}d]$, we can obtain the state distribution of $\mathbf{X}_{t}$ for any $t\in[0,T]$ by solving the ODEs in (18)-(19) analytically, without numerical computation. The results are then transformed back to the standard basis, i.e., $\mathbf{m}_{t}=\mathbf{E}\hat{\mathbf{m}}_{t}$ and $\mathbf{\Sigma}_{t}=\mathbf{E}\hat{\mathbf{\Sigma}}_{t}\mathbf{E}^{\top}$.
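As a concrete illustration of this diagonalization trick, the sketch below (hypothetical helper name; it additionally assumes, for simplicity, that the initial covariance is diagonal in the eigenbasis of $\mathbf{A}$) computes the Gaussian state moments of $d\mathbf{X}_t=[-\mathbf{A}\mathbf{X}_t+\alpha]dt+d\mathbf{W}_t$ analytically, coordinate-wise in the eigenbasis:

```python
import numpy as np

def ou_moments(A, alpha, m0, S0_diag, t):
    """Mean and covariance at time t of dX = (-A X + alpha) dt + dW,
    computed analytically in the eigenbasis of the SPD matrix A.
    Assumes the initial covariance is diagonal in that eigenbasis
    (a simplifying assumption for this sketch)."""
    lam, E = np.linalg.eigh(A)            # A = E diag(lam) E^T
    m0_h, a_h = E.T @ m0, E.T @ alpha     # rotate into the eigenbasis
    decay = np.exp(-lam * t)
    # In the eigenbasis each coordinate is an independent scalar OU process:
    m_h = decay * m0_h + (a_h / lam) * (1.0 - decay)           # per-coordinate mean
    S_h = decay**2 * S0_diag + (1.0 - decay**2) / (2.0 * lam)  # per-coordinate variance
    return E @ m_h, E @ np.diag(S_h) @ E.T                     # back to the standard basis
```

In the eigenbasis each coordinate evolves as an independent scalar Ornstein-Uhlenbeck process, which is why the familiar scalar closed forms apply per coordinate.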
Locally Linear Approximation
To leverage the simulation-free property of the linear SDEs in (17), we aim to linearize the drift function in (6). However, the naïve formulation described in (17) may limit the expressiveness of the latent dynamics for real-world applications. Hence, we introduce a parameterization strategy inspired by (Becker et al., 2019; Klushyn et al., 2021), which leverages neural networks to enhance flexibility by incorporating an attentive structure over the given observations $\mathbf{y}_{0:T}$ while maintaining a linear formulation:
$$d\mathbf{X}^{\alpha}_{t}=\left[-\mathbf{A}_{t}\mathbf{X}_{t}+\alpha_{t}\right]dt+d\tilde{\mathbf{W}}_{t}, \tag{21}$$
where the approximated drift function is constructed in affine form with the following components:
$$\mathbf{A}_{t}=\sum_{l=1}^{L}w^{(l)}_{\theta}(\mathbf{z}_{t})\mathbf{A}^{(l)},\quad\alpha_{t}=\mathbf{B}_{\theta}\mathbf{z}_{t}. \tag{22}$$
The matrix $\mathbf{A}_{t}$ is given by a convex combination of $L$ trainable base matrices $\{\mathbf{A}^{(l)}\}_{l\in[1:L]}$, where the weights $\mathbf{w}_{\theta}=\text{softmax}(\mathbf{f}_{\theta}(\mathbf{z}_{t}))$ are produced by the neural network $\mathbf{f}_{\theta}$. Additionally, $\mathbf{B}_{\theta}\in\mathbb{R}^{d\times d}$ is a trainable matrix. The latent variable $\mathbf{z}_{t}$ is produced by the transformer $\mathbf{T}_{\theta}$, which encodes the given observations $\mathbf{y}_{0:T}$, depending on the task-specific information assimilation scheme.
Figure 2 illustrates two assimilation schemes using a marked attention mechanism (see Appendix C.2 for more details): in the history assimilation scheme, the transformer $\mathbf{T}_{\theta}$ encodes information up to the current time $t$ and outputs $\mathbf{z}_{t}$, i.e., $\mathbf{z}_{t}=\mathbf{T}_{\theta}(\mathcal{H}_{t})$; in the full assimilation scheme, the transformer $\mathbf{T}_{\theta}$ encodes information over the entire interval $[0,T]$ and outputs $\mathbf{z}_{t}$, i.e., $\mathbf{z}_{t}=\mathbf{T}_{\theta}(\mathcal{H}_{T})$. Note that this general formulation, inherited from the control formulation, enables more flexible use of the information encoded from observations. In contrast, the previous Kalman-filtering-based CD-SSM method (Schirmer et al., 2022) relies on recurrent updates, which typically restricts it to historical information only.
Furthermore, since observations are updated only at discrete time steps $\{t_{i}\}_{i\in[1:k]}$, the latent variables $\mathbf{z}_{t}$ remain constant within any interval $t\in[t_{i-1},t_{i})$ for all $i\in[1:k]$, making $\mathbf{A}_{i}$ and $\alpha_{i}$ constant as well. As a result, the dynamics (21) remain linear over local intervals. This structure enables us to derive a closed-form solution for the intermediate latent states.
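A minimal sketch of this parameterization (with random stand-ins for the trainable components, which in practice come from $\mathbf{f}_{\theta}$, $\mathbf{B}_{\theta}$, and the transformer $\mathbf{T}_{\theta}$) shows how the convex combination keeps $\mathbf{A}_{t}$ symmetric positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 4, 3

# Random stand-ins for trainable pieces: L base matrices made SPD via M M^T + I,
# a projection B_theta, and the encoder output z_t (in practice z_t = T_theta(history)).
A_base = [M @ M.T + np.eye(d) for M in (rng.standard_normal((d, d)) for _ in range(L))]
B_theta = rng.standard_normal((d, d))
z_t = rng.standard_normal(d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Mixture weights from a stand-in for the network head f_theta (here a fixed linear map).
f_theta = rng.standard_normal((L, d))
w = softmax(f_theta @ z_t)                        # convex weights over the L bases

A_t = sum(wl * Al for wl, Al in zip(w, A_base))   # convex combination of SPD matrices is SPD
alpha_t = B_theta @ z_t
```

Because the softmax weights are nonnegative and sum to one, $\mathbf{A}_t$ inherits symmetry and positive definiteness from the base matrices, which is what the piecewise closed-form solution below relies on.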
Theorem 3.8 (Simulation-free estimation).
Let us consider sequences of SPD matrices $\{\mathbf{A}_{i}\}_{i\in[1:k]}$ and vectors $\{\alpha_{i}\}_{i\in[1:k]}$, and the following control-affine SDEs for all $i\in[1:k]$:
$$d\mathbf{X}_{t}=\left[-\mathbf{A}_{i}\mathbf{X}_{t}+\alpha_{i}\right]dt+\sigma d\mathbf{W}_{t},\quad t\in[t_{i-1},t_{i}). \tag{23}$$
Then the solution of (23), conditioned on the initial distribution $\mu_{0}=\mathcal{N}(\mathbf{m}_{0},\mathbf{\Sigma}_{0})$, is a Gaussian process $\mathcal{N}(\mathbf{m}_{t},\mathbf{\Sigma}_{t})$ for all $t\in[0,T]$, where the first two moments are given by
$$\mathbf{m}_{t}=e^{-(t-t_{0})\mathbf{A}_{i}}\mathbf{m}_{0}-\sum_{j=1}^{i-1}\left(e^{-(t_{i-1}-t_{j})\mathbf{A}_{j}}\mathbf{A}_{j}^{-1}(e^{-(t_{j}-t_{j-1})\mathbf{A}_{j}}-\mathbf{I})\alpha_{j}\right)-\mathbf{A}_{i}^{-1}(e^{-(t-t_{i-1})\mathbf{A}_{i}}-\mathbf{I})\alpha_{i},$$
$$\mathbf{\Sigma}_{t}=e^{-2(t-t_{0})\mathbf{A}_{i}}\mathbf{\Sigma}_{0}-\sum_{j=1}^{i-1}\left(e^{-2(t_{i-1}-t_{j})\mathbf{A}_{j}}\mathbf{A}_{j}^{-1}\frac{e^{-2(t_{j}-t_{j-1})\mathbf{A}_{j}}-\mathbf{I}}{2}\right)-\mathbf{A}_{i}^{-1}\frac{e^{-2(t-t_{i-1})\mathbf{A}_{i}}-\mathbf{I}}{2}.$$
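The interval-wise recursion behind these expressions can be sketched as a single-interval propagator (a simplified version with $\sigma=1$; `expm_sym` is a hypothetical helper that exploits the symmetry of $\mathbf{A}_i$):

```python
import numpy as np

def expm_sym(A, s):
    """Matrix exponential exp(s*A) for a symmetric A, via eigendecomposition."""
    lam, E = np.linalg.eigh(A)
    return E @ np.diag(np.exp(s * lam)) @ E.T

def propagate(m, S, A, alpha, tau):
    """One interval of the piecewise OU dynamics (sigma = 1): closed-form mean
    and covariance after time tau under the drift -A x + alpha."""
    I = np.eye(len(m))
    F = expm_sym(A, -tau)                    # e^{-tau A}
    Ainv = np.linalg.inv(A)
    m_new = F @ m - Ainv @ (F - I) @ alpha   # mean update
    F2 = expm_sym(A, -2.0 * tau)             # e^{-2 tau A}
    S_new = F2 @ S - Ainv @ (F2 - I) / 2.0   # covariance update
    return m_new, S_new
```

Chaining `propagate` across the intervals $[t_{i-1},t_i)$ with the corresponding $(\mathbf{A}_i,\alpha_i)$ accumulates exactly the sums appearing in the moment expressions above.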
Parallel Computation
Given an associative operator $\otimes$ and a sequence of elements $[s_{t_{1}},\cdots,s_{t_{K}}]$, the parallel scan algorithm computes the all-prefix-sum, which returns the sequence
$$[s_{t_{1}},\,(s_{t_{1}}\otimes s_{t_{2}}),\,\cdots,\,(s_{t_{1}}\otimes s_{t_{2}}\otimes\cdots\otimes s_{t_{K}})] \tag{24}$$
in $\mathcal{O}(\log K)$ time. Leveraging the linear formulation described in Theorem 3.8 and the inherently parallel nature of the transformer architecture for sequential structure, our method can be integrated with the parallel scan algorithm (Blelloch, 1990), resulting in efficient computation of the marginal Gaussian distributions: both moments $\{\mathbf{m}_{t}\}_{t\in[0,T]}$ and $\{\mathbf{\Sigma}_{t}\}_{t\in[0,T]}$ are computed in parallel (see Appendix B for more details).
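To make the scan concrete, note that each interval's mean update in Theorem 3.8 is an affine map $x\mapsto F_i x+b_i$, and composition of affine maps is associative, so all prefixes can be computed by a scan. The sketch below uses a simple Hillis-Steele-style doubling scan (chosen for clarity over the work-efficient Blelloch variant the paper cites):

```python
import numpy as np

def combine(s1, s2):
    """Associative composition of affine maps: s1 applied first, then s2.
    Each element is a pair (F, b) representing x -> F x + b."""
    F1, b1 = s1
    F2, b2 = s2
    return (F2 @ F1, F2 @ b1 + b2)

def prefix_scan(elems, op):
    """Inclusive prefix scan via recursive doubling (Hillis-Steele style):
    O(log K) parallel depth for any associative operator `op`."""
    elems = list(elems)
    n, shift = len(elems), 1
    while shift < n:
        elems = [elems[i] if i < shift else op(elems[i - shift], elems[i])
                 for i in range(n)]
        shift *= 2
    return elems
```

Here each $(F_i, b_i)$ would be the one-interval mean map, e.g. $F_i=e^{-\tau_i\mathbf{A}_i}$ and $b_i=-\mathbf{A}_i^{-1}(e^{-\tau_i\mathbf{A}_i}-\mathbf{I})\alpha_i$; a scan over the corresponding covariance maps works the same way.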
Remark 3.9 (Non-Markov Control).
Note that (21)-(22) involve approximating the Markov control by a non-Markov control $\alpha(\mathcal{H}_{t}):=\alpha^{\theta}$, parameterized by a neural network $\theta$. However, Theorem 3.3 establishes that the optimal control should be Markov, as verified by the HJB equation (Van Handel, 2007). In our case, we expect that with a high-capacity neural network $\theta$, the local minimum $\theta^{M}$, obtained after $M$ gradient descent steps $\theta^{m+1}=\theta^{m}-\nabla_{\theta}\mathcal{L}(\alpha^{\theta^{m}})$, yields $\mathcal{L}(\alpha^{\star})\approx\mathcal{L}(\alpha^{\theta^{M}})$.
Amortization
Moreover, to enhance flexibility, we treat $\mathbf{y}_{0:T}$ as an auxiliary variable in the latent space, produced by a neural network encoder $q_{\phi}$ applied to the given time series data $\mathbf{o}_{0:T}$, i.e., $\mathbf{y}_{0:T}\sim q_{\phi}(\mathbf{y}_{0:T}|\mathbf{o}_{0:T})$, factorized as
$$q_{\phi}(\mathbf{y}_{0:T}|\mathbf{o}_{0:T})=\prod_{i=1}^{k}q_{\phi}(\mathbf{y}_{t_{i}}|\mathbf{o}_{t_{i}})=\prod_{i=1}^{k}\mathcal{N}(\mathbf{y}_{t_{i}}|\mathbf{q}_{\phi}(\mathbf{o}_{t_{i}}),\mathbf{\Sigma}_{q}) \tag{25}$$
with a fixed variance $\mathbf{\Sigma}_{q}$. Additionally, this enables the modeling of nonlinear emission distributions through a neural network decoder $p_{\psi}(\mathbf{o}_{0:T}|\mathbf{y}_{0:T})$, factorized as
$$p_{\psi}(\mathbf{o}_{0:T}\mid\mathbf{y}_{0:T})=\prod_{i=1}^{k}p_{\psi}(\mathbf{o}_{t_{i}}\mid\mathbf{y}_{t_{i}}), \tag{26}$$
with the likelihood function $p_{\psi}$ depending on the task at hand. This formulation separates the representation learning of $\mathbf{y}_{0:T}$ from the dynamics of $\mathbf{X}^{\alpha}_{0:T}$, allowing a more flexible parameterization. Typically, the information capturing the underlying physical dynamics resides in a much lower-dimensional space than the original sequence (Fraccaro et al., 2017). Therefore, separating generative modeling through lower-dimensional latent dynamics from representation modeling in the original space (e.g., pixel values in an image sequence) offers greater flexibility in parameterization.
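Because (25) factorizes over time steps, sampling the auxiliary variables is embarrassingly parallel. A minimal sketch (hypothetical shapes, with a linear stand-in for the encoder network $\mathbf{q}_{\phi}$ and an isotropic $\mathbf{\Sigma}_{q}$):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d_obs, d_lat = 5, 8, 3

W_enc = rng.standard_normal((d_lat, d_obs))  # stand-in for q_phi's network
sigma_q = 0.1                                # fixed std, i.e. Sigma_q = sigma_q^2 * I

o = rng.standard_normal((k, d_obs))          # observations o_{t_1}, ..., o_{t_k}

# Factorized Gaussian encoder: y_{t_i} ~ N(q_phi(o_{t_i}), Sigma_q), independently
# for each time step, so all k samples are drawn in one vectorized operation.
means = o @ W_enc.T
y = means + sigma_q * rng.standard_normal((k, d_lat))
```

The decoder side is symmetric: each $\mathbf{o}_{t_i}$ is scored against its own $\mathbf{y}_{t_i}$ under the task-specific likelihood $p_{\psi}$.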
3.4 Training and Inference
Training Objective Function
We jointly train, in an end-to-end manner, the amortization parameters $\{\phi,\psi\}$ of the encoder-decoder pair, along with the parameters of the latent dynamics $\theta=\{\mathbf{f}_{\theta},\mathbf{B}_{\theta},\mathbf{T}_{\theta},\mathbf{m}_{0},\mathbf{\Sigma}_{0},\{\mathbf{A}^{(l)}\}_{l\in[1:L]}\}$, which include the parameters required for the controlled latent dynamics. Training is achieved by maximizing the evidence lower bound (ELBO) of the observation log-likelihood for a given time series $\mathbf{o}_{0:T}$:
$$\log p_{\psi}(\mathbf{o}_{0:T})\geq\mathbb{E}_{\mathcal{H}_{T}\sim q_{\phi}(\mathbf{y}_{0:T}|\mathbf{o}_{0:T})}\left[\log\frac{\prod_{i=0}^{K}p_{\psi}(\mathbf{o}_{t_{i}}|\mathbf{y}_{t_{i}})\,g(\mathbf{y}_{0:T})}{\prod_{i=0}^{K}q_{\phi}(\mathbf{y}_{t_{i}}|\mathbf{o}_{t_{i}})}\right] \tag{27}$$
$$\gtrapprox\mathbb{E}_{\mathcal{H}_{T}\sim q_{\phi}(\mathbf{y}_{0:T}|\mathbf{o}_{0:T})}\sum_{i=1}^{K}\left[\log p_{\psi}(\mathbf{o}_{t_{i}}|\mathbf{y}_{t_{i}})-\mathcal{L}(\theta)\right]=\text{ELBO}(\psi,\phi,\theta) \tag{28}$$
Since $\mathbf{Z}(\mathcal{H}_{t_{k}})=g(\mathbf{y}_{0:T})$, the prior over the auxiliary variable, $g(\mathbf{y}_{0:T})$, can be computed using the ELBO $\mathcal{L}(\theta)$ proposed in (7) as part of our variational inference procedure for the latent posterior $\mathbb{P}^{\star}$ of Sec 2, including all latent parameters $\theta$, which contain the control $\alpha^{\theta}$. Note that our model is computationally favorable for both training and inference, since the estimation of marginal distributions in the latent space can be parallelized. This allows a latent trajectory over the entire interval to be generated without numerical simulation. The overall training and inference processes are summarized in Algorithm 1 and Algorithm 2 in the Appendix, respectively.
4 Experiment
In this section, we present empirical results demonstrating the effectiveness of ACSSM in modeling real-world irregular time-series data. The primary objective was to evaluate its capability to capture the underlying dynamics across various datasets. To demonstrate the applicability of ACSSM, we conducted experiments on four tasks (per-time-point regression/classification and sequence interpolation/extrapolation) using the Human Activity, USHCN (Menne et al., 2015), and Physionet (Silva et al., 2012) datasets. We compare our approach against various baselines, including RNN architectures (RKN-$\Delta_{t}$ (Becker et al., 2019), GRU-$\Delta_{t}$ (Chung et al., 2014), GRU-D (Che et al., 2018)), dynamics-based models (Latent-ODE (Chen et al., 2018; Rubanova et al., 2019), ODE-RNN (Rubanova et al., 2019), GRU-ODE-B (De Brouwer et al., 2019), CRU (Schirmer et al., 2022), Latent-SDE$_{\text{H}}$ (Zeng et al., 2023)), and attention-based models (mTAND (Shukla & Marlin, 2021)), all developed for modeling irregular time series data. We report results averaged over five runs with different seeds. The best results are highlighted in bold, while the second-best results are shown in blue. Additional experimental details can be found in Appendix C.
4.1 Per time point classification & regression
Human Activity Classification
We investigated the classification performance of our proposed model. For this purpose, we trained the model on the Human Activity dataset, which contains time-series data from five individuals performing various activities such as walking, sitting, lying, and standing. The dataset includes 12 features in total, representing 3D positions captured by four sensors attached to the belt, chest, and both ankles. Following the pre-processing approach proposed by Rubanova et al. (2019), the dataset comprises 6,554 sequences, each with 211 time points. The task is to classify each time point into one of seven activities.
Table 1 reports the test accuracy, showing that ACSSM outperforms all baseline models. We employed the full assimilation scheme to maintain consistency with the other baselines, which infer the latent state using the full observation. It is important to note that the two dynamical models, Latent-ODE and Latent-SDE_H, incorporate a parameterized vector field in their latent dynamics and thereby rely on numerical solvers to infer intermediate states. In contrast, mTAND is an attention-based method that does not depend on dynamical or state-space models, thus avoiding numerical simulation. We believe the significant performance improvement of our model comes from its integration of an attention mechanism into dynamical models: simulation-free dynamics avoid numerical simulation while maintaining temporal structure, leading to more stable learning.
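The per-time-point protocol can be summarized as follows. This is an illustrative sketch under our own assumptions, not the authors' code; the `per_timepoint_accuracy` helper and the array shapes are ours, chosen to match the Human Activity setup (12 features, 211 time points, 7 classes).

```python
import numpy as np

def per_timepoint_accuracy(logits, labels):
    """Accuracy averaged over every (sequence, time point) pair.

    logits: (N, T, C) class scores, one score vector per time point.
    labels: (N, T) integer activity labels in [0, C).
    """
    preds = logits.argmax(axis=-1)  # (N, T) predicted class per time point
    return float((preds == labels).mean())

# Toy example with the Human Activity shapes: 211 time points
# per sequence, 7 activity classes.
rng = np.random.default_rng(0)
N, T, C = 4, 211, 7
labels = rng.integers(0, C, size=(N, T))
logits = rng.normal(size=(N, T, C))
# Boost the true-class logit so predictions match the labels exactly.
logits[np.arange(N)[:, None], np.arange(T)[None, :], labels] += 10.0
assert per_timepoint_accuracy(logits, labels) == 1.0
```

The point is simply that, unlike sequence-level classification, every one of the 211 time points contributes its own prediction to the reported accuracy.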
Pendulum Regression
Next, we explored the problem of sequence regression using the pendulum experiment (Becker et al., 2019), where the goal is to infer the sine and cosine of the pendulum angle from irregularly observed, noisy pendulum images (Schirmer et al., 2022). To assess performance, we compared our method with previous dynamics-based models, reporting the regression MSE on a held-out test set in Table 2. We employed the full assimilation scheme. The experimental results show that our proposed method outperforms existing models. These findings highlight that, even when linearizing the drift function, the amortization and the proposed neural-network-based locally linear dynamics of Sec. 3.3 preserve the expressivity of our approach, enabling more accurate inference of non-linear systems.
Moreover, particularly in comparison to CRU, we believe the significant performance improvement stems from fundamental differences in how information is leveraged. To infer intermediate angular values, utilizing not only past information but also future positions of the pendulum can enhance the accuracy of predictions. While CRU relies solely on the past positions of the pendulum, our model is capable of utilizing both past and future positions.
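The past-versus-future intuition can be made concrete with a toy example of our own (not from the paper): estimating an intermediate angle on a smooth trajectory either by holding the most recent past observation (filtering-style, as in CRU) or by interpolating between the surrounding past and future observations (smoothing-style, as ACSSM can). The bidirectional estimate incurs a visibly lower error.

```python
import numpy as np

# Smooth pendulum-like angle trajectory, observed only at even indices.
t = np.linspace(0.0, 2.0 * np.pi, 41)
angle = np.sin(t)                    # ground-truth angle over time
query_idx = np.arange(1, len(t), 2)  # unobserved intermediate points to infer

# Past-only estimate: hold the most recent observation (filtering-style).
past_only = angle[query_idx - 1]
# Past-and-future estimate: interpolate neighbours (smoothing-style).
past_and_future = 0.5 * (angle[query_idx - 1] + angle[query_idx + 1])

mse_past = float(np.mean((past_only - angle[query_idx]) ** 2))
mse_both = float(np.mean((past_and_future - angle[query_idx]) ** 2))
assert mse_both < mse_past  # using future information helps on smooth dynamics
```

This is of course only an analogy: ACSSM learns the smoothing distribution rather than linearly interpolating, but the direction of the gain is the same.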
4.2 Sequence Interpolation & Extrapolation
Datasets
We benchmark the models on two real-world datasets, USHCN and Physionet. The USHCN dataset (Menne et al., 2015) contains 1,218 daily measurements from weather stations across the US, with five variables recorded over a four-year period. The Physionet dataset (Silva et al., 2012) contains 8,000 multivariate clinical time series of 41 features recorded over 48 hours. See Appendix C.1 for details on the datasets.
Interpolation
We begin by evaluating the effectiveness of ACSSM on the interpolation task. Following the approach of Schirmer et al. (2022) and Rubanova et al. (2019), each model is required to infer all time points t ∈ 𝒯′ based on a subset of observations 𝐨_{t∈𝒯}, where 𝒯 ⊆ 𝒯′. For the interpolation task, the encoded observations 𝐲_{t∈𝒯} were assimilated using the full assimilation scheme to construct an accurate smoothing distribution. The interpolation results are presented in Table 3, where we report the test MSE evaluated on the entire set of time points 𝒯′. For all datasets, ACSSM outperforms the other baselines in terms of test MSE. This clearly indicates the expressiveness of ACSSM: trajectories 𝐗^α_{t∈𝒯′} sampled from the approximated path measure over the entire interval 𝒯′ contain sufficient information for generating accurate predictions.
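The evaluation protocol above can be sketched as follows. This is an illustrative mock-up under our own assumptions: the `model` callable is a placeholder standing in for ACSSM (here a trivial nearest-neighbour baseline), and `interpolation_mse` is our own helper. The key point is that the model only sees values at the subset 𝒯, while the MSE is computed over the full grid 𝒯′.

```python
import numpy as np

def interpolation_mse(model, obs, t_prime, obs_mask):
    """Test MSE over all time points in T', given observations on T ⊆ T'.

    model:    callable (observed values, observed times, query times) -> predictions
              at the query times; stands in for ACSSM here.
    obs:      (T, D) ground-truth observations on the full grid t_prime.
    t_prime:  (T,) full time grid T'.
    obs_mask: (T,) boolean mask selecting the observed subset T.
    """
    pred = model(obs[obs_mask], t_prime[obs_mask], t_prime)
    return float(np.mean((pred - obs) ** 2))

# Toy check: a nearest-neighbour "model" on a subsampled sine signal.
t_prime = np.linspace(0.0, 1.0, 50)
obs = np.sin(2.0 * np.pi * t_prime)[:, None]  # (50, 1)
obs_mask = np.zeros(50, dtype=bool)
obs_mask[::5] = True                          # observe every 5th point only

def nearest_model(y_obs, t_obs, t_query):
    # Predict each query point with the value of its nearest observed time.
    idx = np.abs(t_query[:, None] - t_obs[None, :]).argmin(axis=1)
    return y_obs[idx]

mse = interpolation_mse(nearest_model, obs, t_prime, obs_mask)
assert mse < 0.05  # even a crude interpolant recovers a smooth signal roughly
```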