chunk_id (int64) | chunk_text (string) | chunk_text_tokens (int64) | serialized_text (string) | serialized_text_tokens (int64) |
|---|---|---|---|---|
0 | In this paper we discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to solving a penalized stochastic optimization problem at each stage of a multistage algorithm, each problem being solved to a prescribed accuracy by the n... | 215 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Abstract
… | 229 |
1 | Our original motivation is the well-known problem of (generalized) linear high-dimensional regression with random design. Formally, consider a dataset of $N$ points $(\phi_i,\eta_i)$, $i\in\{1,\ldots,N\}$, where $\phi_i\in\mathbf{R}^n$... | 1,715 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
1 Introduction
… | 1,730 |
2 | Sparse recovery by Lasso and Dantzig Selector has been extensively studied [11, 8, 5, 46, 10, 9]. It computes a solution $\widehat{x}_N$ to the $\ell_1$-penalized problem
$$\min_x \widehat{g}_N(x)+\lambda\|x\|_1$$ where $\lambda$... | 1,505 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
1 Introduction
Existing approaches and related works.
… | 1,526 |
3 | We provide a refined analysis of Composite Stochastic Mirror Descent (CSMD) algorithms for computing sparse solutions to the Stochastic Optimization problem, leveraging smoothness of the objective. This leads to a new “aggressive” choice of parameters in a multistage algorithm with significantly improved performance compar... | 311 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
1 Introduction
Principal contributions.
… | 329 |
4 | The remainder of the paper is organized as follows. In Section 2, the general problem is set up, and the multistage optimization routine and the study of its basic properties are presented. Then, in Section 3, we discuss the properties of the method and conditions under which it leads to “small error” solutions to sparse ... | 583 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
1 Introduction
Organization and notation
… | 602 |
5 | This section is dedicated to the formulation of the generic stochastic optimization problem and to the description and analysis of the generic algorithm. | 24 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
… | 51 |
6 | Let $X$ be a closed convex subset of a Euclidean space $E$ and $(\Omega,P)$ a probability space. We consider a mapping $G:X\times\Omega\to\mathbf{R}$ such that, for all $\omega\in\Omega$, $G(\cdot,\omega)$ is convex on $X$ and smooth, meaning that $\nabla G(\cdot,\omega)$... | 1,321 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.1 Problem statement
… | 1,354 |
7 | As mentioned in the introduction, (stochastic) optimization over the set of sparse solutions can be done through “composite” techniques. We take a similar approach here, by transforming the generic problem (5) into the following composite Stochastic Optimization problem, adapted to some norm $\|\cdot\|$ and parameteriz... | 317 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
… | 355 |
8 | Let $B$ be the unit ball of the norm $\|\cdot\|$ and $\theta:B\to\mathbf{R}$ be a distance-generating function (d.-g.f.) of $B$, i.e., a continuously differentiable convex function which is strongly convex with respect to the norm $\|\cdot\|$:
$$\langle\nabla\theta(x)-\nabla\theta(x'),\,x-x'\rangle\geq\|x-x'\|^2,\qquad\forall x,x'\in X.$$ | 393 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Proximal setup, Bregman divergences and Proximal mapping.
… | 448 |
9 | For any $x_0\in X$, let $X_R(x_0):=\{z\in X:\|z-x_0\|\leq R\}$ be the ball of radius $R$ around $x_0$. It is equipped with the d.-g.f. $\vartheta^R_{x_0}(z):=R^2\theta\big((z-x_0)/R\big)$... | 443 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Proximal setup, Bregman divergences and Proximal mapping.
Definition 2.1
… | 504 |
10 | Given $x_0\in X$ and $R>0$, the Bregman divergence $V$ associated to $\vartheta$ is defined by
$$V_{x_0}(x,z)=\vartheta^R_{x_0}(z)-\vartheta^R_{x_0}(x)-\langle\nabla\vartheta^R_{x_0}(x),\,z-x\rangle,\qquad x,z\in X.$$ | 368 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Proximal setup, Bregman divergences and Proximal mapping.
Definition 2.2
… | 429 |
11 | The composite proximal mapping with respect to $h$ and $x$ is defined by
$$\mathrm{Prox}_{h,x_0}(\zeta,x):=\operatorname*{argmin}_{z\in X_R(x_0)}\big\{\langle\zeta,z\rangle+h(z)+V_{x_0}(x,z)\big\}$$... | 527 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Proximal setup, Bregman divergences and Proximal mapping.
Definition 2.3
… | 588 |
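To make Definition 2.3 concrete, here is a minimal sketch of the composite proximal mapping in the simplest setup, with the Euclidean d.-g.f. $\theta(x)=\tfrac12\|x\|_2^2$ (so $V_{x_0}(x,z)=\tfrac12\|z-x\|_2^2$), $h(z)=\kappa\|z\|_1$, $X=\mathbf{R}^n$, and the ball constraint $X_R(x_0)$ dropped; under these assumptions the mapping reduces to soft-thresholding. The function name and the simplifications are ours, not the paper's.

```python
import numpy as np

def prox_l1_euclidean(zeta, x, kappa):
    """Composite prox Prox_{h,x0}(zeta, x) in the Euclidean special case:
        argmin_z  <zeta, z> + kappa*||z||_1 + 0.5*||z - x||^2,
    i.e. soft-thresholding of the gradient step x - zeta at level kappa."""
    v = x - zeta
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)
```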
12 | Given a sequence of positive step sizes $\gamma_i>0$, the Composite Stochastic Mirror Descent (CSMD) is defined by the following recursion:
$$x_i=\mathrm{Prox}_{\gamma_i h,\,x_0}\big(\gamma_{i-1}\nabla G(x_{i-1},\omega_i),\,x_{i-1}\big),\qquad x_0\in X.$$... | 1,249 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Composite Stochastic Mirror Descent algorithm.
… | 1,295 |
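Under the same simplifying assumptions as above, the CSMD recursion can be sketched as follows; `grad_oracle` is a hypothetical callable standing in for the stochastic gradient $\nabla G(\cdot,\omega_i)$, and the averaging of the trajectory mirrors the usual mirror-descent output rule rather than reproducing the paper's Algorithm 1 exactly.

```python
def csmd(grad_oracle, x0, gamma, kappa, m, rng):
    """Composite Stochastic Mirror Descent sketch with a constant step size
    gamma and penalty h = kappa*||.||_1 (Euclidean proximal setup above)."""
    x = x0.copy()
    iterates = []
    for _ in range(m):
        g = grad_oracle(x, rng)                          # nabla G(x, omega_i)
        x = prox_l1_euclidean(gamma * g, x, gamma * kappa)
        iterates.append(x)
    # approximate solution: average of the iterates along the trajectory
    return sum(iterates) / len(iterates)
```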
13 | If step-sizes are constant, i.e., $\gamma_i\equiv\gamma\leq(4\nu)^{-1}$, $i=0,1,\ldots$, and the initial point $x_0\in X$ is such that $x_*\in X_R(x_0)$, then for any $t\gtrsim\sqrt{1+\ln m}$... | 902 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.2 Composite Stochastic Mirror Descent algorithm
Composite Stochastic Mirror Descent algorithm.
Proposition 2.1
… | 955 |
14 | Our approach to finding a sparse solution to the original stochastic optimization problem (5) consists in solving a sequence of auxiliary composite problems (7), with their sequence of parameters ($\kappa$, $x_0$, $R$) defined recursively. For the latter, we need to infer the quality of the approximate solution ... | 117 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.3 Main contribution: a multistage adaptive algorithm
… | 157 |
15 | There exist some $\delta>0$ and $\rho<\infty$ such that for any feasible solution $\widehat{x}\in X$ to the composite problem (7) satisfying, with probability at least $1-\varepsilon$,
$$F_\kappa(\widehat{x})-F_\kappa(x_*)\leq\upsilon,$$... | 1,104 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.3 Main contribution: a multistage adaptive algorithm
Assumption [RSC]
… | 1,150 |
16 | Assume that the total sample budget satisfies $N\geq m_0$, so that at least one stage of the preliminary phase of Algorithm 1 is completed; then for $t\gtrsim\sqrt{\ln N}$ the approximate solution $\widehat{x}_N$ of Algorithm 1 satisfies, with prob... | 838 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.3 Main contribution: a multistage adaptive algorithm
Assumption [RSC]
Theorem 2.1
… | 891 |
17 | Along with the oracle computation, the proximal computation implemented at each iteration is an important part of the computational cost of the method. It becomes even more important during the asymptotic phase, when the number of iterations per stage increases exponentially fast with the stage count, and... | 165 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization
2.3 Main contribution: a multistage adaptive algorithm
Assumption [RSC]
Remark 2.1
… | 217 |
18 | We now consider again the original problem of recovery of an $s$-sparse signal $x_*\in X\subset\mathbf{R}^n$ from random observations defined by
$$\eta_i=\mathfrak{r}(\phi_i^Tx_*)+\sigma\xi_i,\qquad i=1,2,\ldots,N,$$... | 1,064 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
… | 1,091 |
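For illustration, observations from this model are straightforward to simulate; the Gaussian design, Gaussian noise, and tanh link below are our own assumptions standing in for the paper's design and its activation (21).

```python
import numpy as np

def simulate_glr(x_star, N, sigma, link=np.tanh, seed=0):
    """Draw N observations eta_i = r(phi_i^T x_star) + sigma*xi_i from the
    generalized linear model with a random (here standard Gaussian) design."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((N, x_star.shape[0]))   # design vectors phi_i
    eta = link(phi @ x_star) + sigma * rng.standard_normal(N)
    return phi, eta
```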
19 | Assume that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous and $\underline{r}$-strongly monotone (i.e., $|\mathfrak{r}(t)-\mathfrak{r}(t')|\geq\underline{r}\,|t-t'|$), which implies that $\mathfrak{s}$ is $\underline{r}$... | 698 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
Proposition 3.1
… | 732 |
20 | Let $\lambda>0$ and $0<\psi\leq 1$, and suppose that for all subsets $I\subset\{1,\ldots,n\}$ of cardinality smaller than $s$ the following property is verified:
$$\forall z\in\mathbf{R}^n\qquad\|z_I\|_1\leq\sqrt{s/\lambda}\,\|z\|_\Sigma+\tfrac{1}{2}(1-\psi)\|z\|_1$$... | 763 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
Lemma 3.1
… | 796 |
21 | Condition $\mathbf{Q}(\lambda,\psi)$ generalizes the classical Restricted Eigenvalue (RE) property [5] and the Compatibility Condition [46], and is the most relaxed condition under which classical bounds for the error of $\ell_1$-recovery routines were established. Validity of $\mathbf{Q}(\lambda,\psi)$... | 680 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
Remark 3.1
… | 713 |
22 | In the case of linear regression, where $\mathfrak{r}(t)=t$, it holds that
$$g(x)=\mathbf{E}\Big\{\tfrac{1}{2}(\phi^Tx)^2-x^T\phi\eta\Big\}=\tfrac{1}{2}\mathbf{E}\Big\{\big(\phi^T(x_*-x)\big)^2-(\phi^Tx_*)^2\Big\}$$... | 1,886 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
Remarks.
… | 1,915 |
23 | where $\varsigma\sim\mathcal{N}(0,1)$. Thus, $H(x)$ is proportional to $\Sigma^{1/2}x/\|x\|_\Sigma$ with coefficient
$$h\big(\|x\|_\Sigma\big)=\mathbf{E}\big\{\varsigma\,\mathfrak{r}(\varsigma\|x\|_\Sigma)\big\}.$$... | 296 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.1 Problem setting
Remarks.
… | 325 |
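The scalar coefficient $h(r)=\mathbf{E}\{\varsigma\,\mathfrak{r}(\varsigma r)\}$ with $\varsigma\sim\mathcal{N}(0,1)$ is a one-dimensional Gaussian integral, so it can be evaluated numerically by Gauss-Hermite quadrature; in the generic sketch below the activation `r_fun` is a placeholder, not the paper's specific choice.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def h_coeff(r, r_fun, n_nodes=64):
    """Quadrature estimate of h(r) = E[ zeta * r_fun(zeta * r) ], zeta ~ N(0,1).
    hermegauss integrates against exp(-t^2/2); dividing the weights by
    sqrt(2*pi) turns the rule into an expectation under N(0,1)."""
    nodes, weights = hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)
    return float(np.sum(weights * nodes * r_fun(nodes * r)))
```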
24 | In this section, we describe the statistical properties of approximate solutions of Algorithm 1 when applied to the sparse recovery problem. We shall use the following distance-generating function of the $\ell_1$-ball of $\mathbf{R}^n$ (cf. [27, Section 5.7.1]):
$$\theta(x)=c_p\|x\|_p^p,\qquad p=\{2$$... | 409 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.2 Stochastic Mirror Descent algorithm
… | 440 |
25 | For $t\gtrsim\sqrt{\ln N}$, assuming the sample budget is large enough, i.e., $N\geq m_0$ (so that at least one stage of the preliminary phase of Algorithm 1 is completed), the approximate solution $\widehat{x}_N$ output satisfies, with probability... | 841 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.2 Stochastic Mirror Descent algorithm
Proposition 3.2
… | 879 |
26 | The bounds for the $\ell_1$-norm of the error $\widehat{x}_N-x_*$ (or $\widehat{x}_N^{(b)}-x_*$) established in Proposition 3.2 allow us to quantify the prediction error $g(\widehat{x}_N)-g(x_*)$... | 1,564 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.2 Stochastic Mirror Descent algorithm
Remark 3.2
… | 1,601 |
27 | The proposed approach also allows us to address the situation in which the regressors are not a.s. bounded. For instance, consider the case of random regressors with i.i.d. sub-Gaussian entries such that
$$\forall j\leq n,\qquad\mathbf{E}\left[\exp\left([\phi_i]_j^2/\varkappa^2\right)\right]\leq\exp(1).$$... | 711 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.2 Stochastic Mirror Descent algorithm
Remark 3.3
… | 748 |
28 | In this section, we present results of a small simulation study illustrating the theoretical part of the previous section. (The reader is invited to check Section C of the supplementary material for more experimental results.) We consider the GLR model (15) with activation function (21), where $\alpha=1/2$.
In ou... | 1,254 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
3 Sparse generalized linear regression by stochastic approximation
3.3 Numerical experiments
… | 1,282 |
29 | We use the notation $\mathbf{E}_i$ for the conditional expectation given $x_0$ and $\omega_1,\ldots,\omega_i$. | 73 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
… | 91 |
30 | The result of Proposition 2.1 is an immediate consequence of the following statement. | 17 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
… | 46 |
31 | Let
$$f(x)=\tfrac{1}{2}g(x)+h(x),\qquad x\in X.$$
In the situation of Section 2.2, let $\gamma_i\leq(4\nu)^{-1}$ for all $i=0,1,\ldots$, and let $\widehat{x}_m$ be defined in ... | 1,580 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
Proposition A.1
… | 1,615 |
32 | Denote $H_i=\nabla G(x_{i-1},\omega_i)$. In the sequel, we use the shortcut notation $\vartheta(z)$ and $V(x,z)$ for $\vartheta_{x_0}^R(z)$ and $V_{x_0}(x,z)$... | 220 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
Proof.
… | 251 |
33 | From the definition of $x_i$ and of the composite prox-mapping (8) (cf. Lemma A.1 of [40]), we conclude that there is $\eta_i\in\partial h(x_i)$ such that
$$\langle\gamma_{i-1}H_i+\gamma_i\eta_i+\nabla\vartheta(x_i)-\nabla\vartheta(x_{i-1}),\,z-x_i\rangle\geq 0,\qquad\forall z\in X,$$... | 1,389 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
1o.
… | 1,421 |
34 | $$\|\nabla G(x,\omega)\|_*^2\leq 2\|\nabla G(x,\omega)-\nabla G(x_*,\omega)\|_*^2+2\|\nabla G(x_*,\omega)\|_*^2$$... | 1,970 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
1o.
… | 2,002 |
35 | $$\sum_{i=1}^m\gamma_{i-1}\Big(\tfrac{3}{4}\langle\nabla g(x_{i-1}),x_{i-1}-x_*\rangle+[h(x_{i-1})-h(x_*)]\Big)$$... | 668 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
1o.
… | 700 |
36 | We have
$$\gamma_{i-1}\langle\xi_i,x_{i-1}-x_*\rangle=\gamma_{i-1}\overbrace{\big\langle[\nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i)]-\nabla g(x_{i-1}),\,x_{i-1}-x_*\big\rangle}^{\upsilon_i}$$... | 1,951 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
2o.
… | 1,983 |
37 | $$r^{(3)}_m\leq 2\sqrt{3tR^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2}\leq 3tR^2+3\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2.$$... | 1,900 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
2o.
… | 1,932 |
38 | $$21t\sum_{i=0}^{m-1}\gamma_i^4\leq 21t\overline{\gamma}^2\sum_{i=0}^{m-1}\gamma_i^2\leq\big(21t\overline{\gamma}^2\big)^2$$... | 1,943 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
2o.
… | 1,975 |
39 | for all $\omega^m\in\Omega^{(2)}_m$ such that $\mathrm{Prob}(\Omega^{(2)}_m)\geq 1-2e^{-t}$. Note that
$$\Delta_m^{(2)}\leq 2tR^2+14\nu\sum_{i=0}^{m-1}\gamma_i^2\langle\nabla g(x_i),x_i-x_*\rangle,$$... | 755 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
2o.
… | 787 |
40 | When substituting bounds (33)–(35) into (32), we obtain
$$R_m\leq\tfrac{1}{4}\sum_{i=0}^{m-1}\gamma_i\langle\nabla g(x_i),x_i-x_*\rangle+12tR^2+\sigma_*^2\Big[4\sum_{i=0}^{m-1}\gamma_i^2+24t\overline{\gamma}^2\Big]+2\sqrt{3tR^2\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2}$$... | 1,808 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
3o.
… | 1,840 |
41 | $$\tfrac{1}{2}[g(\widehat{x}_m)-g(x_*)]+[h(\widehat{x}_m)-h(x_*)]\leq\frac{V(x_0,x_*)+15tR^2}{\gamma m}+\frac{h(x_0)-h(x_m)}{m}+\gamma\sigma_*^2\Big(7+\frac{24t}{m}\Big).$$... | 351 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
3o.
… | 383 |
42 | To prove the bound for the minibatch solution $\widehat{x}_m^{(L)}=\big(\sum_{i=0}^{m-1}\gamma_i\big)^{-1}\sum_{i=0}^{m-1}\gamma_i x_i^{(L)}$... | 392 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.1 Proof of Proposition 2.1
5o.
… | 424 |
43 | Let us assume that $(\xi_i,\mathcal{F}_i)_{i=1,2,\ldots}$ is a sequence of sub-Gaussian random variables satisfying (here, as above, $\mathbf{E}_{i-1}$ denotes the expectation conditional on $\mathcal{F}_{i-1}$)
$$\mathbf{E}_{i-1}\left\{e^{t\xi_i}\right\}\leq\cdots$$... | 787 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
… | 812 |
44 | For all $x>0$ one has
$$\mathrm{Prob}\left\{S_n\geq\sqrt{2xr_n}\right\}\leq e^{-x},\qquad\text{(37a)}$$
$$\mathrm{Prob}\left\{M_n\geq\sqrt{2x(v_n+h_n)}+2x\overline{s}^2\right\}$$... | 304 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Lemma A.1
… | 334 |
45 | The inequality (37a) is straightforward. To prove (37b), note that for $t<\tfrac{1}{2}\overline{s}^{-2}$ and $\eta\sim\mathcal{N}(0,1)$ independent of $\xi_0,\ldots,\xi_n$, we have
$$\mathbf{E}_{i-1}\left\{e^{t\xi_i^2}\right\}$$... | 1,825 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
… | 1,852 |
46 | Denote $M_n=\sum_{i=1}^n[\zeta_i-\mu_i]$ and $q_n=\sum_{i=1}^n\sigma_i^2$. Note that $q_n\leq n$. | 173 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
… | 200 |
47 | Let $x\geq 1$; one has
$$\mathrm{Prob}\left\{M_n\geq\sqrt{2xq_n}+x\right\}\leq\left[e\left(2x\ln\left[\frac{9n}{2x}\right]+1\right)+1\right]e^{-x}.$$
In particular, for $x\geq 4\sqrt{2+\cdots}$... | 1,946 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
Lemma A.2
… | 1,978 |
48 | $$\mathrm{Prob}\left\{M_n\geq\sqrt{2xq_n}+x\right\}\leq 2e^{-x/2}.\qquad\Box$$
Let $(\xi_i)_{i=1,\ldots}$ be a sequence of independent random vectors in $\mathbf{R}^n$... | 1,239 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
Lemma A.2
… | 1,271 |
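A quick Monte-Carlo sanity check of the simplified bound $\mathrm{Prob}\{M_n\geq\sqrt{2xq_n}+x\}\leq 2e^{-x/2}$ can be run on a toy martingale; the centered $\pm\tfrac12$ increments below (so $\mu_i=0$, $\sigma_i^2=\tfrac14$, $q_n=n/4$) are our illustrative choice, not the lemma's general setting.

```python
import numpy as np

def check_deviation_bound(n=1000, x=9.0, trials=20000, seed=0):
    """Compare the empirical frequency of {M_n >= sqrt(2*x*q_n) + x} with the
    bound 2*exp(-x/2) for M_n a sum of i.i.d. centered +-1/2 increments."""
    rng = np.random.default_rng(seed)
    increments = rng.integers(0, 2, size=(trials, n)) - 0.5
    M_n = increments.sum(axis=1)
    q_n = n / 4.0                               # sum of the variances sigma_i^2
    freq = np.mean(M_n >= np.sqrt(2.0 * x * q_n) + x)
    return freq, 2.0 * np.exp(-x / 2.0)
```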
49 | $$\forall L\in\mathbf{Z}_+\qquad\mathbf{E}\left\{\exp\left(\frac{\|\eta_L\|_*^2}{10\Theta s^2L}\right)\right\}\leq\exp(1)\qquad\text{(39)}$$
where $\Theta=\max\|\cdots$... | 1,891 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
Lemma A.3
… | 1,923 |
50 | $$\mathbf{E}_{j-1}\left\{e^{t\delta_j}\right\}\leq\exp\left(ts^2\beta^{-1}+\tfrac{3}{4}t^2s^2\right).$$
Consequently,
$$\mathbf{E}\left\{e^{t\pi_\beta(\eta_L)}\right\}\leq\mathbf{E}\left\{e^{t\pi_\beta(\eta_{L-1})}\right\}$$... | 1,562 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.2 Deviation inequalities
Proof.
Lemma A.3
… | 1,594 |
51 | We start by analysing the behaviour of the approximate solution $\widehat{x}^k_{m_0}$ at the stages of the preliminary phase of the procedure. | 57 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
… | 87 |
52 | Let $m_0=\lceil 64\delta^2\rho\nu s(4\Theta+60t)\rceil$ (here $\lceil a\rceil$ stands for the smallest integer greater than or equal to $a$), $\gamma=(4\nu)^{-1}$, and let $t$ satisfy $t\geq 4\sqrt{2+\log(m_0)}$... | 1,333 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Lemma A.4
… | 1,368 |
53 | Proof of the lemma.
1o. Note that the initial point $x_0$ satisfies $x_0\in X_R(x_*)$. Suppose that the initial point $x^k_0=\widehat{x}^{k-1}_{m_0}$ of the $k$th stage of the me... | 1,649 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Lemma A.4
Proof of the lemma.
… | 1,684 |
54 | Note that $\kappa_k$ as defined in (42) satisfies
$$\kappa_k\leq R_{k-1}(8\delta\rho s)^{-1},$$
while $\kappa_k m_0\geq 8\delta(4\Theta+60t)R_{k-1}\nu$. Because
... | 1,083 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Lemma A.4
… | 1,118 |
55 | Let now $a=16\delta^2\rho s\sigma_*^2/\nu$, and let us study the behaviour of the sequence
$$R_k=\frac{R_{k-1}}{2}+\frac{a}{R_{k-1}}=:f(R_{k-1}),\qquad R_0=R\geq\sqrt{2a}.$$
Function $f$ admits a fixed p... | 847 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
2o.
… | 880 |
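The fixed point is explicit: $f(R)=R/2+a/R\geq\sqrt{2a}$ by the AM-GM inequality, with equality at $R_*=\sqrt{2a}$, and $f(R)<R$ whenever $R>\sqrt{2a}$, so the radii decrease monotonically toward $\sqrt{2a}$. A few lines suffice to observe the contraction numerically (the values of $a$ and $R_0$ below are arbitrary illustrations).

```python
import math

def radius_sequence(R0, a, k):
    """Iterate R_k = R_{k-1}/2 + a/R_{k-1}; for R0 >= sqrt(2a) the sequence
    decreases monotonically to the fixed point sqrt(2a)."""
    R, path = R0, [R0]
    for _ in range(k):
        R = R / 2.0 + a / R
        path.append(R)
    return path

print(radius_sequence(R0=10.0, a=0.5, k=8), "limit:", math.sqrt(2 * 0.5))
```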
56 | Let $t$ be such that $t\geq 4\sqrt{2+\log(m_1)}$, with $m_1=\lceil 81\delta^2\rho s\nu(4\Theta+60t)\rceil$, $\gamma=(4\nu)^{-1}$, and let $\ell_k=\lceil 10\times 4^{k-1}\Theta\rceil$... | 1,618 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
2o.
Lemma A.5
… | 1,656 |
57 | $$\|\widehat{x}^k_{m_1}-x_*\|\leq\delta\left[\rho s\kappa_k+\frac{r_{k-1}}{m_1}+\frac{\nu r^2_{k-1}}{\kappa_k m_1}(4\Theta+60t)+\frac{10\Theta\sigma_*^2}{\nu\kappa_k\ell_k}\Big(\frac{7}{4}+\frac{6t}{m_1}\Big)\right]$$... | 1,099 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
2o.
Lemma A.5
… | 1,137 |
58 | Let $t\geq 4\sqrt{2+\log m_k}$, $m_k=\big\lceil 4^{k+4}(4\Theta+60t)\delta^2\rho s\nu\big\rceil$,
$$\gamma^k=\frac{r_{k-1}}{2\sigma_*\sqrt{(4\Theta+60t)2m_k}},\qquad\kappa_k^2=\frac{5\sigma_*r_{k-1}}{\rho s}\sqrt{\frac{4\Theta+60t}{m_k}}$$... | 1,832 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
2o.
Lemma A.6
… | 1,870 |
59 | $$\|\widehat{x}^k_{m_k}-x_*\|\leq\delta\left[\rho s\kappa_k+\frac{r_{k-1}}{m_k}+\frac{4\sigma_*r_{k-1}}{\kappa_k}\sqrt{\frac{2(4\Theta+60t)}{m_k}}\right],$$... | 868 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
2o.
Lemma A.6
… | 906 |
60 | We can now complete the proof of the theorem.
Let us prove the accuracy bound of the theorem for the minibatch variant of the procedure.
Assume that the “total observation budget” $N$ is such that only the preliminary phase of the procedure is implemented.
This is the case when either $m_0\overline{K}_1\geq N$... | 1,387 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Proof of Theorem 2.1.
… | 1,426 |
61 | Theorem 2.1 as stated in Section 2.3 does not say anything about the convergence of $g(\widehat{x}_N)$ to $g(x_*)$. Such information can easily be extracted from the proof of the theorem.
Indeed, observe that at the end of a stage of the method, one has, with probability $1-Cke^{-\cdots}$... | 1,922 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Proof of Theorem 2.1.
Remark A.1
… | 1,966 |
62 | $$g(\widehat{x}_N)-g(x_*)\lesssim(\delta^{-2}+\delta^{-1})R^2\rho s\,\exp\left\{-\frac{c}{\delta^2\rho\nu}\,\frac{N}{s(\Theta+t)}\right\}+(\delta^2+\delta^3)\rho\sigma_*^2s\,\frac{\Theta+t}{N}$$... | 371 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.3 Proof of Theorem 2.1
Proof of Theorem 2.1.
Remark A.1
… | 415 |
63 | 1o. Recall that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous, i.e., for all $t,t'\in\mathbf{R}^m$,
$$|\mathfrak{r}(t)-\mathfrak{r}(t')|\leq\overline{r}\,|t-t'|.$$
A... | 827 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.4 Proof of Proposition 3.1
… | 856 |
64 | Due to the strong monotonicity of $\mathfrak{r}$,
$$g(x)-g(x_*)=\int_0^1\nabla g\big(x_*+t(x-x_*)\big)^T(x-x_*)\,dt$$... | 566 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.4 Proof of Proposition 3.1
2o.
… | 598 |
65 | The sub-Gaussianity in the “batchless” case is readily given by $\nabla G(x_*,\omega_i)=\sigma\phi_i\xi_i$ with
$$\|\phi_i\xi_i\|_\infty\leq\|\phi_i\|_\infty|\xi_i|\leq\overline{\nu}\,\|\xi_i\|_2$$... | 860 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.4 Proof of Proposition 3.1
3o.
… | 892 |
66 | In the situation of Section 3.1, $\Sigma$ is positive definite, $\Sigma\succeq\kappa_\Sigma I$ with $\kappa_\Sigma>0$, and condition $\mathbf{Q}(\lambda,\psi)$ is satisfied with $\lambda=\kappa_\Sigma$ and $\psi=1$.
Because quadratic... | 298 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix A Proofs
A.4 Proof of Proposition 3.1
4o.
… | 330 |
67 | The scope of the results of Section 2 is much broader than “vanilla” sparsity optimization. We discuss here a general notion of sparsity structure which provides a proper application framework for these results.
In what follows we assume to be given a sparsity structure [21] on $E$: a family $\mathcal{P}$ of projector mapping... | 651 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.1 Sparsity structures
… | 679 |
68 | 1.
“Vanilla (usual)” sparsity: in this case $E=\mathbf{R}^n$ with the standard inner product, $\mathcal{P}$ is comprised of the projectors on all coordinate subspaces of $\mathbf{R}^n$, $\pi(P)=\mathrm{rank}(P)$, and $\|\cdot\|=\|\cdot\|_1$.
2.
... | 1,691 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.1 Sparsity structures
… | 1,719 |
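For the vanilla case, the objects above are easy to instantiate; the helper below (names ours) builds the coordinate projector $P$ for a support set $I$, for which $\pi(P)=\mathrm{rank}(P)=|I|$ and the associated norm is $\|\cdot\|_1$.

```python
import numpy as np

def coordinate_projector(I, n):
    """Projector onto the coordinate subspace spanned by the indices in I."""
    P = np.zeros((n, n))
    idx = np.asarray(I)
    P[idx, idx] = 1.0
    return P

P = coordinate_projector([0, 3], 5)      # pi(P) = rank(P) = 2
z = np.arange(5.0)
Pz, Pbar_z = P @ z, z - P @ z            # decomposition z = Pz + (I - P)z
```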
69 | We say that a positive semidefinite mapping $\Sigma:E\to E$ satisfies condition $\mathbf{Q}(\lambda,\psi)$ for given $s\in\mathbf{Z}_+$ if for some $\psi,\lambda>0$ and all $P\in\mathcal{P}_s$ and $z\in E$,
$$\|Pz\|\leq\sqrt{s/\lambda}\,\|z\|_\Sigma+\|\bar{P}z\|-\psi\|z\|.$$ | 284 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.2 Condition $\mathbf{Q}(\lambda,\psi)$
… | 338 |
70 | Suppose that $x_*$ is an optimal solution to (5) such that for some $P\in\mathcal{P}_s$, $\|(I-P)x_*\|\leq\delta$, and that condition $\mathbf{Q}(\lambda,\psi)$ is satisfied.
Furthermore, assume that the objective $g$ of (5) satisfies the followi... | 752 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.2 Condition $\mathbf{Q}(\lambda,\psi)$
Lemma B.1
… | 811 |
71 | When setting $z=\widehat{x}-x_*$, one has
$$\|\widehat{x}\|=\|x_*+z\|=\|Px_*+(I-P)x_*+z\|\geq\|Px_*+z\|-\|(I-P)x_*\|$$... | 1,784 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.2 Condition $\mathbf{Q}(\lambda,\psi)$
Proof.
… | 1,840 |
72 | We discuss the implications of condition $\mathbf{Q}(\lambda,\psi)$ and of the result of Lemma B.1 for “usual” sparsity in Section 3 of the paper. Now, let us consider the case of low-rank sparsity. Let $z\in\mathbf{R}^{p\times q}$ with $p\geq q$ for the sake of definiteness. In th... | 1,312 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix B Properties of sparsity structures
B.2 Condition $\mathbf{Q}(\lambda,\psi)$
Proof.
Remark B.1
… | 1,373 |
73 | This section complements the numerical results appearing in the main body of the paper. We consider the setting of Section 3.3, the sparse recovery problem from GLR model observations (15).
In the experiments below, we consider the choice (21) of activation function $\mathfrak{r}_\alpha(t)$ with val... | 1,297 | Stochastic Mirror Descent for Large-Scale Sparse Recovery
Appendix C Supplementary numerical experiments
… | 1,316 |
0 | Many real-world datasets, in domains such as healthcare, climate, and economics, are often collected as irregular time series, which poses challenges for accurate modeling. In this paper, we propose the Amortized Control of continuous State Space Model (ACSSM) for continuous dynamical modeling of time series for irregular and dis... | 236 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
Abstract
… | 258 |
1 | State space models (SSMs) are widely used for modeling time series data (Fraccaro et al., 2017; Rangapuram et al., 2018; de Bézenac et al., 2020) but typically assume evenly spaced observations. However, many real-world datasets, such as healthcare (Silva et al., 2012) and climate (Menne et al., 2015), are often collected... | 1,207 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
1 Introduction
… | 1,230 |
2 | Throughout this paper, we denote path measures by $\mathbb{P}^{(\cdot)}$, defined on the space of continuous functions $\Omega=C([0,T],\mathbb{R}^d)$. We sometimes write the conditional expectation under $\mathbb{P}$ as $\mathbb{E}^{t,\mathbf{x}}_{\mathbb{P}}[\cdot]=\mathbb{E}_{\mathbb{P}}[\,\cdot\,|\,\mathbf{X}_t=\mathbf{x}]$... | 1,210 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
1 Introduction
Notation
… | 1,236 |
3 | Consider a set of time-stamps (regular or irregular) $\{t_i\}_{i=0}^k$ over an interval $\mathcal{T}=[0,T]$, i.e., $0=t_0\leq\cdots\leq t_k=T$. The CD-SSM assumes a continuous-time Markov state trajectory $\mathbf{X}_{0:T}$... | 1,608 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
2 Preliminaries
Continuous-Discrete State Space Model
… | 1,641 |
4 | In this section, we introduce our proposed model, ACSSM. First, we present the multi-marginal Doob's $h$-transform, outlining the continuous dynamics for $\mathbb{P}^\star$, in Section 3.1. Then, in Section 3.2, we frame the VI for approximating $\mathbb{P}^\star$ using SOC. To support scal... | 143 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
… | 173 |
5 | Before applying VI to approximate the posterior distribution $\mathbb{P}^\star$ in SOC, we first show that there exists a class of SDEs whose solutions induce a path measure equivalent to $\mathbb{P}^\star$ in (2). This formulation provides valuable insight for defining an appropriate objectiv... | 1,171 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.1 Multi Marginal Doob's $h$-transform
… | 1,216 |
6 | Let us define a sequence of functions $\{h_i\}_{i\in[1:k]}$, where each $h_i:[t_{i-1},t_i)\times\mathbb{R}^d\to\mathbb{R}_+$, for all $i\in[1:k]$, is a conditional expectatio... | 1,416 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.1 Multi Marginal Doob's $h$-transform
Theorem 3.1 (Multi-marginal Doob's $h$-transform).
… | 1,481 |
7 | SOC (Fleming & Soner, 2006; Carmona, 2016) is a mathematical framework for finding control policies that achieve a given objective. In this paper, we define the following control-affine SDE, adjusting the prior dynamics in (1) with a Markov control $\alpha:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$... | 1,510 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
… | 1,549 |
8 | Let us consider a sequence of left-continuous functions $\{\mathcal{V}_i\}_{i\in[1:k+1]}$, where each $\mathcal{V}_i\in C^{1,2}([t_{i-1},t_i)\times\mathbb{R}^d)$
... | 1,866 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Theorem 3.2 (Dynamic Programming Principle).
… | 1,916 |
9 | Suppose there exists a sequence of left-continuous functions $\mathcal{V}_i(t,\mathbf{x})\in C^{1,2}([t_{i-1},t_i),\mathbb{R}^d)$, for all $i\in[1:k]$, satisfying the following Hamilton–Jacobi–Bellman (HJB)... | 1,418 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Theorem 3.3 (Verification Theorem).
… | 1,468 |
10 | The $h$ function satisfies the following linear PDE:
$$\partial_t h_{i,t}+\mathcal{A}_t h_{i,t}=0,\qquad t_{i-1}\leq t<t_i\qquad\text{(13)}$$
$$h_i(t_i,\,$$... | 394 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Lemma 3.4 (Hopf-Cole Transformation).
… | 445 |
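For the reader's convenience, the classical Hopf-Cole identity behind Lemma 3.4 can be written out explicitly; the sign convention $\mathcal{V}_i=-\log h_i$ is assumed here and should be checked against the paper's definitions.

```latex
% Hopf-Cole: with V_i := -log h_i, the linear backward PDE for h_i is
% equivalent to an HJB-type equation for V_i on each interval [t_{i-1}, t_i):
\partial_t h_{i,t} + \mathcal{A}_t h_{i,t} = 0
\quad\Longleftrightarrow\quad
\partial_t \mathcal{V}_i + \mathcal{A}_t \mathcal{V}_i
  - \tfrac{1}{2}\,\bigl\|\sigma^{\top}\nabla_{\mathbf{x}}\mathcal{V}_i\bigr\|^2 = 0,
\qquad \mathcal{V}_i := -\log h_i .
```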
11 | The optimal control $\alpha^\star$ induced by the cost function (7) with dynamics (6) satisfies $\alpha^\star=\nabla_{\mathbf{x}}\log h$. In other words, we can simulate the conditional SDEs in (5) by simulating the controlled SDE (6) with the optimal control $\alpha^\star$... | 499 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Corollary 3.5 (Optimal Control).
… | 550 |
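Written out, this is the classical Doob $h$-transform drift correction: substituting the optimal control into the prior dynamics yields, on each interval, an SDE of the following form. This is a sketch; the exact way the control enters the drift is fixed by the paper's equation (6), and the $\sigma\sigma^\top$ factor below corresponds to the standard convention.

```latex
d\mathbf{X}_t
  = \bigl[\, b(t,\mathbf{X}_t)
      + \sigma\sigma^{\top}\,\nabla_{\mathbf{x}}\log h_i(t,\mathbf{X}_t) \bigr]\,dt
    + \sigma\, d\mathbf{W}_t ,
\qquad t \in [t_{i-1}, t_i),
```

where $b$ denotes the drift of the prior dynamics in (1) and $h_i$ the function from Theorem 3.1.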
12 | Let us assume that the path measure $\mathbb{P}^\alpha$ induced by (6) for any $\alpha\in\mathbb{A}$ satisfies $D_{\text{KL}}(\mathbb{P}^\alpha|\mathbb{P}^\star)<\infty$. Then, for the cost function $\mathcal{J}$ in (7) and $\mu_0^\star$... | 1,753 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.2 Stochastic Optimal Control
Theorem 3.6 (Tight Variational Bound).
… | 1,805 |
13 | The linear approximation of a general drift function provides a simulation-free property for the dynamics and significantly enhances scalability while ensuring high-quality performance (Deng et al., 2024). Motivated by this property, we investigate a class of linear SDEs to improve the efficiency of the proposed contr... | 1,033 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.3 Efficient Modeling of the Latent System
… | 1,074 |
14 | Since the SPD matrix $\mathbf{A}$ admits the eigendecomposition $\mathbf{A}=\mathbf{E}\mathbf{D}\mathbf{E}^\top$ with $\mathbf{E}\in\mathbb{R}^{d\times d}$ and $\mathbf{D}\in\text{diag}(\mathbb{R}^d)$, the process $\mathbf{X}_t$... | 1,322 | Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
3 Controlled Continuous-Discrete State Space Model
3.3 Efficient Modeling of the Latent System
Remark 3.7 (Diagonalization).
… | 1,373 |
Locally Linear Approximation

To leverage the advantages of the linear SDEs in (17), which offer a simulation-free property, we aim to linearize the drift function in (6). However, the naïve formulation described in (17) may limit the expressiveness of the latent dynamics for real-world applications. Hence, we introduce a parameterization strategy inspired by […]
Theorem 3.8 (Simulation-free estimation).
Let us consider sequences of SPD matrices $\{\mathbf{A}_{i}\}_{i\in[1:k]}$ and vectors $\{\alpha_{i}\}_{i\in[1:k]}$, and the following control-affine SDEs for all $i\in[1:k]$:
$$d\mathbf{X}_{t}=\left[-\mathbf{A}_{i}\mathbf{X}_{t}+\alpha_{i}\right]dt+\sigma\,d\mathbf{W}_{t}$$ […]
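The practical payoff of such control-affine linear dynamics is that their transition kernels are Gaussian in closed form. The sketch below (an illustration under scalar rates, as after the diagonalization of Remark 3.7; the function name is not from the paper) samples the state at the next observation time exactly, with one draw per gap and no sub-stepping:

```python
import numpy as np

# Simulation-free sampling for the scalar OU-type SDE
#   dX_t = (-a X_t + alpha) dt + sigma dW_t,  a > 0:
#   X_{t+dt} | X_t ~ N( e^{-a dt} X_t + (alpha/a)(1 - e^{-a dt}),
#                       sigma^2 (1 - e^{-2 a dt}) / (2a) ).

def ou_transition(x, a, alpha, sigma, dt, rng):
    """Exact one-step sample of dX = (-a X + alpha) dt + sigma dW."""
    decay = np.exp(-a * dt)
    mean = decay * x + (alpha / a) * (1.0 - decay)
    var = sigma**2 * (1.0 - np.exp(-2.0 * a * dt)) / (2.0 * a)
    return mean + np.sqrt(var) * rng.standard_normal(np.shape(x))

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5])
# Irregular observation gaps are handled with a single exact draw each.
for dt in [0.1, 1.3, 0.02]:
    x = ou_transition(x, a=2.0, alpha=0.4, sigma=0.5, dt=dt, rng=rng)
print(x)
```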
Parallel Computation

Given an associative operator $\otimes$ and a sequence of elements $[s_{t_{1}},\cdots,s_{t_{K}}]$, the parallel scan algorithm computes the all-prefix-sum, which returns the sequence
$$[s_{t_{1}},\;(s_{t_{1}}\otimes s_{t_{2}}),\;\cdots,\;(s_{t_{1}}\otimes s_{t_{2}}\otimes\cdots\otimes s_{t_{K}})]$$ […]
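As an illustration of how this applies to linear latent dynamics (the affine-map encoding below is an assumption, in the spirit of parallel linear-recurrence implementations such as `jax.lax.associative_scan`, not the paper's code): a recurrence $x_{i}=a_{i}x_{i-1}+b_{i}$ becomes a scan over pairs $s_{i}=(a_{i},b_{i})$ under an associative composition, so all $K$ prefixes can be computed in $O(\log K)$ parallel depth.

```python
import numpy as np

def combine(s1, s2):
    """Associative composition of affine maps: apply s1, then s2."""
    a1, b1 = s1
    a2, b2 = s2
    return (a2 * a1, a2 * b1 + b2)

def scan(elems, op):
    """All-prefix-sum [s1, s1*s2, ..., s1*...*sK] by divide and conquer.
    The two recursive calls are independent, which is what a parallel
    implementation exploits; here they simply run sequentially."""
    if len(elems) == 1:
        return list(elems)
    mid = len(elems) // 2
    left = scan(elems[:mid], op)
    right = scan(elems[mid:], op)
    carry = left[-1]
    return left + [op(carry, s) for s in right]

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.0, size=8)      # per-step transition coefficients
b = rng.standard_normal(8)             # per-step offsets

prefixes = scan(list(zip(a, b)), combine)

# Sanity check against the sequential recurrence x_i = a_i x_{i-1} + b_i
# with x_0 = 0: the offset part of each prefix equals x_i.
x, xs = 0.0, []
for ai, bi in zip(a, b):
    x = ai * x + bi
    xs.append(x)
assert np.allclose([p[1] for p in prefixes], xs)
```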
Remark 3.9 (Non-Markov Control).
Note that (21)-(22) involve approximating the Markov control by a non-Markov control $\alpha(\mathcal{H}_{t}):=\alpha^{\theta}$, parameterized by a neural network $\theta$. However, Theorem 3.3 establishes that the optimal control should be Markov, as it is verified by the HJB equation […]
Amortization

Moreover, to enhance flexibility, we treat $\mathbf{y}_{0:T}$ as an auxiliary variable in the latent space, produced by a neural network encoder $q_{\phi}$ applied to the given time-series data $\mathbf{o}_{0:T}$, i.e., $\mathbf{y}_{0:T}\sim q_{\phi}(\mathbf{y}_{0:T}|\mathbf{o}_{0:T})$ […]
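A minimal sketch of the amortization mechanic (the linear "encoder" and the per-time-point Gaussian factorization are stand-in assumptions, not the paper's architecture): the encoder maps the observed series to distribution parameters for the auxiliary latents, and a reparameterized sample keeps the draw differentiable in $\phi$.

```python
import numpy as np

# Stand-in amortized encoder: q_phi maps the observed series o_{0:T} to
# Gaussian parameters for the auxiliary latents y_{0:T}; sampling uses the
# reparameterization trick so gradients flow back into phi.

rng = np.random.default_rng(0)
T, obs_dim, latent_dim = 16, 12, 8

phi = {
    "W_mu": 0.1 * rng.standard_normal((obs_dim, latent_dim)),
    "W_logvar": 0.1 * rng.standard_normal((obs_dim, latent_dim)),
}

def encode(o, phi):
    """q_phi(y_{0:T} | o_{0:T}): Gaussian parameters per time point."""
    return o @ phi["W_mu"], o @ phi["W_logvar"]

o = rng.standard_normal((T, obs_dim))       # observed time series o_{0:T}
mu, logvar = encode(o, phi)
eps = rng.standard_normal(mu.shape)
y = mu + np.exp(0.5 * logvar) * eps         # reparameterized y_{0:T} sample
print(y.shape)                              # (16, 8)
```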
3.4 Training and Inference

Training Objective Function

We jointly train, in an end-to-end manner, the amortization parameters $\{\phi,\psi\}$ for the encoder-decoder pair, along with the parameters of the latent dynamics $\theta=\{\mathbf{f}_{\theta},\mathbf{B}_{\theta},\mathbf{T}_{\theta},\mathbf{m}_{0},\mathbf{\Sigma}_{0},\{\mathbf{A}^{(l)}\}_{l\in[1:L]}\}$ […]
4 Experiment

In this section, we present empirical results demonstrating the effectiveness of ACSSM in modeling real-world irregular time-series data. The primary objective is to evaluate its capability to capture the underlying dynamics across various datasets. To demonstrate the applicability of ACSSM, we conducted experiments […]
4.1 Per time point classification & regression

Human Activity Classification

We investigated the classification performance of our proposed model. For this purpose, we trained the model on the Human Activity dataset, which contains time-series data from five individuals performing various activities such as walking, sitting, lying, and standing. The dataset includes 12 features in total, re[…]
Pendulum Regression

Next, we explored the problem of sequence regression using the pendulum experiment (Becker et al., 2019), where the goal is to infer the sine and cosine of the pendulum angle from irregularly observed, noisy pendulum images (Schirmer et al., 2022). To assess performance, we compared our model with previous dynamics-based models […]
4.2 Sequence Interpolation & Extrapolation

Datasets

We benchmark the models on two real-world datasets, USHCN and Physionet. The USHCN dataset (Menne et al., 2015) contains daily measurements from 1,218 weather stations across the US, with five variables over a four-year period. The Physionet dataset (Silva et al., 2012) contains 8,000 multivariate clinical time-series o[…]
Interpolation

We begin by evaluating the effectiveness of ACSSM on the interpolation task. Following the approach of Schirmer et al. (2022) and Rubanova et al. (2019), each model is required to infer all time points $t\in\mathcal{T}^{\prime}$ based on a subset of observations $\mathbf{o}_{t\in\mathcal{T}}$ […]
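To make the protocol concrete, here is a small sketch of the masking-and-scoring loop as described above (the 50/50 observation split and the nearest-observation stand-in model are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Interpolation protocol sketch: condition on observations at a subset of
# time points T, predict values at all time points T', and score on the
# held-out ones.

rng = np.random.default_rng(0)
n = 50
t_all = np.sort(rng.uniform(0.0, 4.0, size=n))          # irregular grid
o_all = np.sin(t_all)[:, None] + 0.1 * rng.standard_normal((n, 1))

observed = rng.random(n) < 0.5                          # observed subset T
t_obs, o_obs = t_all[observed], o_all[observed]

# Stand-in predictor: copy the nearest observed point; a real model would
# be the trained latent dynamics conditioned on (t_obs, o_obs).
nearest = np.abs(t_all[:, None] - t_obs[None, :]).argmin(axis=1)
o_hat = o_obs[nearest]

mse = np.mean((o_hat[~observed] - o_all[~observed]) ** 2)
print(f"held-out interpolation MSE: {mse:.4f}")
```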