chunk_id int64 0 448 | chunk_text stringlengths 1 10.8k | chunk_text_tokens int64 1 2.01k | serialized_text stringlengths 1 11.1k | serialized_text_tokens int64 1 2.02k |
|---|---|---|---|---|
31 | (stochastic optimization).
For probabilities $p=(p_{1},\dots,p_{m})\in P$, where $P\subset\mathbb{R}^{m}$ is the set of nonnegative vectors with components summing to one, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\ \sum_{i=1}^{m}p_{i}f_{i}(x),$ ... | 916 | 3 Examples
3.2 Example
(stochastic optimization).
For probabilities $p=(p_{1},\dots,p_{m})\in P$, where $P\subset\mathbb{R}^{m}$ is the set of nonnegative vectors with components summing to one, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\ \sum_{i=1}^{m}p_{i}f_{i}(x),$ ... | 924 |
32 | (distributionally robust optimization).
Let $P\subset\mathbb{R}^{m}$ be as in the previous example and let $A$ and $A^{\nu}$ be nonempty closed subsets of $P$. We consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\ \max_{p\in A}\sum_{i=1}^{m}p_{i}f_{i}(x)$ ... | 1,784 | 3 Examples
3.3 Example
(distributionally robust optimization).
Let $P\subset\mathbb{R}^{m}$ be as in the previous example and let $A$ and $A^{\nu}$ be nonempty closed subsets of $P$. We consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\ \max_{p\in A}\sum_{i=1}^{m}p_{i}f_{i}(x)$ ... | 1,792 |
33 | Since there are $\bar{p}^{\nu}\in A$ such that $\|p^{\nu}-\bar{p}^{\nu}\|_{2}\to 0$ because $A^{\nu}\xrightarrow{\ s\ }A$, $\limsup h^{\nu}$... | 336 | 3 Examples
3.3 Example
Since there are $\bar{p}^{\nu}\in A$ such that $\|p^{\nu}-\bar{p}^{\nu}\|_{2}\to 0$ because $A^{\nu}\xrightarrow{\ s\ }A$, ... | 344 |
34 | (augmented Lagrangian methods).
With $h(z)=z_{1}+\sum_{i=2}^{m}\iota_{\{0\}}(z_{i})$, the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f_{1}(x)\ \text{ subject to }\ f_{2}(x)=0,\dots,f_{m}(x)=0$ ... | 1,412 | 3 Examples
3.4 Example
(augmented Lagrangian methods).
With $h(z)=z_{1}+\sum_{i=2}^{m}\iota_{\{0\}}(z_{i})$, the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f_{1}(x)$ subject to ... | 1,420 |
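The chunk above casts the equality-constrained problem in the composite form used by augmented Lagrangian methods. Below is a minimal Python sketch of a classical augmented Lagrangian outer loop for such a problem; the toy instance (quadratic objective, one linear equality constraint), the penalty value, and the step sizes are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

# Hedged sketch of a classical augmented Lagrangian loop for
#   minimize f1(x)  subject to  f_i(x) = 0, i = 2, ..., m,
# on an assumed toy instance; step sizes and penalty are illustrative.

def f1(x):
    return 0.5 * np.sum(x**2)

def constraints(x):                    # stacked equality constraints f_2, ..., f_m
    return np.array([x[0] + x[1] - 1.0])

def aug_lagrangian_grad(x, y, theta):
    # gradient of f1(x) + <y, c(x)> + (theta/2)*||c(x)||^2 for this toy instance
    c = constraints(x)
    jac = np.array([[1.0, 1.0]])       # Jacobian of the constraints
    return x + jac.T @ (y + theta * c)

x, y, theta = np.zeros(2), np.zeros(1), 10.0
for outer in range(20):
    for inner in range(200):           # crude inner minimization by gradient descent
        x = x - 0.05 * aug_lagrangian_grad(x, y, theta)
    y = y + theta * constraints(x)     # multiplier update
print(x, constraints(x))               # approaches (0.5, 0.5) with c(x) near 0
```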
35 | (min-functions).
For smooth $g_{ik}:\mathbb{R}^{n}\to\mathbb{R}$, $k=1,\dots,s_{i}$, $i=1,\dots,m$, let
, 1 = $f_{i}(x)=\min_{k=1,\dots,s_{i}}g_{ik}(x).$ , 2 =
W... | 1,673 | 3 Examples
3.5 Example
(min-functions).
For smooth $g_{ik}:\mathbb{R}^{n}\to\mathbb{R}$, $k=1,\dots,s_{i}$, $i=1,\dots,m$, let
, 1 = $f_{i}(x)=\min_{k=1,\dots,s_{i}}g_{ik}(x).$ ... | 1,681 |
36 | Since $\mu^{\nu}_{ik}(x)\in(0,1)$ regardless of $x$, we have that $\{\nabla f_{i}^{\nu}(x^{\nu}),\nu\in\mathbb{N}\}$ is bounded. For some $N\in{\cal N}^{\#}_{\infty}$, s... | 1,904 | 3 Examples
3.5 Example
Since $\mu^{\nu}_{ik}(x)\in(0,1)$ regardless of $x$, we have that $\{\nabla f_{i}^{\nu}(x^{\nu}),\nu\in\mathbb{N}\}$ is bounded. For some $N\in{\cal N}^{\#}_{\infty}$, ... | 1,912 |
37 | , 1 = $\nabla f^{\nu}_{i}(x^{\nu})=\sum_{k=1}^{s_{i}}\mu^{\nu}_{ik}(x^{\nu})\nabla g_{ik}(x^{\nu})\ \xrightarrow{\ N\ }\ \bar{v}=\sum_{k\in\mathbb{A}_{i}(\bar{x})}\mu^{\infty}_{ik}\nabla g_{ik}(\bar{x})\in\operatorname{con}\partial f_{i}(\bar{x})$ ... | 481 | 3 Examples
3.5 Example
, 1 = $\nabla f^{\nu}_{i}(x^{\nu})=\sum_{k=1}^{s_{i}}\mu^{\nu}_{ik}(x^{\nu})\nabla g_{ik}(x^{\nu})\ \xrightarrow{\ N\ }\ \bar{v}=\sum_{k\in\mathbb{A}_{i}(\bar{x})}\mu^{\infty}_{ik}\nabla g_{ik}(\bar{x})\in\operatorname{con}\partial f_{i}(\bar{x})$ ... | 489 |
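The two chunks above use smoothed approximations $f_{i}^{\nu}$ of min-functions whose gradients are convex combinations $\sum_{k}\mu^{\nu}_{ik}(x)\nabla g_{ik}(x)$ with weights in $(0,1)$. The sketch below uses soft-min (log-sum-exp) smoothing, one common choice with exactly that structure; the paper's specific smoothing and the toy component functions here are assumptions.

```python
import numpy as np

# Hedged sketch: soft-min smoothing of f(x) = min_k g_k(x). Its weights mu_k(x)
# lie in (0,1) and its gradient is a convex combination of the grad g_k, matching
# the structure described above; the paper's exact f_i^nu may differ.

def g(x):          # toy component functions g_k(x), k = 1, 2 (assumptions)
    return np.array([x[0]**2 + 1.0, (x[0] - 2.0)**2])

def grad_g(x):     # their gradients, one row per k
    return np.array([[2.0 * x[0]], [2.0 * (x[0] - 2.0)]])

def soft_min(x, theta):
    vals = g(x)
    shifted = vals - vals.min()                    # shift for numerical stability
    w = np.exp(-theta * shifted)
    mu = w / w.sum()                               # weights mu_k(x) in (0,1)
    f_smooth = vals.min() - np.log(w.sum()) / theta
    grad = mu @ grad_g(x)                          # sum_k mu_k(x) grad g_k(x)
    return f_smooth, grad, mu

val, grad, mu = soft_min(np.array([1.0]), theta=10.0)
print(val, grad, mu)   # approaches min_k g_k and a subgradient as theta grows
```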
38 | (penalty methods).
With $h(z)=z_{1}+\iota_{(-\infty,0]^{m-1}}(z_{2},\dots,z_{m})$, we consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f_{1}(x)\ \text{ subject to }\ f_{2}(x)\leq 0,\dots,f_{m}(x)\leq 0,$ ... | 1,254 | 3 Examples
3.6 Example
(penalty methods).
With $h(z)=z_{1}+\iota_{(-\infty,0]^{m-1}}(z_{2},\dots,z_{m})$, we consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f_{1}(x)\ \text{ subject to }\ f_{2}(x)\leq 0,\dots,f_{m}(x)\leq 0,$ ... | 1,262 |
39 | (interior-point methods).
For the actual problem in Example 3.6, we consider the approximation
, 1 = $h^{\nu}(z)=\begin{cases}z_{1}-\frac{1}{\theta^{\nu}}\sum_{i=2}^{m}\ln(-z_{i})&\text{ if }z_{2}<0,\dots,z_{m}<0\\ \infty&\text{ otherwise,}\end{cases}$ ... | 1,161 | 3 Examples
3.7 Example
(interior-point methods).
For the actual problem in Example 3.6, we consider the approximation
, 1 = $h^{\nu}(z)=\begin{cases}z_{1}-\frac{1}{\theta^{\nu}}\sum_{i=2}^{m}\ln(-z_{i})&\text{ if }z_{2}<0,\dots,z_{m}<0\\ \infty&\text{ otherwise,}\end{cases}$ ... | 1,169 |
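The display above defines the log-barrier approximation $h^{\nu}$ used by interior-point methods. A minimal sketch of evaluating it, with an assumed test vector, follows; as $\theta^{\nu}$ grows the barrier value approaches $z_{1}$ on the interior of the feasible orthant.

```python
import numpy as np

# Hedged sketch of the log-barrier approximation quoted above:
#   h_nu(z) = z_1 - (1/theta_nu) * sum_{i=2}^m ln(-z_i)  if z_2, ..., z_m < 0
#   h_nu(z) = +inf                                        otherwise.
# The test vector is an illustrative assumption.

def h_barrier(z, theta_nu):
    tail = np.asarray(z[1:])
    if np.any(tail >= 0.0):
        return np.inf
    return z[0] - np.sum(np.log(-tail)) / theta_nu

z = np.array([1.5, -0.2, -3.0])
for theta_nu in [1.0, 10.0, 100.0]:
    print(theta_nu, h_barrier(z, theta_nu))   # tends to z_1 = 1.5 as theta_nu grows
```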
40 | (expectation functions).
Expectation functions arise in stochastic optimization and machine learning, where the component functions $f_{1},\dots,f_{m}$ may take the form
, 1 = $f_{i}(x)=\mathbb{E}\big[g_{i}(\boldsymbol{\xi},x)\big],$ ... | 923 | 3 Examples
3.8 Example
(expectation functions).
Expectation functions arise in stochastic optimization and machine learning, where the component functions $f_{1},\dots,f_{m}$ may take the form
, 1 = $f_{i}(x)=\mathbb{E}\big[g_{i}(\boldsymbol{\xi},x)\big],$ ... | 931 |
41 | (oracle functions).
Suppose that $h$ is real-valued, but not available in an explicit form. If for each $z$ we can compute $h(z)$ and a subgradient $v\in\partial h(z)$, then the approximation
, 1 = $h^{\nu}(z)=\max_{k=1,\dots,\nu}h(z^{k})+\langle v^{k},z-z^{k}\rangle,\ \text{ with }z^{k}\in\mathbb{R}^{m},\ v^{k}\in\partial h(z^{k}),$ ... | 966 | 3 Examples
3.9 Example
(oracle functions).
Suppose that $h$ is real-valued, but not available in an explicit form. If for each $z$ we can compute $h(z)$ and a subgradient $v\in\partial h(z)$, then the approximation
, 1 = $h^{\nu}(z)=\max_{k=1,\dots,\nu}h(z^{k})+\langle v^{k},z-z^{k}\rangle,\ \text{ with }z^{k}\in\mathbb{R}^{m},\ v^{k}\in\partial h(z^{k}),$ ... | 974 |
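The chunk above builds $h^{\nu}$ as the maximum of cutting planes generated by an oracle for a convex $h$. The sketch below assembles such a model for the illustrative choice $h(z)=\max_{i}z_{i}$, whose subgradients are easy to compute; the sampling scheme for the points $z^{k}$ is an assumption.

```python
import numpy as np

# Hedged sketch of the cutting-plane (oracle) model quoted above:
#   h_nu(z) = max_{k=1..nu}  h(z^k) + <v^k, z - z^k>,  with v^k a subgradient at z^k.
# Here h(z) = max_i z_i is an illustrative convex choice.

def h(z):
    return np.max(z)

def subgrad_h(z):
    e = np.zeros_like(z)
    e[np.argmax(z)] = 1.0              # a subgradient of the max function
    return e

rng = np.random.default_rng(0)
cuts = []                              # list of (z^k, h(z^k), v^k)
for _ in range(25):
    zk = rng.normal(size=3)
    cuts.append((zk, h(zk), subgrad_h(zk)))

def h_model(z):
    return max(hk + vk @ (z - zk) for zk, hk, vk in cuts)

z = np.array([0.3, -0.1, 0.7])
print(h(z), h_model(z))                # the model minorizes h and improves with more cuts
```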
42 | (homotopy method).
For proper, lsc, and convex $\hat{h}:\mathbb{R}^{m-1}\to\overline{\mathbb{R}}$ and lLc $\hat{F}:\mathbb{R}^{n}\to\mathbb{R}^{m-1}$, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\hat{h}(\hat{F}(x)),$ ... | 1,563 | 3 Examples
3.10 Example
(homotopy method).
For proper, lsc, and convex $\hat{h}:\mathbb{R}^{m-1}\to\overline{\mathbb{R}}$ and lLc $\hat{F}:\mathbb{R}^{n}\to\mathbb{R}^{m-1}$, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}\hat{h}(\hat{F}(x)),$ ... | 1,571 |
43 | (monitoring functions).
In extended nonlinear programming [49], one utilizes
, 1 = $h(z)=\mathop{\rm sup}_{y\in Y}\big\{\langle z,y\rangle-\tfrac{1}{2}\langle y,By\rangle\big\}\ \text{ and }\ h^{\nu}(z)=\mathop{\rm sup}_{y\in Y^{\nu}}\big\{\langle z,y\rangle-\tfrac{1}{2}\langle y,B^{\nu}y\rangle\big\},$ ... | 1,465 | 3 Examples
3.11 Example
(monitoring functions).
In extended nonlinear programming [49], one utilizes
, 1 = $h(z)=\mathop{\rm sup}_{y\in Y}\big\{\langle z,y\rangle-\tfrac{1}{2}\langle y,By\rangle\big\}\ \text{ and }\ h^{\nu}(z)=\mathop{\rm sup}_{y\in Y^{\nu}}\big\{\langle z,y\rangle-\tfrac{1}{2}\langle y,B^{\nu}y\rangle\big\},$ ... | 1,473 |
44 | (difference-of-convex functions).
For a proper, lsc, and convex $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ and convex $g:\mathbb{R}^{n}\to\mathbb{R}$, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f(x)-g(x),$ ... | 1,470 | 3 Examples
3.12 Example
(difference-of-convex functions).
For a proper, lsc, and convex $f:\mathbb{R}^{n}\to\overline{\mathbb{R}}$ and convex $g:\mathbb{R}^{n}\to\mathbb{R}$, consider the problem
, 1 = $\mathop{\rm minimize}_{x\in X}f(x)-g(x),$ ... | 1,478 |
45 | As an example of an implementable version of the consistent approximation algorithm, we consider the setting where $X^{\nu}$ is convex, $h^{\nu}$ is real-valued, and $F^{\nu}$ is twice continuously differentiable. Then, the approximating problems (2.1) are solvable by proxima... | 1,925 | 4 Enhanced Proximal Composite Algorithm
As an example of an implementable version of the consistent approximation algorithm, we consider the setting where $X^{\nu}$ is convex, $h^{\nu}$ is real-valued, and $F^{\nu}$ is twice continuously differentiable. Then, the approximatin... | 1,933 |
46 | , 1 = $\bar{y}^{k+1}\in\partial h^{\nu}(\bar{z}^{k+1})\ \text{ and }\ -\nabla F^{\nu}(\bar{x}^{k})^{\top}\bar{y}^{k+1}-\tfrac{1}{\lambda^{k}}(\bar{x}^{k+1}-\bar{x}^{k})\in N_{X^{\nu}}(\bar{x}^{k+1}).$ ... | 957 | 4 Enhanced Proximal Composite Algorithm
, 1 = $\bar{y}^{k+1}\in\partial h^{\nu}(\bar{z}^{k+1})\ \text{ and }\ -\nabla F^{\nu}(\bar{x}^{k})^{\top}\bar{y}^{k+1}-\tfrac{1}{\lambda^{k}}(\bar{x}^{k+1}-\bar{x}^{k})\in N_{X^{\nu}}(\bar{x}^{k+1}).$ ... | 965 |
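The optimality condition above characterizes one proximal (prox-linear) step: minimize $h^{\nu}$ of the linearized mapping plus a proximal term. The sketch below specializes to $h(z)=\tfrac{1}{2}\|z\|^{2}$ and $X=\mathbb{R}^{n}$ so the subproblem has a closed form; for the general convex $h^{\nu}$ and $X^{\nu}$ of the paper the subproblem would instead go to a convex solver. The mapping $F$ here is a toy assumption.

```python
import numpy as np

# Hedged sketch of one prox-linear step for the composite model h(F(x)):
#   x_{k+1} = argmin_x  h( F(x_k) + dF(x_k)(x - x_k) ) + (1/(2*lam)) ||x - x_k||^2.
# Assumes h(z) = 0.5*||z||^2 and X = R^n so the step is a damped Gauss-Newton solve.

def F(x):                          # toy smooth mapping (assumption)
    return np.array([x[0]**2 - 1.0, x[0] * x[1]])

def dF(x):                         # its Jacobian
    return np.array([[2.0 * x[0], 0.0],
                     [x[1],       x[0]]])

def prox_linear_step(x, lam):
    J, r = dF(x), F(x)
    # solve (J^T J + I/lam) d = -J^T r, then move to x + d
    d = np.linalg.solve(J.T @ J + np.eye(2) / lam, -J.T @ r)
    return x + d

x = np.array([2.0, 1.0])
for _ in range(15):
    x = prox_linear_step(x, lam=1.0)
print(x, F(x))                     # drives 0.5*||F(x)||^2 toward a stationary point
```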
47 | (enhanced proximal composite algorithm).
Suppose that $X^{\nu}\xrightarrow{\ s\ }X$ with these sets being convex, $h^{\nu}\xrightarrow{\ e\ }h$ ... | 1,642 | 4 Enhanced Proximal Composite Algorithm
4.1 Theorem
(enhanced proximal composite algorithm).
Suppose that $X^{\nu}\xrightarrow{\ s\ }X$ with these sets being convex, $h^{\nu}\xrightarrow{\ e\ }h$ ... | 1,656 |
48 | , 1 = $-\nabla F^{\nu}(\bar{x}^{k})^{\top}y-\tfrac{1}{\lambda^{k}}(x^{\star}-\bar{x}^{k})\in N_{X^{\nu}}(x^{\star})\ \text{ for some }\ y\in\partial h^{\nu}\big(F^{\nu}(\bar{x}^{k})+\nabla F^{\nu}(\bar{x}^{k})(x^{\star}-\bar{x}^{k})\big).$ ... | 1,018 | 4 Enhanced Proximal Composite Algorithm
4.1 Theorem
, 1 = $-\nabla F^{\nu}(\bar{x}^{k})^{\top}y-\tfrac{1}{\lambda^{k}}(x^{\star}-\bar{x}^{k})\in N_{X^{\nu}}(x^{\star})\ \text{ for some }\ y\in\partial h^{\nu}\big(F^{\nu}(\bar{x}^{k})+\nabla F^{\nu}(\bar{x}^{k})(x^{\star}-\bar{x}^{k})\big).$ ... | 1,032 |
49 | For the sake of contradiction, suppose that the algorithm iterates indefinitely without generating the next $(x^{\nu},y^{\nu},z^{\nu})$. Let $\{\bar{x}^{k},k\in\mathbb{N}\}$ be the resulting sequence, which has a cluster point in view of (4.... | 1,064 | 4 Enhanced Proximal Composite Algorithm
4.1 Theorem
For the sake of contradiction, suppose that the algorithm iterates indefinitely without generating the next $(x^{\nu},y^{\nu},z^{\nu})$. Let $\{\bar{x}^{k},k\in\mathbb{N}\}$ be the resultin... | 1,078 |
50 | An inverse problem in machine learning is that of determining an input to a collection of neural networks such that their outputs best match a given quantity [28, 30]. Specifically, we are given $s$ neural networks represented by the mappings $F_{i}:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{q}}$... | 1,716 | 5 Inverse Problems in Machine Learning
An inverse problem in machine learning is that of determining an input to a collection of neural networks such that their outputs best match a given quantity [28, 30]. Specifically, we are given $s$ neural networks represented by the mappings $F_{i}:\mathbb{R}^{n_{0}}\to$... | 1,724 |
51 | It is convenient to express (5.1) using additional variables as follows: We think of $x_{1,i,k}\in\mathbb{R}^{n_{k}}$ as the output of the $k$th layer for neural network $i$. Let $r=\sum_{k=1}^{q}n_{k}$, $X=\hat{X}\times\mathbb{R}^{sr}$... | 1,798 | 5 Inverse Problems in Machine Learning
It is convenient to express (5.1) using additional variables as follows: We think of $x_{1,i,k}\in\mathbb{R}^{n_{k}}$ as the output of the $k$th layer for neural network $i$. Let $r=\sum_{k=1}^{q}n_{k}$... | 1,806 |
52 | (weak consistency in inverse machine learning).
In the notation of this section, suppose that for each $(i,j,k)$, the following property holds:
, 1 = $\left.\begin{array}{l}\gamma^{\nu}\in\mathbb{R}\to\gamma\\ \alpha^{\nu}\in\partial g^{\nu}_{i,j,k}(\gamma^{\nu})\end{array}\right\}\ \Longrightarrow\ \left\{\begin{array}{l}g^{\nu}_{i,j,k}(\gamma^{\nu})\to g_{i,j,k}(\gamma)\\ \{\alpha^{\nu},\nu\in\mathbb{N}\}\text{ is bounded with all its cluster points in }\partial g_{i,j,k}(\gamma).\end{array}\right.$ ... | 1,872 | 5 Inverse Problems in Machine Learning
5.1 Proposition
(weak consistency in inverse machine learning).
In the notation of this section, suppose that for each $(i,j,k)$, the following property holds:
, 1 = $\left.\begin{array}{l}\gamma^{\nu}\in\mathbb{R}\to\gamma\\ \alpha^{\nu}\in\partial g^{\nu}_{i,j,k}(\gamma^{\nu})\end{array}\right\}\ \Longrightarrow\ \left\{\begin{array}{l}g^{\nu}_{i,j,k}(\gamma^{\nu})\to g_{i,j,k}(\gamma)\\ \{\alpha^{\nu},\nu\in\mathbb{N}\}\text{ is bounded with all its cluster points in }\partial g_{i,j,k}(\gamma).\end{array}\right.$ ... | 1,885 |
53 | Thus, we have $H_{i}^{\nu}(x^{\nu})=0$ and $x^{\nu}\to\bar{x}$. Since $\hat{h}$ is real-valued and convex, $\hat{h}^{\nu}(x_{1,1,q}^{\nu},\dots,x_{1,s,q}^{\nu})$... | 1,626 | 5 Inverse Problems in Machine Learning
5.1 Proposition
Thus, we have $H_{i}^{\nu}(x^{\nu})=0$ and $x^{\nu}\to\bar{x}$. Since $\hat{h}$ is real-valued and convex, $\hat{h}^{\nu}(x_{1,1,q}^{\nu},\dots)$... | 1,639 |
54 | Since $u^{\nu}=F^{\nu}(x^{\nu})-z^{\nu}$, we conclude that $\bar{u}=F(\bar{x})-\bar{z}$; recall that $g_{i,j,k}^{\nu}(\gamma^{\nu})\to g_{i,j,k}(\gamma)$... | 1,670 | 5 Inverse Problems in Machine Learning
5.1 Proposition
Since $u^{\nu}=F^{\nu}(x^{\nu})-z^{\nu}$, we conclude that $\bar{u}=F(\bar{x})-\bar{z}$; recall that $g_{i,j,k}^{\nu}(\gamma^{\nu})\to$... | 1,683 |
55 | where $c^{\nu}_{i,j,k}=\nabla a^{\nu}_{i,j,k}(x)$. The assumption (5.2) implies that any sequence $\{d_{i,j,k}^{\nu}\in\partial f^{\nu}_{i,j,k}(x^{\nu}),\nu\in N\}$... | 584 | 5 Inverse Problems in Machine Learning
5.1 Proposition
where $c^{\nu}_{i,j,k}=\nabla a^{\nu}_{i,j,k}(x)$. The assumption (5.2) implies that any sequence $\{d_{i,j,k}^{\nu}\in\partial f^{\nu}_{i,j,k}(x^{\nu}),\nu\in N\}$... | 597 |
56 | Consistency furnishes guarantees about the limiting behavior of approximations, but it also can be beneficial to quantify the rate of convergence. In this section, we refine results from [53] and estimate the discrepancy between near-solutions of the optimality condition $0\in S^{\nu}(x,y,z)$... | 1,684 | 6 Rates and Error Estimates
Consistency furnishes guarantees about the limiting behavior of approximations, but it also can be beneficial to quantify the rate of convergence. In this section, we refine results from [53] and estimate the discrepancy between near-solutions of the optimality condition $0\in S^{\nu}(x,y,z)$... | 1,690 |
57 | Thus, $S^{-1}(\mathbb{B}(0,\varepsilon))$ is the set of near-solutions of $0\in S(x,y,z)$ with the tolerance now being specified by $\|\cdot\|_{\rm out}$, i.e., $(\bar{x},\bar{y},\bar{z})\in S^{-1}(\mathbb{B}(0,\varepsilon))$... | 352 | 6 Rates and Error Estimates
Thus, $S^{-1}(\mathbb{B}(0,\varepsilon))$ is the set of near-solutions of $0\in S(x,y,z)$ with the tolerance now being specified by $\|\cdot\|_{\rm out}$, i.e., $(\bar{x},\bar{y},\bar{z})\in S^{-1}(\mathbb{B}(0,\varepsilon))$... | 358 |
58 | (solution error in optimality conditions). Suppose that $0\leq\delta^{\nu}\leq\rho<\infty$ and $\varepsilon\geq\delta^{\nu}+\operatorname{exs}_{\rho}(\operatorname{gph}S^{\nu};\,\operatorname{gph}S)$. Then, under the no... | 1,243 | 6 Rates and Error Estimates
6.1 Proposition
(solution error in optimality conditions). Suppose that $0\leq\delta^{\nu}\leq\rho<\infty$ and $\varepsilon\geq\delta^{\nu}+\operatorname{exs}_{\rho}(\operatorname{gph}S^{\nu};\,\operatorname{gph}S)$... | 1,254 |
59 | (estimate of excess).
Suppose that $\rho\in[0,\infty)$ and $X=X^{\nu}$. Let
, 1 = $\eta_{0}^{\nu}$ , 2 = $=\sup\big\{\|F^{\nu}(x)-F(x)\|_{2}\ \big|\ x\in X\cap\mathbb{B}(0,\rho)\big\}$ ... | 1,992 | 6 Rates and Error Estimates
6.2 Theorem
(estimate of excess).
Suppose that $\rho\in[0,\infty)$ and $X=X^{\nu}$. Let
, 1 = $\eta_{0}^{\nu}$ , 2 = $=\sup\big\{\|F^{\nu}(x)-F(x)\|_{2}\ \big|\ x\in X\cap\mathbb{B}(0,\rho)\big\}$ ... | 2,004 |
60 | There are $\bar{a}_{i}\in\operatorname{con}\partial f_{i}^{\nu}(\bar{x})$ such that $\bar{w}-\sum_{i=1}^{m}\bar{y}_{i}\bar{a}_{i}\in N_{X}(\bar{x})$... | 1,881 | 6 Rates and Error Estimates
6.2 Theorem
There are $\bar{a}_{i}\in\operatorname{con}\partial f_{i}^{\nu}(\bar{x})$ such that $\bar{w}-\sum_{i=1}^{m}\bar{y}_{i}\bar{a}_{i}\in N_{X}(\bar{x})$... | 1,893 |
61 | , 1 = $d\hat{l}_{\rho}(C,D)=\max\big\{\operatorname{exs}_{\rho}(C;D);\,\operatorname{exs}_{\rho}(D;C)\big\}.$ , 2 = | 144 | 6 Rates and Error Estimates
6.2 Theorem
, 1 = $d\hat{l}_{\rho}(C,D)=\max\big\{\operatorname{exs}_{\rho}(C;D);\,\operatorname{exs}_{\rho}(D;C)\big\}.$ , 2 = | 156 |
62 | (approximation of subgradients).
Suppose that the norm on $\mathbb{R}^{m+1}$ is $\max\{\|z\|_{2},|\alpha|\}$, the norm on $\mathbb{R}^{m}\times\mathbb{R}^{m}$ is $\max\{\|z\|_{2},\|v\|_{2}\}$, and ... | 755 | 6 Rates and Error Estimates
6.3 Proposition
(approximation of subgradients).
Suppose that the norm on $\mathbb{R}^{m+1}$ is $\max\{\|z\|_{2},|\alpha|\}$, the norm on $\mathbb{R}^{m}\times\mathbb{R}^{m}$ is $\max\{\|z\|_{2},\|v\|_{2}\}$, and ... | 766 |
63 | (goal optimization; cont.).
In Example 3.1, smoothing of $h^{\nu}$ causes a solution error
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta/\theta^{\nu}$ ... | 818 | 6 Rates and Error Estimates
6.4 Example
(goal optimization; cont.).
In Example 3.1, smoothing of $h^{\nu}$ causes a solution error
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta/\theta^{\nu}$ ... | 829 |
64 | (distributionally robust optimization; cont.).
For Example 3.3, there is a constant $\beta\in[0,\infty)$ such that
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta\alpha^{\nu}$ ... | 1,289 | 6 Rates and Error Estimates
6.5 Example
(distributionally robust optimization; cont.).
For Example 3.3, there is a constant $\beta\in[0,\infty)$ such that
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta\alpha^{\nu}$ ... | 1,300 |
65 | (augmented Lagrangian methods; cont.).
For Example 3.4, we obtain that
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta^{\nu}/\theta^{\nu},\ \text{ where }\beta^{\nu}=(2\rho+\|y^{\nu}\|_{\infty})\sqrt{m-1},$ ... | 1,951 | 6 Rates and Error Estimates
6.6 Example
(augmented Lagrangian methods; cont.).
For Example 3.4, we obtain that
, 1 = $\operatorname{exs}_{\rho}\Big(\,(S^{\nu})^{-1}\big(\mathbb{B}(0,\delta^{\nu})\big);\,S^{-1}\big(\mathbb{B}(0,\varepsilon)\big)\Big)\leq\beta^{\nu}/\theta^{\nu}$, whe... | 1,962 |
66 | (exact penalty methods).
We next approach the problem in Example 3.4 using an exact penalty method. Thus, $h(z)=z_{1}+\sum_{i=2}^{m}\iota_{\{0\}}(z_{i})$ as before, but for $\theta^{\nu}\in[0,\infty)$ we set
, 1 ... | 1,592 | 6 Rates and Error Estimates
6.7 Example
(exact penalty methods).
We next approach the problem in Example 3.4 using an exact penalty method. Thus, $h(z)=z_{1}+\sum_{i=2}^{m}\iota_{\{0\}}(z_{i})$ as before, but for $\theta^{\nu}\in[0,\infty)$... | 1,603 |
67 | (homotopy method; cont.).
In Example 3.10, we obtain for $\lambda^{\nu}\in(0,1)$ that
, 1 = exsρ((Sν)−1(𝔹(0,δν));S−1(𝔹(0,ε)))≤βνλν, where βν=1+4ρ2−(λν)2(1−λν)2,formulae-sequencesubscriptexs𝜌superscriptsuperscript𝑆𝜈1𝔹0superscript𝛿𝜈superscript𝑆1𝔹0𝜀superscript𝛽𝜈superscript𝜆𝜈 w... | 1,849 | 6 Rates and Error Estimates
6.8 Example
(homotopy method; cont.).
In Example 3.10, we obtain for $\lambda^{\nu}\in(0,1)$ that
, 1 = exsρ((Sν)−1(𝔹(0,δν));S−1(𝔹(0,ε)))≤βνλν, where βν=1+4ρ2−(λν)2(1−λν)2,formulae-sequencesubscriptexs𝜌superscriptsuperscript𝑆𝜈1𝔹0superscript𝛿𝜈superscript... | 1,860 |
0 | We consider multi-level composite optimization problems where each mapping in
the composition is the expectation over a family of randomly chosen smooth
mappings or the sum of some finite number of smooth mappings.
We present a normalized proximal approximate gradient (NPAG) method where the
approximate gradients are o... | 223 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
Abstract
We consider multi-level composite optimization problems where each mapping in
the composition is the expectation over a family of randomly chosen smooth
mappings or the sum of some finite number of smooth mappings.
We present a normali... | 237 |
1 | composite stochastic optimization, proximal gradient method,
variance reduction | 12 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
keywords:
composite stochastic optimization, proximal gradient method,
variance reduction | 26 |
2 | In this paper,
we consider multi-level composite stochastic optimization problems of
the form
, 1 = $\mathop{\textrm{minimize}}_{x\in\mathbf{R}^{d}}\ \mathbf{E}_{\xi_{m}}\big[f_{m,\xi_{m}}\big(\cdots\mathbf{E}_{\xi_{2}}\big[f_{2,\xi_{2}}\big(\mathbf{E}_{\xi_{1}}[f_{1,\xi_{1}}(x)]\big)\big]\cdots\big)\big]+\Psi(x),$ ... | 1,698 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
1 Introduction
In this paper,
we consider multi-level composite stochastic optimization problems of
the form
, 1 = $\mathop{\textrm{minimize}}_{x\in\mathbf{R}^{d}}\ \mathbf{E}_{\xi_{m}}\big[f_{m,\xi_{m}}\big(\cdots\mathbf{E}_{\xi_{2}}\big[f_{2,\xi_{2}}\big(\mathbf{E}_{\xi_{1}}[f_{1,\xi_{1}}(x)]\big)\big]\cdots\big)\big]+\Psi(x),$ ... | 1,713 |
3 | Existing work on composite stochastic optimization traces back
to [12].
Several recent advances have been made for two-level ($m=2$) problems,
both for the general stochastic formulation
(e.g., [35, 36, 39, 15])
and for the finite-sum formulation
(e.g., [20, 43, 16, 41]).
In order to cite specific results, we fir... | 1,761 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
1 Introduction
1.1 Related work
Existing work on composite stochastic optimization traces back
to [12].
Several recent advances have been made for two-level ($m=2$) problems,
both for the general stochastic formulation
(e.g., [35, 36, 39,... | 1,782 |
4 | In this paper, we propose stochastic gradient algorithms with nested
variance-reduction for solving problems (1)
and (2) for any $m\geq 1$,
and show that their sample complexity for finding $\bar{x}$ such that
$\mathbf{E}[\|\mathcal{G}(\bar{x})\|]\leq\epsilon$ is $O(\epsilon$... | 1,495 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
1 Introduction
1.2 Contributions and outline
In this paper, we propose stochastic gradient algorithms with nested
variance-reduction for solving problems (1)
and (2) for any $m\geq 1$,
and show that their sample complexity for finding $\bar{x}$... | 1,517 |
5 | In this section, we present a normalized proximal approximate gradient (NPAG)
method for solving problems of form (3),
which we repeat here for convenience:
, 1 = $\mathop{\textrm{minimize}}_{x\in\mathbf{R}^{d}}\ \big\{\Phi(x)\triangleq F(x)+\Psi(x)\big\}.$ ... | 187 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
In this section, we present a normalized proximal approximate gradient (NPAG)
method for solving problems of form (3),
which we repeat here for convenience:
, 1 = $\mathop{\textrm{minimize}}_{x\in\mathbf{R}^{d}}\ \big\{\Phi(x)\triangleq F(x)+\Psi(x)\big\}$... | 208 |
6 | The functions $F$, $\Psi$ and $\Phi$ in (5) satisfy:
(a)
$F:\mathbf{R}^{d}\to\mathbf{R}$ is differentiable and its gradient $F^{\prime}$ is $L$-Lipschitz continuous;
(b)
$\Psi:\mathbf{R}^{d}\to\mathbf{R}\cup\{+\infty\}$ is convex and lower semi... | 1,515 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Assumption 2.1.
The functions $F$, $\Psi$ and $\Phi$ in (5) satisfy:
(a)
$F:\mathbf{R}^{d}\to\mathbf{R}$ is differentiable and its gradient $F^{\prime}$... | 1,543 |
7 | Here we focus on the case where the full gradient oracle F′(⋅)superscript𝐹′⋅F^{\prime}(\cdot) is not
available;
instead, we can compute at each iteration t𝑡t an approximate gradient vtsuperscript𝑣𝑡v^{t}.
A straightforward approach
is to replace F′(xt)superscript𝐹′superscript𝑥𝑡F^{\prime}(x^{t}) with vtsuperscri... | 1,893 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Assumption 2.1.
Here we focus on the case where the full gradient oracle F′(⋅)superscript𝐹′⋅F^{\prime}(\cdot) is not
available;
instead, we can compute at each iteration t𝑡t an approximate gr... | 1,921 |
8 | Next, we prove a general convergence result concerning
Algorithm 1 without specifying how the approximate gradient
vtsuperscript𝑣𝑡v^{t} is generated.
The only condition we impose is that the Mean-Square Error (MSE),
𝐄[‖vt−F′(xt)‖2]𝐄delimited-[]superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2\mathbf{E}... | 154 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Assumption 2.1.
Next, we prove a general convergence result concerning
Algorithm 1 without specifying how the approximate gradient
vtsuperscript𝑣𝑡v^{t} is generated.
The only condition we impo... | 182 |
9 | Suppose Assumption 2.1 holds.
Then the sequence $\{x^{t}\}$ generated by Algorithm 1 satisfies
, 1 = $\Phi(x^{t+1})\leq\Phi(x^{t})-(\gamma_{t}/\eta-L\gamma_{t}^{2})\|\tilde{x}^{t+1}-x^{t}\|^{2}+\tfrac{1}{2L}\|F^{\prime}(x^{t})-v^{t}\|^{2},\quad\forall t\geq 0.$ ... | 351 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Lemma 2.2.
Suppose Assumption 2.1 holds.
Then the sequence $\{x^{t}\}$ generated by Algorithm 1 satisfies
, 1 = $\Phi(x^{t+1})\leq\Phi(x^{t})-(\gamma_{t}/\eta-L\gamma_{t}^{2})\|\tilde{x}^{t+1}-x^{t}\|^{2}+\tfrac{1}{2L}\|F^{\prime}(x^{t})-v^{t}\|^{2},\quad\forall t\geq 0.$ ... | 378 |
10 | According to the update rule for xt+1superscript𝑥𝑡1x^{t+1}, we have
, 1 = Φ(xt+1)Φsuperscript𝑥𝑡1\displaystyle\Phi(x^{t+1}). , 2 = =\displaystyle=. , 3 = F(xt+γt(x~t+1−xt))+Ψ(xt+γt(x~t+1−xt))𝐹superscript𝑥𝑡subscript𝛾𝑡superscript~𝑥𝑡1superscript𝑥𝑡Ψsuperscript𝑥𝑡subscript𝛾𝑡superscript~𝑥𝑡1superscript𝑥... | 1,775 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Proof.
According to the update rule for xt+1superscript𝑥𝑡1x^{t+1}, we have
, 1 = Φ(xt+1)Φsuperscript𝑥𝑡1\displaystyle\Phi(x^{t+1}). , 2 = =\displaystyle=. , 3 = F(xt+γt(x~t+1−xt))+Ψ(xt+γt... | 1,798 |
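The proof chunks above work with the update $x^{t+1}=x^{t}+\gamma_{t}(\tilde{x}^{t+1}-x^{t})$, where $\tilde{x}^{t+1}$ is a proximal point computed from the approximate gradient $v^{t}$. Below is a hedged sketch of one such step with $\Psi=\beta\|\cdot\|_{1}$; the specific normalization $\gamma_{t}=\min\{1,\eta\epsilon_{t}/\|\tilde{x}^{t+1}-x^{t}\|\}$ is an assumption chosen so that $\|x^{t+1}-x^{t}\|\leq\eta\epsilon_{t}$, consistent with the step-length bound quoted later in the excerpt, and may differ from Algorithm 1.

```python
import numpy as np

# Hedged sketch of one normalized proximal approximate gradient (NPAG) step for
#   minimize F(x) + Psi(x)   with   Psi(x) = beta * ||x||_1.
# The normalization rule for gamma_t is an assumption (see lead-in above).

def soft_threshold(u, tau):                 # prox of tau*||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def npag_step(x, v, eta, eps_t, beta):
    x_tilde = soft_threshold(x - eta * v, eta * beta)     # proximal point
    step = x_tilde - x
    norm = np.linalg.norm(step)
    gamma = 1.0 if norm <= eta * eps_t else eta * eps_t / norm
    return x + gamma * step

x = np.array([1.0, -2.0, 0.5])
v = np.array([0.8, -0.3, 0.1])              # an approximate gradient of F at x
print(npag_step(x, v, eta=0.25, eps_t=0.5, beta=0.1))
```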
11 | , 1 = ⟨vt,x~t+1−xt⟩+Ψ(x~t+1)−Ψ(xt)≤−1η∥x~t+1−xt∥2.superscript𝑣𝑡superscript~𝑥𝑡1superscript𝑥𝑡Ψsuperscript~𝑥𝑡1Ψsuperscript𝑥𝑡1𝜂superscriptdelimited-∥∥superscript~𝑥𝑡1superscript𝑥𝑡2\left\langle v^{t},\tilde{x}^{t+1}-x^{t}\right\rangle+\Psi(\tilde{x}^{t+1})-\Psi(x^{t})~{}\leq~{}-\frac{1}{\eta}\bigl{\|}\tilde... | 1,247 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Proof.
, 1 = ⟨vt,x~t+1−xt⟩+Ψ(x~t+1)−Ψ(xt)≤−1η∥x~t+1−xt∥2.superscript𝑣𝑡superscript~𝑥𝑡1superscript𝑥𝑡Ψsuperscript~𝑥𝑡1Ψsuperscript𝑥𝑡1𝜂superscriptdelimited-∥∥superscript~𝑥𝑡1superscrip... | 1,270 |
12 | Suppose Assumption 2.1 holds and we set $\eta=1/2L$
in Algorithm 1.
If it holds that
, 1 = $\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|^{2}\right]\leq\epsilon_{t}^{2},\quad\forall t\geq 0,$ ... | 538 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Theorem 2.3.
Suppose Assumption 2.1 holds and we set $\eta=1/2L$
in Algorithm 1.
If it holds that
, 1 = $\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|^{2}\right]\leq\epsilon_{t}^{2}$... | 566 |
13 | Using Lemma 2.2, the assumption (12),
and the choice η=1/2L𝜂12𝐿\eta=1/2L, we obtain
, 1 = 𝐄[Φ(xt+1)]𝐄delimited-[]Φsuperscript𝑥𝑡1\displaystyle\mathbf{E}\left[\Phi(x^{t+1})\right]. , 2 = ≤\displaystyle\leq. , 3 = 𝐄[Φ(xt)]−𝐄[(γt/η−Lγt2)∥x~t+1−xt∥2]+ϵt22L𝐄delimited-[]Φsuperscript𝑥𝑡𝐄delimited-[]subscrip... | 1,951 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Proof.
Using Lemma 2.2, the assumption (12),
and the choice η=1/2L𝜂12𝐿\eta=1/2L, we obtain
, 1 = 𝐄[Φ(xt+1)]𝐄delimited-[]Φsuperscript𝑥𝑡1\displaystyle\mathbf{E}\left[\Phi(x^{t+1})\right].... | 1,974 |
14 | , 1 = $\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|\right]\leq\sqrt{\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|^{2}\right]}\leq\epsilon_{t}.$ ... | 1,516 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Proof.
, 1 = $\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|\right]\leq\sqrt{\mathbf{E}\left[\|v^{t}-F^{\prime}(x^{t})\|^{2}\right]}\leq\epsilon_{t}.$ ... | 1,539 |
15 | , 1 = 𝐄[∥𝒢(x¯)∥]=∑k=0T−1ϵk∑t=0T−1ϵt⋅𝐄[∥𝒢(xt)∥]≤4L(Φ(x0)−Φ∗)∑t=0T−1ϵt+4∑t=0T−1ϵt2∑t=0T−1ϵt.𝐄delimited-[]delimited-∥∥𝒢¯𝑥superscriptsubscript𝑘0𝑇1⋅subscriptitalic-ϵ𝑘superscriptsubscript𝑡0𝑇1subscriptitalic-ϵ𝑡𝐄delimited-[]delimited-∥∥𝒢superscript𝑥𝑡4𝐿Φsuperscript𝑥0subscriptΦsuperscriptsubscript𝑡0𝑇... | 481 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Proof.
, 1 = 𝐄[∥𝒢(x¯)∥]=∑k=0T−1ϵk∑t=0T−1ϵt⋅𝐄[∥𝒢(xt)∥]≤4L(Φ(x0)−Φ∗)∑t=0T−1ϵt+4∑t=0T−1ϵt2∑t=0T−1ϵt.𝐄delimited-[]delimited-∥∥𝒢¯𝑥superscriptsubscript𝑘0𝑇1⋅subscriptitalic-ϵ𝑘superscr... | 504 |
16 | In the simplest case, one can set ϵt≡ϵsubscriptitalic-ϵ𝑡italic-ϵ\epsilon_{t}\equiv\epsilon for all t=0,1,…,T−1𝑡01…𝑇1t=0,1,...,T-1 and
, 1 = T≥4L(Φ(x0)−Φ∗)ϵ−2.𝑇4𝐿Φsuperscript𝑥0subscriptΦsuperscriptitalic-ϵ2T\geq 4L\left(\Phi(x^{0})-\Phi_{*}\right)\epsilon^{-2}.. , 2 =
Then Theorem 2.3 directly implies
, 1 = �... | 874 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
2 Normalized proximal approximate gradient method
Remark 2.4.
In the simplest case, one can set ϵt≡ϵsubscriptitalic-ϵ𝑡italic-ϵ\epsilon_{t}\equiv\epsilon for all t=0,1,…,T−1𝑡01…𝑇1t=0,1,...,T-1 and
, 1 = T≥4L(Φ(x0)−Φ∗)ϵ−2.𝑇4𝐿Φsuperscript... | 901 |
17 | In this section, we discuss stochastic variance-reduction techniques
for smooth nonconvex optimization.
In order to prepare for the multi-level compositional case,
we proceed with a general framework of constructing
stochastic estimators that satisfies (12) for Lipschitz-continuous vector or matrix mappings.
To simplif... | 1,976 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
In this section, we discuss stochastic variance-reduction techniques
for smooth nonconvex optimization.
In order to prepare for the multi-level compositional case,
we proceed with a general... | 1,997 |
18 | However, if the desired accuracy ϵitalic-ϵ\epsilon is smaller than the uncertainty
level σ𝜎\sigma, then condition (12) is not satisfied
and we cannot use Theorem 2.3 directly.
A common remedy is to use mini-batches; i.e.,
at each iteration of the algorithm, we randomly pick
a subset ℬtsubscriptℬ𝑡\mathcal{B}_{t} of ξ�... | 1,527 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
However, if the desired accuracy ϵitalic-ϵ\epsilon is smaller than the uncertainty
level σ𝜎\sigma, then condition (12) is not satisfied
and we cannot use Theorem 2.3 directly.
A common rem... | 1,548 |
19 | In order to construct $\tau$ consecutive estimates $\{v^{0},\ldots,v^{\tau-1}\}$,
this estimator uses the mini-batch estimator (20) for $v^{0}$,
and then constructs $v^{1}$ through $v^{\tau-1}$ using a recursion:
, 1 = $v^{0}=\phi_{\mathcal{B}_{0}}(x^{0}),\quad v^{t}$... | 452 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.1 SARAH/SPIDER estimator for stochastic optimization
In order to construct $\tau$ consecutive estimates $\{v^{0},\ldots,v^{\tau-1}\}$,
this estima... | 486 |
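The recursion quoted above is the SARAH/SPIDER estimator: a large-batch estimate at $x^{0}$ corrected by small-batch differences along the iterates. A self-contained sketch on an assumed least-squares finite sum follows; the batch sizes play the roles of $|\mathcal{B}_{0}|$ and $|\mathcal{B}_{t}|$.

```python
import numpy as np

# Hedged sketch of the SARAH/SPIDER recursion for gradient estimates:
#   v^0 = phi_{B_0}(x^0),   v^t = v^{t-1} + phi_{B_t}(x^t) - phi_{B_t}(x^{t-1}).
# The least-squares components below are an illustrative assumption.

rng = np.random.default_rng(0)
n, d = 1000, 5
A, y = rng.normal(size=(n, d)), rng.normal(size=n)

def phi_batch(x, idx):                       # averaged component gradients over idx
    Ai, yi = A[idx], y[idx]
    return Ai.T @ (Ai @ x - yi) / len(idx)

def spider_estimates(x_seq, B=200, b=20):
    v = phi_batch(x_seq[0], rng.choice(n, size=B, replace=False))
    estimates = [v]
    for t in range(1, len(x_seq)):
        idx = rng.choice(n, size=b, replace=False)
        v = v + phi_batch(x_seq[t], idx) - phi_batch(x_seq[t - 1], idx)
        estimates.append(v)
    return estimates

x0 = rng.normal(size=d)
xs = [x0 + 0.02 * t * np.ones(d) for t in range(5)]   # a short, slowly moving trajectory
for t, v in enumerate(spider_estimates(xs)):
    full = A.T @ (A @ xs[t] - y) / n
    print(t, np.linalg.norm(v - full))       # error stays small when steps are small
```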
20 | [13, Lemma 1]
Suppose the random mappings ϕξsubscriptitalic-ϕ𝜉\phi_{\xi} satisfy (18).
Then the MSE of the estimator in (22) can be bounded as
, 1 = 𝐄[∥vt−ϕ¯(xt)∥2]≤𝐄[∥v0−ϕ¯(x0)∥2]+∑r=1tL2|ℬr|𝐄[‖xr−xr−1‖2].𝐄delimited-[]superscriptdelimited-∥∥superscript𝑣𝑡¯italic-ϕsuperscript𝑥𝑡2𝐄delimited-[]superscriptde... | 1,896 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.1 SARAH/SPIDER estimator for stochastic optimization
Lemma 3.1.
[13, Lemma 1]
Suppose the random mappings ϕξsubscriptitalic-ϕ𝜉\phi_{\xi} satisfy (18).
Then the MSE of the estimator in (2... | 1,936 |
21 | , 1 = b⋆=τ⋆/2=σ/ϵ,ρ(τ⋆)=2ϵ/σ.formulae-sequencesubscript𝑏⋆subscript𝜏⋆2𝜎italic-ϵ𝜌subscript𝜏⋆2italic-ϵ𝜎b_{\star}=\tau_{\star}/2=\sigma/\epsilon,\qquad\rho(\tau_{\star})=2\,\epsilon/\sigma.. , 2 = . , 3 = (26)
Therefore, significant reduction in sample complexity can be expected
when ϵ≪σmuch-less-thanitalic-ϵ𝜎\eps... | 1,043 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.1 SARAH/SPIDER estimator for stochastic optimization
Lemma 3.1.
, 1 = b⋆=τ⋆/2=σ/ϵ,ρ(τ⋆)=2ϵ/σ.formulae-sequencesubscript𝑏⋆subscript𝜏⋆2𝜎italic-ϵ𝜌subscript𝜏⋆2italic-ϵ𝜎b_{\star}=\tau_... | 1,083 |
22 | Consider problem (5) with F(x)=𝐄ξ[fξ(x)]𝐹𝑥subscript𝐄𝜉delimited-[]subscript𝑓𝜉𝑥F(x)=\mathbf{E}_{\xi}[f_{\xi}(x)].
Suppose Assumption 2.1 holds and
the gradient mapping ϕξ≡fξ′subscriptitalic-ϕ𝜉subscriptsuperscript𝑓′𝜉\phi_{\xi}\equiv f^{\prime}_{\xi}
satisfies (18) and (19)
on domΨdomΨ\mathrm{dom\,}\Psi (ins... | 613 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.1 SARAH/SPIDER estimator for stochastic optimization
Corollary 3.2.
Consider problem (5) with F(x)=𝐄ξ[fξ(x)]𝐹𝑥subscript𝐄𝜉delimited-[]subscript𝑓𝜉𝑥F(x)=\mathbf{E}_{\xi}[f_{\xi}(x... | 655 |
23 | In Algorithm 2, the step lengths satisfy the same bound
in (9), i.e.,
‖xt+1−xt‖≤ηϵ=ϵ/2Lnormsuperscript𝑥𝑡1superscript𝑥𝑡𝜂italic-ϵitalic-ϵ2𝐿\|x^{t+1}-x^{t}\|\leq\eta\epsilon=\epsilon/2L.
From the analysis following (25), the parameters
in (28) guarantee (12).
Therefore we can apply Theorem 2.3 and Remark 2.4 to co... | 1,010 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.1 SARAH/SPIDER estimator for stochastic optimization
Proof.
In Algorithm 2, the step lengths satisfy the same bound
in (9), i.e.,
‖xt+1−xt‖≤ηϵ=ϵ/2Lnormsuperscript𝑥𝑡1superscript𝑥𝑡𝜂i... | 1,046 |
24 | Now consider a finite number of mappings
ϕi:𝐑d→𝐑p×q:subscriptitalic-ϕ𝑖→superscript𝐑𝑑superscript𝐑𝑝𝑞\phi_{i}:\mathbf{R}^{d}\to\mathbf{R}^{p\times q} for i=1,…,n𝑖1…𝑛i=1,\ldots,n, and define
ϕ¯(x)=(1/n)∑i=1nϕi(x)¯italic-ϕ𝑥1𝑛superscriptsubscript𝑖1𝑛subscriptitalic-ϕ𝑖𝑥\bar{\phi}(x)=(1/n)\sum_{i=1}^{n}\phi_{... | 1,789 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.2 SARAH/SPIDER estimator for finite-sum optimization
Now consider a finite number of mappings
ϕi:𝐑d→𝐑p×q:subscriptitalic-ϕ𝑖→superscript𝐑𝑑superscript𝐑𝑝𝑞\phi_{i}:\mathbf{R}^{d}\to\m... | 1,824 |
25 | Consider problem (5) with
F(x)=(1/n)∑i=1nfi(x)𝐹𝑥1𝑛superscriptsubscript𝑖1𝑛subscript𝑓𝑖𝑥F(x)=(1/n)\sum_{i=1}^{n}f_{i}(x).
Suppose Assumption 2.1 holds and the gradient fi′subscriptsuperscript𝑓′𝑖f^{\prime}_{i}
satisfies (18) on domΨdomΨ\mathrm{dom\,}\Psi.
If we set the parameters in Algorithm 2 as
, 1 = η=1/2... | 516 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.2 SARAH/SPIDER estimator for finite-sum optimization
Corollary 3.3.
Consider problem (5) with
F(x)=(1/n)∑i=1nfi(x)𝐹𝑥1𝑛superscriptsubscript𝑖1𝑛subscript𝑓𝑖𝑥F(x)=(1/n)\sum_{i=1}^{n... | 559 |
26 | As a general framework, NPAG may also incorporate the SVRG estimator proposed in [18]. We illustrate how the SVRG estimator can be applied to construct $\tau$ consecutive estimates $\{v^{0},\ldots,v^{\tau-1}\}$ that satisfy (21) for stochastic optimization (the expectation case).
For t... | 1,453 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.3 SVRG estimator for stochastic optimization
As a general framework, NPAG may also incorporate the SVRG estimator proposed in [18]. We illustrate how the SVRG estimator can be applied to cons... | 1,484 |
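For contrast with the SARAH/SPIDER recursion, the sketch below shows an SVRG-style estimate in which every correction is taken against the same anchor point $x^{0}$ rather than the previous iterate; the data and batch sizes are assumptions, and the paper's exact epoch structure is not reproduced.

```python
import numpy as np

# Hedged sketch of an SVRG-style estimator anchored at a reference point x^0:
#   v^0 = phi_{B_0}(x^0),   v^t = phi_{B_t}(x^t) - phi_{B_t}(x^0) + v^0.
# Problem data are illustrative assumptions.

rng = np.random.default_rng(1)
n, d = 1000, 5
A, y = rng.normal(size=(n, d)), rng.normal(size=n)

def phi_batch(x, idx):
    Ai, yi = A[idx], y[idx]
    return Ai.T @ (Ai @ x - yi) / len(idx)

x0 = rng.normal(size=d)
v0 = phi_batch(x0, rng.choice(n, size=400, replace=False))   # anchor estimate
x = x0 + 0.05 * np.ones(d)                                   # a later iterate
idx = rng.choice(n, size=20, replace=False)
v = phi_batch(x, idx) - phi_batch(x0, idx) + v0
full = A.T @ (A @ x - y) / n
print(np.linalg.norm(v - full))                              # small when x stays near x0
```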
27 | We can minimize the ratio between the above quantity and τσ2/ϵ2𝜏superscript𝜎2superscriptitalic-ϵ2\tau\sigma^{2}/\epsilon^{2} (of the naive mini-batch scheme) by choosing τ=(2σ/ϵ)2/3𝜏superscript2𝜎italic-ϵ23\tau=(\sqrt{2}\sigma/\epsilon)^{2/3} and total sample complexity per epoch becomes
O(σ2/ϵ2)𝑂superscript𝜎2s... | 558 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.3 SVRG estimator for stochastic optimization
We can minimize the ratio between the above quantity and τσ2/ϵ2𝜏superscript𝜎2superscriptitalic-ϵ2\tau\sigma^{2}/\epsilon^{2} (of the naive ... | 589 |
28 | In this section, we discuss how the SAGA estimator proposed in [8] can be incorporated into our NPAG framework. Note that the SAGA estimator is only meaningful for the finite-sum case (sampling with replacement).
The SAGA estimator can be written as
, 1 = v0superscript𝑣0\displaystyle v^{0}. , 2 = =\displaystyle=. ,... | 1,536 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.4 SAGA estimator for finite-sum optimization
In this section, we discuss how the SAGA estimator proposed in [8] can be incorporated into our NPAG framework. Note that the SAGA estimato... | 1,569 |
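The SAGA estimator described above maintains a table of the most recently evaluated component gradients. A minimal sketch on an assumed least-squares finite sum follows, with the running table mean kept in sync after each refresh.

```python
import numpy as np

# Hedged sketch of a SAGA-style estimator for the finite-sum case: a table stores
# the most recent gradient evaluated for each component i, and each estimate is
#   v^t = phi_{i_t}(x^t) - table[i_t] + (1/n) * sum_j table[j],
# after which table[i_t] is refreshed. Data and batch size 1 are assumptions.

rng = np.random.default_rng(2)
n, d = 200, 5
A, y = rng.normal(size=(n, d)), rng.normal(size=n)

def phi(i, x):                         # gradient of the i-th component
    return A[i] * (A[i] @ x - y[i])

x = rng.normal(size=d)
table = np.array([phi(i, x) for i in range(n)])   # initialize at the same point
table_mean = table.mean(axis=0)

for t in range(10):
    i = rng.integers(n)
    g_new = phi(i, x)
    v = g_new - table[i] + table_mean              # SAGA estimate of the full gradient
    table_mean = table_mean + (g_new - table[i]) / n
    table[i] = g_new
    x = x - 0.01 * v                               # a plain (un-normalized) step
print(np.linalg.norm(A.T @ (A @ x - y) / n))
```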
29 | When we restrict the step lengths by ‖xr−xr−1‖≤δnormsuperscript𝑥𝑟superscript𝑥𝑟1𝛿\|x^{r}-x^{r-1}\|\leq\delta, then we have ‖xt−xr‖≤|t−r|δnormsuperscript𝑥𝑡superscript𝑥𝑟𝑡𝑟𝛿\|x^{t}-x^{r}\|\leq|t-r|\delta for all t,r𝑡𝑟t,r.
Note that all the batches ℬrsubscriptℬ𝑟\mathcal{B}_{r} are independently and uniformly... | 1,695 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
3 A general framework of stochastic variance reduction
3.4 SAGA estimator for finite-sum optimization
When we restrict the step lengths by ‖xr−xr−1‖≤δnormsuperscript𝑥𝑟superscript𝑥𝑟1𝛿\|x^{r}-x^{r-1}\|\leq\delta, then we have ‖xt−xr‖≤|t−r|δ... | 1,728 |
30 | In this section, we present results for the general composite optimization
problems (1)
and (2) with $m\geq 2$. To provide convergence and complexity results, we only need to construct a gradient estimator
vtsuperscript𝑣𝑡v^{t} that is sufficiently close to F′(xt)superscript𝐹′superscript𝑥𝑡F^{\prime}(x^{t}).
For... | 1,768 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
In this section, we present results for the general composite optimization
problems (1)
and (2) with $m\geq 2$. To provide convergence and complexity results, we only need to construct a gradient estimator
vtsuper... | 1,787 |
31 | For the functions and mappings appearing in (1) and (2), we assume:
(a)
For each i=1,…,m𝑖1…𝑚i=1,\ldots,m and each realization of ξisubscript𝜉𝑖\xi_{i}, mapping fi,ξi:𝐑di−1→𝐑di:subscript𝑓𝑖subscript𝜉𝑖→superscript𝐑subscript𝑑𝑖1superscript𝐑subscript𝑑𝑖f_{i,\xi_{i}}\!:\mathbf{R}^{d_{i-1}}\!\to\!\mathbf{R}^{d_{... | 907 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Assumption 4.1.
For the functions and mappings appearing in (1) and (2), we assume:
(a)
For each i=1,…,m𝑖1…𝑚i=1,\ldots,m and each realization of ξisubscript𝜉𝑖\xi_{i}, mapping fi,ξi:𝐑di−1→𝐑di:subscript𝑓𝑖subsc... | 933 |
32 | Suppose Assumption 4.1.(a) holds.
Then the composite function F𝐹F and its gradient F′superscript𝐹′F^{\prime} are Lipschitz
continuous, with respective Lipschitz constants
, 1 = ℓF=∏i=1mℓi,LF=∑i=1mLi(∏r=1i−1ℓr2)(∏r=i+1mℓr),formulae-sequencesubscriptℓ𝐹superscriptsubscriptproduct𝑖1𝑚subscriptℓ𝑖subscript𝐿𝐹superscr... | 1,175 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Lemma 4.2.
Suppose Assumption 4.1.(a) holds.
Then the composite function F𝐹F and its gradient F′superscript𝐹′F^{\prime} are Lipschitz
continuous, with respective Lipschitz constants
, 1 = ℓF=∏i=1mℓi,LF=∑i=1mLi(∏r=... | 1,200 |
33 | Suppose Assumption 4.1 holds.
In Algorithm 3, if we set η=12LF𝜂12subscript𝐿𝐹\eta=\frac{1}{2L_{F}}, τk=ℓF2mϵksubscript𝜏𝑘subscriptℓ𝐹2𝑚subscriptitalic-ϵ𝑘\tau_{k}=\frac{\ell_{F}}{2m\epsilon_{k}} and
, 1 = Bik=12m(m+1)σF2ϵk2,bik=6(m+1)ℓFϵk,i=1,…,m,formulae-sequencesuperscriptsubscript𝐵𝑖𝑘12𝑚𝑚1superscript... | 752 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Lemma 4.3.
Suppose Assumption 4.1 holds.
In Algorithm 3, if we set η=12LF𝜂12subscript𝐿𝐹\eta=\frac{1}{2L_{F}}, τk=ℓF2mϵksubscript𝜏𝑘subscriptℓ𝐹2𝑚subscriptitalic-ϵ𝑘\tau_{k}=\frac{\ell_{F}}{2m\epsilon_{k}} and... | 777 |
34 | First, we bound ‖vt−F′(xt)‖2superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2\|v^{t}-F^{\prime}(x^{t})\|^{2} in terms of the approximation
errors of the individual estimators in (33)
and (34).
Denoting Fi(xt)subscript𝐹𝑖superscript𝑥𝑡F_{i}(x^{t}) by Fitsuperscriptsubscript𝐹𝑖𝑡F_{i}^{t} for simplicity, ... | 930 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
First, we bound ‖vt−F′(xt)‖2superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2\|v^{t}-F^{\prime}(x^{t})\|^{2} in terms of the approximation
errors of the individual estimators in (33)
and (34).
Deno... | 951 |
35 | , 1 = . , 2 = . , 3 = 𝐄[‖vt−F′(xt)‖2]𝐄delimited-[]superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2\displaystyle\qquad\mathbf{E}\bigl{[}\|v^{t}-F^{\prime}(x^{t})\|^{2}\bigr{]}. , 4 = . , 1 = . , 2 = ≤\displaystyle\leq. , 3 = (2m−1)(𝐄[∥[z1t]T[z2t]T⋯[zm−1t]Tzmt−[f1′(xt)]T[z2t]T⋯[zm−1t]Tzmt∥2]\displaystyle... | 1,871 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = . , 2 = . , 3 = 𝐄[‖vt−F′(xt)‖2]𝐄delimited-[]superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2\displaystyle\qquad\mathbf{E}\bigl{[}\|v^{t}-F^{\prime}(x^{t})\|^{2}\bigr{]}. , 4 = . , 1 = . ,... | 1,892 |
36 | +𝐄[∥[f1′(xt)]T⋯[fm−1′(Fm−2t)]Tfm′(ym−1t)−[f1′(xt)]T⋯[fm−1′(Fm−2t)]Tfm′(Fm−1t)∥2])\displaystyle+\mathbf{E}\!\left[\big{\|}[f_{1}^{\prime}(x^{t})]^{T}\!\!\cdots\![f^{\prime}_{m\!-\!1}(F_{m\!-\!2}^{t})]^{T}\!f^{\prime}_{m}(y_{m\!-\!1}^{t})\!-\![f_{1}^{\prime}(x^{t})]^{T}\!\!\cdots\![f^{\prime}_{m\!-\!1}(F_{m\!-\!2}^{t})]... | 1,868 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
+𝐄[∥[f1′(xt)]T⋯[fm−1′(Fm−2t)]Tfm′(ym−1t)−[f1′(xt)]T⋯[fm−1′(Fm−2t)]Tfm′(Fm−1t)∥2])\displaystyle+\mathbf{E}\!\left[\big{\|}[f_{1}^{\prime}(x^{t})]^{T}\!\!\cdots\![f^{\prime}_{m\!-\!1}(F_{m\!-\!2}^{t})]^{T}\!f^{... | 1,889 |
37 | Step 1: Bounding the temporal differences ‖yit−yit−1‖normsuperscriptsubscript𝑦𝑖𝑡superscriptsubscript𝑦𝑖𝑡1\|y_{i}^{t}-y_{i}^{t-1}\|.
For i=0𝑖0i=0, we have y0t=xtsuperscriptsubscript𝑦0𝑡superscript𝑥𝑡y_{0}^{t}=x^{t} and from (9),
‖xt−xt−1‖≤ηϵ=ϵ/2LF≤ϵ/LFnormsuperscript𝑥𝑡superscript𝑥𝑡1𝜂italic-ϵitalic-ϵ2subsc... | 1,921 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
Step 1: Bounding the temporal differences ‖yit−yit−1‖normsuperscriptsubscript𝑦𝑖𝑡superscriptsubscript𝑦𝑖𝑡1\|y_{i}^{t}-y_{i}^{t-1}\|.
For i=0𝑖0i=0, we have y0t=xtsuperscriptsubscript𝑦0𝑡superscript𝑥𝑡y_{... | 1,942 |
38 | , 1 = ‖zit‖≤‖zi0‖+t⋅(∏r=1i−1ℓr)LiϵLF≤ℓi+τ⋅Li(∏r=1i−1ℓr)ϵLF.normsuperscriptsubscript𝑧𝑖𝑡normsuperscriptsubscript𝑧𝑖0⋅𝑡superscriptsubscriptproduct𝑟1𝑖1subscriptℓ𝑟subscript𝐿𝑖italic-ϵsubscript𝐿𝐹subscriptℓ𝑖⋅𝜏subscript𝐿𝑖superscriptsubscriptproduct𝑟1𝑖1subscriptℓ𝑟italic-ϵsubscript𝐿𝐹\|z_{i}^{t}\|\leq\|z_{... | 1,684 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = ‖zit‖≤‖zi0‖+t⋅(∏r=1i−1ℓr)LiϵLF≤ℓi+τ⋅Li(∏r=1i−1ℓr)ϵLF.normsuperscriptsubscript𝑧𝑖𝑡normsuperscriptsubscript𝑧𝑖0⋅𝑡superscriptsubscriptproduct𝑟1𝑖1subscriptℓ𝑟subscript𝐿𝑖italic-ϵsubscript𝐿𝐹subsc... | 1,705 |
39 | , 1 = 𝐄[‖yit−Fi(xt)‖2]≤i⋅[∑r=1i(∏j=r+1iℓj2)δr2Sr]+i⋅(∏j=1iℓj2)[∑r=1iτϵ2srLF2].𝐄delimited-[]superscriptnormsuperscriptsubscript𝑦𝑖𝑡subscript𝐹𝑖superscript𝑥𝑡2⋅𝑖delimited-[]superscriptsubscript𝑟1𝑖superscriptsubscriptproduct𝑗𝑟1𝑖superscriptsubscriptℓ𝑗2superscriptsubscript𝛿𝑟2subscript𝑆𝑟⋅𝑖superscripts... | 665 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = 𝐄[‖yit−Fi(xt)‖2]≤i⋅[∑r=1i(∏j=r+1iℓj2)δr2Sr]+i⋅(∏j=1iℓj2)[∑r=1iτϵ2srLF2].𝐄delimited-[]superscriptnormsuperscriptsubscript𝑦𝑖𝑡subscript𝐹𝑖superscript𝑥𝑡2⋅𝑖delimited-[]superscriptsubscript𝑟1𝑖... | 686 |
40 | , 1 = . , 2 = . , 3 = 𝐄[‖yk+1t−Fk+1(xt)‖2]𝐄delimited-[]superscriptnormsuperscriptsubscript𝑦𝑘1𝑡subscript𝐹𝑘1superscript𝑥𝑡2\displaystyle\mathbf{E}\bigl{[}\|y_{k+1}^{t}-F_{k+1}(x^{t})\|^{2}\bigr{]}. , 4 = . , 1 = . , 2 = ≤\displaystyle\leq. , 3 = (1+k)𝐄[‖yk+1t−fk+1(ykt)‖2]+(1+k−1)𝐄[‖fk+1(ykt)−fk+1(Fk(x... | 1,926 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = . , 2 = . , 3 = 𝐄[‖yk+1t−Fk+1(xt)‖2]𝐄delimited-[]superscriptnormsuperscriptsubscript𝑦𝑘1𝑡subscript𝐹𝑘1superscript𝑥𝑡2\displaystyle\mathbf{E}\bigl{[}\|y_{k+1}^{t}-F_{k+1}(x^{t})\|^{2}\bigr{]}. , 4... | 1,947 |
41 | , 1 = τ=ℓF2mϵ=∏j=1mℓj2mϵ,𝜏subscriptℓ𝐹2𝑚italic-ϵsuperscriptsubscriptproduct𝑗1𝑚subscriptℓ𝑗2𝑚italic-ϵ\tau=\frac{\ell_{F}}{2m\epsilon}=\frac{\mathop{\textstyle\prod}_{j=1}^{m}\ell_{j}}{2m\epsilon},. , 2 =
we have for r=1,…,m𝑟1…𝑚r=1,\ldots,m and all t≥0𝑡0t\geq 0,
, 1 = ‖zrt‖≤ℓr+τLr∏j=1r−1ℓjϵLF=ℓr(1+Lr(∏j... | 1,906 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = τ=ℓF2mϵ=∏j=1mℓj2mϵ,𝜏subscriptℓ𝐹2𝑚italic-ϵsuperscriptsubscriptproduct𝑗1𝑚subscriptℓ𝑗2𝑚italic-ϵ\tau=\frac{\ell_{F}}{2m\epsilon}=\frac{\mathop{\textstyle\prod}_{j=1}^{m}\ell_{j}}{2m\epsilon},. , 2... | 1,927 |
42 | For 𝒯1subscript𝒯1\mathcal{T}_{1}, we use (43) and set Bi=Bsubscript𝐵𝑖𝐵B_{i}=B and bi=bsubscript𝑏𝑖𝑏b_{i}=b for all
i=1,…,m𝑖1…𝑚i=1,\ldots,m, which yields
, 1 = 𝒯1subscript𝒯1\displaystyle\mathcal{T}_{1}. , 2 = ≤\displaystyle\leq. , 3 = 6m∑i=1m∏r≠iℓr2⋅(σi2Bi+Li2(∏r=1i−1ℓr2)τϵ2biLF2)6𝑚superscriptsubscript... | 1,565 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
For 𝒯1subscript𝒯1\mathcal{T}_{1}, we use (43) and set Bi=Bsubscript𝐵𝑖𝐵B_{i}=B and bi=bsubscript𝑏𝑖𝑏b_{i}=b for all
i=1,…,m𝑖1…𝑚i=1,\ldots,m, which yields
, 1 = 𝒯1subscript𝒯1\displaystyle\mathcal{T}_{... | 1,586 |
43 | , 1 = 𝒯2subscript𝒯2\displaystyle\mathcal{T}_{2}. , 2 = ≤6m∑i=1m−1(∏r≠i+1ℓr2)Li+12⋅i⋅[∑r=1iδr2Sr(∏j=r+1iℓj2)+(∏j=1iℓj2)∑r=1iτϵ2srLF2]absent6𝑚superscriptsubscript𝑖1𝑚1⋅subscriptproduct𝑟𝑖1superscriptsubscriptℓ𝑟2superscriptsubscript𝐿𝑖12𝑖delimited-[]superscriptsubscript𝑟1𝑖superscriptsubscript𝛿𝑟2subscrip... | 1,816 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = 𝒯2subscript𝒯2\displaystyle\mathcal{T}_{2}. , 2 = ≤6m∑i=1m−1(∏r≠i+1ℓr2)Li+12⋅i⋅[∑r=1iδr2Sr(∏j=r+1iℓj2)+(∏j=1iℓj2)∑r=1iτϵ2srLF2]absent6𝑚superscriptsubscript𝑖1𝑚1⋅subscriptproduct𝑟𝑖1superscript... | 1,837 |
44 | =6m2∑r=1m−1δr2Sr∏j=1rℓr2∑i=rm−1(∏j=i+1mℓj2⋅Li+12⋅∏j=1iℓj4)+6m2∑r=1m−1τϵ2srLF2∑i=rm−1(∏j=1iℓj4⋅Li+12⋅∏j=i+1mℓj2)absent6superscript𝑚2superscriptsubscript𝑟1𝑚1superscriptsubscript𝛿𝑟2subscript𝑆𝑟superscriptsubscriptproduct𝑗1𝑟superscriptsubscriptℓ𝑟2superscriptsubscript𝑖𝑟𝑚1superscriptsubscriptproduct𝑗𝑖1... | 1,580 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
=6m2∑r=1m−1δr2Sr∏j=1rℓr2∑i=rm−1(∏j=i+1mℓj2⋅Li+12⋅∏j=1iℓj4)+6m2∑r=1m−1τϵ2srLF2∑i=rm−1(∏j=1iℓj4⋅Li+12⋅∏j=i+1mℓj2)absent6superscript𝑚2superscriptsubscript𝑟1𝑚1superscriptsubscript𝛿𝑟2subscript𝑆𝑟supe... | 1,601 |
45 | Consider problem (1) with m≥2𝑚2m\geq 2, and
suppose Assumptions 2.1 and 4.1 hold.
In Algorithm 3, if we set ϵk=mθLFk⋅ℓFsubscriptitalic-ϵ𝑘𝑚𝜃subscript𝐿𝐹⋅𝑘subscriptℓ𝐹\epsilon_{k}=\frac{m\theta L_{F}}{k\cdot\ell_{F}} with θ>0𝜃0\theta>0 and set the other parameters as
in Lemma 4.3,
then the output x¯¯𝑥\bar{x} sa... | 457 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Theorem 4.4.
Consider problem (1) with m≥2𝑚2m\geq 2, and
suppose Assumptions 2.1 and 4.1 hold.
In Algorithm 3, if we set ϵk=mθLFk⋅ℓFsubscriptitalic-ϵ𝑘𝑚𝜃subscript𝐿𝐹⋅𝑘subscriptℓ𝐹\epsilon_{k}=\frac{m\theta L_{... | 483 |
46 | From Theorem 2.3 and Lemma 4.3,
the output of Algorithm 3 satisfies
, 1 = 𝐄[‖𝒢(x¯)‖]≤4LF(Φ(x0)−Φ∗)∑t=0T−1ϵχ(t)+4∑t=0T−1ϵχ(t)2∑t=0T−1ϵχ(t).𝐄delimited-[]norm𝒢¯𝑥4subscript𝐿𝐹Φsuperscript𝑥0subscriptΦsuperscriptsubscript𝑡0𝑇1subscriptitalic-ϵ𝜒𝑡4superscriptsubscript𝑡0𝑇1subscriptsuperscriptitalic-ϵ2𝜒𝑡su... | 1,477 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
From Theorem 2.3 and Lemma 4.3,
the output of Algorithm 3 satisfies
, 1 = 𝐄[‖𝒢(x¯)‖]≤4LF(Φ(x0)−Φ∗)∑t=0T−1ϵχ(t)+4∑t=0T−1ϵχ(t)2∑t=0T−1ϵχ(t).𝐄delimited-[]norm𝒢¯𝑥4subscript𝐿𝐹Φsuperscript𝑥0subscrip... | 1,498 |
47 | , 1 = ∑k=1K∑i=1m(Bik+(τk−1)bik)+∑k=1K∑i=1m−1(Sik+(τk−1)sik)superscriptsubscript𝑘1𝐾superscriptsubscript𝑖1𝑚superscriptsubscript𝐵𝑖𝑘subscript𝜏𝑘1superscriptsubscript𝑏𝑖𝑘superscriptsubscript𝑘1𝐾superscriptsubscript𝑖1𝑚1superscriptsubscript𝑆𝑖𝑘subscript𝜏𝑘1superscriptsubscript𝑠𝑖𝑘\displaystyle\sum_{k=1}^{K... | 686 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
, 1 = ∑k=1K∑i=1m(Bik+(τk−1)bik)+∑k=1K∑i=1m−1(Sik+(τk−1)sik)superscriptsubscript𝑘1𝐾superscriptsubscript𝑖1𝑚superscriptsubscript𝐵𝑖𝑘subscript𝜏𝑘1superscriptsubscript𝑏𝑖𝑘superscriptsubscript𝑘1𝐾supersc... | 707 |
48 | Consider problem (2) with $m\geq 2$, and suppose Assumptions 2.1 and 4.1.(a) hold. In addition, let $N_{\mathrm{max}}=\max\{N_{1},\ldots,N_{m}\}$ and assume the target precision $\epsilon$ satisfies $\sqrt{N_{\mathrm{max}}}\leq\frac{\ell_{F}}{2m\epsilon}$. ... | 1,059 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Theorem 4.5.
Consider problem (2) with m≥2𝑚2m\geq 2, and suppose Assumptions 2.1 and 4.1.(a) hold. In addition, let Nmax=max{N1,…,Nm}subscript𝑁maxsubscript𝑁1…subscript𝑁𝑚N_{\mathrm{max}}=\max\{N_{1},\ldots,N_{m}... | 1,085 |
49 | Through a similar line of proof of Lemma 4.3, 𝐄[‖vt−F′(xt)‖2]≤ϵχ(t)2𝐄delimited-[]superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2superscriptsubscriptitalic-ϵ𝜒𝑡2\mathbf{E}\big{[}\|v^{t}-F^{\prime}(x^{t})\|^{2}\big{]}\leq\epsilon_{\chi(t)}^{2} still holds. Consequently (48) also holds in this case. Not... | 1,710 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Proof.
Through a similar line of proof of Lemma 4.3, 𝐄[‖vt−F′(xt)‖2]≤ϵχ(t)2𝐄delimited-[]superscriptnormsuperscript𝑣𝑡superscript𝐹′superscript𝑥𝑡2superscriptsubscriptitalic-ϵ𝜒𝑡2\mathbf{E}\big{[}\|v^{t}-F^{\p... | 1,731 |
50 | In Theorem 4.4, if we set $\epsilon_{k}\equiv\epsilon$, then the algorithm outputs $\bar{x}$ such that $\mathbf{E}[\|\mathcal{G}(\bar{x})\|]\leq\epsilon$ after $K=O\big(\frac{mL_{F}}{\epsilon\cdot\ell_{F}}\big)$... | 778 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
4 Multi-level nested SPIDER
Remark 4.6.
In Theorem 4.4, if we set $\epsilon_{k}\equiv\epsilon$, then the algorithm outputs $\bar{x}$ such that $\mathbf{E}[\|\mathcal{G}(\bar{x})\|]\leq\epsilon$ after... | 803 |
51 | In this section, we present numerical experiments to demonstrate the effectiveness of the proposed algorithms and compare them with related work.
We first apply NPAG with different variance-reduced estimators on a nonconvex sparse classification problem. Since this is a one-level finite-sum problem, we also compare it w... | 102 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
5 Numerical experiments
In this section, we present numerical experiments to demonstrate the effectiveness of the proposed algorithms and compare them with related work.
We first apply NPAG with different variance-reduced estimators on a noncov... | 119 |
52 | We consider the following \ell_1-regularized empirical risk minimization problem:
\min_{x}\ \frac{1}{N}\sum_{i=1}^{N}\ell(a_i^T x, b_i)+\beta\|x\|_1, ... | 1,743 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
5 Numerical experiments
5.1 Sparse binary classification
We consider the following \ell_1-regularized empirical risk minimization problem:
\min_{x}\ \frac{1}{N}\sum_{i=1}^{N}\ell(a_i^T x, b_i)+\beta\|x\|_1, ... | 1,767 |
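As a concrete illustration of how an \ell_1-regularized objective of this form is handled by proximal methods, here is a minimal Python sketch of one proximal-gradient step with soft-thresholding for a logistic loss; the step size eta, regularization beta, and the synthetic data are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1: shrinks each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def logistic_loss_grad(x, A, b):
    # Gradient of (1/N) * sum_i log(1 + exp(-b_i * a_i^T x)) for labels b_i in {-1, +1}.
    margins = b * (A @ x)
    coeff = -b / (1.0 + np.exp(margins))
    return A.T @ coeff / A.shape[0]

def prox_grad_step(x, A, b, beta, eta):
    # One proximal gradient step on (1/N) sum_i loss(a_i^T x, b_i) + beta * ||x||_1.
    g = logistic_loss_grad(x, A, b)
    return soft_threshold(x - eta * g, eta * beta)

# Illustrative usage on synthetic data (assumed, not from the paper's experiments).
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))
b = np.sign(rng.standard_normal(1000))
x = np.zeros(50)
for _ in range(200):
    x = prox_grad_step(x, A, b, beta=1e-3, eta=0.1)
```

The soft-thresholding step is exactly the proximal operator of \beta\|x\|_1, which is what the proximal step in each of the compared methods reduces to for this problem.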
53 | In this section, we present numerical results for a risk-averse portfolio optimization problem, which is a common benchmark example used in many stochastic composite optimization methods (e.g., [16, 41, 42, 21]). Consider the scenario where d assets can be invested during the time period... | 1,601 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
5 Numerical experiments
5.2 Sparse portfolio selection problem
In this section, we present numerical results for a risk-averse portfolio optimization problem, which is a common benchmark example used in many stochastic composite optimizati... | 1,626 |
54 | For these experiments, CIVR and Nested-SPIDER both take the batch size of \lceil N^{1/2}\rceil; CIVR-adp takes the adaptive batch size of S_k=\lceil\min\{10k+1,\,N^{1/2}\}\rceil; VRSC-PG and C-SAGA both take the batch size of \lceil N^{2/3}\rceil ... | 715 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
5 Numerical experiments
5.2 Sparse portfolio selection problem
For these experiments, CIVR and Nested-SPIDER both take the batch size of \lceil N^{1/2}\rceil; CIVR-adp takes the adaptive batch size of S_k=\lceil\min\{10k+1,\,N^{1/2}\}\rceil ... | 740 |
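A minimal sketch, assuming only the batch-size formulas quoted above, of how these schedules could be computed; the helper name batch_sizes is my own.

```python
import math

def batch_sizes(N, K):
    # N = number of samples, K = number of stages/epochs.
    sqrt_batch = math.ceil(N ** 0.5)            # CIVR and Nested-SPIDER
    adaptive = [math.ceil(min(10 * k + 1, N ** 0.5)) for k in range(1, K + 1)]  # CIVR-adp: S_k
    two_thirds_batch = math.ceil(N ** (2 / 3))  # VRSC-PG and C-SAGA
    return sqrt_batch, adaptive, two_thirds_batch
```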
55 | We have proposed a normalized proximal approximate gradient (NPAG) method
for solving multi-level composite stochastic optimization problems.
The approximate gradients at each iteration are obtained via nested
variance reduction using the SARAH/Spider estimator.
In order to find an approximate stationary point where th... | 418 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
6 Conclusion
We have proposed a normalized proximal approximate gradient (NPAG) method
for solving multi-level composite stochastic optimization problems.
The approximate gradients at each iteration are obtained via nested
variance reduction us... | 433 |
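For context on the SARAH/Spider estimator named in the conclusion, the following is a minimal single-level Python sketch of its recursive update with a periodic full-batch refresh; the refresh period q, batch size b, and the function handles are illustrative assumptions rather than the paper's nested multi-level construction.

```python
import numpy as np

def spider_gradients(grad_full, grad_batch, xs, q, b, N, seed=0):
    # xs: list of iterates x^0, x^1, ...; returns SARAH/SPIDER estimates v^0, v^1, ...
    # grad_full(x): full-batch gradient; grad_batch(x, idx): mini-batch gradient on indices idx.
    rng = np.random.default_rng(seed)
    vs = []
    for t, x in enumerate(xs):
        if t % q == 0:
            v = grad_full(x)  # periodic full-batch refresh
        else:
            idx = rng.choice(N, size=b, replace=False)
            # Recursive variance-reduced correction: reuse the previous estimate and
            # only measure the gradient difference on a small batch.
            v = vs[-1] + grad_batch(x, idx) - grad_batch(xs[t - 1], idx)
        vs.append(v)
    return vs
```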
56 | Under Assumption 4.1.(a),
it is straightforward to show that f_i=\mathbf{E}_{\xi_i}f_{i,\xi_i}
is \ell_i-Lipschitz and its gradient f'_i=\mathbf{E}_{\xi_i}f'_{i,\xi_i} ... | 1,846 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
Appendix A Proof of Lemma 4.2
Proof.
Under Assumption 4.1.(a),
it is straightforward to show that f_i=\mathbf{E}_{\xi_i}f_{i,\xi_i}
is \ell_i-Lips... | 1,871 |
57 | \|F_i(x)-F_i(y)\| = \|f_i(F_{i-1}(x))-f_i(F_{i-1}(y))\| \le \ell_i\,\|F_{i-1}(x)-F_{i-1}(y)\| ... | 826 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
Appendix A Proof of Lemma 4.2
Proof.
\|F_i(x)-F_i(y)\| = \|f_i(F_{i-1}(x))-f_i(F_{i-1}(y))\| ... | 851 |
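A short LaTeX sketch of where this chain of inequalities leads, assuming each f_i is \ell_i-Lipschitz and F_i=f_i\circ F_{i-1} as stated above; the final product bound is my completion of the truncated display, not a quotation of the paper.

```latex
\begin{align*}
\|F_i(x)-F_i(y)\|
  &= \|f_i(F_{i-1}(x)) - f_i(F_{i-1}(y))\| \\
  &\le \ell_i \,\|F_{i-1}(x) - F_{i-1}(y)\| \\
  &\le \cdots \le \Big(\textstyle\prod_{j=1}^{i} \ell_j\Big)\,\|x-y\|,
\end{align*}
```

so each F_i is Lipschitz with constant \prod_{j\le i}\ell_j.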
58 | Algorithm 1 Normalized Proximal Approximate Gradient (NPAG) Method
Algorithm 2 Prox-Spider Method
Algorithm 3 Multi-level Nested-Spider Method
Figure 1: Experiments on sparse binary classification on mnist dataset, β=1/12691.
Figure 2: Experiments on sparse binary classification on rcv1 dataset, β=... | 143 | Multi-Level Composite Stochastic Optimization via Nested Variance Reduction
References
Algorithm 1 Normalized Proximal Approximate Gradient (NPAG) Method
Algorithm 2 Prox-Spider Method
Algorithm 3 Multi-level Nested-Spider Method
Figure 1: Experiments on sparse binary classification on mnist dataset, β=1/12691 ... | 157 |
0 | This paper augments the reward received by a reinforcement learning agent with potential functions in order to help the agent learn (possibly stochastic) optimal policies.
We show that a potential-based reward shaping scheme is able to preserve optimality of stochastic policies, and demonstrate that the ability of an a... | 170 | Potential-Based Advice for Stochastic Policy Learning
Abstract
This paper augments the reward received by a reinforcement learning agent with potential functions in order to help the agent learn (possibly stochastic) optimal policies.
We show that a potential-based reward shaping scheme is able to preserve optimality o... | 181 |
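As background for the shaping scheme this abstract refers to, here is a minimal Python sketch of standard potential-based reward shaping in the sense of Ng, Harada and Russell; the potential phi and discount gamma are illustrative assumptions, and the paper's specific advice scheme may differ.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    # Standard potential-based shaping: add F(s, s') = gamma * phi(s') - phi(s)
    # to the environment reward; this leaves the set of optimal policies unchanged.
    return r + gamma * phi(s_next) - phi(s)
```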
1 | Reinforcement learning (RL) is a framework that allows an agent to complete tasks in an environment, even when a model of the environment is not known.
The agent ‘learns’ to complete a task by maximizing its expected long-term reward, where the reward signal is supplied by the environment.
RL algorithms have been succe... | 721 | Potential-Based Advice for Stochastic Policy Learning
I Introduction
Reinforcement learning (RL) is a framework that allows an agent to complete tasks in an environment, even when a model of the environment is not known.
The agent ‘learns’ to complete a task by maximizing its expected long-term reward, where the reward... | 733 |
2 | Shaping or augmenting the reward received by an RL agent in order to enable it to learn optimal policies faster is an active area of research.
Reward modification via human feedback was used in [9, 10] to interactively shape an agent’s response so that it learned a desired behavior.
However, frequent human supervision ... | 449 | Potential-Based Advice for Stochastic Policy Learning
II Related Work
Shaping or augmenting the reward received by an RL agent in order to enable it to learn optimal policies faster is an active area of research.
Reward modification via human feedback was used in [9, 10] to interactively shape an agent’s response so th... | 462 |
3 | An MDP [27] is a tuple (S,A,\mathbb{T},\rho_0,R).
S is the set of states, A the set of actions,
\mathbb{T}:S\times A\times S\rightarrow[0,1] encodes \mathbb{P}(s_{t+1}\,|\,s_t,a_t), the p... | 1,126 | Potential-Based Advice for Stochastic Policy Learning
III Preliminaries
III-A Reinforcement Learning
An MDP [27] is a tuple (S,A,\mathbb{T},\rho_0,R).
S is the set of states, A the set of actions,
\mathbb{T}:S\times A\times S\rightarrow[0,1] encodes \mathbb{P}(... | 1,146 |
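A minimal Python sketch of the MDP tuple (S, A, \mathbb{T}, \rho_0, R) just defined, using a tabular transition array for \mathbb{T}; the class name, field names, and array shapes are illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TabularMDP:
    # Tuple (S, A, T, rho0, R) with |S| states and |A| actions.
    n_states: int
    n_actions: int
    T: np.ndarray      # shape (S, A, S); T[s, a, s'] = P(s' | s, a), each row sums to 1
    rho0: np.ndarray   # shape (S,); initial state distribution
    R: np.ndarray      # shape (S, A); expected reward for taking action a in state s

    def step(self, s, a, rng):
        # Sample the next state from P(. | s, a) and return it with the reward.
        s_next = rng.choice(self.n_states, p=self.T[s, a])
        return s_next, self.R[s, a]
```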