Tuning-Free Stochastic Optimization

Appendix

7 Proofs for Section 7

7.1 Proof of Theorem 1

Proof of Theorem 1.
It follows that $\frac{\overline{r}_{\tau_i}}{\overline{r}_{\tau_i-1}} \leq 2$. Moreover, by the definition of the $\tau_i$ we have that $\frac{\overline{r}_{\tau_i-1}}{\overline{r}_{\tau_{i-1}}} \leq 2$ …
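For context, the $\tau_i$ are the doubling times of $\overline{r}_t$, so the two ratio bounds above control each epoch; a short bookkeeping sketch (assuming, as the bounds below suggest, $\overline{r}_0 \geq \underline{D}$ and $\overline{r}_T \leq 2D$):

$$\overline{r}_{\tau_i} \geq 2\,\overline{r}_{\tau_{i-1}} \quad\Longrightarrow\quad \#\{\tau_i\} \;\leq\; \log_2\frac{\overline{r}_T}{\overline{r}_0} \;\leq\; \log_2\frac{2D}{\underline{D}},$$

which is where the $\log\frac{2D}{\underline{D}}$ factors in the next displays come from.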
$$\sum_{t=0}^{T-1}\left(f(x_t) - f_*\right) \leq 4\log\frac{2D}{\underline{D}}\left[4D\sqrt{u_{T-1}} + 32D\theta_{T,\delta}\sigma\sqrt{T}\right].$$
$$\sum_{t=0}^{T-1}\left(f(x_t) - f_*\right) \leq 64\log^2\frac{2D}{\underline{D}}\,LD^2 + 320\log^2\frac{2D}{\underline{D}}\,\theta_{T,\delta}\sigma D\sqrt{T}.$$
Observe that $x_k$ is determined by $\mathcal{F}_{k-1}$, and since $\overline{r}_k = \max_{t\leq k}\left(\left\lVert x_t - x_0\right\rVert, r_\epsilon\right)$, it is also determined by $\mathcal{F}_{k-1}$ …
• If $f$ is $G$-Lipschitz: then $\sigma = \sup_{x\in\mathbb{R}^d}\left\lVert\nabla f(x) - g(x)\right\rVert \leq 2G$, and therefore equation 25 reduces to

$$\sum_{k=0}^{t-1}\overline{r}_k^2\left\langle\nabla f(x_k),\, x_k - x_*\right\rangle \leq 2\overline{r}_t\left[\overline{d}_t + \overline{r}_t\right]\sqrt{v_{t-1}} + 32\,\overline{d}_t\overline{r}_t^2\theta_{t,\delta}G\sqrt{T}.$$
$$\leq \frac{\overline{r}_t^2}{\sum_{k=0}^{t-1}\overline{r}_k^2}\,68DG\sqrt{T}\theta_{t,\delta}. \tag{26}$$

We now use Lemma 2 to conclude that there exists some $t \leq T$ …
$$\leq 2\overline{r}_t^2\sigma^2 T + 4L\sum_{k=0}^{t-1}\overline{r}_k^2\left(f(x_k) - f_*\right).$$

Using the last line estimate in equation …
$$= \overline{r}_t^2\left(16\left[\overline{d}_t + \overline{r}_t\right]^2 L + 40\,\theta_{t,\delta}\left[\overline{d}_t + \overline{r}_t\right]\sigma\sqrt{T}\right).$$
8 Proofs for Section 4

8.1 Proof of Proposition 2

See Proposition 2.
Proof.
By [Hazan and Kakade, 2019, Theorem 2] we have that the point $\overline{x}$ returned by the algorithm satisfies

$$f(\overline{x}) - f_* \leq \begin{cases} \dfrac{2GD_*}{\sqrt{T}}\log_+\dfrac{f(x_*) - \hat{f}_0}{GD_*/\sqrt{T}} & \text{if } f \text{ is } G\text{-Lipschitz},\\[6pt] \dfrac{2LD_*^2}{T}\log_+\dfrac{f(x_*) - \hat{f}_0}{LD_*^2/T} & \text{if } f \text{ is } L\text{-smooth}. \end{cases}$$
8.2 Proof of Proposition 3

See Proposition 3.
Proof.
This is shown in [Khaled et al., 2023, Supplementary material, Section 7] for DoWG. The proof for DoG is similar and we omit it for simplicity.
∎
8.3 Proof of Theorem 2

Proof.
Let $\sigma > 0$. Let $L = \sigma T$. Define the functions

$$f_1(x) \overset{\text{def}}{=} \frac{L}{2}x^2 + \sigma x, \qquad f_2(x) \overset{\text{def}}{=} \frac{L}{2}x^2 - \frac{\sigma}{T-1}\,x \; \dots$$
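As a sanity check (assuming the oracle for $f$ mirrors the one defined next for $h$, drawing $f_1$ with probability $\frac{1}{T}$ and $f_2$ with probability $1 - \frac{1}{T}$), the linear terms cancel in expectation, so the mean function is the plain quadratic:

$$\mathbb{E}\left[f_z(x)\right] = \frac{1}{T}\left(\frac{L}{2}x^2 + \sigma x\right) + \left(1 - \frac{1}{T}\right)\left(\frac{L}{2}x^2 - \frac{\sigma}{T-1}\,x\right) = \frac{L}{2}x^2.$$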
$$\mathcal{O}(h, \sigma_h)(x) \overset{\text{def}}{=} \left\{h_z(x), \nabla h_z(x)\right\} = \begin{cases} \left\{h_1(x), \nabla h_1(x)\right\} & \text{with probability } \frac{1}{T},\\[4pt] \left\{h_2(x), \nabla h_2(x)\right\} & \text{with probability } 1 - \frac{1}{T}. \end{cases}$$
$$D_*(f) = \lvert v - 0\rvert = v, \qquad D_*(h) = \lvert v - u\rvert. \tag{29}$$

And recall that

$$\sigma_f \leq \sigma, \qquad \sigma_h \leq \sigma + (T-1)Lu \; \dots$$
We shall let $\iota \overset{\text{def}}{=} \mathrm{poly}\left(\log_+\frac{\overline{L}}{\underline{L}},\, \log_+\frac{\overline{\sigma}}{\underline{\sigma}},\, \log_+\frac{\overline{D}}{\underline{D}},\, \log\frac{1}{\delta},\, \log T\right)$ …
$$\frac{L}{2}\left\lVert x_{\mathrm{out}}\right\rVert^2 = f(x_{\mathrm{out}}) - f_* \leq c\iota\left(\frac{LD_*(f)^2}{T} + \frac{\sigma D_*(f)}{\sqrt{T}}\right) = c\iota\left(\frac{Lv^2}{T} + \frac{\sigma v}{\sqrt{T}}\right) \dots$$
For large enough $T$, this expression is positive. Moreover, this situation happens with probability at least $\frac{e^{-1}}{2}$, since by the union bound

$$\mathrm{Prob}\left(\text{Algorithm incorrect for } f, h \,\cup\, \text{Oracle doesn't output all } \{h_2, \nabla h_2\}\right) \leq \dots$$
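The floor $\frac{e^{-1}}{2}$ presumably comes from the standard estimate for the event that the oracle returns $\{h_2, \nabla h_2\}$ in all $T$ queries:

$$\mathrm{Prob}\left(\text{all } T \text{ draws are } h_2\right) = \left(1 - \frac{1}{T}\right)^{T} \;\geq\; \frac{e^{-1}}{2} \qquad \text{for all } T \geq 2,$$

since $(1 - \frac{1}{T})^T$ increases monotonically to $e^{-1}$.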
8.4 Proof of Theorem 3

Proof.
We consider the following functions:

$$f(x) = G\lvert x\rvert, \qquad f_1(x) = G\lvert x\rvert + Gx, \qquad f_2(x) = \dots$$
By direct computation we have that $\mathbb{E}\left[h_z(x)\right] = h(x)$ and $\mathbb{E}\left[\nabla h_z(x)\right] = \nabla h(x)$. From the definition of the functions in equation 40 it is immediate that all the gradients are …
9 Proofs for Section 4.2

We have the following two algorithm-independent lemmas:
Lemma 5.
Suppose that $Y$ is a sub-exponential random variable (see Section 9) with mean $0$ and sub-exponential modulus $R^2$, i.e. for all $t > 0$

$$\mathrm{Prob}\left(\lvert Y\rvert \geq t\right) \leq 2\exp\left(-\frac{t}{R^2}\right) \dots$$
Proof.
By Bernstein's inequality [Vershynin, 2018, Corollary 2.8.3] we have

$$\mathrm{Prob}\left(\left\lvert\frac{1}{n}\sum_{i=1}^{n} Y_i\right\rvert \geq t\right) \leq 2\exp\left[-c\min\left(\frac{t^2}{R^4}, \frac{t}{R^2}\right)n\right], \; \dots$$
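Setting the right-hand side equal to $\delta$ and solving for $t$ gives the two-regime deviation bound this tail implies (a standard inversion, stated here for convenience): with probability at least $1 - \delta$,

$$\left\lvert\frac{1}{n}\sum_{i=1}^{n} Y_i\right\rvert \;\leq\; R^2\left[\sqrt{\frac{\log\frac{2}{\delta}}{c\,n}} + \frac{\log\frac{2}{\delta}}{c\,n}\right],$$

which, after multiplying through by $n$, is the $\sqrt{n\log\frac{1}{\delta}} + \log\frac{1}{\delta}$ shape appearing in Lemma 8 below.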
Recall the definition of sub-exponential random variables:

Definition.
We call a random variable $Y$ $R$-sub-exponential if

$$\mathrm{Prob}\left(\lvert Y\rvert \geq t\right) \leq 2\exp\left(-\frac{t}{R}\right)$$

for all $t \geq 0$. We call a random variable …
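A minimal concrete instance of the definition: a standard exponential random variable is $1$-sub-exponential, since for all $t \geq 0$

$$Y \sim \mathrm{Exp}(1): \qquad \mathrm{Prob}\left(\lvert Y\rvert \geq t\right) = e^{-t} \;\leq\; 2\exp\left(-\frac{t}{1}\right).$$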
Lemma 6.
[Vershynin, 2018, Lemma 2.7.7] A random variable $Y$ is $R$-sub-gaussian if and only if $Y^2$ is $R^2$-sub-exponential.
Lemma 7.
[Vershynin, 2018, Exercise 2.7.10] If $A$ is $E$-sub-exponential then $A - \mathbb{E}\left[A\right]$ is $c\cdot E$-sub-exponential for some absolute constant $c$.
Lemma 8.
Suppose that $X$ is a random variable that satisfies the assumptions in Section 4.2 and $X_1, \ldots, X_n$ are i.i.d. copies of $X$. Then with probability $1 - \delta$ we have that

$$\left\lvert\sum_{i=1}^{n}\left(\left\lVert X_i\right\rVert^2 - \sigma^2\right)\right\rvert \leq c\cdot\sigma^2\cdot K_{\mathrm{snr}}^{-2}\left[\sqrt{n\log\frac{1}{\delta}} + \log\frac{1}{\delta}\right].$$
Proof.
By assumption we have that $\left\lVert X_i\right\rVert$ is $R$-sub-gaussian; therefore by Lemma 6 we have that $\left\lVert X_i\right\rVert^2$ is $R^2$-sub-exponential. By Lemma 7 we then have that $\left\lVert X_i\right\rVert^2 - \sigma^2$ …
9.1 Proof of Theorem 4

The main idea in the proof is the following lemma, which characterizes the convergence of the sample variance estimator of $b$ i.i.d. random variables in terms of the number of samples $b$ as well as the signal-to-noise ratio $K_{\mathrm{snr}}^{-1}$.
Lemma 9.
Let $Y$ be a random vector in $\mathbb{R}^d$ such that $Z = Y - \mathbb{E}\left[Y\right]$ satisfies the assumptions in Section 4.2. Let $Y_1, Y_2, \ldots, Y_b$ be i.i.d. copies of $Y$. Define the sample mean and variance as

$$\hat{Y} = \frac{1}{b}\sum_{i=1}^{b} Y_i, \qquad \hat{\sigma}^2 = \frac{1}{b}\sum_{i=1}^{b}\left\lVert Y_i - \hat{Y}\right\rVert^2 \; \dots$$
Proof.
We shall use the shorthand $\mu = \mathbb{E}\left[Y\right]$. We have

$$\hat{\sigma}^2 = \frac{1}{b}\sum_{i=1}^{b}\left\lVert Y_i - \hat{Y}\right\rVert^2 \dots$$
$$\left\lVert\frac{1}{b}\sum_{i=1}^{b}\left[\mu - Y_i\right]\right\rVert^2 \leq \frac{c^2 R^2\log\frac{2d}{\delta}}{b}. \tag{46}$$

For the third term …
$$\mathrm{Prob}\left(\left\lvert Z_{i,j}\right\rvert \geq t\right) \leq \mathrm{Prob}\left(\left\lVert Y_i - \mu\right\rVert\left\lVert Y_j - \mu\right\rVert \geq t\right) \leq 2\exp\left(-\frac{t}{R^2}\right).$$
$$\left\lvert\frac{1}{b}\sum_{i=1}^{b}\left\langle Y_i - \mu,\, \hat{Y} - \mu\right\rangle\right\rvert \leq c' R^2\left[\sqrt{\frac{\log\frac{2b}{\delta}}{b}} + \frac{\log\frac{2b}{\delta}}{b}\right].$$
Proof of Theorem 4.
First, observe that at every timestep $t$, conditioned on $\mathcal{F}_t = \sigma\left(g_{1:t-1}, x_{1:t}\right)$, we have by Lemma 9 that with probability $1 - \frac{\delta}{T}$ the sample variance $\hat{\sigma}^2_t$ …
Lemma 10.
(Modification of [Ivgi et al., 2023, Lemma 8]) Under the conditions of Theorem 4, both the DoG (53) and DoWG (54) updates satisfy for all $t \leq T$

$$\rho_t \in \sigma\left(g_0, \mu_0^1, \ldots, \mu_0^b, \ldots, g_{t-1}, \mu_0^{t-1}, \ldots, \mu_b^{t-1}\right), \; \dots$$
Proof.
The modification of this lemma to account for bounded noise $g(x_k) - \nabla f(x_k)$ rather than bounded gradients is straightforward, and we omit it for simplicity.
∎
Lemma 11.
(Modification of [Ivgi et al., 2023, Lemma 9]) Under the conditions of Theorem 4, both the DoG (53) and DoWG (54) updates satisfy for all $t \leq T$, with probability at least $1 - \delta$,

$$\sum_{k=0}^{t-1}\tilde{\eta}_k\left\langle g_k - \nabla f(x_k),\, x_* - x_k\right\rangle \leq d_0^2.$$
Proof.
The modification is straightforward and omitted.
∎
Lemma 12.
(Modification of [Ivgi et al., 2023, Lemma 10]) Under the conditions of Theorem 4, if $\sum_{k=0}^{t-1}\rho_k\left\langle g_k - \nabla f(x_k),\, x_* - x_k\right\rangle \leq d_0^2$ …
Proof.
The modification is straightforward and omitted.
∎

By Lemmas 11 and 12 we get that $\overline{r}_T \leq 3d_0$, and it follows that $\overline{d}_t = \max_{k\leq t} d_k \leq \max_{k\leq t} r_k + r_0 \leq 4d_0$ …
10 Proofs for Section 5

10.1 Proof of Theorem 5

Proof.
We use the exact same construction from Theorem 2 with the following hints:

$$\underline{L} = L, \quad \overline{L} = L, \qquad \underline{\Delta} = \frac{L}{2}\min\left(v, \lvert u - v\rvert\right), \quad \overline{\Delta} = \frac{L}{2}\max\left(v, \lvert u - v\rvert\right) \dots$$
$$x_{\mathrm{out}} \leq \sqrt{c\iota}\sqrt{\frac{v}{T}} + \sqrt{c\iota}\,\frac{v}{\sqrt{T}}. \tag{57}$$

On the other hand, applying equation 56 to the function $h = \frac{L}{2}(x - u)^2$ (as in the proof of Theorem 2) we obtain …
10.2 Proof of Theorem 6

Theorem 7.
([Liu et al., 2023], High-probability convergence of SGD in the nonconvex setting). Let $f$ be $L$-smooth and possibly nonconvex. Suppose that the stochastic gradient noise is $R^2$-sub-gaussian. Then for any fixed stepsize $\eta$ such that $\eta L \leq 1$ we have

$$\frac{1}{T}\sum_{t=0}^{T-1}\left\lVert\nabla f(x_t)\right\rVert^2 \leq \frac{2(f(x_0) - f_*)}{\eta T} + 5\eta R^2 + \frac{12 R^2\log\frac{1}{\delta}}{T} \; \dots$$
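A minimal simulation of the setting in Theorem 7 (a sketch, not the paper's code; quadratic $f$, Gaussian gradient noise as a stand-in for the sub-gaussian assumption, fixed stepsize with $\eta L \leq 1$):

```python
import numpy as np

def sgd_fixed_stepsize(grad, x0, eta, T, noise_std, rng):
    """SGD with a fixed stepsize; returns (1/T) * sum_t ||grad f(x_t)||^2."""
    x, sq_norms = x0.copy(), []
    for _ in range(T):
        g_true = grad(x)
        sq_norms.append(float(np.dot(g_true, g_true)))
        x = x - eta * (g_true + noise_std * rng.standard_normal(x.shape))
    return np.mean(sq_norms)

# f(x) = (L/2) x^2 is L-smooth; eta = 1/L satisfies eta * L <= 1.
L, R, T = 4.0, 0.3, 10_000
rng = np.random.default_rng(1)
avg = sgd_fixed_stepsize(lambda x: L * x, np.ones(1), 1.0 / L, T, R, rng)
print(avg)  # small: compare with 2*Delta_1/(eta*T) + 5*eta*R^2 + 12*R^2*log(1/delta)/T
```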
Proof.
This is a straightforward generalization of [Liu et al., 2023, Theorem 4.1], and we include it for completeness. By [Liu et al., 2023, Corollary 4.4] we have that if $\eta_t L \leq 1$ and $0 \leq w_t\eta_t^2 L \leq \frac{1}{2R^2}$ …
$$\frac{1}{T}\sum_{t=1}^{T}\left\lVert\nabla f(x_t)\right\rVert^2 \leq \frac{2\Delta_1}{\eta T} + 5\eta R^2 + \frac{12 R^2\log\frac{1}{\delta}}{T}.$$
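For later reference, balancing the first two terms of this bound by choosing $\eta = \min\left(\frac{1}{L}, \sqrt{\frac{2\Delta_1}{5R^2T}}\right)$ gives (a standard computation, consistent with the $\sqrt{L(f(x_0) - f_*)R^2/T}$ terms appearing later in the restart analysis):

$$\frac{2\Delta_1}{\eta T} + 5\eta R^2 \;\leq\; \frac{2\Delta_1 L}{T} + 2\sqrt{\frac{10\,\Delta_1 R^2}{T}}.$$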
10.3 Restarting SGD

We will use the following lemma from [Madden et al., 2020]:
Lemma 13.
[Madden et al., 2020, Lemma 33] Let $Z$ take values $k \in \{1, 2, \ldots, K\}$ with probability $p_k$, where $\sum_{k=1}^{K} p_k = 1$. Let $Z_1, \ldots, Z_m$ be independent copies of $Z$. Let $Y = (Y_1, \ldots, Y_m)$ …
Theorem 8.
(Convergence of FindLeader) If we run Algorithm 4 on a set $V$ of $P$ points $v_1, v_2, \ldots, v_P$, with sampling budget $M$ and per-point estimation budget $K$, then the output of the algorithm satisfies, for some absolute constant $c > 0$ and all $\gamma > 0$, …
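Algorithm 4 itself is not reproduced in this excerpt; the following is only a minimal sketch of a FindLeader-style selection rule under the stated budgets (hypothetical names): each candidate's gradient is estimated by averaging $K$ stochastic gradients, and the point with the smallest estimated gradient norm is returned. The sampling budget $M$ (which drives the $\log\frac{2dM}{\delta}$ term below) governs an initial subsampling step that the sketch omits.

```python
import numpy as np

def find_leader(points, stoch_grad, K, rng):
    """Return (leader, ||g_hat||^2): the candidate with the smallest estimated
    gradient norm, each estimate averaging K stochastic gradients."""
    best, best_norm2 = None, np.inf
    for v in points:
        g_hat = np.mean([stoch_grad(v, rng) for _ in range(K)], axis=0)
        norm2 = float(np.dot(g_hat, g_hat))
        if norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2
```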
Proof.
The proof of this theorem loosely follows the proofs of [Ghadimi and Lan, 2013, Theorem 2.4] and [Madden et al., 2020, Theorem 13]. First, define the following two sets of true gradients for the iterates in $V$ and $P$ respectively:

$$U_V = \left\{\nabla f(v_1), \nabla f(v_2), \ldots, \nabla f(v_P)\right\} \dots$$
$$\left\lVert\nabla f(s_{m^*})\right\rVert^2 \leq 2\left\lVert\nabla f(s_{m^*}) - \hat{g}_{s_{m^*}}\right\rVert^2 + 2\left\lVert\hat{g}_{s_{m^*}}\right\rVert^2 \dots$$
Proof of Theorem 6.
First, observe that Theorem 7 gives that SGD run for $T$ steps with a fixed stepsize $\eta$ such that $\eta L \leq 1$ satisfies

$$\frac{1}{T}\sum_{t=0}^{T-1}\left\lVert\nabla f(x_t)\right\rVert^2 \leq \frac{2(f(x_0) - f_*)}{\eta T} + 5\eta R^2 + \frac{12 R^2\log\frac{1}{\delta}}{T}.$$
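A minimal sketch of the restarting scheme this proof analyzes (assumed structure, with illustrative names: one length-$T$ SGD segment per candidate stepsize $\eta_\tau \leq 1/L$, each segment's final iterate kept as a candidate $y_n$, and the leader chosen by smallest estimated gradient norm, as FindLeader does above):

```python
import numpy as np

def restarted_sgd(x0, stoch_grad, T, eta_grid, K, rng):
    """Run one length-T SGD segment per stepsize, keep each final iterate,
    then return the candidate with the smallest estimated gradient norm."""
    candidates = []
    for eta in eta_grid:              # e.g. a geometric grid with eta <= 1/L
        x = x0.copy()
        for _ in range(T):
            x = x - eta * stoch_grad(x, rng)
        candidates.append(x)          # y_n (a random iterate would also fit Lemma 13)
    # Leader selection: K fresh stochastic gradients per candidate.
    norms2 = [float(np.sum(np.mean([stoch_grad(y, rng) for _ in range(K)], axis=0) ** 2))
              for y in candidates]
    return candidates[int(np.argmin(norms2))]
```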
$$\frac{1}{T}\sum_{t=0}^{T-1}\left\lVert\nabla f(x_t^\tau)\right\rVert^2 \leq \frac{2(f(x_0) - f_*)}{\eta_\tau T} + 5\eta_\tau R^2 + \frac{12 R^2\log\frac{1}{\delta}}{T} \; \dots$$
$$\left\lVert\hat{g}_r\right\rVert^2 = \min_{n\in[N]}\left\lVert\hat{g}_n\right\rVert^2 \leq \min_{n\in[N]}\left[\left\lVert\hat{g}_n - \nabla f(y_n) + \nabla f(y_n)\right\rVert^2\right] \dots$$
$$\left\lVert\nabla f(y_r)\right\rVert^2 \leq \frac{6c\cdot R^2\log\frac{2dN}{\delta}}{T} + 4\left[13e\left[\sqrt{\frac{L(f(x_0) - f_*)R^2}{T}} + \frac{(f(x_0) - f_*)L}{T}\right] + \frac{12 R^2\log\frac{1}{\delta}}{T} + \frac{c\cdot R^2\log\frac{2dM}{\delta}}{T}\right].$$