See [Cav11] for an interesting survey on the statistical theory of inverse problems.

Source: https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf

Sparsity adaptive thresholding estimators

If we knew a priori that $\theta^*$ was $k$-sparse, we could employ Corollary 2.8 directly to obtain that, with probability $1-\delta$, we have
$$\mathrm{MSE}(X\hat\theta^{\mathrm{ls}}_{B_0(k)}) \le C_\delta\,\frac{\sigma^2 k}{n}\log\Big(\frac{ed}{2k}\Big)\,.$$
As we […]
The consequences of this inequality are interesting. On the one hand, if we observe $|y_j| > 2\tau$, then it must correspond to $\theta^*_j \ne 0$. On the other hand, if $|y_j|$ is smaller, then $\theta^*_j$ cannot be very large. In particular, by the triangle inequality, $|\theta^*_j| \le |y_j| + |\xi_j| \le 2\tau$. Therefore, we lose at most $2\tau$ by choosing $\hat\theta_j = 0$. It leads u[s …]
[…] with probability at least $1-\delta$:

(i) If $|\theta^*|_0 = k$, then
$$\mathrm{MSE}(X\hat\theta^{\mathrm{hrd}}) = |\hat\theta^{\mathrm{hrd}} - \theta^*|_2^2 \lesssim \sigma^2\,\frac{k\log(2d/\delta)}{n}\,.$$

(ii) If $\min_{j\in\mathrm{supp}(\theta^*)} |\theta^*_j| > 3\tau$, then
$$\mathrm{supp}(\hat\theta^{\mathrm{hrd}}) = \mathrm{supp}(\theta^*)\,.$$
Proof. Define the event
$$\mathcal{A} = \Big\{\max_j |\xi_j| \le \tau\Big\}\,,$$
and recall that Theorem 1.14 yields $\mathrm{IP}(\mathcal{A}) \ge 1-\delta$. On $\mathcal{A}$, the following holds for any $j = 1, \dots, d$: […] It yields
$$|\hat\theta^{\mathrm{hrd}} - \theta^*|_2^2 = \sum_{j=1}^d |\hat\theta^{\mathrm{hrd}}_j - \theta^*_j|^2 \le 16\sum_{j=1}^d \min(|\theta^*_j|^2, \tau^2) \le 16\,|\theta^*|_0\,\tau^2\,.$$
This completes the proof of (i).

To prove (ii), note that if $\theta^*_j \ne 0$, then $|\theta^*_j| > 3\tau$, so that
$$|y_j| = |\theta^*_j + \xi_j| \ge |\theta^*_j| - |\xi_j| > 3\tau - \tau = 2\tau\,.$$
It yields $\hat\theta^{\mathrm{hrd}}_j \ne 0$, so that $\mathrm{supp}(\theta^*) \subset \mathrm{supp}(\hat\theta^{\mathrm{hrd}})$. Next, […]
[…] In short, we can write
$$\hat\theta^{\mathrm{sft}}_j = \Big(1 - \frac{2\tau}{|y_j|}\Big)_+ y_j\,.$$
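The two thresholding rules above are one-liners in practice. The sketch below (an illustration, not code from the notes) implements them with numpy; the argument `t` plays the role of the threshold $2\tau$:

```python
import numpy as np

def hard_threshold(y, t):
    """Hard thresholding: keep y_j when |y_j| > t, set it to 0 otherwise."""
    return np.where(np.abs(y) > t, y, 0.0)

def soft_threshold(y, t):
    """Soft thresholding: shrink each coordinate toward 0, (1 - t/|y_j|)_+ y_j."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

y = np.array([3.0, -0.5, 1.2, -4.0])
print(hard_threshold(y, 1.0))  # zeroes only the small coordinate
print(soft_threshold(y, 1.0))  # also shrinks the surviving ones by 1.0
```

Note that hard thresholding keeps surviving coordinates unbiased, while soft thresholding shrinks them; this is the bias/variance trade-off discussed around the two estimators.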
2.4 HIGH-DIMENSIONAL LINEAR REGRESSION

The BIC and Lasso estimators

It can be shown (see Problem 2.5) that the hard and soft thresholding estimators are solutions of the following penalized empirical risk minimization problems:
$$\hat\theta^{\mathrm{hrd}} = \mathop{\mathrm{argmin}}_{\theta \in \mathrm{IR}^d}\Big\{ |y - \theta|_2^2 + 4\tau^2|\theta|_0 \Big\}\,, \quad […]$$

Moreover, the Lasso estimator of $\theta^*$ is defined by any $\hat\theta^L$ such that
$$\hat\theta^L \in \mathop{\mathrm{argmin}}_{\theta \in \mathrm{IR}^d}\Big\{ \frac1n|Y - X\theta|_2^2 + 2\tau|\theta|_1 \Big\}\,.$$

Remark 2.13 (Numerical considerations). Computing the BIC estimator can be proved to be NP-hard in the worst case. In particular, no computational method is known to be significantly fast[er …]
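Unlike the BIC objective, the Lasso objective is convex, and proximal gradient descent (ISTA) solves it with nothing more than the soft-thresholding map as its proximal step. The sketch below is a minimal illustration under the notes' scaling of the objective, $(1/n)|Y - X\theta|_2^2 + 2\lambda|\theta|_1$; all names are mine, not the notes':

```python
import numpy as np

def ista_lasso(X, Y, lam, n_iter=500):
    """Minimize (1/n)|Y - X theta|_2^2 + 2*lam*|theta|_1 by proximal gradient;
    the prox of the l1 penalty is exactly soft thresholding."""
    n, d = X.shape
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = 2.0 / n * X.T @ (X @ theta - Y)       # gradient of the quadratic part
        z = theta - grad / L                         # gradient step
        theta = np.sign(z) * np.maximum(np.abs(z) - 2.0 * lam / L, 0.0)  # prox step
    return theta

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:3] = [2.0, -1.5, 1.0]                    # 3-sparse ground truth
Y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = ista_lasso(X, Y, lam=0.05)
```

On this toy problem the three truly nonzero coordinates dominate the estimate, which is the sparsity-recovery behavior the analysis below quantifies.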
[…] to multiplicative constants (it is the price to pay to get non-asymptotic results).

2. An interesting method called LARS [EHJT04] computes the entire regularization path, i.e., the solution of the convex problem for all values of $\tau$. It relies on the fact that, as a functi[on of $\tau$, …]
[…] (2.14) satisfies
$$\mathrm{MSE}(X\hat\theta^{\mathrm{bic}}) = \frac1n|X\hat\theta^{\mathrm{bic}} - X\theta^*|_2^2 \lesssim |\theta^*|_0\,\sigma^2\,\frac{\log(ed/\delta)}{n}$$
with probability at least $1-\delta$.

Proof. We begin as usual by noting that
$$\frac1n|Y - X\hat\theta^{\mathrm{bic}}|_2^2 + \tau^2|\hat\theta^{\mathrm{bic}}|_0 \le \frac1n|Y - X\theta^*|_2^2 + \tau^2|\theta^*|_0\,.$$
It implies
$$|X\hat\theta^{\mathrm{bic}} - X\theta^*|_2^2 \le n\tau^2|\theta^*|_0 + 2\varepsilon^\top X(\hat\theta^{\mathrm{bic}} - \theta^*) - n\tau^2|\hat\theta^{\mathrm{bic}}|_0\,. […]$$
$$\le 2n\tau^2|\theta^*|_0 + 4\big[\varepsilon^\top U(\hat\theta^{\mathrm{bic}} - \theta^*)\big]^2 - 2n\tau^2|\hat\theta^{\mathrm{bic}}|_0\,, \qquad (2.15)$$
where
$$U(\hat\theta^{\mathrm{bic}} - \theta^*) = \frac{X\hat\theta^{\mathrm{bic}} - X\theta^*}{|X\hat\theta^{\mathrm{bic}} - X\theta^*|_2}\,.$$
Next, we need to "sup out" $\hat\theta^{\mathrm{bic}}$. To that end, we decompose the sup into a max over cardinalities as follows:
$$\sup_\theta = \max_{1 \le k \le d}\,\max_{|S| = k}\,\sup_{\mathrm{supp}(\theta) = S}\,.$$
Applied to the above inequality, it yields […]
$$\mathrm{IP}\Big( \max_{1\le k\le d}\max_{|S|=k}\Big\{ \sup_{u\in\mathcal{B}_2^{r_{S,*}}} \big[4\,\varepsilon^\top\Phi_{S,*}u\big]^2 - 2n\tau^2 k \Big\} \ge t \Big) \le \sum_{k=1}^d \sum_{|S|=k} \mathrm{IP}\Big( \sup_{u\in\mathcal{B}_2^{r_{S,*}}} \varepsilon^\top\Phi_{S,*}u \ge \frac12\Big(\frac{t}{4} + \frac{n\tau^2 k}{2}\Big)^{1/2} \Big)\,.$$
Moreover, using the $\varepsilon$-net argument from Theorem 1.19, we get for $|S| = k$,
$$\mathrm{IP}\Big( \sup_{u\in\mathcal{B}_2^{r_{S,*}}} \varepsilon^\top\Phi_{S,*}u \ge \frac12\Big(\frac{t}{4} + \frac{n\tau^2 k}{2}\Big)^{1/2} \Big) \le \exp\Big( -\frac{t}{32\sigma^2} - 2k\log(ed) + |\theta^*|_0\log(12) \Big)\,.$$
It yields
$$\sum_{k=1}^d \sum_{|S|=k} \exp\Big( -\frac{t}{32\sigma^2} - 2k\log(ed) + |\theta^*|_0\log(12) \Big) = \sum_{k=1}^d \binom{d}{k}\exp\Big( -\frac{t}{32\sigma^2} - 2k\log(ed) + |\theta^*|_0\log(12) \Big)$$
$$\le \sum_{k=1}^d \exp\Big( -\frac{t}{32\sigma^2} - k\log(ed) + |\theta^*|_0\log(12) \Big) \qquad \text{(by Lemma 2.7)}$$
$$\le \exp\Big( -\frac{t}{32\sigma^2} + |\theta^*|_0\,[…] \Big)\,.$$
Analysis of the Lasso estimator

Slow rate for the Lasso estimator

The properties of the BIC estimator are quite impressive. It shows that under no assumption on $X$, one can mimic two oracles: (i) the oracle that knows the support of $\theta^*$ (and computes least squares on this support), up to a $\log(ed)$ term, and (ii) the oracl[e …]
[…] $\sqrt{\log(2d)/n}$ […]

Proof. From the definition of $\hat\theta^L$, it holds
$$\frac1n|Y - X\hat\theta^L|_2^2 + 2\tau|\hat\theta^L|_1 \le \frac1n|Y - X\theta^*|_2^2 + 2\tau|\theta^*|_1\,.$$
Using Hölder's inequality, it implies
$$|X\hat\theta^L - X\theta^*|_2^2 \le 2\varepsilon^\top X(\hat\theta^L - \theta^*) + 2n\tau|\theta^*|_1 - 2n\tau|\hat\theta^L|_1$$
$$\le 2|X^\top\varepsilon|_\infty\big(|\hat\theta^L|_1 + |\theta^*|_1\big) + 2n\tau|\theta^*|_1 - 2n\tau|\hat\theta^L|_1$$
$$= 2\big(|X^\top\varepsilon|_\infty + n\tau\big)|\theta^*|_1 + 2\big(|X^\top\varepsilon|_\infty - n\tau\big)|\hat\theta^L|_1\,. […]$$
[…] Notice that the regularization parameter (2.16) depends on the confidence level $\delta$. This is not the case for the BIC estimator (see (2.14)).

The rate in Theorem 2.15 is of order $\sqrt{(\log d)/n}$ (slow rate), which is much slower than the rate of order $(\log d)/n$ (fast rate) for the BIC estimator. Hereafter, we show that f[…]
[…] satisfy a certain property, then there must exist objects that satisfy said property. In our case, we consider the following probability distribution on random matrices with entries in $\{\pm 1\}$. Let the design matrix $X$ have entries that are i.i.d. Rademacher ($\pm 1$) random variables. We are going to show that most realizations o[f …]
$$\mathrm{IP}\Big( \Big|\frac{X^\top X}{n} - I_d\Big|_\infty > t \Big) = \mathrm{IP}\Big( \max_{j \ne k} \Big|\frac1n\sum_{i=1}^n \xi_i^{(j,k)}\Big| > t \Big)$$
$$\le \sum_{j \ne k} \mathrm{IP}\Big( \Big|\frac1n\sum_{i=1}^n \xi_i^{(j,k)}\Big| > t \Big) \qquad \text{(union bound)}$$
$$\le \sum_{j \ne k} 2e^{-nt^2/2} \qquad \text{(Hoeffding: Theorem 1.9)} […]$$
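The union-bound computation above predicts that for a Rademacher design the worst off-diagonal entry of $X^\top X/n$ is of order $\sqrt{(\log d)/n}$. A quick simulation (purely illustrative, not part of the proof) checks this incoherence numerically:

```python
import numpy as np

# Empirical check: for an n x d Rademacher design, how large is the worst
# entry of X^T X / n - I_d? Hoeffding + union bound predict order sqrt(log(d)/n).
rng = np.random.default_rng(1)
n, d = 1000, 50
X = rng.choice([-1.0, 1.0], size=(n, d))
G = X.T @ X / n
off = np.abs(G - np.eye(d)).max()     # |X^T X / n - I_d|_inf
print(off, np.sqrt(2 * np.log(d * d) / n))  # observed vs. predicted scale
```

The diagonal entries are exactly 1 for $\pm 1$ entries, so the maximum is attained off the diagonal, as in the display above.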
[…] $|\theta_{S^c}|_1 \le 3|\theta_S|_1$, it holds
$$|\theta_S|_2^2 \le 2\,\frac{|X\theta|_2^2}{n}\,. \qquad (2.17)$$
Proof. We have
$$\frac{|X\theta|_2^2}{n} = \frac1n|X\theta_S + X\theta_{S^c}|_2^2 \ge \frac{|X\theta_S|_2^2}{n} + 2\,\theta_S^\top\frac{X^\top X}{n}\theta_{S^c}\,.$$
It follows now from the incoherence condition that
$$\frac{|X\theta_S|_2^2}{n} = \theta_S^\top\frac{X^\top X}{n}\theta_S = |\theta_S|_2^2 + \theta_S^\top\Big(\frac{X^\top X}{n} - I_d\Big)\theta_S \ge |\theta_S|_2^2 - […]$$
[…] Assume that the linear model (2.2) holds, where $\varepsilon \sim \mathrm{subG}_n(\sigma^2)$, and that $\theta^*$ […]. Then the Lasso estimator $\hat\theta^L$ with regularization parameter
$$2\tau = 8\sigma\sqrt{\frac{\log(2d)}{n}} + 8\sigma\sqrt{\frac{\log(1/\delta)}{n}}$$
satisfies
$$\mathrm{MSE}(X\hat\theta^L) = \frac1n|X\hat\theta^L - X\theta^*|_2^2 \lesssim k\sigma^2\,\frac{\log(2d/\delta)}{n}$$
and
$$|\hat\theta^L - \theta^*|_1 \lesssim k\sigma\sqrt{\frac{\log(2d/\delta)}{n}}$$
with probability at least $1-\delta$. Moreover,
$$\mathrm{IE}\big[\mathrm{MSE}(X\hat\theta^L)\big] \lesssim k\sigma^2\,\frac{\log(2d)}{n}\,, \quad \text{and} \quad \mathrm{IE}\big[…\big] \lesssim k\sigma\sqrt{\frac{\log(2d)}{n}}\,.$$
Proof. From the defi[nition …]
[…] 2.15, we get that with probability $1-\delta$,
$$\varepsilon^\top X(\hat\theta^L - \theta^*) \le |X^\top\varepsilon|_\infty\,|\hat\theta^L - \theta^*|_1 \le \frac{n\tau}{2}|\hat\theta^L - \theta^*|_1\,,$$
where we used the fact that $|X_j|_2^2 \le n\big(1 + \tfrac{1}{14k}\big) \le 2n$. Therefore, taking $S = \mathrm{supp}(\theta^*)$ to be the support of $\theta^*$, we get
$$|X\hat\theta^L - X\theta^*|_2^2 + n\tau|\hat\theta^L - \theta^*|_1 \le 2n\tau|\hat\theta^L_S - \theta^*_S|_1 + 2n\tau[…]$$
[…] find
$$|X\hat\theta^L - X\theta^*|_2^2 \le 32\,n k\tau^2\,.$$
Moreover, it yields
$$|\hat\theta^L - \theta^*|_1 \le 4\sqrt{\frac{2k}{n}}\,|X\hat\theta^L - X\theta^*|_2 \le 4\sqrt{\frac{2k}{n}}\,\sqrt{32nk\tau^2} \le 32\,k\tau\,.$$
The bound in expectation follows using the same argument as in the proof of Corollary 2.9.

Note that all we required for the proof was not really incoherence but the conclusion of Lemma 2.17: […]
[…] defined for a given parameter $\tau > 0$ by
$$\hat\theta^{\mathrm{ridge}}_\tau = \mathop{\mathrm{argmin}}_{\theta \in \mathrm{IR}^d}\Big\{ \frac1n|Y - X\theta|_2^2 + \tau|\theta|_2^2 \Big\}\,.$$

(a) Show that for any $\tau$, $\hat\theta^{\mathrm{ridge}}_\tau$ is uniquely defined and give its closed form expression.

(b) Compute the bias of $\hat\theta^{\mathrm{ridge}}_\tau$ and show that it is bounded in absolute value by $|\theta^*|_2$.

Problem 2.2. Let $X = (1, Z, \dots[…]$
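For Problem 2.1(a), setting the gradient of the objective above to zero gives the standard closed form $(X^\top X + n\tau I_d)\,\theta = X^\top Y$ under this normalization (a sketch of the solution, not the notes' own):

```python
import numpy as np

# Closed form for ridge under the objective (1/n)|Y - X th|^2 + tau*|th|^2:
# the gradient vanishes at (X^T X + n*tau*I) th = X^T Y, which is always
# invertible for tau > 0, so the minimizer is unique.
def ridge(X, Y, tau):
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * tau * np.eye(d), X.T @ Y)

rng = np.random.default_rng(2)
n, d = 50, 5
X = rng.standard_normal((n, d))
Y = rng.standard_normal(n)
th = ridge(X, Y, tau=0.1)

# sanity check: the closed form beats small perturbations of itself
def obj(t):
    return np.sum((Y - X @ t) ** 2) / n + 0.1 * np.sum(t ** 2)

assert all(obj(th) <= obj(th + 1e-3 * rng.standard_normal(d)) for _ in range(5))
```

Since the objective is strictly convex for $\tau > 0$, any perturbation of the closed-form solution can only increase it.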
[…] $|\theta|_{w\ell_1} + |\theta'|_{w\ell_1}$ […] What do you conclude?

(b) Show that $|\theta|_{w\ell_q} \le |\theta|_q$.

(c) Show that, if $\lim_{d\to\infty} |\theta|_{w\ell_q} < \infty$, then $\lim_{d\to\infty} |\theta|_{q'} < \infty$ for all $q' > q$.

(d) Show that, for any $q \in (0, 2)$, if $\lim_{d\to\infty} |\theta|_{w\ell_q} = C$, there exists a constant $C_q > 0$ that depends on $q$ but not on $d$ and such that under the assumptions of Theorem 2.1[…]
[…] $|\theta^*|_0\,\sigma^2\log\big(ed/|\theta^*|_0\big)$ […]

Problem 2.7. Assume that the linear model (2.2) holds, where $\varepsilon \sim \mathrm{subG}_n(\sigma^2)$. Moreover, assume the conditions of Theorem 2.2 and that the columns of $X$ are normalized in such a way that $\max_j |X_j|_2 \le \sqrt{n}$. Then the Lasso estimator $\hat\theta^L$ with regularization parameter
$$2\tau = 8\sigma\sqrt{\frac{2\log(2d)}{n}}\,[…]$$
[…] Even though the model may not be linear, we are interested in studying the statistical properties of various linear estimators introduced in the previous chapters: $\hat\theta^{\mathrm{ls}}, \hat\theta^{\mathrm{ls}}_K, \tilde\theta^{\mathrm{ls}}_X, \hat\theta^{\mathrm{bic}}, \hat\theta^L$. Clearly, even with an infinite number of observations, we have no chance of finding a consistent estimator o[f …]

Remark 3.1. If $M = d$ and $\varphi_j(X) = X^{(j)}$ returns the $j$th coordinate of $X \in \mathrm{IR}^d$, then the goal is to approximate $f(x)$ by $\theta^\top x$. Nevertheless, the use of a dictionary allows for a much more general framework.

Note that the use of a dictionary does not affect the methods that we have been using so far, namely penalized/const[rained …]
[…] $\mathcal{H} = \{\varphi_1, \dots, \varphi_M\}$ […] $\bar\theta \in K$ is such that
$$R(\varphi_{\bar\theta}) \le R(\varphi_\theta)\,, \quad \forall\,\theta \in K\,.$$
Moreover, $R_K = R(\varphi_{\bar\theta})$ is called oracle risk on $K$. An estimator $\hat f$ is said to satisfy an oracle inequality (over $K$) with remainder term $\phi$ in expectation (resp. with high probability) if there exists a constant $C \ge 1$ such that
$$\mathrm{IE}\,R(\hat f) \le C\inf_{\theta \in K} R(\varphi_\theta) + […]$$
[…] $f$ onto the linear span of $\varphi_1, \dots, \varphi_n$. Since $Y = f + \varepsilon$, we get
$$|Y - \varphi_{\hat\theta^{\mathrm{ls}}}|_2^2 \le |Y - \varphi_{\bar\theta}|_2^2 \quad \Longrightarrow \quad |f - \varphi_{\hat\theta^{\mathrm{ls}}}|_2^2 \le |f - \varphi_{\bar\theta}|_2^2 + 2\varepsilon^\top(\varphi_{\hat\theta^{\mathrm{ls}}} - \varphi_{\bar\theta})\,.$$
Moreover, by Pythagoras's theorem, we have
$$|f - \varphi_{\hat\theta^{\mathrm{ls}}}|_2^2 - |f - \varphi_{\bar\theta}|_2^2 = |\varphi_{\hat\theta^{\mathrm{ls}}} - \varphi_{\bar\theta}|_2^2\,.$$
It yields
$$|\varphi_{\hat\theta^{\mathrm{ls}}} - \varphi_{\bar\theta}|_2^2 \le 2\varepsilon^\top(\varphi[…]$$
[…] $+ \dfrac{C\sigma^2}{\alpha(1-\alpha)n}\,|\theta|_0\log(eM) + \dfrac{C\sigma^2}{\alpha(1-\alpha)n}\,\log(1/\delta)$ […]

Proof. Recall that the proof of Theorem 2.14 for the BIC estimator begins as follows:
$$\frac1n|Y - \varphi_{\hat\theta^{\mathrm{bic}}}|_2^2 + \tau^2|\hat\theta^{\mathrm{bic}}|_0 \le \frac1n|Y - \varphi_\theta|_2^2 + \tau^2|\theta|_0\,.$$
This is true for any $\theta \in \mathrm{IR}^M$. It implies
$$|f - \varphi_{\hat\theta^{\mathrm{bic}}}|_2^2 + n\tau^2|\hat\theta^{\mathrm{bic}}|_0 \le |f - \varphi_\theta|_2^2 + n\tau^2|\theta|_0 + 2\varepsilon^\top(\varphi_{\hat\theta^{\mathrm{bic}}} - \varphi_\theta)\,[…]$$
[…] Using Young's inequality
$$\frac{2}{\alpha}\big[\varepsilon^\top U(\hat\theta^{\mathrm{bic}} - \theta)\big]^2 + \frac{\alpha}{2}|\varphi_{\hat\theta^{\mathrm{bic}}} - \varphi_\theta|_2^2\,, \qquad \frac{\alpha}{2}|\varphi_{\hat\theta^{\mathrm{bic}}} - \varphi_\theta|_2^2 \le \alpha|\varphi_{\hat\theta^{\mathrm{bic}}} - f|_2^2 + \alpha|\varphi_\theta - f|_2^2\,,$$
we get for $\alpha < 1$,
$$(1-\alpha)\,|\varphi_{\hat\theta^{\mathrm{bic}}} - f|_2^2 \le (1+\alpha)\,|\varphi_\theta - f|_2^2 + n\tau^2|\theta|_0 + \frac{2}{\alpha}\big[\varepsilon^\top U(\hat\theta^{\mathrm{bic}} - \theta)\big]^2 - n\tau^2|\hat\theta^{\mathrm{bic}}|_0\,[…]$$
[…] If the linear model happens to be correct, then, simply, $\mathrm{MSE}(\varphi_{\bar\theta}) = 0$.

Sparse oracle inequality for the Lasso

To prove an oracle inequality for the Lasso, we need incoherence on the design. Here the design matrix is given by the $n \times M$ matrix $\Phi$ with elements $\Phi_{i,j} = \varphi_j(X_i)$.

Theorem 3.5. Assume the genera[l …] $\varepsilon \sim \mathrm{subG}_n(\sigma^2)$ […]
[…] $|\hat\theta^L - \theta|_1 \le […] \qquad (3.7)$

Next, note that $\mathrm{INC}(k)$ for any $k \ge 1$ implies that $|\varphi_j|_2 \le 2\sqrt{n}$ for all $j = 1, \dots, M$. Applying Hölder's inequality using the same steps as in the proof of Theorem 2.15, we get that with probability $1-\delta$, it holds
$$2\varepsilon^\top(\varphi_{\hat\theta^L} - \varphi_\theta) \le \frac{n\tau}{2}|\hat\theta^L - \theta|_1\,, \qquad 2n\tau|\theta|_1 - 2n\tau|\hat\theta^L|_1 \le 2n\tau|\hat\theta^L - \theta|_1\,[…]$$
[…]
$$4n\tau|\hat\theta^L_S - \theta_S|_1 \le 4n\tau\sqrt{|\theta|_0}\,|\hat\theta^L_S - \theta_S|_2 \le 4\tau\sqrt{2n|\theta|_0}\,|\varphi_{\hat\theta^L} - \varphi_\theta|_2\,.$$
Using now the inequality $2ab \le \frac{2}{\alpha}a^2 + \frac{\alpha}{2}b^2$, we get
$$4\tau\sqrt{2n|\theta|_0}\,|\varphi_{\hat\theta^L} - \varphi_\theta|_2 \le \frac{16\tau^2 n|\theta|_0}{\alpha} + \frac{\alpha}{2}|\varphi_{\hat\theta^L} - \varphi_\theta|_2^2\,,$$
and
$$\frac{\alpha}{2}|\varphi_{\hat\theta^L} - \varphi_\theta|_2^2 \le \alpha|\varphi_{\hat\theta^L} - f|_2^2 + \alpha|\varphi_\theta - f|_2^2\,.$$
Combining this result with (3.7) and […]
[…] is not sparse. For such $\theta$, the Lasso estimator still enjoys slow rates as in Theorem 2.15, which can be easily extended to the misspecified case (see Problem 3.2). Fortunately, such vectors can be well approximated by sparse vectors in the following sense: for any vector $\theta$ […], there exists a vector $\theta'$ that is sparse and […]
[…]
$$\varphi_{\bar\theta} = \varphi_{\theta^{(1)}} + \varphi_{\theta^{(2)}}\,.$$
Moreover, observe that
$$|\theta^{(2)}|_1 = \sum_{j=k+1}^M |\bar\theta_j| \le R\,.$$
Let now $U \in \mathrm{IR}^n$ be a random vector with values in $\{0, \pm R\varphi_1, \dots, \pm R\varphi_M\}$ defined by
$$\mathrm{IP}\big(U = R\,\mathrm{sign}(\theta^{(2)}_j)\varphi_j\big) = \frac{|\theta^{(2)}_j|}{R}\,, \quad j = k+1, \dots, M\,, \qquad \mathrm{IP}(U = 0) = 1 - \frac{|\theta^{(2)}|_1}{R}\,.$$
Note that $\mathrm{IE}[U] = \varphi_{\theta^{(2)}}\,[…]$
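Maurey's argument rests on averaging $k$ i.i.d. copies of the random vector $U$ above: the average is supported on at most $k$ dictionary elements, and its expected squared distance to $\mathrm{IE}[U] = \varphi_{\theta^{(2)}}$ decays like $1/k$. A small Monte Carlo illustration (all names mine, not the notes'):

```python
import numpy as np

# Illustration of Maurey's empirical-mean argument: sample k columns of Phi
# with probabilities proportional to |theta2_j| and average them. The average
# is a k-sparse combination whose expected squared error decays like 1/k.
rng = np.random.default_rng(3)
n, M = 200, 100
Phi = rng.standard_normal((n, M))
theta2 = rng.random(M)
theta2 /= theta2.sum()             # normalize so |theta2|_1 = R = 1
target = Phi @ theta2              # the dense combination to approximate

def maurey_average(k):
    js = rng.choice(M, size=k, p=theta2)   # i.i.d. draws of the index of U
    return Phi[:, js].mean(axis=1)         # supported on at most k columns

errs = {k: np.mean([np.sum((maurey_average(k) - target) ** 2)
                    for _ in range(200)]) for k in (5, 50)}
```

Averaged over 200 repetitions, the squared error for $k = 50$ sits well below the one for $k = 5$, matching the $1/k$ variance decay.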
[…]
$$|f - \varphi_{\bar\theta}|_2^2 + \frac{(RD\sqrt{n})^2}{k} \ge |f - \varphi_{\theta^{(1)}+\tilde\theta}|_2^2 \ge \min_{\theta \in \mathrm{IR}^M,\,|\theta|_0 \le 2k} |f - \varphi_\theta|_2^2\,,$$
and to divide by $n$.

Maurey's argument implies the following corollary.

Corollary 3.7. Assume that the assumptions of Theorem 3.4 hold and that the dictionary $\{\varphi_1, \dots, \varphi_M\}$ is normalized in such a way that […] Then there exists a […]
[…]
$$\le \mathrm{MSE}(\varphi_{\theta'}) + \frac{|\theta'|_1^2}{|\theta'|_0} + C\,\frac{\sigma^2|\theta'|_0\log(eM)}{n}\,.$$
Taking the infimum on both sides, we get
$$\inf_{\theta \in \mathrm{IR}^M}\Big\{ \mathrm{MSE}(\varphi_\theta) + \frac{|\theta|_1^2}{|\theta|_0} + C\,\frac{\sigma^2|\theta|_0\log(eM)}{n} \Big\} \le \inf_{\theta' \in \mathrm{IR}^M} \mathrm{MSE}(\varphi_{\theta'}) + C\min_k\Big( \frac{|\theta'|_1^2}{k} + \frac{\sigma^2 k\log(eM)}{n} \Big)\,.$$
To control the minimum over $k$, we need to consider three case[s …]
[…]
$$\le C\,\frac{|\theta'|_1^2}{M} \le C\sigma|\theta'|_1\sqrt{\frac{\log(eM)}{n}}\,.$$
On the other hand, if $M \le \dfrac{|\theta'|_1}{\sigma\sqrt{\log(eM)/n}}$, then for any $\theta \in \mathrm{IR}^M$, we have
$$\frac{\sigma^2|\theta|_0\log(eM)}{n} \le \frac{\sigma^2 M\log(eM)}{n} \le C\sigma|\theta'|_1\sqrt{\frac{\log(eM)}{n}}\,.$$
Note that this last result holds for any estimator that satisfies an oracle inequali[ty …]
[…]

Fourier decomposition

Historically, nonparametric estimation was developed before high-dimensional statistics, and most results hold for the case where the dictionary $\mathcal{H} = \{\varphi_1, \dots, \varphi_M\}$ forms an orthonormal system of $L^2([0,1])$:
$$\int_0^1 \varphi_j^2(x)\,dx = 1\,, \qquad \int_0^1 \varphi_j(x)\varphi_k(x)\,dx = 0\,, \quad \forall\,j \ne k\,.$$
We will also deal with […]
[…]
$$\psi_{jk}(x) = 2^{j/2}\psi(2^j x - k)\,, \quad j, k \in \mathbb{Z}\,.$$
It can be shown that for a suitable $\psi$, the dictionary $\{\psi_{j,k},\ j, k \in \mathbb{Z}\}$ forms an orthonormal system of $L^2([0,1])$ and sometimes a basis. In the latter case, for any function $g \in L^2([0,1])$, it holds
$$g = \sum_{j=-\infty}^\infty \sum_{k=-\infty}^\infty \theta_{jk}\psi_{jk}\,, \qquad \theta_{jk} = \int_0^1 g(x)\psi_{jk}(x)\,dx\,.$$
The coefficients $\theta_{jk}$ are called wavelet coeffi[cients …]
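Orthonormality of such systems is easy to verify numerically. The sketch below checks it by quadrature for the trigonometric system used in this section, taken here (an assumption on my part, matching the usual convention) as $\varphi_1 = 1$, $\varphi_{2k}(x) = \sqrt{2}\cos(2\pi k x)$, $\varphi_{2k+1}(x) = \sqrt{2}\sin(2\pi k x)$:

```python
import numpy as np

# Numerical check that the trigonometric system is orthonormal in L^2([0,1]):
# the Gram matrix of inner products should be the identity.
def phi(j, x):
    if j == 1:
        return np.ones_like(x)
    k = j // 2
    if j % 2 == 0:
        return np.sqrt(2) * np.cos(2 * np.pi * k * x)
    return np.sqrt(2) * np.sin(2 * np.pi * k * x)

x = np.linspace(0, 1, 20001)
gram = np.array([[np.trapz(phi(j, x) * phi(l, x), x) for l in range(1, 8)]
                 for j in range(1, 8)])
```

With 20001 quadrature points, the Gram matrix matches $I_7$ to three decimal places.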
[…] $f^{(\beta-1)}$ is absolutely continuous and
$$\int_0^1 \big[f^{(\beta)}\big]^2 \le L^2\,, \qquad f^{(j)}(0) = f^{(j)}(1)\,,\ j = 0, \dots, \beta - 1\,.$$
Any function $f \in W(\beta, L)$ can be represented$^1$ as its Fourier expansion along the trigonometric basis:
$$f(x) = \theta^*_1\varphi_1(x) + \sum_{k=1}^\infty\big( \theta^*_{2k}\varphi_{2k}(x) + \theta^*_{2k+1}\varphi_{2k+1}(x) \big)\,, \quad \forall\,x \in [0,1]\,,$$
where $\theta^* = \{\theta^*_j\}_{j\ge 1}$ is in th[e …]
[…] $\sum_{j=1}^\infty \theta^*_j\varphi_j$, where the sequence $\{\theta^*_j\}_{j\ge 1}$ belongs to the Sobolev ellipsoid of $\ell_2(\mathrm{IN})$ defined by
$$\Theta(\beta, Q) = \Big\{ \theta \in \ell_2(\mathrm{IN}) : \sum_{j=1}^\infty a_j^2\theta_j^2 \le Q \Big\}$$
for $Q = L^2/\pi^{2\beta}$.

Proof. Let us first recall the definition of the Fourier coefficients $\{s_k(j)\}_{k\ge 1}$ of the $j$th derivative $f^{(j)}$ of $f$ for $j = 1, \dots, \beta$:
$$s_1(j) = \int_0^1 […]\,, \qquad s_{2k}(j) = […]$$
[…] $= (2\pi k)\,s_{2k+1}(\beta - 1)$.

Moreover,
$$s_{2k+1}(\beta) = \sqrt{2}\,f^{(\beta-1)}(t)\sin(2\pi k t)\Big|_0^1 - (2\pi k)\,s_{2k}(\beta - 1) = -(2\pi k)\,s_{2k}(\beta - 1)\,.$$
In particular, it yields
$$s_{2k}(\beta)^2 + s_{2k+1}(\beta)^2 = (2\pi k)^2\big[ s_{2k}(\beta - 1)^2 + s_{2k+1}(\beta - 1)^2 \big]\,.$$
By induction, we find that for any $k \ge 1$,
$$s_{2k}(\beta)^2 + s_{2k+1}(\beta)^2 = (2\pi k)^{2\beta}\big( \theta_{2k}^2 + \theta_{2k+1}^2 \big)\,[…]$$
[…]
$$\int_0^1 \big(f^{(\beta)}(t)\big)^2\,dt \le L^2\,,$$
so that $\theta \in \Theta(\beta, L^2/\pi^{2\beta})$.

It can actually be shown that the reciprocal is true, that is, any function with Fourier coefficients in $\Theta(\beta, Q)$ belongs to $W(\beta, L)$, but we will not be needing this.

In what follows, we will define smooth functions as functions with Fourier coeffi[cients …]
Proof. Note first that for any $j, j' \in \{1, \dots, n-1\}$, $j \ne j'$, the inner product $\varphi_j^\top\varphi_{j'}$ is of the form
$$\varphi_j^\top\varphi_{j'} = 2\sum_{s=0}^{n-1} u_j(2\pi k_j s/n)\,v_{j'}(2\pi k_{j'} s/n)$$
where $k_j = \lfloor j/2 \rfloor$ is the integer part of $j/2$ and, for any $x \in \mathrm{IR}$, $u_j(x), v_{j'}(x) \in \{\mathrm{Re}(e^{ix}), \mathrm{Im}(e^{ix})\}$. Next, observe that if $k_j \ne k_{j'}$, we have […]
[…] In the case where $k_j \ne k_{j'}$, we have $|a|^2|a'|^2 = |b|^2|b'|^2 = 0$, i.e., either $a = 0$ or $b = 0$, and either $a' = 0$ or […]
$$a^\top a' = -b^\top b' = 0\,, \qquad b^\top a' = a^\top b' = 0\,,$$
which implies $\varphi_j^\top\varphi_{j'} = 0$. To conclude the proof, it remains to deal with the case where $k_j = k_{j'}$. This can happen in two cases: $j' = j + 1$ or $j' = j - 1$. In th[…]
[…]
$$|\varphi_j|_2^2 = \begin{cases} 2|a|_2^2 & \text{if } j \text{ is even}\\ 2|b|_2^2 & \text{if } j \text{ is odd} \end{cases}$$
Moreover,
$$|a|_2^2 + |b|_2^2 = \sum_{s=0}^{n-1} \Big| e^{i2\pi k_j s/n} \Big|^2 = n\,.$$
Therefore, the design matrix $\Phi$ is such that
$$\Phi^\top\Phi = nI_M\,.$$
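The discrete orthogonality $\Phi^\top\Phi = nI_M$ is easy to confirm numerically. A small check, assuming design points $x_s = s/n$ and the trigonometric basis convention used above:

```python
import numpy as np

# Numerical check: with design points x_s = s/n and the first M = n-1
# trigonometric basis functions, the design matrix satisfies Phi^T Phi = n I_M.
n = 64
s = np.arange(n) / n
cols = [np.ones(n)]
for k in range(1, (n - 1) // 2 + 1):
    cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * s))
    cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * s))
Phi = np.column_stack(cols[: n - 1])   # n x (n-1) design matrix
print(np.abs(Phi.T @ Phi - n * np.eye(n - 1)).max())  # numerically zero
```

The identity holds exactly (up to floating point) because all frequencies $k_j$ stay below $n/2$, so no aliasing occurs on the grid.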
Integrated squared error

As mentioned in the introduction of this chapter, the smoothness […]
[…]
$$\sum_{j \ge n} […] \lesssim Q n^{-2\beta}\,. \qquad (3.11)$$
Proof. Note that for any $\theta \in \Theta(\beta, Q)$, if $\beta > 1/2$, then
$$\sum_{j=2}^\infty |\theta_j| = \sum_{j=2}^\infty a_j|\theta_j|\,\frac{1}{a_j} \le \sqrt{\sum_{j=2}^\infty a_j^2\theta_j^2}\,\sqrt{\sum_{j=2}^\infty \frac{1}{a_j^2}} \qquad \text{(by Cauchy–Schwarz)}$$
$$\le \sqrt{Q}\,\sqrt{\sum_{j=1}^\infty \frac{1}{j^{2\beta}}} < \infty\,.$$
Since $\{\varphi_j\}_j$ forms an orthonormal system in $L^2([0,1])$, […]
[…] where in the last inequality, we used $|\varphi_j|_2 \le \sqrt{2n}$. For $j \ge n$ and $\theta^* \in \Theta(\beta, Q)$, we have
$$\sum_{j \ge n} |\theta^*_j| = \sum_{j \ge n} a_j|\theta^*_j|\,\frac{1}{a_j} \le \sqrt{\sum_{j \ge n} a_j^2|\theta^*_j|^2}\,\sqrt{\sum_{j \ge n} \frac{1}{a_j^2}} \lesssim \sqrt{Q}\,n^{\frac12 - \beta}\,.$$
Note that the truncated Fourier series $\varphi_{\theta^*}$ is an oracle: this is what we see when we view $f$ through the lens of functions with only low-frequency harmonic[s …]
[…]
$$= 2\big(\varphi^M_{\hat\theta^{\mathrm{ls}}} - \varphi^M_{\theta^*}\big)^\top\Big( \sum_{j \ge n} \theta^*_j\varphi_j \Big) + C\sigma^2\big(M + \log(1/\delta)\big)\,,$$
where we used Lemma 3.13 in the last equality. Together with (3.11) and Young's inequality $2ab \le \alpha a^2 + b^2/\alpha$, $a, b \ge 0$ for any $\alpha > 0$, we get
$$2\big(\varphi^M_{\hat\theta^{\mathrm{ls}}} - \varphi^M_{\theta^*}\big)^\top\Big( \sum_{j \ge n} \theta^*_j\varphi_j \Big) \le \alpha\big|\varphi^M_{\hat\theta^{\mathrm{ls}}} - \varphi^M_{\theta^*}\big|_2^2 + \frac{C}{\alpha}\,Qn^{-2\beta}\,,$$
[…] and assume that $\theta^* \in \Theta(\beta, Q)$ and $\varepsilon \sim \mathrm{subG}_n(\sigma^2)$, $\sigma^2 \le 1$. Let $M = \lceil n^{\frac{1}{2\beta+1}} \rceil$ and $n$ be large enough so that $M \le n - 1$. Then the least squares estimator $\hat\theta^{\mathrm{ls}}$, with $\{\varphi_j\}_{j=1}^M$ being the trigonometric basis, satisfies with probability $1-\delta$, for $n$ large enough,
$$\big\|\varphi_{\hat\theta^{\mathrm{ls}}} - f\big\|_{L^2([0,1])}^2 \lesssim n^{-\frac{2\beta}{2\beta+1}} + \sigma^2\,\frac{\log(1/\delta)}{n}\,,$$
where the constant factors may depend on $\beta$, $Q$ and $\sigma$. Moreover,
$$\mathrm{IE}\,\big\|\varphi_{\hat\theta^{\mathrm{ls}}} - f\big\|^2\,[…]$$
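The bias/variance trade-off behind this theorem (variance $\sigma^2 M/n$ against a bias that shrinks with $M$) shows up plainly in simulation. A sketch, not the notes' code, using the discrete orthogonality $\Phi^\top\Phi = nI_M$ so that least squares reduces to $\hat\theta = \Phi^\top Y/n$:

```python
import numpy as np

# Simulation of the projection estimator: regress noisy samples of a smooth f
# on the first M trigonometric basis functions and compare a small and a large M.
rng = np.random.default_rng(4)
n = 512
s = np.arange(n) / n
f = np.sin(2 * np.pi * s) + 0.5 * np.cos(4 * np.pi * s)  # smooth test function
Y = f + 0.5 * rng.standard_normal(n)

def design(M):
    cols = [np.ones(n)]
    k = 1
    while len(cols) < M:
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * s))
        if len(cols) < M:
            cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * s))
        k += 1
    return np.column_stack(cols)

mses = {}
for M in (8, 256):
    Phi = design(M)
    f_hat = Phi @ (Phi.T @ Y) / n   # least squares under Phi^T Phi = n I_M
    mses[M] = np.mean((f_hat - f) ** 2)
```

Here the test function lies in the span of the first few basis elements, so $M = 8$ has essentially no bias and small variance, while $M = 256$ pays variance of order $\sigma^2 M/n$ for nothing.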
[…] that for the prescribed $\beta$, we have $n^{-1} \le n^{-\frac{2\beta}{2\beta+1}}$. The bound in expectation can be obtained by integrating the tail bound.

Adaptive estimation

The rate attained by the projection estimator $\varphi_{\hat\theta^{\mathrm{ls}}}$ with $M = \lceil n^{\frac{1}{2\beta+1}} \rceil$ is actually optimal so, in this sense, it is a good estimator. Unfortunately, its implementation requires the k[nowledge of $\beta$ …]
[…] Let $\hat\theta^{\mathrm{bic}}$ (resp. $\hat\theta^L$) be the BIC (resp. Lasso) estimator defined in (3.3) (resp. (3.4)) over $\mathrm{IR}^{n-1}$, with regularization parameter given by (3.5) (resp. (3.6)). Then $\varphi^{n-1}_{\hat\theta}$, where $\hat\theta \in \{\hat\theta^{\mathrm{bic}}, \hat\theta^L\}$, satisfies with probability $1-\delta$,
$$\big\|\varphi^{n-1}_{\hat\theta} - f\big\|_{L^2([0,1])}^2 \lesssim n^{-\frac{2\beta}{2\beta+1}} + \sigma^2\,\frac{\log(1/\delta)}{n}\,.$$
Moreover, […]
[…]
$$\big|\varphi^{n-1}_{\hat\theta} - f\big|_2^2 \le \big|\varphi^{n-1}_{\hat\theta} - \varphi^{n-1}_\theta\big|_2^2 + 2\big(\varphi^{n-1}_{\hat\theta} - \varphi^{n-1}_\theta\big)^\top\big(\varphi^{n-1}_\theta - f\big) + R(|\theta|_0) \le \frac{2\alpha}{1-\alpha}\,\big|\varphi^{n-1}_\theta - f\big|_2^2 + […] + R(|\theta|_0)\,,$$
where we used Young's inequality once again. Choose now $\alpha = 1/2$ and $\theta = \theta^*_M$, [which] is equal to $\theta^*$ on its first $M$ coordinates a[nd …]
[…]
$$\big\|\varphi^{n-1}_{\hat\theta} - f\big\|_{L^2([0,1])}^2 \lesssim \big\|\varphi_{\theta^*_M} - f\big\|_{L^2([0,1])}^2 + Qn^{1-2\beta} + \frac{R(M)}{n}\,.$$
Moreover, using (3.10), we find that
$$\big\|\varphi^{n-1}_{\hat\theta} - f\big\|_{L^2([0,1])}^2 \lesssim M^{-2\beta} + Qn^{1-2\beta} + \frac{M}{n}\log(en) + \frac{\sigma^2}{n}\log(1/\delta)\,.$$
To conclude the proof, choose $M = \lceil n^{\frac{1}{2\beta+1}} \rceil$ and observe that [this choice] of $\beta$ ensures that $n^{1-2\beta} \le n^{-\frac{2\beta}{2\beta+1}}$; the bound in expectation is obtained by integrating the tail.
[…] The Lasso estimator $\hat\theta^L$ with regularization parameter $2\tau$ satisfies the following exact oracle inequality:
$$\mathrm{MSE}(\varphi_{\hat\theta^L}) \le \inf_{\theta \in \mathrm{IR}^M}\Big\{ \mathrm{MSE}(\varphi_\theta) + C\sigma|\theta|_1\sqrt{\frac{\log M}{n}} \Big\}$$
with probability at least $1 - […]$, for some positive constants $C, c$.

[…] Let $\{\varphi_1, \dots, \varphi_M\}$ be a dictionary normalized in such a way that $\max_j |\varphi_j|_2 \le \sqrt{n}$. Show that for any integer $k$ such that $1 \le […]$
[…] in terms of their sparsity, here we can measure the complexity of a matrix by its rank. This feature was successfully employed in a variety of applications ranging from multi-task learning to collaborative filtering. This last application was made popular by the Netflix prize in particular.

In this chapter, we study sev[eral …]
[…]
$$AA^\top u_j = \lambda_j^2 u_j \qquad \text{and} \qquad A^\top A v_j = \lambda_j^2 v_j$$
for $j = 1, \dots, r$. The values $\lambda_j > 0$ are called singular values of $A$ and are uniquely defined. If rank $r < \min(n, m)$, then the singular values of $A$ are given by $\lambda = (\lambda_1, \dots, \lambda_r, 0, \dots, 0)$, padded with $\min(n, m) - r$ zeros. This way, the vector $\lambda$ of singular values of an $n \times m$ matrix is a vector in $\mathrm{IR}^{\min(n,m)}$. […]
[…], $q > 0$. The cases where $q \in \{0, \infty\}$ can also be extended to matrices:
$$|A|_0 = \sum_{ij} \mathrm{1I}(a_{ij} \ne 0)\,, \qquad |A|_\infty = \max_{ij} |a_{ij}|\,.$$
The case $q = 2$ plays a particular role for matrices: $|A|_2$ is called the Frobenius norm of $A$ and is often denoted by $\|A\|_F$. It is also sometimes called the Hilbert–Schmidt norm, associated to the inner product: […]
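The Schatten $q$-norms discussed below are just $\ell_q$ norms of the singular value vector $\lambda$, so they all come out of one SVD. A short numpy sketch:

```python
import numpy as np

# Schatten q-norms from the singular values: q=1 is the nuclear norm,
# q=2 the Frobenius norm, q=inf the operator (spectral) norm.
def schatten(A, q):
    lam = np.linalg.svd(A, compute_uv=False)
    if np.isinf(q):
        return lam.max()
    return (lam ** q).sum() ** (1.0 / q)

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
print(schatten(A, 1), schatten(A, 2), schatten(A, np.inf))
```

As a sanity check, `schatten(A, 2)` agrees with `np.linalg.norm(A, 'fro')` and `schatten(A, np.inf)` with `np.linalg.norm(A, 2)`.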
[…] $n$ matrices with singular values $\lambda_1(A) \ge \lambda_2(A) \ge \dots$ and $\lambda_1(B) \ge \dots \ge \lambda_{\min(m,n)}(B)$ respectively. Then the following inequalities hold:
$$\max_k \big|\lambda_k(A) - \lambda_k(B)\big| \le \|A - B\|_{\mathrm{op}}\,, \qquad \text{Weyl (1912)}$$
$$\sum_k \big|\lambda_k(A) - \lambda_k(B)\big|^2 \le \|A - B\|_F^2\,, \qquad \text{Hoffm[an–Wielandt …]}$$
$$\langle A, B \rangle \le \|\lambda(A)\|_p\,\|\lambda(B)\|_q\,, \quad \frac1p + \frac1q = 1\,,\ p, q \ge 1\,. \qquad \text{(Hölder)}$$
[…] this chapter, we will focus on the prediction task, which consists in estimating $X\Theta^*$, where $Y \in \mathrm{IR}^{n\times T}$, $X \in \mathrm{IR}^{n\times d}$ is the design matrix (as before), $\Theta^* \in \mathrm{IR}^{d\times T}$, and $E \sim […]$.

As mentioned in the foreword of this chapter, we can view this problem as $T$ (univariate) linear regression problems $Y^{(j)} = X\theta^{*,(j)} + \varepsilon^{(j)}$, $j = 1, \dots, T$, where $Y^{(j)}, \theta^{*,(j)}$ […]
[…]
$$\Theta^* = \begin{pmatrix} \bullet & \bullet & 0 & \cdots & 0\\ \bullet & \bullet & 0 & \cdots & 0\\ \vdots & & & & \vdots \end{pmatrix},$$
where $\bullet$ indicates a potentially nonzero entry.

It follows from the result of Problem 4.1 that if each task is performed individually, one may find an estimator $\hat\Theta$ such that
$$\frac1n\mathrm{IE}\,\|X\hat\Theta - X\Theta^*\|_F^2 \lesssim \sigma^2\,\frac{kT\log(ed)}{n}\,,$$
whe[re …]
[…] these franchises and that all franchises are linear combinations of these profiles.

Sub-Gaussian matrix model

Recall that under the assumption ORT for the design matrix, i.e., $X^\top X = nI_d$, the univariate regression model can be reduced to the sub-Gaussian sequence model. Here we investigate the effect of this assump[tion …]
[…] $k_0 = |\lambda|_0$. Therefore, if we knew $u_j$ and $v_j$, we could simply estimate the $\lambda_j$'s by hard thresholding. It turns out that estimating these eigenvectors by the eigenvectors of $y$ is sufficient.

Consider the SVD of the observed matrix $y$:
$$y = \sum_j \hat\lambda_j\hat u_j\hat v_j^\top\,.$$
Definition 4.1. The singular value thresholding […]
[…] It yields
$$\max_{u \in \mathcal{S}^{d-1}}\max_{v \in \mathcal{S}^{T-1}} u^\top A v \le \max_{x \in \mathcal{N}_1}\max_{v \in \mathcal{S}^{T-1}} x^\top A v + \frac14\max_{u \in \mathcal{S}^{d-1}}\max_{v \in \mathcal{S}^{T-1}} u^\top A v$$
$$\le \max_{x \in \mathcal{N}_1}\max_{y \in \mathcal{N}_2} x^\top A y + \frac14\max_{x \in \mathcal{N}_1}\max_{v \in \mathcal{S}^{T-1}} x^\top A v + \frac14\max_{u \in \mathcal{S}^{d-1}}\max_{v \in \mathcal{S}^{T-1}} u^\top A v$$
$$\le \max_{x \in \mathcal{N}_1}\max_{y \in \mathcal{N}_2} x^\top A y + \frac12\max_{u \in \mathcal{S}^{d-1}}\max_{v \in \mathcal{S}^{T-1}} u^\top A v\,,$$
so that
$$\|A\|_{\mathrm{op}} \le 2\max_{x \in \mathcal{N}_1}\max_{y \in \mathcal{N}_2} x^\top A y\,.$$
So that for any $t \ge 0$, by a union bound,
$$\mathrm{IP}\big( \|[…]$$
[…] The singular value thresholding estimator $\hat\Theta^{\mathrm{svt}}$ with threshold
$$2\tau = 8\sigma\sqrt{\frac{\log(12)(d \vee T)}{n}} + 4\sigma\sqrt{\frac{2\log(1/\delta)}{n}}\,, \qquad (4.3)$$
satisfies
$$\frac1n\|X\hat\Theta^{\mathrm{svt}} - X\Theta^*\|_F^2 = \|\hat\Theta^{\mathrm{svt}} - \Theta^*\|_F^2 \le 144\,\mathrm{rank}(\Theta^*)\,\tau^2 \lesssim \frac{\sigma^2\,\mathrm{rank}(\Theta^*)}{n}\big( d \vee T + \log(1/\delta) \big)$$
with probability $1-\delta$.
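Before the proof, it is worth seeing how little code the estimator itself takes: compute one SVD and drop singular values below the threshold. A sketch under the ORT-type reduction where one observes $y = \Theta^* + $ noise; the threshold value here is illustrative, not the calibrated (4.3):

```python
import numpy as np

# Singular value thresholding: SVD the observation, keep only singular values
# above the threshold, and rebuild the matrix from the surviving components.
def svt(y, two_tau):
    U, lam, Vt = np.linalg.svd(y, full_matrices=False)
    keep = lam > two_tau
    return (U[:, keep] * lam[keep]) @ Vt[keep]

rng = np.random.default_rng(6)
d, T, r = 60, 40, 3
Theta = rng.standard_normal((d, r)) @ rng.standard_normal((r, T))  # rank-3 signal
y = Theta + 0.1 * rng.standard_normal((d, T))
Theta_hat = svt(y, two_tau=2.0)
```

With noise of operator norm roughly $\sigma(\sqrt d + \sqrt T) \approx 1.4$ and signal singular values far above the threshold, the estimator recovers the rank exactly and the matrix accurately.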
Proof. Assume without loss of generality that the singular valu[es …] $\|\bar\Theta - \Theta^*\|_F^2 \qquad (4.4)$

Using Cauchy–Schwarz, we control the first term as follows:
$$\|\hat\Theta^{\mathrm{svt}} - \bar\Theta\|_F^2 \le \mathrm{rank}(\hat\Theta^{\mathrm{svt}} - \bar\Theta)\,\|\hat\Theta^{\mathrm{svt}} - \bar\Theta\|_{\mathrm{op}}^2 \le 2|S|\,\|\hat\Theta^{\mathrm{svt}} - \bar\Theta\|_{\mathrm{op}}^2\,.$$
Moreover,
$$\|\hat\Theta^{\mathrm{svt}} - \bar\Theta\|_{\mathrm{op}} \le \|\hat\Theta^{\mathrm{svt}} - y\|_{\mathrm{op}} + \|y - \Theta^*\|_{\mathrm{op}} + \|\Theta^* - \bar\Theta\|_{\mathrm{op}} \le […] + \tau + \max_{j \in S^c} |\lambda_j| \le 6\tau\,. […]$$
[…]
$$\sum_j \min\big(|\lambda_j|^2, 432\,\tau^2\big) \le \sum_{j=1}^{\mathrm{rank}(\Theta^*)} 432\,\tau^2 = 432\,\mathrm{rank}(\Theta^*)\,\tau^2\,.$$
In the next subsection, we extend our analysis to the case where $X$ does not necessarily satisfy the assumption ORT.

Penalization by rank

The estimator from this section is the counterpart of the BIC estimator in the spectral domain. However, we will see that […]
[…]'s inequality, we have
$$\|X\hat\Theta^{\mathrm{rk}} - X\Theta^*\|_F^2 \le 2\langle E, X\hat\Theta^{\mathrm{rk}} - X\Theta^* \rangle - 2n\tau^2\,\mathrm{rank}(\hat\Theta^{\mathrm{rk}}) + 2n\tau^2\,\mathrm{rank}(\Theta^*)\,.$$
Moreover,
$$2\langle E, X\hat\Theta^{\mathrm{rk}} - X\Theta^* \rangle = 2\|X\hat\Theta^{\mathrm{rk}} - X\Theta^*\|_F\,\langle E, U \rangle \le 2\langle E, U \rangle^2 + \frac12\|X\hat\Theta^{\mathrm{rk}} - X\Theta^*\|_F^2\,,$$
where
$$U = \frac{X\hat\Theta^{\mathrm{rk}} - X\Theta^*}{\|X\hat\Theta^{\mathrm{rk}} - X\Theta^*\|_F}\,.$$
Write $X\hat\Theta^{\mathrm{rk}} - X\Theta^* = \Phi N$, where $\Phi$ is an $n \times d$ matrix whose columns form an orthonormal basis of the column sp[ace …]
$$\langle E, U\rangle^2 \le \|\Phi^\top E\|_{\mathrm{op}}^2\,\bigl[\operatorname{rank}(\hat\Theta^{\mathrm{rk}}) + \operatorname{rank}(\Theta^*)\bigr]\,.$$
Next, note that Lemma 4.2 yields $\|\Phi^\top E\|_{\mathrm{op}}^2 \le n\tau^2$, so that
$$\langle E, U\rangle^2 \le n\tau^2\bigl[\operatorname{rank}(\hat\Theta^{\mathrm{rk}}) + \operatorname{rank}(\Theta^*)\bigr]\,.$$
Together with (4.5), this completes the proof.
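For small problems the rank-penalized criterion can be evaluated directly by scanning candidate ranks: for fixed rank $k$, the best fit is a truncated SVD of the projection of $Y$ onto the column span of $X$. A hedged sketch (names and the pseudo-inverse step are ours):

```python
import numpy as np

# Sketch: minimize ||Y - X Theta||_F^2 + 2 n tau^2 rank(Theta) by scanning
# ranks k; the residual splits into an off-range part plus the tail
# singular values of the projected response. Names are ours.
def rank_penalized(Y, X, tau):
    n = X.shape[0]
    P = X @ np.linalg.pinv(X)               # projector onto range(X)
    Ybar = P @ Y
    U, s, Vt = np.linalg.svd(Ybar, full_matrices=False)
    base = np.sum((Y - Ybar) ** 2)          # residual orthogonal to range(X)
    best_k, best_val = 0, np.inf
    for k in range(len(s) + 1):
        val = base + np.sum(s[k:] ** 2) + 2 * n * tau**2 * k
        if val < best_val:
            best_k, best_val = k, val
    Yk = (U[:, :best_k] * s[:best_k]) @ Vt[:best_k, :]
    return np.linalg.pinv(X) @ Yk           # any Theta with X Theta = Yk
```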
It follows from Theorem 4.4 that the estimator by rank penaliza... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
$\mathbb{R}^{d\times T}$ into $\mathbb{R}^{n\times T}$, and
$$\|Y - X\Theta\|_F^2 = \|Y - \bar Y\|_F^2 + \|\bar Y - X\Theta\|_F^2\,.$$
Next consider the SVD of $\bar Y$:
$$\bar Y = \sum_j \lambda_j u_j v_j^\top\,,$$
where $\lambda_1 \ge \lambda_2 \ge \dots$. The claim is that if we define $\tilde Y$ by
$$\tilde Y = \sum_{j=1}^k \lambda_j u_j v_j^\top\,,$$
which is clearly of rank at most $k$, then it satisfies
$$\|\bar Y - \tilde Y\|_F^2 = \min_{Z \,:\, \operatorname{rank}(Z) \le k} \|\bar Y - Z\|_F^2\,.$$
Indeed, …
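The claim above (Eckart-Young in Frobenius norm) can be checked numerically by comparing the truncated SVD against random rank-$k$ competitors; this is a sanity check, not a proof:

```python
import numpy as np

# Truncate the SVD after k terms: a best rank-k approximation in
# Frobenius norm, with squared error equal to the sum of squared
# tail singular values.
def best_rank_k(Ybar, k):
    U, s, Vt = np.linalg.svd(Ybar, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```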
ization $\hat\Theta$ is defined to be any solution to the minimization problem
$$\min_{\Theta \in \mathbb{R}^{d\times T}} \Bigl\{\frac{1}{n}\|Y - X\Theta\|_F^2 + \tau\|\Theta\|_1\Bigr\}\,.$$
Clearly this criterion is convex, and it can actually be implemented efficiently using semi-definite programming. It has been popularized by matrix completion problems. Let $X$ have the following SVD:
$$X = \sum_{j=1}^r \lambda_j u_j v_j^\top\,,$$
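When the design is orthogonal in the ORT sense, the nuclear-norm penalized problem reduces to the proximal operator of the nuclear norm, i.e. soft thresholding of the singular values of the least-squares estimate. A sketch of that prox (this reduction, not the general SDP route, is what the code shows):

```python
import numpy as np

# Prox of the nuclear norm: argmin_Z 0.5*||Z - A||_F^2 + t*||Z||_1(nuclear),
# computed by soft thresholding the singular values of A.
def nuclear_prox(A, t):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```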
be $n$ i.i.d. sub-Gaussian random vectors such that $\mathbb{E}[XX^\top] = \Sigma$ and $X \sim \mathrm{subG}_d(\|\Sigma\|_{\mathrm{op}})$. Then
$$\|\hat\Sigma - \Sigma\|_{\mathrm{op}} \lesssim \|\Sigma\|_{\mathrm{op}}\Bigl(\sqrt{\frac{d + \log(1/\delta)}{n}} \vee \frac{d + \log(1/\delta)}{n}\Bigr)\,,$$
with probability $1-\delta$.

4.3. Covariance matrix estimation

Proof. Observe first that without loss of generality we can assume that $\Sigma = I_d$. Indeed, n…
It holds
$$\mathbb{P}\bigl(\|\hat\Sigma - I_d\|_{\mathrm{op}} > t\bigr) \le \sum_{x,y\in\mathcal{N}} \mathbb{P}\bigl(x^\top(\hat\Sigma - I_d)y > t/2\bigr)\,. \qquad (4.6)$$
It holds,
$$x^\top(\hat\Sigma - I_d)y = \frac{1}{n}\sum_{i=1}^n \Bigl\{(X_i^\top x)(X_i^\top y) - \mathbb{E}\bigl[(X_i^\top x)(X_i^\top y)\bigr]\Bigr\}\,.$$
Using polarization, we also have
$$(X_i^\top x)(X_i^\top y) = \frac{Z_+^2 - Z_-^2}{4}\,,$$
where $Z_+ = X_i^\top(x+y)$ and $Z_- = X_i^\top(x-y)$. It yields …
since $X \sim \mathrm{subG}(2)$, and it follows from Lemma 1.12 that
$$Z_+^2 - \mathbb{E}[Z_+^2] \sim \mathrm{subE}(32) \qquad\text{and}\qquad Z_-^2 - \mathbb{E}[Z_-^2] \sim \mathrm{subE}(32)\,.$$
Therefore, for any $s \le 1/16$ and any $Z \in \{Z_+, Z_-\}$, we have
$$\mathbb{E}\Bigl[\exp\Bigl(\frac{s}{2}\bigl(Z^2 - \mathbb{E}[Z^2]\bigr)\Bigr)\Bigr] \le e^{128 s^2}\,.$$
$$\dots \qquad (4.7)$$
Theorem 4.6 indicates that for fixed $d$, the empirical covariance matrix is a consistent estimator of $\Sigma$ (in any norm, as they are all equivalent in finite dimension). However, the bound that we got is not satisfactory in high dimensions, when $d \gg n$. To overcome this limitation, we can…
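The fixed-$d$ consistency asserted by Theorem 4.6 is easy to illustrate with a quick Monte Carlo check (the constant and the seed are ours):

```python
import numpy as np

# Empirical covariance of n centered observations in R^d.
def empirical_cov(X):
    n = X.shape[0]
    return X.T @ X / n
```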
given by the variance $\mathrm{Var}(X^\top u)$. The goal is then to maximize reward subject to risk constraints. In most instances, the empirical covariance matrix is plugged into the formula in place of $\Sigma$.
4.4 PRINCIPAL COMPONENT ANALYSIS
Spiked covariance model
Estimating the variance in ... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
and let $Y_1, \dots, Y_n \sim \mathcal{N}_d(0, I_d)$, so that the $v^\top Y_i$ are i.i.d. $\mathcal{N}(0, 1)$. In particular, the vectors $(v^\top Y_1)v, \dots, (v^\top Y_n)v$ live in the one-dimensional space spanned by $v$. If one were to observe such data, the problem would be easy, as only two observations would suffice to recover $v$. Instead, we observe $X_1, \dots, X_n \in \mathbb{R}^d$, where $X_i$…
these notes.

Clearly, under the spiked covariance model, $v$ is the eigenvector of the matrix $\Sigma$ that is associated to its largest eigenvalue $1 + \theta$. We will refer to this vector simply as the largest eigenvector. To estimate it, a natural candidate is the largest eigenvector $\tilde v$ of $\tilde\Sigma$, where $\tilde\Sigma$ is any estimator of $\Sigma$. There is a…
$= 1 + \theta\cos^2(\angle(u, v))\,.$
Therefore,
$$v^\top\Sigma v - \tilde v^\top\Sigma\tilde v = \theta\bigl[1 - \cos^2(\angle(\tilde v, v))\bigr] = \theta\sin^2(\angle(\tilde v, v))\,.$$
Next, observe that
$$\begin{aligned}
v^\top\Sigma v - \tilde v^\top\Sigma\tilde v &= v^\top\tilde\Sigma v - \tilde v^\top\tilde\Sigma\tilde v + v^\top\bigl(\Sigma - \tilde\Sigma\bigr)v - \tilde v^\top\bigl(\Sigma - \tilde\Sigma\bigr)\tilde v \\
&\le v^\top\bigl(\Sigma - \tilde\Sigma\bigr)v - \tilde v^\top\bigl(\Sigma - \tilde\Sigma\bigr)\tilde v \\
&= \bigl\langle \tilde\Sigma - \Sigma,\ \tilde v\tilde v^\top - vv^\top \bigr\rangle \\
&\le \|\tilde\Sigma - \Sigma\|_{\mathrm{op}}\,\|\tilde v\tilde v^\top - vv^\top\|_1 \\
&\le \sqrt{2}\,\|\tilde\Sigma - \Sigma\|_{\mathrm{op}}\,\|\tilde v\tilde v^\top - vv^\top\|_F\,,
\end{aligned}$$
It follows that
$$\theta\sin^2(\angle(\tilde v, v)) \le 2\,\|\tilde\Sigma - \Sigma\|_{\mathrm{op}}\,\sin(\angle(\tilde v, v))\,,$$
so that
$$\sin(\angle(\tilde v, v)) \le \frac{2}{\theta}\,\|\tilde\Sigma - \Sigma\|_{\mathrm{op}}\,.$$
To conclude the proof, it remains to check that
$$\min_{\varepsilon\in\{\pm 1\}} |\varepsilon\tilde v - v|_2^2 = 2 - 2|\tilde v^\top v| \le 2 - 2(\tilde v^\top v)^2 = 2\sin^2(\angle(\tilde v, v))\,.$$
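The bound $\sin(\angle(\tilde v, v)) \le (2/\theta)\|\tilde\Sigma - \Sigma\|_{\mathrm{op}}$ obtained above is deterministic, so it can be checked directly in a spiked-model simulation (model parameters and seed are ours):

```python
import numpy as np

# Largest eigenvector of a symmetric matrix (np.linalg.eigh returns
# eigenvalues in ascending order, so the last column is the top one).
def top_eigvec(S):
    w, V = np.linalg.eigh(S)
    return V[:, -1]
```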
Combined with Theorem 4.6, we immediately get t... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
Sparse PCA
In the example of Figure 4.1, it may be desirable to interpret the meaning of
the two directions denoted by PC1 and PC2. We know that they are linear
combinations of the original 500,000 gene expression levels. A natural question
to ask is whether only a subset of these genes could suffice to obtain similar
... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
covariance matrix satisfies, when $|v|_0 = k$,
$$\min_{\varepsilon\in\{\pm 1\}} |\varepsilon\hat v - v|_2 \lesssim \frac{\sqrt{1+\theta}}{\theta}\Bigl(\sqrt{\frac{k\log(ed/k) + \log(1/\delta)}{n}} \vee \frac{k\log(ed/k) + \log(1/\delta)}{n}\Bigr)\,,$$
with probability $1-\delta$.

Proof. We begin by obtaining an intermediate result of the Davis-Kahan $\sin\theta$ theore…
$$v^\top\Sigma v - \hat v^\top\Sigma\hat v \le \|\hat\Sigma(S) - \Sigma(S)\|_{\mathrm{op}}\,\|\hat v(S)\hat v(S)^\top - v(S)v(S)^\top\|_1\,.$$
Following the same steps as in the proof of Theorem 4.8, we now get that
$$\min_{\varepsilon\in\{\pm 1\}} |\varepsilon\hat v - v|_2^2 \le 2\sin^2\bigl(\angle(\hat v, v)\bigr) \le \frac{8}{\theta^2}\sup_{S \,:\, |S| = 2k} \|\hat\Sigma(S) - \Sigma(S)\|_{\mathrm{op}}^2\,.$$
To conclude the proof, it remains to control $\sup_{S \,:\, |S| = 2k} \|\hat\Sigma(S) - \Sigma(S)\|_{\mathrm{op}}$. To that end, observe that … taking the deviation level
$$\ge C\Bigl(\sqrt{\frac{k\log(ed/k) + \log(1/\delta)}{n}} \vee \frac{k\log(ed/k) + \log(1/\delta)}{n}\Bigr)$$
for large enough $C$ ensures that the desired bound holds with probability at least $1-\delta$.
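For small $d$, the $k$-sparse largest eigenvector can be computed by brute force over supports, mirroring the estimator analyzed above (a sketch; exhaustive search over supports is exponential in $k$, which is exactly the computational issue of sparse PCA):

```python
import numpy as np
from itertools import combinations

# Brute-force sparse PCA: maximize u^T S u over unit vectors u supported
# on k coordinates, scanning all supports and taking the top eigenvector
# of each k x k submatrix S(T).
def sparse_pca(S, k):
    d = S.shape[0]
    best_val, best_u = -np.inf, None
    for supp in combinations(range(d), k):
        idx = list(supp)
        w, V = np.linalg.eigh(S[np.ix_(idx, idx)])
        if w[-1] > best_val:
            best_val = w[-1]
            u = np.zeros(d)
            u[idx] = V[:, -1]
            best_u = u
    return best_u
```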
4.5 PROBLEM SET
Problem 4.1. Using the results of Chapter 2, show that the following holds
for the multivariate regression model (4.1).
1. There exist... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
$\lesssim \dfrac{\dots\,(d \vee T)}{n}$ with probability $.99$.
3. Comment on the above results in light of the results obtained in Section 4.2.
Problem 4.3. Consider the multivariate regression model (4.1) and define $\hat\Theta$ to be any solution to the minimization problem
$$\min_{\Theta \in \mathbb{R}^{d\times T}} \Bigl\{\frac{1}{n}\|Y - X\Theta\|_F^2 + \tau\|X\Theta\|_1\Bigr\}\,.$$
1. Show tha... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
or a negative answer. Indeed, a positive answer to these questions
simply consists in finding a better proof for the estimator we have studied
(question 1.) or simply finding a better estimator, together with a proof that
it performs better (question 2.). A negative answer is much more arduous.
For example, in question 2... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
…

Recall that GSM is a special case of the linear regression model when the design matrix satisfies the ORT condition. In this case, we have proved several performance guarantees (upper bounds) for various choices of $\hat\theta$ that can be expressed either in the form
$$\mathbb{E}\bigl[|\hat\theta_n - \theta^*|_2^2\bigr] \le C\varphi(\Theta)$$
or the form
$$\inf_{\hat\theta}\ \sup_{\theta\in\Theta} \mathbb{E}\bigl[|\hat\theta - \theta|_2^2\bigr] \ge c\,\varphi(\Theta)\,, \qquad (5.4)$$
where the infimum is taken over all estimators (i.e., measurable functions of $Y$). Moreover, $\varphi(\Theta)$ is called the minimax rate of estimation over $\Theta$.

Note that minimax rates of convergence $\varphi$ are defined up to multiplicative constants. We may then choose this constant such that the minimax rate…
$$\inf_{\hat\theta}\ \sup_{\theta\in\Theta} \mathbb{P}_\theta\bigl(|\hat\theta - \theta|_2^2 > \varphi(\Theta)\bigr) \ge C'\,, \qquad (5.6)$$
where the infimum is taken over all estimators (i.e., measurable functions of $Y$). Moreover, $\varphi(\Theta)$ is called the minimax rate of estimation over $\Theta$.

5.2 REDUCTION TO FINITE HYPOTHESIS TESTING

Minimax lower bounds rely on information theory and follow from a simple principle: if the n…
$\hat\theta$) that is associated to it by
$$\psi(\hat\theta) = \mathop{\mathrm{argmin}}_{1 \le j \le M} |\hat\theta - \theta_j|_2\,,$$
with ties broken arbitrarily.

Next observe that if, for some $j = 1, \dots, M$, $\psi(\hat\theta) \ne j$, then there exists $k \ne j$ such that $|\hat\theta - \theta_k|_2 \le |\hat\theta - \theta_j|_2$. Together with the reverse triangle inequality it yields
$$|\hat\theta - \theta_j|_2 \ge |\theta_j - \theta_k|_2 - |\hat\theta - \theta_k|_2 \ge \dots$$
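The test $\psi$ used in this reduction is simply nearest-hypothesis selection; a sketch:

```python
import numpy as np

# Minimum-distance test: map an estimator theta_hat to the index of the
# closest hypothesis theta_j in Euclidean norm (argmin breaks ties).
def min_distance_test(theta_hat, hypotheses):
    dists = [np.linalg.norm(theta_hat - t) for t in hypotheses]
    return int(np.argmin(dists))
```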
$$\inf_{\psi}\ \max_{1 \le j \le M} \mathbb{P}_{\theta_j}\bigl[\psi \ne j\bigr] \ge C'\,.$$
The above quantity is called the minimax probability of error. In the next sections, we show how it can be bounded from below using arguments from information theory. For the purpose of illustration, we begin with the simple case where $M = 2$ in the next section.
(Neyman-Pearson). Let $\mathbb{P}_0$ and $\mathbb{P}_1$ be two probability measures. Then for any test $\psi$, it holds
$$\mathbb{P}_0(\psi = 1) + \mathbb{P}_1(\psi = 0) \ge \int \min(p_0, p_1)\,.$$
Moreover, equality holds for the Likelihood Ratio test $\psi^\star = \mathbb{1}(p_1 \ge p_0)$.

Proof. Observe first that
$$\mathbb{P}_0(\psi^\star = 1) + \mathbb{P}_1(\psi^\star = 0) = \int_{\psi^\star = 1} p_0 + \int_{\psi^\star = 0} p_1 = \int_{p_1 \ge p_0} p_0 + \int_{p_1 < p_0} p_1 = \int \min(p_0, p_1)\,.$$
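On a finite sample space the lemma can be verified directly: the likelihood-ratio test attains $\sum_x \min(p_0(x), p_1(x))$ (the toy distributions are ours):

```python
import numpy as np

# Likelihood ratio test on a 3-point space: psi*(x) = 1{p1(x) >= p0(x)}.
p0 = np.array([0.5, 0.3, 0.2])
p1 = np.array([0.2, 0.2, 0.6])
psi_star = (p1 >= p0).astype(int)
# sum of the two error probabilities of psi*
error_sum = p0[psi_star == 1].sum() + p1[psi_star == 0].sum()
```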
The lower bound in the Neyman-Pearson lemma is related to a well-known quantity: the total variation distance.

Definition-Proposition 5.4. The total variation distance between two probability measures $\mathbb{P}_0$ and $\mathbb{P}_1$ on a measurable space $(\mathcal{X}, \mathcal{A})$ is defined by
$$\mathrm{TV}(\mathbb{P}_0, \mathbb{P}_1) = \sup_{R \in \mathcal{A}} \bigl|\mathbb{P}_0(R) - \mathbb{P}_1(R)\bigr| = \sup_{R \in \mathcal{A}} \Bigl|\int_R p_0 - \int_R p_1\Bigr| = \frac{1}{2}\int |p_0 - p_1| = \dots$$
5.3. Lower bounds based on two hypotheses

In view of the Neyman-Pearson lemma, it is clear that if we want to prove large lower bounds, we need to find probability distributions that are close in total variation. Yet, this conflicts with constraint (5.7), and a tradeoff needs to be achieved. To that end, in the Gaussian s…
$\mathbb{Q}$, and let $X \sim \mathbb{P}$.

1. Observe that by Jensen's inequality,
$$\mathrm{KL}(\mathbb{P}, \mathbb{Q}) = \mathbb{E}\Bigl[-\log\Bigl(\frac{d\mathbb{Q}}{d\mathbb{P}}(X)\Bigr)\Bigr] \ge -\log \mathbb{E}\Bigl[\frac{d\mathbb{Q}}{d\mathbb{P}}(X)\Bigr] = -\log(1) = 0\,.$$

2. Note that if $X = (X_1, \dots, X_n)$,
$$\mathrm{KL}(\mathbb{P}, \mathbb{Q}) = \mathbb{E}\Bigl[\log \frac{d\mathbb{P}}{d\mathbb{Q}}(X)\Bigr] = \int \log\Bigl(\prod_{i=1}^n \dots$$
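Property 2 (the KL divergence of product measures adds up across coordinates) is easy to check on discrete examples:

```python
import numpy as np

# KL divergence between two discrete distributions with full support.
def kl(p, q):
    return float(np.sum(p * np.log(p / q)))
```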
…). Let $\mathbb{P}$ and $\mathbb{Q}$ be two probability measures such that $\mathbb{P} \ll \mathbb{Q}$. Then
$$\mathrm{TV}(\mathbb{P}, \mathbb{Q}) \le \sqrt{\mathrm{KL}(\mathbb{P}, \mathbb{Q})}\,.$$

Proof. Note that
$$\begin{aligned}
\mathrm{KL}(\mathbb{P}, \mathbb{Q}) &= \int_{pq>0} p \log\Bigl(\frac{p}{q}\Bigr) \\
&= -2\int_{pq>0} p \log\Bigl(\sqrt{\frac{q}{p}}\Bigr) \\
&= -2\int_{pq>0} p \log\Bigl(\Bigl[\sqrt{\frac{q}{p}} - 1\Bigr] + 1\Bigr) \\
&\ge -2\int_{pq>0} p \Bigl[\sqrt{\frac{q}{p}} - 1\Bigr] \\
&= 2 - 2\int \sqrt{pq}\ \dots
\end{aligned}$$
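A quick numerical check of the bound $\mathrm{TV} \le \sqrt{\mathrm{KL}}$ on random discrete distributions:

```python
import numpy as np

# Total variation and KL for discrete distributions with full support.
def tv(p, q):
    return 0.5 * float(np.abs(p - q).sum())

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))
```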
$|\theta_0 - \theta_1|_2^2 = 8\alpha\sigma^2/n$ for some $\alpha \in (0, 1/2)$. Then
$$\inf_{\hat\theta}\ \sup_{\theta\in\Theta} \mathbb{P}_\theta\Bigl(|\hat\theta - \theta|_2^2 \ge \frac{2\alpha\sigma^2}{n}\Bigr) \ge \frac{1}{2} - \alpha\,.$$

Proof. Write for simplicity $\mathbb{P}_j = \mathbb{P}_{\theta_j}$, $j = 0, 1$. Recall that it follows from the reduction to hypothesis testing that
$$\inf_{\hat\theta}\ \sup_{\theta\in\Theta} \mathbb{P}_\theta\Bigl(|\hat\theta - \theta|_2^2 \ge \frac{2\alpha\sigma^2}{n}\Bigr) \ge \dots$$
of the above discussion, we need a set of hypotheses that
spans a linear space of dimension proportional to d. In principle, we should
need at least order d hypotheses but we will actually need much more.
5.4 LOWER BOUNDS BASED ON MANY HYPOTHESES
The reduction to hypothesis testing from Section 5.2 allows us to use mor... | https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf |
$(x) \ge -\log 2$. It yields
$$\sum_{j=1}^M \mathbb{P}(Z = j \mid X) \log\bigl[\mathbb{P}(Z = j \mid X)\bigr] \ge -\log 2 - \mathbb{P}\bigl(Z \ne \psi(X) \mid X\bigr)\log(M - 1)\,.$$
Next, observe that since $X \sim \bar P_Z$, the random variable $\mathbb{P}(Z = j \mid X)$ satisfies
$$\mathbb{P}(Z = j \mid X) = \frac{1}{M}\frac{dP_j}{d\bar P_Z}(X) = \frac{dP_j(X)}{\sum_{k=1}^M dP_k(X)}\,. \qquad (5.8)$$
Since
$$\frac{1}{M}\sum_{j=1}^M P_j\bigl(\psi(X) \ne j\bigr) \le \max_{1 \le j \le M} P_j\bigl(\psi(X) \ne j\bigr)\,,$$
this implies the desired result.

Fano's inequality leads to the following useful theorem.

Theorem 5.11. Assume that $\Theta$ contains $M \ge 5$ hypotheses $\theta_1, \dots, \theta_M$ such that for some constant $0 < \alpha < 1/4$, it holds

(i) $|\theta_j - \theta_k|_2^2 \ge 4\varphi$

(ii) $|\theta_j - \theta_k|_2^2 \le \dfrac{2\alpha\sigma^2}{n}\log(M)$

The…