δE = ((E_x, E_y) · (R_p, R_q)) δξ, where along the characteristic strip dx/dξ = ∂R/∂p, dy/dξ = ∂R/∂q, dp/dξ = ∂E/∂x, dq/dξ = ∂E/∂y.

1.5 Generating an Initial Curve

While only having to measure a single initial curve for profiling is far better than having to measure an entire surface, it is still undesirable and is not a caveat we are satisfied with for this solution. Below, we ...
https://ocw.mit.edu/courses/6-801-machine-vision-fall-2020/617445f0e31836831b40d42cb2f11a10_MIT6_801F20_lec10.pdf
any special points on the object where we already know the orientation without a measurement? These points are along the edge, or occluding boundary, of our objects of interest. Here, we know the surface normal of each of these points. Could we use these edge points as “starting points” for our SfS solution? It tur...
Additionally, if we have stationary brightness points in image space, we encounter the “dual” of this problem. By definition, stationary points in image space imply that ∂E/∂x = ∂E/∂y = 0. This in turn implies that p and q cannot be stepped: dp/dξ = ∂E/∂x = 0 and dq/dξ = ∂E/∂y = ...
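The stepping rule above can be sketched numerically. The reflectance map R and brightness E below are hypothetical stand-ins (not the lecture's actual functions), chosen only to show that a brightness stationary point freezes p and q while x and y still advance:

```python
import numpy as np

# Hypothetical reflectance map R(p, q) and image brightness E(x, y),
# chosen only for illustration; the lecture's actual E and R differ.
def R(p, q):
    return 1.0 / np.sqrt(1.0 + p**2 + q**2)

def E(x, y):
    return 4.0 * x**2 + 16.0 * y**2

def grad(f, a, b, h=1e-6):
    # central finite differences for the two partial derivatives
    fa = (f(a + h, b) - f(a - h, b)) / (2 * h)
    fb = (f(a, b + h) - f(a, b - h)) / (2 * h)
    return fa, fb

def step_strip(x, y, p, q, dxi=1e-3):
    """One Euler step along a characteristic strip:
    dx/dxi = R_p, dy/dxi = R_q, dp/dxi = E_x, dq/dxi = E_y."""
    Rp, Rq = grad(R, p, q)
    Ex, Ey = grad(E, x, y)
    return x + Rp * dxi, y + Rq * dxi, p + Ex * dxi, q + Ey * dxi

# At a brightness stationary point (E_x = E_y = 0), p and q do not move:
x, y, p, q = step_strip(0.0, 0.0, 0.3, 0.4)
print(x, y, p, q)  # p, q unchanged; x, y still advance via R_p, R_q
```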
circle in the plane centered at the stationary point with radius  - this means all points in this plane will have the same surface orientation as the stationary point. Note that mathematically, a local 2D plane on a 3D surface is equivalent to a 2-manifold [1]. This is good in the sense that we know the surface ori...
shape? It turns out the answer is no, again because of stationary points. But if we look at the second derivatives of brightness: E_xx = ∂²E/∂x² = ∂/∂x (8x) = 8, E_yy = ∂²E/∂y² = ∂/∂y (32y) = 32, E_xy = ∂²E/∂x∂y = ∂/∂y (8x) = 0. These second derivatives, as we will discuss more in t...
Chapter 5
Coupled Fluids with Heat and Mass Transfer

5.1 November 26, 2003: Coupled Fluids, Heat and Mass Transfer!

Mechanics:
• Congrats to Jenny and David for winning the contest; prize: $5 Tosci's.
• PS8 on Stellar, due Fri 12/5.
• Evaluations next Wednesday 12/3.

Muddy from last time:
• Time smoothing: what...
https://ocw.mit.edu/courses/3-185-transport-phenomena-in-materials-engineering-fall-2003/6185b0ff143062ed97aea16a782c8603_chap5.pdf
δT = 3.6 √(αx/U∞), so δT/x = 3.6 √(α/(U∞ x)) = 3.6/√(Re_x Pr).

When is flow uniform? In a solid, or for a much larger thermal boundary layer than fluid, so α >> ν, Pr << 1.

Another way to look at it: large Prandtl number (> 0.5) means
δT = 0.72 Pr^(−1/2) δu
δT = 0.975 Pr^(−1/3) δu
Liquid metals (and about nothing else) hav...
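A quick numerical check that the two forms of the thermal boundary layer thickness agree; the property values below are illustrative, not from the notes:

```python
import math

# Illustrative air-like values (assumed, not from the notes)
U_inf = 1.0      # free-stream velocity, m/s
x     = 0.5      # distance along plate, m
alpha = 2.2e-5   # thermal diffusivity, m^2/s
nu    = 1.5e-5   # kinematic viscosity, m^2/s

# Thermal boundary layer thickness from the notes' formula
delta_T = 3.6 * math.sqrt(alpha * x / U_inf)

# Same result written with Re_x and Pr
Re_x = U_inf * x / nu
Pr   = nu / alpha
delta_T_dimless = 3.6 * x / math.sqrt(Re_x * Pr)

print(delta_T, delta_T_dimless)  # the two forms agree
```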
q|y=0 = k(Ts − T∞) √(U∞/(παx)) = h_x (Ts − T∞), so

h_x = k √(U∞/(παx)) = √(k ρ c_p U∞/(πx))

Likewise for mass transfer, ρc_p is effectively one, so:

h_D,x = √(D U∞/(πx))

Next time: average, dimensional analysis, δT < δu case.

5.2 December 1, 2003: Nusselt Number, Heat and Mass Transfer Coefficients

Mechanics:
• Evals Wednesday. ...
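The two expressions for h_x are algebraically identical once α = k/(ρc_p) is substituted; a sketch with assumed water-like property values:

```python
import math

# Illustrative property values (assumed, not from the notes)
k, rho, cp = 0.6, 1000.0, 4180.0   # W/m-K, kg/m^3, J/kg-K (water-like)
U_inf, x   = 0.2, 0.3              # m/s, m
alpha = k / (rho * cp)             # thermal diffusivity

# Two equivalent forms of the local heat transfer coefficient:
h1 = k * math.sqrt(U_inf / (math.pi * alpha * x))
h2 = math.sqrt(k * rho * cp * U_inf / (math.pi * x))
print(h1, h2)  # identical up to rounding
```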
is the ratio of square roots of diffusivity, which is inverse sqrt(Pr).
• Case 2: smaller thermal (/concentration) boundary layer (Pr > 5 or so): consider the T/C BL to have linear velocity; smaller velocity means thicker T/C BL.
• Moving on, back to case 1: calculated q|y=0 from the erf solution; δC/δu or δT/δu = 0.975 Pr...
= (L/D_solid)/(1/h) = (resistance to conduction in solid)/(resistance due to BL in liquid). Uses L = solid thickness, D_solid.

Heat transfer note: you get one extra dimensionless number, due to heating by viscous friction. Here, Nusselt #, L = length of plate (in flow direction); the conduction and BL are in the same medium, u...
heat transfer coefficient, kinetic energy transfer coefficient. Types: local, global/average. Laminar flow variation: both ∼ 1/√x; average ∼ 1/√x; integral ∼ √x. Laminar: f_L = 2 f_x|x=L, h_L = 2 h_x|x=L. Dimensionless: f = f(Re), Nu = f(Re, Pr). Different correlations for different geometries.
• Other Nusselt numbers from ...
β = (1/V)(dV/dT) = 3α, where α = (1/L)(dL/dT). Ideal gas: ρ = P/(RT), so

β = −(1/ρ)(dρ/dT) = −(RT/P)(−P/(RT²)) = 1/T.

Also β_C = −(1/ρ)(dρ/dC).

Simplest case: vertical wall, Ts at the wall, T∞ with density ρ∞ away from it, x vertical and y horizontal for consistency with the forced convection BL. Assume:
1. Uniform kinematic viscosity ν = ν∞.
2. S...
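The β = 1/T result for an ideal gas can be checked with a finite difference; P, R, and T below are illustrative:

```python
# Numerically check that for an ideal gas rho = P/(R T), the volumetric
# thermal expansion coefficient beta = -(1/rho) d(rho)/dT equals 1/T.
# Values are illustrative (air at ~300 K, specific gas constant assumed).
P = 101325.0   # Pa
R = 287.0      # J/kg-K
T = 300.0      # K

def rho(T):
    return P / (R * T)

h = 1e-3
drho_dT = (rho(T + h) - rho(T - h)) / (2 * h)   # central difference
beta = -drho_dT / rho(T)
print(beta, 1.0 / T)  # both ~ 1/300 per K
```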
, this gives driving force in the positive-x direction, which is up, like it's supposed to. Okay, that's all for today, more next time.

5.4 December 5: Wrapup Natural Convection

Mechanics:
• Test 2: before, max = 90, mean 75.38, std. dev. 12.23; after, max = 100, mean 95.76, std. dev. 6.37.

Muddy from last time:
• D...
(T − T∞)/(Ts − T∞), dimensionless u_x = Re_x/(2√(Gr_x)), on P&G p. 232, corresponding to dimensional graphs in W3R p. 313. Explain: the velocity BL is always at least as thick as the thermal BL, but thermal can be thinner for large Pr. Plotted vs. (y/x)(Gr_x/4)^(1/4). Forced convection: δ ∝ √x. Natural convection: δ ∝ x^(1/4). Note: in P&G p. 232 plots, Pr...
494 Pr^(2/3))^(2/5)

Again, velocity^0.8 in a way, sorta like turbulent forced convection boundary layers.

5.5 December 8: Wrapup Natural Convection, Streamfunction and Vorticity

Mechanics:
• Final exam Monday 12/15 in 4-149. Discuss operation, incl. closed/open sections, new diff eq, essay.

Muddy from last time:
• W...
(u_x x)/ν = (1/2)√(Gr_x) = (1/2)√(g β ΔT x³/ν²) ⇒ u_x,max = f(Pr) √(g β ΔT x).

δu/x = f(Pr)/(Gr_x/4)^(1/4) ⇒ δu = f(Pr) x/(Gr_x/4)^(1/4) = √2 f(Pr) (x⁴ν²/(g β ΔT x³))^(1/4) = √2 f(Pr) (x ν²/(g β ΔT))^(1/4).

These two results are consistent with: u_x,max ∝ thickness², and forced-convection Δu_x/Δy goes as 1/√(Re_x).

Other g...
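The δu manipulation (pulling the factor of √2 out of (Gr_x/4)^(1/4)) can be verified numerically; all physical values below are assumed for illustration:

```python
import math

# Check the algebra  x / (Gr_x/4)**0.25 == sqrt(2) * (x*nu^2/(g*beta*dT))**0.25
# with illustrative (assumed) values.
g, beta, dT = 9.81, 1.0 / 300.0, 20.0
nu, x = 1.5e-5, 0.1

Gr_x = g * beta * dT * x**3 / nu**2
lhs = x / (Gr_x / 4) ** 0.25
rhs = math.sqrt(2) * (x * nu**2 / (g * beta * dT)) ** 0.25
print(lhs, rhs)  # the two forms agree
```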
, combining, annihilating. Other application: crystal rotation in semisolid rheology.

Stream function, for incompressible flow where ∇ · u = 0:

u_x = ∂Ψ/∂y, u_y = −∂Ψ/∂x

Collapses the velocity components into one parameter. Look at Ψ = Ax, Ψ = By, Ψ = Ax + By, Ψ² = x² + y². Cool. Gradient is normal to flow ...
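Any smooth Ψ satisfies continuity by construction; a finite-difference sketch over the listed linear examples (A, B arbitrary) plus one extra smooth Ψ added for illustration:

```python
# With u_x = dPsi/dy and u_y = -dPsi/dx, continuity du_x/dx + du_y/dy = 0
# holds automatically for any smooth streamfunction Psi.
A, B = 2.0, -3.0   # arbitrary constants

def divergence(psi, x, y, h=1e-4):
    # velocities from the streamfunction, via central differences
    ux = lambda x, y: (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    uy = lambda x, y: -(psi(x + h, y) - psi(x - h, y)) / (2 * h)
    dux_dx = (ux(x + h, y) - ux(x - h, y)) / (2 * h)
    duy_dy = (uy(x, y + h) - uy(x, y - h)) / (2 * h)
    return dux_dx + duy_dy

for psi in [lambda x, y: A * x,
            lambda x, y: B * y,
            lambda x, y: A * x + B * y,
            lambda x, y: x**2 + y**2]:   # extra smooth example
    print(divergence(psi, 0.7, -0.4))    # ~0 for every streamfunction
```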
okes; I like to think mine is more straightforward, but you can read W3R if needed. Also called “inviscid flow”. Motivation: tub with hole, pretty close to zero friction factor, velocity is infinity? No. Something other than viscosity limits it. Navier-Stokes, throw out the viscous terms:

ρ Du/Dt = −∇p + ρg

Change...
in corner, (1/2)ρV² + P1 at base over spout, (1/2)ρV² − ρgh2 at tube end. Three equations in three unknowns. Solves to P1 = ρgh, V² = 2g(h + h2), P2 = −ρgh2. Can also fill in the table...

Conditions:
• No shear or other losses (not nearly fully-developed)
• No interaction with internal solids, etc.
• No heat in ...
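For the simplest case (free surface to open spout, no losses), Bernoulli reduces to the familiar V = √(2gh); the numbers below are illustrative:

```python
import math

# Bernoulli between the free surface (height h above the spout, gauge P = 0,
# V ~ 0) and the jet at the spout (gauge P = 0): 0.5*rho*V**2 = rho*g*h.
# Illustrative numbers (assumed), giving the classic V = sqrt(2 g h).
g, h = 9.81, 0.45   # m/s^2, m
V = math.sqrt(2 * g * h)
print(round(V, 3))  # exit velocity in m/s
```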
Shameless plug...) Thank Albert for a terrific job as a TA! Last muddy questions • What is the relevance of the boundary layer thickness to the Bernoulli equation? The boundary layer is a region where there is quite a bit of shear, and sometimes turbulence. If it is thin relative to the size of the problem (e.g. re...
C_A/C_A,in = exp(−h_D A t/V)

Two extremes in continuous reactor behavior with flow rate Q: plug flow and perfect mixing.

Plug flow is like a mini-batch with t_R = V/Q; draw a plug in a pipe, derive:

C_A,out/C_A,in = exp(−kV/Q)

With a surface, the V's cancel, left with C_A,out/C_A,in = e...
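A sketch comparing the two extremes for a first-order reaction; the perfect-mixing (CSTR) formula C_out/C_in = 1/(1 + kV/Q) is the standard result, not shown in this excerpt, and k, V, Q are illustrative:

```python
import math

# Plug flow (from the notes): C_out/C_in = exp(-k V / Q).
# Perfect mixing (standard CSTR result, assumed here): C_out/C_in = 1/(1 + k V/Q).
k, V, Q = 0.5, 2.0, 1.0          # illustrative rate constant, volume, flow rate
Da = k * V / Q                   # dimensionless group k * t_R

plug = math.exp(-Da)
mixed = 1.0 / (1.0 + Da)
print(plug, mixed)  # plug flow always leaves less reactant: exp(-Da) < 1/(1+Da)
```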
elmaking: batch, but folk want to make continuous.
18.S997: High Dimensional Statistics Lecture Notes (This version: July 14, 2015) Philippe Rigollet Spring 2015 Preface These lecture notes were written for the course 18.S997: High Dimensional Statistics at MIT. They build on a set of notes that was prepared at Princeton University in 2013-14. Over the past decade, st...
https://ocw.mit.edu/courses/18-s997-high-dimensional-statistics-spring-2015/619e4ae252f1b26cbe0f7a29d5932978_MIT18_S997S15_CourseNotes.pdf
that Donoho and Johnstone have made the first contributions on this topic in the early nineties.

Acknowledgements. These notes were improved thanks to the careful reading and comments of Mark Cerenzia, Youssef El Moujahid, Georgina Hall, Jan-Christian Hütter, Gautam Kamath, Kevin Lin, Ali Makhdoumi, Yar...
N(µ, σ²): univariate Gaussian distribution with mean µ ∈ IR and variance σ² > 0
N_d(µ, Σ): d-variate Gaussian distribution with mean µ ∈ IR^d and d × d covariance matrix Σ
subG(σ²): univariate sub-Gaussian distributions with variance proxy σ² > 0
subG_d(σ²): d-variate sub-Gaussian distributions with variance proxy σ² > 0
subE(σ²): sub-exponential distributions with variance proxy σ² > 0
Ber(p): Bernoulli distribution with parameter p
Bin(n, p): binomial distribution with parameters n and p
Lap(λ): double exponential (Laplace) distribution with parameter λ
PX: marginal distribution of X
sub-...
2.5 Problem set
3 Misspecified Linear Models
3.1 Oracle inequalities
3.2 Nonparametric regression
3.3 Problem Set
...
Bibliography

Introduction

This course is mainly about learning a regression function from a collection of observations. In this chapter, after defining this task formally, we give an overview of the course and the questions around regression. We adopt the statistical learning point of view wh...
Formally, the regression function of Y onto X is defined by:

f(x) = IE[Y | X = x],  x ∈ X.

As we will see, it arises naturally in the context of prediction.

Best prediction and prediction risk. Suppose for a moment that you know the conditional distribution of Y given X. Given the realization of X = x, your goa...
of g can be decomposed as

IE[Y − g(X)]² = IE[Y − f(X) + f(X) − g(X)]² = IE[Y − f(X)]² + IE[f(X) − g(X)]² + 2 IE[(Y − f(X))(f(X) − g(X))]

The cross-product term satisfies

IE[(Y − f(X))(f(X) − g(X))] = IE[ IE[(Y − f(X))(f(X) − g(X)) | X] ] = IE[ (f(X) − g(X)) IE[Y − f(X) | X] ] ...
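Because the cross term vanishes, the decomposition can be checked exactly on a tiny discrete toy model (f, g, and the distributions below are illustrative choices, with f the conditional mean by construction):

```python
import itertools

# Verify E[Y - g(X)]^2 = E[Y - f(X)]^2 + E[f(X) - g(X)]^2 exactly.
# X uniform on {0, 1}; Y = f(X) + eps with eps uniform on {-1, +1},
# so f(x) = E[Y | X = x] by construction.
f = lambda x: 2.0 * x + 1.0          # true regression function (illustrative)
g = lambda x: 1.5 * x                # some other predictor (illustrative)

outcomes = list(itertools.product([0, 1], [-1.0, 1.0]))  # (x, eps), each prob 1/4

def expect(h):
    return sum(h(x, e) for x, e in outcomes) / len(outcomes)

lhs = expect(lambda x, e: (f(x) + e - g(x)) ** 2)
rhs = expect(lambda x, e: e ** 2) + expect(lambda x, e: (f(x) - g(x)) ** 2)
print(lhs, rhs)  # equal: the cross term vanishes
```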
we observe a sample D_n = {(X₁, Y₁), ..., (X_n, Y_n)} that consists of independent copies of (X, Y). The goal of regression function estimation is to use this data to construct an estimator fˆ_n : X → Y that has small L2 risk R(fˆ_n). Let P_X denote the marginal distribution of X and, for any h, define ... Note ...
equivalent to study the estimation error ‖fˆ_n − f‖²₂. Both ‖fˆ_n − f‖²₂ and R(fˆ_n) are random quantities and we need deterministic summaries to quantify their size. It is customary to use one of the two following options. Let {φ_n}_n be a sequence of positive numbers that tends to zero as n goes to infinity.

2. Note that if fˆ_n is random...
arbitrary and can be replaced by another positive constant. Such bounds control the tail of the distribution of ‖fˆ_n − f‖²₂: they show how large the quantiles of the random variable ‖fˆ_n − f‖²₂ can be. Such bounds are favored in learning theory, and are sometimes called PAC-bounds (for Probably Approximately Co...
= IE[Y − f(X)]² + ‖fˆ_n − f‖²₂. This equality allowed us to consider only ‖fˆ_n − f‖²₂ as a measure of error. While this decomposition may not hold for other risk measures, it may be desirable to explore other distances (or pseudo-distances). This leads to two distinct ways to measure error. Either by bou...
It generalizes both the L2 distance and the sup-norm error by taking, for any p ≥ 1, the pseudo-distance

d_p(fˆ_n, f) = ( ∫_X |fˆ_n − f|^p dP_X )^(1/p)

The choice of p is somewhat arbitrary and mostly employed as a mathematical exercise. Note that these three examples can be split into two families: global (sup-norm ...
n and θ as long as θ is identifiable.

MODELS AND METHODS

Empirical risk minimization

In our considerations on measuring the performance of an estimator fˆ_n, we have carefully avoided the question of how to construct fˆ_n. This is of course one of the most important tasks of statistics. As we will see...
from the empirical risk of g, defined by

R_n(g) = (1/n) Σ_{i=1}^n (Y_i − g(X_i))²

We can now proceed to minimizing this risk. However, we have to be careful. Indeed, R_n(g) ≥ 0 for all g, so any function g such that Y_i = g(X_i) for all i = 1, ..., n is a minimizer of the empirical risk. Yet, it may not b...
Figure 1. It may not be the best idea to have fˆ_n(X_i) = Y_i for all i = 1, ..., n.

small). In both cases, this extra knowledge can be incorporated into ERM using either a constraint:

min_{g ∈ G} R_n(g)

or a penalty:

min_g { R_n(g) + pen(g) }

or both:

min_{g ∈ G} { R_n(g) + pen(g) }

These schem...
Linear models

When X = IR^d, an all-time favorite constraint is the class of linear functions, that is, functions of the form g(x) = x⊤θ, parametrized by θ ∈ IR^d. Under this constraint, the estimator obtained by ERM is usually called the least squares estimator and is defined by fˆ_n(x) = x⊤θˆ, where ...
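A minimal sketch of the least squares estimator as ERM over linear functions, on synthetic data (design, θ*, and noise level are all illustrative):

```python
import numpy as np

# ERM over linear functions g(x) = x^T theta is least squares:
# theta_hat minimizes (1/n) * sum_i (Y_i - X_i^T theta)^2.
# Toy data (illustrative): Y = X theta* + small Gaussian noise.
rng = np.random.default_rng(0)
n, d = 200, 3
theta_star = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
Y = X @ theta_star + 0.1 * rng.normal(size=n)

theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(theta_hat)  # close to theta_star
```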
about misspecified models, i.e., we try to fit a linear model to data that may not come from a linear model. Since linear models can have good approximation properties, especially when the dimension d is large, our hope is that the linear model is never too far from the truth.

In the case of a misspecified model, t...
IE‖fˆ_n − f‖²₂ ≤ ‖f̄ − f‖²₂ + φ_n

The above inequality is called an oracle inequality. Indeed, it says that if φ_n is small enough, then the estimator fˆ_n mimics the oracle f̄. It is called “oracle” because it cannot be constructed without the knowledge of the unknown f. It is clear...
estimators. With the development of aggregation [Nem00, Tsy03, Rig06] and high-dimensional statistics [CT07, BRT09, RT11], they have become important finite-sample results that characterize the interplay between the important parameters of the problem. In some favorable instances, that is, when the X_i's enjoy specific...
informal discussion here.

As we will see in Chapter 2, if the regression function is linear, f(x) = x⊤θ*, θ* ∈ IR^d, and under some assumptions on the marginal distribution of X, then the least squares estimator fˆ_n(x) = x⊤θˆ_n satisfies

IE‖fˆ_n − f‖²₂ ≤ C d/n ,  (1)

where C > 0 is a constant and in ...
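A Monte Carlo sketch of the d/n rate in (1) under an illustrative Gaussian design (with IE[XX⊤] = I, the squared L2 distance reduces to |θˆ − θ*|₂²; all constants are illustrative):

```python
import numpy as np

# For fixed d, the least squares excess risk should shrink roughly like
# sigma^2 * d / n as n grows. Toy Gaussian design, seeded for repeatability.
rng = np.random.default_rng(1)
d, sigma = 5, 1.0
theta_star = rng.normal(size=d)

def excess_risk(n, reps=200):
    errs = []
    for _ in range(reps):
        X = rng.normal(size=(n, d))
        Y = X @ theta_star + sigma * rng.normal(size=n)
        theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
        # with E[X X^T] = I, the L2 error is |theta_hat - theta*|^2
        errs.append(np.sum((theta_hat - theta_star) ** 2))
    return np.mean(errs)

r50, r500 = excess_risk(50), excess_risk(500)
print(r50, r500)  # roughly sigma^2 * d / n: ~0.1 vs ~0.01
```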
|θ|₀ = Σ_{j=1}^d 1I(θ_j ≠ 0)

Sparsity is just one of many ways to limit the size of the set of potential θ vectors to consider. One could consider vectors θ that have the following structure, for example (see Figure 2):

• Monotonic: θ1 ≥ θ2 ≥ ··· ≥ θd
• Smooth: |θi − θj| ≤ ...
• Piecewise constant: ...
coefficients ... mathematical ways to capture this phenomenon, including ℓq-“balls” for q ≤ 1. For q > 0, the ℓq-ball of IR^d is defined as

B_q(R) = { θ ∈ IR^d : |θ|_q^q = Σ_{j=1}^d |θ_j|^q ≤ R }

where vectors in the unit ℓq-ball (R = 1) can be approximated by sparse vectors. Note that the set...
Therefore, the price to pay for not knowing which subspace to look at is only a logarithmic factor.

3 Strictly speaking, |θ|_q is a norm and the ℓq ball is a ball only for q ≥ 1.

Figure 2 (panels of θj vs. j: monotone, smooth, piecewise constant, smooth in a different basis). Exampl...
is also unknown as it depends on the unknown P_X, as do the coefficients {α_k}_{k∈Z} of the expansion in the infinite sequence {ϕ_k}_{k∈Z}. This is absolutely correct, but we will make the convenient assumption that P_X is (essentially) known whenever this is needed.

Even if infinity is countable, we still have to estimate an infinite number ...
for |k| > k0, but rather that the sequence {α_k} decays. Indeed, for any cut-off k0, define the oracle

f̄_{k0} = Σ_{|k| ≤ k0} α_k ϕ_k

Note that it depends on the unknown α_k. Define the estimator

fˆ_n = Σ_{|k| ≤ k0} αˆ_k ϕ_k

where αˆ_k are some data-driven coefficients (obtained by least squares, for example). Then b...
see that we can strike a compromise called the bias-variance tradeoff.

Σ_{|k| ≤ k0} (αˆ_k − α_k)²

The main difference here with oracle inequalities is that we make assumptions on the regression function (here in terms of smoothness) in order to ...

4 Here we illustrate a convenient notational convention that we will be using through...
see that even if the smoothness index γ is unknown, we can select k0 in a data-driven way that achieves almost the same performance as if γ were known. This phenomenon is called adaptation (to γ). It is important to notice the main difference between the approach taken in nonparametric regression and the one in spar...
signal and noise, and that satisfy M = S + N . Here N is a random matrix such that IE[N ] = 0, the all-zero matrix. The goal is to estimate the signal matrix S from the observation of M . The structure of S can also be chosen in various ways. We will consider the case where S is sparse in the sense that it has ma...
task is much easier and is dominated by the former in terms of statistical price. Another important example of matrix estimation is high-dimensional covariance estimation, where the goal is to estimate the covariance matrix of a random vector X ∈ IR^d, or its leading eigenvectors, based on n observations. Such a pro...
precisely, we can prove that for any estimator f̃_n, there exists a function f of the form f(x) = x⊤θ* such that

IE‖f̃_n − f‖²₂ > c d/n

for some positive constant c. Here we used a different notation for the constant to emphasize the fact that lower bounds guarantee optimality only up to a constant factor...
1.1 GAUSSIAN TAILS AND MGF

Recall that a random variable X ∈ IR has Gaussian distribution iff it has a density p with respect to the Lebesgue measure on IR given by

p(x) = (1/√(2πσ²)) exp( −(x − µ)²/(2σ²) ),  x ∈ IR,

where µ = IE(X) ∈ IR and σ² = var(X) > 0 are the mean and variance of X, and we write X ∼ N(µ, σ²). Note th...
(σ/(t√(2π))) exp(−t²/(2σ²)).

Figure 1.1. Probabilities of falling within 1, 2, and 3 standard deviations close to the mean in a Gaussian distribution. Source: http://www.openintro.org/

and

IP(|X − µ| > t) ≤ √(2/π) (σ/t) exp(−t²/(2σ²)).

Proof. Note that it is sufficient to prove the theorem ...
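The two-sided Mills-ratio-style tail bound IP(|X − µ| > t) ≤ √(2/π)(σ/t)exp(−t²/(2σ²)) can be compared against the exact Gaussian tail erfc(t/(σ√2)); the values of t below are arbitrary:

```python
import math

# Exact two-sided Gaussian tail vs. the sqrt(2/pi)*(sigma/t)*exp(...) bound,
# for a few illustrative t (sigma = 1).
sigma = 1.0
for t in [0.5, 1.0, 2.0, 3.0]:
    exact = math.erfc(t / (sigma * math.sqrt(2)))   # P(|X - mu| > t)
    bound = math.sqrt(2 / math.pi) * (sigma / t) * math.exp(-t**2 / (2 * sigma**2))
    print(t, exact, bound)
    assert exact <= bound   # bound dominates the exact tail
```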
M : s ↦ M(s) = IE[exp(sZ)].

1.2 Sub-Gaussian random variables and Chernoff bounds

Indeed, in the case of a standard Gaussian random variable, we have

M(s) = IE[exp(sZ)] = (1/√(2π)) ∫ e^{sz} e^{−z²/2} dz = (1/√(2π)) ∫ e^{−(z−s)²/2 + s²/2} dz = e^{s²/2}.

It follows that if X ∼ N(µ, σ²), then I...
√( 2 log(2/δ) / n ) ,  (1.1)

This is almost the confidence interval that you used in introductory statistics. The only difference is that we used an approximation for the Gaussian tail whereas statistical tables or software use a much more accurate computation. Figure 1.2 shows the ratio of the width of the confidence inte...
Figure 1.2. Width of confidence intervals from exact computation in R (red dashed) and (1.1) (solid black).

A random vector X ∈ IR^d is said to be sub-Gaussian with variance proxy σ² if IE[X] = 0 and u⊤X is sub-Gaussian with variance proxy σ² for any unit vector u; in that case we write X ∼ subG_d(σ²). A random ... T is said to be sub-Gaussia...
generating function into a tail bound. Using Markov's inequality, we have for any s > 0,

IP(X > t) ≤ IP(e^{sX} > e^{st}) ≤ IE[e^{sX}] / e^{st}.

Next we use the fact that X is sub-Gaussian to get

IP(X > t) ≤ e^{σ²s²/2 − st}.

The above inequality hol...
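Optimizing the bound e^{σ²s²/2 − st} over s > 0 gives s = t/σ² and the familiar e^{−t²/(2σ²)}; a grid-search sketch with illustrative σ and t:

```python
import math

# The Chernoff bound exp(sigma^2 s^2/2 - s t) is minimized at s = t/sigma^2,
# giving exp(-t^2 / (2 sigma^2)). Verify by grid search (illustrative values).
sigma, t = 1.5, 2.0

def bound(s):
    return math.exp(sigma**2 * s**2 / 2 - s * t)

grid_min = min(bound(k / 10000.0) for k in range(1, 40000))
closed_form = math.exp(-t**2 / (2 * sigma**2))
print(grid_min, closed_form)  # grid minimum matches the closed form
```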
We now show that the absolute moments of X ∼ subG(σ²) can be bounded by those of Z ∼ N(0, σ²) up to multiplicative constants.

Lemma 1.4. Let X be a random variable such that

IP[|X| > t] ≤ 2 exp( −t²/(2σ²) ),

then for any positive integer k ≥ 1,

IE[|X|^k] ≤ (2σ²)^{k/2} k Γ(k/2).

In particular, ... and IE[|X|] ≤ ...
(IE[|X|^k])^{1/k} ≤ σ e^{1/e} √k.

Moreover, for k = 1, we have √2 Γ(1/2) = √(2π).

Using moments, we can prove the following reciprocal to Lemma 1.3.

Lemma 1.5. If (1.3) holds, then for any s > 0, it holds

IE[exp(sX)] ≤ e^{4σ²s²}.

As a result, we will sometimes write X ∼ subG(σ²) when it satisfies (1.3).

Proof. We use the T...
From the above Lemma, we see that sub-Gaussian random variables can be equivalently defined from their tail bounds and their moment generating functions, up to constants.

Sums of independent sub-Gaussian random variables

Recall that if X₁, ..., X_n are i.i.d. N(0, ...
Corollary 1.7. Let X₁, ..., X_n be n independent random variables such that X_i ∼ subG(σ²). Then for any a ∈ IR^n, we have

IP( Σ_{i=1}^n a_i X_i > t ) ≤ exp( −t² / (2σ²|a|₂²) )

and

IP( Σ_{i=1}^n a_i X_i < −t ) ≤ exp( −t² / (2σ²|a|₂²) ).

Of special interest is the case where a_i = 1/n for all i. Then, we ge...
e^{s²(b−a)²/8}. In particular, X ∼ subG( (b−a)²/4 ).

Proof. Define ψ(s) = log IE[e^{sX}]; we can readily compute

ψ′(s) = IE[X e^{sX}] / IE[e^{sX}],   ψ′′(s) = IE[X² e^{sX}] / IE[e^{sX}] − ( IE[X e^{sX}] / IE[e^{sX}] )².

Thus ψ′′(s) ...
such that, almost surely, X_i ∈ [a_i, b_i] for all i. Let X̄ = (1/n) Σ_{i=1}^n X_i; then for any t > 0,

IP( X̄ − IE(X̄) > t ) ≤ exp( −2n²t² / Σ_{i=1}^n (b_i − a_i)² )

and

IP( X̄ − IE(X̄) < −t ) ≤ exp( −2n²t² / Σ_{i=1}^n (b_i − a_i)² ).

Note that Hoeffding's lemma holds for any bounded random variables. For ...
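Hoeffding's inequality can be checked by simulation for Ber(1/2) variables (a_i = 0, b_i = 1), where the bound reads exp(−2nt²); n, t, and the seed below are illustrative:

```python
import math
import random

# Empirical check of P(Xbar - 1/2 > t) <= exp(-2 n t^2) for X_i ~ Ber(0.5).
random.seed(0)
n, t, reps = 100, 0.1, 20000

exceed = 0
for _ in range(reps):
    xbar = sum(random.random() < 0.5 for _ in range(n)) / n
    exceed += (xbar - 0.5 > t)

freq = exceed / reps
hoeffding = math.exp(-2 * n * t**2)   # = exp(-2) ~ 0.135
print(freq, hoeffding)  # empirical frequency sits well below the bound
```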
EXPONENTIAL RANDOM VARIABLES

What can we say when a centered random variable is not sub-Gaussian? A typical example is the double exponential (or Laplace) distribution with parameter 1, denoted by Lap(1). Let X ∼ Lap(1) and observe that

IP(|X| > t) = e^{−t},  t ≥ 0.

In particular, the tails of this distribut...
Lemma 1.10. Let X be a centered random variable such that IP(|X| > t) ≤ 2e^{−2t/λ} for some λ > 0. Then, for any positive integer k ≥ 1,

IE[|X|^k] ≤ λ^k k!.

Moreover,

(IE[|X|^k])^{1/k} ≤ 2λk,

and the moment generating function of X satisfies

IE[e^{sX}] ≤ e^{2s²λ²},  ∀ |s| ≤ 1/(2λ).

Pr...
IE[e^{sX}] ≤ 1 + Σ_{k=2}^∞ |s|^k IE[|X|^k] / k! ≤ 1 + Σ_{k=2}^∞ (|s|λ)^k = 1 + s²λ² Σ_{k=0}^∞ (|s|λ)^k ≤ 1 + 2s²λ² ≤ e^{2s²λ²},  for |s| ≤ 1/(2λ).

This leads to the following definition.

Definition 1.11. A random variable X is said to be sub-exponential with parameter λ (denoted X ∼ subE(λ)) i...
≤ 1 + Σ_{k=2}^∞ s^k 2^{k−1}( IE[X^{2k}] + (IE[X²])^k ) / k!    (Jensen)
≤ 1 + Σ_{k=2}^∞ s^k 4^k IE[X^{2k}] / (2(k!))    (Jensen again)
≤ 1 + Σ_{k=2}^∞ s^k 4^k 2(2σ²)^k k! / (2(k!))    (Lemma 1.4)
= 1 + Σ_{k=2}^∞ (8sσ²)^k = 1 + (8sσ²)² Σ_{k=0}^∞ (8sσ²)^k ≤ 1 + 128 s²σ⁴ ≤ e^{128 s²σ⁴},

for |s| ≤ 1/(16σ²). Sub-exponential random variables also give rise ...
exp( −(n/2)( t²/λ² ∧ t/λ ) ).

Proof. Without loss of generality, assume that λ = 1 (we can always replace X_i by X_i/λ and t by t/λ). Next, using a Chernoff bound, we get for any s > 0,

IP( X̄ > t ) ≤ Π_{i=1}^n IE[e^{sX_i}] e^{−snt}.

1.4 Maximal inequalities

Next, if |s| ...
for the average X̄. In many instances, we will be interested in controlling the maximum over the parameters of such linear combinations (this is because of empirical risk minimization). The purpose of this section is to present such results.

Maximum over a finite set

We begin with the simplest case possible: the maxi...
s IE[ max_{1≤i≤N} X_i ] ≤ log IE[ max_{1≤i≤N} e^{sX_i} ] ≤ log Σ_{i=1}^N IE[e^{sX_i}] ≤ log( N e^{σ²s²/2} ) = log N + σ²s²/2.

Taking s = √(2(log N)/σ²) yields the first inequality in expectation. The first inequality in probability is obtained by a simple union bound:

IP( max_{1≤i≤N} X_i > t ) ...
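A quick simulation of the expected maximum of N i.i.d. N(0, σ²) variables against the σ√(2 log N) bound (N, σ, and the seed are illustrative):

```python
import numpy as np

# Check E[max_i X_i] <= sigma * sqrt(2 log N) for i.i.d. N(0, sigma^2),
# via a seeded Monte Carlo estimate of the expectation.
rng = np.random.default_rng(0)
sigma, N, reps = 2.0, 1000, 2000

samples = rng.normal(scale=sigma, size=(reps, N))
emp_mean_max = samples.max(axis=1).mean()
cap = sigma * np.sqrt(2 * np.log(N))
print(emp_mean_max, cap)  # empirical mean-max sits below the bound
```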
< t) = [IP(X₁ < t)]^N → 0 as N → ∞.

On the opposite side of the picture, if all the X_i's are equal to the same random variable X, we have for any t > 0,

IP( max_{1≤i≤N} X_i < t ) = IP(X₁ < t) > 0,  ∀ N ≥ 1.

In the Gaussian case, lower bounds are also available. They illustrate the effect of the corre...
{θ⊤X, θ ∈ P}, where P ⊂ IR^d is a polytope with N vertices. While the family is infinite, the maximum can be reduced to a finite maximum using the following useful lemma.

Lemma 1.15. Consider a linear form x ↦ c⊤x, x, c ∈ IR^d. Then for any convex polytope P ⊂ IR^d,

max_{x∈P} c⊤x = max ...
the two quantities are equal. It immediately yields the following theorem.

Theorem 1.16. Let P be a polytope with N vertices v^(1), ..., v^(N) ∈ IR^d and let X ∈ IR^d be a random vector such that [v^(i)]⊤X, i = 1, ..., N, are sub-Gaussian random variables with variance proxy σ^2. Then

IE[max_{θ∈P} θ⊤X] ≤ σ√(2 l...
norm |u|_2 at most 1. Formally, it is defined by

B_2 = {x ∈ IR^d : Σ_{i=1}^d x_i^2 ≤ 1}.

Clearly, this ball is not a polytope, and yet we can control the maximum of random variables indexed by B_2. This is due to the fact that there exists a finite subset of B_2 such that the maximum over this finite set is ...
... ε ∈ (0, 1). Then the unit Euclidean ball B_2 admits an ε-net N such that |N| ≤ (3/ε)^d.

Proof. Consider the following iterative construction of the ε-net. Choose x_1 = 0. For any i ≥ 2, take x_i to be any x ∈ B_2 such that |x − x_j|_2 > ε for all j < i. If no such x exists, stop the procedure. Clearly, this will create an ε-net. We now control its size. ...
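The iterative construction can be sketched in code. This toy version (ours) builds a greedy ε-net of the unit disk in IR^2, drawing candidates from a fine grid for convenience, and compares its size to the (3/ε)^d volume-style bound in d = 2:

```python
import itertools
import math

# Greedy eps-net of the unit disk B2 in R^2, following the proof's procedure:
# keep a candidate only if it is farther than eps from every point already
# chosen.  Scanning a fine grid is an implementation convenience, not part of
# the proof.
def greedy_eps_net(eps: float, grid: int = 60):
    net = [(0.0, 0.0)]  # x1 = 0
    step = 2.0 / grid
    for i, j in itertools.product(range(grid + 1), repeat=2):
        x = (-1.0 + i * step, -1.0 + j * step)
        if x[0] ** 2 + x[1] ** 2 > 1.0:
            continue  # outside B2
        if all(math.dist(x, y) > eps for y in net):
            net.append(x)
    return net

eps = 0.5
net = greedy_eps_net(eps)
print(len(net), (3.0 / eps) ** 2)  # net size vs (3/eps)^d with d = 2
```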
Theorem 1.19. Let X ∈ IR^d be a sub-Gaussian random vector with variance proxy σ^2. Then

IE[max_{θ∈B_2} θ⊤X] = IE[max_{θ∈B_2} |θ⊤X|] ≤ 4σ√d.

Moreover, for any δ > 0, with probability 1 − δ, it holds

max_{θ∈B_2} θ⊤X = max_{θ∈B_2} |θ⊤X| ≤ 4σ√d + 2σ√(2 log(1/δ)).

Proof. Let N be a 1/2-net of B_2; it satisfies |N| ≤ 6^d. Next, observe that f...
IP(max_{θ∈B_2} θ⊤X > t) ≤ IP(2 max_{z∈N} z⊤X > t) ≤ |N| e^{−t^2/(8σ^2)} ≤ 6^d e^{−t^2/(8σ^2)}.

To conclude the proof, we find t such that

e^{−t^2/(8σ^2) + d log 6} ≤ δ  ⇔  t^2 ≥ 8 log(6)σ^2 d + 8σ^2 log(1/δ).

Therefore, it is sufficient to take

t = √(8 log 6) σ√d + 2σ√(2 log(1/δ)).

1.5 PROBLEM SET ...
... degrees of freedom) if it has the same distribution as Z_1^2 + ··· + Z_n^2, where Z_1, ..., Z_n are iid N(0, 1).

(a) Let Z ∼ N(0, 1). Show that the moment generating function of Y = Z^2 − 1 satisfies

φ(s) := IE[e^{sY}] = e^{−s}/√(1 − 2s) if s < 1/2, and φ(s) = ∞ otherwise.

(b) Show that for all 0 < s < 1/2, ...

(c) Conclude that φ(s) ≤ exp(...
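Part (a)'s closed form can be sanity-checked numerically. The comparison function exp(2s^2) and the range (0, 1/4] below are our choices for illustration, not taken from the problem statement:

```python
import math

# phi(s) = IE[exp(s(Z^2 - 1))] = exp(-s)/sqrt(1 - 2s) for s < 1/2, from part (a).
# As an illustrative check, verify phi(s) <= exp(2 s^2) on (0, 1/4].
def phi(s: float) -> float:
    assert s < 0.5, "MGF is infinite for s >= 1/2"
    return math.exp(-s) / math.sqrt(1.0 - 2.0 * s)

holds = all(
    phi(k / 400.0) <= math.exp(2.0 * (k / 400.0) ** 2) for k in range(1, 101)
)
print(holds)
```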
... be a random matrix whose entries are iid sub-Gaussian random variables with variance proxy σ^2.

(a) Show that the matrix A is sub-Gaussian. What is its variance proxy?

(b) Let ‖A‖ denote the operator norm of A, defined by

max_{x ≠ 0} |Ax|_2 / |x|_2.

Show that there exists a constant ...
... |X_i| ≤ 4eσ log n.

Problem 1.6. Let K be a compact subset of the unit sphere of IR^p that admits an ε-net N_ε with respect to the Euclidean distance of IR^p satisfying |N_ε| ≤ (C/ε)^d for all ε ∈ (0, 1), where C ≥ 1 and d ≤ p are positive constants. Let X ∼ subG_p(σ^2) be a centered random vector. Show that th...
... ε).

Problem 1.8. Let X_1, ..., X_n be n independent random variables such that IE[X_i] = µ and var(X_i) ≤ σ^2. Fix δ ∈ (0, 1) and assume, without loss of generality, that n can be factored into n = K · G, where G = 8 log(1/δ) is a positive integer.

For g = 1, ..., G, let X̄_g denote the average over the g...
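The estimator in Problem 1.8 is a median-of-means construction. A minimal sketch (block count and toy data are ours):

```python
import random
import statistics

# Median-of-means: split the n samples into G blocks, average within each
# block, and report the median of the block averages.
def median_of_means(samples, n_blocks):
    k = len(samples) // n_blocks  # block size; assumes n = k * n_blocks
    block_means = [sum(samples[g * k:(g + 1) * k]) / k for g in range(n_blocks)]
    return statistics.median(block_means)

random.seed(1)
# Mean-zero toy data with rare symmetric outliers of magnitude 50.
data = [
    random.gauss(0.0, 1.0)
    + (random.choice([-50.0, 50.0]) if random.random() < 0.01 else 0.0)
    for _ in range(8000)
]
est = median_of_means(data, n_blocks=40)
print(est)
```

The median of block means is far less sensitive to the outliers than the raw sample mean would be.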
...clude.

Chapter 2

Linear Regression Model

In this chapter, we consider the following regression model:

Y_i = f(X_i) + ε_i,  i = 1, ..., n,   (2.1)

where ε = (ε_1, ..., ε_n)⊤ is sub-Gaussian with variance proxy σ^2 and such that IE[ε] = 0. Our goal is to estimate the function f under a li...
tumor given certain inputs for a new (unseen) patient. A natural measure of performance here is the L2-risk employed in the introduction:

R̂(f̂_n) = IE[(Y_{n+1} − f̂_n(X_{n+1}))^2],

where PX denotes the marginal distribution of X_{n+1}. It measures how good the prediction of Y_{n+1} is on average over realizations of X_{n+1}. ...
of these values. In many instances, fixed designs can be recognized from their structured form. A typical example is the regular design on [0, 1], given by x_i = i/n, i = 1, ..., n. Interpolation between these points is possible under smoothness assumptions. Note that in fixed design, we observe µ* + ε, where µ* = (f(x_1)...
= (1/n)|X(θ̂ − θ*)|_2^2   (2.2)
= (θ̂ − θ*)⊤ (X⊤X/n) (θ̂ − θ*).   (2.3)

A natural example of fixed design regression is image denoising. Assume that µ*_i, i ∈ {1, ..., n}, is the grayscale value of pixel i of an image. We do not get to observe the image µ* but rather a noisy version of it, Y = µ* + ε. Given ... ∈ IR^n, our go...
interested in estimating Xθ* and not θ* itself, so by extension we also call µ̂^ls = Xθ̂^ls the least squares estimator. Observe that µ̂^ls is the projection of Y onto the column span of X. It is not hard to see that the least squares estimators of θ* and µ* = Xθ* are maximum likelihood estimators when ε ∼ N(0, σ^2 I_n).

Propos...
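The projection property can be verified on a tiny example in pure Python (toy design and response are ours): the residual of the least squares fit is orthogonal to every column of X.

```python
# theta solves the normal equations X^T X theta = X^T Y, so mu = X theta is
# the orthogonal projection of Y onto the column span of X and the residual
# Y - mu is orthogonal to every column of X.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]  # n = 4, d = 2
Y = [0.1, 1.2, 1.9, 3.2]

cols = [list(c) for c in zip(*X)]
XtX = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
XtY = [sum(u * y for u, y in zip(ci, Y)) for ci in cols]

# Solve the 2x2 normal equations by Cramer's rule.
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
theta = [
    (XtY[0] * XtX[1][1] - XtX[0][1] * XtY[1]) / det,
    (XtX[0][0] * XtY[1] - XtY[0] * XtX[1][0]) / det,
]
mu = [sum(t * x for t, x in zip(theta, row)) for row in X]
residual = [y - m for y, m in zip(Y, mu)]
orth = [sum(r * c for r, c in zip(residual, ci)) for ci in cols]
print(theta, orth)  # orth entries should be (numerically) zero
```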
... ε ∼ subG_n(σ^2). Then the least squares estimator θ̂^ls satisfies

IE[MSE(Xθ̂^ls)] = (1/n) IE|Xθ̂^ls − Xθ*|_2^2 ≲ σ^2 r/n,

where r = rank(X⊤X). Moreover, for any δ > 0, with probability 1 − δ, it holds

MSE(Xθ̂^ls) ≲ σ^2 (r + log(1/δ))/n.

Proof. Note that by definition

|Y − Xθ̂^ls|_2^2 ≤ |Y − Xθ*|_2^2 = |ε|_2^2.   (2.4)

Moreover, ...
A traditional technique is to “sup out” θ̂^ls. This is typically where maximal inequalities are needed. Here we have to be a bit careful.

Let Φ = [φ_1, ..., φ_r] ∈ IR^{n×r} be an orthonormal basis of the column span of X. In particular, there exists ν ∈ IR^r such that X(θ̂^ls − θ*) = Φν. It yields

ε⊤X(θ̂^ls − θ*) / |X(θ̂^ls − θ*)|_2 ...
... 1 − δ, it follows from the last step in the proof¹ of Theorem 1.19 that

sup_{u∈B_2} (ε̃⊤u)^2 ≤ 8 log(6)σ^2 r + 8σ^2 log(1/δ).

Remark 2.3. If d ≤ n and B := X⊤X/n has rank d, then we have

|θ̂^ls − θ*|_2^2 ≤ MSE(Xθ̂^ls)/λ_min(B),

and we can use Theorem 2.2 to bound |θ̂^ls − θ*|_2^2 directly.

¹We could use Theorem 1.19 directly here...
... ε⊤v over v ∈ X_K − X_K, where X_K = {Xθ : θ ∈ K} ⊂ IR^n. This is a measure of the size (width) of X_K. If ε ∼ N(0, I_d), the expected value of the above supremum is actually called the Gaussian width of X_K. Here, ε is not Gaussian but sub-Gaussian, and similar properties will hold.

ℓ1 constrained least squares

Assume here that ...
Theorem 2.4. Let K = B_1. Assume that θ* ∈ B_1 and that the columns of X are normalized in such a way that max_j |X_j|_2 ≤ √n. Then the least squares estimator θ̂^ls_{B_1} satisfies

IE[MSE(Xθ̂^ls_{B_1})] = (1/n) IE|Xθ̂^ls_{B_1} − Xθ*|_2^2 ≲ σ √(log d / n).

Moreover, for any δ > 0, with probability 1 − δ, it holds

MSE(Xθ̂^ls_{B_1}) ≲ σ √(log(d/δ)/n) ...
θ̂^ls_{B_1} benefits from the best of both rates (exercise!), so that

IE[MSE(Xθ̂^ls_{B_1})] ≲ min( r/n , √(log d / n) ).

This is called an elbow effect. The elbow takes place around r ≃ √n (up to logarithmic terms).

ℓ0 constrained least squares

We abusively call the ℓ0 norm of a vector θ ∈ IR^d ...
θ̂^ls_K when K = B_0(k). Note that computing θ̂^ls_{B_0(k)} essentially requires computing (d choose k) least squares estimators, which is an exponential number in k. In practice this will be hard (or even impossible), but it is interesting to understand the statistical properties of this estimator and to use them as a benchmark.
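The (d choose k)-fold enumeration can be made concrete on a toy instance (design and response are ours): fit ordinary least squares on every support of size k and keep the one with the smallest residual sum of squares.

```python
import itertools

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (A nonsingular)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def best_subset(X, Y, k):
    """Enumerate every support S of size k; return (rss, S, coefficients)."""
    d = len(X[0])
    best = None
    for S in itertools.combinations(range(d), k):
        cols = [[row[j] for row in X] for j in S]
        G = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
        g = [sum(a * y for a, y in zip(ci, Y)) for ci in cols]
        coef = solve(G, g)
        fit = [sum(c * col[i] for c, col in zip(coef, cols)) for i in range(len(Y))]
        rss = sum((y - f) ** 2 for y, f in zip(Y, fit))
        if best is None or rss < best[0]:
            best = (rss, S, coef)
    return best

# Y = 2 * column0 - column2 exactly, so the best size-2 support is (0, 2).
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]]
Y = [2.0, 0.0, -1.0, 1.0, 1.0, -1.0]
rss, support, coef = best_subset(X, Y, 2)
print(support, coef, rss)
```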
... so that θ̂^ls_K − θ* ∈ B_0(2k). For S ⊂ {1, ..., d}, denote by X_S the n × |S| submatrix of X obtained by keeping only the columns indexed by S. Denote by r_S ≤ |S| the rank of X_S, and let Φ_S ∈ IR^{n×r_S} be an orthonormal basis of the column span of X_S. For θ ∈ IR^d, define θ(S) ∈ IR^{|S|} to be the vector with coordinates indexed by S. If we denote by S = supp(θ̂^ls_K − θ*), then |S| ≤ 2k and

X(θ̂^ls_K − θ*) ...
... = sup_{u∈B_2^{r_S}} (ε̃⊤u)^2.

It follows from the proof of Theorem 1.19 that for any |S| ≤ 2k,

IP( sup_{u∈B_2^{r_S}} (ε̃⊤u)^2 > t ) ≤ 6^{|S|} e^{−t/(8σ^2)} ≤ 6^{2k} e^{−t/(8σ^2)}.

Together, the above three displays yield

IP( |Xθ̂^ls_K − Xθ*|_2^2 > 4t ) ≤ (d choose 2k) 6^{2k} e^{−t/(8σ^2)} ...
( n choose k+1 ) ≤ ( en/k )^k · n/(k+1) = ( e^k n^{k+1} / (k+1)^{k+1} ) (1 + 1/k)^k,

where we used the induction hypothesis in the first inequality. To conclude, it suffices to observe that

(1 + 1/k)^k ≤ e.

It immediately leads to the following corollary:

Corollary 2.8. Under ...
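The bound just proved, (n choose k) ≤ (en/k)^k, is easy to check numerically on small n and k:

```python
import math

# Verify (n choose k) <= (e n / k)^k over a small range of n and k.
ok = all(
    math.comb(n, k) <= (math.e * n / k) ** k
    for n in range(1, 40)
    for k in range(1, n + 1)
)
print(ok)
```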
= ∫_0^∞ IP( |Xθ̂^ls_K − Xθ*|_2^2 > nu ) du
≤ H + ∫_0^∞ IP( |Xθ̂^ls_K − Xθ*|_2^2 > n(u + H) ) du
≤ H + Σ_{j=1}^{2k} (d choose j) 6^{2k} ∫_0^∞ e^{−n(u+H)/(32σ^2)} du
= H + Σ_{j=1}^{2k} (d choose j) 6^{2k} e^{−nH/(32σ^2)} · (32σ^2/n).

Next, take H to be such that

Σ_{j=1}^{2k} (d choose j) 6^{2k} e^{−nH/(32σ^2)} = 1,

which gives H ≲ σ^2 k/n ...
... in this sequence model, and we will also discuss this case. Its links to nonparametric estimation will become clearer in Chapter 3. The goal here is to estimate the unknown vector θ*.

The sub-Gaussian Sequence Model

Note first that the model (2.7) is a special case of the linear model with fixed design (2.1) with n...
... θ̂) = (θ̂ − θ*)⊤ (X⊤X/n)(θ̂ − θ*) = |θ̂ − θ*|_2^2.

Furthermore, for any θ ∈ IR^d, the assumption ORT yields (using θ⊤X⊤Y = (Xθ)⊤Y and |Xθ|_2^2 = n|θ|_2^2)

|y − θ|_2^2 = |(1/n)X⊤Y − θ|_2^2
= |θ|_2^2 − (2/n) θ⊤X⊤Y + (1/n^2) Y⊤XX⊤Y
= |θ|_2^2 − (2/n) (Xθ)⊤Y + (1/n)|Y|_2^2 + Q
= (1/n)|Y − Xθ|_2^2 + Q,   (2.8)

where Q is a ...
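Display (2.8) can be sanity-checked on a toy orthogonal design (ours) with X⊤X = nI_d: the gap between |y − θ|_2^2 and (1/n)|Y − Xθ|_2^2 is a constant Q independent of θ.

```python
# A 4x2 design with orthogonal columns of squared norm n = 4, so X^T X = 4 I_2.
def sq(v):
    return sum(x * x for x in v)

X = [[1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [1.0, -1.0]]
Y = [3.0, 1.0, 0.0, 2.0]
n, d = len(X), 2
y = [sum(X[i][j] * Y[i] for i in range(n)) / n for j in range(d)]  # X^T Y / n

def gap(theta):
    """|y - theta|^2 - (1/n)|Y - X theta|^2; constant in theta under ORT."""
    Xt = [sum(X[i][j] * theta[j] for j in range(d)) for i in range(n)]
    return sq([a - b for a, b in zip(y, theta)]) - sq(
        [a - b for a, b in zip(Y, Xt)]
    ) / n

gaps = [gap(t) for t in ([0.0, 0.0], [1.0, -2.0], [0.5, 3.5])]
print(gaps)  # all equal to Q
```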