because the theta that you have over there is really-- so in the definition of the risk, the theta that you have here, if you're unbiased, is really the expectation of theta hat. So that's really just the variance. So the risk is really telling you how much fluctuation I have around my expectation if unbiased. But actually here, it's telling you how much fluctuation I have on average around theta. So if you understand the notion of variance as being--
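For reference, in standard notation the quadratic risk under discussion is

$$
R(\hat\theta) = \mathbb{E}_\theta\big[(\hat\theta - \theta)^2\big],
$$

and when $\hat\theta$ is unbiased, $\mathbb{E}_\theta[\hat\theta] = \theta$, so this is exactly $\mathbb{E}_\theta\big[(\hat\theta - \mathbb{E}_\theta[\hat\theta])^2\big] = \operatorname{Var}(\hat\theta)$.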
AUDIENCE: [INAUDIBLE]

PHILIPPE RIGOLLET: What?

AUDIENCE: Like variance on average.

PHILIPPE RIGOLLET: No.

AUDIENCE: No.

PHILIPPE RIGOLLET: It's just like variance.

AUDIENCE: Oh, OK.
PHILIPPE RIGOLLET: So when you-- I mean, if you claim you understand what variance is, it's telling you what is the expected squared fluctuation around the expectation of my random variable. It's just telling you on average how far I'm going to be. And you take the square because you want to cancel the signs. Otherwise, you're going to get 0.
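For reference, the variance described here is

$$
\operatorname{Var}(X) = \mathbb{E}\big[(X - \mathbb{E}[X])^2\big],
$$

and the square is what prevents cancellation: without it, $\mathbb{E}\big[X - \mathbb{E}[X]\big] = 0$ for any random variable $X$.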
AUDIENCE: Oh, OK.

PHILIPPE RIGOLLET: And here it's saying, well, I really don't care what the expectation of theta hat is. What I want to get to is theta, so I'm looking at the expectation of the squared fluctuations around theta itself. If I'm unbiased, it coincides with the variance. But if I'm biased, then I have to account for the fact that I'm really not computing the--

AUDIENCE: OK. OK. Thanks.

PHILIPPE RIGOLLET: OK? All right. Are there any questions?
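The accounting for bias that the exchange trails off on is the standard decomposition of the quadratic risk; for reference,

$$
\mathbb{E}_\theta\big[(\hat\theta - \theta)^2\big]
= \underbrace{\mathbb{E}_\theta\big[(\hat\theta - \mathbb{E}_\theta[\hat\theta])^2\big]}_{\text{variance}}
+ \underbrace{\big(\mathbb{E}_\theta[\hat\theta] - \theta\big)^2}_{\text{bias}^2},
$$

which coincides with the variance exactly when the bias is 0.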
So here, what I really want to illustrate is that the risk itself is a function of theta most of the time. And so for different thetas, some estimators are going to be better than others. But there's also the entire range of estimators, those that are really biased but whose bias can completely vanish. And so here, you see you have no bias, but the variance can be large. Or you have a bias, but the variance is 0. So you can actually have this trade-off, and you can find things that are in the entire range in general.
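A way to see this concretely (a hypothetical simulation sketch, not from the lecture): for a Bernoulli parameter theta, compare the unbiased sample mean, whose risk is theta(1 - theta)/n, with the constant estimator 0.5, which has zero variance but bias 0.5 - theta.

```python
import numpy as np

# Hypothetical sketch: risk R(theta) = E[(theta_hat - theta)^2] of two
# estimators of a Bernoulli parameter, estimated by Monte Carlo.
rng = np.random.default_rng(0)
n, reps = 20, 100_000

for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    x = rng.binomial(1, theta, size=(reps, n))
    xbar = x.mean(axis=1)                      # unbiased sample mean
    risk_mean = np.mean((xbar - theta) ** 2)   # approx. theta(1-theta)/n
    risk_const = (0.5 - theta) ** 2            # constant estimator: pure bias^2
    print(f"theta={theta:.1f}  risk(mean)={risk_mean:.4f}  risk(0.5)={risk_const:.4f}")
```

Near theta = 0.5 the biased constant estimator has the smaller risk, and far from 0.5 the sample mean wins, which is the sense in which neither estimator is better for every theta.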
So those trade-offs between bias and variance are usually much better illustrated if we're talking about multivariate parameters. If I actually look at a parameter which is the mean of some multivariate Gaussian, so an entire vector, then I can make the bias bigger by, for example, forcing all the coordinates of my estimator to be the same. So here, I'm going to get some bias, but the variance is actually going to be much better, because I get to average all the coordinates for this guy.
And so really, the bias/variance trade-off shows up when you have multiple parameters to estimate, so you have a vector of parameters, a multivariate parameter. The bias increases when you're trying to pool more information across the different components to actually get a lower variance. So the more you average, the lower the variance. That's exactly what we've illustrated.
As n increases, the variance decreases, like 1 over n, or theta (1 minus theta) over n. And so this is how it happens in general.
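For reference, the rates being quoted are the variance of the sample mean of n i.i.d. observations,

$$
\operatorname{Var}(\bar X_n) = \frac{\sigma^2}{n},
\qquad\text{which for } X_i \sim \operatorname{Ber}(\theta) \text{ is } \frac{\theta(1-\theta)}{n}.
$$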
In this class, it's mostly one-dimensional parameter estimation, so it's going to be a little harder to illustrate that. But if you do, for example, non-parametric estimation, that's all you do. There are bias/variance trade-offs all the time. And in between, when you have high-dimensional parametric estimation, that happens a lot as well.
OK. So I'm just going to go quickly through those two remaining slides, because we've actually seen them. But I just wanted you to have somewhere a formal definition of what a confidence interval is. And so we fixed a statistical model for n observations, X1 to Xn. The parameter theta here is one-dimensional. Theta is a subset of the real line, and that's why I talk about intervals. An interval is a subset of the line. If I had a subset of R2, for example, that would no longer be called an interval, but a region,
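For reference, the formal definition being set up here reads, in its standard form: a confidence interval of level $1 - \alpha$ for $\theta$ is a random interval $\mathcal{I}$ whose endpoints depend on $X_1, \dots, X_n$ but not on $\theta$, such that

$$
\mathbb{P}_\theta\big(\mathcal{I} \ni \theta\big) \ge 1 - \alpha
\quad \text{for all } \theta \in \Theta.
$$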