rLlZpnT02ZU
that I get different samples every time should somewhat vanish. And so what I want is to have a small bias, hopefully a zero bias. If this thing is 0, then we see that the estimator is unbiased.
So this is definitely a property that we are going to be looking for in an estimator: we will try to find estimators that are unbiased. But we'll see that it's actually maybe not enough, so unbiasedness should not be something you lose sleep over.
Something that's slightly better is the risk, really the quadratic risk: if I have an estimator, theta hat n, I'm going to look at the expectation of (theta hat n minus theta) squared.
And what we showed last time is that by adding and removing the expectation of theta hat inside the square, this thing can be decomposed as the square of the bias plus the variance, which is just the expectation of (theta hat minus its expectation) squared. That came from the fact that when I added and removed the expectation of theta hat in there, the cross terms cancel. All right. So that was the bias squared, and this is the variance.
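Written out as a sketch (using theta hat n for the estimator, as above), the add-and-subtract step looks like this:

```latex
\mathbb{E}\big[(\hat\theta_n - \theta)^2\big]
  = \mathbb{E}\big[(\hat\theta_n - \mathbb{E}[\hat\theta_n]
      + \mathbb{E}[\hat\theta_n] - \theta)^2\big]
  = \underbrace{\mathbb{E}\big[(\hat\theta_n - \mathbb{E}[\hat\theta_n])^2\big]}_{\operatorname{Var}(\hat\theta_n)}
  + 2\,\big(\mathbb{E}[\hat\theta_n] - \theta\big)
      \underbrace{\mathbb{E}\big[\hat\theta_n - \mathbb{E}[\hat\theta_n]\big]}_{=\,0}
  + \underbrace{\big(\mathbb{E}[\hat\theta_n] - \theta\big)^2}_{\mathrm{bias}^2}
```

The middle term is the cross term: its second factor is 0 by linearity of expectation, which is exactly why only the variance and the squared bias survive.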
And so, for example, if the quadratic risk goes to 0, then that means that theta hat converges to theta in the L2 sense.
And here we know that if we want this to go to 0, since it's the sum of two nonnegative terms, we need both the bias and the variance to go to 0, so we need to control both of those things. And so there is usually an inherent trade-off between getting a small bias and getting a small variance. If you reduce one too much, then the other one is going to increase, or the opposite. That happens a lot, but not so much, actually, in this class.
So let's just look at a couple of examples. So if I take X1 through Xn iid Bernoulli, and I'm going to write the parameter as theta so that we keep the same notation. Then what is the theta hat that we proposed many times? It's just Xn bar, the average of the Xi's.
So what is the bias of this guy? Well, to know the bias, I just have to subtract theta from the expectation. What is the expectation of Xn bar? Well, by linearity of the expectation, it's just the average of the expectations. But since all my Xi's are Bernoulli with the same theta, each of these is actually equal to theta. So this thing is actually theta, which means that this estimator is unbiased.
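As a quick sanity check (this simulation is not from the lecture; the values of theta, n, and the number of replications are arbitrary choices for the demo), we can average the sample mean over many simulated Bernoulli samples and watch the bias come out near 0:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
theta, n, reps = 0.3, 100, 20_000  # arbitrary demo values

# Average theta_hat = Xn_bar over many independent samples of size n.
total = 0.0
for _ in range(reps):
    # A Bernoulli(theta) draw is 1 with probability theta, else 0.
    sample_mean = sum(random.random() < theta for _ in range(n)) / n
    total += sample_mean

# The empirical bias E[Xn_bar] - theta should be close to 0.
bias = total / reps - theta
```

With 20,000 replications the Monte Carlo error is on the order of a few times 10^-4, so the empirical bias sits well inside ±0.01, consistent with unbiasedness.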
Now, what is the variance of this guy? So if you forgot the properties of the variance for sums of independent random variables, now it's time to wake up.
So we have the variance of something that looks like 1 over n times the sum from i equals 1 to n of Xi. So it's of the form: variance of a constant times a random variable. So the first thing I'm going to do is pull out the constant. But we know that the variance lives on the square scale, so when I pull a constant out of the variance, it comes out with a square. The variance of a times X is a squared times the variance of X, so this is equal to 1 over n squared times the variance of the sum.
So now we have the variance of the sum, and we would like somehow to say that this is the sum of the variances. In general, we are not allowed to say that, but here we are, because my Xi's are actually independent. So this is actually equal to 1 over n squared times the sum from i equals 1 to n of the variance of each of the Xi's. And that's by independence, so this is basic probability.
And now, what is the variance of the Xi's? Again, they all have the same distribution, so the variance of Xi is the same as the variance of X1. And so each of those guys has variance what? What is the variance of a Bernoulli? We've said it once: it's theta times 1 minus theta. And so now I'm going to have the sum of n copies of a constant, so I get n times the constant divided by n squared, and one of the n's is going to cancel. And so the whole thing here is actually equal to theta times 1 minus theta, divided by n.
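The variance formula theta(1 - theta)/n can be checked by simulation as well (again a sketch with arbitrary theta and n, not something from the lecture):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
theta, n, reps = 0.3, 50, 20_000  # arbitrary demo values

# Draw many independent sample means Xn_bar, then compare their
# empirical variance with the formula theta * (1 - theta) / n.
means = [
    sum(random.random() < theta for _ in range(n)) / n
    for _ in range(reps)
]
m = sum(means) / reps
empirical_var = sum((x - m) ** 2 for x in means) / reps
theoretical_var = theta * (1 - theta) / n  # = 0.0042 for these values
```

For these demo values the empirical variance lands within a few times 10^-5 of 0.0042, matching the calculation above.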
So if I'm interested in the quadratic risk, and again, I should just say risk, because this is the only risk we're going