So in this chapter, we're going to talk about maximum likelihood estimation. Who has already seen maximum likelihood estimation? OK. And who knows what a convex function is? OK. So we'll do a little bit of reminders on those things.

When we do maximum likelihood estimation, the likelihood is a function, so we need to maximize a function. That's basically what we need to do. And if I give you a function, you need to know how to maximize this function. Sometimes you have closed-form solutions: you can take the derivative, set it equal to 0, and solve. But sometimes you actually need to resort to algorithms to do that. And there's an entire industry doing that. We'll briefly touch upon it, but this is definitely not the focus of this class.
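As a minimal sketch of both routes the lecture mentions (not part of the lecture itself): for iid Bernoulli(p) data, setting the derivative of the log-likelihood to zero gives the closed-form solution p-hat = sample mean, while a brute-force grid search stands in for the iterative numerical algorithms used when no closed form exists. The data and grid size here are illustrative.

```python
import math
import random

def log_likelihood(p, xs):
    """Log-likelihood of iid Bernoulli(p) observations xs (each 0 or 1)."""
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

def mle_closed_form(xs):
    """Closed-form route: solving d/dp log-likelihood = 0 gives the sample mean."""
    return sum(xs) / len(xs)

def mle_grid_search(xs, grid_size=999):
    """Algorithmic route: brute-force maximization over a grid of candidate p's,
    a stand-in for the numerical optimizers the lecture alludes to."""
    grid = [i / (grid_size + 1) for i in range(1, grid_size + 1)]
    return max(grid, key=lambda p: log_likelihood(p, xs))

random.seed(0)
xs = [1 if random.random() < 0.3 else 0 for _ in range(500)]
print(mle_closed_form(xs), mle_grid_search(xs))  # both should land near 0.3
```

Because the Bernoulli log-likelihood is strictly concave in p, both routes agree up to the grid resolution.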
OK. So before diving directly into the definition of the likelihood and the definition of the maximum likelihood estimator, what I'm going to try to do is give you some insight into what we're actually doing when we do maximum likelihood estimation.
So remember, we have a model on a sample space E and some candidate distributions P theta. And really, your goal is to estimate a true theta star, the one that generated some data, X1 to Xn, in an iid fashion. But this theta star is really a proxy for us to know that we actually understand the distribution itself. The goal of knowing theta star is so that you can actually know what P theta star is. Otherwise-- well, sometimes we said it has some meaning itself, but really you want to know what the distribution is. And so your goal is to actually come up with a distribution-- hopefully one that comes from the family P theta-- that's close to P theta star.
So in a way, what does it mean to have two distributions that are close? It means that when you compute probabilities under one distribution, you should get pretty much the same probability under the other distribution. So what we can do is say, well, now I have two candidate distributions. Theta hat leads to a candidate distribution P theta hat, and the true theta star leads to the true distribution P theta star, according to which my data was drawn. P theta hat is my candidate. As a statistician, I'm supposed to come up with a good candidate, and P theta star is the truth. And what I want is that if you actually give me the distribution, then when I'm computing probabilities under one of them, I know what the probabilities under the other one are.
And so really what I want is that if I compute a probability under theta hat of some interval [a, b], it should be pretty close to the probability under theta star of [a, b]. And more generally, if I take the union of two intervals, I want this to be true. If I take just half-lines, I want this to be true-- from 0 to infinity, for example, things like this. I want this to be true for all of them at once. And so what I do is write A for a probability event. And I want P theta hat of A to be close to P theta star of A for any event A in the sample space. Does that sound like a reasonable goal for a statistician?
So in particular, if I want those to be close, I want the absolute value of their difference to be close to 0. And if I want this to hold for all possible A's-- I have all possible events-- I'm going to actually maximize over these events. I'm going to look at the worst possible event on which theta hat can depart from theta star. And so rather than defining it specifically for theta hat and theta star, I'm just going to say, well, if you give me two probability measures, P theta and P theta prime, I want to know how close they are.
Well, if I want to measure how close they are by how much they can differ when I measure the probability of some event, I'm just looking at the absolute value of the difference of the probabilities, and I'm maximizing over the worst possible event that might actually make them differ. Agreed? That's a pretty strong notion.
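On a small finite sample space, this worst-event definition can be computed literally, by enumerating every event A and taking the largest gap |P(A) - Q(A)|. A minimal sketch (the distributions p and q below are illustrative, not from the lecture):

```python
from itertools import chain, combinations

def events(sample_space):
    """All subsets (events) of a finite sample space."""
    s = list(sample_space)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def tv_distance(p, q):
    """Total variation: max over events A of |P(A) - Q(A)|,
    for p, q given as dicts mapping outcomes to point masses."""
    space = set(p) | set(q)
    return max(abs(sum(p.get(x, 0) for x in A) - sum(q.get(x, 0) for x in A))
               for A in events(space))

p = {0: 0.5, 1: 0.3, 2: 0.2}
q = {0: 0.4, 1: 0.4, 2: 0.2}
print(tv_distance(p, q))  # 0.1 up to float rounding, attained e.g. at A = {0}
```

On a finite space this maximum equals half the sum of |p(x) - q(x)| over all outcomes, which is how total variation is computed in practice without enumerating the exponentially many events.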
So if the total variation between theta and theta prime is small, it means that for all possible A's that you give me, P theta of A is going to be close to P