the same properties and the same feeling and the same motivations as the total variation distance. But for this guy, we will be able to build an estimate for it, because it's actually going to be of the form expectation of something. And we're going to be able to replace the expectation by an average and then minimize this average.
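The expectation-to-average idea can be sketched in a few lines of code (a minimal illustration, not from the lecture; the unit-variance Gaussians and the sample size are my own choices):

```python
import math
import random

def kl_estimate(sample, log_p, log_q):
    # Replace the expectation E_P[log(p(X)/q(X))] by an empirical average
    # over a sample drawn from P.
    return sum(log_p(x) - log_q(x) for x in sample) / len(sample)

def log_gauss(mean):
    # Log-density of N(mean, 1).
    return lambda x: -0.5 * (x - mean) ** 2 - 0.5 * math.log(2 * math.pi)

random.seed(0)
theta, theta_prime = 0.0, 1.0
sample = [random.gauss(theta, 1.0) for _ in range(100_000)]
est = kl_estimate(sample, log_gauss(theta), log_gauss(theta_prime))
print(est)  # close to the true value (theta - theta_prime)**2 / 2 = 0.5
```

In a real estimation problem we would of course not know `log_p`, the log-density that generated the data; the point is only that an expectation under P can be approximated by an average over a sample from P.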
So this surrogate for total variation distance is actually called the Kullback-Leibler divergence. And why we call it a divergence is because it's actually not a distance. It's not going to be symmetric, to start with. So this Kullback-Leibler, or KL, divergence-- I will just refer to it as KL-- is actually just more convenient. But it has some roots coming from information theory, which I will not delve into. But if any of you is actually a Course 6 student, I'm sure you've seen that in some-- I don't know-- course that has any content on information theory.

All right.
So the KL divergence between two probability measures, P theta and P theta prime-- and here, as I said, it's not going to be symmetric, so it's very important for you to specify which order you take it in: between P theta and P theta prime is different from between P theta prime and P theta. And so we denote it by KL.
And so remember, before we had either the sum or the integral of 1/2 of the distance-- the absolute value of the difference between the PMFs, and 1/2 of the absolute values of the differences between the probability density functions. And then we replace this absolute value of the difference divided by 2 by this weird function. This function is P theta, log of P theta divided by P theta prime. That's the function. That's a weird function. OK.
So this was what we had. That's the TV. And the KL, if I use the same notation, f and g, is the integral of f of x, log of f of x over g of x, dx. It's a bit different. And I go from discrete to continuous using an integral. Everybody can read this. Everybody's fine with this. Is there any uncertainty about the actual definition here?
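Written out, the two quantities just compared are (standard notation; f and g stand for the densities of P theta and P theta prime, with sums in place of integrals in the discrete case):

```latex
\mathrm{TV}(\mathbf{P}_\theta, \mathbf{P}_{\theta'})
  = \frac{1}{2}\int \bigl| f(x) - g(x) \bigr|\,dx,
\qquad
\mathrm{KL}(\mathbf{P}_\theta, \mathbf{P}_{\theta'})
  = \int f(x)\,\log\frac{f(x)}{g(x)}\,dx .
```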
So here I go straight to the definition, which is just plugging the functions into some integral and computing. So I don't bother with maxima or anything. I mean, there is something like that, but it's certainly not as natural as the total variation. Yes?

AUDIENCE: The total variation, [INAUDIBLE].
PHILIPPE RIGOLLET: Yes, just because it's hard to build anything from total variation, because I don't know it. So it's very difficult. But if you can actually-- even computing it between two Gaussians, just try it for yourself. And please stop doing it after at most six minutes, because you won't be able to do it. And so it's just very hard to manipulate: this integral of absolute values of differences between probability density functions, at least for the probability density functions we're used to manipulating, is actually a nightmare. And so people prefer KL, because for the Gaussian, this is going to be theta minus theta prime squared. And then we're going to be happy.
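For the record, here is that Gaussian computation (a standard one-line derivation; the "theta minus theta prime squared" of the lecture is this quantity up to the factor 1 over 2 sigma squared, which is 1/2 for unit variance):

```latex
\mathrm{KL}\bigl(\mathcal{N}(\theta,\sigma^2),\,\mathcal{N}(\theta',\sigma^2)\bigr)
  = \mathbb{E}_{X \sim \mathcal{N}(\theta,\sigma^2)}
    \left[ \frac{(X-\theta')^2 - (X-\theta)^2}{2\sigma^2} \right]
  = \frac{(\theta-\theta')^2}{2\sigma^2}.
```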
And so those things are much easier to manipulate. But it's really-- the total variation is telling you how far, in the worst case, the two probabilities can be. This is really the intrinsic notion of closeness between probabilities. So that's really the one-- if we could, that's the one we would go after. Sometimes people will compute them numerically, so that they can say, oh, here's the total variation distance I have between those two things. And then you actually know what that means-- that they are close-- because if I tell you total variation is 0.01, like we did here, it has a very specific meaning. If I tell you the KL divergence is 0.01, it's not clear what it means.

OK.
So what are the properties? The KL divergence between P theta and P theta prime is different from the KL divergence between P theta prime and P theta, in general. Of course, in general, because if theta is equal to theta prime, then the two certainly coincide. So there are cases when it's not true.
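The asymmetry is easy to see on a concrete case, say two Bernoulli distributions (an illustration of my own, not from the lecture):

```python
import math

def kl_bernoulli(p, q):
    # KL(Ber(p), Ber(q)) = p*log(p/q) + (1-p)*log((1-p)/(1-q)),
    # the discrete (sum) form of the KL definition.
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

forward = kl_bernoulli(0.5, 0.1)   # KL in one order...
backward = kl_bernoulli(0.1, 0.5)  # ...and in the other
print(forward, backward)
```

The two orders give genuinely different numbers (roughly 0.51 versus 0.37 here), which is why the order in the notation matters.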