theta prime of A, because if-- let's say I just found a bound on the total variation distance, which is 0.01. All right. So that means that this is going to be larger than the max over A of P theta of A minus P theta prime of A, which means that for any A-- actually, let me write P theta hat and P theta star, like we said, theta hat and theta star. And so if I have a bound, say, on the total variation, which is 0.01, that means that P theta hat-- every time I compute a probability under P theta hat, it's basically in the interval P theta star of A, the one that I really wanted to compute, plus or minus 0.01.

This has nothing to do with a confidence interval. This is just telling me how far I am from the value I'm actually trying to compute. And that's true for all A. And that's key. That's where this max comes into play. It just says, I want this bound to hold for all possible A's at once.
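In display form, the guarantee being described is the following (a restatement in standard notation, with theta hat and theta star as in the lecture):

$$
\mathrm{TV}(P_{\hat\theta}, P_{\theta^*}) \le 0.01
\quad \Longrightarrow \quad
P_{\theta^*}(A) - 0.01 \;\le\; P_{\hat\theta}(A) \;\le\; P_{\theta^*}(A) + 0.01
\quad \text{for every event } A.
$$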
So this is actually a very well-known distance between probability measures. It's the total variation distance. It's extremely central to probabilistic analysis. And it essentially tells you that if two probability distributions are close, then every time I compute a probability under P theta but I actually have data from P theta prime, the error is no larger than the total variation distance.
OK. So this is maybe not the most convenient way of defining a distance. I mean, how are you going-- in reality, how are you going to compute this maximum over all possible events? I mean, it's just crazy, right? There's an infinite number of them. It's much larger than the number of intervals, for example, so it's a bit annoying.
And so there's actually a way to compress it, by just looking at, basically, the function distance or vector distance between probability mass functions or probability density functions. So I'm going to start with the discrete version of the total variation.
So throughout this chapter, I will make the distinction between discrete random variables and continuous random variables. It really doesn't matter. All it means is that when I talk about discrete, I will talk about probability mass functions. And when I talk about continuous, I will talk about probability density functions. When I talk about probability mass functions, I talk about sums. When I talk about probability density functions, I talk about integrals. But they're all the same thing, really.
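The sums-versus-integrals parallel, written out for a generic event A (a sketch; $p_\theta$ denotes a PMF on a discrete sample space E, $f_\theta$ a density in the continuous case):

$$
P_\theta(A) = \sum_{x \in A} p_\theta(x)
\qquad \text{versus} \qquad
P_\theta(A) = \int_A f_\theta(x)\,dx.
$$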
So let's start with the probability mass function. Everybody remembers what the probability mass function of a discrete random variable is. This is the function that tells me, for each possible value that it can take, the probability that it takes this value. So the probability mass function, PMF, is just the function that, for all x in the sample space, tells me the probability that my random variable is equal to this little value x. And I will denote it by P sub theta of x.
So what I want is, of course, that the sum of the probabilities is 1. And I want them to be non-negative. Actually, typically we will assume that they are positive. Otherwise, we can just remove this x from the sample space.
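In symbols, the conditions just listed (standard notation for the PMF denoted P sub theta of x above):

$$
p_\theta(x) = P_\theta(X = x), \qquad \sum_{x \in E} p_\theta(x) = 1, \qquad p_\theta(x) > 0 \ \text{for all } x \in E,
$$

where strict positivity holds because any x with $p_\theta(x) = 0$ has simply been removed from the sample space.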
And so then I have the total variation distance. I mean, it's supposed to be the maximum, over all subsets A of E, of the probability under P theta of A minus the probability under P theta prime of A. It's complicated, but really there's this beautiful formula that tells me that if I look at the total variation between P theta and P theta prime, it's actually equal to just 1/2 of the sum, for all x in E, of the absolute difference between P theta of x and P theta prime of x.
So that's something you can compute. If I give you two probability mass functions, you can compute this immediately. But if I give you just the densities and the original definition, the one where you have to max over all possible events, it's not clear you're going to be able to do that very quickly.
So this is really the one you can work with. But the other one is really telling you what it is doing for you. It's controlling the difference of probabilities you can compute on any event. But here, it's just telling you, well--