Binomial distribution Probability and Statistics Khan Academy.mp3

I'm choosing one of them to be heads. So that's over 32. And you could verify that, but actually let me just do it, so that you don't have to take my word for it. Five choose one is equal to five factorial over one factorial (which is just one) times five minus one factorial, which is equal to five factorial over four factorial, which is just going to be equal to five. All right, we're making good progress.

So now, in purple, let's think about the probability that our random variable X is equal to two. Well, this is going to be equal to (and now I'll actually resort to the combinatorics): you have five flips, and you're choosing two of them to be heads, over 32 equally likely possibilities. So this is the number of possibilities that result in two heads, where two of the five flips have been chosen to be heads (by the random gods, or whatever you want to say), as a fraction of the 32 equally likely possibilities. So this is the probability that X equals two.

Well, what's this going to be? I'll do it right over here, and actually there's no reason for me to keep switching colors. Five choose two is going to be equal to five factorial over two factorial times five minus two factorial, so this is five factorial over two factorial times three factorial. The numerator is five times four times three times two (I could write times one, but that doesn't really do anything for us), two factorial is just two, and three factorial is three times two. The three times two in the numerator cancels with the three factorial, four divided by two is two, and five times two is 10. So this is equal to 10, and this right over here is equal to 10/32. Obviously we could simplify this fraction, but I like to leave it this way, because we're now thinking of everything in terms of 32nds: there's a 1/32 chance that X equals zero, a 5/32 chance that X equals one, and a 10/32 chance that X equals two.
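The factorial arithmetic above is easy to check in a few lines of Python (a quick sketch; the helper name `n_choose_k` is mine, mirroring the video's formula):

```python
from math import factorial

def n_choose_k(n, k):
    # n! / (k! * (n - k)!), the formula used in the video
    return factorial(n) // (factorial(k) * factorial(n - k))

print(n_choose_k(5, 1))       # 5, so P(X = 1) = 5/32
print(n_choose_k(5, 2))       # 10, so P(X = 2) = 10/32
print(n_choose_k(5, 2) / 32)  # 0.3125
```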
And obviously we could simplify this fraction, but I like to leave it this way because we're now thinking everything is in terms of 30 seconds. There's a one 32nd chance, x equals zero. Five 32nds chance that x equals one. And a 10 32nds chance that x equals two. Let's keep on going. All right, I'll go in orange. So what is the probability that our random variable x is equal to three? | Binomial distribution Probability and Statistics Khan Academy.mp3 |
And a 10 32nds chance that x equals two. Let's keep on going. All right, I'll go in orange. So what is the probability that our random variable x is equal to three? Well this is going to be five. Out of the five flips, we're going to need to choose three of them to be heads to figure out which of the possibilities involve exactly three heads. And this is over 32 equally likely possibilities. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
So what is the probability that our random variable x is equal to three? Well this is going to be five. Out of the five flips, we're going to need to choose three of them to be heads to figure out which of the possibilities involve exactly three heads. And this is over 32 equally likely possibilities. And this is going to be equal to, so five choose three, is equal to five factorial over three factorial times five minus three factorial. Actually let me just write it down. Five minus three factorial, which is equal to five factorial over three factorial times two factorial. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
And this is over 32 equally likely possibilities. And this is going to be equal to, so five choose three, is equal to five factorial over three factorial times five minus three factorial. Actually let me just write it down. Five minus three factorial, which is equal to five factorial over three factorial times two factorial. Well that's exactly what we had up here. We just swapped the three and the two. So this also is going to be equal to 10. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
Five minus three factorial, which is equal to five factorial over three factorial times two factorial. Well that's exactly what we had up here. We just swapped the three and the two. So this also is going to be equal to 10. So this is also going to be equal to 10 32nds. All right, two more to go. And I think you're going to start seeing a little bit of a symmetry here. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
So this also is going to be equal to 10. So this is also going to be equal to 10 32nds. All right, two more to go. And I think you're going to start seeing a little bit of a symmetry here. One, five, 10, 10. Let's keep going. I haven't used white yet, so maybe I'll use white. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
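The symmetry starting to appear here (one, five, 10, 10) is the identity that choosing which k flips are heads is the same as choosing which of the other flips are tails. A quick check with Python's built-in `math.comb`:

```python
from math import comb

# C(5, k) == C(5, 5 - k): picking k heads is the same as picking 5 - k tails
for k in range(6):
    assert comb(5, k) == comb(5, 5 - k)

print([comb(5, k) for k in range(6)])  # [1, 5, 10, 10, 5, 1]
```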
And I think you're going to start seeing a little bit of a symmetry here. One, five, 10, 10. Let's keep going. I haven't used white yet, so maybe I'll use white. The probability that our random variable X is equal to four. Well, out of our five flips, we want to select four of them to be heads. Or out of the five, and we want to see, we're obviously not actively selecting. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
I haven't used white yet, so maybe I'll use white. The probability that our random variable X is equal to four. Well, out of our five flips, we want to select four of them to be heads. Or out of the five, and we want to see, we're obviously not actively selecting. One way to think about it, we want to figure out the possibilities that involve out of the five flips, four of them are chosen to be heads, or four of them are heads. And this is over 32 equally likely possibilities. So five choose four is equal to five factorial over four factorial times five minus four factorial, which is equal to, well that's just going to be five factorial. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
Or out of the five, and we want to see, we're obviously not actively selecting. One way to think about it, we want to figure out the possibilities that involve out of the five flips, four of them are chosen to be heads, or four of them are heads. And this is over 32 equally likely possibilities. So five choose four is equal to five factorial over four factorial times five minus four factorial, which is equal to, well that's just going to be five factorial. This is going to be one factorial right over here. So that doesn't change the value. You're just going to multiply one factorial times four factorial. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
So five choose four is equal to five factorial over four factorial times five minus four factorial, which is equal to, well that's just going to be five factorial. This is going to be one factorial right over here. So that doesn't change the value. You're just going to multiply one factorial times four factorial. So it's five factorial over four factorial, which is equal to five. So once again, this is five 32nds. And you could have reasoned through this, because if you're saying you want five heads, that means you have one tail. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
You're just going to multiply one factorial times four factorial. So it's five factorial over four factorial, which is equal to five. So once again, this is five 32nds. And you could have reasoned through this, because if you're saying you want five heads, that means you have one tail. And there's five different places you could put that one tail. There are five possibilities with one tail, five of the 32 equally likely. And then, and you could probably guess what we're going to get for x equals five, because having five heads means you have zero tails, and there's only going to be one possibility out of the 32 with zero tails, or that have all heads. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
And you could have reasoned through this, because if you're saying you want five heads, that means you have one tail. And there's five different places you could put that one tail. There are five possibilities with one tail, five of the 32 equally likely. And then, and you could probably guess what we're going to get for x equals five, because having five heads means you have zero tails, and there's only going to be one possibility out of the 32 with zero tails, or that have all heads. Let's write that down. So the probability that a random variable x is equal to five, so we have all five heads. And you could say this is five, and we're choosing five of them to be heads out of the 32 equally likely possibilities. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
And then, and you could probably guess what we're going to get for x equals five, because having five heads means you have zero tails, and there's only going to be one possibility out of the 32 with zero tails, or that have all heads. Let's write that down. So the probability that a random variable x is equal to five, so we have all five heads. And you could say this is five, and we're choosing five of them to be heads out of the 32 equally likely possibilities. Well, five choose five, that's going to be, actually let me just write it here, since I've done it for all the other ones. Five choose five is five factorial over five factorial times five minus five factorial. Well, this right over here is zero factorial, which is equal to one, and so this whole thing simplifies to one. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
And you could say this is five, and we're choosing five of them to be heads out of the 32 equally likely possibilities. Well, five choose five, that's going to be, actually let me just write it here, since I've done it for all the other ones. Five choose five is five factorial over five factorial times five minus five factorial. Well, this right over here is zero factorial, which is equal to one, and so this whole thing simplifies to one. So this is going to be one out of 130 seconds. And so you see the symmetry, 130 second, 130 seconds, 532nd, 532nd, 1032nd, 1032nd. And that makes sense, because the probability of getting five heads is the same as the probability of getting zero tails. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
Well, this right over here is zero factorial, which is equal to one, and so this whole thing simplifies to one. So this is going to be one out of 130 seconds. And so you see the symmetry, 130 second, 130 seconds, 532nd, 532nd, 1032nd, 1032nd. And that makes sense, because the probability of getting five heads is the same as the probability of getting zero tails. And the probability of getting zero tails should be the same as the probability of getting zero heads. So I'll leave you there for this video. In the next video, we'll kind of graphically represent this and we'll see the probability distribution for this random variable. | Binomial distribution Probability and Statistics Khan Academy.mp3 |
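Since the 32 outcomes are equally likely, the whole distribution can also be verified by brute force. This sketch simply enumerates every sequence of five flips and counts heads:

```python
from itertools import product

# Enumerate all 2**5 = 32 equally likely outcomes of five fair flips
counts = {k: 0 for k in range(6)}
for flips in product("HT", repeat=5):
    counts[flips.count("H")] += 1

for k in range(6):
    print(f"P(X = {k}) = {counts[k]}/32")
```

This prints the counts 1, 5, 10, 10, 5, 1 over 32, matching the factorial computations above.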
Proof of expected value of geometric random variable AP Statistics Khan Academy.mp3

Now the goal of this video is to think about: what is the expected value of a geometric random variable like this? I'll tell you the answer, and in future videos we will apply this formula, but in this video we're actually going to prove it to ourselves mathematically. The expected value of a geometric random variable is going to be one over the probability of success on any given trial. So now let's prove it to ourselves.

The expected value of any random variable is just going to be the probability-weighted outcomes that you could have. So you could say it is the probability that our random variable is equal to one, times one, plus the probability that our random variable is equal to two, times two, plus... and you get the general idea, it goes on and on and on. A geometric random variable can only take on the values one, two, three, four, and so forth. It will not take on the value zero, because you cannot have a success if you have not had a trial yet.

But what is this going to be equal to? Well, what's the probability that we have a success on our first trial? That is going to be P. What is the probability that we don't have a success on our first trial but we have one on our second trial? That is going to be one minus P (the first trial, where we don't have a success) times P, a success on the second trial. And let me do a few more terms here. The next term is the probability that X equals three, times three. For X to equal three we're going to have to get two unsuccessful trials, and the probability of two unsuccessful trials is one minus P squared, and then one successful trial, just like that. So you get the general idea.

So if I wanted to rewrite this, just to make it a little bit simpler, at least for the purposes of this proof: the expected value of X is equal to one P, plus two P times one minus P, plus three P times one minus P squared, and we keep going on and on forever like that. So how do we figure out this sum?
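Before doing the algebra, we can sanity-check this infinite sum numerically. A sketch in Python, with an arbitrarily chosen p = 0.25 (truncating at 2,000 terms, by which point the tail is negligible):

```python
p = 0.25  # an arbitrary success probability for illustration

# Partial sum of sum_{k >= 1} k * p * (1 - p)**(k - 1)
total = 0.0
for k in range(1, 2001):
    total += k * p * (1 - p) ** (k - 1)

print(total)  # very close to 1/p = 4
```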
Well, now I'm going to do a little bit of mathematical trickery, or gymnastics, but it's all valid. If any of y'all have seen the proof of the sum of an infinite geometric series, we're going to use a very similar technique. What I'm going to do here is think about what one minus P times this expected value is. So if I say one minus P times the expected value of X, what is that going to be equal to? Well, I would multiply every one of these terms by one minus P. One P times one minus P would give you one P times one minus P, right over there. What about two P times one minus P? That would be two P times one minus P, and now we're going to multiply it by one minus P again, so you're going to get two P times one minus P squared. I think you see where this is going, and we just keep adding and adding and adding from there.

So now we're going to do something really fun and interesting, at least from a mathematical point of view. If the left-hand side is equal to the right-hand side, let's just subtract this value from both sides. On the left-hand side I would have the expected value of X minus one minus P times the expected value of X. I'm subtracting this from that side, and let me subtract it from the other side as well; I could subtract the original expression, but this is equivalent, so I'm just going to subtract this. And so what do I get? Let's see: I'm going to have one P, then if I subtract one P times one minus P from two P times one minus P, I'm left with plus one P times one minus P, and if I subtract two P times one minus P squared from three P times one minus P squared, I'm left with one P times one minus P squared, and we just keep going on and on and on.

So let me simplify this a little bit. If I distribute this negative, this could be plus, and then this would be P minus one, times the expected value of X. And then if we distribute this expected value of X (let me scroll up a little bit, I don't want to squinch it too much), we get, on the left-hand side, the expected value of X, plus P times the expected value of X, minus the expected value of X. The first and last of those cancel out, and the right-hand side is going to be equal to P, plus P times one minus P, plus P times one minus P squared, and it keeps going on and on and on. Well, on the left-hand side, all I have is P times the expected value of X.
If I want to solve for the expected value of X, I just divide both sides by P. So I get (and this is kind of neat, through this mathematical gymnastics), dividing everything by P on both sides: on the left-hand side I just have the expected value of X, and if I divide all of the terms on the right by P, the first term becomes one, the second term becomes one minus P, the third term becomes plus one minus P squared, and so forth and so on.

Now, what's cool about this: this is a classic geometric series with a common ratio of one minus P. If that term is completely unfamiliar to you (and this is one of the arguments for why it's called a geometric random variable), I encourage you to review what a geometric series is on Khan Academy. In other places we prove, using a very similar technique to the one we used up here, that this sum is going to be equal to one over one minus our common ratio, and our common ratio is one minus P. So what is this going to be equal to? We are really in the home stretch right over here. This is going to be equal to one over one minus one plus P, which is indeed equal to one over P. So there you have it: we have proven to ourselves, using some, I think, cool mathematics, that the expected value of a geometric random variable is indeed equal to one over P.
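The result E(X) = 1/P can also be checked by simulation. A sketch (the success probability, seed, and trial count here are arbitrary choices):

```python
import random

random.seed(0)
p = 0.2  # arbitrary success probability, so 1/p = 5

def geometric_trial(p):
    # Count trials up to and including the first success
    count = 1
    while random.random() >= p:
        count += 1
    return count

n = 200_000
mean = sum(geometric_trial(p) for _ in range(n)) / n
print(mean)  # close to 1/p = 5
```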
Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3

Actually, I wrote the standard deviation here in the last video; that should be the mean. And let's say it has some variance. I could write it like that, or I could write the standard deviation there. But as long as it has a well-defined mean and standard deviation, I don't care what the distribution looks like. What I can do is, as in the last video, take samples of, say, size 4. That means I take literally four instances of this random variable; this is one example. I take their mean, and I consider this the sample mean for my first trial, or you could almost say for my first sample. I know it's very confusing, because you could consider the whole set to be a sample, or you could consider each member of the set as a sample, so that can be a little bit confusing. But I have this first sample mean, and then I keep doing that over and over. In my second sample, my sample size is 4: I get four instances of this random variable, I average them, and I have another sample mean.

And the cool thing about the central limit theorem is that as I keep plotting the frequency distribution of my sample means, it starts to approach something that approximates the normal distribution, and it does a better job of approximating that normal distribution as n gets larger.

And just so we have a little terminology on our belt: this frequency distribution right here that I've plotted out is called (and it's kind of confusing, because we use the word sample so much) the sampling distribution of the sample mean. Let's dissect this a little bit, just so that this long description starts to make a little bit of sense. When we say it's the sampling distribution, that's telling us that it's the distribution of some statistic, which in this case happens to be the sample mean, and that we're deriving it from samples of an original distribution. So this is my first sample, and my sample size is 4. I'm using the statistic, the mean. I actually could have done it with other things.
So each of these, so this is my first sample. My sample size is 4. I'm using the statistic, the mean. I actually could have done it with other things. I could have done the mode, or the range, or other statistics, but the sampling distribution of the sample mean is the most common one. It's probably, in my mind, the best place to start learning about the central limit theorem, and even, frankly, sampling distribution. So that's what it's called. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I actually could have done it with other things. I could have done the mode, or the range, or other statistics, but the sampling distribution of the sample mean is the most common one. It's probably, in my mind, the best place to start learning about the central limit theorem, and even, frankly, sampling distribution. So that's what it's called. And just as a little bit of background, and I'll prove this to you experimentally, not mathematically, but I think the experimental is, on some levels, more satisfying with statistics, that this will have the same mean as your original distribution right here. So it has the same mean, but we'll see in the next video that this is actually going to start approximating a normal distribution, even though my original distribution that this is kind of generated from is completely non-normal. So let's do that with this app right here. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So that's what it's called. And just as a little bit of background, and I'll prove this to you experimentally, not mathematically, but I think the experimental is, on some levels, more satisfying with statistics, that this will have the same mean as your original distribution right here. So it has the same mean, but we'll see in the next video that this is actually going to start approximating a normal distribution, even though my original distribution that this is kind of generated from is completely non-normal. So let's do that with this app right here. And just to give proper credit where credit is due, I think it was developed at Rice University. This is from onlinestatbook.com. This is their app, which I think is a really neat app, because it really helps you to visualize what a sampling distribution of the sample mean is. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
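The claim above, that the sampling distribution of the sample mean has the same mean as the original distribution, can be checked with a quick simulation. This is a minimal sketch, assuming an invented bimodal discrete population (the `values` and `weights` below are made-up stand-ins for the applet's custom distribution, not the actual one from the video):

```python
import random
import statistics

# Hypothetical non-normal, bimodal population (illustrative values only).
values = [2, 3, 4, 14, 15, 16, 28, 29]
weights = [5, 8, 5, 1, 1, 1, 6, 4]
population_mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)

random.seed(0)

def sample_mean(n):
    """Draw n instances of the random variable and return their mean."""
    return statistics.fmean(random.choices(values, weights=weights, k=n))

# Build the sampling distribution of the sample mean, sample size 4,
# repeated 10,000 times (10,000 "trials" in the video's language).
sample_means = [sample_mean(4) for _ in range(10_000)]
mean_of_means = statistics.fmean(sample_means)

# The mean of the sampling distribution lands close to the population mean.
print(abs(mean_of_means - population_mean) < 0.5)  # True
```

The standard error of the mean of 10,000 trial means is tiny, so the two numbers agree closely even though the population itself is far from normal.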
So let's do that with this app right here. And just to give proper credit where credit is due, I think it was developed at Rice University. This is from onlinestatbook.com. This is their app, which I think is a really neat app, because it really helps you to visualize what a sampling distribution of the sample mean is. So I can literally create my own custom distribution here. So let me make something kind of crazy. So you could do this, in theory, with a discrete or a continuous probability density function. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
This is their app, which I think is a really neat app, because it really helps you to visualize what a sampling distribution of the sample mean is. So I can literally create my own custom distribution here. So let me make something kind of crazy. So you could do this, in theory, with a discrete or a continuous probability density function. But what they have here, the variable can take on one of 32 values, and I'm just going to set the different probabilities of getting any of those 32 values. So clearly, this right here is not a normal distribution. It looks a little bit bimodal, but it doesn't have long tails. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So you could do this, in theory, with a discrete or a continuous probability density function. But what they have here, the variable can take on one of 32 values, and I'm just going to set the different probabilities of getting any of those 32 values. So clearly, this right here is not a normal distribution. It looks a little bit bimodal, but it doesn't have long tails. But what I want to do is first just use the simulation to understand, or to better understand, what the sampling distribution is all about. So what I'm going to do is I'm going to take, well, let's start with 5 at a time. So my sample size is going to be 5. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
It looks a little bit bimodal, but it doesn't have long tails. But what I want to do is first just use the simulation to understand, or to better understand, what the sampling distribution is all about. So what I'm going to do is I'm going to take, well, let's start with 5 at a time. So my sample size is going to be 5. And so when I click Animated, what it's going to do is it's going to take 5 samples from this probability distribution function, it's going to take 5 samples, and you're going to see them when I click Animated, it's going to average them and plot the average down here. And then I'm going to click it again, it's going to do it again. So there you go. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So my sample size is going to be 5. And so when I click Animated, what it's going to do is it's going to take 5 samples from this probability distribution function, it's going to take 5 samples, and you're going to see them when I click Animated, it's going to average them and plot the average down here. And then I'm going to click it again, it's going to do it again. So there you go. It got 5 samples from there, it averaged them, and it hit there. What did I just do? I clicked, oh, I wanted to clear that. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So there you go. It got 5 samples from there, it averaged them, and it hit there. What did I just do? I clicked, oh, I wanted to clear that. Let me make this bottom one none. So let me do that over again. So I'm going to take 5 at a time. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I clicked, oh, I wanted to clear that. Let me make this bottom one none. So let me do that over again. So I'm going to take 5 at a time. So I took 5 samples from up here, and then it took its mean and plotted the mean there. Let me do it again. 5 samples from this probability distribution function, plotted it right there. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So I'm going to take 5 at a time. So I took 5 samples from up here, and then it took its mean and plotted the mean there. Let me do it again. 5 samples from this probability distribution function, plotted it right there. I could keep doing it, it'll take some time, but as you can see, I plotted it right there. Now, I could do this 1,000 times, it's going to take forever. Let's say I just wanted to do it 1,000 times. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
5 samples from this probability distribution function, plotted it right there. I could keep doing it, it'll take some time, but as you can see, I plotted it right there. Now, I could do this 1,000 times, it's going to take forever. Let's say I just wanted to do it 1,000 times. So this program, just to be clear, it's actually generating the random numbers. This isn't like a rigged program. It's actually going to generate the random numbers according to this probability distribution function. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Let's say I just wanted to do it 1,000 times. So this program, just to be clear, it's actually generating the random numbers. This isn't like a rigged program. It's actually going to generate the random numbers according to this probability distribution function. It's going to take 5 at a time, find their means, and plot the mean. So if I click 10,000, it's going to do that 10,000 times. So it's going to take 5 numbers from here 10,000 times and find their means 10,000 times, and then plot the 10,000 means here, so let's do that. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
It's actually going to generate the random numbers according to this probability distribution function. It's going to take 5 at a time, find their means, and plot the mean. So if I click 10,000, it's going to do that 10,000 times. So it's going to take 5 numbers from here 10,000 times and find their means 10,000 times, and then plot the 10,000 means here, so let's do that. So there you go. And notice, it's already looking a lot like a normal distribution. And like I said, the original mean of my crazy distribution here was 14.45, and the mean of after doing 10,000 samples, or 10,000 trials, my mean here is 14.42. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So it's going to take 5 numbers from here 10,000 times and find their means 10,000 times, and then plot the 10,000 means here, so let's do that. So there you go. And notice, it's already looking a lot like a normal distribution. And like I said, the original mean of my crazy distribution here was 14.45, and the mean of after doing 10,000 samples, or 10,000 trials, my mean here is 14.42. So I'm already getting pretty close to the mean there. My standard deviation, you might notice, is less than that, we'll talk about that in a future video. And the skew and kurtosis, these are things that help us measure how normal a distribution is. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
And like I said, the original mean of my crazy distribution here was 14.45, and the mean of after doing 10,000 samples, or 10,000 trials, my mean here is 14.42. So I'm already getting pretty close to the mean there. My standard deviation, you might notice, is less than that, we'll talk about that in a future video. And the skew and kurtosis, these are things that help us measure how normal a distribution is. And I've talked a little bit about it in the past, and let me actually just diverge a little bit, just so it's interesting. And they're fairly straightforward concepts. Skew literally tells, so if this is, let me do it in a different color. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
And the skew and kurtosis, these are things that help us measure how normal a distribution is. And I've talked a little bit about it in the past, and let me actually just diverge a little bit, just so it's interesting. And they're fairly straightforward concepts. Skew literally tells, so if this is, let me do it in a different color. If this is a perfect normal distribution, and clearly my drawing is very far from perfect, if that's a perfect distribution, this would have a skew of 0. If you have a positive skew, that means you have a larger right tail than you would have otherwise expect. So something with a positive skew might look like this. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Skew literally tells, so if this is, let me do it in a different color. If this is a perfect normal distribution, and clearly my drawing is very far from perfect, if that's a perfect distribution, this would have a skew of 0. If you have a positive skew, that means you have a larger right tail than you would have otherwise expect. So something with a positive skew might look like this. It would have a large tail to the right. So this would be a positive skew, which makes it a little less than ideal for normal distribution. And a negative skew would look like this. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So something with a positive skew might look like this. It would have a large tail to the right. So this would be a positive skew, which makes it a little less than ideal for normal distribution. And a negative skew would look like this. It has a long tail to the left. So a negative skew might look like that. So that is a negative skew. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
And a negative skew would look like this. It has a long tail to the left. So a negative skew might look like that. So that is a negative skew. If you have trouble remembering it, just remember which direction the tail is going. This tail is going towards the negative direction. This tail is going to the positive direction. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So that is a negative skew. If you have trouble remembering it, just remember which direction the tail is going. This tail is going towards the negative direction. This tail is going to the positive direction. So if something has no skew, that means that it's nice and symmetrical around its mean. Now kurtosis, which sounds like a very fancy word, is similarly not that fancy of an idea. Kurtosis. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
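The skew idea above can be made concrete with a small helper. A sketch using the standard moment-based definition (third central moment divided by the 1.5 power of the variance); the three tiny example lists are made up purely for illustration:

```python
import statistics

def skewness(xs):
    """Moment-based skewness: m3 / m2**1.5. Zero for symmetric data."""
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m3 = statistics.fmean([(x - mu) ** 3 for x in xs])
    return m3 / m2 ** 1.5

symmetric = [1, 2, 3, 4, 5]        # symmetric around its mean: no skew
right_tailed = [1, 2, 2, 3, 10]    # long tail to the right -> positive skew
left_tailed = [-10, 3, 4, 4, 5]    # long tail to the left -> negative skew

print(skewness(symmetric))         # 0.0
print(skewness(right_tailed) > 0)  # True
print(skewness(left_tailed) < 0)   # True
```

The memory trick from the transcript holds here: the sign of the skew matches the direction the tail points.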
This tail is going to the positive direction. So if something has no skew, that means that it's nice and symmetrical around its mean. Now kurtosis, which sounds like a very fancy word, is similarly not that fancy of an idea. Kurtosis. So once again, if I were to draw a perfect normal distribution, remember, there's no one normal distribution. You can have different means and different standard deviations. Let's say that's a perfect normal distribution. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Kurtosis. So once again, if I were to draw a perfect normal distribution, remember, there's no one normal distribution. You can have different means and different standard deviations. Let's say that's a perfect normal distribution. If I have positive kurtosis, what's going to happen is I'm going to have fatter tails. Let me draw it a little nicer than that. I'm going to have fatter tails, but I'm going to have a more pointy peak. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Let's say that's a perfect normal distribution. If I have positive kurtosis, what's going to happen is I'm going to have fatter tails. Let me draw it a little nicer than that. I'm going to have fatter tails, but I'm going to have a more pointy peak. I didn't even have to draw it that pointy. Let me draw it like this. I'm going to have fatter tails, and I'm going to have a more pointy peak than a normal distribution. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I'm going to have fatter tails, but I'm going to have a more pointy peak. I didn't even have to draw it that pointy. Let me draw it like this. I'm going to have fatter tails, and I'm going to have a more pointy peak than a normal distribution. So this right here is positive kurtosis. So something that has positive kurtosis, depending on how positive it is, it tells you it's a little bit more pointy than a real normal distribution. Positive kurtosis. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I'm going to have fatter tails, and I'm going to have a more pointy peak than a normal distribution. So this right here is positive kurtosis. So something that has positive kurtosis, depending on how positive it is, it tells you it's a little bit more pointy than a real normal distribution. Positive kurtosis. And negative kurtosis has smaller tails, but it's smoother near the middle. So it's like this. So something like this would have negative kurtosis. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Positive kurtosis. And negative kurtosis has smaller tails, but it's smoother near the middle. So it's like this. So something like this would have negative kurtosis. And maybe in future videos we'll explore that in more detail. But in the context of this simulation, it's just telling us how normal this distribution is. So when our sample size was n equals 5 and we did 10,000 trials, we got pretty close to normal distribution. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
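Kurtosis can be sketched the same way, using excess kurtosis (the fourth standardized moment minus 3, so a normal distribution scores about 0). The generated samples below, including the Laplace-style fat-tailed construction, are illustrative assumptions rather than anything from the applet:

```python
import random
import statistics

def excess_kurtosis(xs):
    """m4 / m2**2 - 3: ~0 for normal, negative for flat, positive for pointy."""
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3

random.seed(1)
normal = [random.gauss(0, 1) for _ in range(100_000)]
flat = [random.uniform(-1, 1) for _ in range(100_000)]      # smoother middle, small tails
pointy = [random.expovariate(1) * random.choice([-1, 1])    # Laplace-like: sharp peak,
          for _ in range(100_000)]                          # fatter tails

print(abs(excess_kurtosis(normal)) < 0.2)  # True: normal is the baseline
print(excess_kurtosis(flat) < 0)           # True: negative kurtosis
print(excess_kurtosis(pointy) > 0)         # True: positive kurtosis
```

This matches the description in the transcript: positive kurtosis means pointier peak and fatter tails than a normal distribution, negative kurtosis means flatter and shorter-tailed.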
So something like this would have negative kurtosis. And maybe in future videos we'll explore that in more detail. But in the context of this simulation, it's just telling us how normal this distribution is. So when our sample size was n equals 5 and we did 10,000 trials, we got pretty close to normal distribution. Let's do another 10,000 trials just to see what happens. It looks even more like a normal distribution. Our mean is now the exact same number, but we still have a little bit of skew and a little bit of kurtosis. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So when our sample size was n equals 5 and we did 10,000 trials, we got pretty close to normal distribution. Let's do another 10,000 trials just to see what happens. It looks even more like a normal distribution. Our mean is now the exact same number, but we still have a little bit of skew and a little bit of kurtosis. Now let's see what happens if we were to do the same thing with a larger sample size. And we can actually do them simultaneously. So here's n equals 5. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Our mean is now the exact same number, but we still have a little bit of skew and a little bit of kurtosis. Now let's see what happens if we were to do the same thing with a larger sample size. And we can actually do them simultaneously. So here's n equals 5. Let's do here n equals 25. Let me clear them. I'm going to do the sampling distribution of the sample mean, and I'm going to run 10,000 trials. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
So here's n equals 5. Let's do here n equals 25. Let me clear them. I'm going to do the sampling distribution of the sample mean, and I'm going to run 10,000 trials. So I'll do one animated trial, just so you remember what's going on. So I'm literally taking first five samples from up here, find their mean. Now I'm taking 25 samples from up here, find its mean, and then plotting it down here. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I'm going to do the sampling distribution of the sample mean, and I'm going to run 10,000 trials. So I'll do one animated trial, just so you remember what's going on. So I'm literally taking first five samples from up here, find their mean. Now I'm taking 25 samples from up here, find its mean, and then plotting it down here. So here the sample size is 25, here it's 5. I'll do it one more time. I take 5, get the mean, plot it. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Now I'm taking 25 samples from up here, find its mean, and then plotting it down here. So here the sample size is 25, here it's 5. I'll do it one more time. I take 5, get the mean, plot it. Take 25, get the mean, and then plot it down there. This is a larger sample size. Now that thing that I just did, I'm going to do 10,000 times, and that's interesting. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
I take 5, get the mean, plot it. Take 25, get the mean, and then plot it down there. This is a larger sample size. Now that thing that I just did, I'm going to do 10,000 times, and that's interesting. Remember, our first distribution was just this really crazy, very non-normal distribution. But once we did it, so here what's interesting. I mean, they both look a little normal, but if you look at the skew and the kurtosis when our sample size is larger, it's more normal. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
Now that thing that I just did, I'm going to do 10,000 times, and that's interesting. Remember, our first distribution was just this really crazy, very non-normal distribution. But once we did it, so here what's interesting. I mean, they both look a little normal, but if you look at the skew and the kurtosis when our sample size is larger, it's more normal. This has a lower skew than when our sample size was only 5, and it has a less negative kurtosis than when our sample size was 5. So this is a more normal distribution. And one thing that we're going to explore further in a future video is not only is it more normal in its shape, but it's also a tighter fit around the mean. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
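The observation that a larger sample size gives a more normal (less skewed) sampling distribution can be checked numerically. A minimal sketch, assuming a strongly skewed exponential population as a stand-in for the applet's custom distribution; `sampling_skew` is a hypothetical helper name:

```python
import random
import statistics

random.seed(3)

def skew(xs):
    """Moment-based skewness: m3 / m2**1.5."""
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m3 = statistics.fmean([(x - mu) ** 3 for x in xs])
    return m3 / m2 ** 1.5

def sampling_skew(n, trials=20_000):
    """Skew of the sampling distribution of the mean for sample size n."""
    means = [statistics.fmean(random.expovariate(1) for _ in range(n))
             for _ in range(trials)]
    return skew(means)

# For an exponential population the skew of the sample mean falls like
# 1/sqrt(n), so n = 25 should come out less skewed than n = 5.
print(abs(sampling_skew(25)) < abs(sampling_skew(5)))  # True
```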
I mean, they both look a little normal, but if you look at the skew and the kurtosis when our sample size is larger, it's more normal. This has a lower skew than when our sample size was only 5, and it has a less negative kurtosis than when our sample size was 5. So this is a more normal distribution. And one thing that we're going to explore further in a future video is not only is it more normal in its shape, but it's also a tighter fit around the mean. And you can even think about why that kind of makes sense. When your sample size is larger, your odds of getting really far away from the mean are lower, because it's very unlikely, if you're taking 25 or 100 samples, that you're just going to get a bunch of stuff way out here, or a bunch of stuff way out here. You're very likely to get a reasonable spread of things. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
And one thing that we're going to explore further in a future video is not only is it more normal in its shape, but it's also a tighter fit around the mean. And you can even think about why that kind of makes sense. When your sample size is larger, your odds of getting really far away from the mean are lower, because it's very unlikely, if you're taking 25 or 100 samples, that you're just going to get a bunch of stuff way out here, or a bunch of stuff way out here. You're very likely to get a reasonable spread of things. So it makes sense that your sample mean is less likely to be far away from the population mean. We're going to talk a little bit more about that in the future. But hopefully this satisfies you, at least experimentally (I haven't proven it to you with mathematical rigor, which hopefully we'll do in the future), that the central limit theorem really does apply to any distribution. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
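The "tighter fit around the mean" is the standard-error effect: the spread of the sampling distribution shrinks like sigma over the square root of n. A minimal simulation sketch, again assuming an exponential population with mean 1 as an illustrative stand-in:

```python
import random
import statistics

random.seed(2)

def sample_means(n, trials=5_000):
    """Means of `trials` samples, each of size n, from an exponential population."""
    return [statistics.fmean(random.expovariate(1) for _ in range(n))
            for _ in range(trials)]

sd_n5 = statistics.pstdev(sample_means(5))
sd_n25 = statistics.pstdev(sample_means(25))

# Theory: spread = sigma / sqrt(n), so the n = 25 sampling distribution
# should be about sqrt(5) ~ 2.2 times tighter than the n = 5 one.
print(sd_n5 > sd_n25)  # True
```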
You're very likely to get a reasonable spread of things. So it makes sense that your sample mean is less likely to be far away from the population mean. We're going to talk a little bit more about that in the future. But hopefully this satisfies you, at least experimentally (I haven't proven it to you with mathematical rigor, which hopefully we'll do in the future), that the central limit theorem really does apply to any distribution. I mean, this is a crazy distribution. I encourage you to use this applet at onlinestatbook.com and experiment with other crazy distributions to believe it for yourself. The interesting thing is that we're approaching a normal distribution, and as my sample size got larger, it's a better fit for a normal distribution. | Sampling distribution of the sample mean Probability and Statistics Khan Academy.mp3 |
His results are displayed in the table below. Alright, this is interesting. These columns, on time, delayed, and the total. So for example, when it was sunny, there's a total of 170 sunny days that year, 167 of which the train was on time, three of which the train was delayed. And we can look at that by the different types of weather conditions. And then they say, for these days, are the events delayed and snowy independent? So to think about this, and remember, we're only going to be able to figure out experimental probabilities, and you should always view experimental probabilities as somewhat suspect. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
So for example, when it was sunny, there's a total of 170 sunny days that year, 167 of which the train was on time, three of which the train was delayed. And we can look at that by the different types of weather conditions. And then they say, for these days, are the events delayed and snowy independent? So to think about this, and remember, we're only going to be able to figure out experimental probabilities, and you should always view experimental probabilities as somewhat suspect. The more experiments you're able to take, the more likely it is to approximate the true theoretical probability, but there's always some chance that they might be different or even quite different. Let's use this data to try to calculate the experimental probability. So the key question here is, what is the probability that the train is delayed? | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
So to think about this, and remember, we're only going to be able to figure out experimental probabilities, and you should always view experimental probabilities as somewhat suspect. The more experiments you're able to take, the more likely it is to approximate the true theoretical probability, but there's always some chance that they might be different or even quite different. Let's use this data to try to calculate the experimental probability. So the key question here is, what is the probability that the train is delayed? And then we want to think about, what is the probability that the train is delayed given that it is snowy? If we knew the theoretical probabilities, and if they were exactly the same, if the probability of being delayed was exactly the same as the probability of being delayed given snowy, then the events delayed and snowy would be independent. But if we knew the theoretical probabilities and the probability of being delayed given snowy were different from the probability of being delayed, then we would not say that these are independent events. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
So the key question here is, what is the probability that the train is delayed? And then we want to think about, what is the probability that the train is delayed given that it is snowy? If we knew the theoretical probabilities, and if they were exactly the same, if the probability of being delayed was exactly the same as the probability of being delayed given snowy, then the events delayed and snowy would be independent. But if we knew the theoretical probabilities and the probability of being delayed given snowy were different from the probability of being delayed, then we would not say that these are independent events. Now, we don't know the theoretical probabilities. We're just going to calculate the experimental probabilities. And we do have a good number of experiments here. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
But if we knew the theoretical probabilities and the probability of being delayed given snowy were different from the probability of being delayed, then we would not say that these are independent events. Now, we don't know the theoretical probabilities. We're just going to calculate the experimental probabilities. And we do have a good number of experiments here. So if these are quite different, I would feel confident saying that they are dependent. If they are pretty close with the experimental probability, I would say that it would be hard to make the statement that they are dependent and that you would probably lean towards independence. But let's calculate this. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
And we do have a good number of experiments here. So if these are quite different, I would feel confident saying that they are dependent. If they are pretty close with the experimental probability, I would say that it would be hard to make the statement that they are dependent and that you would probably lean towards independence. But let's calculate this. What is the probability that the train is just delayed? Pause this video and try to figure that out. Well, let's see. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
But let's calculate this. What is the probability that the train is just delayed? Pause this video and try to figure that out. Well, let's see. If we just think in general, we have a total of 365 trials or 365 experiments. And of them, the train was delayed 35 times. Now, what's the probability that the train is delayed given that it is snowy? | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
Well, let's see. If we just think in general, we have a total of 365 trials or 365 experiments. And of them, the train was delayed 35 times. Now, what's the probability that the train is delayed given that it is snowy? Pause the video and try to figure that out. Well, let's see. We have a total of 20 snowy days. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
Now, what's the probability that the train is delayed given that it is snowy? Pause the video and try to figure that out. Well, let's see. We have a total of 20 snowy days. And we are delayed 12 of those 20 snowy days. And so this is going to be a probability. 12 20ths is the same thing as, if we multiply both the numerator and the denominator by five, this is a 60% probability, or I could say a 0.6 probability of being delayed when it is snowy. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
We have a total of 20 snowy days. And we are delayed 12 of those 20 snowy days. And so this is going to be a probability. 12 20ths is the same thing as, if we multiply both the numerator and the denominator by five, this is a 60% probability, or I could say a 0.6 probability of being delayed when it is snowy. This is, of course, an experimental probability, which is much higher than this. This is less than 10% right over here. This right over here is less than 0.1. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3 |
12 20ths is the same thing as, if we multiply both the numerator and the denominator by five, this is a 60% probability, or I could say a 0.6 probability of being delayed when it is snowy. This is, of course, an experimental probability, which is much higher than this. This is less than 10% right over here. This right over here is less than 0.1. I could get a calculator to calculate it exactly. It'll be 9 point something percent, or 0.09 something. But clearly, at least from the experimental data, it seems like a much higher proportion of your snowy days are delayed than days in general. | Conditional probability and independence Probability AP Statistics Khan Academy.mp3
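The transcript rows above compare an overall probability against a conditional one to judge independence. A minimal Python sketch of that comparison, using the counts quoted in the transcript (365 observed days, 35 delays overall, 20 snowy days, 12 of them delayed; the variable names are illustrative, not from the source):

```python
# Counts quoted in the transcript (hypothetical observed data).
total_days = 365
delayed_days = 35
snowy_days = 20
delayed_and_snowy = 12

# Experimental (empirical) probabilities.
p_delayed = delayed_days / total_days                    # just under 10%
p_delayed_given_snowy = delayed_and_snowy / snowy_days   # 12/20 = 0.6

# If "delayed" and "snowy" were independent, these two values
# would be (approximately) equal; here they are far apart.
print(f"P(delayed)       = {p_delayed:.3f}")
print(f"P(delayed|snowy) = {p_delayed_given_snowy:.3f}")
```

Since 0.6 is far larger than roughly 0.096, the experimental data suggest the two events are dependent, which is the conclusion the transcript reaches.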