So this is going to be 450, 452. So that's the sum of the scores of these five rounds. And then you divide it by the number of rounds you have. So it'd be 452 divided by five. So 452 divided by five is going to give us: five doesn't go into four, it goes into 45 nine times. Nine times five is 45. You subtract, you get zero, bring down the two. Five goes into two zero times. Zero times five is zero. Subtract, you have two left over. So you can say that the mean here is 90 and 2/5. Not nine and 2/5, 90 and 2/5. So the mean is right around here. So that's the mean of these data points right over there. And if you remove it, what is the mean going to be? So here we're just going to take our 90 plus our 92 plus our 94 plus our 96 and add them together. So let's see, two plus four plus six is 12, and you add these together, you're going to get 372. 372 divided by four, because I have four data points now, not five. Four goes into 372, let me do this in a place where you can see it. So four goes into 372: it goes into 37 nine times. Nine times four is 36. Subtract, you get a one. Bring down the two. It goes exactly three times. Three times four is 12. You have no remainder. So the median and the mean here are both 93. So this is also the mean; the mean here is also 93. So you see that the median went from 92 to 93. It increased. The mean went from 90 and 2/5 to 93. So the mean increased by more than the median. They both increased, but the mean increased by more. And it makes sense, because this number was way, way below all of these over here. So you can imagine, if you take this out, the mean should increase by a good amount. But let's see which of these choices are what we just described. "Both the mean and the median will decrease." Nope. "Both the mean and the median will decrease." Nope. "Both the mean and the median will increase, but the mean will increase by more than the median." That's exactly what happened. The mean went from 90 and 2/5, or 90.4, to 93.
Impact on median and mean when removing lowest value example 6th grade Khan Academy.mp3
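The arithmetic above can be checked in a short script. The transcript's two sums (452 before, 372 after) imply the removed lowest score is 80 and the remaining scores are 90, 92, 94, and 96; a minimal sketch under that assumption:

```python
from statistics import mean, median

# Five round scores; the sums in the transcript (452 and 372) imply the lowest is 80.
scores = [80, 90, 92, 94, 96]

print(sum(scores))      # 452
print(mean(scores))     # 90.4, i.e. 90 and 2/5
print(median(scores))   # 92

# Remove the lowest value and recompute.
trimmed = scores[1:]    # [90, 92, 94, 96]
print(sum(trimmed))     # 372
print(mean(trimmed))    # 93.0
print(median(trimmed))  # 93.0, the mean of the middle two values 92 and 94
```

Both measures increase, and the mean increases by more (from 90.4 to 93, versus 92 to 93 for the median), matching the conclusion above.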
As we go further in our statistical careers, it's going to be valuable to assume that certain distributions are normal distributions, or sometimes to assume that they are binomial distributions, because if we can do that, we can make all sorts of interesting inferences about them. But one of the key things about normal distributions or binomial distributions is we assume that they're the sum of, or can be viewed as the sum of, a bunch of independent trials. So we have to assume that trials are independent. Now that is reasonable in a lot of situations, but sometimes, let's say you're conducting a survey of people exiting a mall, and let's say you're asking whether they have done their taxes already. If they're exiting the mall, it's hard to do these samples with replacement. They're leaving the mall. You can't say, hey, wait, I just asked you a question, you've answered it, now go back into the mall, because I want each trial to be truly independent. But it feels intuitive that, hey, if there are 10,000 people in the mall and I'm going to sample 10 of them, does it really matter that it's truly independent? Isn't it enough that we're just kind of close to being independent? And because of that idea, and because we do wanna make inferences based on things being close to a binomial distribution or a normal distribution, we have something called the 10% rule. And the 10% rule says that if our sample is less than or equal to 10% of the population, then it is okay to assume approximate independence. And there are some fairly sophisticated ways of coming up with this 10% threshold. People could have picked 9%. They could have picked 10.1%. But 10% is a nice round number, and if we look at some tangible examples, it seems to do a pretty good job. So for example, right over here, let's let x be the number of boys from three trials of selecting from a classroom of n students, where 50% of the class is boys and 50% of the class is girls. And so what we have over here is a bunch of different n's. What if we have 20 students in the class? What if we have 30? What if we have 100? What if we have 10,000? And so we could find the probability that we select three boys with replacement in each of these scenarios, and we could also find the probability that we select three boys without replacement. And then we could think about what proportion our sample size is of the entire population, and then we could say, does the 10% rule actually make sense? So in this first column, where we are picking three boys with replacement, because we are replacing, each of these trials is truly independent. And if our trials are independent, then x would truly be a binomial variable. Here, we aren't independent, because we are not replacing, so: not independent. And so officially, in this column right over here where we're not replacing, x would not be considered a binomial random variable. But let's see if there's a threshold where, if our sample size is a small enough percentage of our entire population, we would feel not so bad about assuming x is close to being binomial. So in all of the cases where you have independent trials and 50% of the population is boys and 50% is girls, well, it's going to amount to 1/2 times 1/2 times 1/2. So in all of those situations, you have a 12.5% chance that x is going to be equal to three, and in this case, x would be a binomial variable. But look over here. When three is a fairly large percentage of our population (in this case, it is 15%), the percent chance of getting three boys without replacement is 10.5%, which is reasonably different from 12.5%. It is 2% different, but 2% relative to 12.5%, so that's someplace in between a 10 and 20% difference in terms of the probability. So this is a reasonably big difference. But as we increase the population size without increasing the sample size, we see that these numbers get closer and closer to each other, all the way so that if you have 10,000 people in your population and you're only doing three trials, the numbers get very, very close. This is actually 12.49-something percent, but if you round to the nearest tenth of a percent, you see that they are close. So I think most people would say, all right, if your sample is 3/10,000 of the population, you'd feel pretty good treating this without-replacement column as being pretty close to a binomial variable. And most people would say, all right, in this first scenario, where your sample size is 15% of your population, you wouldn't feel so good treating this without-replacement column as a binomial random variable. But where do you draw the line? And as we alluded to earlier in the video, the line is typically drawn at 10%: if your sample size is less than or equal to 10% of your population, it's not unreasonable to treat your random variable, even though it's not officially binomial, to say, okay, maybe I can functionally treat it as binomial, and then from there I can make all of the powerful inferences that we tend to do in statistics. With that said, the lower the percentage the sample is of the population, the better. Now to be clear, that's not saying that small sample sizes are better than large sample sizes. In statistics, large sample sizes tend to be a lot better than small sample sizes. But if you wanna make this independence assumption, so to speak, even when it's not exactly true, you want your sample to be a small percentage of the population.
10% Rule of assuming independence between trials Random variables AP Statistics Khan Academy.mp3
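The table the video describes can be sketched in code. With a 50/50 class, the with-replacement probability is always (1/2)^3 = 12.5%, while the without-replacement probability depends on the class size n. The helper function below is illustrative, not from the video:

```python
from fractions import Fraction

def p_three_boys(n):
    """P(3 boys in 3 draws) from a class of n students, half boys and half girls.

    Returns (with_replacement, without_replacement) as floats.
    """
    b = n // 2
    # With replacement the trials are truly independent: (1/2)^3.
    with_repl = Fraction(1, 2) ** 3
    # Without replacement each draw removes one boy from the pool.
    without_repl = Fraction(b, n) * Fraction(b - 1, n - 1) * Fraction(b - 2, n - 2)
    return float(with_repl), float(without_repl)

for n in [20, 30, 100, 10_000]:
    wr, wor = p_three_boys(n)
    print(f"n={n:>6}  sample={3 / n:.2%}  with repl={wr:.2%}  without repl={wor:.2%}")
```

For n = 20 the sample is 15% of the population and the without-replacement probability is about 10.5%, noticeably below 12.5%; for n = 10,000 it is about 12.496%, essentially indistinguishable from the independent case, which is the intuition behind the 10% rule.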
What we're going to do in this video is calculate a typical measure of how well the actual data points agree with a model, in this case a linear model. And there are several names for it. We could consider this to be the standard deviation of the residuals, and that's essentially what we're going to calculate. You could also call it the root mean square error, and you'll see why it's called this, because this really describes how we calculate it. So what we're going to do is look at the residuals for each of these points, and then we're going to find the standard deviation of them. So just as a bit of review, the ith residual is going to be equal to the ith y value for a given x minus the predicted y value for that x. Now when I say y hat right over here, this just says what the linear regression would predict for a given x, and this is the actual y for a given x. So for example, and we've done this in other videos, this is all review: the residual here, when x is equal to one, we have y is equal to one, but what was predicted by the model is 2.5 times one minus two, which is 0.5. So this residual is equal to one minus 0.5, which is equal to 0.5, and it's a positive 0.5. And if the actual point is above the model, you are going to have a positive residual. Now for the residual over here, you also have the actual point being higher than the model, so this is also going to be a positive residual. And once again, when x is equal to three, the actual y is six. The predicted y is 2.5 times three, which is 7.5, minus two, which is 5.5. So here I'll write: residual is equal to six minus 5.5, which is equal to 0.5. So once again, you have a positive residual. Now for this point that sits right on the model, the actual is the predicted. When x is two, the actual is three, and what was predicted by the model is three. So the residual here is equal to the actual, three, minus the predicted, three, so it's equal to zero. And then last but not least, you have this data point, where the residual is going to be the actual, which when x is equal to two is two, minus the predicted. Well, when x is equal to two, you have 2.5 times two, which is equal to five, minus two, which is equal to three. So two minus three is equal to negative one. And so when your actual is below your regression line, you're going to have a negative residual. So this is going to be negative one right over there. Now we can calculate the standard deviation of the residuals. We're going to take this first residual, which is 0.5, and we're going to square it. We're going to add it to the second residual right over here (I'll use this teal color); that's zero, and we're gonna square that. Then we have this third residual, which is negative one, so plus negative one squared. And then finally we have that fourth residual, which is 0.5, so plus 0.5 squared. So once again, we took each of the residuals, which you could view as the distance between the points and what the model would predict, and we are squaring them. When you take a typical standard deviation, you're taking the distance between a point and the mean. Here we're taking the distance between a point and what the model would have predicted. But we're squaring each of those residuals and adding them all up together. And just like we do with the sample standard deviation, we are now going to divide by one less than the number of residuals we just squared and added. So we have four residuals; we're gonna divide by four minus one, which is equal to, of course, three. You could view this part as a mean of the squared errors, and now we're gonna take the square root of it. So let's see, this is going to be equal to the square root of: this is 0.25, this is just zero, this is going to be positive one, and then this 0.5 squared is going to be 0.25, all of that over three. Now this numerator is going to be 1.5. 1.5 over three: 1.5 is exactly half of three, so this is equal to the square root of 0.5, which is one over the square root of two. One divided by the square root of two gets us to, if we round to the nearest thousandth, roughly 0.707. So approximately 0.707. And if you wanted to visualize that, one standard deviation of the residuals below the line would look like this. And one standard deviation above the line, for any given x value, would go one standard deviation of the residuals above it. It would look something like that. And this is obviously just a hand-drawn approximation, but you do see that this does seem to be roughly indicative of the typical residual. Now it's worth noting, sometimes people will say it's the average residual, and it depends how you think about the word average, because we are squaring the residuals. So outliers, things that are really far from the line, are going to have a disproportionate impact here when you square them. If you didn't want that behavior, we could have done something like finding the mean of the absolute residuals; that actually in some ways would have been a simpler one. But this is a standard way of people trying to figure out how much a model disagrees with the actual data. And so you can imagine, the lower this number is, the better the fit of the model.
Standard deviation of residuals or Root-mean-square error (RMSD).mp3
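The whole calculation can be reproduced in a few lines. The points (1, 1), (2, 2), (2, 3), (3, 6) and the model y hat = 2.5x - 2 are the ones the transcript works through:

```python
# Points and regression line from the video: y_hat = 2.5x - 2.
points = [(1, 1), (2, 2), (2, 3), (3, 6)]

def predict(x):
    return 2.5 * x - 2

# Residual = actual y minus predicted y.
residuals = [y - predict(x) for x, y in points]
print(residuals)  # [0.5, -1.0, 0.0, 0.5]

# Sample standard deviation of the residuals: sum of squares over n - 1, then square root.
n = len(residuals)
rmsd = (sum(r ** 2 for r in residuals) / (n - 1)) ** 0.5
print(round(rmsd, 3))  # 0.707
```

The sum of squared residuals is 1.5, dividing by n - 1 = 3 gives 0.5, and the square root is 1/sqrt(2), about 0.707, as computed above.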
And what it tells us is we can start off with any distribution that has a well-defined mean and variance, and if it has a well-defined variance, it has a well-defined standard deviation. And it could be a continuous distribution or a discrete one. I'll draw a discrete one just because it's easier to imagine, at least for the purposes of this video. So let's say I have a discrete probability distribution function, and I want to be very careful not to make it look anything close to a normal distribution, because I want to show you the power of the central limit theorem. So let's say I have a distribution, let's say I can take on values 1 through 6. 1, 2, 3, 4, 5, 6, it's some kind of crazy dice that's very likely to get a 1. Let's say it's impossible, let me make that a straight line.
Central limit theorem Inferential statistics Probability and Statistics Khan Academy.mp3
You have a very high likelihood of getting a 1. Let's say it's impossible to get a 2. Let's say it's an okay likelihood of getting a 3 or a 4.
Let's say it's impossible to get a 5. Let's say it's very likely to get a 6 like that. So that's my probability distribution function.
If I were to draw a mean, this is symmetric, so maybe the mean would be something like that. It would be halfway, so that would be my mean right there. The standard deviation, maybe it would be that far and that far above and below the mean.
But that's my discrete probability distribution function. Now what I'm going to do here, instead of just taking samples of this random variable that's described by this probability distribution function, I'm going to take samples of it, but I'm going to average the samples, and then look at those samples and see the frequency of the averages that I get. And when I say average, I mean the mean.
Let me define something. Let's say my sample size, and I could put any number here, but let's say first off we try a sample size of n is equal to 4. And what that means is I'm going to take 4 samples from this.
So let's say the first time I take 4 samples, so my sample size is 4, let's say I get a 1, let's say I get another 1, and let's say I get a 3, and I get a 6. So that right there is my first sample of sample size 4. I know the terminology can get confusing because this is a sample that's made up of 4 samples.
But when we talk about the sample mean and the sampling distribution of the sample mean, which we're going to talk more and more about over the next few videos, normally the sample refers to the set of samples from your distribution, and the sample size tells you how many you actually took from your distribution. But the terminology can be very confusing because you could easily view one of these as a sample. But we're taking 4 samples from here.
We have a sample size of 4. And what I'm going to do is I'm going to average them. So let's say the mean, I want to be very careful when I say average, the mean of this first sample of size 4 is what?
1 plus 1 is 2, 2 plus 3 is 5, 5 plus 6 is 11, 11 divided by 4 is 2.75. That is my first sample mean for my first sample of size 4. Let me do another one.
My second sample of size 4, let's say that I get a 3, a 4, let's say I get another 3, and let's say I get a 1. I just didn't happen to get a 6 that time. And notice, I can't get a 2 or a 5.
It's impossible for this distribution. The chance of getting a 2 or a 5 is 0, so I can't have any 2s or 5s over here. So for the second sample of sample size 4, my sample mean, so my second sample mean is going to be 3 plus 4 is 7, 7 plus 3 is 10, plus 1 is 11, 11 divided by 4, once again is 2.75.
Let me do one more, because I really want to make it clear what we're doing here. So I do one more. Actually, we're going to do a gazillion more, but let me just do one more in detail.
So let's say my third sample of sample size 4, I get, so I'm going to literally take 4 samples. So my sample is made up of 4 samples from this original crazy distribution. Let's say I get a 1, a 1, and a 6, and a 6.
And so my third sample mean is going to be 1 plus 1 is 2, 2 plus 6 is 8, 8 plus 6 is 14, 14 divided by 4 is what? 3.5. And as I find each of these sample means, so for each of my samples of sample size 4, I figure out a mean.
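As a quick check on the arithmetic, the three sample means worked out above can be computed in a couple of lines of Python (the samples are the ones listed in the text):

```python
# The three samples of size 4 drawn above, and their sample means.
samples = [
    [1, 1, 3, 6],  # first sample:  11 / 4 = 2.75
    [3, 4, 3, 1],  # second sample: 11 / 4 = 2.75
    [1, 1, 6, 6],  # third sample:  14 / 4 = 3.5
]
sample_means = [sum(s) / len(s) for s in samples]
print(sample_means)  # [2.75, 2.75, 3.5]
```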
And as I do each of them, I'm going to plot it on a frequency distribution. And this is all going to amaze you in a few seconds. So I plot this all on a frequency distribution.
So I say, okay, on my first sample, my first sample mean was 2.75. So I'm plotting the actual frequency of the sample means I get for each sample. So 2.75, I got it one time, so I'll put a little plot there.
So that's from that one right there. And the next time, I also got a 2.75. That's a 2.75 there, so I got it twice.
So I'll plot the frequency right there. Then I got a 3.5, so all the possible values, I could have a 3, I could have a 3.25, I could have a 3.5. So then I have the 3.5, so I'll plot it right there.
And what I'm going to do is I'm going to keep taking these samples. Maybe I'll take 10,000 of them. So I'm going to keep taking these samples.
So I go all the way to S, you know, 10,000. I just do a bunch of these. And what it's going to look like over time is each of these, I'm going to make it a dot because I'm going to have to zoom out.
So if I look at it like this, over time, still it's all the values that it might be able to take on. You know, 2.75 might be here. So this first dot is going to be right there.
And that second one is going to be right there. And that one at 3.5 is going to look right there. But I'm going to do it 10,000 times because I'm going to have 10,000 dots.
And let's say as I do it, I'm going to just keep plotting them. I'm just going to keep plotting the frequencies, over and over and over again.
And what you're going to see is as I take many, many samples of size 4, I'm going to have something that's going to start kind of approximating a normal distribution. So each of these dots represents an incidence of a sample mean. So as I keep adding on this column right here, that means I kept getting the sample mean 2.75.
So over time, I'm going to have something that's starting to approximate a normal distribution. And that is the neat thing about the central limit theorem. So the central limit theorem, this was the case for, so in orange, that's the case for N is equal to 4.
This was for a sample size of 4. Now, what if I did the same thing with a sample size of maybe 20? So in this case, instead of just taking 4 samples from my original crazy distribution, every sample, I take 20 instances of my random variable and I average those 20, and then I plot the sample mean on here.
So in that case, I'm going to have a distribution that looks like this. And we'll discuss this in more videos. But it turns out if I were to plot 10,000 of the sample means here, I'm going to have something that, two things, it's going to even more closely approximate a normal distribution, and we're going to see in future videos it's actually going to have a smaller, well, let me be clear, it's going to have the same mean.
So that's the mean, this is going to have the same mean. But it's going to have a smaller standard deviation. So I want to, well, I should plot these from the bottom because you kind of stack it.
One, you get one, then another instance, then another instance. But this is going to more and more approach a normal distribution. So the reality is, and this is what's super cool about the central limit theorem, as your sample size becomes larger, or you can even say as it approaches infinity, but you really don't have to get that close to infinity to really get close to a normal distribution.
So if you have a sample size of 10 or 20, you're already getting very close to a normal distribution. In fact, about as good an approximation as we see in our everyday life. What's cool is we can start with some crazy distribution.
This has nothing to do with a normal distribution. But if we have a sample size, this was n equals 4, but if we have a sample size of n equals 10 or n equals 100, and we were to take 100 of these instead of 4 here and average them and then plot that average, the frequency of it, then we would take 100 again, average them, take the mean, plot that again. And if we were to do that a bunch of times, in fact, if we were to do that an infinite number of times, we would find that, especially if we had an infinite sample size, we would find a perfect normal distribution.
That's the crazy thing. And it doesn't apply just to taking the sample mean. Here we took the sample mean every time, but you could have also taken the sample sum.
The central limit theorem would have still applied. But that's what's so super useful about it. Because in life, there's all sorts of processes out there, proteins bumping into each other, people doing crazy things, humans interacting in weird ways, and you don't know the probability distribution functions for any of those things.
But what the central limit theorem tells us is that if we add a bunch of those actions together, assuming that they all have the same distribution, or if we were to take the mean of all of those actions together, and if we were to plot the frequency of those means, we do get a normal distribution. And that's frankly why the normal distribution shows up so much in statistics, and why, frankly, it's a very good approximation for the sum or the means of a lot of processes.
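The whole experiment is easy to simulate. Below is a minimal sketch in Python; the weights on 1, 3, 4 and 6 are my own guesses at the "crazy" distribution drawn in the video (2 and 5 are impossible), so only the shape of the result matters, not the exact numbers:

```python
import random
from statistics import mean, pstdev

random.seed(0)

# A rough stand-in for the distribution in the video: 1 and 6 very likely,
# 3 and 4 okay, 2 and 5 impossible. (Assumed weights, symmetric around 3.5.)
values  = [1, 2, 3, 4, 5, 6]
weights = [0.35, 0.0, 0.15, 0.15, 0.0, 0.35]

def sample_mean(n):
    """Draw one sample of size n and return its sample mean."""
    return mean(random.choices(values, weights=weights, k=n))

# 10,000 sample means for sample sizes n = 4 and n = 20.
means_4  = [sample_mean(4)  for _ in range(10_000)]
means_20 = [sample_mean(20) for _ in range(10_000)]

# Both sampling distributions center on the population mean (3.5 here,
# by symmetry), but the n = 20 one has a smaller standard deviation.
print(mean(means_4),  pstdev(means_4))
print(mean(means_20), pstdev(means_20))
```

With these weights the population standard deviation is about 2.11, so the standard deviations of the sample means come out near 2.11/√4 ≈ 1.05 and 2.11/√20 ≈ 0.47: same center, tighter spread as n grows, and a histogram of either list looks close to a bell curve even though the underlying distribution looks nothing like one.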
So you can ignore the question right here. You can ignore all of this. I'm just using that same data to come up with a 95% confidence interval for the actual mean emission for this new engine design. So we want to find a 95% confidence interval. And as you can imagine, because we only have 10 samples right here, we're going to want to use a t distribution. And right down here, I have a t table. And we want a 95% confidence interval.
T-statistic confidence interval Inferential statistics Probability and Statistics Khan Academy.mp3
So we want to think about the range of t values that 95% of t values will fall under. So let's think about it this way. So let me draw a t distribution.
Let me draw a t distribution right over here. So a t distribution looks very similar to a normal distribution, but it has fatter tails. This end and this end will be fatter than in a normal distribution.
And then we want to find an interval. So this is a normalized t distribution. The mean is going to be 0.
And we want to find an interval of t values between some negative value here and some positive value here that contains 95% of the probability. So this right here has to be 95%. And to figure out what these critical t values are at this end and this end, we can just use a t table.
And we're going to use the two-sided version of this because we're symmetric around the center. So you look at the two-sided. We want a 95% confidence interval.
So we're going to look right over here, 95% confidence interval. We have 10 data points, which means we have 9 degrees of freedom. So 9 degrees of freedom for our 10 data points.
We just took 10 minus 1. So if we look over here, for a t distribution with 9 degrees of freedom, 95% of the probability is going to be contained within a t value of, so the t value is going to be between negative. So this value right here is 2.262.
And this value right here is negative 2.262. That's what this right here tells us. That if you contain all the values that are less than 2.262 away from the center of your t distribution, you will contain 95% of the probability.
So that is our t distribution right there. Let me make it very clear. This is our t distribution.
So if you randomly pick a t value from this t distribution, it has a 95% chance of being within this far from the mean. Or maybe we should write it this way. If I pick a random t value, if I take a random t statistic, there's a 95% chance that a random t statistic is going to be less than 2.262 and greater than negative 2.262.
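Putting the pieces together, the interval is the sample mean plus or minus 2.262 standard errors. Here's a minimal sketch in Python; the ten emission readings are stand-in numbers, since the actual data isn't shown in this excerpt:

```python
import math
from statistics import mean, stdev

# Hypothetical sample of 10 emission readings (made-up numbers; the
# actual exercise data isn't reproduced in this transcript excerpt).
data = [15.6, 16.2, 22.5, 20.5, 16.4, 19.4, 16.6, 17.9, 12.7, 13.9]

n = len(data)
x_bar = mean(data)  # sample mean
s = stdev(data)     # sample standard deviation (n - 1 in the denominator)

# Critical t value for a 95% two-sided interval with n - 1 = 9 degrees
# of freedom, read straight off the t table as in the video.
t_star = 2.262

# 95% confidence interval: x_bar +/- t* . s / sqrt(n)
margin = t_star * s / math.sqrt(n)
low, high = x_bar - margin, x_bar + margin
print(f"95% CI: ({low:.2f}, {high:.2f})")
```

If SciPy is available, `scipy.stats.t.ppf(0.975, df=9)` reproduces the table value 2.262; with only 10 data points this t interval is noticeably wider than the z interval you'd get from the normal table's 1.96.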