And you could say, what's the probability that we have more than 4 inches of rain tomorrow? Then you would start here, and you would calculate the area under the curve all the way to infinity, if the curve has area all the way to infinity. And hopefully that's not an infinite number, right? Then your probability wouldn't make any sense. But hopefully if you take this sum, it comes to some number, and we'll say, oh, there's only a 10% chance that you have more than 4 inches tomorrow. And all of this should immediately turn on a light bulb in your head: the probability of all of the events that might occur can't be more than 100%, right? All of the events combined have a probability of 1 that one of them will occur.
Probability density functions Probability and Statistics Khan Academy.mp3
So essentially, the whole area under this curve has to be equal to 1. So if we took the integral of f of x from 0 to infinity, at least as I've drawn it, it should be equal to 1, for those of you who have studied calculus. For those of you who haven't, an integral is just the area under a curve, and you can watch the calculus videos if you want to learn a little bit more about how to do them.
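The normalization requirement described here can be checked numerically. A sketch in Python, assuming a hypothetical exponential curve as the rainfall density (the video never pins down a specific f, so the function and the rate 0.5 are invented for illustration):

```python
import numpy as np

# Hypothetical density standing in for the unspecified rainfall curve:
# f(x) = lam * exp(-lam * x) for x >= 0, with lam = 0.5.
lam = 0.5
x = np.linspace(0.0, 60.0, 200_001)  # 60 is "far enough out" for this curve
dx = x[1] - x[0]
f = lam * np.exp(-lam * x)

# Total area under the curve -- must come out to 1 for a valid density.
total_area = np.sum(f) * dx

# P(more than 4 inches of rain): area under the curve from 4 onward.
p_more_than_4 = np.sum(f[x >= 4.0]) * dx

print(total_area)     # very close to 1
print(p_more_than_4)  # close to exp(-2), about 0.135
```

Summing the area from 4 outward gives the "more than 4 inches" probability; summing over the whole range gives 1, as any valid density must.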
And this also applies to discrete probability distributions. Let me draw one. The sum of all of the probabilities has to be equal to 1.
In that example with the dice, or let's say, since it's faster to draw, the coin, the two probabilities have to add to 1. So if this is 1 and this is 0, where x is equal to 1 if we get heads or 0 if we get tails, each of these would be 0.5. Or they don't have to be 0.5, but if one was 0.6, the other one would have to be 0.4.
They have to add to 1. You can't have a 60% probability of getting heads and then a 60% probability of getting tails as well, because then you would have essentially a 120% probability of either of the outcomes happening, which makes no sense at all. So it's important to realize that for a probability density function, or a probability distribution function in this case for a discrete random variable, the probabilities all have to add up to 1.
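The discrete version of the same constraint is easy to state in code; here is the 0.6/0.4 coin from the example:

```python
# A discrete distribution over coin outcomes: x = 1 for heads, 0 for tails.
# Any valid assignment of probabilities must sum to exactly 1.
coin = {1: 0.6, 0: 0.4}  # if heads is 0.6, tails is forced to be 0.4

total = sum(coin.values())
print(total)  # 1.0

# 0.6 and 0.6 would be invalid: a "120%" total makes no sense.
invalid = {1: 0.6, 0: 0.6}
print(sum(invalid.values()))  # 1.2 -- not a valid distribution
```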
It's 0.5 plus 0.5. And in this case, the area under the probability density function also has to be equal to 1. Anyway, I'm all out of time for now.
All right, so where we left off, we had simplified our algebraic expression for the squared error to the line from the n data points. And we visualized that this expression right here would be a surface in three dimensions, where any m and b gives a point on that surface representing the squared error for that line. And our goal is to find the m and the b, which would define an actual line, that minimize the squared error. And the way that we do that is we find the point where the partial derivative of the squared error with respect to m is 0, and the partial derivative with respect to b is also equal to 0. So it's flat with respect to m. That means the slope in this direction is going to be 0. Let me do it in the same color.
Proof (part 3) minimizing squared error to regression line Khan Academy.mp3
So the slope in this direction, the partial derivative with respect to m, is going to be 0. The surface is not going to change in that direction. And the partial derivative with respect to b is also going to be 0.
So it will be a flat point right over there. The slope at that point in that direction will also be 0, and that is our minimum point. Let's figure out the m and b that give us this.
So if I were to take the partial derivative of this expression with respect to m: well, this first term has no m in it, so it's a constant from the point of view of m. And just as a reminder, taking a partial derivative is just like taking a regular derivative, except you assume that everything other than the variable you're differentiating with respect to is a constant. So in this expression, all the x's, the y's, the b's, and the n's are constants. The only variable that matters when we take the partial derivative with respect to m is m itself. So this first term is a constant; there's no m here.
This term right over here, we're taking its derivative with respect to m. The derivative of this with respect to m is just the coefficient on the m: negative 2 times n times the mean of the xy's. That's the partial of this term with respect to m. And then this term right here has no m's in it, so it's constant with respect to m, and its partial derivative with respect to m is 0. Then this term here, you have n times the mean of the x squareds times m squared. So taking the partial derivative with respect to m, it's going to be plus 2 times n times the mean of the x squareds times m, right?
The derivative of m squared is 2m, and then you just have this coefficient there as well. Now this term also has an m in it, so let's see. Everything else is just a coefficient on this m, so the derivative with respect to m is 2bn times the mean of the x's.
If I took the derivative of 3m, the derivative is just 3, the coefficient on it. And then finally, this last term is a constant with respect to m, so it drops out. So this is the partial derivative with respect to m, that's that right over there, and we want to set it equal to 0.
Now let's do the same thing with respect to b. This first term, once again, is a constant from the perspective of b. There's no b here and no b over here, so the partial derivative of either of these with respect to b is 0.
Then over here, you have negative 2n times the mean of the y's as the coefficient on a b. So the partial derivative with respect to b is going to be negative 2n times the mean of the y's. And then there's no b over here, but we do have a b over here, so it's plus 2mn times the mean of the x's.
This is essentially the coefficient on the b over here. It was written in a mixed order, but all of these factors are constants from the point of view of b.
They're the coefficient in front of the b, and the partial derivative of that term with respect to b is just that coefficient. And then finally, the partial derivative of this last term with respect to b is going to be 2nb.
You could write it as 2bn or 2nb; either way, that's the partial derivative of that term with respect to b, and we want to set this equal to 0. So it looks very complicated, but remember, we're just trying to solve for m and b.
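Both partial derivatives can be double-checked symbolically. A sketch with SymPy, writing the squared-error expression from the earlier videos with placeholder symbols for n and the various means (the symbol names here are my own shorthand):

```python
import sympy as sp

# m and b are the unknowns; n and the means are treated as constants.
m, b, n = sp.symbols('m b n')
mean_y2, mean_xy, mean_y, mean_x2, mean_x = sp.symbols('ybar2 xybar ybar xbar2 xbar')

# Squared error of the line, as simplified in the previous videos:
# SE = n*mean(y^2) - 2mn*mean(xy) - 2bn*mean(y)
#      + m^2*n*mean(x^2) + 2mbn*mean(x) + n*b^2
SE = (n*mean_y2 - 2*m*n*mean_xy - 2*b*n*mean_y
      + m**2*n*mean_x2 + 2*m*b*n*mean_x + n*b**2)

dSE_dm = sp.expand(sp.diff(SE, m))
dSE_db = sp.expand(sp.diff(SE, b))

print(dSE_dm)
print(dSE_db)
```

The printed results match the transcript's derivation: the partial with respect to m is -2n·mean(xy) + 2n·mean(x²)·m + 2bn·mean(x), and the partial with respect to b is -2n·mean(y) + 2mn·mean(x) + 2nb.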
And we have two equations with two unknowns here: we have the m's and we have the b's. And to simplify this, both sides of both equations, the top one and the bottom one, are divisible by 2n.
I mean, 0 is divisible by anything; it'll just be 0. So let's divide the top equation by 2n and see what we get.
If we divide the top equation by 2n, we can even see it here: this will become just 1, that goes away, and those go away, and you are left with the negative of the mean of the xy's.
Plus m times the mean of the x squareds, plus b times the mean of the x's, is equal to 0. That's this first expression when you divide both sides by 2n. And then for the second expression, this will go away.
This is when you divide it by 2n, not negative 2n. When you divide by 2n, you get this.
And when you divide this one by 2n, that'll go away, that will go away, and those will go away. And you're just left with the negative of the mean of the y's, plus m times the mean of the x's, plus b, is equal to 0. So if we find the m and b values that satisfy this system of equations, we will have minimized the squared error.
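With the system in hand, solving it on made-up data shows it really does give the least-squares line; here it is checked against NumPy's own line fit (the data points are invented for illustration):

```python
import numpy as np

# Toy data; the point is the system of equations, not the numbers.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.9, 5.1, 6.0, 7.2])

# The two equations from setting the partials to zero (after dividing by 2n):
#   m*mean(x^2) + b*mean(x) = mean(x*y)
#   m*mean(x)   + b         = mean(y)
A = np.array([[np.mean(x**2), np.mean(x)],
              [np.mean(x),    1.0]])
rhs = np.array([np.mean(x*y), np.mean(y)])
m, b = np.linalg.solve(A, rhs)

# Same answer as NumPy's least-squares line fit:
m_ref, b_ref = np.polyfit(x, y, 1)
print(m, b)
print(m_ref, b_ref)
```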
And we could just solve it in a traditional way, but I want to rewrite this, because I think it's interesting to see what these equations really represent. So let's add the mean of the xy's to both sides of this top equation. What do we get?
We get m times the mean of the x squareds, plus b times the mean of the x's, is equal to the mean of the xy's, since these are going to cancel out. That's the top equation. For this bottom equation right here, let's add the mean of the y's to both sides.
And I do that so that that term cancels out. And then we're left with, let me do that in the blue color to show it's the same equation, m times the mean of the x's, plus b, is equal to the mean of the y's.
Now, I actually want to get both of these into mx plus b form. This one is actually already there. So you can see that if our best fitting line is going to be y equals mx plus b, we still have to find the m and the b.
But because the m and the b that satisfy both of these equations are going to be the m and the b of that best fitting line, we can see that the best fitting line actually contains a particular point.
And we get this from the second equation right here. It contains the point, or I should write it this way: the coordinate (mean of x, mean of y) lies on the line.
And you can see it right over here: if you put the mean of x in here, with the optimal m and b, you're going to get the mean of the y's. So that's interesting.
This optimal line, and let's never forget what we're even trying to do, is going to contain some point on it. Let me do that in a new color.
It's going to contain a point right here whose coordinates are the mean of all of the x values and the mean of all of the y values. That's just interesting, and it kind of makes sense.
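This fact is easy to verify numerically: with any made-up data set, the least-squares line passes through (mean of x, mean of y). The data here are invented for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.9, 5.1, 6.0, 7.2])
m, b = np.polyfit(x, y, 1)  # least-squares line fit

# Plugging mean(x) into the fitted line returns mean(y):
print(m * np.mean(x) + b)
print(np.mean(y))
```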
It makes intuitive sense. Now, this other equation, let's get it into the same point of view. It'll actually become an easier way to solve the system.
You could solve this a million different ways, but just to give us an intuition of what's even going on here: what's another point that's on that line?
Because if you have two points on a line, you know what the equation of the line is going to be. Well, to find the other point, if we want this to be in mx plus b form, let's divide both sides of this equation by this term right here, the mean of the x's.
And if we do that, we get m times the mean of the x squareds divided by the mean of the x's, plus b, is equal to the mean of the xy's divided by the mean of the x's. And when you write it in this form, it's the exact same equation as that; I just divided both sides by the mean of the x's.
You get another interesting point that will lie on this optimal fitting line, at least from the point of view of the squared distances. So another point that will lie on this optimal line is going to be the point whose x value is this.
So it's going to be the coordinate whose x value is the mean of the x squareds divided by the mean of the x's, and whose y value is the mean of the xy's divided by the mean of the x's. And I'll let you think about that a little bit more.
But already, these are actually two points that lie on the line, the best fitting line, based on how we're measuring a good fit, which is the squared distance.
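The second point can be checked the same way, again on invented data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.9, 5.1, 6.0, 7.2])
m, b = np.polyfit(x, y, 1)  # least-squares line fit

# The second point claimed to lie on the optimal line:
# (mean(x^2) / mean(x), mean(x*y) / mean(x))
px = np.mean(x**2) / np.mean(x)
py = np.mean(x*y) / np.mean(x)
print(m * px + b)
print(py)  # the same value if the point is on the line
```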
These are on the line that minimizes that squared distance. This is turning into a six or seven video saga on proving, or finding, the formula for the best fitting line. But it's interesting.
There's all sorts of neat little mathematical things to ponder over here. But in the next video, we can actually use this information; we could have just solved the system straight up.
Let's say that you're curious about studying the dimensions of the cars that happen to sit in the parking lot. And so you measure their lengths. And let's just make the computation simple: let's say that there are five cars in the parking lot. The entire size of the population that we care about is five. And you go and measure their lengths. One car is 4 meters long.
Population standard deviation Descriptive statistics Probability and Statistics Khan Academy.mp3
Another car is 4.2 meters long. Another car is 5 meters long. The fourth car is 4.3 meters long.
And then let's say the fifth car is 5.5 meters long. So let's come up with some parameters for this population. The first one that you might want to figure out is a measure of central tendency.
And probably the most popular one is the arithmetic mean. So let's calculate that first. And we're going to do that for the population.
So we're going to use mu. So what is the arithmetic mean here? Well, we just have to add all of these data points up and divide by 5.
And I'll just get the calculator out so it's a little bit quicker. So this is going to be 4 plus 4.2 plus 5 plus 4.3 plus 5.5, and then I'm going to take that sum and divide by 5.
And I get an arithmetic mean for my population of 4.6. So that's fine. And if we want to put some units there, it's 4.6 meters.
Now, that's the central tendency, or a measure of central tendency. We also might be curious about how dispersed the data is, especially from that central tendency. So what would we use?
Well, we already have a tool at our disposal, the population variance. The population variance is one of many ways of measuring dispersion, and it has some very neat properties, which is why defining it as the mean of the squared distances from the mean tends to be a useful way of doing it.
So let's just do that. Let's actually calculate the population variance for this population right over here. Well, all we need to do is find the distance from each of these points to our mean right over here, then square them, and then take the mean of those squared distances.
So it's going to be 4 minus 4.6 squared, plus 4.2 minus 4.6 squared, plus 5 minus 4.6 squared, plus 4.3 minus 4.6 squared, and then finally, I'm running out of space, plus 5.5 minus 4.6 squared.
And then we're going to divide all of that by 5 to get our population variance. And so what's that going to give us? Let's get our calculator out.
So 4 minus 4.6 squared, that's negative 0.6 squared, which is going to be the exact same thing as 0.6 squared. So let me write that as 0.6 squared. Plus, 4.2 minus 4.6 is negative 0.4.
But when we square it, the negative is going to disappear, so I'll just write plus 0.4 squared.
And then we have 5 minus 4.6. That's 0.4, so plus 0.4 squared. 4.3 minus 4.6, that's negative 0.3.
The negative goes away when you square it. So it's going to be plus 0.3 squared. And then finally, 5.5 minus 4.6 is going to be 0.9.
So plus 0.9 squared. And then we will divide by the number of data points we have. And we get 0.316.
Now let me ask you what is a mildly interesting question: what would be the units?
What would be the units for this population variance, since we happen to care about units in this video? Well, up here, this is 4 meters minus 4.6 meters.
4.2 meters minus 4.6 meters. So these are all measurements in meters.
We saw it up here. When you subtract them, you'll get meters.
But when you square them, you get meters squared plus meters squared plus meters squared plus meters squared plus meters squared. And then you're just dividing that by a unitless count of the number of data points you have. So the units here are going to be square meters.
And so you might say, hey, that's kind of a weird unit. If we're trying to visualize or think about how dispersed we are from the mean, when I visualize it, I visualize dispersion, or how varied the data is, in terms of meters, not meters squared. So what could we do?
And a big hint comes out of even the notation for variance: it's this sigma symbol squared. So why don't we just take the square root of it?
So let's take the square root of our variance, which we will denote with just a sigma. Makes a lot of sense. And in this case, what's it going to be?
It's going to be the square root of 0.316. And then what are the units going to be? It's going to be just meters.
And so let me take the square root of 0.316; I get 0.56-something, and I'll just round to the nearest thousandth.
0.562. So it's approximately 0.562 meters. So you might be saying, Sal, what do we call this thing that we just did, the square root of the variance?
And here we're dealing with the population. We haven't thought about sampling yet. The square root of the population variance, what do we call this thing right over here?
And this is a very familiar term. Oftentimes when you take an exam, this is calculated for the scores on the exam.
Let me do this in a new color. This is our population standard deviation.
It is a measure of how much the data varies from the mean. In general, the larger this value, the more the data varies from the population mean; the smaller it is, the less the data varies.
And these are all somewhat arbitrary definitions of how we've defined variance. We could have taken things to the fourth power, or we could have not taken them to a power at all, but taken the absolute value here. The reason why we do it this way is that it has neat statistical properties as we try to build on it. But that's the population standard deviation, which gives us nice units: meters.
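As a sanity check, Python's standard-library statistics module implements these same population formulas, so it should reproduce the hand calculation:

```python
import statistics

lengths = [4, 4.2, 5, 4.3, 5.5]

print(round(statistics.mean(lengths), 3))       # 4.6
print(round(statistics.pvariance(lengths), 3))  # 0.316
print(round(statistics.pstdev(lengths), 3))     # 0.562
```

Note that `pvariance` and `pstdev` divide by the number of data points, matching the population formulas here; `variance` and `stdev` (without the `p`) are the sample versions, which divide by one less.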
And it's a pretty, hopefully you'll find, straightforward idea. If you're doing a trial, something that is probabilistic, a trial or an experiment, a sample space is just the set of the possible outcomes. So a very simple trial might be a coin flip. If you're talking about a coin flip, well then the sample space is going to be the set of all the possible outcomes: you could get a heads, or you could get a tails. That right over here is the sample space for the coin flip. And it's very useful because, for example, if these are equally likely outcomes, you could say, well, what's the probability of the event of a heads?
Compound sample spaces Statistics and probability 7th grade Khan Academy.mp3
You say, okay, that's one out of the two equally likely outcomes. Or once you know all the possible outcomes, even if they aren't equally likely, you could say, well, let's create a probability distribution. We at least know what the sample space is.
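That "one out of the two equally likely outcomes" reasoning is easy to mirror in code: for equally likely outcomes, the probability of an event is just the number of favorable outcomes over the size of the sample space. A minimal sketch (the names are mine):

```python
# Sample space for one flip of a fair coin
sample_space = ["heads", "tails"]

# Event: the flip comes up heads
favorable = [outcome for outcome in sample_space if outcome == "heads"]

p_heads = len(favorable) / len(sample_space)
print(p_heads)  # 0.5
```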
We know what the possible outcomes are; now let's think about the probability of each of those outcomes. But a lot of times when people talk about sample spaces, they tend to be most useful, I would say, when you have equally likely outcomes, like in the case of a fair coin flip.
Because then, from the sample space, it's fairly straightforward to think about the probability of various events. But this is a simple sample space right over here, so let's make things a little bit more interesting and put this aside a little bit.
Let's imagine a world where there's a bakery, and at that bakery, there are three flavors of cupcakes, but there are also three different sizes of cupcakes.
So now we're essentially looking at two different ways in which the thing that we're going to be sampling can vary. So let me write this down. We have our flavors of cupcakes at this bakery: let's say that you have chocolate.
You have, let's say, strawberry. And let's say that there is vanilla.
And they come in three different sizes: small, medium, or large.
And let's say each of these flavors comes in each of these sizes, or, put the other way around, each of these sizes comes in all three flavors. So now how do you construct the sample space?
Say I'm going to blindfold myself and walk into this bakery, and somehow randomly pick up a cupcake, and my fingers can't tell the flavor or the size of the cupcake. What are the possible outcomes for the cupcake I'll pick? The outcome would be both the flavor and the size of the cupcake. Well, there's a bunch of ways to think about this.
Well, there's a bunch of ways to think about this. One way is you could draw a tree. You could say, okay, well, I'm going to pick three different flavors. I could either pick chocolate, chocolate. I'm going to pick strawberry, strawberry. Or I'm going to pick vanilla, vanilla. And then for each of those flavors, I'm going to pick a small, medium, or large.
Compound sample spaces Statistics and probability 7th grade Khan Academy.mp3
Or I could pick strawberry, or I could pick vanilla. And then for each of those flavors, I'm going to pick a small, medium, or large.
So you could say small, medium, or large under each flavor, and so this is a small chocolate, this is a medium chocolate, this is a large chocolate. This is a small strawberry, medium strawberry, large strawberry.
This is a small vanilla, medium vanilla, large vanilla. And so you see there's nine possible outcomes. Once again, this is a medium chocolate.
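The tree diagram is exactly what itertools.product walks through: every flavor paired with every size. A small sketch of building this sample space (the list names are my own):

```python
from itertools import product

flavors = ["chocolate", "strawberry", "vanilla"]
sizes = ["small", "medium", "large"]

# Each outcome is a (flavor, size) pair, one leaf of the tree diagram
sample_space = list(product(flavors, sizes))

print(len(sample_space))  # 9
print(sample_space[1])    # ('chocolate', 'medium')
```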
You picked a chocolate and it was a medium one. This is a large vanilla. You picked a vanilla and it is a large one.
And you could have done it the other way around. You could have said, okay, I'm going to either pick a small, medium, or large, and then for each of those, I'm going to pick either a chocolate, strawberry, or vanilla. And I'll just use the first letters, so I'm either going to pick a chocolate, a strawberry, or a vanilla.
When I write the S over here in this magenta color, I'm talking about the flavor, and if I write the S in green, I'm talking about small. So here, if you have a medium cupcake, it could be chocolate, it could be strawberry, or it could be vanilla.
If you have a large cupcake, it could be chocolate, strawberry, or vanilla. So for example, this was a medium chocolate cupcake. Over here, a medium chocolate cupcake is this one.
It's medium chocolate; it would be that one over here. So you could use a tree diagram like this to think about the sample space, to think about the nine possible outcomes here.
So you could use these, I have a tree diagram like this, to think about the sample space, to think about the nine possible outcomes here. But you could also do a, I guess you could say a grid, where you could write the flavors. So you could have chocolate, actually let me just write the, hold on, let me write them out. Actually, let me just write the letters, it's gonna take a long time to do. So you could have the flavors, chocolate, strawberry, vanilla. So that's along that axis. And then you have your sizes.
Compound sample spaces Statistics and probability 7th grade Khan Academy.mp3
Actually, let me just write the letters, it's gonna take a long time to do. So you could have the flavors, chocolate, strawberry, vanilla. So that's along that axis. And then you have your sizes. You could have a small, a medium, or large. And you can set up a grid here. So this is another way to do it.
Compound sample spaces Statistics and probability 7th grade Khan Academy.mp3
And then you have your sizes. You could have a small, a medium, or large. And you can set up a grid here. So this is another way to do it. And notice this grid has nine boxes. So let's look at it. So set up the grid.
Compound sample spaces Statistics and probability 7th grade Khan Academy.mp3
And notice this grid has nine boxes. So let's look at it and set up the grid.
And so what is this one going to be? This is a small chocolate. What is this one?
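The grid being filled in here can be sketched as a nested loop: one row per size, one column per flavor, nine boxes in all (again, the names are mine, not the video's):

```python
flavors = ["chocolate", "strawberry", "vanilla"]
sizes = ["small", "medium", "large"]

# One row per size, one column per flavor: the nine boxes of the grid
grid = [[f"{size} {flavor}" for flavor in flavors] for size in sizes]

for row in grid:
    print(row)
# The top-left box is 'small chocolate', the first one filled in above
```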