So you have a minus 2y, just like that. And so this simplifies to 2y minus minus 2y — that's 2y plus 2y; I'm just subtracting a negative. So this inside, and just to save space — I don't want to have to rewrite the boundaries — this inside is just 4y. So that right there is the same thing as 4y.
|
Green's theorem example 1 Multivariable Calculus Khan Academy.mp3
|
The partial of q with respect to x, 2y, minus the partial of p with respect to y, which is minus 2y. You subtract a negative, you get a positive. You have 4y.
So let's take the antiderivative of the inside with respect to y. And we're going to get 2y squared. Let me do it a little bit lower.
We're going to get 2y squared. If you take the partial derivative of this with respect to y, you're going to get 4y. And we're going to evaluate that from y is equal to 2x squared to y is equal to 2x.
And of course, we still have the outside integral there: x goes from 0 to 1, dx. This thing is going to be equal to the integral from 0 to 1.
And then we evaluate it first at 2x. So you put 2x in here. 2x squared is 4x squared, right?
2 squared x squared. So 4x squared times 2 is going to be 8x squared, minus — put this guy in there — 2x squared, squared, is 4x to the fourth, times 2 is 8x to the fourth.
Did I do that right? 2x squared — I'm going to put it in there for y, substitute y with it — that squared is 4x to the fourth, times 2 is 8x to the fourth.
Very good. All right. Now dx — this is just a straightforward one-dimensional integral.
This is going to be equal to: the antiderivative of 8x squared is 8 thirds x to the third, and then the antiderivative of 8x to the fourth gives us minus 8 fifths x to the fifth. And we're going to have to evaluate that from 0 to 1.
I'll give it a little line there. And when you put 1 in there — I'll do it in a different color — we get 8 fifths times 1 to the third, which is 8 fifths, minus 8 fifths.
And then we're going to have minus — when you put 0 in here, you're just going to get a bunch of 0's. Oh, sorry, I made a mistake.
It's not 8 fifths, it's 8 thirds: 8 thirds times 1 to the third, minus 8 fifths times 1 to the fifth.
So that's minus 8 fifths. And then when you subtract the 0 — you evaluate 0 here, you're just going to get a bunch of 0's — you don't have to do anything else.
So now we just have to subtract these two fractions. So let's get a common denominator of 15. 8 thirds is the same thing if we multiply the numerator and denominator by 5.
That is 40 fifteenths. And if we multiply this numerator and denominator by 3, that's going to be 24 over 15. So minus 24 over 15.
And we get it being equal to 16 over 15. So using Green's theorem, we were able to find the answer to this integral up here: it's equal to 16 fifteenths.
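As a sanity check on the arithmetic, the iterated integral ∫₀¹ ∫ from 2x² to 2x of 4y dy dx can be evaluated with exact fractions. A minimal sketch in Python — the integrand and bounds come from the worked example; the variable names are my own:

```python
from fractions import Fraction

# Inner integral: the antiderivative of 4y is 2y^2, evaluated from
# y = 2x^2 to y = 2x, which gives 2*(2x)^2 - 2*(2x^2)^2 = 8x^2 - 8x^4.
# Outer integral: the integral of 8x^2 - 8x^4 from 0 to 1 is 8/3 - 8/5.
result = Fraction(8, 3) - Fraction(8, 5)
print(result)  # 16/15

# A crude midpoint Riemann sum over [0, 1] agrees with the exact value:
n = 2000
total = 0.0
for i in range(n):
    x = (i + 0.5) / n
    total += (8 * x**2 - 8 * x**4) / n
print(round(total, 4))  # ≈ 1.0667, i.e. 16/15
```

The exact-fraction path mirrors the hand computation step for step, while the Riemann sum checks the whole setup independently of the antiderivatives.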
So far, when I've talked about the gradient of a function — and let's think about this as a multivariable function with just two inputs, those are the easiest to think about, so maybe it's something like x squared plus y squared, a very friendly function — I've left open a mystery. We have the way of computing it, and the way that you think about computing it is you just take this vector and you throw the partial derivatives in there, the partial with respect to x and the partial with respect to y. And if it was a higher-dimensional input, then the output would have as many components as you need; if it was f of x, y, z, you'd have partial x, partial y, partial z. And this is the way to compute it. But then I gave you a graphical intuition.
|
Why the gradient is the direction of steepest ascent.mp3
|
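For the example function f(x, y) = x² + y², "throwing the partials into a vector" gives the gradient (2x, 2y). A minimal sketch — the function is from the discussion, and the helper names and the finite-difference cross-check are my own:

```python
def grad_f(x, y):
    # Gradient of f(x, y) = x^2 + y^2: the vector of partials (2x, 2y).
    return (2 * x, 2 * y)

def numeric_grad(f, x, y, h=1e-6):
    # Central finite differences, to confirm the hand-computed partials.
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

def f(x, y):
    return x**2 + y**2

print(grad_f(1.0, 2.0))           # (2.0, 4.0)
print(numeric_grad(f, 1.0, 2.0))  # approximately (2.0, 4.0)
```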
I said that it points in the direction of steepest ascent. And maybe the way you think about that is you have your input space, which in this case is the xy plane, and you think of it as somehow mapping over to the number line, to your output space. And if you have a given point somewhere, the question is: of all the possible directions that you can move away from this point — this point will land somewhere on the function, and as you move in the various directions, maybe one of them nudges your output a little bit, one of them nudges it a lot, one of them slides it negative, one of them slides it negative a lot.
Which one of these directions results in the greatest increase to your function? And this was the loose intuition. If you want to think in terms of graphs, we could look over at the graph of f, of x squared plus y squared, and this is the gradient field.
All of these vectors in the xy plane are the gradients, and if you kind of look from below, you can maybe see why each one of these points in the direction you should move to walk uphill on that graph as fast as you can. You know, if you're a mountain climber and you want to get to the top as quickly as possible, these tell you the direction that you should move to get there as quickly as you can. This is why it's called the direction of steepest ascent.
So back over here, I don't see the connection immediately — or at least when I was first learning about it, it wasn't clear why this combination of partial derivatives has anything to do with choosing the best direction. But now that we've learned about the directional derivative, I can give you a little bit of an intuition. So let's say, instead of thinking about all the possible directions and all of the possible changes to the output that they have, I'll fill in my line there.
You know, let's say you've got your point where you're evaluating things, and then you just have a single vector. And let's actually make it a unit vector — let's make it the case that this guy has a length of one.
So I'll go over here and I'll just think of that guy as being v, and say that v has a length of one. So this is our vector. We know now, having learned about the directional derivative, that you can tell the rate at which the function changes as you move in this direction by taking the directional derivative of your function. And let's say this point — I don't know, what's a good name for this point?
Just, like, a, b — a, b is this point. When you evaluate this at a, b, the way that you do that is just dotting the gradient of f — I should say, dotting it evaluated at that point, because the gradient is a vector-valued function and we just want a specific vector here. So, evaluating that at your point a, b, together with whatever the vector is, whatever that value is.
And in this case, we're thinking of v as a unit vector. So this is how you tell the rate of change. And when I originally introduced the directional derivative, I gave kind of an indication why. You know, if you imagine dotting this together with — I don't know, let's say it was a vector like (1, 2).
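The directional derivative described here is just the dot product of the gradient, evaluated at the point, with the direction vector. A sketch, reusing f(x, y) = x² + y² and the (1, 2) direction from the discussion — the choice of the point (a, b) = (1, 2) and the function names are my own:

```python
import math

def directional_derivative(grad, v):
    # Rate of change of f in direction v: the dot product grad(f) · v,
    # with v a unit vector.
    return grad[0] * v[0] + grad[1] * v[1]

grad_at_ab = (2 * 1.0, 2 * 2.0)  # gradient of x^2 + y^2 at (a, b) = (1, 2)
v = (1 / math.sqrt(5), 2 / math.sqrt(5))  # the (1, 2) direction, normalized

print(directional_derivative(grad_at_ab, v))  # 10/sqrt(5) ≈ 4.4721
```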
Really, you're thinking this vector represents one step in the x direction and two steps in the y direction, so the amount that it changes things should be one times the change caused by a pure step in the x direction, plus two times the change caused by a pure step in the y direction. So that was kind of the loose intuition. You can see the directional derivative video if you want a little bit more discussion on that.
And this is the formula that you have. But this starts to give us the key for how we could choose the direction of steepest ascent, because now, when we say which one of these changes things the most — you know, maybe when you move in that direction it changes f a little bit negatively, and we want to know, for maybe another vector w, is the change caused by that going to be positive? Is it going to be as big as possible? — what we're really asking is: find the maximum, over all unit vectors — so over all vectors v that satisfy the property that their length is one — of the dot product between the gradient of f, evaluated at whatever point we care about, and v. Find that maximum.
Well, let's just think about what the dot product represents. So let's say we go over here, and we evaluate the gradient vector, and it turns out that the gradient points in this direction. And it doesn't have to be a unit vector — it might be something very long, like that. So if you imagine some unit vector v, let's say it was sticking off in this direction, the way that you interpret this dot product — the dot product between the gradient of f and this new vector v — is you would project that vector directly, kind of a perpendicular projection, onto your gradient vector, and you'd say: what's that length?
You know, what's that length right there? And just as an example, it would be something a little bit less than one, right, because this is a unit vector. So as an example, let's say that was like 0.7. And then you'd multiply that by the length of the gradient itself, of the vector against which you're dotting — and maybe the length of the entire gradient vector, just again as an example, is two.
It doesn't have to be; it could be anything. But the way that you interpret this whole dot product, then, is to take the product of those two. You would take 0.7, the length of your projection, times the length of the original vector.
And the question is: when is this maximized? What unit vector maximizes this? And if you start to imagine maybe swinging that unit vector around — so if, instead of that guy, you were to use one that pointed a little bit more closely in the gradient's direction — then its projection would be a little bit longer.
Maybe that projection would be like 0.75 or something. If you take the unit vector that points directly in the same direction as that full vector, then the length of its projection is just the length of the vector itself — it would be one.
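The swinging-the-unit-vector-around argument can be checked directly: sweep unit vectors through all angles and see which one maximizes the dot product with the gradient. A sketch — the particular gradient value (length two, matching the example numbers above) is a made-up illustration:

```python
import math

grad = (1.2, 1.6)  # an example gradient vector; its length is 2
best_angle, best_value = 0.0, -float("inf")
for k in range(3600):
    theta = 2 * math.pi * k / 3600
    v = (math.cos(theta), math.sin(theta))   # unit vector at angle theta
    value = grad[0] * v[0] + grad[1] * v[1]  # directional derivative
    if value > best_value:
        best_angle, best_value = theta, value

grad_angle = math.atan2(grad[1], grad[0])
print(round(best_value, 3))                 # ≈ 2.0, the gradient's length
print(abs(best_angle - grad_angle) < 0.01)  # True: the maximizer points along the gradient
```

The sweep finds both claims at once: the maximizing unit vector points in the gradient's direction, and the maximum value is the gradient's own magnitude.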
Because projecting it doesn't change what it is at all. So it shouldn't be too hard to convince yourself. And if you have shaky intuitions on the dot product, I'd suggest finding the videos we have on Khan Academy for those.
Sal does a great job giving that deep intuition. But it should kind of make sense why the unit vector that points in the same direction as your gradient is going to be what maximizes it. So the answer here, the answer to what vector maximizes this, is going to be, well, the gradient itself.
It is that gradient vector, evaluated at the point we care about, except you'd normalize it, because we're only considering unit vectors. So to do that, you just divide it by whatever its magnitude is. If its magnitude was already one, it stays one.
If its magnitude was two, you're scaling it down by a half. So this is your answer. This is the direction of steepest ascent.
So I think one thing to notice here, the most fundamental fact, is that the gradient is this tool for computing directional derivatives. You can think of that vector as something that you really want to dot against. And that's actually a pretty powerful thought.
It's that the gradient — it's not just a vector. It's a vector that loves to be dotted together with other things. That's the fundamental idea.
And as a consequence of that, the direction of steepest ascent is that vector itself, because if you're asking what maximizes the dot product with that thing, it's, well, the vector that points in the same direction as that thing. And this can also give us an interpretation for the length of the gradient. We know the direction is the direction of steepest ascent, but what does the length mean?
So let's give this guy a name — let's give this normalized version of it a name. I'm just going to call it w. So w will be the unit vector that points in the direction of the gradient.
If you take the directional derivative, in the direction of w, of f, what that means is the gradient of f dotted with that w. And if you kind of spell out what w means here, that means you're taking the gradient of f dotted with itself — but because it's w and not the gradient, we're normalizing: we're dividing, not by the magnitude of f (that doesn't really make sense), but by the magnitude of the gradient. And in all of these, I'm just writing gradient of f, but maybe you should be thinking of gradient of f evaluated at a, b — I'm just being kind of lazy and writing gradient of f. And on top, when you take the dot product with itself, what that means is the square of its magnitude.
But the whole thing is divided by the magnitude. So you can kind of cancel that out. You can say this doesn't need to be there.
That exponent doesn't need to be there. And basically, the directional derivative in the direction of the gradient itself has a value equal to the magnitude of the gradient. So this tells you that when you're moving in that direction, the direction of the gradient, the rate at which the function changes is given by the magnitude of the gradient.
You can say this doesn't need to be there. That exponent doesn't need to be there. And basically, the directional derivative, the directional derivative in the direction of the gradient itself has a value equal to the magnitude of the gradient. So this tells you when you're moving in that direction, in the direction of the gradient, the rate at which the function changes is given by the magnitude of the gradient. So it's this really magical vector. It does a lot of things. It's the tool that lets you dot against other vectors to tell you the directional derivative.
|
Why the gradient is the direction of steepest ascent.mp3
|
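The claim in this transcript can be checked numerically. Below is a minimal sketch (the function f and the point (a, b) are my own choices, not from the video): it builds the unit vector W along the gradient, confirms that the directional derivative along W equals the gradient's magnitude, and that no other direction does better.

```python
import math

# Illustrative function and point; these are assumptions, not from the video.
def f(x, y):
    return x**2 * y + 3 * y

def grad(x, y, h=1e-6):
    # Central-difference estimate of the gradient of f at (x, y).
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

def directional_derivative(x, y, ux, uy, h=1e-6):
    # Rate of change of f at (x, y) along the unit vector (ux, uy).
    return (f(x + h * ux, y + h * uy) - f(x - h * ux, y - h * uy)) / (2 * h)

a, b = 1.0, 2.0
gx, gy = grad(a, b)
mag = math.hypot(gx, gy)          # |grad f|, the "length" discussed above
wx, wy = gx / mag, gy / mag       # W: the unit vector along the gradient

# Moving along W, the rate of change equals the gradient's magnitude...
along_gradient = directional_derivative(a, b, wx, wy)

# ...and sweeping over other unit directions never does better.
best_any_direction = max(
    directional_derivative(a, b, math.cos(t), math.sin(t))
    for t in (i * 0.01 for i in range(629))
)
```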
Okay, so we are finally ready to express the quadratic approximation of a multivariable function in vector form. So I have the whole thing written out here, where f is the function that we are trying to approximate, (x-naught, y-naught) is the constant point about which we are approximating, and this entire expression is the quadratic approximation, which I've talked about in past videos. If it seems very complicated or absurd or you're unfamiliar with it, just dissecting it real quick: this over here is the constant term, it's just gonna evaluate to a constant; everything over here is the linear term, because it just involves taking a variable multiplied by a constant; and the remainder is the quadratic term, because every one of its components has two variables multiplied into it, so x squared comes up, and x times y, and y squared.

Now, to vectorize things, first of all let's write down the input, the input variable (x, y), as a vector. Typically we'll do that with a bold-faced x to indicate that it's a vector, and its components are just gonna be the single variables x and y, the non-bold-faced ones. So this is the vector representing the variable input, and then correspondingly a bold-faced x with a little subscript o, x-naught, is gonna be the constant input, the single point in space near which we are approximating. When we write things like that, the constant term, simply enough, is gonna look like evaluating your function at that bold-faced x-naught, so that's probably the easiest one to handle.

Now the linear term: this looks like a dot product. If we expand it out as a dot product, it looks like we're taking the partial derivative of f with respect to x, and then the partial derivative with respect to y, and we're evaluating both of those at that bold-faced x-naught input. Each one of those partial derivatives is multiplied by a variable minus a constant number, so this looks like taking the dot product (I'm gonna erase the word linear) with x minus x-naught, and y minus y-naught. This is just expressing the same linear term, but as a dot product. The convenience here is that this is totally the same thing as saying the gradient of f, the vector that contains all the partial derivatives, evaluated at the special input x-naught, dotted with the variable vector, bold-faced x minus x-naught. When you do this component-wise, bold-faced x minus x-naught will be x the variable minus x-naught the constant, and y the variable minus y-naught the constant, which is what we have up there. So this expression vectorizes the whole linear term. And now the beef here, the hard part: how are we gonna vectorize this quadratic term?

Now, that's what I was leading to in the last couple videos, where I talked about how you express a quadratic form like this with a matrix. The way that you do it (I'll just scroll down to give us some room) is we'll have a matrix whose components are all of these constants. It'll be this 1/2 times the second partial derivative evaluated there, and for convenience's sake I'm just gonna write 1/2 times the second partial derivative with respect to x and leave it as understood that we're evaluating it at this point. Then on the other diagonal, you have 1/2 times the other kind of partial derivative, with respect to y two times in a row. And then we're gonna multiply it by this constant here, but this term kind of gets broken apart into two different components. If you'll remember, in the quadratic form video it was always things where you had a, and then 2b, and c as your constants for the quadratic form. So if we're interpreting this as two times something, then it gets broken down, and on one corner it shows up as 1/2 f_xy, and on the other corner as 1/2 f_xy, so both of these together constitute the entire mixed partial derivative. And then the way that we express the quadratic form is we're gonna multiply this by... well, by what?
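The matrix-encoding of a quadratic form described here can be sketched in a few lines: a symmetric matrix [[a, b], [b, c]] paired with a vector (dx, dy) gives a·dx² + 2b·dx·dy + c·dy², the off-diagonal b being "half" of the cross-term constant, just as the mixed partial gets split across the two corners. The constants a, b, c below are arbitrary choices of mine.

```python
# Arbitrary constants for illustration (not from the video).
a, b, c = 1.5, -0.5, 2.0
M = [[a, b],
     [b, c]]

def quadratic_form(m, v):
    # v^T M v, multiplied out component by component.
    dx, dy = v
    mv = (m[0][0] * dx + m[0][1] * dy,
          m[1][0] * dx + m[1][1] * dy)
    return dx * mv[0] + dy * mv[1]

dx, dy = 0.3, -0.7
via_matrix = quadratic_form(M, (dx, dy))
via_formula = a * dx**2 + 2 * b * dx * dy + c * dy**2   # same number
```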
Well, the first component is whatever the thing is that's squared here, so it's gonna be that x minus x-naught, and the second component is whatever the other thing squared is, which in this case is y minus y-naught. And of course we take that same vector, but we put it in on the other side too. So let me make a little bit of room (this is gonna be wide): we're gonna take that same vector and kind of put it on its side, so it'll be x minus x-naught as the first component and y minus y-naught as the second component, but written horizontally. And this, if you multiply out the entire matrix, is gonna give us the same expression that you have up here. If that seems unfamiliar, if you're wondering how you go from there to there, check out the video on quadratic forms, or you can check out the article where I talk about the quadratic approximation as a whole; I go through the computation there.

Now, this matrix right here is almost the Hessian matrix. This is why I made a video about the Hessian matrix. It's not quite, because everything has a 1/2 multiplied into it, so I'm just gonna take that out, and we'll remember we have to multiply a 1/2 in at some point. But otherwise it is the Hessian matrix, which we denote with a bold-faced H, and emphasize that it's the Hessian of f; the Hessian is something you take of a function. And like I said, remember, each of these terms we should be thinking of as evaluated on the special input point, that bold-faced x-naught; I was just too lazy to write in each time the x-naught, y-naught, x-naught, y-naught, all of that. What we have then is we're multiplying it on the right by this whole vector, the variable vector, bold-faced x minus bold-faced x-naught, and then we have that same vector again on the other side, except that we transpose it, we put it on its side, and the way you denote that is with a little T there, for transpose. So this term captures all of the quadratic information that we need for the approximation.

So just to put it all together: if we go back up, when we put the constant term, the linear term, and this quadratic form that we just found all together, what we get is that the quadratic approximation of f, which is a function we'll think of as having a vector input, bold-faced x, equals the function itself evaluated at whatever point we're approximating near, plus the gradient of f (which is kind of the vector analog of a derivative) evaluated at that point, so this is a constant vector, dot product with the variable vector x minus the constant vector x-naught, plus 1/2 times that whole quadratic term: the variable minus the constant, transposed, multiplied by the Hessian, which is kind of like an extension of the second derivative to multivariable functions, evaluated at the constant x-naught, and then on the right side we're multiplying it by the variable, x minus x-naught. And this is the quadratic approximation in vector form.

And the important part is, now it doesn't just have to be a two-variable input. You could imagine plugging in a three-variable input or a four-variable input, and all of these terms make sense: you take the gradient of a four-variable function, you'll get a vector with four components; you take the Hessian of a four-variable function, you get a four-by-four matrix; and all of these terms still make sense. And I think it's also prettier to write it this way, because it looks a lot more like a Taylor expansion in the single-variable world: you have a constant term, plus the value of a derivative times x minus a constant, plus 1/2 times what's kind of like the second derivative term, what's kind of like taking an x squared, but this is how it looks in the vector world. So in that way it's actually maybe a little more familiar than writing it out in the full component-by-component form, where it's easy to get lost in the weeds. So, the full vectorized form of the quadratic approximation of a scalar-valued multivariable function. Boy, is that a lot to say.
|
Vector form of multivariable quadratic approximation.mp3
|
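The whole vector-form formula Q(x) = f(x₀) + ∇f(x₀)·(x − x₀) + ½ (x − x₀)ᵀ H(x₀) (x − x₀) can be sketched numerically. The function below is an illustrative choice of mine (not from the video), and all derivatives are estimated by central differences; near x₀ the quadratic approximation should hug f much more tightly than the linear part alone.

```python
import math

# Illustrative function (my own choice, not from the video).
def f(x, y):
    return math.exp(x) * math.sin(y)

x0, y0 = 0.0, 1.0   # the constant point we approximate around
h = 1e-4

# Gradient at (x0, y0), by central differences.
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

# Hessian entries at (x0, y0); the mixed partial shows up in both corners.
fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
       - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)

def Q(x, y):
    dx, dy = x - x0, y - y0
    linear = fx * dx + fy * dy                             # grad . (x - x0)
    quad = dx * (fxx * dx + fxy * dy) + dy * (fxy * dx + fyy * dy)
    return f(x0, y0) + linear + 0.5 * quad                 # + (1/2)(x-x0)^T H (x-x0)

# Compare quadratic vs. linear approximation error at a nearby point.
quad_err = abs(Q(0.1, 1.1) - f(0.1, 1.1))
lin_err = abs(f(x0, y0) + fx * 0.1 + fy * 0.1 - f(0.1, 1.1))
```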
And more specifically, it's the Jacobian matrix, or sometimes the associated determinant. And here, I just want to talk about some of the background knowledge that I'm assuming, because to understand the Jacobian, you do have to have a little bit of a background in linear algebra. And in particular, I want to make sure that everyone here understands how to think about matrices as transformations of space. And when I say transformations, let me just get kind of a matrix on here. I'll call it 2, 1, and negative 3, 1. You'll see why I'm coloring it like this in just a moment. And when I say how to think about this as a transformation of space, I mean you can multiply a matrix by some kind of two-dimensional vector, some kind of two-dimensional x, y. And this is gonna give us a new two-dimensional vector. In this case, it'll be 2x plus negative 3 times y, and then 1x plus 1 times y. Right, this is a new two-dimensional vector somewhere else in space. And even if you know how to compute it, there's still room for a deeper geometric understanding of what it actually means to take a vector x, y to the vector 2x plus negative 3y and 1x plus 1y. And there's also still a deeper understanding in what we mean when we call this a linear transformation.

So what I'm gonna do is just show you what this particular transformation looks like on the left here, where for every single point on this blue grid, I'm gonna tell the computer: hey, if that point was x, y, I want you to take it to 2x plus negative 3y, 1x plus 1y. And here's what it looks like. So let me just play it out here. All of the points in space move, and you end up in some final state here. And there are a couple important things to note. First of all, all of the grid lines remain parallel and evenly spaced. And they're still lines, they didn't get curved in some way. And that's very, very special. That is the geometric way that you can think about this term, this idea of a linear transformation. I kind of like to think about it as: lines stay lines. And in particular, the grid lines here, the ones that started off as vertical and horizontal, still remain parallel and still remain evenly spaced.

And the other thing to notice here is I have these two vectors highlighted, the green vector and the red vector. And these are the ones that started off, if we back things up, as the basis vectors. Let me make a little bit more room here. The green vector is 1, 0: 1 in the x direction, 0 in the y direction. And the red vertical vector here is 0, 1. And if we notice where they land under this transformation, when the matrix is multiplied by every single vector in space, the place where the green vector lands, the one that started off as 1, 0, has coordinates 2, 1. And that corresponds very directly with the fact that the first column of our matrix is 2, 1. And then similarly, over here, the second vector, the one that started off at 0, 1, ends up at the coordinates negative 3, 1. And that corresponds with the fact that the next column is negative 3, 1.

And it's actually relatively simple to see why that's going to be true. Here, I'll go ahead and multiply this matrix that we had. See, now it's kind of easy to remember what the matrix is, right? I can just read it off here as 2, 1, negative 3, 1. But just to see why it's actually taking the basis vectors to the columns like this: if we do the multiplication by 1, 0, notice how it's going to take us to 2 times 1, that'll be 2, and then negative 3 times 0, so that'll just be 0. And over here, it's 1 times 1, so that's 1, and then 1 times 0, so again we're adding 0. So the only terms that actually mattered, because of the 0 down here, were everything in that first column. But if we take that same matrix, 2, 1, negative 3, 1, and we multiply it by 0, 1 over here, by the second basis vector, what you're going to get is 2 times 0, so 0, plus that element in the second column, and then 1 times 0, so another 0, plus 1 times 1. It's kind of like that 0 knocks out all of the terms in the other columns. And then, like I said, geometrically the meaning of a linear transformation is that grid lines remain parallel and evenly spaced. And when you start to think about it a little bit, if you know where this green vector lands and where this red vector lands, that's going to lock into place where the entire grid has to go.
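The arithmetic here, the basis vectors landing on the columns of the matrix, is easy to replay in code. This is just a sketch using the same 2, 1, negative 3, 1 matrix from the video:

```python
# The matrix from the video: first column (2, 1), second column (-3, 1).
A = [[2, -3],
     [1,  1]]

def transform(m, v):
    # Matrix-vector product: where the transformation sends v.
    x, y = v
    return (m[0][0] * x + m[0][1] * y,
            m[1][0] * x + m[1][1] * y)

# The basis vectors land exactly on the columns of the matrix:
e1_lands = transform(A, (1, 0))   # should be the first column, (2, 1)
e2_lands = transform(A, (0, 1))   # should be the second column, (-3, 1)
```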
And let me show you what I mean, and how this corresponds with maybe a different definition that you've heard for what linear transformation means. If we have some kind of function L, and it's going to take in a vector and spit out a vector, it's said to be linear if it satisfies the property that when you take a constant times a vector, what it produces is that same constant times whatever would have happened if you applied the transformation to the vector not scaled. So here you're applying the transformation to a scaled vector, and evidently that's the same as scaling the transformation of the vector. And similarly, the second property of linearity is that if you add two vectors, it doesn't really matter if you add them before or after the transformation. If you take the sum of the vectors and then apply the transformation, that's the same as first applying the transformation to each one separately and then adding up the results.

And one of the most important consequences of this formal definition of linearity is that if you take your function and apply it to some vector x, y, well, you can split up that vector as x times the first basis vector, 1, 0, plus y times that second basis vector, 0, 1. And because of these two properties of linearity, if I can split it up like this, it doesn't matter if I do the scaling and adding before the transformation, or if I do that scaling and adding after the transformation, and say that it's x times whatever the transformed version of 1, 0 is (and I'll show you geometrically what this means in just a moment, but I kind of want to get all the algebra on the screen), plus y times the transformed version of 0, 1.

So to be concrete, let's actually put in a value for x and y here and try to think about that specific vector geometrically. So maybe I'll put in something like the vector 2, 1. So if we look over on the grid, we're going to be focusing on this particular point that's over here, 2, 1. And I'm going to play the transformation, and I want you to follow this point to see where it lands, and it's going to end up over here. Okay, so in terms of the old grid, the original one that we started with, it's now at the point 1, 3. This is where we've ended up. But importantly, I want you to notice how it's still two times that green vector plus one times that red vector. So it's satisfying that property: it's still x times whatever the transformed version of that first basis vector is, plus y times the transformed version of that second basis vector.

So that's all just a little overview. And the upshot, the main thing I want you to remember from all of this, is that when you have some kind of matrix, you can think of it as a transformation of space that keeps grid lines parallel and evenly spaced, and that's a very special kind of transformation. That is a very restrictive property to have on a function from 2D points to other 2D points. And the convenient way to encode that is that the landing spot for the first basis vector, the one that started off one unit to the right, is represented by the first column of the matrix, and the landing spot for the second basis vector, the one that was pointing one unit up, is encoded in the second column. If this feels totally unfamiliar, or you want to learn more about this, it's something that I've made other videos on in the past.
|
Jacobian prerequisite knowledge.mp3
|
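The two linearity properties, and the consequence that L(x, y) = x·L(1, 0) + y·L(0, 1), can be checked directly for the matrix transformation from this video (the test vectors v and w below are my own picks):

```python
# The matrix transformation from the video.
A = [[2, -3],
     [1,  1]]

def L(v):
    # The linear transformation: multiply the matrix A by the vector v.
    x, y = v
    return (A[0][0] * x + A[0][1] * y,
            A[1][0] * x + A[1][1] * y)

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

v, w, c = (2, 1), (-1, 4), 3   # arbitrary test inputs

# Property 1: L(c v) = c L(v).
prop1 = L(scale(c, v)) == scale(c, L(v))

# Property 2: L(v + w) = L(v) + L(w).
prop2 = L(add(v, w)) == add(L(v), L(w))

# Consequence: L(x, y) = x * L(1, 0) + y * L(0, 1).
x, y = v
consequence = L(v) == add(scale(x, L((1, 0))), scale(y, L((0, 1))))
```

The vector (2, 1) lands at (1, 3), matching the transcript's "it's now at the point one, three".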
So continuing on with where we were in the last video, we're looking for this unit tangent vector function, given the parameterization. The specific example that I have is a function that parameterizes a circle with radius capital R, but I also kind of want to show in parallel what this looks like more abstractly. So here I'll just write down, in the abstract half, what we did for the unit tangent vector. We actually have the same thing over here, where the unit tangent vector should be the derivative function, which we know gives a tangent (it's just that it might not be unit), but then we normalize it: we divide by the magnitude of that tangent vector function. In our specific case with the circle, once we did this and took the x component squared, the y component squared, and simplified it all out, we got the function R. But in the general case we might not be so lucky, because the magnitude of this derivative is gonna be the square root of x prime of t squared (the x component of the derivative) plus y prime of t squared; we're just taking the magnitude of a vector here.

So when we take the entire function and divide it by that, what we get doesn't simplify as it did in the case of a circle. Instead, we have that x prime of t, the x component of our s prime of t, and we have to divide it by that entire magnitude, which was this whole square root expression. I'm just gonna write dot, dot, dot, with the understanding that this square root expression is what goes up there. And similarly, over here, we'd have y prime of t divided by that entire expression again. So simplification doesn't always happen; that was just kind of a lucky happenstance of our circle example. And now what we want, once we have the unit tangent vector as a function of that same parameter, is to take the derivative of that unit tangent vector with respect to arc length, the arc length s, and to find its magnitude. That's gonna be what curvature is.
|
Curvature formula, part 3.mp3
|
So simplification doesn't always happen. That was just kind of a lucky happenstance of our circle example. And then now what we want, once we have the unit tangent vector as a function of that same parameter, what we're hoping to find is the derivative of that unit tangent vector with respect to arc length, the arc length S, and to find its magnitude. That's gonna be what curvature is. But the way to do this is to take the derivative with respect to the parameter T, so D big T, D little t, and then divide it out by the derivative of our function S with respect to T, which we already found. And the reason I'm doing this, loosely, if you're just thinking of the notation, you might say, oh, you can kind of cancel out the DTs from each one. But another way to think about this is to say, when we have our tangent vector function as a function of T, the parameter T, we're not sure of what its change is with respect to S, right?
|
Curvature formula, part 3.mp3
|
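The chain-rule correction described in this chunk, written out symbolically:

```latex
\kappa(t) \;=\; \left\lVert \frac{d\mathbf{T}}{ds} \right\rVert
\;=\; \frac{\left\lVert\, d\mathbf{T}/dt \,\right\rVert}{\;ds/dt\;}
```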
That's gonna be what curvature is. But the way to do this is to take the derivative with respect to the parameter T, so D big T, D little t, and then divide it out by the derivative of our function S with respect to T, which we already found. And the reason I'm doing this, loosely, if you're just thinking of the notation, you might say, oh, you can kind of cancel out the DTs from each one. But another way to think about this is to say, when we have our tangent vector function as a function of T, the parameter T, we're not sure of what its change is with respect to S, right? That's something we don't know directly. But we do directly know its change with respect to a tiny change in that parameter. So then if we just kind of correct that by saying, hey, how much does the length of the curve change?
|
Curvature formula, part 3.mp3
|
But another way to think about this is to say, when we have our tangent vector function as a function of T, the parameter T, we're not sure of what its change is with respect to S, right? That's something we don't know directly. But we do directly know its change with respect to a tiny change in that parameter. So then if we just kind of correct that by saying, hey, how much does the length of the curve change? How far do you move along the curve as you change that parameter? And maybe if I go back up to the picture here, this ds/dt is saying, for a tiny nudge in time, right, what is the ratio of the size of the movement there with respect to that tiny time? So the reason that this comes out to be a very large vector, right, it's not a tiny thing, is because you're taking the ratio.
|
Curvature formula, part 3.mp3
|
So then if we just kind of correct that by saying, hey, how much does the length of the curve change? How far do you move along the curve as you change that parameter? And maybe if I go back up to the picture here, this ds/dt is saying, for a tiny nudge in time, right, what is the ratio of the size of the movement there with respect to that tiny time? So the reason that this comes out to be a very large vector, right, it's not a tiny thing, is because you're taking the ratio.
|
Curvature formula, part 3.mp3
|
So the reason that this comes out to be a very large vector, right, it's not a tiny thing, is because you're taking the ratio. Maybe this tiny change was just an itty-bitty smidgen vector, but you're dividing it by like one one-millionth, or whatever the size of dt that you're thinking of. And in this specific case for our circle, we saw that the magnitude of this guy, you know, if we took the magnitude of that guy, it's gonna be equal to R, which is a little bit poetic, right, that the magnitude of the derivative is the same as the distance from the center. And what this means in our specific case, if we wanna apply this to our circle example, we take dT/dt, the derivative of the tangent vector function, and I'll go ahead and write it here.
|
Curvature formula, part 3.mp3
|
And what this means in our specific case, if we wanna apply this to our circle example, we take dT/dt, the derivative of the tangent vector function, and I'll go ahead and write it here. We have the derivative of our tangent vector with respect to the parameter, and we go up and we look here, we say, okay, the unit tangent vector has the formula negative sine of T and cosine of T. So the derivative of negative sine of T is negative cosine. So over here, this guy should look like negative cosine of T. And the other component, the Y component, the derivative of cosine T, as we're differentiating our unit tangent vector function, is negative sine of T. And what this implies is that the magnitude of that derivative of the tangent vector with respect to T, well, what's the magnitude of this vector? You've got a cosine, you've got a sine.
|
Curvature formula, part 3.mp3
|
You've got a cosine, you've got a sine. There's nothing else in there. You're gonna end up with cosine squared plus sine squared. So this magnitude just equals one. And when we do what we're supposed to over here and divide it by the magnitude of the derivative, ds/dt, right, we take this and we divide it by that magnitude. Well, we've already computed the magnitude of the derivative. That was R. That's how we got this R, is we took the derivative here and took its magnitude and found it.
|
Curvature formula, part 3.mp3
|
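The circle computation the transcript walks through can be checked symbolically. This is a sketch using SymPy (my own tooling choice, not something used in the video): it rebuilds the unit tangent vector for the circle parameterization and confirms that |dT/dt| divided by ds/dt comes out to 1/R.

```python
import sympy as sp

t, R = sp.symbols('t R', positive=True)

# Circle of radius R: s(t) = (R cos t, R sin t)
x = R * sp.cos(t)
y = R * sp.sin(t)

# Velocity components and the speed ds/dt
xp, yp = sp.diff(x, t), sp.diff(y, t)
speed = sp.sqrt(xp**2 + yp**2)                    # simplifies to R

# Unit tangent vector T = s'(t) / |s'(t)|
Tx, Ty = sp.simplify(xp / speed), sp.simplify(yp / speed)

# Curvature = |dT/dt| / (ds/dt)
dTx, dTy = sp.diff(Tx, t), sp.diff(Ty, t)
curvature = sp.simplify(sp.sqrt(dTx**2 + dTy**2) / speed)

print(curvature)  # 1/R
```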
So this magnitude just equals one. And when we do what we're supposed to over here and divide it by the magnitude of the derivative, ds/dt, right, we take this and we divide it by that magnitude. Well, we've already computed the magnitude of the derivative. That was R. That's how we got this R, is we took the derivative here and took its magnitude and found it. So we find that in the specific case of the circle, the curvature function that we want is just constantly equal to one over R, which is good and hopeful, right, because I said in the original video on curvature that it's defined as one divided by the radius of the circle that hugs your curve most closely. And if your curve is actually a circle, it's literally a circle, then the circle that hugs it most closely is itself, right? So I should hope that its curvature ends up being one divided by R. And in the more general case, if we take a look at what this ought to be, you can maybe imagine just how horrifying it's gonna be to compute this, right?
|
Curvature formula, part 3.mp3
|
That was R. That's how we got this R, is we took the derivative here and took its magnitude and found it. So we find that in the specific case of the circle, the curvature function that we want is just constantly equal to one over R, which is good and hopeful, right, because I said in the original video on curvature that it's defined as one divided by the radius of the circle that hugs your curve most closely. And if your curve is actually a circle, it's literally a circle, then the circle that hugs it most closely is itself, right? So I should hope that its curvature ends up being one divided by R. And in the more general case, if we take a look at what this ought to be, you can maybe imagine just how horrifying it's gonna be to compute this, right? We've got our tangent vector function, which itself, you know, is almost too long for me to write down. I just put these dot, dot, dots where you're filling in x prime of t squared plus y prime of t squared. And you're gonna have to take this, take its derivative with respect to t, right?
|
Curvature formula, part 3.mp3
|
So I should hope that its curvature ends up being one divided by R. And in the more general case, if we take a look at what this ought to be, you can maybe imagine just how horrifying it's gonna be to compute this, right? We've got our tangent vector function, which itself, you know, is almost too long for me to write down. I just put these dot, dot, dots where you're filling in x prime of t squared plus y prime of t squared. And you're gonna have to take this, take its derivative with respect to t, right? It's not gonna get any simpler when you take its derivative. Take the magnitude of that and divide all of that by the magnitude of the derivative of your original function. And I think what I'll do, I'm not gonna go through all of that here.
|
Curvature formula, part 3.mp3
|
And you're gonna have to take this, take its derivative with respect to t, right? It's not gonna get any simpler when you take its derivative. Take the magnitude of that and divide all of that by the magnitude of the derivative of your original function. And I think what I'll do, I'm not gonna go through all of that here. It's a little bit much, and I'm not sure how helpful it is to walk through all those steps. But for the sake of having it, for anyone who's curious, I think I'll put that into an article, and you can kind of go through the steps at your own pace and see what the formula comes out to be. And I'll just tell you right now, maybe kind of a spoiler alert, what that formula comes out to be is x prime, the derivative of that first component, multiplied by y double prime, the second derivative of that second component, minus y prime, first derivative of that second component, multiplied by x double prime.
|
Curvature formula, part 3.mp3
|
And I think what I'll do, I'm not gonna go through all of that here. It's a little bit much, and I'm not sure how helpful it is to walk through all those steps. But for the sake of having it, for anyone who's curious, I think I'll put that into an article, and you can kind of go through the steps at your own pace and see what the formula comes out to be. And I'll just tell you right now, maybe kind of a spoiler alert, what that formula comes out to be is x prime, the derivative of that first component, multiplied by y double prime, the second derivative of that second component, minus y prime, first derivative of that second component, multiplied by x double prime. And all of that is divided by the, divided by the, kind of magnitude component, the x prime squared plus y prime squared. That whole thing to the 3 halves. And you can maybe see why you're gonna get terms like this, right, because when you're taking, when you're taking the derivative of, when you're taking the derivative of the unit tangent vector function, you have the square root term in it, the square root that has x primes and y primes.
|
Curvature formula, part 3.mp3
|
And I'll just tell you right now, maybe kind of a spoiler alert, what that formula comes out to be is x prime, the derivative of that first component, multiplied by y double prime, the second derivative of that second component, minus y prime, first derivative of that second component, multiplied by x double prime. And all of that is divided by the kind of magnitude component, the x prime squared plus y prime squared. That whole thing to the 3 halves. And you can maybe see why you're gonna get terms like this, right, because when you're taking the derivative of the unit tangent vector function, you have the square root term in it, the square root that has x primes and y primes.
|
Curvature formula, part 3.mp3
|
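The "spoiler alert" formula quoted above is straightforward to implement directly. A small sketch (the radius and sample point are arbitrary values of my own for the sanity check):

```python
import math

def curvature(xp, yp, xpp, ypp):
    """The general 2D curvature formula from the transcript:
    (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)."""
    return (xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

# Sanity check on a circle of radius R = 4, parameterized as
# (R cos t, R sin t), evaluated at the arbitrary point t = 0.7:
R, t = 4.0, 0.7
xp,  yp  = -R * math.sin(t),  R * math.cos(t)   # first derivatives
xpp, ypp = -R * math.cos(t), -R * math.sin(t)   # second derivatives

print(curvature(xp, yp, xpp, ypp))  # 0.25, i.e. 1/R
```

The numerator collapses to R squared and the denominator to R cubed, so the result is 1/R regardless of which point t you sample, matching the circle result derived in the transcript.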
And you can maybe see why you're gonna get terms like this, right, because when you're taking the derivative of the unit tangent vector function, you have the square root term in it, the square root that has x primes and y primes. So that's where you're gonna get your x double prime, y double prime, as the chain rule takes you down there. And you can maybe see why this whole x prime squared, y prime squared term is gonna maintain itself. And it turns out it comes in here at a 3 halves power.
|
Curvature formula, part 3.mp3
|
And it turns out it comes in here at a 3 halves power. And what I'm gonna do in the next video, I'm gonna go ahead and describe kind of an intuition for why this formula isn't random. Why if you break down what this is saying, it really does give a feeling for the curvature, the amount that a curve curves, that we want to try to measure. So it's almost like this is a third way of thinking about it, right? The first one, I said you have whatever circle most closely hugs your curve, and you take one divided by its radius. And then the second way, you're thinking of this DTDS, the change in the unit tangent vector with respect to arc length and taking its magnitude. And of course, all of these are the same, but they're just kind of three different ways to think about it or things that you might plug in when you come across a function.
|
Curvature formula, part 3.mp3
|
So it's almost like this is a third way of thinking about it, right? The first one, I said you have whatever circle most closely hugs your curve, and you take one divided by its radius. And then the second way, you're thinking of this dT/ds, the change in the unit tangent vector with respect to arc length and taking its magnitude. And of course, all of these are the same, but they're just kind of three different ways to think about it or things that you might plug in when you come across a function. And I'll go through an example. I'll go through something where we're really computing the curvature of something that's not just a circle. But with that, I'll see you next video.
|
Curvature formula, part 3.mp3
|
And what that means is we're starting to allow ourselves to use terms like x squared, x times y, and y squared. And quadratic basically just means anytime you have two variables multiplied together. So here you have two x's multiplied together, here it's an x multiplied with a y, and here, y squared, that kind of thing. So let's take a look at this local linearization. It seems like a lot, but once you actually kind of go through term by term, you realize it's a relatively simple function, and if you were to plug in numbers for the constant terms, it would come out as something relatively simple. Because this right here, where you're evaluating the function at the specific input point, that's just gonna be some kind of constant. That's just gonna output some kind of number.
|
Quadratic approximation formula, part 1.mp3
|
So let's take a look at this local linearization. It seems like a lot, but once you actually kind of go through term by term, you realize it's a relatively simple function, and if you were to plug in numbers for the constant terms, it would come out as something relatively simple. Because this right here, where you're evaluating the function at the specific input point, that's just gonna be some kind of constant. That's just gonna output some kind of number. And similarly, when you do that to the partial derivative, this little f sub x means partial derivative at that point, you're just getting another number. And over here, this is also just another number, but we've written it in the abstract form so that you can see what you would need to plug in for any function and for any possible input point. And the reason for having it like this, the reason that it comes out to this form is because of a few important properties that this linearization has.
|
Quadratic approximation formula, part 1.mp3
|
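The local linearization these chunks describe, written out in full:

```latex
L_f(x, y) \;=\; f(x_0, y_0)
\;+\; f_x(x_0, y_0)\,(x - x_0)
\;+\; f_y(x_0, y_0)\,(y - y_0)
```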
That's just gonna output some kind of number. And similarly, when you do that to the partial derivative, this little f sub x means partial derivative at that point, you're just getting another number. And over here, this is also just another number, but we've written it in the abstract form so that you can see what you would need to plug in for any function and for any possible input point. And the reason for having it like this, the reason that it comes out to this form is because of a few important properties that this linearization has. Let me move this stuff out of the way. We'll get back to it in a moment. But I just wanna emphasize a few properties that this has, because it's gonna be properties that we want our quadratic approximation to have as well.
|
Quadratic approximation formula, part 1.mp3
|
And the reason for having it like this, the reason that it comes out to this form is because of a few important properties that this linearization has. Let me move this stuff out of the way. We'll get back to it in a moment. But I just wanna emphasize a few properties that this has, because it's gonna be properties that we want our quadratic approximation to have as well. First of all, when you actually evaluate this function at the desired point, at x naught, y naught, what do you get? Well, this constant term isn't influenced by the variable, so you'll just get that f evaluated at those points, x naught, y naught. And now the rest of the terms, when we plug in x here, this is the only place where you actually see the variable.
|
Quadratic approximation formula, part 1.mp3
|
But I just wanna emphasize a few properties that this has, because it's gonna be properties that we want our quadratic approximation to have as well. First of all, when you actually evaluate this function at the desired point, at x naught, y naught, what do you get? Well, this constant term isn't influenced by the variable, so you'll just get that f evaluated at those points, x naught, y naught. And now the rest of the terms, when we plug in x here, this is the only place where you actually see the variable. Maybe that's worth pointing out, right? We've got two variables here, and there's a lot going on, but the only places where you actually see those variables show up, where you have to plug in anything, is over here and over here. When you plug in x naught for our initial input, this entire term goes to zero, right?
|
Quadratic approximation formula, part 1.mp3
|
And now the rest of the terms, when we plug in x here, this is the only place where you actually see the variable. Maybe that's worth pointing out, right? We've got two variables here, and there's a lot going on, but the only places where you actually see those variables show up, where you have to plug in anything, is over here and over here. When you plug in x naught for our initial input, this entire term goes to zero, right? And then similarly, when you plug in y naught over here, this entire term is gonna go to zero, which multiplies out to zero for everything. So what you end up with, you don't have to add anything else. This is just a fact.
|
Quadratic approximation formula, part 1.mp3
|
When you plug in x naught for our initial input, this entire term goes to zero, right? And then similarly, when you plug in y naught over here, this entire term is gonna go to zero, which multiplies out to zero for everything. So what you end up with, you don't have to add anything else. This is just a fact. And this is an important fact, because it tells you your approximation for the function at the point about which you are approximating actually equals the value of the function at that point, so that's very good. But we have a couple other important facts also, because this isn't just a constant approximation, this is doing a little bit more for us. If you were to take the partial derivative of this linearization with respect to x, what do you get?
|
Quadratic approximation formula, part 1.mp3
|
This is just a fact. And this is an important fact, because it tells you your approximation for the function at the point about which you are approximating actually equals the value of the function at that point, so that's very good. But we have a couple other important facts also, because this isn't just a constant approximation, this is doing a little bit more for us. If you were to take the partial derivative of this linearization with respect to x, what do you get? What do you get when you actually take this partial derivative? Well, if you look up at the original function, this constant term is nothing, so that just corresponds to a zero. Over here, this entire thing looks like a constant multiplied by x minus something, and if you differentiate this with respect to x, what you're gonna get is that constant term, which is the partial derivative of f evaluated at our specific point.
|
Quadratic approximation formula, part 1.mp3
|
If you were to take the partial derivative of this linearization with respect to x, what do you get? What do you get when you actually take this partial derivative? Well, if you look up at the original function, this constant term is nothing, so that just corresponds to a zero. Over here, this entire thing looks like a constant multiplied by x minus something, and if you differentiate this with respect to x, what you're gonna get is that constant term, which is the partial derivative of f evaluated at our specific point. And then the other term has no x's in it, it's just a y, which as far as x's concerned is a constant. So this whole thing would be zero, which means the partial derivative with respect to x is equal to the value of the partial derivative of our original function with respect to x at that point. Now notice, this is not saying that our linearization has the same partial derivative as f everywhere, it's just saying that its partial derivative happens to be a constant, and the constant that it is is the value of the partial derivative of f at that specific input point.
|
Quadratic approximation formula, part 1.mp3
|
Over here, this entire thing looks like a constant multiplied by x minus something, and if you differentiate this with respect to x, what you're gonna get is that constant term, which is the partial derivative of f evaluated at our specific point. And then the other term has no x's in it, it's just a y, which as far as x's concerned is a constant. So this whole thing would be zero, which means the partial derivative with respect to x is equal to the value of the partial derivative of our original function with respect to x at that point. Now notice, this is not saying that our linearization has the same partial derivative as f everywhere, it's just saying that its partial derivative happens to be a constant, and the constant that it is is the value of the partial derivative of f at that specific input point. And you can do pretty much the same thing, and you'll see that the partial derivative of the linearization with respect to y is a constant, and the constant that it happens to be is the value of the partial derivative of f evaluated at that desired point. So these are three facts. You know the value of the linearization at the point, and the value of its two different partial derivatives, and these kind of define the linearization itself.
|
Quadratic approximation formula, part 1.mp3
|
Now notice, this is not saying that our linearization has the same partial derivative as f everywhere, it's just saying that its partial derivative happens to be a constant, and the constant that it is is the value of the partial derivative of f at that specific input point. And you can do pretty much the same thing, and you'll see that the partial derivative of the linearization with respect to y is a constant, and the constant that it happens to be is the value of the partial derivative of f evaluated at that desired point. So these are three facts. You know the value of the linearization at the point, and the value of its two different partial derivatives, and these kind of define the linearization itself. Now what we're gonna do for the quadratic approximation is take this entire formula, and I'm just literally gonna copy it here, and then we're gonna add to it so that the second partial differential information of our approximation matches that of the original function. Okay, that's kind of a mouthful, but it'll become clear as I actually work it out. And let me just kind of clean it up at least a little bit here.
|
Quadratic approximation formula, part 1.mp3
|
You know the value of the linearization at the point, and the value of its two different partial derivatives, and these kind of define the linearization itself. Now what we're gonna do for the quadratic approximation is take this entire formula, and I'm just literally gonna copy it here, and then we're gonna add to it so that the second partial differential information of our approximation matches that of the original function. Okay, that's kind of a mouthful, but it'll become clear as I actually work it out. And let me just kind of clean it up at least a little bit here. So what we're gonna do is we're gonna extend this, and I'm gonna change its name because I don't want it to be a linear function anymore. What I want is for this to be a quadratic function, so instead I'm gonna call it q of x, y. And now we're gonna add some terms, and what I could do, what I could do is add a constant times x squared, since that's something we're allowed, plus some kind of constant times x, y, and then another constant times y squared.
|
Quadratic approximation formula, part 1.mp3
|
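The chunks above end just as the x squared, xy, and y squared terms are being added on. As a forward-looking sketch, here is how the completed quadratic approximation can be assembled with SymPy; note that the 1/2 coefficients on the squared terms (which make the second partials match) come later in the video series, and the example function e^x·sin(y) is my own illustrative choice, not from this transcript.

```python
import sympy as sp

x, y = sp.symbols('x y')
x0, y0 = 0, 0                      # point of approximation (an arbitrary choice)
f = sp.exp(x) * sp.sin(y)          # an example function, not from the transcript

def at0(expr):
    """Evaluate an expression at the approximation point (x0, y0)."""
    return expr.subs({x: x0, y: y0})

dx, dy = x - x0, y - y0

# Local linearization: matches the value and both first partials at (x0, y0).
L = at0(f) + at0(sp.diff(f, x)) * dx + at0(sp.diff(f, y)) * dy

# Quadratic approximation: adds x^2, xy, and y^2 terms whose coefficients
# make the second partial derivatives match as well.
Q = (L + sp.Rational(1, 2) * at0(sp.diff(f, x, 2)) * dx**2
       + at0(sp.diff(f, x, y)) * dx * dy
       + sp.Rational(1, 2) * at0(sp.diff(f, y, 2)) * dy**2)

print(sp.expand(Q))  # x*y + y for this f at the origin
```

Plugging in (x0, y0) recovers f(x0, y0), and differentiating Q once or twice reproduces the corresponding partial derivatives of f at that point, which are exactly the matching properties the transcript emphasizes.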