And in this video, I'm only going to describe how you compute the gradient, and in the next couple of videos, I'm going to give the geometric interpretation. And I hate doing this, I hate showing the computation before the geometric intuition, since usually it should go the other way around. But the gradient is one of those weird things where the way that you compute it actually seems kind of unrelated to the intuition. And you'll see that. We'll connect them in the next few videos, but to do that, we need to know what both of them actually are. So, on the computation side of things, let's say you have some sort of function, and I'm just going to make it a two-variable function. And let's say it's f of x, y equals x squared sine of y.
Gradient.mp3
The gradient is a way of packing together all the partial derivative information of a function. So let's just start by computing the partial derivatives of this guy. So partial of f with respect to x is equal to...
So we look at this and we consider x the variable and y the constant. Well, in that case, sine of y is also a constant, you know, as far as x is concerned. The derivative of x squared is 2x, so we see that this will be 2x times that constant sine of y.
Whereas the partial derivative with respect to y... Now we look up here and we say x is considered a constant, so x squared is also considered a constant. So this is just a constant times sine of y, so that's going to equal that same constant times the cosine of y, which is the derivative of sine. So now what the gradient does is it just puts both of these together in a vector.
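As a quick aside that isn't in the video, the two partial derivatives above are easy to sanity-check numerically. A minimal Python sketch using central differences; the sample point (1.5, 0.7) is an arbitrary choice of mine:

```python
import math

def f(x, y):
    # The example function from the video: f(x, y) = x^2 * sin(y)
    return x**2 * math.sin(y)

def partial_x(f, x, y, h=1e-6):
    # Central difference in x, holding y fixed (so sin(y) acts as a constant)
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central difference in y, holding x fixed (so x^2 acts as a constant)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.5, 0.7
print(partial_x(f, x0, y0), 2 * x0 * math.sin(y0))   # numeric vs 2x sin(y)
print(partial_y(f, x0, y0), x0**2 * math.cos(y0))    # numeric vs x^2 cos(y)
```

The finite-difference values should agree with 2x sine of y and x squared cosine of y to several decimal places.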
And specifically, let me change colors here, you denote it with a little upside-down triangle. The name of that symbol is nabla, but you often just pronounce it del; you'd say del f or gradient of f. And what this equals is a vector that has those two partial derivatives in it. So the first one is the partial derivative with respect to x, 2x times sine of y.
And the bottom one, partial derivative with respect to y, x squared cosine of y. And notice, maybe I should emphasize, this is actually a vector-valued function, right? So maybe I'll give it a little bit more room here and emphasize that it's got an x and a y.
This is a function that takes in a point in two-dimensional space and outputs a two-dimensional vector. So you could also imagine doing this with three different variables; then you would have three partial derivatives and a three-dimensional output. And the way you might write this more generally is we could go down here and say the gradient of any function is equal to a vector with its partial derivatives, partial of f with respect to x and partial of f with respect to y.
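To make the "point in, vector out" idea concrete, here is a small sketch of my own (not from the video) that packages the two partials into one vector-valued function:

```python
import math

def grad_f(x, y):
    # Gradient of f(x, y) = x^2 sin(y), packed as a vector of the two
    # partial derivatives: [2x sin(y), x^2 cos(y)]
    return [2 * x * math.sin(y), x**2 * math.cos(y)]

# A point in 2D space goes in, a 2D vector comes out.
v = grad_f(2.0, math.pi / 2)
print(v)  # since sin(pi/2) = 1 and cos(pi/2) = 0, this is [4.0, ~0.0]
```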
And in some sense, you know, we call these partial derivatives; I like to think of the gradient as the full derivative, because it kind of captures all of the information that you need. So a very helpful mnemonic device with the gradient is to think about this triangle, this nabla symbol, as being a vector full of partial derivative operators. And by operator, I just mean something like partial with respect to x: something where you could give it a function and it gives you another function.
So you give this guy, you know, the function f, and it gives you this expression, this multivariable function, as a result. So the nabla symbol is this vector full of different partial derivative operators, and in this case, it might just be two of them. And this is kind of a weird thing, right, because it's like, what, this is a vector, it's got like operators in it, that's not what I thought vectors do.
But you can kind of see where it's going. You could think of it as a memory trick, but it's in some sense a little bit deeper than that. And really, when you take this triangle and you can kind of imagine multiplying it by f, really it's like an operator taking in this function, and it's going to give you another function.
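The "operator" idea translates directly into code. As a sketch of my own (not from the video), each partial-derivative operator is a function that takes a function and returns another function, approximated here with central differences:

```python
import math

def d_dx(f, h=1e-6):
    # A partial-derivative operator: give it a function, it gives you
    # another function (the partial derivative with respect to x).
    return lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, h=1e-6):
    return lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)

# The nabla symbol as a "vector" whose entries are operators.
nabla = [d_dx, d_dy]

def gradient(f):
    # "Multiplying" nabla by f really means applying each operator to f.
    return [op(f) for op in nabla]

f = lambda x, y: x**2 * math.sin(y)
gx, gy = gradient(f)
print(gx(1.0, 2.0), gy(1.0, 2.0))  # ≈ 2 sin(2) and cos(2)
```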
It's like you take this triangle and you put an f in front of it, and you can imagine, like, this part gets, quote-unquote, multiplied with f, and this part gets, quote-unquote, multiplied with f, but really you're just saying you take the partial derivative with respect to x, and then with respect to y, and on and on. And the reason for doing this is that this symbol comes up a lot in other contexts. There are two other operators that you're going to learn about, called the divergence and the curl.
We'll get to those later, all in due time. But it's useful to think about this vector-ish thing of partial derivatives. And I mean, one weird thing about it, you could say, okay, so this nabla symbol is a vector of partial derivative operators.
What's its dimension? And it's like, however many dimensions you've got, because if you had a three-dimensional function, that would mean that you should treat this like it's got three different operators as part of it. And you know, I'd kind of finish this off down here.
And if you had something that was 100-dimensional, it would have 100 different operators in it. And that's fine. It's really just, again, kind of a memory trick.
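That scaling to many dimensions can be sketched too. Here is an illustrative n-dimensional version of my own (not from the video), where nabla is literally one partial-derivative operator per coordinate:

```python
def make_partial(i, h=1e-6):
    # The i-th partial-derivative operator for a function of n variables,
    # f(point) -> number, where point is a list of coordinates.
    def op(f):
        def df(point):
            up, down = list(point), list(point)
            up[i] += h
            down[i] -= h
            return (f(up) - f(down)) / (2 * h)
        return df
    return op

def gradient(f, point):
    # One partial-derivative operator per dimension, however many there are.
    return [make_partial(i)(f)(point) for i in range(len(point))]

# A 100-dimensional example: f(p) = sum of squares, whose gradient is 2p.
f = lambda p: sum(c * c for c in p)
point = [0.01 * i for i in range(100)]
g = gradient(f, point)
print(len(g), g[3])  # 100 entries; g[3] ≈ 2 * 0.03 = 0.06
```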
So with that, that's how you compute the gradient. Not too much to it. It's pretty much just partial derivatives, but you smack them into a vector.
Where it gets fun and where it gets interesting is with the geometric interpretation. I'll get to that in the next couple videos. It's also a super important tool for something called the directional derivative.
And here, I'm gonna go ahead and talk about how you actually compute it. So 3D curl is the kind of thing that you take with regards to a three-dimensional vector field. So something that takes in a three-dimensional point as its input, and then it's gonna output a three-dimensional vector. And it's common to write the component functions as P, Q, and R. So each one of these is a scalar-valued function that takes in a three-dimensional point and just outputs a number. So it'll be that same 3D point with the coordinates X, Y, and Z. So when you have a three-dimensional vector field like this, the image you might have in mind would be something like this, where every point in three-dimensional space has a vector attached to it. And when you actually look at it, there's quite a lot going on.
3d curl formula, part 1.mp3
But in principle, all that's really happening is that each point in space is associated with a vector, and the point in space is the input and the vector is the output, and you're just kind of gluing them together. And naturally, because between the three dimensions of the input and the three dimensions of the output, we have six dimensions going on, the picture that you're looking at becomes quite messy. So the question is, how do you compute this curl value that I've been talking about?
Curl of your vector-valued function. And just as a quick reminder, what this is supposed to be is you're gonna have some kind of fluid flow induced by this vector field, where you're imagining air flowing along each vector. And what you want is a function that tells you, at any given point, what is the rotation induced by that fluid flow around that point?
And because rotation is described with a three-dimensional vector, you're expecting this to be vector-valued. It'll be something that equals a vector output. And if that doesn't make sense, if that doesn't quite jive, maybe go check out the video on how to represent three-dimensional rotation with a vector.
So what you have here is gonna be something that takes as its input x, y, and z. It takes a three-dimensional point, and what it outputs is a vector describing rotation. And there's actually another notation that's quite helpful when it comes to computing this, where you take nabla, that upside-down triangle we used in divergence and gradient, and you imagine taking the cross product between that and your vector v. And as a reminder, this nabla, you imagine it as if it's a vector containing partial differential operators.
And that's the kind of thing where, when you say it out loud, it sounds kind of fancy. A vector full of partial differential operators. But all it really means is, I'm just gonna write a bunch of symbols, and this partial partial x is something that wants to take in a function, a multivariable function, and tell you its partial derivative.
And strictly speaking, this doesn't really make sense. Like, hey, how can a vector contain these partial differential operators? But as a series of symbolic movements, it's actually quite helpful.
Because when you're multiplying these guys by a thing, it's not really multiplication. You're really gonna be giving it some kind of multivariable function, like p, q, or r, the component functions of our vector field, and evaluating it. So just as a warm-up for how to do this, let's see what this looks like in the case of two dimensions, where we already kind of know the formula for two-dimensional curl.
So what that would look like is you have a smaller, more two-dimensional, just partial partial x, partial partial y, you know, del operator. And you're gonna take the cross product between that and a two-dimensional vector that's just the component functions p and q. And in this case, p and q would be just functions of x and y.
So I'm kind of overloading notation, right? Over here I have a two-dimensional vector field where I'm saying p and q are scalar-valued functions with a two-dimensional input. But over here, I'm also using p and q to represent ones with a three-dimensional input.
So you should think of these as separate, but it's common to use the same names. And this is just kind of gonna illustrate the broader, kind of more complicated point. So when you compute something like this, the cross product, you typically think of it as taking these diagonal components and multiplying them. So that would be your partial partial x, quote-unquote, multiplied with q, which really means you're taking the partial derivative of q with respect to x. And then you subtract off this other diagonal component here, which should be partial partial y.
You're taking partial partial y of p, and that's what you're subtracting off. So just the partial derivative of that p function with respect to y.
And hopefully this is something you recognize. This is the two-dimensional curl. And it's something we got an intuition for.
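As a sanity check that isn't in the video, the formula partial Q partial x minus partial P partial y can be tested numerically on a field whose curl is known; the rigid rotation field v(x, y) = (-y, x) has two-dimensional curl equal to 2 everywhere:

```python
def curl_2d(P, Q, x, y, h=1e-6):
    # Two-dimensional curl: dQ/dx - dP/dy, via central differences.
    dQ_dx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dP_dy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    return dQ_dx - dP_dy

# Counterclockwise rigid rotation field v(x, y) = (-y, x).
P = lambda x, y: -y
Q = lambda x, y: x
print(curl_2d(P, Q, 0.3, -1.2))  # ≈ 2.0 at any point
```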
I want it to be more than just a formula. But hopefully this is kind of reassuring: when you take that del operator, that nabla symbol, and take the cross product with the vector-valued function itself, it gives you a sense of curl. Now when we do this in the three-dimensional case, we're gonna take a three-dimensional cross product between this three-dimensional vector-ish thing and this three-dimensional function.
And now would be a good time, by the way, if you're not terribly comfortable with the cross product, how to compute it, or how to interpret it, to go find the videos that Sal does on this and build up that intuition for what a cross product actually is and how to compute it. Because at this point, I'm gonna assume that you know how to compute it, because we're doing it in kind of an absurd context of partial differential operators and functions. So it's important to have that foundation.
So the way you compute a thing like this is you construct a determinant. So I'm gonna go down here. Determinant of a certain three-by-three matrix.
And the top row of that is all of the unit vectors in the various directions of three-dimensional space. So these i, j, and k guys: i represents the unit vector in the x direction. So that would be i is equal to, you know, x component is one, but then the other components are zero.
And then similarly, j and k represent the unit vectors in the y and z directions. And again, if it doesn't quite make sense why I'm putting them up there or what we're about to do, maybe check out that cross product video. So we put those in the top row as vectors.
And this is kind of the trick to computing the cross product, because again, it's like, what does it mean to put a vector inside a matrix? But it's a notational trick. And then we're gonna take the first vector that we're doing the cross product with and put its components in the next row.
So what that would look like is the next row has a partial partial x (you do whatever the first component is first, and then the second component second), then a partial partial y, and for the third component, the z, a partial partial z.
And then for the last row, you put in the second vector, which in this case is the vector-valued function: p, which is a multivariable function, then q and r. And first it's worth stepping back and looking at this. This is kind of an absurd thing. Usually when we talk about matrices and taking the determinant, all of the components are numbers, because you're multiplying numbers together.
But here we've got a notational trick layered on top of a notational trick, so that one of the rows is vectors, one of the rows is partial differential operators, and in the last one, each entry is a multivariable function. So it seems like as absurd and convoluted, as far away from a matrix full of numbers, as you can get, but it's actually very helpful for computation, because if you go through the process of computing this determinant and asking what that could mean, the thing that plops out is gonna be the formula for three-dimensional curl. And at the risk of having a video that runs too long, I'll call it an end here and continue going through that operation in the next video.
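The video defers expanding that determinant to the next part; for reference, here is a sketch of my own (assuming the standard cofactor expansion along the top row) of what the determinant works out to, checked with central differences on a field that rotates about the z-axis:

```python
def curl_3d(P, Q, R, point, h=1e-6):
    # Expanding the determinant with i, j, k on top, the partial operators
    # in the middle row, and P, Q, R on the bottom gives:
    #   curl = (dR/dy - dQ/dz, dP/dz - dR/dx, dQ/dx - dP/dy)
    def d(f, i):
        up, down = list(point), list(point)
        up[i] += h
        down[i] -= h
        return (f(*up) - f(*down)) / (2 * h)
    return [d(R, 1) - d(Q, 2),
            d(P, 2) - d(R, 0),
            d(Q, 0) - d(P, 1)]

# Rotation about the z-axis: v = (-y, x, 0), whose curl is (0, 0, 2).
P = lambda x, y, z: -y
Q = lambda x, y, z: x
R = lambda x, y, z: 0.0
print(curl_3d(P, Q, R, [1.0, 2.0, 3.0]))  # ≈ [0.0, 0.0, 2.0]
```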
So in the last video, I introduced this thing called the second partial derivative test. And if you have some kind of multivariable function, or really just a two variable function is what this applies to, something that's f of x, y and it outputs a number, when you're looking for places where it has a local maximum or a local minimum, the first step, as I talked about a few videos ago, is to find where the gradient equals zero. And sometimes you'll hear these called critical points or stable points, but inputs where the gradient equals zero. And that's really just a way of compactly writing the fact that all the partial derivatives are equal to zero. Now when you find a point like this, in order to test whether it's a local maximum or a local minimum or a saddle point, without actually looking at the graph, because you don't always have the ability to do that at your disposal, the first step is to compute this long value. And this is the thing I wanna give intuition behind. Where you take all three second partial derivatives, the second partial derivative with respect to x, the second partial derivative with respect to y, and the mixed partial derivative, where first you do it with respect to x, then you do it with respect to y.
|
Second partial derivative test intuition.mp3
|
And that's really just a way of compactly writing the fact that all the partial derivatives are equal to zero. Now when you find a point like this, in order to test whether it's a local maximum or a local minimum or a saddle point, without actually looking at the graph, because you don't always have the ability to do that at your disposal, the first step is to compute this long value. And this is the thing I wanna give intuition behind. Where you take all three second partial derivatives, the second partial derivative with respect to x, the second partial derivative with respect to y, and the mixed partial derivative, where first you do it with respect to x, then you do it with respect to y. And you compute this value where you evaluate each one of those at your critical point, and you multiply the two pure second partial derivatives, and then subtract off the square of the mixed partial derivative. And again, I'll give intuition for that in a moment, but right now we just kinda take it, oh, all right, I guess you compute this number. And if that value h, if that value h is greater than zero, what it tells you, what it tells you is that you definitely have either a maximum or a minimum.
|
Second partial derivative test intuition.mp3
|
Where you take all three second partial derivatives, the second partial derivative with respect to x, the second partial derivative with respect to y, and the mixed partial derivative, where first you do it with respect to x, then you do it with respect to y. And you compute this value where you evaluate each one of those at your critical point, and you multiply the two pure second partial derivatives, and then subtract off the square of the mixed partial derivative. And again, I'll give intuition for that in a moment, but right now we just kinda take it, oh, all right, I guess you compute this number. And if that value h, if that value h is greater than zero, what it tells you, what it tells you is that you definitely have either a maximum or a minimum. So you definitely have either a maximum or a minimum. And then to determine which one, you just have to look at the concavity in one direction. So you'll look at the second partial derivative with respect to x, for example, and if that was positive, that would tell you when you look in the x direction, there's a positive concavity.
|
Second partial derivative test intuition.mp3
|
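The quantity described in the transcript — multiply the two pure second partials at the critical point, then subtract the square of the mixed partial — can be sketched in Python with sympy. The function f below is an assumed example with an obvious minimum at the origin, not one from the video:

```python
# Hedged sketch of the second partial derivative test:
# H = f_xx * f_yy - (f_xy)^2, evaluated at a critical point.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2          # assumed example; gradient is zero at (0, 0)

fxx = sp.diff(f, x, x)   # pure second partial in x
fyy = sp.diff(f, y, y)   # pure second partial in y
fxy = sp.diff(f, x, y)   # mixed partial

H = (fxx*fyy - fxy**2).subs({x: 0, y: 0})
print(H)  # 4
```

Here H = 4 > 0, so the point is definitely a max or a min; since fxx = 2 > 0, the concavity in the x direction is positive and (0, 0) is a local minimum, matching the rule stated above.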
And if that value h, if that value h is greater than zero, what it tells you, what it tells you is that you definitely have either a maximum or a minimum. So you definitely have either a maximum or a minimum. And then to determine which one, you just have to look at the concavity in one direction. So you'll look at the second partial derivative with respect to x, for example, and if that was positive, that would tell you when you look in the x direction, there's a positive concavity. If it was negative, it would mean a negative concavity. And so that means a positive value for that second partial derivative would mean a local minimum, and a negative value would mean a local maximum. So that's what it means if this value h turns out to be greater than zero.
|
Second partial derivative test intuition.mp3
|
So you'll look at the second partial derivative with respect to x, for example, and if that was positive, that would tell you when you look in the x direction, there's a positive concavity. If it was negative, it would mean a negative concavity. And so that means a positive value for that second partial derivative would mean a local minimum, and a negative value would mean a local maximum. So that's what it means if this value h turns out to be greater than zero. And if this value h turns out to be less than zero, strictly less than zero, then you definitely have a saddle point, saddle point, which is neither a maximum nor a minimum. It's kind of like there's disagreement in different directions over whether it should be a maximum or a minimum. And if h equals zero, the test isn't good enough.
|
Second partial derivative test intuition.mp3
|
So that's what it means if this value h turns out to be greater than zero. And if this value h turns out to be less than zero, strictly less than zero, then you definitely have a saddle point, saddle point, which is neither a maximum nor a minimum. It's kind of like there's disagreement in different directions over whether it should be a maximum or a minimum. And if h equals zero, the test isn't good enough. You would have to do something else to figure it out. So why does this work? Why does this seemingly random conglomeration of second partial derivatives give you a test that lets you determine what type of stable point you're looking at?
|
Second partial derivative test intuition.mp3
|
And if h equals zero, the test isn't good enough. You would have to do something else to figure it out. So why does this work? Why does this seemingly random conglomeration of second partial derivatives give you a test that lets you determine what type of stable point you're looking at? Well, let's just understand each term individually. So this second partial derivative with respect to x, since you're taking both partial derivatives with respect to x, you're basically treating the entire multivariable function as if x is the only variable and y was just some constant. So it's like you're only looking at movement in the x direction.
|
Second partial derivative test intuition.mp3
|
Why does this seemingly random conglomeration of second partial derivatives give you a test that lets you determine what type of stable point you're looking at? Well, let's just understand each term individually. So this second partial derivative with respect to x, since you're taking both partial derivatives with respect to x, you're basically treating the entire multivariable function as if x is the only variable and y was just some constant. So it's like you're only looking at movement in the x direction. So in terms of a graph, let's say we've got like this graph here, you can imagine slicing this with a plane that represents movement purely in the x direction. So that'll be a constant y value slice. And you take a look at the curve where this slice intersects your graph.
|
Second partial derivative test intuition.mp3
|
So it's like you're only looking at movement in the x direction. So in terms of a graph, let's say we've got like this graph here, you can imagine slicing this with a plane that represents movement purely in the x direction. So that'll be a constant y value slice. And you take a look at the curve where this slice intersects your graph. And in the one that I have pictured here, it looks like it's a positive concavity. So this term right here kind of tells you x concavity. So it's kind of like the, what is the concavity as far as the variable x is concerned?
|
Second partial derivative test intuition.mp3
|
And you take a look at the curve where this slice intersects your graph. And in the one that I have pictured here, it looks like it's a positive concavity. So this term right here kind of tells you x concavity. So it's kind of like the, what is the concavity as far as the variable x is concerned? And then symmetrically, this over here, when you take the partial derivative with respect to y two times in a row, it's like you're ignoring the fact that x is even a variable. And you're looking purely at what movement in the y direction looks like, which on the graph that I have pictured here, also happens to give you kind of this positive concavity parabola look. But the point is that the curve on the graph that results from looking at movement purely in the y direction can be analyzed just looking at this partial derivative with respect to y twice in a row.
|
Second partial derivative test intuition.mp3
|
So it's kind of like the, what is the concavity as far as the variable x is concerned? And then symmetrically, this over here, when you take the partial derivative with respect to y two times in a row, it's like you're ignoring the fact that x is even a variable. And you're looking purely at what movement in the y direction looks like, which on the graph that I have pictured here, also happens to give you kind of this positive concavity parabola look. But the point is that the curve on the graph that results from looking at movement purely in the y direction can be analyzed just looking at this partial derivative with respect to y twice in a row. So that term kind of tells you y concavity, y concavity. Now, first of all, notice what happens if these disagree. If say, x thought there should be positive concavity and y thought there should be negative concavity.
|
Second partial derivative test intuition.mp3
|
But the point is that the curve on the graph that results from looking at movement purely in the y direction can be analyzed just looking at this partial derivative with respect to y twice in a row. So that term kind of tells you y concavity, y concavity. Now, first of all, notice what happens if these disagree. If say, x thought there should be positive concavity and y thought there should be negative concavity. Here, I'll write that down, what that means. If x thinks there's positive concavity, we have here some kind of positive number that I'll just write as like a plus sign in parentheses. And then this here, y concavity would be some kind of negative number.
|
Second partial derivative test intuition.mp3
|
If say, x thought there should be positive concavity and y thought there should be negative concavity. Here, I'll write that down, what that means. If x thinks there's positive concavity, we have here some kind of positive number that I'll just write as like a plus sign in parentheses. And then this here, y concavity would be some kind of negative number. So I'll just put like a negative sign in parentheses. So that would mean this very first term would be a positive times a negative. And that first term would be negative.
|
Second partial derivative test intuition.mp3
|
And then this here, y concavity would be some kind of negative number. So I'll just put like a negative sign in parentheses. So that would mean this very first term would be a positive times a negative. And that first term would be negative. And now the thing that we're subtracting off, I'll get to the intuition behind this mixed partial derivative term in a moment. But for now, you can notice that it's something squared. It's something that's always a positive term.
|
Second partial derivative test intuition.mp3
|
And that first term would be negative. And now the thing that we're subtracting off, I'll get to the intuition behind this mixed partial derivative term in a moment. But for now, you can notice that it's something squared. It's something that's always a positive term. So you're always subtracting off a positive term, which means if this initial one is negative, the entire term h is definitely gonna be negative. So it's gonna put you over into this saddle point territory, which makes sense because if the x direction and the y direction disagree on concavity, that should be a saddle point. The quintessential example here is when you have, when you have the function f of x, y is equal to x squared minus y squared, x squared minus y squared.
|
Second partial derivative test intuition.mp3
|
It's something that's always a positive term. So you're always subtracting off a positive term, which means if this initial one is negative, the entire term h is definitely gonna be negative. So it's gonna put you over into this saddle point territory, which makes sense because if the x direction and the y direction disagree on concavity, that should be a saddle point. The quintessential example here is when you have, when you have the function f of x, y is equal to x squared minus y squared, x squared minus y squared. And the graph of that, by the way, the graph of that would look like this, where, let's see, so orienting myself here, moving in the x direction, you have kind of positive concavity, which corresponds to the positive coefficient in front of x squared. And in the y direction, it looks like negative concavity, corresponding to that negative coefficient in front of the y squared. So when there's disagreement among these, the test ensures that we're gonna have a saddle point.
|
Second partial derivative test intuition.mp3
|
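The quintessential example from the transcript, f(x, y) = x squared minus y squared, can be run through the same test. A short sympy check, where the disagreement in concavity between the x and y directions should force H below zero:

```python
# Checking the saddle example f(x, y) = x^2 - y^2 from the video.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2

fxx = sp.diff(f, x, x)   # 2: positive concavity in the x direction
fyy = sp.diff(f, y, y)   # -2: negative concavity in the y direction
fxy = sp.diff(f, x, y)   # 0: no mixed term here

H = fxx*fyy - fxy**2
print(H)  # -4
```

H = -4 < 0, so the test reports a saddle point at the critical point (0, 0), exactly because the two pure second partials disagree in sign.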
The quintessential example here is when you have, when you have the function f of x, y is equal to x squared minus y squared, x squared minus y squared. And the graph of that, by the way, the graph of that would look like this, where, let's see, so orienting myself here, moving in the x direction, you have kind of positive concavity, which corresponds to the positive coefficient in front of x squared. And in the y direction, it looks like negative concavity, corresponding to that negative coefficient in front of the y squared. So when there's disagreement among these, the test ensures that we're gonna have a saddle point. Now, what about if they agree, right? What if either it's the case that x thinks there should be positive concavity and y thinks there should be positive concavity, or they both agree that there should be negative concavity? In either one of these cases, when you multiply them together, they're positive.
|
Second partial derivative test intuition.mp3
|
So when there's disagreement among these, the test ensures that we're gonna have a saddle point. Now, what about if they agree, right? What if either it's the case that x thinks there should be positive concavity and y thinks there should be positive concavity, or they both agree that there should be negative concavity? In either one of these cases, when you multiply them together, they're positive. So it's kind of like saying, if you look purely in the x direction or purely in the y direction, they agree that there should be definitely positive concavity or definitely negative concavity. So that entire first term is going to be positive. So it's kind of like a clever way of capturing whether or not the x directions and y directions agree.
|
Second partial derivative test intuition.mp3
|
In either one of these cases, when you multiply them together, they're positive. So it's kind of like saying, if you look purely in the x direction or purely in the y direction, they agree that there should be definitely positive concavity or definitely negative concavity. So that entire first term is going to be positive. So it's kind of like a clever way of capturing whether or not the x directions and y directions agree. However, the reason that it's not enough is because in either case, we're still subtracting off something that's always a positive term. So when you have this agreement between the x direction and the y direction, it then turns into a battle between this xy agreement and whatever's going on with this mixed partial derivative term. And the stronger that mixed partial derivative term, the bigger this negative number, so the more it's pulling the entire value h towards being negative.
|
Second partial derivative test intuition.mp3
|
So it's kind of like a clever way of capturing whether or not the x directions and y directions agree. However, the reason that it's not enough is because in either case, we're still subtracting off something that's always a positive term. So when you have this agreement between the x direction and the y direction, it then turns into a battle between this xy agreement and whatever's going on with this mixed partial derivative term. And the stronger that mixed partial derivative term, the bigger this negative number, so the more it's pulling the entire value h towards being negative. So let me see if I can give a little bit of reasoning behind why this mixed partial derivative term is trying to pull things towards being a saddle point. Let's take a look at the very simple function, f of xy, xy, is equal to x times y. So what that looks like graphically, f of xy equals x times y, is this.
|
Second partial derivative test intuition.mp3
|
And the stronger that mixed partial derivative term, the bigger this negative number, so the more it's pulling the entire value h towards being negative. So let me see if I can give a little bit of reasoning behind why this mixed partial derivative term is trying to pull things towards being a saddle point. Let's take a look at the very simple function, f of xy, xy, is equal to x times y. So what that looks like graphically, f of xy equals x times y, is this. It looks like a saddle point. So let's go ahead and look at its partial derivatives. So the first partial derivative is partial with respect to x and partial with respect to y.
|
Second partial derivative test intuition.mp3
|
So what that looks like graphically, f of xy equals x times y, is this. It looks like a saddle point. So let's go ahead and look at its partial derivatives. So the first partial derivative is partial with respect to x and partial with respect to y. Well, when you do it with respect to x, x looks like a variable, y looks like a constant, it's just that constant y. And when you do it with respect to y, it goes the other way around. Y looks like the variable, x looks like the constant, so the derivative is that constant x.
|
Second partial derivative test intuition.mp3
|
So the first partial derivative is partial with respect to x and partial with respect to y. Well, when you do it with respect to x, x looks like a variable, y looks like a constant, it's just that constant y. And when you do it with respect to y, it goes the other way around. Y looks like the variable, x looks like the constant, so the derivative is that constant x. Now when you take the second partial derivatives, if you do it with respect to x twice in a row, you're differentiating this with respect to x, that looks like a constant, so you get zero. And similarly, if you do it with respect to y twice in a row, you're doing this, and the derivative of x with respect to y, x looks like a constant, goes to zero. But the important term, the one that we're getting an intuition about here, this mixed partial derivative, first with respect to x, then with respect to y, well, you can view it in two ways.
|
Second partial derivative test intuition.mp3
|
Y looks like the variable, x looks like the constant, so the derivative is that constant x. Now when you take the second partial derivatives, if you do it with respect to x twice in a row, you're differentiating this with respect to x, that looks like a constant, so you get zero. And similarly, if you do it with respect to y twice in a row, you're doing this, and the derivative of x with respect to y, x looks like a constant, goes to zero. But the important term, the one that we're getting an intuition about here, this mixed partial derivative, first with respect to x, then with respect to y, well, you can view it in two ways. Either you take the derivative of this expression with respect to y, in which case it's one, or you think of taking the derivative of this expression with respect to x, in which case it's also one. So it's kind of like this function is a very pure way to take a look at what this mixed partial derivative term looks like. And the higher the coefficient here, if I had put a coefficient of three here, that would mean that the mixed partial derivative would ultimately end up being three.
|
Second partial derivative test intuition.mp3
|
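The partial derivatives of f(x, y) = x times y worked out in the transcript can be confirmed symbolically. In this example both pure second partials vanish and the mixed partial is 1, so the saddle is driven entirely by the mixed term:

```python
# The pure diagonal-disagreement example f(x, y) = x*y from the video.
import sympy as sp

x, y = sp.symbols('x y')
f = x*y

fxx = sp.diff(f, x, x)   # 0: constant along pure x-direction slices
fyy = sp.diff(f, y, y)   # 0: constant along pure y-direction slices
fxy = sp.diff(f, x, y)   # 1: the mixed partial, either order

H = fxx*fyy - fxy**2
print(H)  # -1
```

H = 0*0 - 1**2 = -1 < 0, so the test flags a saddle even though neither axis direction shows any concavity at all — the disagreement lives in the diagonal directions, which is what the mixed term captures.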
But the important term, the one that we're getting an intuition about here, this mixed partial derivative, first with respect to x, then with respect to y, well, you can view it in two ways. Either you take the derivative of this expression with respect to y, in which case it's one, or you think of taking the derivative of this expression with respect to x, in which case it's also one. So it's kind of like this function is a very pure way to take a look at what this mixed partial derivative term looks like. And the higher the coefficient here, if I had put a coefficient of three here, that would mean that the mixed partial derivative would ultimately end up being three. So notice, the reason that this looks like a saddle isn't because the x and y directions disagree. In fact, if you take a look at pure movement in the x direction, it just looks like a constant. The height of the graph along this plane, along this line here, is just a constant, which corresponds to the fact that the second partial derivative with respect to x is equal to zero.
|
Second partial derivative test intuition.mp3
|
And the higher the coefficient here, if I had put a coefficient of three here, that would mean that the mixed partial derivative would ultimately end up being three. So notice, the reason that this looks like a saddle isn't because the x and y directions disagree. In fact, if you take a look at pure movement in the x direction, it just looks like a constant. The height of the graph along this plane, along this line here, is just a constant, which corresponds to the fact that the second partial derivative with respect to x is equal to zero. And then likewise, if you cut it with a plane representing a constant x value, meaning movement purely in the y direction, the height of the graph doesn't really change along there. It's constantly zero, which corresponds to the fact that this other partial derivative is zero. The reason that the whole thing looks like a saddle is because when you cut it with a diagonal plane here, a diagonal plane, it looks like it has negative concavity.
|
Second partial derivative test intuition.mp3
|
The height of the graph along this plane, along this line here, is just a constant, which corresponds to the fact that the second partial derivative with respect to x is equal to zero. And then likewise, if you cut it with a plane representing a constant x value, meaning movement purely in the y direction, the height of the graph doesn't really change along there. It's constantly zero, which corresponds to the fact that this other partial derivative is zero. The reason that the whole thing looks like a saddle is because when you cut it with a diagonal plane here, a diagonal plane, it looks like it has negative concavity. But if you were to chop it in another direction, it would look like it has positive concavity. So in fact, this xy term is kind of like a way of capturing whether there's disagreement in the diagonal directions. And one thing that might be surprising at first is that you only need one of these second partial derivatives in order to determine all of the information about the diagonal directions.
|
Second partial derivative test intuition.mp3
|
The reason that the whole thing looks like a saddle is because when you cut it with a diagonal plane here, a diagonal plane, it looks like it has negative concavity. But if you were to chop it in another direction, it would look like it has positive concavity. So in fact, this xy term is kind of like a way of capturing whether there's disagreement in the diagonal directions. And one thing that might be surprising at first is that you only need one of these second partial derivatives in order to determine all of the information about the diagonal directions. Because you could imagine, you know, maybe there's disagreement between movement along one certain vector and movement along another, and you would have to account for infinitely many directions and look at all of them. And yet evidently, it's the case that you only really need to take a look at this mixed partial derivative term. You know, along with the original pure second partial derivatives with respect to x twice and with respect to y twice.
|
Second partial derivative test intuition.mp3
|
And one thing that might be surprising at first is that you only need one of these second partial derivatives in order to determine all of the information about the diagonal directions. Because you could imagine, you know, maybe there's disagreement between movement along one certain vector and movement along another, and you would have to account for infinitely many directions and look at all of them. And yet evidently, it's the case that you only really need to take a look at this mixed partial derivative term. You know, along with the original pure second partial derivatives with respect to x twice and with respect to y twice. But still, looking at only three different terms to take into account possible disagreement in infinitely many directions actually feels like quite the surprise. And if you want the full rigorous justification for why this is the case, why this second partial derivative test works and kind of an airtight argument, I've put that in an article that you can find that kind of goes into the dirty details for those who are interested. But if you just want the intuition, I think it's fine to think about the fact that this mixed partial derivative is telling you how much your function looks like the graph of f of xy equals x times y, which is the graph that kind of captures all of the diagonal disagreement.
|
Second partial derivative test intuition.mp3
|
You know, along with the original pure second partial derivatives with respect to x twice and with respect to y twice. But still, looking at only three different terms to take into account possible disagreement in infinitely many directions actually feels like quite the surprise. And if you want the full rigorous justification for why this is the case, why this second partial derivative test works and kind of an airtight argument, I've put that in an article that you can find that kind of goes into the dirty details for those who are interested. But if you just want the intuition, I think it's fine to think about the fact that this mixed partial derivative is telling you how much your function looks like the graph of f of xy equals x times y, which is the graph that kind of captures all of the diagonal disagreement. And then when you let that term, that mixed partial derivative term, kind of compete with the agreement between the x and y directions, you know, if they agree very strongly, you have to subtract off a very strong amount in order to pull it back to being negative. So this battle back and forth, if it's pulled to be very negative, that'll give you a saddle point. If it doesn't pull hard enough, then the agreement between the x and y directions wins out and it's either a local maximum or a local minimum.
|
Second partial derivative test intuition.mp3
|
But if you just want the intuition, I think it's fine to think about the fact that this mixed partial derivative is telling you how much your function looks like the graph of f of xy equals x times y, which is the graph that kind of captures all of the diagonal disagreement. And then when you let that term, that mixed partial derivative term, kind of compete with the agreement between the x and y directions, you know, if they agree very strongly, you have to subtract off a very strong amount in order to pull it back to being negative. So this battle back and forth, if it's pulled to be very negative, that'll give you a saddle point. If it doesn't pull hard enough, then the agreement between the x and y directions wins out and it's either a local maximum or a local minimum. So hopefully that sheds a little bit of light on why this term makes sense and why it's a reasonable way to combine the three different second partial derivatives available to you. And again, if you want the full details, I've written that up in an article form. I'll see you next video.
|
Second partial derivative test intuition.mp3
|
No obvious way to directly take the antiderivative of sine cubed theta, but if we had some mixtures of sines and cosines there, then we could start essentially doing u-substitution, which at this point you probably can do in your head. So what we could do is we could write this as a product. So we could write this as sine of theta, so I'll do this part right over here. This is sine of theta, sine of theta times sine squared theta times sine squared theta, and sine squared theta can be rewritten as one minus cosine squared theta. So this is the same thing as sine of theta times sine of theta times one minus cosine squared theta, and if we multiply this out, this gives us sine theta, sine theta minus sine theta cosine squared theta, and this is much easier for us to integrate, although it looks like a more complicated expression because it's easy to take the antiderivative of sine theta, and now it's easy to take the antiderivative of this because we have the derivative of cosine theta sitting right over here, so this is gonna be cosine cubed theta over three. So essentially we're doing u-substitution right over here, but I'll save that for a second. Let's rewrite all of these in a way that's easy to take the antiderivative of it.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
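The rewrite used in the transcript — sine cubed of theta as sine of theta times one minus cosine squared theta — and the resulting antiderivative can both be verified with sympy:

```python
# Verifying the u-substitution setup from the video:
# sin^3(t) = sin(t) - sin(t)*cos^2(t), whose antiderivative
# is -cos(t) + cos^3(t)/3 (up to a constant).
import sympy as sp

t = sp.symbols('theta')
lhs = sp.sin(t)**3
rhs = sp.sin(t) - sp.sin(t)*sp.cos(t)**2
assert sp.simplify(lhs - rhs) == 0   # the identity holds

F = -sp.cos(t) + sp.cos(t)**3/3      # candidate antiderivative
assert sp.simplify(sp.diff(F, t) - lhs) == 0   # F' recovers sin^3(t)
```

The second check is the u-substitution in disguise: differentiating cos cubed theta over three produces the minus sine theta cosine squared theta term, just as described.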
This is sine of theta, sine of theta times sine squared theta times sine squared theta, and sine squared theta can be rewritten as one minus cosine squared theta. So this is the same thing as sine of theta times sine of theta times one minus cosine squared theta, and if we multiply this out, this gives us sine theta, sine theta minus sine theta cosine squared theta, and this is much easier for us to integrate, although it looks like a more complicated expression because it's easy to take the antiderivative of sine theta, and now it's easy to take the antiderivative of this because we have the derivative of cosine theta sitting right over here, so this is gonna be cosine cubed theta over three. So essentially we're doing u-substitution right over here, but I'll save that for a second. Let's rewrite all of these in a way that's easy to take the antiderivative of it. Cosine squared theta, we know this is a common trig identity, that's the same thing as 1/2, this is the same thing as 1/2 of one plus cosine of two theta, and once again, this is much, much easier to take the antiderivative of, so I'll write plus, plus I'll write plus 1/2 plus 1/2 cosine of two theta, and now we, all of this is actually quite easy to take the antiderivative of, and so I'll just rewrite it again. So minus four cosine theta plus four cosine theta sine theta minus cosine theta sine squared theta d theta, just was able to sneak it in, and so that's our integral between zero and two pi. So let's just take the antiderivative in every one of these steps.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
Let's rewrite all of these in a way that's easy to take the antiderivative of it. Cosine squared theta, we know this is a common trig identity, that's the same thing as 1/2, this is the same thing as 1/2 of one plus cosine of two theta, and once again, this is much, much easier to take the antiderivative of, so I'll write plus, plus I'll write plus 1/2 plus 1/2 cosine of two theta, and now we, all of this is actually quite easy to take the antiderivative of, and so I'll just rewrite it again. So minus four cosine theta plus four cosine theta sine theta minus cosine theta sine squared theta d theta, just was able to sneak it in, and so that's our integral between zero and two pi. So let's just take the antiderivative in every one of these steps. It's starting to get a little bit messy, I'll try to write a little bit neater. The antiderivative of sine of theta is cosine of, is negative cosine of theta, negative cosine of theta. If you take the derivative of cosine theta, you get negative sine theta, then the negatives cancel out, you get that right over there.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
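The half-angle identity invoked in the transcript, cosine squared theta equals one half of one plus cosine of two theta, and its antiderivative theta over two plus sine of two theta over four, can be checked with sympy:

```python
# Checking the half-angle identity and its antiderivative from the video:
# cos^2(t) = (1 + cos(2t))/2, and d/dt [t/2 + sin(2t)/4] = cos^2(t).
import sympy as sp

t = sp.symbols('theta')
assert sp.simplify(sp.cos(t)**2 - (1 + sp.cos(2*t))/2) == 0  # the identity

F = t/2 + sp.sin(2*t)/4   # candidate antiderivative (up to a constant)
assert sp.simplify(sp.diff(F, t) - sp.cos(t)**2) == 0
```

Differentiating F gives 1/2 plus cosine of two theta times 2/4, i.e. one half plus one half cosine of two theta, which is exactly the rewritten form of cosine squared theta used in the integral.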
So let's just take the antiderivative in every one of these steps. It's starting to get a little bit messy, I'll try to write a little bit neater. The antiderivative of sine of theta is cosine of, is negative cosine of theta, negative cosine of theta. If you take the derivative of cosine theta, you get negative sine theta, then the negatives cancel out, you get that right over there. Then over here, we have the derivative of cosine theta, which is negative sine theta, so we can essentially kind of treat this, we can kind of make the substitution that u is cosine theta, that's essentially what we're doing in our head. So the antiderivative of this is going to be equal to plus cosine cubed theta over three, and then the antiderivative of 1 1β2 with respect to theta is just going to be plus 1 1β2 theta. The antiderivative of cosine two theta, well we want the derivative of this thing sitting someplace, the derivative of this thing over here is two, so if we put a two here, we can't just multiply by two arbitrarily, we'd have to multiply and divide by two, so we could put a two here, and then we could also, but we'd also have to divide by two, so then that would become a four, and we haven't changed this, notice this is now 2β4 cosine of two theta, the exact same thing as 1 1β2 cosine two theta, but this is useful because now, the way I've written it here, we have the derivative of two theta right over here, and so we can just say, well we'll just take the antiderivative of this whole thing, which is going to be sine of two theta, but we still have the one, so the antiderivative of this part right over here is sine of two theta, and then we have the 1β4 out there, so plus 1β4 sine of two theta, and then the antiderivative of cosine theta is just sine theta, so minus four sine theta, minus four sine theta, antiderivative of this right over here, we can kind of pick whichever way we want to do it, but we could say, well the derivative of sine theta is cosine theta, so 
this is going to be the same thing as four, sine squared theta, sine squared theta over two, over two, or instead of saying over two, instead of writing that four, I'll just divide the four by the two, and I will get a two, I will get a two, so let me erase that, and put a two right over here, and you could work it out yourself, you were to take the derivative of this thing right over here, it'd be the derivative of sine theta, if you just use chain rule, which would be cosine theta, and then times four sine of theta, so that's exactly what we have right over here, and then we have this last part, the derivative of sine theta is cosine theta, and so once again, just like we've done before, the antiderivative of this whole thing is going to be negative sine cubed of theta over three, and we need to evaluate this entire expression between zero, zero, and two pi, so let's see how it evaluates.
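Each antiderivative step in this passage can be checked symbolically. A minimal SymPy sketch, where the integrand terms are reconstructed by working backwards from the antiderivatives the transcript names (so the left-hand entries below are an inference, not something shown on screen):

```python
import sympy as sp

theta = sp.symbols('theta')

# (integrand term, antiderivative term) pairs as reconstructed from the
# transcript; each pair is verified by differentiating the antiderivative.
pairs = [
    (sp.sin(theta), -sp.cos(theta)),
    (-sp.cos(theta)**2 * sp.sin(theta), sp.cos(theta)**3 / 3),
    (sp.Rational(1, 2), theta / 2),
    (sp.cos(2 * theta) / 2, sp.sin(2 * theta) / 4),
    (-4 * sp.cos(theta), -4 * sp.sin(theta)),
    (4 * sp.sin(theta) * sp.cos(theta), 2 * sp.sin(theta)**2),
    (-sp.sin(theta)**2 * sp.cos(theta), -sp.sin(theta)**3 / 3),
]

for integrand, antiderivative in pairs:
    # Differentiating each antiderivative must recover its integrand term.
    assert sp.simplify(sp.diff(antiderivative, theta) - integrand) == 0
```

Differentiating each antiderivative and comparing against the original term is exactly the check the narrator does by hand (e.g. the chain rule on 2 sin²θ giving back 4 sinθ cosθ).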
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
If you take the derivative of cosine theta, you get negative sine theta, then the negatives cancel out, you get that right over there. Then over here, we have the derivative of cosine theta, which is negative sine theta, so we can essentially kind of treat this, we can kind of make the substitution that u is cosine theta, that's essentially what we're doing in our head. So the antiderivative of this is going to be equal to plus cosine cubed theta over three, and then the antiderivative of 1/2 with respect to theta is just going to be plus 1/2 theta. The antiderivative of cosine two theta, well we want the derivative of this thing sitting someplace, the derivative of this thing over here is two, so if we put a two here, we can't just multiply by two arbitrarily, we'd have to multiply and divide by two, so we could put a two here, and then we could also, but we'd also have to divide by two, so then that would become a four, and we haven't changed this, notice this is now 2/4 cosine of two theta, the exact same thing as 1/2 cosine two theta, but this is useful because now, the way I've written it here, we have the derivative of two theta right over here, and so we can just say, well we'll just take the antiderivative of this whole thing, which is going to be sine of two theta, but we still have the one, so the antiderivative of this part right over here is sine of two theta, and then we have the 1/4 out there, so plus 1/4 sine of two theta, and then the antiderivative of cosine theta is just sine theta, so minus four sine theta, minus four sine theta, antiderivative of this right over here, we can kind of pick whichever way we want to do it, but we could say, well the derivative of sine theta is cosine theta, so this is going to be the same thing as four, sine squared theta, sine squared theta over two, over two, or instead of saying over two, instead of writing that four, I'll just divide the four by the two, and I will get a two, I will get a two, so let me 
erase that, and put a two right over here, and you could work it out yourself, you were to take the derivative of this thing right over here, it'd be the derivative of sine theta, if you just use chain rule, which would be cosine theta, and then times four sine of theta, so that's exactly what we have right over here, and then we have this last part, the derivative of sine theta is cosine theta, and so once again, just like we've done before, the antiderivative of this whole thing is going to be negative sine cubed of theta over three, and we need to evaluate this entire expression between zero, zero, and two pi, so let's see how it evaluates. So first let's evaluate everything at two pi, so this evaluated at two pi is negative one, this evaluated at two pi is one third, this evaluated at two pi is just going to be pi, this evaluated at two pi is zero, sine of four pi is going to be zero, this evaluated at two pi is going to be zero, this evaluated at two pi is going to be zero, this evaluated at two pi is going to be zero. So that's a nice simplification. So that's everything evaluated at 2 pi. And from that, we're going to have to subtract everything evaluated at 0.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
The antiderivative of cosine two theta, well we want the derivative of this thing sitting someplace, the derivative of this thing over here is two, so if we put a two here, we can't just multiply by two arbitrarily, we'd have to multiply and divide by two, so we could put a two here, and then we could also, but we'd also have to divide by two, so then that would become a four, and we haven't changed this, notice this is now 2/4 cosine of two theta, the exact same thing as 1/2 cosine two theta, but this is useful because now, the way I've written it here, we have the derivative of two theta right over here, and so we can just say, well we'll just take the antiderivative of this whole thing, which is going to be sine of two theta, but we still have the one, so the antiderivative of this part right over here is sine of two theta, and then we have the 1/4 out there, so plus 1/4 sine of two theta, and then the antiderivative of cosine theta is just sine theta, so minus four sine theta, minus four sine theta, antiderivative of this right over here, we can kind of pick whichever way we want to do it, but we could say, well the derivative of sine theta is cosine theta, so this is going to be the same thing as four, sine squared theta, sine squared theta over two, over two, or instead of saying over two, instead of writing that four, I'll just divide the four by the two, and I will get a two, I will get a two, so let me erase that, and put a two right over here, and you could work it out yourself, you were to take the derivative of this thing right over here, it'd be the derivative of sine theta, if you just use chain rule, which would be cosine theta, and then times four sine of theta, so that's exactly what we have right over here, and then we have this last part, the derivative of sine theta is cosine theta, and so once again, just like we've done before, the antiderivative of this whole thing is going to be negative sine cubed of theta over three, and we need to 
evaluate this entire expression between zero, zero, and two pi, so let's see how it evaluates. So first let's evaluate everything at two pi, so this evaluated at two pi is negative one, this evaluated at two pi is one third, this evaluated at two pi is just going to be pi, this evaluated at two pi is zero, sine of four pi is going to be zero, this evaluated at two pi is going to be zero, this evaluated at two pi is going to be zero, this evaluated at two pi is going to be zero. So that's a nice simplification. So that's everything evaluated at 2 pi. And from that, we're going to have to subtract everything evaluated at 0. So cosine of 0, well, that's going to be, once again, 1. And we have a negative sign, so it's negative 1. Then you're going to have plus 1 3rd.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
And from that, we're going to have to subtract everything evaluated at 0. So cosine of 0, well, that's going to be, once again, 1. And we have a negative sign, so it's negative 1. Then you're going to have plus 1 3rd. And then you're going to have 0. And then all of these other things are going to be 0. And so if you simplify it, you get this is going to be equal to negative 1 plus 1 3rd plus pi.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
Then you're going to have plus 1 3rd. And then you're going to have 0. And then all of these other things are going to be 0. And so if you simplify it, you get this is going to be equal to negative 1 plus 1 3rd plus pi. And then we have plus 1 minus 1 3rd. Well, that cancels with that. That cancels with that.
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
And so if you simplify it, you get this is going to be equal to negative 1 plus 1 3rd plus pi. And then we have plus 1 minus 1 3rd. Well, that cancels with that. That cancels with that. And we deserve a drum roll now. It all simplified, just like when we used Stokes' theorem in the four videos. Actually, I think it was a little bit simpler to just directly evaluate the line integral over here.
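Assembling the full antiderivative from the terms named across these steps and evaluating between 0 and 2π reproduces the final answer of π. A sketch in SymPy, where the expression F is reconstructed from the transcript and is therefore an inference about what was on screen:

```python
import sympy as sp

theta = sp.symbols('theta')

# Full antiderivative assembled from the transcript's terms.
F = (-sp.cos(theta) + sp.cos(theta)**3 / 3 + theta / 2
     + sp.sin(2 * theta) / 4 - 4 * sp.sin(theta)
     + 2 * sp.sin(theta)**2 - sp.sin(theta)**3 / 3)

upper = F.subs(theta, 2 * sp.pi)  # -1 + 1/3 + pi; every other term vanishes
lower = F.subs(theta, 0)          # -1 + 1/3
result = sp.simplify(upper - lower)
print(result)  # pi
```

The -1 and +1/3 at both endpoints cancel in the subtraction, leaving π, matching the Stokes' theorem result from the earlier videos.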
|
Evaluating line integral directly - part 2 Multivariable Calculus Khan Academy.mp3
|
And graphically, this has the interpretation that if you have a graph of f, setting its derivative equal to zero means that you're looking for places where it's got a flat tangent line. So in the graph that I drew, it would be these two flat tangent lines. And then once you find these points, so for example, here you have one solution that I'll call x1, and then here you have another solution, x2, you can ask yourself the question, are these maxima or are they minima, right? Because both of these can have flat tangent lines. So when you do find this, and you want to understand is it a maximum or a minimum, if you're just looking at the graph, we can tell. You can tell that this point here is a local maximum, and this point here is a local minimum. But if you weren't looking at the graph, there's a nice test that will tell you the answer.
|
Warm up to the second partial derivative test.mp3
|
Because both of these can have flat tangent lines. So when you do find this, and you want to understand is it a maximum or a minimum, if you're just looking at the graph, we can tell. You can tell that this point here is a local maximum, and this point here is a local minimum. But if you weren't looking at the graph, there's a nice test that will tell you the answer. You basically look for the second derivative, and in this case, because the concavity is down, that second derivative is gonna be less than zero. And then over here, because the concavity is up, that second derivative is greater than zero. And by getting this information of the concavity, you can make a conclusion that when the concavity is down, you're at a local maximum.
|
Warm up to the second partial derivative test.mp3
|
But if you weren't looking at the graph, there's a nice test that will tell you the answer. You basically look for the second derivative, and in this case, because the concavity is down, that second derivative is gonna be less than zero. And then over here, because the concavity is up, that second derivative is greater than zero. And by getting this information of the concavity, you can make a conclusion that when the concavity is down, you're at a local maximum. When the concavity is up, you're at a local minimum. And in the case where the second derivative is zero, it's undetermined. You would have to do more tests to figure it out.
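The single-variable second derivative test described here can be sketched with a small symbolic example; the cubic below is a hypothetical stand-in for the curve in the video, chosen because it has one local maximum and one local minimum:

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = x**3 - 3 * x  # hypothetical example function, not the one in the video

# Flat tangent lines: where the first derivative is zero.
critical_points = sp.solve(sp.diff(g, x), x)  # [-1, 1]
for c in critical_points:
    concavity = sp.diff(g, x, 2).subs(x, c)
    # concavity < 0 -> local max; concavity > 0 -> local min;
    # concavity == 0 would be the undetermined case mentioned above.
    kind = 'local max' if concavity < 0 else 'local min'
    print(c, kind)
```

At x = -1 the second derivative is -6 (concave down, a local maximum); at x = 1 it is 6 (concave up, a local minimum), mirroring the test as stated.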
|
Warm up to the second partial derivative test.mp3
|
And by getting this information of the concavity, you can make a conclusion that when the concavity is down, you're at a local maximum. When the concavity is up, you're at a local minimum. And in the case where the second derivative is zero, it's undetermined. You would have to do more tests to figure it out. It's unknown. So in the multivariable world, the situation is very similar. As I've talked about in previous videos, what you do is you'd have some kind of function, and let's say it's a two-variable function, and instead of looking for where the derivative equals zero, you're gonna be looking for where the gradient of your function is equal to the zero vector, which we might make bold to emphasize that that's a vector.
|
Warm up to the second partial derivative test.mp3
|
You would have to do more tests to figure it out. It's unknown. So in the multivariable world, the situation is very similar. As I've talked about in previous videos, what you do is you'd have some kind of function, and let's say it's a two-variable function, and instead of looking for where the derivative equals zero, you're gonna be looking for where the gradient of your function is equal to the zero vector, which we might make bold to emphasize that that's a vector. And that corresponds with finding flat tangent planes. And if that seems unfamiliar, go back and take a look at the video where I introduced the idea of multivariable maxima and minima. But the subject of this video is gonna be on what is analogous to this second derivative test, where in the single-variable world, you just find the second derivative and check if it's greater than or less than zero.
|
Warm up to the second partial derivative test.mp3
|
As I've talked about in previous videos, what you do is you'd have some kind of function, and let's say it's a two-variable function, and instead of looking for where the derivative equals zero, you're gonna be looking for where the gradient of your function is equal to the zero vector, which we might make bold to emphasize that that's a vector. And that corresponds with finding flat tangent planes. And if that seems unfamiliar, go back and take a look at the video where I introduced the idea of multivariable maxima and minima. But the subject of this video is gonna be on what is analogous to this second derivative test, where in the single-variable world, you just find the second derivative and check if it's greater than or less than zero. How can we, in the multivariable world, do something similar to figure out if you have a local minimum, a local maximum, or that new possibility of a saddle point that I talked about in the last video? So there is another test, and it's called the second partial derivative test. And I'll get to the specifics of that at the very end of this video.
|
Warm up to the second partial derivative test.mp3
|
But the subject of this video is gonna be on what is analogous to this second derivative test, where in the single-variable world, you just find the second derivative and check if it's greater than or less than zero. How can we, in the multivariable world, do something similar to figure out if you have a local minimum, a local maximum, or that new possibility of a saddle point that I talked about in the last video? So there is another test, and it's called the second partial derivative test. And I'll get to the specifics of that at the very end of this video. But to set the landscape, I wanna actually talk through a specific example where we're finding when the gradient equals zero, just to see what that looks like and just to have some concrete formulas to deal with. So the function that you're looking at right now is f of x, y is equal to x to the fourth minus four x squared plus y squared, okay? So that's the function that we're dealing with.
|
Warm up to the second partial derivative test.mp3
|
And I'll get to the specifics of that at the very end of this video. But to set the landscape, I wanna actually talk through a specific example where we're finding when the gradient equals zero, just to see what that looks like and just to have some concrete formulas to deal with. So the function that you're looking at right now is f of x, y is equal to x to the fourth minus four x squared plus y squared, okay? So that's the function that we're dealing with. And in order to find where its tangent plane is flat, we're looking for where the gradient equals zero. And remember, this is just really a way of unpacking the requirements that both partial derivatives, the partial derivative of f with respect to x at some point, and we'll kind of write it in as we're looking for the x and y where this is zero, and also where the partial derivative of f with respect to y at that same point, x, y, is equal to zero. So the idea is that this is gonna give us some kind of system of equations that we can solve for x and y.
|
Warm up to the second partial derivative test.mp3
|
So that's the function that we're dealing with. And in order to find where its tangent plane is flat, we're looking for where the gradient equals zero. And remember, this is just really a way of unpacking the requirements that both partial derivatives, the partial derivative of f with respect to x at some point, and we'll kind of write it in as we're looking for the x and y where this is zero, and also where the partial derivative of f with respect to y at that same point, x, y, is equal to zero. So the idea is that this is gonna give us some kind of system of equations that we can solve for x and y. So let's go ahead and actually do that. In this case, the partial derivative with respect to x, we look up here, and the only places where x shows up, we have x to the fourth minus four x squared. So that x to the fourth turns into four times x cubed minus four x squared, that becomes minus eight x, and then y, y just looks like a constant, so we're adding a constant, and nothing changes here.
|
Warm up to the second partial derivative test.mp3
|
So the idea is that this is gonna give us some kind of system of equations that we can solve for x and y. So let's go ahead and actually do that. In this case, the partial derivative with respect to x, we look up here, and the only places where x shows up, we have x to the fourth minus four x squared. So that x to the fourth turns into four times x cubed minus four x squared, that becomes minus eight x, and then y, y just looks like a constant, so we're adding a constant, and nothing changes here. So the first requirement is that this portion is equal to zero. Now the second part, where we're looking for the partial derivative with respect to y, the only place where y shows up is this y squared term, so the partial derivative with respect to y is just two y, and we're setting that equal to zero. And I chose a simple example where these partial derivative equations, you know, this one nicely only includes x, and this one nicely only includes y, but that's not always the case.
|
Warm up to the second partial derivative test.mp3
|
So that x to the fourth turns into four times x cubed minus four x squared, that becomes minus eight x, and then y, y just looks like a constant, so we're adding a constant, and nothing changes here. So the first requirement is that this portion is equal to zero. Now the second part, where we're looking for the partial derivative with respect to y, the only place where y shows up is this y squared term, so the partial derivative with respect to y is just two y, and we're setting that equal to zero. And I chose a simple example where these partial derivative equations, you know, this one nicely only includes x, and this one nicely only includes y, but that's not always the case. You can imagine if you intermingle the variables a little bit more, these will actually kind of intermingle x's and y's, and it'll be a harder thing to solve. But I just want something where we can actually start to find the solutions. So if we actually solve this system, this equation here, the two y equals zero, just gives us the fact that y has to equal zero.
|
Warm up to the second partial derivative test.mp3
|
And I chose a simple example where these partial derivative equations, you know, this one nicely only includes x, and this one nicely only includes y, but that's not always the case. You can imagine if you intermingle the variables a little bit more, these will actually kind of intermingle x's and y's, and it'll be a harder thing to solve. But I just want something where we can actually start to find the solutions. So if we actually solve this system, this equation here, the two y equals zero, just gives us the fact that y has to equal zero. So that's nice enough, right? And then the second equation, that four x cubed minus eight x equals zero, let's go ahead and rewrite that, where I'm gonna factor out one of the x's and factor out a four. So this is four x multiplied by x squared minus two has to equal zero.
|
Warm up to the second partial derivative test.mp3
|
So if we actually solve this system, this equation here, the two y equals zero, just gives us the fact that y has to equal zero. So that's nice enough, right? And then the second equation, that four x cubed minus eight x equals zero, let's go ahead and rewrite that, where I'm gonna factor out one of the x's and factor out a four. So this is four x multiplied by x squared minus two has to equal zero. So there's two different ways that this can equal zero, right? Either x itself is equal to zero, so that would be one solution, x is equal to zero, or x squared minus two is zero, which would mean x is plus or minus the square root of two. So we have x is plus or minus the square root of two.
|
Warm up to the second partial derivative test.mp3
|
So this is four x multiplied by x squared minus two has to equal zero. So there's two different ways that this can equal zero, right? Either x itself is equal to zero, so that would be one solution, x is equal to zero, or x squared minus two is zero, which would mean x is plus or minus the square root of two. So we have x is plus or minus the square root of two. So the solution to the system of equations, we know that no matter what, y has to equal zero, and then one of three different things can happen. X equals zero, x equals positive square root of two, or x equals negative square root of two. So this gives us three separate solutions.
|
Warm up to the second partial derivative test.mp3
|
So we have x is plus or minus the square root of two. So the solution to the system of equations, we know that no matter what, y has to equal zero, and then one of three different things can happen. X equals zero, x equals positive square root of two, or x equals negative square root of two. So this gives us three separate solutions. And I'll go ahead and write them down. Our three solutions as ordered pairs are gonna be either zero, zero, for when x is zero and y is zero. You have square root of two, zero, and then you have negative square root of two, zero.
|
Warm up to the second partial derivative test.mp3
|
So this gives us three separate solutions. And I'll go ahead and write them down. Our three solutions as ordered pairs are gonna be either zero, zero, for when x is zero and y is zero. You have square root of two, zero, and then you have negative square root of two, zero. These are the three different points, the three different values for x and y that satisfy the two requirements that both partial derivatives are zero. What that should mean on the graph then is when we look at those three different inputs, all of those have flat tangent planes. So the first one, zero, zero, if we kind of look above, I guess we're kind of inside the graph here, zero, zero, is right at the origin.
|
Warm up to the second partial derivative test.mp3
|
You have square root of two, zero, and then you have negative square root of two, zero. These are the three different points, the three different values for x and y that satisfy the two requirements that both partial derivatives are zero. What that should mean on the graph then is when we look at those three different inputs, all of those have flat tangent planes. So the first one, zero, zero, if we kind of look above, I guess we're kind of inside the graph here, zero, zero, is right at the origin. And we can see, just looking at the graph, that that's actually a saddle point. You know, this is neither a local maximum nor a local minimum. It doesn't look like a peak or like a valley.
|
Warm up to the second partial derivative test.mp3
|
So the first one, zero, zero, if we kind of look above, I guess we're kind of inside the graph here, zero, zero, is right at the origin. And we can see, just looking at the graph, that that's actually a saddle point. You know, this is neither a local maximum nor a local minimum. It doesn't look like a peak or like a valley. And then the other two, where we kind of move along the x-axis, and I guess it turns out that this point here is directly below x equals positive square root of two, and this other minimum is directly below x equals negative square root of two. I wouldn't have been able to guess that just looking at the graph, but we just figured it out. And we can see visually that both of those are local minima.
|
Warm up to the second partial derivative test.mp3
|
It doesn't look like a peak or like a valley. And then the other two, where we kind of move along the x-axis, and I guess it turns out that this point here is directly below x equals positive square root of two, and this other minimum is directly below x equals negative square root of two. I wouldn't have been able to guess that just looking at the graph, but we just figured it out. And we can see visually that both of those are local minima. But the question is, how could we have figured that out once we find these solutions if you didn't have the graph to look at immediately? How could you have figured out that zero, zero corresponds to a saddle point and that both of these other solutions correspond to local minima? Well, following the idea of the single variable second derivative test, what you might do is take the second partial derivative of our function and see how that might influence concavity.
|
Warm up to the second partial derivative test.mp3
|
And we can see visually that both of those are local minima. But the question is, how could we have figured that out once we find these solutions if you didn't have the graph to look at immediately? How could you have figured out that zero, zero corresponds to a saddle point and that both of these other solutions correspond to local minima? Well, following the idea of the single variable second derivative test, what you might do is take the second partial derivative of our function and see how that might influence concavity. For example, if we take the second partial derivative with respect to x, and I'll try to squeeze it up here, second partial derivative of the function with respect to x, we're doing that twice, we're taking the second derivative of this expression with respect to x, so we bring down that three, and that's gonna become 12, because three times four times x squared, 12 times x squared minus eight, minus eight. So what this means, whoop, kinda moved that around, what this means in terms of the graph is that if we move purely in the x direction, which means we kind of cut it with a plane representing a constant y value, and we look at the slice of the graph itself, this expression will tell us the concavity at every given point. So these bottom two points here correspond to plus and minus x equals the square root of two.
|
Warm up to the second partial derivative test.mp3
|
Well, following the idea of the single variable second derivative test, what you might do is take the second partial derivative of our function and see how that might influence concavity. For example, if we take the second partial derivative with respect to x, and I'll try to squeeze it up here, second partial derivative of the function with respect to x, we're doing that twice, we're taking the second derivative of this expression with respect to x, so we bring down that three, and that's gonna become 12, because three times four times x squared, 12 times x squared minus eight, minus eight. So what this means, whoop, kinda moved that around, what this means in terms of the graph is that if we move purely in the x direction, which means we kind of cut it with a plane representing a constant y value, and we look at the slice of the graph itself, this expression will tell us the concavity at every given point. So these bottom two points here correspond to plus and minus x equals the square root of two. So if we go over here and think about the case where x equals the square root of two, and we plug that into the expression, what are we gonna get? Well, we're gonna get 12 multiplied by, if x equals square root of two, then x squared is equal to two, so that's 12 times two minus eight, so that's 24 minus eight, and we're gonna get 16, which is a positive number, which is why you have positive concavity at each of these points. So as far as the x direction is concerned, it feels like, ah, yes, both of these have positive concavity, so they should look like local minima.
|
Warm up to the second partial derivative test.mp3
|