Another fun one is a vector field, where every input point is associated with some kind of vector, which is the output of the function there. So this would be a function with a two-dimensional input and a two-dimensional output, because each of these outputs is a two-dimensional vector. And the fun part with these guys is that you can imagine a fluid flowing: here's a bunch of droplets, like water, and they flow along the field, and that actually turns out to give insight about the underlying function. It's one of those beautiful aspects of multivariable calc, and we'll get lots of exposure to that. Again, I'm just sort of zipping through to whet your appetite; don't worry if this doesn't make sense immediately. And one of my all-time favorite ways to think about multivariable functions is to just take the input space, in this case points in two-dimensional space, and watch them move to their outputs. So this is going to be a function that also outputs in two dimensions, and I'm just going to watch every single point move over to where it's supposed to go.
Multivariable functions Multivariable calculus Khan Academy.mp3
These can be kind of complicated to look at or to think about at first, but as you gain a little bit of thought and exposure to them, they're actually very nice, and they provide a beautiful connection with linear algebra. A lot of you out there, if you're studying multivariable calculus, are either about to study linear algebra, or just have, or maybe you're doing it concurrently, but understanding functions as transformations is going to be a great way to connect the two. So with that, I'll stop jabbering through these topics really quickly, and in the next few videos I'll actually go through them in detail, and hopefully you can get a good feel for what multivariable functions can actually feel like.
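The "droplets flowing along a vector field" picture can be played with directly. Here is a minimal sketch (not from the video; the rotation field v(x, y) = (-y, x) and the helper names are my own choices) that advects one droplet through a vector field with small Euler steps:

```python
import math

def v(x, y):
    """A sample 2D vector field (hypothetical, not from the video):
    a pure counterclockwise rotation about the origin."""
    return (-y, x)

def advect(p, steps, dt):
    """Move a 'water droplet' along the flow with simple Euler steps."""
    x, y = p
    for _ in range(steps):
        vx, vy = v(x, y)
        x, y = x + vx * dt, y + vy * dt
    return (x, y)

# A droplet starting at (1, 0) should sweep a quarter turn to near (0, 1)
# after flowing for time t = pi/2.
dt = 1e-3
steps = int((math.pi / 2) / dt)
x, y = advect((1.0, 0.0), steps, dt)
```

Watching many such droplets at once is exactly the fluid-flow animation described above.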
I'll call this region R sub three, R with a subscript three. It's going to be the set of all points in three dimensions, the set of all x's, y's, and z's, such that the x, z pairs are a member of a domain, which I'll call D three, D sub three, and, let's put a comma here, y is going to vary between two surfaces that are functions of x and z. So y is going to be bounded from below by a surface h1 of x, z and bounded from above by a surface h2 of x, z; that is, h1 of x, z is less than or equal to y, which is less than or equal to h2 of x, z. And once again, let's close our set notation. So let's think about whether some of the regions that we already saw were type one and type two are also type three, and then think about what would not be a type three region. So let's go to this sphere. What could be our domain?
Type III regions in three dimensions Divergence theorem Multivariable Calculus Khan Academy.mp3
Well, the domain is a set of x, z pairs, so it's going to be in the xz plane. So over here, our domain could be this region right over here in the xz plane, so I'll color it in. It could be that region right over there. And then the lower bound on y will be the part of the sphere that is behind it in this direction right over here; this might be a little bit hard to visualize since I'm redrawing on top of it. And then the upper bound on y is going to be this side right over here. So all of this is now going to be green. Let me redraw the sphere just to make it clear; let me just draw another coordinate axis. The back side of the sphere in the y direction, so I guess let's think of it this way. So this is the hemisphere mark.
This is the halfway point for my sphere, and once again, that's kind of the boundary of our domain. Let me do the front side first. The front side, y's upper bound, would be h2: all of this business right over here that I'm coloring in, the side that is facing in that direction. Let me see how well I can color it. So h2 is all of this stuff out on this side of the sphere. And then h1 is the lower bound on y, so it's going to be that side right over there. I know I could draw it a little bit neater, but hopefully you get the point; it would be all of that side. And then y can vary between those two and essentially fill up the region. So the sphere is a type 1, type 2, and type 3 region; it meets all of the constraints.
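To make the set-builder definition concrete, here is a sketch (the function names and the radius value are my own illustrative choices) of the solid ball as a type III region: the domain D3 is a disk in the xz plane, and y is sandwiched between the back hemisphere h1 and the front hemisphere h2:

```python
import math

R = 2.0  # radius of the ball (assumed example value)

def in_domain_D3(x, z):
    """The domain D_3: a disk in the xz plane."""
    return x * x + z * z <= R * R

def h1(x, z):
    """Lower bound on y: the back hemisphere."""
    return -math.sqrt(max(R * R - x * x - z * z, 0.0))

def h2(x, z):
    """Upper bound on y: the front hemisphere."""
    return math.sqrt(max(R * R - x * x - z * z, 0.0))

def in_region_R3(x, y, z):
    """(x, y, z) is in R_3 iff (x, z) is in D_3 and h1(x,z) <= y <= h2(x,z)."""
    return in_domain_D3(x, z) and h1(x, z) <= y <= h2(x, z)
```

Every point of the ball, and no point outside it, satisfies this membership test, which is exactly what the type III description asserts.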
We can make the exact same argument with the cylinder; it can be defined the same way. The cylinder, at least the way it was oriented there (actually, any cylinder), will also be a type 3 region, by the exact same argument. So let me draw my axes again. And here, to make the cylinder that we've been making, our domain could be a rectangular region in the xz plane, just like that. And then the lower bound on y could be the side of the cylinder facing in that direction right over there; that is the lower bound. And the upper bound on y could be the side facing in the opposite direction, this side right over there. So once again, this is also a type 3 region.
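The cylinder version can be sketched the same way (again with hypothetical names and assumed example dimensions): a rectangular domain in the xz plane, with y bounded by the two curved sides of the cylinder:

```python
import math

R, H = 1.0, 3.0  # radius and height of the cylinder (assumed example values)

def in_domain(x, z):
    """Rectangular domain in the xz plane: |x| <= R, 0 <= z <= H."""
    return abs(x) <= R and 0.0 <= z <= H

def y_lower(x, z):
    """Lower bound on y: the back side of the cylinder."""
    return -math.sqrt(max(R * R - x * x, 0.0))

def y_upper(x, z):
    """Upper bound on y: the front side of the cylinder."""
    return math.sqrt(max(R * R - x * x, 0.0))

def in_cylinder(x, y, z):
    """Type III membership test for the cylinder."""
    return in_domain(x, z) and y_lower(x, z) <= y <= y_upper(x, z)
```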
By the same logic, this hourglass right over here can also be a type 3 region. The front side would be this front side right over here; it would be all of that. And then the back side, when you think of it in terms of y, would be that part right over there. The boundary of the domain would be this cross section right over here. The lower bound on y would be the back half of this hourglass, and the upper bound on y would be the front half. Let me use that green color, since I've been using it: the upper bound on y would be this right half right over here. So what would not be a type 3 region? Well, what if we just rotated this around? Let me draw something that is not a type 3 region, just to show you that this definition does not include everything. Something that would not be a type 3 region, for the same argument as we've seen before with type 1 and type 2 regions, is an hourglass oriented along the y-axis (it doesn't actually have to be centered on the y-axis). Now all of a sudden, y can't be expressed as lying between two surfaces that are functions of x and z; you would have to break the region up in order to do that.
So let's compute the two-dimensional curl of a vector field. The one I have in mind will have an x component of y cubed minus nine times y, and a y component of x cubed minus nine times x. And you can kind of see I'm just a sucker for symmetry when I choose examples. I showed in the last video how the two-dimensional curl, the 2D curl of a vector field v, which is a function of x and y, is equal to the partial derivative of q, that second component, with respect to x, minus the partial derivative of p, that first component, with respect to y. And I went through the reasoning for why this is true, but just real quick, in a nutshell: this partial q, partial x is there because, as you move from left to right, vectors going from having a small or even negative y component to a positive y component corresponds to counterclockwise rotation. And similarly, this partial p, partial y is there because vectors whose x component goes from positive to zero to negative, or is otherwise decreasing, as you increase the y value also correspond to counterclockwise rotation.
2d curl example.mp3
So taking the negative of that will tell you whether or not changes in the y direction around your point correspond with counterclockwise rotation. So in this particular case, when we start evaluating that, we start by looking at the partial of q with respect to x. We're looking at the second component and taking its partial derivative with respect to x, and in this case nothing but x's show up, so it's just like taking an ordinary derivative, and you'll get three x squared minus nine. That's the first part. And then we subtract off the partial derivative of p with respect to y. So we go up here, and it's entirely in terms of y, and by the symmetry it's the same calculation: three y squared, the derivative of y cubed, minus nine. So this right here is our two-dimensional curl. And let's go ahead and interpret what this means.
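The formula just derived can be sanity-checked numerically (a sketch, not from the video; the helper names are mine): estimate dq/dx and dp/dy with central differences and compare against the analytic answer, three x squared minus nine, minus the quantity three y squared minus nine:

```python
def P(x, y):
    """x component of the field: y^3 - 9y."""
    return y**3 - 9 * y

def Q(x, y):
    """y component of the field: x^3 - 9x."""
    return x**3 - 9 * x

def curl_2d(x, y, h=1e-5):
    """2D curl = dQ/dx - dP/dy, estimated with central differences."""
    dQ_dx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dP_dy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    return dQ_dx - dP_dy

def curl_2d_analytic(x, y):
    """The answer computed in the transcript."""
    return (3 * x**2 - 9) - (3 * y**2 - 9)
```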
And in fact, this vector field that I showed you is exactly the one that I used when I was animating the intuition behind curl to start off with, where there's positive curl here and here, but negative curl up in these clockwise-rotating areas. So we can actually see why that's the case here, and why I chose this specific function for something that'll have lots of good curl examples. If we look over in the region where there should be positive curl, that's where x is equal to three and y is equal to zero. So if x is equal to three and y is equal to zero, this whole formula becomes three times three squared minus nine, minus the quantity three times y squared, which is just zero because y is equal to zero, minus nine. So this first part is three times nine, which is 27, minus nine, giving us 18. And then we're subtracting off a negative nine, so that's actually plus nine. So this whole thing is 27; it's quite positive. And because this is a positive number, when we go over here and look at the fluid flow, you have a counterclockwise rotation in that region. Whereas, let's say that we did all of this, but instead of x equals three and y equals zero, we looked at x equals zero and y equals three. Let's take a look at where that is: x is zero, and the tick marks here are each one half, so y equals three is right here, in that clockwise-rotation area. So if I play this, we've got the clockwise rotation, and we're expecting a negative value. Let's see if that's what we get. We evaluate this whole function again, but plugging in zero for x: three times zero squared minus nine, minus the quantity three times three squared minus nine. This first part is zero minus nine, so that becomes negative nine, and over here we're subtracting off 27 minus nine, which is 18. So the whole thing equals negative nine minus 18, which is negative 27. And because this is negative, that corresponds to the clockwise rotation that we have going on in that region. And if you went and plugged in a bunch of different points, you can perhaps see how if you plug in zero for x and zero for y, those nines cancel out, which is why over here there's no general rotation around the origin when x and y are both equal to zero. And you can understand the general rotation around every single point just by taking this formula that we found for 2D curl and plugging in the corresponding values of x and y.
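The "curl measures local rotation of the fluid" interpretation can also be checked directly (a sketch; the function names are my own): by Green's theorem, the circulation of the field around a small circle, divided by the enclosed area, should approach the 2D curl at the center, so about 27 at (3, 0) and about negative 27 at (0, 3):

```python
import math

def field(x, y):
    """The video's vector field: (y^3 - 9y, x^3 - 9x)."""
    return (y**3 - 9 * y, x**3 - 9 * x)

def circulation_density(cx, cy, r=1e-3, n=1000):
    """Approximate the 2D curl at (cx, cy) as the circulation of the field
    around a small circle of radius r, divided by the enclosed area."""
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        x = cx + r * math.cos(t)
        y = cy + r * math.sin(t)
        fx, fy = field(x, y)
        # velocity of the circle's parameterization (tangent direction)
        dx_dt = -r * math.sin(t)
        dy_dt = r * math.cos(t)
        total += (fx * dx_dt + fy * dy_dt) * dt
    return total / (math.pi * r * r)
```

A positive value means the droplets circulate counterclockwise around that point, a negative value clockwise, matching the animation.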
Now, we've talked about Lagrange multipliers; this is a highly related concept. In fact, it's not really teaching anything new; it's just repackaging stuff that we already know. So to remind you of the setup: this is going to be a constrained optimization problem. We'll have some kind of multivariable function, f of x, y, and the one I have pictured here is, let's see, x squared times e to the y, times y. So what I have shown here is a contour line for this function. That is, we say: what happens if we set this equal to some constant, and we ask about all values of x and y such that this holds, such that this function outputs that constant? And if I choose a different constant, then that contour line could look a little bit different.
The Lagrangian.mp3
It's kind of nice that it has similar shapes. So that's the function, and we're trying to maximize it. And of course, it's not just that; the reason we call it a constrained optimization problem is that there's some kind of constraint, some other function, g of x, y, in this case x squared plus y squared. And we want to say that this has to equal some specific amount; in this case, I'm going to set it equal to four. So we say you can't look at just any x, y to maximize this function; you are limited to the values of x and y that satisfy this property. And I talked about this in the last couple videos, and the cool thing that we found was that you look through the various different contour lines of f, and the maximum will be achieved when that contour line is just perfectly tangent to this contour of g. A pretty classic example for what these sorts of things could mean, or how they're used in practice, is if this was, say, a revenue function for some kind of company: you're modeling your revenues based on different choices you could make running that company, and the constraint would be, let's say, a budget. So I'm just going to write b for budget here. You're trying to maximize revenues, and you have some sort of dollar limit for what you're willing to spend. These, of course, are just made-up functions; you'd never have a budget that looks like a circle, or this kind of random configuration for your revenue, but in principle you know what I mean, right?
So you're trying to maximize revenues, and then you have some sort of dollar limit for what you're willing to spend. And these, of course, are just kind of made-up functions. You'd never have a budget that looks like a circle in this kind of random configuration for your revenue. But in principle, you know what I mean, right? So the way that we took advantage of this tangency property, and I think this is pretty clever. Let me just kind of redraw it over here. I'm looking at the point where the two functions are just tangent to each other, is that the gradient, the gradient vector for the thing we're maximizing, which in this case is r, is gonna be parallel, or proportional, to the gradient vector of the constraint, which in this case is b.
The Lagrangian.mp3
But in principle, you know what I mean, right? So the way that we took advantage of this tangency property, and I think this is pretty clever. Let me just kind of redraw it over here. I'm looking at the point where the two functions are just tangent to each other, is that the gradient, the gradient vector for the thing we're maximizing, which in this case is r, is gonna be parallel, or proportional, to the gradient vector of the constraint, which in this case is b. It's gonna be proportional to the gradient of the constraint. And what this means is that if we were going to solve a set of equations, what you set up is you compute that gradient of r, which will involve two different partial derivatives, and you set it equal, not to the gradient of b, because it's not necessarily equal to the gradient of b, but it's proportional, with some kind of proportionality constant lambda. Now let me, that's kind of a squirrely lambda.
The Lagrangian.mp3
Lambda. That one doesn't look good either, does it? Why are lambdas so hard to draw?
All right, that looks fine. So the gradient of the revenue is proportional to the gradient of the budget. And we did a couple examples of solving this kind of thing.
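To make the parallel-gradient condition concrete, here's a small sketch with made-up stand-ins (the video's actual functions aren't given numerically): R(x, y) = x·y for the revenue and B(x, y) = x² + y² for the budget, with b = 4. At the constrained maximum of this toy example, x = y = √2 with λ = 1/2, and the gradient of R really is λ times the gradient of B.

```python
import math

# Hypothetical stand-ins, not the functions from the video:
# R(x, y) = x*y, B(x, y) = x**2 + y**2, budget value b = 4.
def grad_R(x, y):
    return (y, x)            # (dR/dx, dR/dy)

def grad_B(x, y):
    return (2 * x, 2 * y)    # (dB/dx, dB/dy)

x = y = math.sqrt(2)         # constrained maximum of this toy problem
lam = 0.5                    # the proportionality constant lambda

print(grad_R(x, y))                           # (1.414..., 1.414...)
print(tuple(lam * g for g in grad_B(x, y)))   # the same vector
```

The two printed vectors agree, which is exactly the tangency condition for this made-up pair of functions.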
This gives you two separate equations from the two partial derivatives, and then you use this right here, this budget constraint, as your third equation. And the Lagrangian, the point of this video, this Lagrangian function, is basically just a way to package up this equation along with this equation into a single entity. So it's not really adding new information, and if you're solving things by hand, it doesn't really do anything for you.
But what makes it nice is that it's something easier to hand a computer, and I'll show you what I mean. So I'm gonna define the Lagrangian itself, which we write with this kind of funky-looking script L. And it's a function with the same inputs that your revenue function, or the thing that you're maximizing has, along with lambda, along with that Lagrange multiplier. And the way that we define it, and I'm gonna need some extra room, so I'm gonna say it's equal to, and kind of define it down here, the revenue function, or whatever it is that you're maximizing, the function that you're maximizing, minus lambda, that Lagrange multiplier, so that's just another input to this new function that we're defining, multiplied by the constraint function, in this case, b, evaluated at x, y, minus whatever that constraint value is.
In this case, I put in four, so you'd write minus four. If we wanted to be more general, maybe we would write b for whatever your budget is. So over here, you're subtracting off little b.
So this here is a new multivariable function, right? It's something where you could input x, y, and lambda, and just kind of plug it all in, and you'd get some kind of value. And remember, b, in this case, is a constant, so I'll go ahead and write that.
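As a sketch of "plug it all in and get a value", here is the Lagrangian written as an ordinary Python function, again using the made-up stand-ins R = x·y, B = x² + y², and b = 4 (assumptions for illustration, not the video's own functions):

```python
def R(x, y):
    return x * y           # thing we're maximizing (hypothetical)

def B(x, y):
    return x**2 + y**2     # constraint function (hypothetical)

b = 4                      # the constant constraint value

def lagrangian(x, y, lam):
    # L(x, y, lambda) = R(x, y) - lambda * (B(x, y) - b)
    return R(x, y) - lam * (B(x, y) - b)

print(lagrangian(1.0, 1.0, 0.5))  # 1 - 0.5*(2 - 4) = 2.0
```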
This right here is not considered a variable, this is just some constant. Your variables are x, y, and lambda. And this would seem like a totally weird and random thing to do, if you just saw it out of context, or if it was unmotivated, but what's kind of neat, and we'll go ahead and work through this right now, is that when you take the gradient of this function, called the Lagrangian, and you set it equal to zero, that's gonna encapsulate all three equations that you need.
And I'll show you what I mean by that. So let's just remember the gradient of L, that's a vector, it's got three different components, since L has three different inputs. You're gonna have the partial derivative of L with respect to x, you're gonna have the partial derivative of L with respect to y, and then finally, the partial derivative of L with respect to lambda, our Lagrange multiplier, which we're considering an input to this function.
And remember, whenever we write that a vector equals zero, really we mean the zero vector. Often you'll see it in bold, if it's in a textbook, but what we're really saying is, we set those three different functions, those three different partial derivatives, all equal to zero. So this is just a nice, closed-form, compact way of saying, set all of its partial derivatives equal to zero.
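Here's a hedged numerical check of "gradient of L equals the zero vector": central differences approximate the three partials, and at the solution of the toy problem (x = y = √2, λ = 1/2, with the same assumed stand-ins R = x·y, B = x² + y², b = 4) all three vanish.

```python
import math

# Same hypothetical example as before, written inline for self-containment.
def lagrangian(x, y, lam):
    return x * y - lam * (x**2 + y**2 - 4)

def grad_lagrangian(x, y, lam, h=1e-6):
    # one central-difference partial derivative per input
    return (
        (lagrangian(x + h, y, lam) - lagrangian(x - h, y, lam)) / (2 * h),
        (lagrangian(x, y + h, lam) - lagrangian(x, y - h, lam)) / (2 * h),
        (lagrangian(x, y, lam + h) - lagrangian(x, y, lam - h)) / (2 * h),
    )

# At the constrained maximum x = y = sqrt(2), lambda = 1/2,
# all three components should vanish (up to floating-point error).
g = grad_lagrangian(math.sqrt(2), math.sqrt(2), 0.5)
print(g)  # all three components roughly zero
```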
And let's go ahead and think about what those partial derivatives actually are. So this first one, the partial with respect to x, partial derivative of the Lagrangian with respect to x. It's kind of fun, you know, you have all these curly symbols, the curly D, the curly L, it makes it look like you're doing some truly advanced math.
But really, it's just kind of artificial fanciness, right? But anyway, so we take the partial derivative with respect to x, and what that equals is, well, it's whatever the partial derivative of R with respect to x is, minus, and then lambda, from x's perspective, lambda just looks like a constant. So it's gonna be lambda.
And then this inside the parentheses, the partial derivative of that with respect to x, well, it's gonna be whatever the partial derivative of B is with respect to x. But subtracting off that constant, that doesn't change the derivative. So this right here is the partial derivative of the Lagrangian with respect to x.
Now, if you set that equal to zero, and I know I've kind of run out of room on the right here, but if you set that equal to zero, that's the same as just saying that the partial derivative of R with respect to x equals lambda times the partial derivative of B with respect to x. And if you think about what's gonna happen when you unfold this property, that the gradient of R is proportional to the gradient of B, written up here, that's just the first portion of this, right?
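That x-partial identity holds at every point, not just at the solution. A quick numerical check, again with the assumed stand-ins R = x·y, B = x² + y², b = 4 (so ∂R/∂x = y and ∂B/∂x = 2x):

```python
def lagrangian(x, y, lam):
    # same made-up example as before: R = x*y, B = x**2 + y**2, b = 4
    return x * y - lam * (x**2 + y**2 - 4)

def dL_dx(x, y, lam, h=1e-6):
    # central-difference approximation of the x-partial
    return (lagrangian(x + h, y, lam) - lagrangian(x - h, y, lam)) / (2 * h)

# dL/dx should equal dR/dx - lambda * dB/dx = y - lam * 2x at any point.
x, y, lam = 1.3, 0.7, 2.0
print(dL_dx(x, y, lam), y - lam * 2 * x)  # both approximately -4.5
```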
If we're setting the gradients equal, then the first component of that is to say that the partial derivative of R with respect to x is equal to lambda times the partial derivative of B with respect to x. And then if you do this for y, if we take the partial derivative of this Lagrangian function with respect to y, it's very similar, right? It's gonna be, well, you just take the partial derivative of R with respect to y. In fact, it all looks just identical.
Whatever R is, you take its partial derivative with respect to y, and then we subtract off. Lambda looks like a constant as far as y is concerned. And then that's multiplied by, well, what's the partial derivative of this term inside the parentheses with respect to y?
Well, it's the partial of B with respect to y. And again, if you imagine setting that equal to zero, that's gonna be the same as setting this partial derivative term equal to lambda times this partial derivative term, right? You kind of just bring one to the other side.
So this second component of our Lagrangian equals zero equation is just the second function that we've seen in a lot of these examples that we've been doing, where you set one of the gradient vectors proportional to the other one. And the only real difference here from stuff that we've seen already, and even then it's not that different, is what happens when we take the partial derivative of this Lagrangian with respect to lambda, now I'll go ahead and give it that kind of green lambda color here. Well, when we take that partial derivative, if we kind of look up at the definition of the function, R, R never has a lambda in it, right?
It's purely a function of x and y. So that looks just like a constant when we're differentiating with respect to lambda. So that's just gonna be zero when we take its partial derivative.
And then this next component, b of x, y minus b, all of that just looks like a constant as far as lambda is concerned, right? There's x's, there's y, there's this constant b, but none of these things have lambdas in them. So when we take the partial derivative with respect to lambda, this just looks like some big constant times lambda itself.
So what we're gonna get is, I guess, we're subtracting off, right? We're up here kind of writing a minus sign. We're subtracting off all the stuff that was in those parentheses, b of x, y minus b, that constant.
And this whole thing, if we set that whole thing equal to zero, well, that's pretty much the same as setting b of x, y minus b equal to zero. And that's really just the same as saying, hey, we're setting b of x, y equal to that little b, right? Setting this partial derivative of the Lagrangian with respect to the Lagrange multiplier equal to zero boils down to the constraint, right?
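The lambda-partial is worth seeing numerically. With the same assumed toy functions (R = x·y, B = x² + y², b = 4), ∂L/∂λ works out to −(B(x, y) − b) no matter what lambda is, so setting it to zero is exactly the constraint:

```python
def lagrangian(x, y, lam):
    # hypothetical stand-ins again: R = x*y, B = x**2 + y**2, b = 4
    return x * y - lam * (x**2 + y**2 - 4)

def dL_dlam(x, y, lam, h=1e-6):
    # central-difference partial with respect to lambda
    return (lagrangian(x, y, lam + h) - lagrangian(x, y, lam - h)) / (2 * h)

# At (1, 1): -(B(1, 1) - b) = -(1 + 1 - 4), approximately 2,
# and the value doesn't depend on which lambda we plug in.
print(dL_dlam(1.0, 1.0, 0.7))
print(dL_dlam(1.0, 1.0, 3.0))
```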
The third equation that we need to solve. So in that way, setting the gradient of this Lagrangian function equal to zero is just a very compact way of packaging three separate equations that we need to solve the constraint optimization problem. And I'll emphasize that in practice, if you actually see a function for r for the thing that you're maximizing and a function for the budget, it's much better, I think, to just directly think about these parallel gradients and kind of solve it from there.
Because if you construct the Lagrangian and then compute its gradient, all you're really doing is repackaging it up only to unpackage it again. But the point of this, kind of the reason that this is a very useful construct, is that computers often have really fast ways of solving things like this, things like the gradient of some function equals zero. And the reason is because that's how you solve unconstrained maximization problems, right?
This is very similar to as if we just looked at this function L out of context and were asked, hey, what is its maximum value? What are the critical points that it has? And you set its gradient equal to zero.
So kind of the whole point of this Lagrangian is that it turns our constrained optimization problem involving R and B and this new made-up variable lambda into an unconstrained optimization problem where we're just setting the gradient of some function equal to zero. So computers can often do that really quickly. So if you just hand the computer this function, it will be able to find you an answer.
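Here's a sketch of what "hand the computer this function" can look like: Newton's method on the system ∇L = 0, for the same made-up example (R = x·y, B = x² + y², b = 4). The functions, the starting guess, and the iteration count are all assumptions for illustration, not anything from the video.

```python
def grad_L(v):
    x, y, lam = v
    return [y - 2 * lam * x,          # dL/dx
            x - 2 * lam * y,          # dL/dy
            -(x**2 + y**2 - 4)]       # dL/dlambda

def jacobian(v):
    x, y, lam = v
    return [[-2 * lam, 1.0, -2 * x],
            [1.0, -2 * lam, -2 * y],
            [-2 * x, -2 * y, 0.0]]

def solve3(A, rhs):
    # Gaussian elimination with partial pivoting on a 3x3 system
    A = [row[:] for row in A]
    rhs = rhs[:]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (rhs[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

v = [1.0, 2.0, 1.0]   # arbitrary starting guess
for _ in range(50):
    # Newton step: solve J * step = -grad, then move by step
    step = solve3(jacobian(v), [-g for g in grad_L(v)])
    v = [vi + si for vi, si in zip(v, step)]

print(v)  # converges to roughly [1.4142, 1.4142, 0.5] for this guess
```

Note the computer never sees the words "constraint" or "parallel gradients"; it's just root-finding on one vector equation, which is the cleaner packaging the transcript is describing.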
Whereas it's harder to say, hey, computer, I want you to think about when the gradients are parallel and also consider this constraint function. It's just kind of a cleaner way to package it all up. So with that, I'll see you next video where I'm gonna talk about the significance of this lambda term, how it's not just a ghost variable, but it actually has a pretty nice interpretation for a given constraint problem.
This is on the XY plane. The TS plane should just be some separate space over here, and we're imagining moving that separate space over into three dimensions, but that's harder to animate, so I'm just not gonna do it, and I'm gonna instead keep things inside the XY plane here, and we're thinking about the squares being T and S ranging each from zero to three, and what I said for partial derivative with respect to T is you imagine the line that represents movement in the T direction, and you see how that line gets mapped as all of the points move to their corresponding output, and the partial derivative vector gives you a certain tangent vector to the curve representing that line which corresponds to movement in the T direction, and the longer that is, the faster the movement, the more sensitive it is to nudges in the T direction, so in the S direction, let's say we were to take the partial derivative with respect to S, so I'll kind of clear this up here, also clear up this guy, and if you said instead, what if we were doing it with respect to S, right? Partial derivative of V, the vector-valued function, with respect to S, and well, you do something very similar. You would say, okay, what is the line that corresponds to movement in the S direction, and the way I've drawn it, it's always gonna be perpendicular because we're in the TS plane, the T axis is perpendicular to the S axis, and in this case, this line represents T equals one, right?
You're saying T constantly equals one, but you're letting S vary, and if you see how that line maps as you move everything from the input space over to the corresponding points in the output space, that line tells you what happens as you're varying the S value of the input, and I guess it kind of starts curving this way, and then it curves very much up and kind of goes off into the distance there, and again, the grid lines here really help because every time that you see the grid lines intersect, one of the lines represents movement in the T direction and the other represents movement in the S direction, and for partial derivatives, we think very similarly. You think of that partial S as representing, whoop, whoop, zoom on back here, that partial S you think of as representing a tiny movement in the S direction, just a little smidge of a nudge, somehow nudging that guy along, and then the corresponding nudge you look for in the output space, you say, okay, if we nudge the input that much and we go over to the output, and maybe that tiny nudge corresponds with one that's like three times bigger, I don't know, but it looked like it stretched things out, so that tiny nudge might turn into something that's still quite small, but maybe three times bigger, but it's a vector. What you do is you think of that vector as being your partial V, and you scale it by whatever the size of that partial S was, right?
Partial derivative of a parametric surface, part 2.mp3
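The nudge picture can be sketched numerically. I'm assuming the surface from the video is v(t, s) = (t − s², s·t, t·s² − s·t²), which is my reading of the derivative computation later in the transcript, so treat the exact formula as an assumption. Dividing the output nudge by the size of a tiny input nudge in the S direction approximates the partial derivative vector:

```python
# Assumed surface (hypothetical reconstruction from the transcript):
def v(t, s):
    return (t - s**2, s * t, t * s**2 - s * t**2)

def nudge_ratio(t, s, ds=1e-6):
    # output nudge divided by the size of the input nudge in the s direction
    before = v(t, s)
    after = v(t, s + ds)
    return tuple((a - b) / ds for a, b in zip(after, before))

print(nudge_ratio(1.0, 1.0))  # close to (-2, 1, 1)
```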
So the result that you get is a tangent vector that's not puny, not a tiny nudge, but is actually a sizable tangent vector, and it's gonna kind of correspond to the rate at which tiny changes, not just tiny changes, but the rate at which changes in S cause movement in the output space.
So let's actually compute it for this case, just kind of get some good practice computing things, and if we look up here, the T value, which used to be considered a variable when we were doing it with respect to T, but now that T value looks like a constant, so its derivative is zero, then negative S squared with respect to S has a derivative of negative two S. S T, S looks like a variable, T looks like a constant, the derivative is just that constant, T. Down here, T S squared, T looks like a constant, S looks like a variable, so two times T times S, and then over here, we're subtracting off, S is the variable, T squared looks like a constant, so that constant. And let's say we plug in the value, one, one, right?
This red dot corresponds to one, one, so what we would get here, S is equal to one, so that's negative two, T is equal to one, so that's one, then two times one times one, oh, let's see, I'll write it, two times one times one minus one squared, minus one squared is gonna correspond to one, that's two minus one. So what we would expect for the tangent vector, the partial derivative vector, is the X component should be negative, and then the Y and Z components should each be positive. And if we go over and we take a look at what the movement along the curve actually is, that lines up, right?
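The analytic partial derivative from that computation can be written out directly, again for the assumed surface v(t, s) = (t − s², s·t, t·s² − s·t²) (the formula is my assumption about what's on screen):

```python
# d/ds of each component of the assumed surface:
def dv_ds(t, s):
    return (-2 * s,            # d/ds of t - s^2
            t,                 # d/ds of s*t
            2 * t * s - t**2)  # d/ds of t*s^2 - s*t^2

print(dv_ds(1.0, 1.0))  # (-2.0, 1.0, 1.0): x negative, y and z positive
```

This matches the values worked out in the transcript at the point (1, 1): negative two, one, and two minus one.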
Because you're moving, as you kind of zip along this curve, you're moving to the left, so the X component of the partial derivative should be negative, but you're moving upwards as far as Y is concerned, and you can also kind of see that the leftward movement is kind of twice as fast as the upward motion, the slope favors the X direction, and then as far as the Z component is concerned, you are in fact moving up. And maybe you could say, well, how do you know which way you're moving? Are you moving that way, or is everything switched the other way around?
And the benefit of animation here is we can say, ah, as S is ranging from zero up to three, this is the increasing direction, and you just keep your eye on what that direction is as we move things about, and that increasing direction does kind of correspond with moving along the curve this way. So you get a tangent vector in the other way. And one kind of nice thing about this then is the two different partial derivative vectors that we found, each one of them you could say is a tangent vector to the surface, right?
Are you moving that way, or is everything switched the other way around? And the benefit of animation here is we can say, ah, as S is ranging from zero up to three, this is the increasing direction, and you just keep your eye on what that direction is as we move things about, and that increasing direction does kind of correspond with moving along the curve this way. So you get a tangent vector in the other way. And one kind of nice thing about this then is the two different partial derivative vectors that we found, each one of them you could say is a tangent vector to the surface, right? So the one that was a partial derivative with respect to T over here, is kind of goes in one direction, and the other one, it kind of gives you a different notion of what a tangent vector on the surface could be. And various different, you could have a notion of directional derivative too that kind of combines these in various ways, and that'll get you all the different ways that you can have a vector tangent to the surface. And later on, I'll talk about things like tangent planes, if you wanna express what a tangent plane is, and you kind of think of that as being defined in terms of two different vectors.
Partial derivative of a parametric surface, part 2.mp3
And one kind of nice thing about this then is the two different partial derivative vectors that we found, each one of them you could say is a tangent vector to the surface, right? So the one that was a partial derivative with respect to T over here kind of goes in one direction, and the other one kind of gives you a different notion of what a tangent vector on the surface could be. And you could have a notion of directional derivative too that kind of combines these in various ways, and that'll get you all the different ways that you can have a vector tangent to the surface. And later on, I'll talk about things like tangent planes, if you wanna express what a tangent plane is, and you kind of think of that as being defined in terms of two different vectors.
Partial derivative of a parametric surface, part 2.mp3
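The idea that the tangent plane is defined by the two partial-derivative vectors can be sketched in code: their cross product gives a normal to the surface. This assumes the same reconstructed parameterization v(s, t) = (t² − s², st, s²t − st²) as above, under which the t-partial at (1, 1) works out to (2, 1, −1); both vectors here are assumptions, not values stated in the transcript:

```python
# The two partial-derivative tangent vectors at (s, t) = (1, 1) span the
# tangent plane; their cross product is a normal vector to the surface.
# Both vectors come from the assumed parameterization
# v(s, t) = (t^2 - s^2, s*t, s^2*t - s*t^2).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

dv_ds = (-2, 1, 1)   # partial w.r.t. s at (1, 1)
dv_dt = (2, 1, -1)   # partial w.r.t. t at (1, 1), under the assumed v

normal = cross(dv_ds, dv_dt)
print(normal)                              # (-2, 0, -4)
print(dot(normal, dv_ds), dot(normal, dv_dt))  # 0 0: perpendicular to both
```

The zero dot products confirm the cross product is orthogonal to both tangent vectors, which is exactly what defining a tangent plane from two vectors relies on.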
And we found a decent parameterization. And then given that parameterization, we were able to come up with ds for that surface, for surface 2, just simplified. All this business simplified to 1, so it just equaled du dv. And so now we are ready to evaluate the surface integral. This surface integral right over here, we're ready to evaluate the surface integral over surface 2 of z ds. And it's going to be equal to a double integral over u and v. And so let's write this. I'm going to do two different colors for the different variables of integration.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And so now we are ready to evaluate the surface integral. This surface integral right over here, we're ready to evaluate the surface integral over surface 2 of z ds. And it's going to be equal to a double integral over u and v. And so let's write this. I'm going to do two different colors for the different variables of integration. So yellow for one, and maybe purple for the other. And we're taking the integral of z. And in our parameterization, z is equal to v. So this right over here is the same thing as v. So we can write v right over there.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
I'm going to do two different colors for the different variables of integration. So yellow for one, and maybe purple for the other. And we're taking the integral of z. And in our parameterization, z is equal to v. So this right over here is the same thing as v. So we can write v right over there. And we already saw that ds is the same thing as du dv. Or we could even write that as dv du. We could just switch the order right over there.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And in our parameterization, z is equal to v. So this right over here is the same thing as v. So we can write v right over there. And we already saw that ds is the same thing as du dv. Or we could even write that as dv du. We could just switch the order right over there. And I'm going to choose to integrate with respect to dv first, to do dv on the inside integral, and then do du on the outside integral. And the reason why I'm choosing to integrate with respect to v first is based on the bounds of our parameters. v is bounded on the low end by 0.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
We could just switch the order right over there. And I'm going to choose to integrate with respect to v first, to do dv on the inside integral, and then do du on the outside integral. And the reason why I'm choosing to integrate with respect to v first is based on the bounds of our parameters. v is bounded on the low end by 0. It's bounded on the low end by 0. But on the high end, it's bounded by essentially a function of u. Its upper bound changes.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
v is bounded on the low end by 0. It's bounded on the low end by 0. But on the high end, it's bounded by essentially a function of u. Its upper bound changes. Because you see right over here, depending where we are, depending on what our x value is, essentially we have a different height that we need to get to. And since it's a function of u, we can integrate with respect to v, our boundaries are going to be 0 and 1 minus cosine of u. All of this business in magenta will give us a function of u.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
It's upper bound changes. Because you see right over here, depending where we are, depending on what our x value is, essentially we have a different height that we need to get to. And since it's a function of u, we can integrate with respect to v, our boundaries are going to be 0 and 1 minus cosine of u. All of this business in magenta will give us a function of u. And then we'll be able to integrate with respect to u. And u just goes from 0 to 2 pi. And so that will give us a nice, straightforward number, assuming all of this works out OK. And so this is simplified to a straight up double integral.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
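The double integral set up here, the integral from 0 to 2 pi of the integral from 0 to 1 − cos(u) of v dv du, can be sanity-checked numerically with a crude midpoint sum in pure Python; the exact value works out to 3π/2:

```python
import math

# Midpoint Riemann sum for  ∫_0^{2π} ∫_0^{1 - cos u}  v  dv du,
# i.e. the surface integral of z over surface 2 after the
# parameterization gives z = v and ds = dv du.

def double_integral(n=400):
    total = 0.0
    du = 2 * math.pi / n
    for i in range(n):
        u = (i + 0.5) * du          # midpoint in u
        v_max = 1 - math.cos(u)     # inner upper bound depends on u
        dv = v_max / n
        for j in range(n):
            v = (j + 0.5) * dv      # midpoint in v
            total += v * dv * du
    return total

print(double_integral())   # ≈ 4.712..., agreeing with 3π/2
print(3 * math.pi / 2)
```

The midpoint rule is exact on the linear inner integrand v, so the numerical answer lines up closely with the closed-form evaluation.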
All of this business in magenta will give us a function of u. And then we'll be able to integrate with respect to u. And u just goes from 0 to 2 pi. And so that will give us a nice, straightforward number, assuming all of this works out OK. And so this is simplified to a straight up double integral. And now we're ready to compute. And so let me write the outside part. The outside part is from 0 to 2 pi.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And so that will give us a nice, straightforward number, assuming all of this works out OK. And so this is simplified to a straight up double integral. And now we're ready to compute. And so let me write the outside part. The outside part is from 0 to 2 pi. It's du. And so the inside part, the antiderivative of v, is v squared over 2. And we're going to evaluate that from 0 to 1 minus cosine of u.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
The outside part is from 0 to 2 pi. It's du. And so the inside part, the antiderivative of v, is v squared over 2. And we're going to evaluate that from 0 to 1 minus cosine of u. And so this is going to be equal to, once again, the outside integral 0, 2 pi. I'll write du. I'll write du right over here.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And we're going to evaluate that from 0 to 1 minus cosine of u. And so this is going to be equal to, once again, the outside integral 0, 2 pi. I'll write du. I'll write du right over here. And so this is going to be equal to all of this. Let me just write 1 half. And actually, I can even write the 1 half out here.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
I'll write du right over here. And so this is going to be equal to all of this. Let me just write 1 half. And actually, I can even write the 1 half out here. I'll just write 1 half times 1 minus cosine u squared. Well, that's just going to be 1 squared minus 2 times the product of both of these. So minus 2 cosine of u.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And actually, I can even write the 1 half out here. I'll just write 1 half times 1 minus cosine u squared. Well, that's just going to be 1 squared minus 2 times the product of both of these. So minus 2 cosine of u. Actually, let me give myself a little bit more real estate here. Let me give myself a little bit more real estate. 1 minus 2 cosine of u plus cosine of u squared minus this thing evaluated at 0, which is just going to be 0.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
So minus 2 cosine of u. Actually, let me give myself a little bit more real estate here. Let me give myself a little bit more real estate. 1 minus 2 cosine of u plus cosine of u squared minus this thing evaluated at 0, which is just going to be 0. So we just get that right over there. And then we have du. And so now we can evaluate this.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
1 minus 2 cosine of u plus cosine of u squared minus this thing evaluated at 0, which is just going to be 0. So we just get that right over there. And then we have du. And so now we can evaluate this. We can integrate this with respect to u. So let's do that. And let me just take the 1 half on the outside just to simplify things.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And so now we can evaluate this. We can integrate this with respect to u. So let's do that. And let me just take the 1 half on the outside just to simplify things. So we have the 1 half out here. And so if you take the antiderivative of this with respect to u, you still have this 1 half out front. So this is equal to 1 half.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And let me just take the 1 half on the outside just to simplify things. So we have the 1 half out here. And so if you take the antiderivative of this with respect to u, you still have this 1 half out front. So this is equal to 1 half. And we're going to take the antiderivative. So let's do it carefully. Actually, let me just simplify it so it's easier to take the antiderivative.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
So this is equal to 1 half. And we're going to take the antiderivative. So let's do it carefully. Actually, let me just simplify it so it's easier to take the antiderivative. So it's going to be 1 half times the integral. I'll break this up into three different integrals. 1 half times the integral from 0 to 2 pi of 1 du, which is just du, minus 2 times the integral from 0 to 2 pi of cosine of u du.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
Actually, let me just simplify it so it's easier to take the antiderivative. So it's going to be 1 half times the integral. I'll break this up into three different integrals. 1 half times the integral from 0 to 2 pi of 1 du, which is just du, minus 2 times the integral from 0 to 2 pi of cosine of u du. That's this term right over here. Plus the integral from 0 to 2 pi of cosine squared u. And cosine squared u, it's not so easy to take the antiderivative of that.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
1 half times the integral from 0 to 2 pi of 1 du, which is just du, minus 2 times the integral from 0 to 2 pi of cosine of u du. That's this term right over here. Plus the integral from 0 to 2 pi of cosine squared u. And cosine squared u, it's not so easy to take the antiderivative of that. So we'll use one of our trig identities. I always forget the formal name. I just think of it as the one that takes us from cosine squared to cosine of 2u.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
And cosine squared u, it's not so easy to take the antiderivative of that. So we'll use one of our trig identities. I always forget the formal name. I just think of it as the one that takes us from cosine squared to cosine of 2u. So this trig identity, this thing right over here, is the same thing. This comes straight out of our trigonometry class. This is 1 half plus 1 half cosine of 2 theta.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
I just think of it as the one that takes us from cosine squared to cosine of 2u. So this trig identity, this thing right over here, is the same thing. This comes straight out of our trigonometry class. This is 1 half plus 1 half cosine of 2 theta. Plus 1 half cosine of 2u. So this last integral right over here, I can rewrite it as 1 half plus 1 half cosine of 2u. And then we have our final du.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
This is 1 half plus 1 half cosine of 2 theta. Plus 1 half cosine of 2u. So this last integral right over here, I can rewrite it as 1 half plus 1 half cosine of 2u. And then we have our final du. And now let me close the brackets. And all of that is times 1 half. So this thing right over here, cosine squared u, just a trig identity, takes us to that.
Surface integral ex3 part 2 Evaluating the outside surface Multivariable Calculus Khan Academy.mp3
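The three-way split described above can be checked piece by piece. A minimal sketch: over a full period, the integral of 1 is 2π, the integral of cos u is 0, and, via the identity cos²u = 1/2 + 1/2 cos 2u, the integral of cos²u is π, so the whole expression is (1/2)(2π − 2·0 + π) = 3π/2:

```python
import math

# Evaluate the three pieces  ∫ 1 du,  ∫ cos u du,  ∫ cos²u du  over [0, 2π]
# with a simple midpoint sum, then combine them as (1/2)(i1 - 2*i2 + i3).

def midpoint(f, a, b, n=10_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

two_pi = 2 * math.pi
i1 = midpoint(lambda u: 1.0, 0, two_pi)              # ≈ 2π
i2 = midpoint(lambda u: math.cos(u), 0, two_pi)      # ≈ 0
i3 = midpoint(lambda u: math.cos(u) ** 2, 0, two_pi) # ≈ π, via cos²u = 1/2 + 1/2·cos 2u

result = 0.5 * (i1 - 2 * i2 + i3)
print(result)  # ≈ 4.712..., i.e. 3π/2
```

Each numerical piece matches the closed-form value the trig identity predicts, which is the point of pulling cos²u apart before integrating.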