| Sentence | video_title |
|---|---|
When s is pi over 2 (I want to use multiple different colors), when s is pi over 2, up here we've rotated exactly 90 degrees, right? Pi over 2 is 90 degrees at this point. And then if we vary t, we're essentially tracing out the top of the donut. So let me make sure I draw it. So the cross section, the top of the donut, we're going to start off right over here. So when s is pi over 2 and you vary t (I'm having trouble drawing straight lines), it's going to look like this.
|
Introduction to parametrizing a surface with two parameters Multivariable Calculus Khan Academy.mp3
|
That's the top of that circle right there. The top of this circle is going to be right there. Top of this circle is going to be right over there. Top of this circle is going to be right over there. So then I just connect the dots. It's going to look something like that. That is the top of our donut. If I was doing this top view, it would be the top of the donut just like that. And if I want to do the bottom of the donut, just to make the picture clear: if I take s as 3 pi over 2 and I vary t, that's the bottom of our donut.
So let me draw the circle. So it's right there. The circle is right there. You wouldn't even be able to see the whole thing if this wasn't transparent. So you'd be tracing out the bottom of the donut just like that. I know that this graph is becoming a little confusing, but hopefully you get the idea. When s is 2 pi again, you're going to be back to the outside of the donut again. That's also going to be in purple. So that's what happens when we hold the s constant at certain values and vary the t. Now let's do the opposite.
What happens if we hold t at 0 and we vary the s? So t is 0. That means we haven't rotated at all yet. So we're in the zy-plane. And s will start at 0 and it'll go to 2 pi. This is this point. Sorry, at pi over 2, that's that point over there.
Then it'll go to pi. This point is the same thing as that point. Then it'll go to 3 pi over 2. And then it'll come back all the way to 2 pi. So this line corresponds to this circle right there. We could keep doing these. If we pick when s is pi over 2, sorry, when t is pi over 2. Let me do a different color.
That's not different enough. When t is pi over 2, just like that, we would have rotated around the z-axis 90 degrees. So now we're over here. And now when we vary s, s will start off over here and it'll go all the way around like that. So this line corresponds to that circle. We could keep doing it like this. When t is equal to pi, that means we've gone halfway around the circle like that. And now when we vary s from 0 to 2 pi, we're going to start all the way over here. And then we're going to vary all the way.
We could keep doing it like this. When t is equal to pi, that means we've gone all the way around the circle like that. And now when we vary s from 0 to pi over 2, we're going to start all the way over here. And then we're going to vary all the way. We're going to go down and hit all those contours that we talked about before. And I'll do one more just to kind of make the scaffold clear, this dark purple. Hopefully you can see it.
|
Introduction to parametrizing a surface with two parameters Multivariable Calculus Khan Academy.mp3
|
And then we're going to vary all the way. We're going to go down and hit all those contours that we talked about before. And I'll do one more just to kind of make the scaffold clear, this dark purple. Hopefully you can see it. When t is 3 pi over 4, we've rotated all the way. So we're on the xz-plane. And then when you vary s, s will start off over here.
|
Introduction to parametrizing a surface with two parameters Multivariable Calculus Khan Academy.mp3
|
Hopefully you can see it. When t is 3 pi over 4, we've rotated all the way. So we're on the xz-plane. And then when you vary s, s will start off over here. And as you increase s, you're going to go around the circle just like that. And of course, when you get all the way back full circle, t over pi over 2, that's the same thing. You're back over here again.
|
Introduction to parametrizing a surface with two parameters Multivariable Calculus Khan Academy.mp3
|
So this is going to be, we can even shade it the same color. And hopefully you're getting a sense now of the parameterization. I haven't done any math yet. I haven't actually shown you how to mathematically represent it as a vector-valued function. But hopefully you're getting a sense of what it means to parameterize by two parameters. And just to get an idea of what these areas on our st-plane correspond to on this surface in R3, this little square right here, let's see what it's bounded by. It's this little square. I want to make sure I pick a square that I can draw neatly. So this square right here: when you look at t, it's between t equals 0 and pi over 2.
So t is between 0 and pi over 2, and s is between 0 and pi over 2. So this right here is this part of our torus. If you're looking at it from an outer edge, or sorry, from the top, it would look like that right there. You can imagine we've transformed this square. I haven't even shown you how to do it mathematically yet, but we've transformed this square to this part of the donut. Now, I think we've done about as much as I can do on the visualization side. I'll stop this video here. In the next video, we're going to talk about how we actually parameterize using these two parameters. Remember, s takes us around each of these circles, and then t takes us around the z-axis. And if you take all of the combinations of s and t, you're going to have every point along the surface of this torus or this donut. How do you actually go from an s and a t that each go from 0 to 2 pi, and turn it into a three-dimensional position vector-valued function that would define this surface?
|
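The question the transcript ends on can be sketched in code. The parametrization below is the standard two-parameter torus parametrization the next video builds toward; the radii a (the small cross-sectional circle) and b (the distance from the z-axis to that circle's center) are assumed values for illustration, not numbers from the video:

```python
import numpy as np

def torus(s, t, a=1.0, b=2.0):
    """Position vector-valued function for the torus.

    s (0 to 2*pi) sweeps each small cross-sectional circle;
    t (0 to 2*pi) rotates that circle around the z-axis.
    a and b are assumed radii, chosen only for this sketch.
    """
    x = (b + a * np.cos(s)) * np.sin(t)
    y = (b + a * np.cos(s)) * np.cos(t)
    z = a * np.sin(s)
    return np.array([x, y, z])

# t = 0 keeps us in the zy-plane; s = pi/2 is the top of the donut
print(torus(0.0, 0.0))        # approximately [0, 3, 0]
print(torus(np.pi / 2, 0.0))  # approximately [0, 2, 1]
```

Holding s fixed and varying t traces the horizontal circles from the first half of the video; holding t fixed and varying s traces the cross-sectional circles.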
I needed a break. But where we left off, we were in the home stretch. We were evaluating the third surface integral. And we were setting it up as a double integral with respect to r and theta. And we just had to set up the bounds. And we know that r takes on values between 0 and 1. And theta takes on values between 0 and 2 pi.
|
Surface integral ex3 part 4 Home stretch Multivariable Calculus Khan Academy.mp3
|
So r, we're going to integrate with respect to r first. r takes on values between 0 and 1. And theta takes on values between 0 and 2 pi. And so now we are ready to integrate. So let's do the first part. Let's do this inside part right over here. And I'm just going to rewrite the outside part. So this is going to be equal to our square root of 2 times the integral from 0 to 2 pi of, and we have d theta right over here. So that's the outside.
This inside part right over here, we can rewrite: if we distribute the r, it's r minus r squared cosine of theta. Now we're going to integrate with respect to r. When we integrate with respect to r, cosine of theta is just a constant. So if you integrate this with respect to r, you get, and I'll do it in that pink color: the antiderivative of r is r squared over 2. And the antiderivative of r squared is r cubed over 3, so minus r cubed over 3 cosine theta; cosine theta is just a constant. And we're going to evaluate that from 0 to 1. So when you evaluate it at 1, you get 1 half minus 1 third cosine theta.
So you get 1 half, I'll just do it right over here, 1 half minus 1 third cosine theta. And then minus both of these evaluated at 0. Well, that's just going to be 0: 0 squared over 2 minus 0 cubed over 3 times whatever, it's all going to be 0. So this business right over here just evaluates to 1 half minus 1 third cosine of theta. And so we get the integral.
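That inner r-integral can be double-checked symbolically; this is a sketch using sympy (an assumption of this write-up, not a tool the video uses):

```python
import sympy as sp

r, theta = sp.symbols('r theta')
# the distributed integrand from the video: r - r**2 * cos(theta)
inner = sp.integrate(r - r**2 * sp.cos(theta), (r, 0, 1))
print(inner)  # 1/2 - cos(theta)/3, i.e. 1 half minus 1 third cosine theta
```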
This is all going to be equal now to the square root of 2 times the integral from theta equals 0 to 2 pi of 1 half minus 1 third cosine of theta, d theta. And this is equal to square root of 2 times: the antiderivative of 1 half is 1 half theta. And the antiderivative of cosine theta is sine theta.
So minus 1 third sine theta. And we're evaluating it from 0 to 2 pi. When you evaluate these at 2 pi, you have, let me just write it all out. It's the home stretch; I don't want to make a careless mistake. We have square root of 2 times: 1 half times 2 pi is pi, minus 1 third times sine of 2 pi. Well, that's just going to be 0. And when you evaluate it at 0, 1 half times 0 is 0.
Sine of 0 is 0. So that all comes out to 0. So all of this business simplifies to pi. And we are done. We have evaluated the surface integral over surface 3. It is square root of 2 times pi. So this part right over here is square root of 2, or root 2, times pi.
|
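The whole computation, bounds and all, can be verified the same way; again a sympy sketch (sympy assumed available):

```python
import sympy as sp

r, theta = sp.symbols('r theta')
# sqrt(2) times the double integral of (r - r**2*cos(theta))
# over r in [0, 1] and theta in [0, 2*pi]
total = sp.sqrt(2) * sp.integrate(r - r**2 * sp.cos(theta),
                                  (r, 0, 1), (theta, 0, 2 * sp.pi))
print(total)  # sqrt(2)*pi
```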
In the last video, I started introducing the intuition for the Laplacian operator in the context of the function with this graph and with the gradient field pictured below it, and here I'd like to go through the computation involved in that. So the function that I had there was defined, it's a two-variable function, and it's defined as f of x, y is equal to three plus the cosine of x divided by two, multiplied by the sine of y divided by two. And then the Laplacian, which we denote with this right-side-up triangle, is an operator on f, and it's defined to be the divergence, so kind of this nabla dot, of the gradient, which is just nabla of f. So two different things going on; it's kind of like a second derivative. And the first thing we need to do is take the gradient of f. And the way we do that, we kind of imagine expanding this upside-down triangle as a vector full of partial differential operators, partial over partial x and partial over partial y. And with the gradient, you just kind of imagine multiplying that by the function. So if you imagine multiplying that by the function, what it looks like is just a vector full of partial derivatives. You're taking the partial of f with respect to x and the partial of f with respect to y.
|
Laplacian computation example.mp3
|
Those are the two different components of this vector-valued function that is the gradient. And in our specific example, when we take the partial derivative of f with respect to x, what we get, so we look over here: three just looks like a constant, so nothing happens. Cosine of x over two, the derivative of that with respect to x: we kind of take out that one-half. So one-half, and the derivative of cosine is negative sine. So that's negative sine of x over two. And sine of y over two, well, y just looks like a constant, so sine of y over two is just some other constant. So in our derivative, we just keep that constant in there, that sine of y over two. And then for the second component, the partial derivative of f with respect to y: three still looks like a constant, because it is a constant. Now cosine of x over two looks like a constant, because as far as y is concerned, x is a constant, so cosine of x over two is a constant.
But then the sine of y has a derivative of cosine, and we also take out that one-half. So you take out that one-half when you take the derivative of the inside, and then the derivative of the outside is cosine of whatever was in there. So in this case, y over two. And we're multiplying it by that original constant, cosine of x over two. So still we have our cosine of x over two, since it was a constant times a certain variable thing.
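The two partial derivatives worked out above can be confirmed symbolically; a sympy sketch (sympy is an assumption here, not something the video uses):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 3 + sp.cos(x / 2) * sp.sin(y / 2)

# the gradient: a vector of the two partial derivatives, as in the video
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
print(grad)  # [-sin(x/2)*sin(y/2)/2, cos(x/2)*cos(y/2)/2]
```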
So that's the gradient. And then the next step here is to take the divergence of that. So with the divergence, we're gonna imagine taking that del operator and dot-producting with this guy. So if I scroll down to give some room here, we're gonna take that same vector, the partial over partial x, and I say vector, but vector-ish thing, partial over partial y. And now we're gonna take the dot-product with this entire guy. So I'll go ahead and just copy it over. Just kind of copy it over here. And let's see. So we'll need a little bit more room to evaluate this.
So I'll go ahead and just copy it over. Just kind of copy it over here. And let's see. So we'll need a little bit more room to evaluate this. So here, when you imagine taking the dot-product, you kind of multiply these top components together. So we're taking the partial derivative with respect to x of this whole guy, and when you do that, what you get, you still have that 1 1β2, and then the derivative of negative sine of x over two. So that 1 1β2 gets pulled out when you're kind of taking the derivative of the inside, and the derivative of negative sine is negative cosine.
|
Laplacian computation example.mp3
|
So we'll need a little bit more room to evaluate this. So here, when you imagine taking the dot-product, you kind of multiply these top components together. So we're taking the partial derivative with respect to x of this whole guy, and when you do that, what you get, you still have that 1 1β2, and then the derivative of negative sine of x over two. So that 1 1β2 gets pulled out when you're kind of taking the derivative of the inside, and the derivative of negative sine is negative cosine. So negative cosine of that stuff on the inside, that x over two. And of course, we still multiply it by this. This looks like a constant, the sine of y over two, and we multiply by that.
|
Laplacian computation example.mp3
|
So that 1 1β2 gets pulled out when you're kind of taking the derivative of the inside, and the derivative of negative sine is negative cosine. So negative cosine of that stuff on the inside, that x over two. And of course, we still multiply it by this. This looks like a constant, the sine of y over two, and we multiply by that. Sine of y over two. And then we add that, because it's kind of like a dot-product, you add that to what it looks like when you multiply these next two components. So we're gonna add, and you have that 1 1β2, and then cosine of y over two.
|
Laplacian computation example.mp3
|
When we differentiate that, you also pull out the one-half. So again, you have that pulled-out one-half, and the derivative of cosine is negative sine. So now we're taking negative sine of that stuff on the inside, y over two. And we continue multiplying by the constant. As far as y is concerned, cosine of x over two is a constant, so we multiply it by that, cosine of x over two. And then that, so that is the divergence of that gradient field. So the divergence of the gradient of our original function gives us the Laplacian. And in fact, we could simplify this further, because both of these terms kind of look identical.
|
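The simplification the transcript gestures at (the two terms are identical) can be checked end to end; another sympy sketch under the same assumption:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 3 + sp.cos(x / 2) * sp.sin(y / 2)

# Laplacian = divergence of the gradient = f_xx + f_yy
lap = sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2))
print(lap)  # -cos(x/2)*sin(y/2)/2: the two quarter-sized terms combined
```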
So in the last video, I introduced the vector form of the multivariable chain rule. And just to remind ourselves, I'm saying you have some kind of function f, and in this case I said it comes from a 100-dimensional space. So you might imagine, well, I can't imagine a 100-dimensional space, but in principle, you're just thinking of some area that's 100 dimensions. It could be two if you wanted to think more concretely, two dimensions. And it's a scalar-valued function, so it just outputs to a number line, some kind of number line that I'll think of as the output of f. And what we're gonna do is we compose it with a vector-valued function. So some function that takes in a single number t, and then it outputs into that super high dimensional space.
|
Multivariable chain rule and directional derivatives.mp3
|
It could be two if you wanted to think more concretely, two dimensions. And it's a scalar-valued function, so it just outputs to a number line, some kind of number line that I'll think of as the output of f. And what we're gonna do is we compose it with a vector-valued function. So some function that takes in a single number t, and then it outputs into that super high-dimensional space. So you're thinking, you go from the single variable t to some very high-dimensional space that we think of as full of vectors, and then you take from that over to a single variable, over to a number. And you know, the way you'd write that out is you'd say f composed with the output of v. So f composed with v of t. And what we're interested in doing is taking its derivative. So the derivative of that composition is, and I told you, and we kind of walked through where this comes from, the gradient of f evaluated at v of t, evaluated at your original output, dot product with the derivative of v, the vectorized derivative.
|
Multivariable chain rule and directional derivatives.mp3
|
So some function that takes in a single number t, and then it outputs into that super high-dimensional space. So you're thinking, you go from the single variable t to some very high-dimensional space that we think of as full of vectors, and then you take from that over to a single variable, over to a number. And you know, the way you'd write that out is you'd say f composed with the output of v. So f composed with v of t. And what we're interested in doing is taking its derivative. So the derivative of that composition is, and I told you, and we kind of walked through where this comes from, the gradient of f evaluated at v of t, evaluated at your original output, dot product with the derivative of v, the vectorized derivative. And what that means, you know, for v, you're just taking the derivative of every component. So when you take this and you take the derivative with respect to t, all that means is that each component, you're taking the derivative of it. The dx1 dt, the dx2 dt, on and on until the dx100 dt, the derivative of the 100th component.
|
Multivariable chain rule and directional derivatives.mp3
|
So the derivative of that composition is, and I told you, and we kind of walked through where this comes from, the gradient of f evaluated at v of t, evaluated at your original output, dot product with the derivative of v, the vectorized derivative. And what that means, you know, for v, you're just taking the derivative of every component. So when you take this and you take the derivative with respect to t, all that means is that each component, you're taking the derivative of it. The dx1 dt, the dx2 dt, on and on until the dx100 dt, the derivative of the 100th component. So this was the vectorized form of the multivariable chain rule. And what I want to do here is show how this looks a lot like a directional derivative. And if you haven't watched the video on the directional derivative, maybe go back, take a look, kind of remind yourself.
|
Multivariable chain rule and directional derivatives.mp3
|
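The formula just stated, d/dt f(v(t)) = ∇f(v(t)) · v′(t), can be sanity-checked numerically. A minimal sketch in two dimensions rather than one hundred (this specific f and v are made-up examples, not from the video):

```python
import numpy as np

# Made-up example: f(x1, x2) = x1^2 * x2, composed with v(t) = (cos t, sin t)
def f(p):
    x1, x2 = p
    return x1**2 * x2

def grad_f(p):
    x1, x2 = p
    return np.array([2 * x1 * x2, x1**2])  # vector of partial derivatives

def v(t):
    return np.array([np.cos(t), np.sin(t)])

def v_prime(t):
    return np.array([-np.sin(t), np.cos(t)])  # derivative of each component

t = 0.7
chain_rule = grad_f(v(t)) @ v_prime(t)  # gradient at v(t), dotted with v'(t)

# Central-difference approximation of d/dt f(v(t)) for comparison
h = 1e-6
numeric = (f(v(t + h)) - f(v(t - h))) / (2 * h)
print(abs(chain_rule - numeric) < 1e-6)  # True: the two agree
```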
The dx1 dt, the dx2 dt, on and on until the dx100 dt, the derivative of the 100th component. So this was the vectorized form of the multivariable chain rule. And what I want to do here is show how this looks a lot like a directional derivative. And if you haven't watched the video on the directional derivative, maybe go back, take a look, kind of remind yourself. But in principle, you say if you're in the input space of f and you nudge yourself along some kind of vector v, and maybe just because I'm using v there, I'll instead say some kind of vector w. So not a function, just a vector. And you're wondering, hey, how much does that result in a change to the output of f? That's answered by the directional derivative.
|
Multivariable chain rule and directional derivatives.mp3
|
And if you haven't watched the video on the directional derivative, maybe go back, take a look, kind of remind yourself. But in principle, you say if you're in the input space of f and you nudge yourself along some kind of vector v, and maybe just because I'm using v there, I'll instead say some kind of vector w. So not a function, just a vector. And you're wondering, hey, how much does that result in a change to the output of f? That's answered by the directional derivative. And you write directional derivative in the direction of w of f, the directional derivative of f, and I should say at some point, some input point, p for that input point, and it's a vector in this case, like a 100-dimensional vector. And the way you evaluate it is you take the gradient of f, this is why we use the Nabla notation in the first place, it's indicative of how we compute it, the gradient of f, evaluated at that same input point, same input vector p. So here, just to be clear, you'd be thinking of whatever vector to your input point, that's p. But then the nudge, the nudge away from that input point is w. And you take the dot product between that and the vector itself, the vector that represents your nudge direction. But that looks a lot like the multivariable chain rule up here, except instead of w, you're taking the derivative, the vector-valued derivative of v. So this whole thing, you could say, is the directional derivative in the direction of the derivative of v, and it's kind of confusing, the directional derivative of f in the direction of a derivative. And at what point are you taking this?
|
Multivariable chain rule and directional derivatives.mp3
|
That's answered by the directional derivative. And you write directional derivative in the direction of w of f, the directional derivative of f, and I should say at some point, some input point, p for that input point, and it's a vector in this case, like a 100-dimensional vector. And the way you evaluate it is you take the gradient of f, this is why we use the Nabla notation in the first place, it's indicative of how we compute it, the gradient of f, evaluated at that same input point, same input vector p. So here, just to be clear, you'd be thinking of whatever vector to your input point, that's p. But then the nudge, the nudge away from that input point is w. And you take the dot product between that and the vector itself, the vector that represents your nudge direction. But that looks a lot like the multivariable chain rule up here, except instead of w, you're taking the derivative, the vector-valued derivative of v. So this whole thing, you could say, is the directional derivative in the direction of the derivative of v, and it's kind of confusing, the directional derivative of f in the direction of a derivative. And at what point are you taking this? At what point are you taking this directional derivative? Well, it's wherever the output of v is. So this is very compact, it's saying quite a bit here, but a way that you could be thinking about this is v of t, I'm gonna kind of erase here, v of t, as you're zooming all about and as you shift t, it kind of moves you through this space in some way.
|
Multivariable chain rule and directional derivatives.mp3
|
But that looks a lot like the multivariable chain rule up here, except instead of w, you're taking the derivative, the vector-valued derivative of v. So this whole thing, you could say, is the directional derivative in the direction of the derivative of v, and it's kind of confusing, the directional derivative of f in the direction of a derivative. And at what point are you taking this? At what point are you taking this directional derivative? Well, it's wherever the output of v is. So this is very compact, it's saying quite a bit here, but a way that you could be thinking about this is v of t, I'm gonna kind of erase here, v of t, as you're zooming all about and as you shift t, it kind of moves you through this space in some way. And each one of these output points here represents the vector, v of t at some point. The derivative of that, what does this derivative represent? That's the tangent vector to that motion.
|
Multivariable chain rule and directional derivatives.mp3
|
So this is very compact, it's saying quite a bit here, but a way that you could be thinking about this is v of t, I'm gonna kind of erase here, v of t, as you're zooming all about and as you shift t, it kind of moves you through this space in some way. And each one of these output points here represents the vector, v of t at some point. The derivative of that, what does this derivative represent? That's the tangent vector to that motion. So you're zipping about through that space, the tangent vector to your motion, that's how we interpret v prime of t, the derivative of v with respect to t. And why should that make sense? Why should the directional derivative in the direction of v prime of t, this change to the intermediary function v, have anything to do with the multivariable chain rule? Well, remember what we're asking when we say d dt of this composition, is we're saying we take a tiny nudge to t, so that tiny change here in the value t, and we're wondering what change that results in after the composition.
|
Multivariable chain rule and directional derivatives.mp3
|
That's the tangent vector to that motion. So you're zipping about through that space, the tangent vector to your motion, that's how we interpret v prime of t, the derivative of v with respect to t. And why should that make sense? Why should the directional derivative in the direction of v prime of t, this change to the intermediary function v, have anything to do with the multivariable chain rule? Well, remember what we're asking when we say d dt of this composition, is we're saying we take a tiny nudge to t, so that tiny change here in the value t, and we're wondering what change that results in after the composition. Well, at a given point, that tiny nudge in t causes a change in the direction of v prime of t. That's kind of the whole meaning of this vector value derivative. You change t by a little bit, and that's gonna tell you how you move in the output space. But then you say, okay, so I've moved a little bit in this intermediary, 100-dimensional space.
|
Multivariable chain rule and directional derivatives.mp3
|
Well, remember what we're asking when we say d dt of this composition, is we're saying we take a tiny nudge to t, so that tiny change here in the value t, and we're wondering what change that results in after the composition. Well, at a given point, that tiny nudge in t causes a change in the direction of v prime of t. That's kind of the whole meaning of this vector-valued derivative. You change t by a little bit, and that's gonna tell you how you move in the output space. But then you say, okay, so I've moved a little bit in this intermediary, 100-dimensional space.
|
Multivariable chain rule and directional derivatives.mp3
|
But then you say, okay, so I've moved a little bit in this intermediary, 100-dimensional space. How does that influence the output of f based on the behavior of just the multivariable function f? Well, that's what the directional derivative is asking. It says you take a nudge in the direction of some vector. In this case, I wrote v prime of t over here. More generally, you could say any vector w, you take a nudge in that direction. And more importantly, the size of v prime of t matters here.
|
Multivariable chain rule and directional derivatives.mp3
|
It says you take a nudge in the direction of some vector. In this case, I wrote v prime of t over here. More generally, you could say any vector w, you take a nudge in that direction. And more importantly, the size of v prime of t matters here. If you're moving really quickly, you would expect that change to be larger. So the fact that v prime of t would be larger is helpful. And the directional derivative is telling you the size of the change in f as a ratio of the proportion of that directional vector that you went along.
|
Multivariable chain rule and directional derivatives.mp3
|
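The point about the size of v′(t) mattering is exactly why the un-normalized directional derivative ∇f(p)·w fits here: scaling the vector scales the answer. A small sketch, with an assumed example function that is not from the video:

```python
import numpy as np

# Assumed illustration function: f(x, y) = x * y^2
def f(p):
    x, y = p
    return x * y**2

def grad_f(p):
    x, y = p
    return np.array([y**2, 2 * x * y])

p = np.array([1.0, 2.0])   # the input point
w = np.array([3.0, 4.0])   # the nudge direction

# Directional derivative: gradient at p dotted with the nudge vector w
d_w = grad_f(p) @ w
print(d_w)  # 28.0

# Doubling the vector doubles the directional derivative, mirroring how a
# larger v'(t) (faster motion through the space) produces a larger change in f
print(grad_f(p) @ (2 * w))  # 56.0
```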
And more importantly, the size of v prime of t matters here. If you're moving really quickly, you would expect that change to be larger. So the fact that v prime of t would be larger is helpful. And the directional derivative is telling you the size of the change in f as a ratio of the proportion of that directional vector that you went along. Another notation for the directional derivative is to say partial f over partial w, where whatever that vector is, you're basically saying you take the size of that nudge along that vector as a proportion of the vector itself, and then you consider the change to the output and you're taking the ratio. So I think this is a very beautiful way of understanding the multivariable chain rule because it gives this image of, you know, you're thinking of v of t and you're thinking of zipping along in some way, and the direction and value of your velocity as you zip along is what determines the change in the output of the function f. So hopefully that helps give a better understanding both of the directional derivative and of the multivariable chain rule. It's one of those nice little interpretations.
|
Multivariable chain rule and directional derivatives.mp3
|
And before we jump into it, I just want to give a quick review of how you think about the determinant itself, just in an ordinary linear algebra context. So if I'm taking the determinant of some kind of matrix, let's say, three, zero, one, two, one, two, something like this. To compute the determinant, you take these diagonal terms here. So you take three multiplied by that two, and then you subtract off the other diagonal. Subtract off one multiplied by zero. And in this case, that evaluates to six. But there is, of course, much more than just a computation going on here.
|
The Jacobian Determinant.mp3
|
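The diagonal rule just described can be written out directly, with a library call only as a cross-check:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# ad - bc: product of the main diagonal minus the product of the other diagonal
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
print(det)  # 6.0

# Library cross-check (agrees up to floating point)
print(np.isclose(np.linalg.det(A), 6.0))  # True
```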
So you take three multiplied by that two, and then you subtract off the other diagonal. Subtract off one multiplied by zero. And in this case, that evaluates to six. But there is, of course, much more than just a computation going on here. There's a really nice geometric intuition. Namely, if we think of this matrix, three, zero, one, two, as a linear transformation, as something that's going to take this first basis vector over to the coordinates three, zero, and that second basis vector over to the coordinates one, two, you know, thinking about the columns, you can think of the determinant as measuring how much this transformation stretches or squishes space. And in particular, you'll notice how I have this yellow region highlighted.
|
The Jacobian Determinant.mp3
|
But there is, of course, much more than just a computation going on here. There's a really nice geometric intuition. Namely, if we think of this matrix, three, zero, one, two, as a linear transformation, as something that's going to take this first basis vector over to the coordinates three, zero, and that second basis vector over to the coordinates one, two, you know, thinking about the columns, you can think of the determinant as measuring how much this transformation stretches or squishes space. And in particular, you'll notice how I have this yellow region highlighted. And this region starts off as the unit square, a square with side lengths one, so its area is one. And there's nothing special about this particular region. It's just nice as a canonical shape with an area of one so that we can compare it to what happens after the transformation.
|
The Jacobian Determinant.mp3
|
And in particular, you'll notice how I have this yellow region highlighted. And this region starts off as the unit square, a square with side lengths one, so its area is one. And there's nothing special about this particular region. It's just nice as a canonical shape with an area of one so that we can compare it to what happens after the transformation. Ask, how much does that area get stretched out? And the answer is, it gets stretched out by a factor of the determinant. That's kind of what the determinant means, is that all areas, if you were to drop any kind of shape, not just that one square, are gonna get stretched out by a factor of six.
|
The Jacobian Determinant.mp3
|
It's just nice as a canonical shape with an area of one so that we can compare it to what happens after the transformation. Ask, how much does that area get stretched out? And the answer is, it gets stretched out by a factor of the determinant. That's kind of what the determinant means, is that all areas, if you were to drop any kind of shape, not just that one square, are gonna get stretched out by a factor of six. And we can actually verify, looking at this parallelogram that the square turned into, it has a base of three, and then the height is two. And three times two is six. And that has everything to do with the fact that this three showed up here and this two showed up there.
|
The Jacobian Determinant.mp3
|
That's kind of what the determinant means, is that all areas, if you were to drop any kind of shape, not just that one square, are gonna get stretched out by a factor of six. And we can actually verify, looking at this parallelogram that the square turned into, it has a base of three, and then the height is two. And three times two is six. And that has everything to do with the fact that this three showed up here and this two showed up there. So now, let's think about what this might mean in the context of what I've been describing in the last couple videos. And if you'll remember, we had a multivariable function, something that you can write out as F1 with two inputs, and then the second component, F2, also with two inputs. And the function that I was looking at, that we were kind of analyzing to learn about the Jacobian, had the first component, X plus sine of Y. X plus sine Y.
|
The Jacobian Determinant.mp3
|
And that has everything to do with the fact that this three showed up here and this two showed up there. So now, let's think about what this might mean in the context of what I've been describing in the last couple videos. And if you'll remember, we had a multivariable function, something that you can write out as F1 with two inputs, and then the second component, F2, also with two inputs. And the function that I was looking at, that we were kind of analyzing to learn about the Jacobian, had the first component, X plus sine of Y. X plus sine Y. And the second component was Y plus the sine of X. And the idea was that this function is not at all linear. It's gonna make everything very curvy and complicated.
|
The Jacobian Determinant.mp3
|
And the function that I was looking at, that we were kind of analyzing to learn about the Jacobian, had the first component, X plus sine of Y. X plus sine Y. And the second component was Y plus the sine of X. And the idea was that this function is not at all linear. It's gonna make everything very curvy and complicated. However, if we zoom in around a particular region, which is what this outer yellow box represents, zooming in, it will look like a linear transformation. In fact, I can kind of play this forward, and we see that even though everything is crazy, inside that zoomed-in version, things loosely look like a linear function. And you'll notice I have this inner yellow box highlighted.
|
The Jacobian Determinant.mp3
|
It's gonna make everything very curvy and complicated. However, if we zoom in around a particular region, which is what this outer yellow box represents, zooming in, it will look like a linear transformation. In fact, I can kind of play this forward, and we see that even though everything is crazy, inside that zoomed-in version, things loosely look like a linear function. And you'll notice I have this inner yellow box highlighted. And this yellow box inside corresponds to the unit square that I was showing in the last animation. And again, it's just a placeholder as something to watch to see how much the area of any kind of blob in that region gets stretched. So in this particular case, when you play out the animation, areas don't really change that much.
|
The Jacobian Determinant.mp3
|
And you'll notice I have this inner yellow box highlighted. And this yellow box inside corresponds to the unit square that I was showing in the last animation. And again, it's just a placeholder as something to watch to see how much the area of any kind of blob in that region gets stretched. So in this particular case, when you play out the animation, areas don't really change that much. They get stretched out a little bit, but it's not that dramatic. So if we know the matrix that describes the transformation that this looks like zoomed in, the determinant of that matrix will tell us the factor by which areas tend to get stretched out. And in particular, you can think of this little yellow box and the factor by which it gets stretched.
|
The Jacobian Determinant.mp3
|
So in this particular case, when you play out the animation, areas don't really change that much. They get stretched out a little bit, but it's not that dramatic. So if we know the matrix that describes the transformation that this looks like zoomed in, the determinant of that matrix will tell us the factor by which areas tend to get stretched out. And in particular, you can think of this little yellow box and the factor by which it gets stretched. And as a reminder, the matrix describing that zoomed-in transformation is the Jacobian. It is this thing that kind of holds all of the partial differential information. You take the partial derivative of f with respect to x, sorry, partial of f1 of that first component, and then the partial derivative of the second component with respect to x.
|
The Jacobian Determinant.mp3
|
And in particular, you can think of this little yellow box and the factor by which it gets stretched. And as a reminder, the matrix describing that zoomed-in transformation is the Jacobian. It is this thing that kind of holds all of the partial differential information. You take the partial derivative of f with respect to x, sorry, partial of f1 of that first component, and then the partial derivative of the second component with respect to x. And then on the other column, we have the partial derivative of that first component with respect to y, and the partial derivative of that second component with respect to y. And if you, let's see, we'll close this off, close off this matrix. And if you evaluate each one of these partial derivatives at a particular point, at whatever point we happen to zoom in on, in this case it was negative two, one, once you plug that into all of these, you get some matrix that's just full of numbers.
|
The Jacobian Determinant.mp3
|
You take the partial derivative of f with respect to x, sorry, partial of f1 of that first component, and then the partial derivative of the second component with respect to x. And then on the other column, we have the partial derivative of that first component with respect to y, and the partial derivative of that second component with respect to y. And if you, let's see, we'll close this off, close off this matrix. And if you evaluate each one of these partial derivatives at a particular point, at whatever point we happen to zoom in on, in this case it was negative two, one, once you plug that into all of these, you get some matrix that's just full of numbers. And what turns out to be a very useful thing later on in multivariable calc concepts is to take the determinant of that matrix, to kind of analyze how much space is getting stretched or squished in that region. So in the last video, we worked this out for the specific example here, where that top left function turned out just to be the constant function one, right? Because we were taking the partial derivative of this guy with respect to x, and that was one.
|
The Jacobian Determinant.mp3
|
And if you evaluate each one of these partial derivatives at a particular point, at whatever point we happen to zoom in on, in this case it was negative two, one, once you plug that into all of these, you get some matrix that's just full of numbers. And what turns out to be a very useful thing later on in multivariable calc concepts is to take the determinant of that matrix, to kind of analyze how much space is getting stretched or squished in that region. So in the last video, we worked this out for the specific example here, where that top left function turned out just to be the constant function one, right? Because we were taking the partial derivative of this guy with respect to x, and that was one. And likewise, in the bottom right, that was also a constant function of one. And then the others were cosine functions. This one was cosine x, because we were taking the partial derivative of this second component here with respect to x.
|
The Jacobian Determinant.mp3
|
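The matrix of partials just described can be generated symbolically. A sketch for the transcript's function f(x, y) = (x + sin y, y + sin x):

```python
import sympy as sp

x, y = sp.symbols('x y')
f1 = x + sp.sin(y)  # first component
f2 = y + sp.sin(x)  # second component

# Jacobian: row i holds the partials of component i with respect to x and y
J = sp.Matrix([f1, f2]).jacobian([x, y])
print(J)  # Matrix([[1, cos(y)], [cos(x), 1]])
```

The entries match the transcript: constant ones on the diagonal, cos(y) in the top right, cos(x) in the bottom left.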
Because we were taking the partial derivative of this guy with respect to x, and that was one. And likewise, in the bottom right, that was also a constant function of one. And then the others were cosine functions. This one was cosine x, because we were taking the partial derivative of this second component here with respect to x. And then the top right of our matrix was cosine of y. And these are in general functions of x and y, because you're gonna plug in whatever the input point you're zooming in on. And when we're thinking about the determinant here, let's just go ahead and take the determinant in this form, in the form as a function.
|
The Jacobian Determinant.mp3
|
This one was cosine x, because we were taking the partial derivative of this second component here with respect to x. And then the top right of our matrix was cosine of y. And these are in general functions of x and y, because you're gonna plug in whatever the input point you're zooming in on. And when we're thinking about the determinant here, let's just go ahead and take the determinant in this form, in the form as a function. So I'm gonna ask about the determinant of this matrix, or maybe you think of it as a matrix-valued function. And in this case, we do the same thing. I mean, procedurally, you know how to take a determinant.
|
The Jacobian Determinant.mp3
|
And when we're thinking about the determinant here, let's just go ahead and take the determinant in this form, in the form as a function. So I'm gonna ask about the determinant of this matrix, or maybe you think of it as a matrix-valued function. And in this case, we do the same thing. I mean, procedurally, you know how to take a determinant. We take these diagonals, so that's just gonna be one times one, and then we subtract off the product of the other diagonal. Subtract off cosine of x, multiplied by cosine of y. And as an example, let's plug in this point here that we're zooming in on, negative two, one.
|
The Jacobian Determinant.mp3
|
I mean, procedurally, you know how to take a determinant. We take these diagonals, so that's just gonna be one times one, and then we subtract off the product of the other diagonal. Subtract off cosine of x, multiplied by cosine of y. And as an example, let's plug in this point here that we're zooming in on, negative two, one. So I'm gonna plug in x is equal to negative two, and y is equal to one. And when you plug in cosine of negative two, that's gonna come out to be approximately negative 0.42. And when you plug in cosine of y, cosine of one in this case, that's gonna come out to be about 0.54.
|
The Jacobian Determinant.mp3
|
And as an example, let's plug in this point here that we're zooming in on, negative two, one. So I'm gonna plug in x is equal to negative two, and y is equal to one. And when you plug in cosine of negative two, that's gonna come out to be approximately negative 0.42. And when you plug in cosine of y, cosine of one in this case, that's gonna come out to be about 0.54. And when we multiply those, the product is gonna be about negative 0.227, and then we take one minus that product. And that's all stuff that you can plug into your calculator if you want.
|
The Jacobian Determinant.mp3
|
And when you plug in cosine of y, cosine of one in this case, that's gonna come out to be about 0.54. And when we multiply those, the product is gonna be about negative 0.227, and then we take one minus that product. And that's all stuff that you can plug into your calculator if you want. And what that means is that the total determinant evaluated at that point, the Jacobian determinant, at the point negative two, one, is about 1.227. So that's telling you that areas tend to get stretched out by this factor around that point. And that kind of lines up with what we see.
|
The Jacobian Determinant.mp3
|
And that's all stuff that you can plug into your calculator if you want. And what that means is that the total determinant evaluated at that point, the Jacobian determinant, at the point negative two, one, is about 1.227. So that's telling you that areas tend to get stretched out by this factor around that point. And that kind of lines up with what we see. We see that areas get stretched out maybe a little bit, but not that much, right? It's only by a factor of about 1.2. And now let's contrast this if instead we zoom in at the point where x is equal to zero and y is equal to one.
|
The Jacobian Determinant.mp3
|
And that kind of lines up with what we see. We see that areas get stretched out maybe a little bit, but not that much, right? It's only by a factor of about 1.2. And now let's contrast this if instead we zoom in at the point where x is equal to zero and y is equal to one. So I'm gonna go over here, and all I'm gonna change, all I'm gonna change is that x is equal to zero and y will still equal one. And what that means is that cosine of x, instead of being negative 0.42, instead of what's cosine of zero, that's actually precisely equal to one. We don't have to approximate on this one.
|
The Jacobian Determinant.mp3
|
And now let's contrast this if instead we zoom in at the point where x is equal to zero and y is equal to one. So I'm gonna go over here, and all I'm gonna change, all I'm gonna change is that x is equal to zero and y will still equal one. And what that means is that cosine of x, instead of being negative 0.42, instead of what's cosine of zero, that's actually precisely equal to one. We don't have to approximate on this one. Which means when we multiply them, one times 0.54, well that, that's gonna now be about 0.54. So this one, once we actually perform the subtraction, instead when you take one minus 0.54, that's gonna give us 0.46. So even before watching, because this determinant of the Jacobian around the point zero one is less than one, this is telling us we should expect areas to get squished down.
|
The Jacobian Determinant.mp3
|
We don't have to approximate on this one. Which means when we multiply them, one times 0.54, well that, that's gonna now be about 0.54. So this one, once we actually perform the subtraction, instead when you take one minus 0.54, that's gonna give us 0.46. So even before watching, because this determinant of the Jacobian around the point zero one is less than one, this is telling us we should expect areas to get squished down. Precisely, they should be squished by a factor of 0.46. And let's see if this looks right, right? We're looking at the zoomed in version around that point, and areas should tend to contract around that.
|
The Jacobian Determinant.mp3
|
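Both evaluations can be reproduced directly from the determinant formula 1 − cos(x)·cos(y) worked out above (the small difference from the transcript's 1.227 comes from its rounding of cos(−2) to −0.42):

```python
import math

# det J = 1*1 - cos(y)*cos(x) for f(x, y) = (x + sin y, y + sin x)
def jacobian_det(x, y):
    return 1 - math.cos(x) * math.cos(y)

# Around (-2, 1): areas stretch by a factor a bit above 1
print(round(jacobian_det(-2.0, 1.0), 3))  # 1.225

# Around (0, 1): factor below 1, so areas get squished
print(round(jacobian_det(0.0, 1.0), 3))  # 0.46
```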
So even before watching, because this determinant of the Jacobian around the point zero one is less than one, this is telling us we should expect areas to get squished down. Precisely, they should be squished by a factor of 0.46. And let's see if this looks right, right? We're looking at the zoomed in version around that point, and areas should tend to contract around that. And indeed they do. You see it got squished down, it looks like by a fair bit. And from our calculation, we can conclude that they got scaled down precisely by a factor of 0.46.
|
The Jacobian Determinant.mp3
|
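The "scaled down precisely by a factor of 0.46" claim in the rows above can be checked numerically. A minimal sketch, assuming the map from the video is f(x, y) = (x + sin(y), y + sin(x)), which is an inference from the transcript's numbers rather than something it states: push a tiny square at the point (0, 1) through f and compare the image's area to the original area.

```python
import math

def f(x, y):
    # Assumed transformation; chosen so that the Jacobian determinant
    # 1 - cos(x)*cos(y) reproduces the transcript's numbers at (0, 1).
    return (x + math.sin(y), y + math.sin(x))

def area_scale(x0, y0, h=1e-4):
    # Image of a tiny square with corners (x0, y0), (x0+h, y0), (x0, y0+h):
    # its area is |u x v| for the two transformed edge vectors u and v.
    ax, ay = f(x0, y0)
    bx, by = f(x0 + h, y0)
    cx, cy = f(x0, y0 + h)
    ux, uy = bx - ax, by - ay
    vx, vy = cx - ax, cy - ay
    return abs(ux * vy - uy * vx) / (h * h)

print(round(area_scale(0.0, 1.0), 2))  # -> 0.46, areas squished as predicted
```

For a small enough square, the measured ratio agrees with the Jacobian determinant at the point, which is exactly the "local area scaling" reading of the determinant.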
We're looking at the zoomed in version around that point, and areas should tend to contract around that. And indeed they do. You see it got squished down, it looks like by a fair bit. And from our calculation, we can conclude that they got scaled down precisely by a factor of 0.46. That's what the determinant means. So like I said, this is actually a very nice notion throughout multivariable calculus, is that you look at a tiny little local neighborhood around a point, and if you just wanna get a general feel for does this function as a transformation tend to stretch out that region or to squish it together, you know, how much do areas change in that little neighborhood, that's exactly what this Jacobian determinant is, you know, built to solve. So with that, I'll see you guys next video.
|
The Jacobian Determinant.mp3
|
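The arithmetic the transcript narrates (cosine of zero is exactly one, one times 0.54 is about 0.54, one minus 0.54 is 0.46) is the Jacobian determinant 1 - cos(x)*cos(y) evaluated at (0, 1). A hedged sketch; the underlying map f(x, y) = (x + sin(y), y + sin(x)) is an assumption inferred from those numbers, not stated in the transcript:

```python
import math

def jacobian_det(x, y):
    # Assuming f(x, y) = (x + sin(y), y + sin(x)), the Jacobian matrix is
    # [[1, cos(y)], [cos(x), 1]], so its determinant is 1 - cos(x)*cos(y).
    return 1.0 - math.cos(x) * math.cos(y)

# At (0, 1): cos(0) is exactly 1 and cos(1) is about 0.54, so the
# determinant is about 0.46. Less than 1 means areas shrink near (0, 1).
print(round(jacobian_det(0.0, 1.0), 2))  # -> 0.46
```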
Although this should hopefully be second nature to you at this point. If it's not, you might want to review the definite integration videos. But if I have some function, this is the xy-plane, that's the x-axis, that's the y-axis. And I have some function, let's call that, you know, this is y is equal to some function of x. You give me an x and I'll give you a y. If I wanted to figure out the area under this curve between, let's say, x is equal to a and x is equal to b. So this is the area I want to figure out.
|
Double integral 1 Double and triple integrals Multivariable Calculus Khan Academy.mp3
|
And I have some function, let's call that, you know, this is y is equal to some function of x. You give me an x and I'll give you a y. If I wanted to figure out the area under this curve between, let's say, x is equal to a and x is equal to b. So this is the area I want to figure out. This area right here. What I do is I split it up into a bunch of columns or a bunch of rectangles. Let me draw one of those rectangles.
|
Double integral 1 Double and triple integrals Multivariable Calculus Khan Academy.mp3
|
So this is the area I want to figure out. This area right here. What I do is I split it up into a bunch of columns or a bunch of rectangles. Let me draw one of those rectangles. And there's different ways to do this, but this is just a review. That's maybe one of the rectangles. The area of the rectangle is just base times height, right?
|
Double integral 1 Double and triple integrals Multivariable Calculus Khan Academy.mp3
|
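The construction described in these rows (split [a, b] into columns, take base times height for each rectangle, add them up) is a left Riemann sum. A minimal sketch; the integrand x squared and the interval [0, 1] are illustrative choices, not from the transcript:

```python
def riemann_area(f, a, b, n=100_000):
    # Split [a, b] into n columns of width dx; each rectangle contributes
    # height f(x) times base dx, and the sum approximates the area.
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# Area under y = x^2 from 0 to 1; the exact answer is 1/3.
print(round(riemann_area(lambda x: x * x, 0.0, 1.0), 3))  # -> 0.333
```

As n grows, the sum converges to the definite integral, which is the limit the transcript's review is building toward before extending the idea to double integrals.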