MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
2111_Stable_Matching_Video.txt
We've seen graphs involving boys and girls and connections between them in the context of our sexual demographics calculation and study. A similar problem comes up in terms of what I call stable matching, which is again the issue of matching up boys and girls in a special way according to some constraints. It turns out to have a lot of applications, which we'll discuss toward the end. Let's just look at what the problem is. The setup is that there's some number of boys, in this case 5, labeled 1 through 5, and an equal number of girls labeled A through E. And each of the boys has a ranking of the girls. Different rankings, because different boys have different preferences. And likewise, the girls have rankings of the boys. Different girls have different preferences. So here, girl A likes boy 3 best and boy 5 second best, and boy 1 likes girl C best and girl D least. So the problem, basically, is that we want to get all the boys married to all the girls. We want to form 5 monogamous heterosexual marriages, a boy and a girl, and we'd like, in some vague way, to acknowledge these preferences and satisfy as many as we can. I'll be more specific about that in a minute. But let's just play with that idea of trying to accommodate people's preferences. So one way to do it is to decide, well, we'll favor the boys this time. Let's try a greedy strategy for the boys. Let's just look at the boy preferences. A greedy strategy means I'm going to try to give each boy the best possible choice that he can make. So I'm going to start off by deciding to let boy 1 have his first choice, girl C. I'm going to marry them off. And once I've married them off, I'll just stop considering 1 and C. And now I have a reduced problem. I have four remaining boys and four remaining girls. And proceeding in this way-- a greedy way for the boys-- I'm going to now give boy 2 his best choice that remains, namely girl A. And I'll marry them off.
And again, now 2 and A can be eliminated from consideration. I continue in this way, and I wind up with this set of five marriages, ending with boy 5 married to girl E. OK. Now if we look at this set of marriages, there's a problem, which motivates the whole stable marriage problem that we're going to be examining. Namely, we've married off boy 1 to his first choice, girl C. He should be happy, but she may not be. And we've also married off boy 4 to girl B. Now, a difficulty here is that if you look at the preferences, girl C actually is more desirable to boy 4 than girl B. That is, boy 4 likes somebody else's wife better than his own. And what makes it really bad is that girl C, the other person's wife, likes boy 4 better than her husband. Each of them would be better off if they ran off together. Whether they do or not, they certainly are under tremendous pressure. It makes this set of marriages unstable. So they're called a rogue couple. When you have, in a set of marriages, a boy and a girl who prefer each other over their current spouses, they are said to be a rogue couple and a source of instability. So the stable marriage problem is: let's see if we can get everybody married off and have no rogue couples. It'll be a stable set of marriages. Now, people may not be happy, but it doesn't matter, because they'll never find anybody else who is unhappy in a matching way and would be willing to run off with them and make them happier. So it's stable. And it turns out that there always is a way to find a stable set of marriages-- a couple of ways, in fact. But why don't we just play with the idea. Here is a display of those preferences again, and you can stop the video and fiddle with a piece of paper and see if you can come up with a stable set of marriages between the boys and girls. We used to do this in class in real time. We would give five different boys a chart of preferences of girls, and we'd give the five different girls a chart of preferences of boys.
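The rogue-couple condition just described is mechanical enough to check with a short program. Here is a sketch in Python; the preference lists and the matchings below are made-up stand-ins for illustration, not the actual chart from the lecture.

```python
# Check a matching for rogue couples: a boy and a girl who each prefer
# the other over their assigned partner. Preference lists map each
# person to a best-to-worst ordering of the other side.

def rogue_couples(matching, boy_prefs, girl_prefs):
    """Return all (boy, girl) pairs that would rather run off together."""
    wife = dict(matching)                      # boy -> girl
    husband = {g: b for b, g in matching}      # girl -> boy
    rogues = []
    for b, prefs in boy_prefs.items():
        for g in prefs:
            if g == wife[b]:
                break                          # b prefers his wife to g
            # b prefers g over his wife; does g prefer b over her husband?
            if girl_prefs[g].index(b) < girl_prefs[g].index(husband[g]):
                rogues.append((b, g))
    return rogues

boy_prefs = {1: ['C', 'A', 'B'], 2: ['A', 'B', 'C'], 3: ['A', 'C', 'B']}
girl_prefs = {'A': [2, 3, 1], 'B': [1, 2, 3], 'C': [3, 1, 2]}

print(rogue_couples([(1, 'C'), (2, 'A'), (3, 'B')], boy_prefs, girl_prefs))
```

On this toy instance the check reports boy 3 and girl C as a rogue couple: he prefers her to his assigned girl B, and she prefers him to her assigned boy 1.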
They were not supposed to tell each other what their preferences were, but the girls were supposed to be interviewing the boys and the boys interviewing the girls simultaneously and in parallel, trying to agree to get married, and then we'd see whether the set of marriages that they wound up with was stable. Most of the time they actually did wind up with a stable set of marriages, but not always. Just an amusing exercise. And it does illustrate something about the fact that the procedures that we're going to be going through to find stable marriages work very nicely if you choose to do them in parallel. Anyway, there are, it turns out, two sets of stable marriages that we can find in this particular set of preferences. The simplest one to understand is the one where all the girls get their first choice. It so happens, if you look at that chart, that all of the girls have different first-choice boys. If we simply give them their first choice, no girl is going to be tempted to be part of a rogue couple, because she's got her first choice. It's absolutely stable. But of course, that's a very special circumstance. It would be unusual for all the girls' first choices to be different, or likewise, it would be unusual if all the boys' first choices were different. But if they were, then you instantly get a stable set of marriages. There's another stable set that's not quite so obvious. And you can check that all of these pairs have no instability. There are no rogue couples in here when I marry 5 to A and 1 to E. This is a so-called "boy optimal" set of marriages. It turns out that in this set of marriages, every boy gets the best possible spouse that he could possibly get in any set of stable marriages. There's no set of stable marriages in which boy 5 gets a more desirable girl than A. There's no set of stable marriages in which boy 1 gets a girl that's more desirable to him than girl E. The sad news is that it's simultaneously pessimal for the girls.
That is, each girl is getting her worst possible spouse among all sets of stable marriages. We'll examine that further in a minute. But let me just point out that this is more than a puzzle. I mean, it's fun, and it's a nice puzzle, but it's more than a puzzle. Because the original case where it was studied, or published first, was in a paper by Gale and Shapley in 1962. You may remember the name David Gale from the subset game that we played early in the term when we were practicing with set relations. And what they were dealing with was the problem of college admissions, where students have rankings of the colleges that they've applied to, and colleges have rankings of the students that have applied to them. And we're trying to get a matching between college offers and student preferences. And in a circumstance where a college made an offer and a student sort of accepted, but then later the student got another offer from a place they preferred more, and they were changing their mind, and withdrawing acceptances and so on, it was making everybody crazy, the administrators and the students themselves. And the desire was, let's get some stable set of offers on the table. And Gale and Shapley proposed a protocol to get stable marriages that would apply to college admissions. It turns out, interestingly enough, that although Gale and Shapley are credited with the stable marriage solution that we're going to discuss, they are credited because they were the first to publish it. But, in fact, it had been discovered and put into practice at least 20 years earlier by a national board whose job was to match interns with hospitals-- that is, graduating medical students who were about to start their further clinical training as interns, now called residents in modern language, had to be matched up with hospitals. And the residents had preferred hospitals that they'd like to go to, and the hospitals had rankings of the residents that met their criteria.
And again, the issue was, how do you assign residents to hospitals in a stable way? Before they had discovered this stability algorithm, it was a mess. Again, there's a wonderful story in the book by Gusfield and Irving, which is an entire book about the stable marriage problem, published by MIT Press, I think in '89. And as a matter of fact, I was the editor of the series in which it appears. This stable marriage problem turns out to have a lot of structure. And they describe a wonderful anecdote in their preface about the problems that were happening between the hospitals and the residents and the measures that were taken to try to achieve stability before they discovered this algorithm. Another genuine computer science application is one that was described by Tom Leighton, who was a co-author of the text and is now the CEO of Akamai, an internet infrastructure plumbing company that has some large number of servers around the world-- I think 65,000 servers in 2010. And it basically is providing cached web pages so that they can respond more quickly to local calls. The problem they have is that with those 65,000 or so servers, they get, I think, 200 billion web requests a day. So the web requests are kind of acting like the boys, and the servers are kind of acting like the girls, or the hospitals and the residents. And the web requests have preferences based on proximity and speed of server, and the servers have preferences based on where they're located and the magnitude of the web requests coming to them. And the question is, how do you assign web requests to servers so that things get done expeditiously? And it turns out that the stable marriage method gave a satisfactory way to accomplish that kind of matching. And in particular, because there are such large numbers involved, the stable marriage ritual, which we'll describe shortly, is very amenable to being run in parallel.
Another application that turned out to come up was in matching dance partners, when I was teaching this course ten years ago with a co-instructor who was a member of the Indian dance team. She said, we could use this, because it turns out that, again, there are boy and girl partners in the dance, and it was constantly the case that one boy would like another boy's partner better, and vice versa. And they would start pairing up and leaving the other people hanging, and there were bad feelings, and it was a source of disruption in the society. There's a picture of that Indian dance group. My co-instructor's not actually there, but it gives you some sense of the reality of the problem here at MIT. And she told me that it was actually being used by that group.
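As a preview of the procedure the lecture keeps alluding to, here is a sketch of the boy-proposing Gale-Shapley algorithm in Python. The preference lists are invented for illustration; the algorithm itself produces the boy-optimal stable matching described above.

```python
# Boy-proposing Gale-Shapley: each free boy proposes to his next-best
# girl; a girl tentatively holds the best proposal she has seen so far.
# The result is the boy-optimal (girl-pessimal) stable matching.

def gale_shapley(boy_prefs, girl_prefs):
    # Precompute each girl's ranking of each boy for O(1) comparisons.
    girl_rank = {g: {b: r for r, b in enumerate(prefs)}
                 for g, prefs in girl_prefs.items()}
    next_proposal = {b: 0 for b in boy_prefs}   # index into each boy's list
    husband = {}                                # girl -> currently held boy
    free = list(boy_prefs)
    while free:
        b = free.pop()
        g = boy_prefs[b][next_proposal[b]]
        next_proposal[b] += 1
        if g not in husband:
            husband[g] = b                      # girl accepts tentatively
        elif girl_rank[g][b] < girl_rank[g][husband[g]]:
            free.append(husband[g])             # girl trades up; old boy freed
            husband[g] = b
        else:
            free.append(b)                      # girl rejects; b tries again
    return sorted((b, g) for g, b in husband.items())

boy_prefs = {1: ['C', 'A', 'B'], 2: ['A', 'B', 'C'], 3: ['A', 'C', 'B']}
girl_prefs = {'A': [2, 3, 1], 'B': [1, 2, 3], 'C': [3, 1, 2]}
print(gale_shapley(boy_prefs, girl_prefs))
```

Since every rejected boy simply moves one step down his list, the procedure always terminates, and it parallelizes naturally: all free boys can propose in the same round, which is exactly the in-class exercise described above.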
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
341_Generalized_Counting_Rules_Video.txt
PROFESSOR: There are two generalizations of the bijection rule and the product rule that come up all the time and play an essential role in the repertoire of any counter. So let's look at those. The first of these is a generalization of the product rule. And let's see an instance where it comes up. Suppose I wanted to count the number of lineups of five students in the class. So if I let S be the set of students, and let's say for the afternoon session the size of S is 91, then the number of lineups of five students-- if I used the ordinary product rule, I would be talking about S to the fifth, that is, sequences of length five of elements of S. And so the product rule would say, take 91 to the fifth as the number of lineups of five students. And that would be correct if the same student could appear twice in line, but that, of course, isn't possible with real students. So the lineups have no repeats. And what we're really counting is the number of those sequences of length five of students with no repeats. And the generalized product rule tells you quite straightforwardly how to count those. Namely, there are 91 ways to choose the first student among the 91. And whichever first student you've chosen, that leaves 90 other students you could choose to be second. And once you've chosen the first two, that leaves 89 students you could choose for the third, and 88 for the fourth, and 87 for the fifth. And the formula then is 91 times 90 times 89 times 88 times 87 for the number of sequences of distinct students of length five. Now, one nice way to express the product 91 down to 87 in terms of factorials is that it's 91 factorial, which is the product from 1 to 91, divided by 86 factorial, the product from 1 to 86, which cancels out the first 86 terms in 91 factorial, leaving me with exactly the product of 87 through 91. So the second rule is a sort of obvious generalization of the bijection rule, but I'm getting ahead of myself. Let's state the generalized product rule in general.
So if we let Q be a set of length-k sequences with the following property: there are n1 possible first elements among these length-k sequences. And for every one of the possible first elements, if you look at the number of possible second coordinates for a given first coordinate, it's always n2. And likewise, if you look at the number of possible third coordinates given the first two, it's n3, and it's uniform no matter what the first two are. Then if you have this kind of a setup, which is exactly what happens when you're picking one student after another and they can't repeat, you discover that the number of length-k sequences-- with n1 possible first choices, n2 possible second choices, down through nk possible k-th choices-- is the product n1 times n2 through nk. So that's the statement of the generalized product rule in the magenta box. Now, we come to the generalized bijection rule, which is called the division rule. And a simple, memorable way to illustrate it is that if you wanted to count the number of students in class 6.042, you could count the number of students' fingers and divide by 10. Now, it's probably harder to count fingers than students, so this is not meant as a practical method. But it illustrates a basic and straightforward idea. Of course, it's implicitly assuming that we don't have any instances of amputations or polydactylism, and that, in fact, every student has exactly 10 fingers. OK, so in general, the division rule can be stated this way: if I have a total function from a set A to a set B, domain A, co-domain B, and this mapping is k-to-1, then the cardinality of A is simply k times the cardinality of B. So k-to-1 means that exactly k A elements hit each B element. Another way to say it is that there are exactly k arrows into every element of B. So then the number of arrows is simply k times the size of B. And if you have a total function on A, the number of arrows is equal to the size of A, and that's where we get the formula. OK.
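The lineup example above can be spot-checked numerically. This little snippet computes the count both ways-- by the generalized product rule and by the factorial formula-- and confirms they agree:

```python
from math import factorial, prod

# Number of lineups of 5 distinct students out of 91, computed two ways:
# the generalized product rule 91*90*89*88*87, and the factorial form
# 91!/86!.
by_product_rule = prod(range(87, 92))            # 91 * 90 * 89 * 88 * 87
by_factorials = factorial(91) // factorial(86)
print(by_product_rule, by_product_rule == by_factorials)
```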
And that's the generalized bijection rule. Let's apply it in a crucial example that is absolutely basic and that we'll be using repeatedly. Suppose that I want to know how many possible subsets of size four there are from the numbers 1 through 13. So I have 13 possible numbers that I can choose. I want to pick out any four of them, and I want to know how many ways there are to do that. And we'll do that by finding a mapping from things we know how to count to these particular subsets. So what we know how to count is this: if I let A be the set of all permutations of 1 through 13, then I know that the size of A is 13 factorial, because there are 13 choices for the first element of the permutation, 12 for the second, down to one for the 13th. And let's let B be this object that I want to count, namely, the set of size-four subsets of 1 through 13. And I want to find a mapping from A, which I know how to count, to B, which I don't yet know how to count, but in a way where I can figure out that it's k-to-1 for a k that I can also count. How do I do that? Well, let's take an arbitrary permutation in A, that is to say, a sequence of the elements 1 through 13 in some order-- call them a1, a2, through a13. So these numbers a1 through a13 are the numbers 1 through 13 in some unknown order. And I'm going to map a permutation like this to its first four elements. Just take the first four elements of the permutation and map them to the set consisting of those four elements. Now, since this is a permutation, these elements are all different, so I really do get a set of four different things here: a1, a2, a3, and a4 are all different. This gives me a very well-defined total function from a permutation of the 13 numbers to the set of its first four elements. And now what we want to know is, what kind of a mapping is this? And I'm going to argue that it's k-to-1 for a k that's not very hard to count.
So when I look at what other things map to the set a1, a2, a3, a4-- we mapped a permutation to its first four elements. And if we've got a1 through a4 as the set, what other things map to that set a1, a2, a3, a4? Well, the answer is any permutation with the same first four elements, but possibly in a different order, because we're just going to take the first four in sequence and map them to the set of those first four. The order in which the first four appear doesn't matter. OK? And likewise, the order of the remaining nine elements, in positions 5 through 13, also doesn't matter. Whatever they are, if I have a given set of four elements to start, no matter what the remaining 9 are, they're going to map to the same subset of four elements. So there are 4 factorial possible ways that the first four elements can be permuted. And there are 9 factorial ways that the last nine elements can be permuted. And every one of these goes to the same set of four elements, a1 through a4. And those are the only ones that go there. And so what we've figured out is that the mapping of these kinds of sequences-- with the given four elements first in some order and the remaining nine elements in some other order-- is 4 factorial times 9 factorial-to-1. There are 4 factorial times 9 factorial permutations that map to any given set of four elements. And that means that by applying the division rule, I've discovered that the size of A, which I know is 13 factorial, is equal to that k of the k-to-1 mapping, 4 factorial times 9 factorial, times the size of B. B is the set of subsets of size four that I'm trying to count. And so what I get is that the size of B is simply 13 factorial divided by that k: 13 factorial over 4 factorial times 9 factorial. And this number comes up so often that it has a special notation, called binomial coefficient notation, which we read as 13 choose 4.
In general, if I have an n-element set and I'm going to choose a subset of m of them-- generalizing this argument, because the 4 and the 9 and the 13 were completely arbitrary and the argument works in general-- the number of ways to choose a set of m elements among n is n choose m. And the definition of n choose m is n factorial over the m factorial ways to permute the first m elements times the n minus m factorial ways to permute the remaining n minus m elements. And again, that notation, the binomial coefficient written as n over m in parentheses, is read "n choose m." This is an absolutely fundamental formula that you need to remember, because we will be using it constantly and repeatedly.
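Both the division-rule computation and the k-to-1 claim behind it are easy to verify in code. The sketch below computes 13 choose 4 as 13!/(4! times 9!) and, on a smaller case, checks directly that mapping permutations to the set of their first elements really is (m! times (n-m)!)-to-1:

```python
from collections import Counter
from itertools import permutations
from math import comb, factorial

# The division-rule count of size-4 subsets of {1,...,13}: 13!/(4!*9!).
n, m = 13, 4
by_division = factorial(n) // (factorial(m) * factorial(n - m))
print(by_division, comb(n, m))   # both are 715

# On a small case, check the k-to-1 claim directly: permutations of
# {1,...,5} mapped to the set of their first 2 elements hit each
# 2-element subset exactly 2! * 3! = 12 times.
counts = Counter(frozenset(p[:2]) for p in permutations(range(1, 6)))
assert all(c == factorial(2) * factorial(3) for c in counts.values())
```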
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
243_Reducing_Factoring_To_SAT_Video.txt
PROFESSOR: We've mentioned the P equals NP question a number of times now as the most important question in theoretical computer science, and we've said that one way to formulate it is exactly to ask whether there's an efficient, that is, polynomial-time, procedure to test whether or not a formula in propositional logic is satisfiable. Now, why is that such an important problem? We're not just logicians wanting to know whether or not some formula is satisfiable. How did it take on this enormous importance and apply to so many fields? Illustrating how you could use a satisfiability tester to factor efficiently is a good hint about why it is that all sorts of things reduce to SAT, and why it, in fact, is such a centrally important problem. So let's suppose that we have a satisfiability tester and use it to find how to factor a number n. Now, the observation begins with noticing that it's easy enough to design a digital circuit that multiplies, that does arithmetic multiplication. In other words, it's got some number of bits reserved for an input x-- say k bits-- and another k bits for an input y, and it's got 2k output lines that produce the digits of x times y. You might need one extra digit, but never mind that. So this multiplier circuit takes a k-bit x in and a k-bit y in, and it spits out the product, which is a 2k-bit number, and this is not a terribly big circuit. The naive way to design it would use a number of gates and a number of wires that was about quadratic in the number k. It's easy enough to design one of these things where the size is literally bounded by 5 times k squared, maybe plus a constant. And so this is definitely a small polynomial. Given the number of bits that I'm working with, it's easy enough to build this multiplier circuit. Now, suppose that I have a way to test satisfiability of circuits. How am I going to use this multiplier circuit to factor?
Well, the first thing I'm going to do is suppose the number that I'm factoring, n, is the product of two primes, p and q. Those are the kinds of n's that we've been using in RSA. And let me also observe that it's very easy to design an n-tester-- that is, a little digital circuit that has 2k input lines and produces a 1 on its one output line precisely when the input is the binary representation of n. So let's attach this equality tester, which does nothing but ask whether it's being fed the digits of n as input: it produces an output of 1 for n, and 0 if the input pattern is the binary representation of anything other than n. That's another trivial circuit to build. So we put those two together, and now watch what happens. I'm going to take the circuit and set the first of the input bits to 0, and then I'm going to ask the SAT solver the following question-- is there a way to set the remaining input bits, the ones other than that first 0? So I've set the first one to 0. What about these other bits? The SAT solver can tell me whether or not it's possible to get a 1 out of this circuit with the 0 there fixed. So let's ask the SAT solver what happens, and the SAT solver says, hey, yes, there is a way to fill in the remaining digits and get an output 1. Well, what does that tell me? It tells me that there is a factor that starts with 0. So let's fix the 0, based on the fact that it's possible to fill in the remaining digits with the bits of factors x and y whose product equals n. Let's try to set the second input bit to 0 and see what happens. Well, we'll ask the SAT tester: is it possible now to fill in the remaining digits to get two numbers x and y that multiply out to n and therefore produce output 1? And the SAT tester says, no, this is an unsatisfiable circuit. You can't get a 1 out of it anymore. That tells me that I have to set the second bit to 1 in order to have factors x and y that will multiply together to be n.
All right, fine. Go to the third bit, ask whether or not 0 works. The SAT tester says, let's say, yes. So then I can fix 0. I now know the first three bits of x. And of course, I go on, and in 2k SAT tests, I know exactly what p and q are, and I have, in fact, found the factors p and q. So that wraps that one up. That's how you use a SAT tester. You just do the SAT test 2k times and you've factored this 2k-bit number. And of course, if the SAT test is polynomial in k, then doing it 2k times is also polynomial in k, with one degree higher. Now, the satisfiability problem, as we formulated it, was a problem about formulas: you wrote out a propositional formula and asked whether or not it was satisfiable, and I'm instead asking about satisfiability of binary circuits. But in fact, as we did in some early exercises, you can describe a binary circuit by assigning a fresh variable to every wire in the circuit and then writing a little formula around each gate which explains how the input wires to that gate are related to the output wire of that gate. That little formula describes the wiring of that gate, and you take the "and" of all those formulas, and you have a formula that describes the structure of the circuitry. In fact, the formula is satisfiable if and only if the circuit can produce an output 1. So really, by assuming that I can test satisfiability of formulas, I can therefore test satisfiability of circuits, and therefore I can factor. So that's the simple trick: find a propositional formula that's equisatisfiable with the circuit-- the circuit produces output 1 if and only if this formula, of about the same size as the circuit, is satisfiable. And that's the last piece that I needed in order to completely reduce factoring to the satisfiability problem. And you can see that this is actually a general method that will enable you to reduce inverting most any kind of one-way function to a few SAT tests.
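The bit-fixing loop just described can be sketched concretely. In this toy Python version, the SAT solver is replaced by a brute-force stand-in oracle that answers the same yes/no question ("can the remaining input bits be filled in so the circuit outputs 1?"); with a real SAT solver answering about a multiplier-plus-equality circuit, the driver loop would look the same.

```python
# Toy version of the bit-fixing reduction from factoring to SAT.
# The "SAT solver" is a brute-force stand-in oracle, not a real solver.

def oracle(n, k, fixed_bits):
    """Is there a k-bit x > 1 whose leading bits are fixed_bits, and a
    y > 1, with x * y == n?  (A SAT solver would answer this about the
    multiplier circuit with those input bits pinned.)"""
    free = k - len(fixed_bits)
    prefix = int(''.join(map(str, fixed_bits)) or '0', 2)
    for tail in range(2 ** free):
        x = (prefix << free) | tail
        if x > 1 and n % x == 0 and n // x > 1:
            return True
    return False

def factor_via_sat(n, k):
    """Recover a nontrivial factor of n, one oracle call per bit."""
    bits = []
    for _ in range(k):
        # Try fixing the next bit to 0; if the oracle says
        # "unsatisfiable," the bit must be 1.
        bits.append(0 if oracle(n, k, bits + [0]) else 1)
    return int(''.join(map(str, bits)), 2)

print(factor_via_sat(15, 4))   # -> 3, the smallest nontrivial factor
```

Note this sketch recovers only the k bits of x; in the lecture's setup the same loop runs over all 2k input bits, revealing both p and q.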
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
233_The_Ring_Z_Video.txt
PROFESSOR: Another way to talk about congruence and remainder arithmetic is to work strictly with remainders, which makes things a little simpler, because you don't have to worry about the fact that the product of two remainders may, for example, be too big to be a remainder-- to knock it back in range, you have to take the remainder again. And that's what this abstract idea of the ring of integers modulo n, the ring Z sub n, captures in a quite elegant way. So it's going to allow us to talk strictly about equality instead of congruence. And let's remind ourselves that the basic idea behind working with remainder arithmetic was that every time we got a number that was too big to be a remainder, we just hit it with the remainder operation again to bring it back in range. And so the operations in Zn work exactly that way. The elements of Zn are the remainders. That is, the numbers from 0, including 0, up to n, but not including n. So there are n of them: 0, 1, up through n minus 1. And the definitions of the operations in Zn are given right here. Addition just means take the sum but then take the remainder immediately, just in case it's too big. And likewise, the product in Zn is simply: multiply them and take the remainder. This isn't really a very dramatic idea, but it turns out to pay off by making some things just a little bit easier to say, because we're talking about equality instead of congruence. So this package, this mathematical structure consisting of the integers in this interval-- remember this notation: square bracket means inclusive and round parenthesis means exclusive, so this includes zero and doesn't include n-- the integers in that interval, under the operations of plus and times modulo n, as defined here, is called the ring of integers Zn. So it's got two operations and a bunch of things that are operated on. Now I guess it's worth highlighting: that's what Zn is, the ring of integers mod n, or modulo n.
Now, arithmetic in Zn is really just arithmetic-- congruence arithmetic, except that it's equality now instead of congruence. So we can say, for example, in Z7 that 3 plus 6 is literally equal to 2 because, well, 3 plus 6 is 9, the remainder on division by 7 is 2, and we go directly to the 2 in Z7, suppressing the mention of taking remainders and not even really having to think about it, which is what's helpful about working with Zn. Likewise, 9 times 8 is literally equal to 6 in Z11. So what's the connection between the set of all the integers and the integers mod n? We can state this abstractly in the following way. Let's just, for convenience, abbreviate the remainder of k on division by n as R of k. So n is fixed. And what's the connection between Z and Zn? Well, it's fairly simple. If you take the remainder of i plus j, that's literally equal to taking the sum of the remainders in Zn. Once you've taken the remainders, you're in the range of numbers that Zn works with. And this sum, then, keeps you on the Zn side. Likewise, if you take the remainder of a product of integers, that's literally equal to the product of the remainders in Zn. This connection between mathematical structures, by the way-- the structure of the integers under plus and times, and Zn under plus and times-- is called a homomorphism. R, in this case, is defining a homomorphism from Z to Zn. That's a basic concept in algebra that you'll learn more about if you take some courses in algebra, but I'm just mentioning it for cultural reasons. We're not going to exploit it any further, or look further into this idea. OK. What's the connection between equivalence mod n, or congruence mod n, and Zn? Well, it's fairly simple. In Zn, we convert congruences into equalities. So i is congruent to j mod n if and only if R of i is equal to R of j in Zn. And this is just a rephrasing of the fact that two numbers are congruent if and only if they have the same remainder.
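These homomorphism equations, along with the lecture's two arithmetic examples, are easy to spot-check numerically:

```python
# Spot-check the remainder map R(k) = k mod n as a homomorphism from Z
# to Zn: R(i + j) = R(i) + R(j) in Zn, and R(i * j) = R(i) * R(j) in Zn.
n = 7
R = lambda k: k % n
for i in range(-20, 21):
    for j in range(-20, 21):
        assert R(i + j) == (R(i) + R(j)) % n
        assert R(i * j) == (R(i) * R(j)) % n

# The lecture's examples: 3 + 6 = 2 in Z7, and 9 * 8 = 6 in Z11.
assert (3 + 6) % 7 == 2
assert (9 * 8) % 11 == 6
print("homomorphism checks pass")
```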
Now once you've got this self-contained system Zn, you can start talking about algebraic rules that it satisfies. And now they hold with equality, and they're pretty familiar. So let's look at some of the rules for addition, for example, that hold true in Zn. First of all, addition is associative: (i plus j) plus k is i plus (j plus k). We have an identity element, namely zero: zero plus any i is i. We have a minus operation, an inverse operation with respect to addition, which is that for every i, there's an element called minus i, its additive inverse, such that if you add i and minus i, you get zero. And finally, commutativity, which is that i plus j is the same as j plus i. You don't really need to memorize these names, but you will probably hear them a lot in various other contexts, and especially in algebra courses, but even in terms of arithmetic. These are some of the basic rules that addition satisfies. And in fact, multiplication satisfies pretty much the same rules. Multiplication is likewise associative. There's an identity for multiplication called 1: 1 times i is i. Multiplication is also commutative. The one obvious omission here is inverses. You can't count on there being inverses in Zn. And finally, there's a rule that connects addition and multiplication, called distributivity. Namely, i times the quantity j plus k is ij plus ik, as you well know from ordinary arithmetic. And this rule works fine for remainders and working in Zn. As I said, the one thing we have to watch out for-- and it shouldn't be a surprise-- is that we know you can't cancel with respect to congruence mod n. And that's reflected in the fact that you can't cancel in Zn. Namely, in Z12, for example, 3 times 2 is equal to 9 times 2. Again, 3 times 2 is 6; 9 times 2 is 18, and you immediately take the remainder to get back to 6. In Z12, these two things are equal. But if you tried to cancel the 2, you'd conclude that 3 was 9. And 3 and 9 are different numbers in the range from 0 to 12, so they're different in Z12. So you can't cancel 2. OK. Now the rules that we already figured out for when you can cancel in congruence translate directly over to when you can cancel in Zn. And here there's a standard abbreviation that's useful: if I write Zn*, what I mean is the elements in Zn that are relatively prime to n-- the elements whose GCD with n is 1. So what we have is the following equivalent formulations of Zn*, which correspond to the facts we've already figured out about congruence. Namely, an integer i in the range from 0 to n is in Zn* if and only if the GCD of i and n is 1, or i is cancelable in Zn, or i has an inverse in Zn. All three of these things are equivalent. They give you the sense that Zn* is a kind of robust subset of Zn that you'd want to be thinking about. And in fact, it's very valuable to be paying attention to. What else do we know about Zn*? Well, the definition of phi of n was the number of integers in the interval from 0 to n that are relatively prime to n. Of course, that's exactly the size of Zn*. So phi of n is simply the size of that collection of elements. Not surprising-- they were defined that way. So now I can restate Euler's Theorem in a slightly more convenient way. Instead of mentioning congruence, we can just talk about equality. Euler's Theorem says that if you raise a number k to the power phi of n, it's literally equal to 1 in Zn, at least for those k's that are relatively prime to n-- that is, those k's that are in Zn*. And it's going to turn out that the proof of Euler's Theorem is actually pretty easy. It just follows in a couple of steps from a couple of simple observations. So let's start on those.
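Before starting on the proof, the statement is easy to spot-check: compute Zn* directly from the GCD definition, take phi(n) to be its size, and verify that k to the phi(n) is 1 in Zn for every k in Zn*:

```python
from math import gcd

# Zn* = the elements of Zn relatively prime to n; phi(n) is its size.
def zn_star(n):
    return [i for i in range(n) if gcd(i, n) == 1]

# Euler's Theorem in Zn form: k**phi(n) = 1 in Zn for every k in Zn*.
for n in [9, 12, 35]:
    units = zn_star(n)
    phi = len(units)
    for k in units:
        assert pow(k, phi, n) == 1   # modular exponentiation in Zn

print(zn_star(9), len(zn_star(9)))   # [1, 2, 4, 5, 7, 8] 6
```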
So the first remark is that if I have any subset, S, of elements in Zn-- I don't care whether they are relatively prime to n or not-- and I multiply each of them by k, this notation kS means the set of elements of the form k times an element of S, over all the elements of S. So kS, this set of multiples of elements of S by k, has exactly the same size as S. Now, why is that? Well, this of course is only true for k that are cancelable. The Lemma is: no matter what subset you take of Zn, if you multiply every one of them by an element k that's cancelable-- an element of Zn*-- you get a set of the same size. And that's clear, because how could ks1 and ks2 be equal? Only if s1 and s2 were equal. Or another way to say it: if you had different elements in S, s1 not equal to s2, then when you multiply them by k, you have to get different elements of kS, because k is cancelable. OK. So that's an easy remark. It holds in general: multiply any subset by a cancelable element, and you get a new set that's the same size. The second remark is that if you look at numbers i and j that are in the interval from 0 to n in Zn, and you multiply the two of them, you're going to get an element in Zn* if and only if the original two elements were in Zn*. Well, let's just look at it in the left to right direction, which is the only one we need. If i and j are relatively prime to n, then so is their product, because if neither i nor j has a prime factor in common with n, then their product obviously doesn't have a factor in common with n. And then when you take remainders, it's still going to be a number whose GCD with n is 1. And so we have this remark that if you multiply two cancelable elements, you get a cancelable element. If you multiply two elements relatively prime to n, you get an element of Zn*.
Every one of these formulations of Zn*-- in terms of GCDs, or being cancelable, or having an inverse-- gives a separate and straightforward proof of the fact that if i and j are in Zn*, then so is their product. Now it's worth mentioning, by the way, that, in general, their sum is not. If you add two elements that are relatively prime to n, even if their sum is non-zero, you will typically get an element that is no longer relatively prime to n. But for multiplication, it works great, and that's what matters to us. OK. So as a corollary of this, I can actually conclude that if I choose an element k that's cancelable, an element in Zn*, and I take the whole set Zn*, all those elements that are relatively prime to n, and I take multiples by k of each of them, then, in fact, I get the same set, Zn*. And the proof of that is really straightforward. Let's think about it for a minute. What do I know? These two sets are the same size: kZn* and Zn* are the same size. As long as k is cancelable, I don't even care that the set being multiplied was Zn*. On the other hand, if k is in Zn*, k times Zn* only gives you elements in Zn*. So kZn* is a subset of Zn*, and it's the same size by the Lemma that says that multiplying by k preserves sizes. So they have to be equal. So basically what that means is that if you take all the elements in Zn*, all the elements relatively prime to n, and you pick another one of them, k, out of that set, and multiply every element in the set by k, then if you had them lined up in one order beforehand, when you multiplied by k you get exactly the same elements, just reordered. That is, multiplying by k has the effect of permuting the elements of Zn*. Let's look at an example. Let's look at Z9. We know that phi of 9, by the previous formula, is 3 squared minus 3, or 6. There are going to be 6 elements from 0 to 9 that are relatively prime to 9, and they comprise Z9*. So let's look at what they are.
You can check the calculation: Z9* is exactly the elements 1, 2, 4, 5, 7, 8. We know we got them all because there are only supposed to be six of them, and we can check that those are all relatively prime to 9. None of them has 3 as a divisor. Now what happens, for example, if I multiply them all by 2? Two is another good number-- it's right here-- that's in Z9*. And multiplying them by 2, well, let's check. 2 times 1 is 2, 2 times 2 is 4, 2 times 4 is 8, 2 times 5 is 1-- because it's 10 with a remainder of 1-- 2 times 7 is 14, which translates into 5, and 2 times 8 is 16, which translates into 7. And, as claimed, look at this. Here's 2, 4, 8, 1, 5, 7. It's the same numbers as 1, 2, 4, 5, 7, 8, just in a different order. Let's do one more example. Let's try multiplying by 7. That's another respectable element over here. 7 times 1 is 7. 7 times 2 is 14, which means it's 5 in Z9. 4 times 7 is 28. Well, 3 times 9 is 27, so that leaves a remainder of 1, and 4 times 7 is 1 in Z9. Likewise, 5 times 7 is 8, 7 times 7 is 4, and 7 times 8 is 56, which translates to 2. And sure enough, as claimed, I see the same numbers, 7, 5, 1, 8, 4, 2-- these numbers just scrambled in order. They're permuted, which is the outcome of multiplying by 7. OK. So let's go back. What we've just illustrated is the fact that we've already concluded: if you take Zn* and you multiply it by an element k in Zn*, you get the same set in a different order. So Zn* is equal to k times Zn*. And now we're on the brink of proving Euler's Theorem. Because what I want to do is say, look, these two sets are the same. Let's multiply all the elements on the left together, and multiply all the elements on the right together, and take the product of those elements. So let's take the product of Zn* and compare it to the product of kZn*. The big pi here indicates the product of all of the elements in each set. Well, let's look at the set on the right.
This is the product of k times all the elements in Zn*. Well, how many elements are there? Phi of n elements in Zn*, by definition. And let's factor out all the k's. So this expression here, the product of k times each element in Zn*, is the same as the product of the elements in Zn* times k to as many elements as there were, namely k to the phi of n. I'm just factoring k out of this product. And there's my k to the phi of n. And now look what I've got here. That's pi Zn*, and that's pi Zn*. What do I know about multiplying elements in Zn*? They're in Zn*. This product will be some other element in Zn*. So will this product. But what do I know about elements of Zn*? They're cancelable. So, ignoring the middle term now, what I'm concluding is that the product of Zn* is k to the phi of n times the product of Zn*. Let's cancel those cancelable terms. And I'm done. I've just figured out that 1, which is the result of canceling the term on the left, is equal to k to the phi of n. And we have successfully proved Euler's Theorem, which is what we were aiming for in this segment.
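Both facts used in this proof, that multiplying Z9* by any of its own elements just permutes it, and that k to the phi of n equals 1 in Zn, can be verified directly. This is a sketch with assumed helper names, not code from the lecture:

```python
# Check the permutation corollary and Euler's Theorem in Z9.
from math import gcd

def Z_star(n):
    return {i for i in range(n) if gcd(i, n) == 1}

n = 9
star = Z_star(n)
phi = len(star)                          # phi(9) = 3**2 - 3 = 6
assert star == {1, 2, 4, 5, 7, 8}

for k in star:
    # Corollary: k * Z9* is the same set, just reordered
    assert {(k * s) % n for s in star} == star
    # Euler's Theorem: k**phi(n) is 1 in Zn
    assert pow(k, phi, n) == 1
```

The built-in three-argument pow does the modular exponentiation, so the check runs instantly even for much larger moduli.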
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
131_Well_Ordering_Principle_1_Video.txt
PROFESSOR: The well ordering principle is one of those facts in mathematics that's so obvious that you hardly notice it. And the objective of this brief introduction is to call your attention to it. We've actually used it already. And in subsequent segments of this presentation, I'll show lots of applications of it. So here's a statement of the well ordering principle. Every nonempty set of nonnegative integers has a least element. Now this is probably familiar. Maybe you haven't even thought about it. But now that I mention it, I expect it's a familiar idea. And it's pretty obvious too if you think about it for a minute. Here's a way to think about it. Given a nonempty set of nonnegative integers, you could ask, is 0 the least element in it? Well, if it is, then you're done. Then you could ask, is 1 the least element in it? And if it is, you're done. And if it isn't, you could ask about 2: is 2 the least element? And so on. Given that the set is not empty, eventually you're going to hit the least element. So, if it wasn't obvious before, there is something of a hand-waving proof of it. But I want to get you to think about this well ordering principle a little bit, because there are some technical parts of it that matter. So for example, suppose I replace nonnegative integers by nonnegative rationals. And I ask, does every nonempty set of nonnegative rationals have a least element? Well, there is a least nonnegative rational, namely 0. But not every nonempty set of nonnegative rationals has a least element. I'll let you think of an example. Another variant is when, instead of talking about the nonnegative integers, I just talk about all the integers. Is there a least integer? Well, no, obviously, because minus 1 is not the least, and minus 2 is not the least, and there isn't any least integer. We take the well ordering principle for granted all the time. If I asked you, what was the youngest age of an MIT graduate, you wouldn't for a moment wonder whether there was a youngest age.
And if I asked you for the smallest number of neurons in any animal, you wouldn't wonder whether there was or wasn't a smallest number of neurons. We may not know what it is, but there's surely a smallest number, because counts of neurons are nonnegative integers. And finally, if I ask you what is the smallest number of US coins that can make $1.17, again, we don't have to worry about existence, because the well ordering principle knocks that off immediately. Now for the remainder of this talk, I'm going to be talking about the nonnegative integers always, unless I explicitly say otherwise. So I'm just going to use the word number to mean nonnegative integer. There's a standard mathematical symbol that we use to denote the nonnegative integers. It's that letter N at the top of the slide with a diagonal double bar. These are sometimes called the natural numbers, but I've never been able to figure out whether 0 is natural or not, so we don't use that phrase. Zero is included in N, the nonnegative integers. And that's what we call them in this class. Now, I want to point out that we've actually used the well ordering principle already, maybe without noticing it, even in the proof that the square root of 2 was not rational. That proof began by saying, suppose the square root of 2 was rational, that is, it was a quotient of integers m over n. And the remark was that you can always express a fraction like that in lowest terms. More precisely, you can always find positive numbers m and n without common factors such that the square root of 2 equals m over n. If there's any fraction equal to the square root of 2, then there is a lowest terms fraction m over n with no common factors. So now we can use well ordering to come up with a simple, and hopefully very clear and convincing, argument for why every fraction can be expressed in lowest terms.
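As an aside, the coin question mentioned above can actually be answered, not just shown to have an answer. Here is a hedged sketch using dynamic programming; the denominations 1, 5, 10, 25, 50 are my assumption, since the lecture doesn't list which US coins are allowed.

```python
# Sketch: the well ordering principle guarantees a smallest number of
# coins making $1.17 exists; dynamic programming finds it.
def min_coins(amount, coins=(1, 5, 10, 25, 50)):
    # best[a] = fewest coins making a cents (None if unreachable)
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins
                   if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

assert min_coins(117) == 6        # e.g. 50 + 50 + 10 + 5 + 1 + 1
```

With a dollar coin added to the list, the answer would drop to 5, which is one reason the assumed denominations matter.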
In particular, let's look at numbers m and n such that the square root of 2 is equal to m over n, and let's just choose the smallest numerator that works: find the smallest numerator m such that the square root of 2 is equal to m over n. Well, I claim that that fraction, which uses the smallest possible numerator, has got to be in lowest terms. Because suppose that m and n had a common factor c that was greater than 1-- a real common factor. Then you could replace m over n by (m over c) over (n over c). The numerator m over c is smaller than m and still an integer, and the denominator n over c is still an integer. So we have a numerator that's smaller than m, contradicting the way that we chose m in the first place. And this contradiction, of course, implies that m and n have no common factors. And therefore, as claimed, m over n is in lowest terms. And of course, the way I formulated this was for our application to the fraction that was equal to the square root of 2. But this proof actually shows that any rational number, any fraction, can be expressed in lowest terms.
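The well ordering argument can even be run as a procedure: scan numerators 0, 1, 2, and so on until one works, in the spirit of the "is 0 the least? is 1?" search described earlier. This is my own construction, not the professor's, and the function name lowest_terms is assumed.

```python
# Sketch: find the fraction equal to m/n with the least numerator.
# That least-numerator fraction is automatically in lowest terms.
from fractions import Fraction
from math import gcd

def lowest_terms(m, n):
    """Return (a, b) with the least numerator a such that a/b == m/n."""
    target = Fraction(m, n)        # exact rational comparison, no rounding
    if target == 0:
        return (0, 1)
    a = 1
    while True:
        b = a / target             # candidate denominator as a Fraction
        if b.denominator == 1:     # a/b really equals m/n with integer b
            return (a, int(b))
        a += 1

a, b = lowest_terms(6, 10)
assert (a, b) == (3, 5)
assert gcd(a, b) == 1              # the least numerator forces lowest terms
```

The loop always terminates for positive m and n: the reduced numerator itself is eventually reached, exactly as the well ordering principle promises.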
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
223_Inverses_mod_n_Video.txt
PROFESSOR: So, now we come to the place where arithmetic modulo n, or remainder arithmetic, starts to be a little bit different, and that involves taking inverses and cancelling. Let's look at that. So first of all, we've already observed these basic congruence rules: if a and b are congruent and c and d are congruent, then a plus c and b plus d are congruent, and a times c and b times d are congruent. So, that's the sense in which arithmetic mod n is a lot like ordinary arithmetic. But here's the main difference. Let's look at this one. 8 times 2 is 16, which means it's congruent to 6 mod 10, which is the same as 3 times 2. So, 8 times 2 is congruent to 3 times 2. And you'd be tempted, maybe, to cancel the twos. And what happens then? Well, then you'd discover that you think that 8 is congruent to 3 mod 10, which it ain't. So in short, you can't cancel arbitrarily. You can't cancel 2, in this case in particular. So, that leads naturally to the question of when you can cancel a number k, when both sides of a congruence are multiplied by k and I'd like to cancel k. And the answer is simple: when k has no common factors with the modulus n. So, the proof of that is based on the following idea. Let's say that a number k prime is an inverse of k mod n if k prime times k is congruent to 1 mod n. So, k prime is like 1 over k with respect to mod n. But of course, 1 over k is going to be a fraction unless k is 1. And so, k prime is going to be an integer that simply acts like 1 over k. So, how are we going to prove this? It's going to turn out to be an easy consequence of the fact that the gcd is a linear combination. How am I going to find this k prime that's an inverse of k? Well remember, given that the gcd of k and n is 1, I have a linear combination of k and n that equals 1. So, s times k plus t times n is 1. But if you stare at that for a moment, what that means is that k prime is simply the coefficient s of k.
So, all you have to do is apply the pulverizer to k and n to get the coefficient s of k in the linear combination of k and n equal to 1. Let's look at that slightly more carefully and see what's going on. I have that sk plus tn is 1. So, that means in particular, since they're equal, they're certainly congruent to each other modulo n: sk plus tn is congruent to 1 mod n. But n is congruent to 0 mod n. So, the tn term becomes t times 0, and we're left with sk congruent to 1 mod n, which is exactly the definition of s being an inverse of k. Now, I can also cancel k if it's relatively prime to n. And the reason is that if I have ak equivalent to bk mod n, and the gcd of k and n is 1, then I have this k prime that's an inverse of k. So, I just multiply both sides by the inverse of k, namely k prime. And I get that the left hand side is a times k k prime, and the right hand side is b times k k prime. And of course, that's a times 1 equivalent to b times 1. And so, a is congruent to b mod n. So I can cancel, in that case, trivially. And in fact, you can work out the converse implications. The punch line-- well, first of all, this is the cancellation rule: you can cancel provided that the gcd of k and n is 1, that is, if k is relatively prime to n. So, this is the summary. k is cancelable mod n if and only if k has an inverse mod n, if and only if the gcd of k and n is 1, which I can restate as k is relatively prime to n. And that's the story.
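The recipe just described, run the pulverizer on k and n and read off the coefficient s, can be sketched in a few lines of Python (the function names are mine):

```python
# Sketch: the pulverizer (extended Euclidean algorithm) and the
# inverse mod n that it produces.
def pulverizer(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = pulverizer(b, a % b)
    return (g, t, s - (a // b) * t)

def inverse_mod(k, n):
    g, s, _ = pulverizer(k, n)
    if g != 1:
        raise ValueError("k has an inverse mod n only when gcd(k, n) == 1")
    return s % n            # the coefficient of k, reduced into 0..n-1

k_prime = inverse_mod(7, 10)   # 7 * 3 = 21, which is congruent to 1 mod 10
assert (7 * k_prime) % 10 == 1
```

Reducing s mod n at the end is just a convenience; any integer congruent to s would serve as the inverse.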
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
283_Isomorphism_Video.txt
PROFESSOR: We've briefly looked at graph isomorphism in the context of digraphs. And it comes up in an even more fundamental way for simple graphs, where the definition is a bit simpler. So let's just look at this graph abstraction idea and how isomorphism connects with it. This is an example of two different ways of drawing the same graph. That is, here's 257, and there's 257. It's connected directly to 122, as here. And also 257 is connected to 99, as here. And if you check, it's exactly the same six vertices and exactly the same eight edges. But they're just drawn differently. So we don't want to confuse a drawing of a graph, like these two, with the graph itself. The graph itself consists of just the set of nodes and the set of edges. And if you extracted that from these two diagrams, you would get the same set of nodes and the same set of edges. So same graph, different layouts. But here's a case where it's really the same layout. You can see these two pictures, if you ignore the labels, are exactly the same, with the two grays and the two grays and the red and the red. The difference now is that I've renamed the vertices. So we've assigned different labels to those vertices. And the connection between the two graphs now, this graph with vertices which are integers and this graph with vertices that are the names of people, is that they are isomorphic. And what isomorphism means is that all that matters between two graphs are their connections. And so graphs with the same connections among the same number of vertices are said to be isomorphic. To say it more precisely, two graphs are isomorphic when there's an edge-preserving matching between their vertices. Matching means a bijection between their vertices, and edge-preserving means that where there is an edge on one side, there's an edge between the corresponding vertices on the other side. Let's look at an example. Here are two graphs. And I claim that they are isomorphic.
On the left, we've got a bunch of animals: dog, pig, cow, cat. And on the right we have a bunch of animal foods: hay, corn, beef, tuna. And that's a hint on how we're going to do the matching. So I'm going to tell you that the dog vertex on the left corresponds to the beef vertex on the right. So I'm defining a function, a bijection, from the vertices on the left in blue to the vertices on the right in red. And f of dog is beef. Likewise, f of cat-- cats eat tuna-- I'm going to map cat to tuna. And continuing for the remaining two vertices, I'm going to map cow to hay, which is what they eat, and pig to corn, which is frequently what's fed to pigs. OK, so this is a bijection. I mean, it's a perfect correspondence between the four vertices on the left and the four vertices on the right. But I have to check now that the edges are preserved. What does that mean? Well, let's do an example. There's an edge on the left between dog and pig. That means that there should be an edge on the right between where they go to. So there ought to be an edge between beef and corn, because that's where dog and pig go. And indeed, there's an edge there. So that part's good. And you can check the others. The other thing that we have to check is that, since edge preservation is an if and only if-- there's an edge on the right if and only if there's an edge on the left-- that's the same as saying there's no edge on the left if and only if there's no edge on the right. So let's check non-edges on the left. There's no edge between cow and pig. And indeed, cow goes to hay, and pig goes to corn. And sure enough, there is no edge on the right between hay and corn. And you can check the remaining cases. These two graphs are isomorphic. And that function f is in fact the edge-preserving bijection.
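This edge-preserving check is mechanical enough to code. The lecture doesn't list every edge of the two graphs, so the edge sets below are assumed for illustration; they do include the two facts mentioned, that dog-pig is an edge and cow-pig is not. The bijection f is the one from the lecture.

```python
# Sketch: a generic edge-preserving check (edge sets assumed, f from lecture).
E1 = {frozenset(e) for e in [("dog", "pig"), ("dog", "cat"),
                             ("pig", "cat"), ("cow", "cat")]}
E2 = {frozenset(e) for e in [("beef", "corn"), ("beef", "tuna"),
                             ("corn", "tuna"), ("hay", "tuna")]}
f = {"dog": "beef", "cat": "tuna", "cow": "hay", "pig": "corn"}

def is_isomorphism(f, V1, E1, E2):
    """f is an isomorphism iff u--v is an edge exactly when f(u)--f(v) is."""
    return all((frozenset((u, v)) in E1) == (frozenset((f[u], f[v])) in E2)
               for u in V1 for v in V1 if u != v)

assert is_isomorphism(f, set(f), E1, E2)
```

Note the check runs over non-edges as well as edges, matching the if-and-only-if in the definition.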
So stating it again, an isomorphism between two graphs G1 and G2 is a bijection f between the vertices V1 of G1 and the vertices V2 of G2 with the property that uv is an edge in E1 if and only if f of u, f of v is an edge in E2. And it's that if and only if that makes it edge-preserving. So if there's an edge here, there's an edge there. If there's no edge on the left, there's no edge on the right. And that's a definition that's worth remembering. It's basically the same as the digraph case, except in the digraph case, the edges have a direction. So it would be an edge from u to v if and only if there is an edge from f of u to f of v. But since we don't have to worry about direction in the simple case, the definition gets slightly simpler. What about non-isomorphism? How do you show that two graphs are not isomorphic? I can show you that two graphs are isomorphic by simply telling you what the bijection between their vertices is. And then it becomes a simple matter of checking whether the edges that should be there are there or not. How do you figure out that two graphs are not isomorphic, and that there isn't any bijection that preserves edges? Well, for a start, these both have four vertices, so the sizes match. There are lots of bijections between the four vertices on the left and the four vertices on the right. Why isn't there an edge-preserving one? Well, if you look at the graph on the left, it's actually got two vertices of degree 2, marked in red here. There's a degree 2 vertex. There's a degree 2 vertex. And on the right, every vertex is degree 3, if you check. Now, one of the properties of isomorphism is that the edges that come out of the red vertex, these two edges, have to correspond to two edges that come out of wherever it's mapped to. So a degree 2 vertex can only map to a degree 2 vertex. There aren't any. That's a proof that there can't be an isomorphism between the two graphs.
So in general, the idea is that we're looking at properties that are preserved by isomorphism. This is almost like a state machine invariant kind of idea. So a property is preserved by isomorphism means that if graph one has the property and graph one is isomorphic to graph two, then graph two has the property. And clearly, if there's a property that's preserved by isomorphism, and one graph has it and the other graph doesn't have it, that's a proof that they can't be isomorphic. So what are some of these properties that are preserved by isomorphism? Well, the number of nodes. Clearly there's got to be a bijection, so they have to have the same number of nodes. They have to have the same number of edges, for similar reasons, because the edges are preserved. An edge on one side corresponds to an edge on the other side. Other things that matter: we've just made the argument that the degrees are preserved, as a consequence of the preservation of the edges. And all sorts of other structural properties are going to be preserved by isomorphism, like, for example, the existence of circular paths, and distances between vertices, and things like that. Those will all be properties that are preserved by isomorphism. So that gives you a hook on trying to figure out whether or not two graphs are isomorphic. But in general, if you've got a graph with a few hundred or a thousand vertices, there are an awful lot of potential bijections between them to check. And the question is, how do you do it? It's a huge search that can't really be done exhaustively. So what you look for is properties that are preserved by isomorphisms that give you a guide. So for example, if the graph on the left happens to have a degree 4 vertex, and that degree 4 vertex is adjacent to a degree 3 vertex, then the adjacency of a degree 4 and a degree 3 is a typical property that's preserved by isomorphism.
So you know for sure that if there's going to be a bijection between the first graph and the second graph, this pair of adjacent vertices of degree 4 and degree 3 can only map to another pair of adjacent vertices in the second graph that also have degrees 4 and 3. So that will cut down enormously the number of places that this given vertex can map to in the other graph. And it gives you some structure to use to try to narrow down the search for an isomorphism, where the isomorphism is, and whether or not it exists. So having a degree 4 vertex adjacent to a degree 3 vertex, for example, is a typical property that's preserved under isomorphism. But even so, if I give you two very large graphs-- and these are actually images of graphs extracted from some communication network-- it's very hard to tell whether or not they're isomorphic. Well, you could guess, because of course we took the same picture and copied it twice. But if there was some subtle difference between these two, like I erased one edge somewhere in the middle of that mess, how would you figure out that the two graphs were not isomorphic in that case? And the answer is that, like the NP-complete problems, there is no known procedure to check whether or not two graphs are isomorphic that is guaranteed to be efficient and to run in polynomial time. On the other hand, there are technical reasons, technical properties, that say that graph isomorphism is not one of these NP-complete problems, unless P equals NP or something like that. And so that's one distinguishing characteristic of this problem. The important point is that, as a matter of fact, in practice there are some really good isomorphism programs around that will in many cases figure out, given two graphs, whether or not they are isomorphic in time that's approximately the size of the two graphs. So pragmatically, graph isomorphism seems to be a manageable problem.
Although theoretically you can't be sure that these efficient procedures that work most of the time are going to work always; known procedures in fact blow up exponentially on some example or another.
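For graphs small enough that the "huge search" over all bijections is feasible, the whole discussion can be condensed into a brute-force test, with the degree-sequence invariant as a quick negative check. The graphs below are my own small examples, not the ones drawn in the lecture.

```python
# Sketch: brute-force isomorphism over all bijections, with the
# degree-sequence invariant as a fast way to say "not isomorphic".
from itertools import permutations

def degrees(V, E):
    return sorted(sum(1 for e in E if v in e) for v in V)

def isomorphic(V1, E1, V2, E2):
    if len(V1) != len(V2) or len(E1) != len(E2):
        return False
    if degrees(V1, E1) != degrees(V2, E2):   # degrees are preserved
        return False
    V1, V2 = list(V1), list(V2)
    for perm in permutations(V2):            # try every bijection V1 -> V2
        f = dict(zip(V1, perm))
        if all((frozenset((u, v)) in E1) == (frozenset((f[u], f[v])) in E2)
               for u in V1 for v in V1 if u != v):
            return True
    return False

C4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}  # a 4-cycle
P  = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (0, 2)]}  # 4 edges, not a cycle
relabeled = {frozenset(e) for e in [(0, 2), (2, 1), (1, 3), (3, 0)]}
assert isomorphic({0, 1, 2, 3}, C4, {0, 1, 2, 3}, relabeled)
assert not isomorphic({0, 1, 2, 3}, C4, {0, 1, 2, 3}, P)
```

The factorial number of permutations is exactly why this brute force only works for tiny graphs, and why practical programs lean so heavily on preserved properties to prune the search.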
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
333_Counting_with_Bijections_Video.txt
PROFESSOR: An elementary idea that gets you a long way in counting things is this idea of counting with bijections, which is counting one thing by counting another. And we can illustrate that by example. Let's begin by looking at some stuff that is easy to count using just the simple sum and product rules. So suppose that I'm trying to count passwords. This is a contrived, over-simplified example, but it gives you the idea. And this is what I mean by a password. A password is a sequence of characters that are either letters or digits, subject to the constraints that it's supposed to be between six and eight characters long, it's supposed to start with a letter, and it's case sensitive. So you can tell the difference between uppercase and lowercase letters. So let's define the set L of all the letters, uppercase and lowercase together. And let D be the set of digits from 0 through 9. Then we said that passwords are supposed to be between six and eight characters long, but it's a little bit easier actually to just use lengths as a parameter. So let's think about words of length n that satisfy the password conditions. So Pn is going to be the length n words starting with a letter, which is one of the password constraints. So we can express that as: a length n word can be broken up into the first character, which is in L, paired with the rest of the word, the remaining n minus 1 characters. And the remaining n minus 1 characters can be either L's or D's. So the length n passwords can be expressed as the product of L with the (n minus 1)st power of L union D-- that is, L union D cross L union D cross L union D, n minus 1 times. Well, now we have an easy way to count this, because the size of this product, by the product rule, is the size of L times the size of L union D to the n minus first power. And of course, since letters and digits don't overlap, by the sum rule the size of L union D is just the size of L plus the size of D.
And so I get this nice formula: 52 letters times 52 letters plus 10 digits, raised to the n minus first power. What about the passwords? Well, the passwords are then P6 union P7 union P8. And since words of length six don't overlap with words of length seven or eight, this is a disjoint union. And therefore, the total number of passwords as specified is simply the size of P6 plus the size of P7 plus the size of P8. There's the formula when I plug in. And it turns out to be a good size number, about 1.9 times 10 to the 14th. That's one simple example where I'm translating a spec into something that I can express easily as products and disjoint sums of stuff that I already know the size of. Let's just do another example. Suppose that I want to count the number of 4-digit numbers-- the elements of these 4-digit numbers are 0 through 9, so there are 10 possibilities per position-- with at least one 7. That is, the number of 4-digit sequences of digits that have at least one 7 in them. And one way to count is that I can make it a sum of different 4-digit numbers containing a 7, depending on where the first 7 is. If there's at least one 7, there's a first 7. That's the well-ordering principle applied. So if we let x abbreviate any digit-- there are 10 possible values of x-- and o represent any digit other than 7-- so there are nine possible values of o-- then the words that start with 7 can be followed with any three digits. So 7xxx is one possible pattern, when the first occurrence of 7 is first. Another possible pattern is when you have a digit that's not 7 followed by a 7. This is when 7 occurs second, followed by anything at all. Likewise, here 7 occurs third, and here, 7 occurs fourth. Now, these individual patterns are easy enough to count using the product rule, because here, I have to count how many triples of any digits there are. Well, there are 10 digits, so it's 10 cubed. Here, I count sequences where the first position has 9 choices and the last two positions have 10 choices each.
And it's 9 times 10 squared. Here, it's 9 squared times 10. And here it's 9 cubed. These are disjoint, because they're distinguished by where the first 7 occurs. And so I just add them up. And I get this number. It's not especially interesting, but it's 3,439. So that's an exercise in counting something by somewhat ingeniously breaking it up into a sum of disjoint things that are themselves easier to count. There's another way, another standard trick that comes up in combinatorics, of how to count the 4-digit numbers with at least one 7: by counting the complement. Count the number of 4-digit numbers that don't have any 7's, and simply subtract that number from the total number of 4-digit numbers. What's left over is the numbers that have at least one 7. Now, the number of 4-digit numbers is easy to count. And it will turn out that the number of 4-digit numbers with no 7's is also really easy to count, because the number of 4-digit numbers is 10 to the fourth, and for 4-digit numbers with no 7's, there are nine possible choices for each of the four digits-- the digits 0 through 9, leaving out 7-- so it's 9 to the fourth power. And you can double check that 10 to the fourth minus 9 to the fourth is 3,439. So now, with that practice using the basic sum and product rules, we can start applying and thinking about the bijection rule. The bijection rule simply says that if I have a bijection between two sets A and B, then they have the same size, at least assuming that they are finite sets. And the only kind of things we're counting are finite sets. Let's use an example of that, where I'm going to count the number of subsets of a set A by finding a bijection between the subsets of A and something that I do know how to count. In fact, we've already counted them: the binary strings of a given length. What's the bijection?
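The counts derived in this segment so far, the password total and the at-least-one-7 total, can be checked directly (the brute-force loop over all four-digit sequences is mine):

```python
# Passwords: 6 to 8 characters, first a letter (52 choices),
# the rest letters or digits (62 choices each).
passwords = sum(52 * 62 ** (n - 1) for n in (6, 7, 8))
assert passwords == 186125210680448      # about 1.9 * 10**14

# 4-digit sequences with at least one 7, counted three ways.
by_first_seven = 10**3 + 9 * 10**2 + 9**2 * 10 + 9**3  # position of first 7
by_complement = 10**4 - 9**4                           # all minus "no 7s"
by_brute_force = sum(1 for x in range(10**4) if "7" in f"{x:04d}")
assert by_first_seven == by_complement == by_brute_force == 3439
```

Agreement of the three methods is a nice sanity check: the first-7 decomposition, the complement trick, and sheer enumeration all land on 3,439.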
Well, suppose that A is a set of n elements, call them a1 through an. And I have some arbitrary subset of A. Say it's got a1, and it doesn't have a2, and it has a3, and it has a4, and it doesn't have a5. And then it's got some selection of the other elements. And it turns out that an is in it. Well, if I think of a subset laid out this way up against the corresponding elements in A, I can code this in an obvious way by putting a 1 where the element is in the subset and a 0 where the element is not in the subset. In effect, this is the so-called characteristic function of the subset, where a 1 in the i-th position means that ai is there, and a 0 in the i-th position means that ai is not there. So the second coordinate here is a 0. That means a2 is not there. And this is easily seen to be a bijection. That is, given the string, you could figure out what the subset is. Given the subset, you can figure out what the unique string is. So we have a bijection. And what we conclude then is that the number of n-bit strings is equal to the size of the power set of A, the number of subsets of A. And of course, we know how to count the number of n-bit strings. It's 2 to the n. So what we just figured out is, if I have a set of size n, it's got 2 to the n subsets. And a slick way to say that without mentioning n is that the size of the power set of A is simply 2 to the size of A. One more example of bijection counting that is kind of fun and interesting will illustrate the fact that we learn something by finding a bijection, even if we don't know how to count either set yet. So here's what I'm interested in: suppose I have a situation where there are five kinds of doughnuts-- five different flavors of doughnuts-- and I want to select a dozen. Now, I want to know how many selections there are.
So for example-- these little O's represent doughnuts-- I might choose a selection of a dozen by choosing two chocolate and no lemon-- I don't like those so much-- and six sugars and two glazed and two plain. So there are 12 doughnuts here using four out of the five possible flavors of doughnuts. This is what I'll call a selection of doughnuts. And I'd like to know how many such selections of doughnuts there are. Well, let that be the set A, the set of all these different ways of selecting 12 doughnuts when there are five flavors of doughnuts available. Well, there is, again, an obvious correspondence between the set A of doughnut selections and the set B of strings of 0's and 1's of length 16 that contain four 1's. What's the correspondence? Well, here's my doughnut selection. And of course, the reason why I use those O's for doughnuts is that they also correspond to 0's. I can just put in 1's as delimiters between the groups of flavors. So after the chocolate doughnuts, I put a 1. And then after the lemon doughnuts, that happen to be none, I put another 1. And then after the six sugar doughnuts, I put a 1. And then I kind of consolidate and I extract from the doughnut selection this 16-bit word with 12 0's corresponding to 12 doughnuts and four 1's corresponding to breaking up those groups of 0's into five categories, five slots, corresponding to the number of doughnuts of each flavor. So the general bijection, of course, is that if I have a selection of c chocolate doughnuts, l lemon doughnuts, s sugar doughnuts, g glazed, and p plain-- any numbers really-- a selection of doughnuts with this number of chocolates, lemons, sugars, glazed, and plains corresponds to a binary word with c plus l plus s plus g plus p 0's and four 1's. And so what we can say is that the set of 16-digit words with four 1's is exactly the same size as the number of doughnut selections, even though at this moment we don't know how to count either one.
We will see in the next lecture an easy way to count the number of those 16-bit words with four 1's. But for now, our conclusion from bijection counting is that these two sets are the same size, even though we haven't yet counted either one.
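To see the correspondence concretely, here is my own Python rendering of the doughnuts-to-bits encoding, checked against the lecture's example of two chocolate, no lemon, six sugar, two glazed, and two plain:

```python
from itertools import product

# Encode a selection (c, l, s, g, p) as 0's for doughnuts, with 1's as
# delimiters between the five flavor groups.
def selection_to_bits(counts):
    return '1'.join('0' * c for c in counts)

# The lecture's example: 2 chocolate, 0 lemon, 6 sugar, 2 glazed, 2 plain.
assert selection_to_bits((2, 0, 6, 2, 2)) == '0011000000100100'

# Every selection of 12 doughnuts from 5 flavors maps to a distinct
# 16-bit word with exactly four 1's, so the two sets are the same size.
selections = [c for c in product(range(13), repeat=5) if sum(c) == 12]
words = {selection_to_bits(c) for c in selections}
assert all(len(w) == 16 and w.count('1') == 4 for w in words)
assert len(words) == len(selections)
print(len(selections))
```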
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
211_GCDs_Linear_Combinations_Video.txt
So now we begin on four classes on number theory. The purpose of taking it up now is that we're still practicing proofs. And number theory is a nice self-contained elementary subject as we'll treat it, which has some quite elegant proofs and illustrates contradiction and other structures that we've learned about. A little bit of induction, and definitely some applications of the well-ordering principle. The ultimate punchline of the whole unit is to understand the RSA crypto system and how it works. Along the way, we will-- today, actually-- establish one of those mother's milk facts that we all take for granted about unique factorization of integers into primes. But in fact, that's a theorem that merits some proof, and as an example, a homework problem exhibits a system of numbers which doesn't factor uniquely. And finally, we will be able to knock off the Die Hard story once and for all. So let's begin by stating the rules of the game. We're going to assume all of the usual algebraic rules for addition and multiplication and subtraction. So you may know some of these rules have names: the first equality is called distributivity of multiplication over plus-- of times over plus-- and then the second rule here is called commutativity of multiplication, and here are some more familiar rules. This is called associativity of multiplication. a minus a is 0-- that's the additive inverse: minus a is the inverse of a. And a plus 0 equals a is the definition of 0 being the additive identity. a plus 1 is greater than a. So these are all standard algebraic facts that we're going to take for granted and not worry about. And one more fact that we also know and we're going to take as an axiom: if I divide a number a by a positive number b, then when we're talking about integers, what I'm going to get is a quotient and a remainder.
What's the definition of the quotient and a remainder? Well, the division theorem says that if I divide a by b, that means if I take the quotient times b plus the remainder I get a. And in fact, there's a unique quotient of a/b and there's a unique remainder of a/b, where what makes it unique is that the remainder is constrained to be in the interval greater than or equal to 0 and less than the divisor b. So we're going to take this fact for granted too. Proving it is not worth thinking about too hard, because it's one of those facts that's so elementary that it's hard to think of other things that would more legitimately prove it. I'm sure it could be proved by induction, but I haven't really thought that through. OK. A key relation that we're going to be looking at today is the relation of divisibility between integers. So by the way, all of the variables for the next week or so are going to be understood to range over the integers. So when I say number, I mean integer. When I talk about variables a and c and k, I mean that they're taking integer values. So I'm going to define c divides a using this vertical bar notation. It's read as divides. c divides a if and only if a is equal to k times c for some k. And there are a variety of synonyms for c divides a: to say c divides a is to say that a is a multiple of c and that c is a divisor of a. OK. Let's just practice this. So 5 divides 15, because 15 is 3 times 5. Every number n divides 0. Even 0 divides 0, because 0 is equal to 0 times n. So 0 is a multiple of every number. Another trivial fact that follows from the definition is that if c divides a, then c divides any constant times a. Well, let's just check that out, how it follows from the definition. If I'm given that c divides a, that means that a is equal to k prime c for some k prime.
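In Python, `divmod` with a positive divisor returns exactly the quotient-remainder pair the division theorem describes-- a small check I'm adding, not from the lecture:

```python
# Division theorem: for b > 0 there are unique q, r with a = q*b + r and 0 <= r < b.
def quotient_remainder(a, b):
    assert b > 0
    q, r = divmod(a, b)
    assert a == q * b + r and 0 <= r < b   # the defining constraints
    return q, r

assert quotient_remainder(17, 5) == (3, 2)
assert quotient_remainder(-17, 5) == (-4, 3)   # remainder still lands in [0, b)
```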
That implies that if I multiply both sides of this equality by s, I get that s a is equal to s k prime c, and if I parenthesize the s k prime, I can call that k, and I have found, sure enough, that s a is a multiple of c. That's a trivial proof, but we're just practicing with the definitions. So we have just verified this fact that if c divides a, then c divides a constant times a. As a matter of fact, if c divides a and c divides b, then c divides a plus b. Let's just check that one. What we've got is c divides a means that a is equal to k1 times c. And c divides b means that b is equal to k2 times c. So that means that a plus b is simply k1 plus k2, times c, where what I've done here is used the distributivity law to factor c out and used the fact that multiplication is commutative so that I can factor out on either side. OK. Let's put those facts together. If c divides a and c divides b, then c divides s a plus t b, where s and t are any integers at all. So a combination of two numbers, a and b, like this is called a linear combination of a and b-- an integer linear combination, but since we're only talking about integers, I'm going to stop saying integer linear combination and just say linear combination. A linear combination of a and b is what you get by multiplying them by coefficients s and t and adding. OK. So we've just figured out that in fact if c divides a and c divides b, then c divides any integer linear combination of a and b. When c divides two numbers, it's called a common divisor of those two numbers. So we could rephrase this observation by saying common divisors of a and b divide integer linear combinations of a and b, which is a good fact to just file away in your head. Now, what we're going to be focusing on for the rest of today is the concept of the greatest common divisor of a and b, called the GCD of a and b.
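Here is a brute-force Python check of that fact-- common divisors divide every integer linear combination (the numbers are my own illustrative choices):

```python
# c divides both a and b, so c should divide s*a + t*b for all integers s, t.
def divides(c, n):
    return n == 0 if c == 0 else n % c == 0

a, b, c = 15, 21, 3   # 3 is a common divisor of 15 and 21
for s in range(-10, 11):
    for t in range(-10, 11):
        assert divides(c, s * a + t * b)
print("checked all s, t in [-10, 10]")
```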
The greatest common divisor of a and b exists by the well-ordering principle, because the set of common divisors of a and b is a set of non-negative integers with an upper bound. Namely, a is an upper bound on any common divisor of a and b. So as we did in an exercise, or I think in the text, that implies that there will be a greatest one among all the common divisors, assuming there are any. But 1 is always a common divisor, so there are guaranteed to be some. Let's look at some examples. The greatest common divisor of 10 and 12. You can check. It's 2. Mainly because 10 factors into 2 times 5 and 12 factors into 2 times 6, and the 6 and the 5 have no common factors. So the only one that they share is 2. The GCD of 13 and 12 is 1. They have no factors in common. You can see that because 13 is a prime, and so it has no factors other than 1 and 13, and 13 doesn't divide 12 because it's too big. So it's got to be 1. The GCD of 17 and 17 is 17. That's a general phenomenon. The GCD of n and n is always n. The greatest common divisor of 0 and n is equal to n for any positive n. That's because everything is a divisor of 0, and it means the GCD of 0 and n is simply the greatest divisor of n, which is of course n itself. One final fact to set things up for the next segment is to think about the GCD of a prime p and a number a: it's either 1 or p. The reason is that the only divisors of a prime are plus/minus 1 and plus/minus p. So if p divides a, the GCD is p, and otherwise the GCD is 1.
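All of these examples can be confirmed with Python's built-in `gcd`:

```python
from math import gcd

# The lecture's examples, checked mechanically.
assert gcd(10, 12) == 2     # 10 = 2*5, 12 = 2*6, and 5, 6 share no factor
assert gcd(13, 12) == 1     # 13 is prime and doesn't divide 12
assert gcd(17, 17) == 17    # gcd(n, n) == n in general
assert gcd(0, 7) == 7       # everything divides 0, so gcd(0, n) == n

# For a prime p, gcd(p, a) is p when p divides a, and 1 otherwise.
assert gcd(7, 21) == 7
assert gcd(7, 20) == 1
```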
162_Sets_Operations_Video.txt
PROFESSOR: Let's define a few familiar and standard operations on sets. So here's a picture of two sets A and B, where the idea is that the circle represents the points in A. The other circle represents the points in B. The overlapping area, this lens-shaped region, are the points that are in both A and B. And the background are the points that are in neither A nor B. So this sort of general picture allows you to classify points with respect to A and B, and it's called a Venn diagram, in this case for two sets. It's still useful for three sets. It gets more complicated for four sets. And after that point, they're not really very useful. But a lot of the basic operations can be illustrated nicely in terms of the Venn diagram for two sets, and that's what we're about to do. So the first operation is union. It's the set of points shown here in magenta. It's the set of points that are in either A or B, all of them. And so if we were defining this in terms of set-theoretic notation or predicate notation, the union symbol-- the U is the union symbol. So A union B is defined to be those points x that are in A OR are in B. And you can already begin to see an intimate relationship between the union operation and the propositional OR connective. But don't confuse them. If you apply an OR to sets, your compiler is going to give you a type error. And if you apply union to propositional variables, your compiler is also going to give you a type error. So let's keep the propositional operators and the set-theoretic operators separate and clearly distinct even though they resemble each other. All right. Next basic operation is intersection where, again, it's the points that are both in A and B, the points in common, which are now highlighted in blue. So the definition of A intersection B-- we use an upside-down union symbol for intersection-- it's the set of points that are in A and are in B.
So let's stop for a minute and make use of the similarity between the set-theoretic operations and the propositional operators. Let's look at a set-theoretic identity, which I claim holds no matter what sets A, B, and C you're talking about. And we're going to prove it by making the connection between set-theoretic operations and propositional operators. And so let's read the thing. It says that if you take A union the set B intersection C, that's equal to the set A union B intersected with A union C. Now, let's not think through yet how to make this an intuitive argument. It's going to really crank out in an automatic way in a moment. But we can remember it as saying that you can think of this as union distributing over intersection. So if you think of union as times here and intersection as plus, then we've got a rule that says that A times the quantity B plus C is A times B plus A times C. Now, it's also true that if you reverse the roles of union and intersection, you get another distributive law, that intersection distributes over union, but never mind that. Let's just look at this one. We're trying to prove the distributive law for union over intersection. How shall we prove it just from the definitions? Well, the way we're going to do it is by showing that the two sets on the left-hand side and the right-hand side have the same set of elements. Namely, if I have an element x that appears in the set described on the left-hand side, then that point is in the right-hand side. And it's an if and only if. So that says that the left-hand side and the right-hand side expressions define sets with the same set of points. This holds for all x. And it turns out that the proof is going to follow by analogy to a propositional formula that we're going to make use of in the proof. That was a propositional equivalence that we proved in an earlier talk, namely that OR distributes over AND. So P OR Q AND R is equivalent to P OR Q AND P OR R.
So you can see this equivalence in purple has the same structure as the set-theoretic equality in blue, except that union's replaced by OR, intersection's replaced by AND, and the set variables A, B, C are replaced by the propositional variables P, Q, R. So let's just remember that we've already proved this propositional equivalence, and we're going to make use of it in the middle of this proof that these two sets are equal. So again, we said we were going to prove the two sets are equal by showing they have the same points. So here's the proof. It's going to be a lovely if and only if argument the whole way. So looking at the left-hand side, a point x is in A union B intersection C by definition of union if and only if x is in A OR x is in B intersection C. I've just applied the definition of union there. OK. Now, let's look at this expression. x is in B intersection C. That's the same as x is in B AND x is in C, again, just using the definition of intersection. And now I have a propositional formula involving OR and AND and the basic assertions about sets of x is a member of one of those A's, B's, C's. Now, at this point, I can immediately apply my propositional equivalence and say that the assertion x is in A OR x is in B AND x is in C holds if and only if this expression, x is in A OR x is in B AND x is in A OR x is in C. Why is that? Well, I'm just invoking the propositional equivalence. Let's look at it. That if I think of the x is in A as proposition P-- and let's replace all the x is in A's by P-- and I think of x is in B as a Q and x is in C as an R, then I can see that the first set-theoretic assertion has the form of P OR Q AND R. And I can transform it by the propositional equivalence into P OR Q AND P OR R. And then remember what P and R are to get back to the basic set-theoretic membership assertions. So now we've just proved that x is in A OR x is in B AND x is in A OR x is in C. And that's if and only if it was in the left-hand side set.
Well, now I'm going to go back the other way. Namely, this OR, that x is in A OR x is in B, is the same as saying that x is in A union B, likewise here just by applying the definition of union. And this assertion that x is in this set AND x is in this set is the same as saying that x is in their intersection. And I've completed my proof, namely that the point was in the left-hand side if and only if it's in the right-hand side. You have to remember that that was the right-hand side of the identity. So this is a general method actually, where you can take any set-theoretic equality involving union and intersection and the operations of difference and complement that we'll talk about in a moment, and we can convert any such set-theoretic equality into a propositional equivalence so we can check that the propositional assertion is an equivalence. And from that, using this method of converting the membership statements in the set expression into a propositional combination, we can check, and automatically check, any kind of set-theoretic identity involving union, intersection, and minus. And that, in fact, is the way that automatic engines like Mathematica can prove these set-theoretic identities. So let's just for the record put down that last operation. The difference operation is the set of elements that are in A AND not in B. So we'd write it as A minus B is the set of points that are in A AND not in B, and it's illustrated by this region that's highlighted in orange. And a special case of the minus operation or the difference operation is complement. When you know the overall domain that you expect all your sets to be part of, then you can define a complement to be everything that's not in A-- the set of x such that x is not in A, where x is understood to be ranging over some domain of discourse.
So if we're going to picture that, we're looking at the whole orange region, all of the stuff that's not in A if we think of the whole slide as representing the domain of discourse D.
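Python's set type implements these operations directly, so the identity can be spot-checked on concrete sets (a check I'm adding, not the lecture's code):

```python
# | is union, & is intersection, - is difference on Python sets.
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

assert A | B == {1, 2, 3, 4}
assert A & B == {2, 3}
assert A - B == {1}

# The distributive law proved above: union distributes over intersection...
assert A | (B & C) == (A | B) & (A | C)
# ...and its dual: intersection distributes over union.
assert A & (B | C) == (A & B) | (A & C)

# Complement relative to a domain of discourse D is just a difference.
D = set(range(10))
assert D - A == {0, 4, 5, 6, 7, 8, 9}
```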
271_Partial_Orders_Video.txt
PROFESSOR: Partial orders are another way to talk about digraphs, and they offer us an interesting lesson in the idea of axiomatizing a mathematical structure and mathematical ideas. So let's begin by discussing some of the properties that we're going to use to axiomatize partial orders and digraphs. So if we think about walks in a digraph, the basic property of walks is that if you have a walk from u to v and you have a walk from v to w, then you put the two walks together and you wind up with a walk from u to w. Expressed in terms of the positive walk relation in G, what this is saying is that if u G plus v and v G plus w, then u G plus w. And that abstract property-- which I'm highlighting with the magenta box-- when you apply it to an arbitrary relation is called a transitivity property. So for a relation R on a set-- that is, the domain and co-domain of R are the same-- the property is that u R v and v R w implies u R w. And a relation that has that property is said to be transitive. And, of course, what we've just seen is that the positive path relation of any graph G is transitive. Another way to say transitivity is to read u R v as saying there's an edge from u to v. And what this says is that if there's an edge from u to v and an edge from v to w, there's an edge from u to w. Or, in other words, if there's a path of length 2, there's a path of length 1. And then by easy induction it follows that if there's a path of any positive length between two vertices, then in fact there's a path of length 1. That is, an edge between them. OK, so the basic theorem that we have to begin with is what transitivity is capturing as a property of a relation. And a relation R is transitive if and only if, in fact, R is equal to the positive walk relation for some digraph G. The proof of this is basically trivial, because you can regard R itself as a digraph, and R is then the positive path relation of that digraph.
If we look now at directed acyclic graphs, then what we have is that if there's a positive length path from a vertex u to a vertex v, then since there are no cycles in a directed acyclic graph, there can't be a path back from v to u, and that property is called asymmetry. So D plus, which is the positive path relation, in a DAG has this asymmetry property. Namely, if u can get to v by a positive length path, then it's not possible for v to get back to u by a positive length path. So, abstracted, u R v implies not v R u. That's the asymmetry property of an arbitrary relation R. And by definition of acyclic, D plus is asymmetric in a graph without cycles. OK. A strict partial order is simply a relation that has these two properties of being transitive and asymmetric. And some examples of strict partial orders are the proper containment relation on sets, which we've previously commented can be viewed as a DAG, but now we note it satisfies transitivity, and the fact that if one set's properly contained in another, the second one can't be properly contained in the first, because proper means you have something extra. The indirect prerequisite relation on MIT subjects would be another example of a strict partial order. If I'm a prerequisite of you, you can't be a prerequisite of me. And finally, the less than relation on real numbers. These are all examples of strict partial orders. And putting together the previous reasoning, what we can say is that a relation R is a strict partial order if and only if R is the positive path relation for some DAG, D. So the axioms that define strict partial order, namely transitivity and asymmetry, can be said to abstractly capture the property of a relation that it comes from a DAG. Another important property of partial orders is the idea of being path-total, or linear as some authors call it. And the simple definition of path-total is that given any two distinct elements, one is going to be bigger than the other with respect to the relation.
The most familiar example of that would be the less than relation on the reals: given any two distinct real numbers x and y, either x is less than y or y is less than x. And we take that property for granted. Now, the formal definition then is simply that if x is not equal to y, then either x R y or y R x, and a relation R that has that property is called path-total. Another way to say it is that there are no incomparable elements under R. And I've, again, highlighted with a magenta box this property, which is called path-totality. Another way to say that a relation is path-total is that the whole order looks like a chain. If you give me a bunch of elements, there's going to have to be a biggest one and then a next biggest one and so on, assuming you've given me any finite set of elements. So the basic example, again, of a path-total relation would be the bigger-than relation on numbers. And a basic example of something that would typically not be path-total would be, let's say, subset containment, where you can perfectly well have two sets, neither of which is contained in the other. So a weak partial order is a small variation of a strict partial order. That is another familiar concept, where we take the strict property, which guarantees that nothing's related to itself, and we relax it. So a weak partial order is just like a strict partial order except that the condition that there's no positive length path between an element and itself is relaxed. So, in fact, it's not only relaxed, but it's completely reversed. In a weak partial order, we insist that every element is related to itself. An example of that would be the improper containment relation: the ordinary subset relation on sets-- written now with a bar under the subset symbol-- where a is a subset of a. a is just a subset of a, not necessarily a strict subset or a proper subset, which means that, in fact, a is a subset of a.
And then less than or equal-- you put the little bar under the less than sign to indicate that equality is also a possibility-- gives you a weak partial order on the real numbers. So the property that distinguishes the weak from the strict is this property of reflexivity. A relation R on a set is reflexive if every element is related to itself-- if and only if a R a for all little a in the domain capital A. And what we can observe immediately is that the walk relation-- G star-- which includes walks of length zero, is reflexive because, by definition, there is a length zero walk from any vertex to itself. So if you're going to play with axioms, then you can reformulate asymmetry-- the idea of asymmetry except for elements being related to themselves. It's called antisymmetry, and it says that a relation R is antisymmetric if and only if it's asymmetric except for the a R a case. And more precisely, the difference between asymmetry and antisymmetry is that in asymmetry a R a is never allowed, and in antisymmetry a R a is a possibility. It's not disallowed. So an antisymmetric relation R, stated abstractly, is one where u R v implies not v R u for u not equal to v. So the first line is exactly the statement of asymmetry, and then I add this proviso that it only has to hold when the u and the v are not equal. That's the formal way of saying antisymmetry is the same as asymmetry except for a R a. And the walk relation in a DAG, which includes length zero walks, is antisymmetric. So weak partial orders are just what you get when you put these things together-- a weak partial order is transitive, antisymmetric, and reflexive. So in a weak partial order, we insist that every element be related to itself. Just a quick remark. Asymmetric implies nothing's related to itself. Reflexive implies everything is related to itself. And it's possible that there be some graph in which some elements are related to themselves and some not.
That would be something that was neither a strict nor a weak partial order. It would just be transitive and antisymmetric. Those don't come up much and so we don't bother to give them a name or talk about them. And, finally, the theorem that summarizes up this whole story is that R is a weak partial order if and only if R is equal to the walk relation for some DAG, including length zero walks.
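These axioms are easy to check mechanically when a relation is given as a finite set of pairs. Here is a sketch in Python (the helper names are mine), using the proper-containment example from the lecture:

```python
from itertools import chain, combinations

# The defining axioms, for a relation R given as a set of (u, v) pairs.
def transitive(R):
    return all((u, w) in R for (u, v) in R for (x, w) in R if v == x)

def asymmetric(R):
    return all((v, u) not in R for (u, v) in R)

def reflexive(R, domain):
    return all((a, a) in R for a in domain)

# Proper containment on the subsets of {1, 2, 3}: a strict partial order.
elems = [frozenset(c) for c in chain.from_iterable(
    combinations({1, 2, 3}, r) for r in range(4))]
strict = {(x, y) for x in elems for y in elems if x < y}   # < is proper subset
assert transitive(strict) and asymmetric(strict)

# Adding every self-loop gives ordinary (improper) containment:
# a weak partial order, which is transitive and reflexive.
weak = strict | {(x, x) for x in elems}
assert transitive(weak) and reflexive(weak, elems)
```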
319_Stirlings_Formula_Video.txt
PROFESSOR: Our method for estimating sums can also be used to estimate products, basically by taking logs to turn a product into a sum. And we're going to use that to come up with another important estimate of a quantity that will come up, really, very regularly, called n factorial. So n factorial is the product of the first n integers, 1 times 2 up through n minus 1 times n. In concise product notation, it's the product-- that's pi, capital pi, for product-- from i equals 1 to n of i. And its standard abbreviation is to write it as n! pronounced n factorial. So what I'd like to do is get an asymptotic estimate for n factorial. Again, n factorial is one of these quantities where there isn't any exact formula that doesn't have those ellipses in it. There's no short, fixed-size formula with basic operations that expresses n factorial. But we get a nice formula for a tight asymptotic estimate. So as I said, the first trick is to turn the product into a sum by taking logs. So log of n factorial is the log of the product of 1 through n. But a log of a product is the sum of the logs, so it's simply log of 1 plus log of 2 up through log of n. And expressed in sum notation, it's the sum from i equals 1 to n of log of i. Now, the integral method gives us a way to estimate this sum by bracketing it between the values of some integrals, namely restating the integral method for bounding sums by integrals. This time, we're looking at an increasing function because it's log of x. Let f be a weakly increasing function from the positive reals to the positive reals. I'm interested in the sum from i equals 1 to n of f of i. And I want to relate it and bound it by the integral from 1 to n of f of x, where, in this case, the particular f that we're interested in is f of x is log x. And the theorem says that with increasing functions s is bracketed between the integral plus the first term in the sum and the integral plus the last term in the sum.
Remember, since the function is weakly increasing, f of 1 is smaller than f of n. So that's the way you remember which way the bounds go. So s is between I plus f of 1 and I plus f of n by our general formula for applying integral bounds to sums. Well, what that tells us then is that the sum from 1 to n of log of i, which is what we're interested in, is bracketed below by the integral from 1 to n of log x-- plus log of 1, but that's 0-- and above by the integral from 1 to n of log of x plus the last term, which is log of n. In case you don't remember from first term calculus, the indefinite integral of log of x is x log of x over e, which you can easily check by differentiating x log x over e. ln means natural log, remember. In computer science, L-O-G, log means log to the base 2 unless you explicitly put some base on it like log, L-O-G, sub 10. So ln is the natural log from calculus. And plugging in this value for the indefinite integral of log of x and using the bounds 1, n, what we come up with is that the sum of the logs is bounded between n times log n over e and n times log n over e plus log of n. It's a pretty tight bound. What that means is that, informally speaking, the sum of the logs is about the common term plus the average of the two bracketing corrections, 0 and log n-- namely, half of log n. So we could say that the sum of logs is approximately equal-- that's a little vague, but live with it-- to n log n over e plus half of log n. Well, now, remember, I'm interested in an estimate for n factorial-- so let's exponentiate both sides. So taking e to this sum gives me a product of e to this times e to that. Well, e to the first term is really e to the log of n over e, all to the nth power, which means it's n over e to the n. And e to the second is e to the log of n to the power half, or square root of n.
So we wind up with n factorial is approximately equal to the square root of n times n over e to the n. Now, this approximately equal is imprecise. It's not asymptotically equal, because we took an arithmetic average of the bracketing terms 0 and log n to get log n over 2. In addition, it's very dangerous when you have two things that are approximately equal to exponentiate them and expect that they're still approximately equal. Often, they aren't. But nevertheless, this is a kind of heuristic derivation of the sort of asymptotic estimate we would expect: that n factorial is roughly like the square root of n times n over e to the nth power. And it turns out that this heuristic gives a pretty accurate answer. A precise approximation is that n factorial is actually asymptotically equal to the square root of 2 pi n times n over e to the n. And we're not going to prove that. It requires elementary calculus, but more than we want to take time for. And this crucial formula that we will be using very regularly to estimate the size of n factorial is called Stirling's Formula, and it's one to have on your crib sheets if you haven't memorized it.
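A quick numerical check of Stirling's Formula in Python-- the ratio of n! to the estimate tends to 1 as n grows:

```python
from math import factorial, pi, sqrt, e

# Stirling's Formula: n! ~ sqrt(2*pi*n) * (n/e)**n.
def stirling(n):
    return sqrt(2 * pi * n) * (n / e) ** n

for n in (1, 5, 10, 100):
    print(n, factorial(n) / stirling(n))   # ratios approach 1

# At n = 100 the estimate is already within about a tenth of a percent.
assert abs(factorial(100) / stirling(100) - 1) < 1e-3
```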
1117_The_Halting_Problem_Video_Optional.txt
PROFESSOR: Diagonal arguments are elegant, and infinite sets-- some people think-- are romantic. But you could legitimately ask what all this weird infinite stuff is doing in a course that's math for computer science. And the reason is that diagonal arguments turn out to play a fundamental role in the theory of computing. And what we're going to talk about now is the application of diagonal arguments to show that there are noncomputable sets, and examine a particular one. So let's look at the class of infinite binary strings. Now, we've seen that there are an uncountable number of infinite binary strings, and that's because there was a simple bijection between the infinite binary strings and the subsets of the natural numbers-- that is, the power set of N. Let's look at the infinite binary strings that we might think of and call computable strings. And what I mean by a computable string is that there's simply a procedure that will tell me what its digits are. So what I mean is that the procedure applied to argument n will return the n-th digit of the string s. That's the definition of what I mean by saying s is computable. I can compute its digits, whichever digits are needed. Now, we saw that there were only a countable number of finite binary sequences, and I mention that now because I want to think about sequences over a slightly larger alphabet: instead of 0 and 1, the 256 ASCII characters. And by the same argument, the set of finite ASCII strings is also countable. You just list them in order of length-- the same argument that we used for the binary strings. Now, the point of looking at the ASCII strings-- the 256 keyboard characters-- is that every procedure that we enter into a computer, we type in as an ASCII string. Every procedure can be represented by an ASCII string. And since there are only countably many finite ASCII strings, it follows that there are only countably many computable procedures.
Now, since in order to be a computable infinite string, there has to be a procedure which computes its digits, we can immediately conclude that there are only countably many infinite binary sequences that are computable-- only countably many computable infinite binary sequences. But I already said there are an uncountable number of those infinite binary sequences. So it has to be that there are noncomputable sequences, noncomputable infinite binary strings, that exist. So we can conclude that as a matter of fact, since the set of infinite binary strings is uncountable and the computable ones are a countable subset, there have to be an uncountable number of noncomputable infinite binary sequences. Most infinite binary sequences are actually noncomputable. OK. That's kind of an abstract thing to know. They're out there, and you can't get hold of them computationally. But the reasonable question to ask is what do they look like? And what we're going to see is that there's a sensible concrete computational problem here: given a procedure, figuring out whether it will run and return a value successfully on some argument or not is called testing the halting property of procedures. I want to know-- given a procedure and an argument that I could apply it to, does it return a value, or does something bad happen-- does it run forever or return an error? We don't get a satisfactory value out. And if it does satisfactorily return something, we say it halts. And what I'm going to argue is that the halting problem is not decidable. That is, there's no procedure which, given an input that describes a procedure, can figure out what that input is doing. Let's look at that in more detail. So let's think about string procedures because we're thinking about procedures being represented by ASCII strings. So let's think about procedures that take a string argument. So an example of a procedure P, it might be that when you apply P to the string no, it returns 2. 
When you apply it to the string albert, it returns meyer. When you apply it to this string of weird symbols, that causes an error. And you apply it to the sequence of characters what now, and it actually runs forever. These are just illustrations of the kind of behavior that some weird string procedure might exhibit. So what I want to think about is-- suppose I have an ASCII string s, a finite ASCII string. That's the one that defines this procedure P. When I'm trying to run P on the computer, I'll have to type in s in order to give the computer the definition of P to tell it what to do. And I'm going to say that s HALTS-- the string has this property called halting or HALTS-- if and only if this procedure P that s describes returns successfully when it's applied to s. This is where we're really doing a diagonal argument. We're taking the s-th object-- the procedure that's described by s-- and applying it to s. And that's kind of going down the diagonal-- s applied to s is like the n-th element of the n-th row in a pictorial diagonal argument. That's the idea that we're using here. But let's go back to the definition. A string is said to HALT if when you interpret it as the description of a procedure that takes a string argument and you apply that string procedure to that very same string, s, you successfully return. That's the halting property. And what I want to argue is that it's impossible that there could be a procedure Q that decided the property HALTS of strings. That is to say, Q applied to a string s returns yes if s does return successfully-- if s HALTS. And it returns no otherwise. Q applied to s will say no if s runs forever or if s has a type error or s does anything other than successfully return a value. Let's suppose, for the sake of contradiction, that there was this HALTS decider. I claim there can't be such a Q. For the sake of contradiction, let's assume there was one. Then this is the trick that I'm going to do. 
I'm going to modify Q to act as though it was complementing the diagonal. More precisely, this is what I'm going to do with Q. I'm going to modify Q to be another procedure Q prime, which just behaves a little bit differently. Namely, Q prime of s returns yes when Q of s returns no, and Q prime of s returns nothing-- that is, it doesn't HALT-- if Q of s returns yes. So Q prime is like complementing the bits on the diagonal, but here's the precise definition. Q of s says no. Q prime of s says yes. Q of s says yes. s HALTS successfully. Q prime then does not HALT successfully. It returns nothing at all. Let's go crank through the consequences of these definitions. So s HALTS means Q prime of s returns nothing. That was the way we defined Q prime of s. Now, let's let t be the text for Q prime. If Q was a procedure, then surely we can tweak this procedure Q to get the procedure Q prime. So Q prime will have a text that describes it. It'll be the ASCII string that defines Q prime. Let's let t be that ASCII string. What do we have? Then by definition of HALTS, t HALTS if and only if the procedure that t describes-- namely Q prime-- applied to t returns a value successfully. OK? Now by definition of Q prime however, Q prime was the thing that on t returned a value successfully if and only if it was not the case that t HALTS. So if you put those two things together-- that is, we're looking at t HALTS if and only if Q prime of t returns, and Q prime of t returns a value successfully if and only if not t HALTS-- then put the two together, and we have a contradiction. t HALTS if and only if t doesn't HALT. And that contradiction says that our original hypothesis that we had a Q that would decide the halting problem can't be right. It's impossible to write a procedure that determines of strings whether they describe a procedure that HALTS when applied to itself. OK. That at least gives us some concrete problem that we can say is not something that a computer can do. 
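The construction of Q prime can be sketched in Python. This is only wiring under the assumption-- for contradiction-- that a decider q exists; the function names are mine:

```python
# Given a supposed HALTS decider q -- where q(s) == "yes" iff the
# procedure described by string s halts when applied to s -- build
# the diagonal procedure Q prime. No such q can actually exist; this
# sketch just shows how the construction is wired.

def make_q_prime(q):
    def q_prime(s):
        if q(s) == "no":
            return "yes"     # s doesn't halt on itself: return a value
        while True:          # s halts on itself: run forever
            pass
    return q_prime
```

Applying q_prime to its own text t then forces the lecture's contradiction: t HALTS if and only if t doesn't HALT.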
Even though it's a very well defined, clear question, it's just not possible to get a computing procedure that will, on an arbitrary string, figure out the right answer. Any program applied to strings that's trying to do this job will either give the wrong answer, or, if it never gives a wrong answer, it doesn't give an answer at all on some strings. All right. Well, you could say that I don't really care very much about whether a program HALTS or not. So let's look at how to apply the same reasoning-- or more precisely, as a corollary of the fact that the halting problem is not computable, let's talk about something that sounds closer to a practical interest, namely type-checking. So I want to think about the type-checking problem. And what I want to say is that in fact, there's no procedure that type-checks procedures perfectly. So what I mean is that I want to be able to write a program that will look at a program text, an ASCII string that describes a procedure, and figure out whether that ASCII string, if you ran it, would cause a run-time type error. That's what type-checkers are supposed to do. They're supposed to check your program, figure out whether the program will cause a run-time type error. If so, it reports it. If not, it says, this program is safe. Other things may go wrong, but it's not going to commit a run-time type error. So let's suppose that I had such a type-checking procedure C. So what that means is that for program text s, C of s returns yes if running s would cause a run-time type error. And C of s returns no-- the output string no-- otherwise, if s would not cause a run-time type error. In other words, s is safe. All right. Now, what I claim is that if you give me C-- if I have a procedure that's this perfect type-checker-- I can use C to build a tester for HALTS, which we said is impossible. So how would I use C to get a HALTS tester, H? Here's how. I'm going to tell you how to compute H of s. 
I'm describing the procedure that this tester H carries out on argument s. And what it does is given argument s, it's going to construct a new program that's a small modification of s. It's going to construct this new program s prime that acts like an interpreter for s. So s is a computer program or a procedure. I want to know whether if you just run it, it'll HALT or not. I'm going to tweak it a little bit so that s prime acts like s but in a slightly modified way. And here's how s prime works. s prime is going to be an interpreter that's simulating step-by-step how s behaves. But at the moment that it discovers that s is about to commit a run-time type error-- that the next instruction that s prime would execute in simulating s was going to be a run-time type error-- s prime would just skip it. And who knows what the consequences of skipping it will be, but it'll skip it and just keep going. OK. If s prime in simulating program s discovers that in fact s returns successfully-- that is, s HALTS-- then s prime will purposely make a type error. So let's think about what that means. Well, actually let me just wrap up what the definition of H is. So the way H works is given input s, it constructs the program s prime and applies the type-checker C to s prime and returns the same value that C does. So what we can figure out by these definitions is that s HALTS-- the string s is a halting string-- if and only if the string s prime makes a run-time type error. Because remember, the interpreter, which is what s prime was behaving like, was simulating what s did. And if s was going to HALT successfully, s prime makes a run-time type error. That means that C is going to say yes to s prime-- yes, it has a run-time type error. And by definition of H, that means that H of s says yes, because H of s constructed s prime and applied C to it. OK. On the other hand, if s does not HALT, that means that something else goes wrong with s. It's not going to successfully return. 
Then s prime-- when it's simulating s-- is never going to make a run-time type error because that's the way s prime goes. Whenever it detects that there would be about to be a run-time type error, it skips it. So s prime is likely to keep running forever because it's simulating this program s that doesn't HALT, but it won't make a type error. And that means that C of s prime is going to say no-- no type error. And H of s is going to say no. And that means that when s does not HALT, H of s properly says no. In other words, I've just walked through the argument that this procedure H that I've described actually is a decider for HALTS. And that's a contradiction. The H that I gave you would solve the halting problem if there was a perfect type-checker, and there can't be a halting problem decider. So there can't be a perfect type checker. C must make a mistake. It can't accurately predict run-time errors. And that is an example of how you reason from this kind of contrived halting problem that's sort of self-referential whether the string procedure applied to its own definition HALTS or not. And we can apply it to all sorts of questions and properties of procedures that we really care about. In fact, the same reasoning really shows that it's not just type-checking. That's a kind of arbitrary example, but there's more or less no perfect checker for any kind of property that procedure outcomes might exhibit. Which is why theoretical computer scientists interested in the theory of computation have great respect and interest in diagonal arguments because they crystallize a whole set of absolutely logical, intrinsic limitations on the power of computation.
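The reduction above can be sketched as wiring in Python. Both `c` (the supposed perfect type-checker) and `transform` (the s-to-s-prime program transformation) are hypothetical interfaces of my own naming that cannot actually be implemented; only their composition is shown:

```python
# If a perfect type-checker c existed, a HALTS tester h could be built:
# transform(s) would produce the interpreter s' that skips s's type
# errors and deliberately commits one exactly when s halts, so c(s')
# answers the halting question. Both arguments are stand-ins, not real
# implementations -- which is the point of the impossibility proof.

def make_halts_tester(c, transform):
    def h(s):
        s_prime = transform(s)   # s' makes a type error iff s HALTS
        return c(s_prime)        # "yes" iff s' type-errors iff s HALTS
    return h
```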
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
263_Scheduling_Video.txt
So we saw in the last video why, if you represent scheduling constraints among courses by a digraph, it's critical that that digraph in fact be a DAG. And let's now look at this scheduling issue represented by DAGs in more detail. So here's a chart of a selection of Course 6 prerequisites-- some of them obsolete, but they serve the purposes of being an illustrative example-- and the little arrows here are indicating arrows in the digraph. So what this tells me is that 18.01 is listed as an immediate prerequisite in the catalog for 6.042. 18.01 is also an immediate prerequisite of 18.02. 6.001 and 6.004 are both prerequisites of 6.033, and 6.042 of 6.046, and 6.046 of 6.840. So we're seeing here this indirect prerequisite issue that I mentioned before, which is that even though the only thing listed as a prerequisite for 6.840 in the catalog is 6.046, as a matter of fact in order to take 6.046 you have to have taken 6.042. So 6.042 is an indirect prerequisite of 6.840. So in terms of graph language and path language, a subject u is an indirect prerequisite of v when there is a positive length path from u to v in the digraph that describes the prerequisite structure among the classes. It simply means-- using our notation where R plus is the positive length path relation of a digraph or a binary relation R-- it simply means u R plus v, which is read as there is a positive length path from u to v. Now, a key idea that we're going to be examining in learning how to do scheduling is the idea of a minimal subject. So the definition of a minimal subject is a subject that has no prerequisites, no arrows in, a freshman subject. So nothing comes in. There are three examples of subjects with no prerequisites in the preceding chart, namely 18.01, 8.02, and 6.001. Let me say a word about where this funny terminology minimal comes from. 
It's because another way to talk about DAGs is in terms of things that are like order relations called partial orders, which we'll be looking at shortly. And so you think of the later subjects as being bigger than the earlier subjects. So a minimal subject is one where there is nothing less than it. Now, there might be several minimal subjects, because it might be that neither one of them is less than the other, but there's nothing less than 18.01. There's no other subject that you have to take before 18.01. So that's the definition of minimal. Nothing smaller. Now, you could ask what's a minimum, which you may be more familiar with. A minimum means that not only is there nothing before it, but it comes before everything else. It would be the earliest of all possible subjects in the indirect prerequisite chain. There isn't any in this example, but there actually used to be one at MIT. For a while, we experimented with giving an orientation week summer assignment, that is, an assignment over the summer for newly admitted students in order for them to take a subject during orientation week in which they discussed some book that they had all been assigned to read beforehand. Seemed like a great idea to kind of pull the freshman community together, but it turned out to be unsustainable because they couldn't find enough faculty and others willing to conduct these seminars. So MIT stopped having a minimum subject. So let's look at the prerequisites again, and discuss how to do a scheduling. And the first thing we're going to do in the schedule is, as I say, identify the minimal elements. There are the three of them that we mentioned. And we're going to start by deciding that we'll take those three in the first term. So we're going to be operating with basically what's called a greedy strategy. We're going to take as many things as we possibly can take at any term given the constraints. 
So we can take all the freshman subjects in our first term because they have no prerequisites. Well, the next step, then, is just get rid of them because they're scheduled already. So we can get rid of all those occurrences of 18.01, 8.02, and 6.001-- not only the subjects themselves, but the other occurrences as well here, where 18.01 is a prerequisite for things. So they're all gone, and we get a simplified diagram where we've removed the minimal elements. Now in the new diagram, there are things that didn't use to be minimal before but are minimal now. These are the new minimal elements, and we can identify those. Here are five subjects-- four here and one there-- that now have no more prerequisites. These are kind of the second level minimal elements, and we're going to schedule them next. So those are all the subjects that we can possibly take after we've taken the first set of minimal subjects. They're the second level minimals. And we'll schedule them in the next term. This is our five subject second term schedule. Likewise, you delete these guys, and then you discover that 6.046 and 6.004 are the resulting minimal ones, which it's now possible to take because all their prerequisites have been satisfied. So we schedule them in the third term, 6.840 and 6.033, by the same reasoning, in the fourth term, and 6.857 in the fifth term. There is our complete term schedule obtained in this particular way. There's, of course, many other ways to schedule it, but this is a particular orderly way where the strategy, again, is greedy. You take as many things as you possibly can take in a given term. Now, there are some concepts that come up when you're talking about schedules that are worth introducing. So one of them is an antichain. An antichain-- in this particular example-- means a set of subjects where there are no indirect prerequisites among them. They can be taken in any order, because it doesn't matter whether you've taken one or not when you're thinking about taking the others. 
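The greedy, term-by-term strategy can be sketched in Python. The prerequisite edges below are my own simplified subset of the chart, just enough to show the level-by-level behavior:

```python
# Greedy scheduling: each term, take every not-yet-scheduled subject
# all of whose prerequisites are already scheduled (the current
# minimal elements), then repeat on what's left.

prereqs = {
    "18.01": [], "8.02": [], "6.001": [],   # minimal (freshman) subjects
    "18.02": ["18.01"], "6.042": ["18.01"],
    "6.046": ["6.042"], "6.840": ["6.046"],
}

def greedy_schedule(prereqs):
    done, terms = set(), []
    while len(done) < len(prereqs):
        term = sorted(s for s, pre in prereqs.items()
                      if s not in done and all(p in done for p in pre))
        terms.append(term)
        done.update(term)
    return terms
```

On this subset the schedule comes out to four terms, with all three minimal subjects taken in the first term.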
In technical language, again motivated by the idea of thinking of there being a path as though it was less than or equal to something, these are elements that are incomparable. Neither one is less than or equal to another. So in terms of the path relation, u is incomparable to v if and only if there is no path from u to v of positive length and there's no positive length path from v to u. So let's look at some antichains-- and part of the point of defining them is that we have chosen antichains as our schedule for each term. So the freshman subjects with no prerequisites, clearly there's no path among them, because there is no path to them at all. So they are an antichain. The next level we chose were the second level minimal elements, which only had as prerequisites the original minimal elements, and so certainly none of them was a prerequisite of the others. So that's another example of an antichain. And of course the third level and the fourth level and the fifth level are antichains. But not all antichains are there in our schedule. So for example, here is a diagonally lying antichain. 6.840, 6.004, and 6.034 have no paths between them. So in fact it's possible to take them simultaneously, because you could have taken all their prerequisites in the upper left here and then take the three of them. So that's what an antichain means here. So the technical definition is no path between any two of them, but in terms of the scheduling of courses, it means it's possible to take them in the same term if you've satisfied all their prerequisites, which it is possible to do. So let's ask about the various patterns of scheduling that are possible. We've discovered this particular greedy one, where we take as many things as we can each term. But suppose that I was constrained to only take one subject per term. I was going to-- I have an outside job, I'm too busy to take more than one class a term, and if MIT will let me dawdle so long, that's what I'd like to do. 
So can I do this? Yeah, well sure. Just schedule all the minimal elements first in any order, one, two, three. And then schedule the five second level minimal elements next, and the third level, and so on. And it's perfectly possible, then, to modify the schedule that we found into a schedule in which you only take one subject per term, and of course you only take a subject after you've taken all of its indirect and direct prerequisites. This is called a topological sort. Again, the sorting word comes from the motivation of thinking of there being a path as like a less than or equal to relation. So we're sorting things in order of increasing size. 18.01 would be, in this case, a smallest element and 6.857 a biggest in this list of elements. A chain is kind of technically, literally, a thing called the dual of an antichain. A chain is a sequence of subjects that must be taken in order. That is, these are subjects where for any two of them, you know which one has to come first. That is, between any two of them there is a path in one way or the other. Now of course, it's a DAG, so there can't be paths in both directions. So a chain is simply a set of comparable elements, which implies that there's an order in which they have to be taken. So here are some chains. This one was shown pictorially as a vertical chain with five courses in it. Here's a vertical chain of four. And not all of them are vertical. Here's a chain where you have to take 18.01 before you take 18.03 before you take 6.004. So they form a chain. It's important to realize that this is a chain with five subjects in it, but a chain doesn't have to have every possible element that could be in it. It's still a chain even if it's only got these three subjects, because there's a path from 8.02 to 6.004 and a path from 6.004 to 6.857. But maximum length chains, chains that are as full as possible, are important theoretically. And so this in particular is a maximal length chain. 
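A one-subject-per-term ordering is exactly a topological sort, which Python's standard library can produce directly. The edge set here is again a small assumed subset of the chart:

```python
# graphlib.TopologicalSorter takes a mapping from each node to its
# predecessors (here, each subject to its prerequisites) and yields
# an order in which every subject follows all of its prerequisites.
from graphlib import TopologicalSorter

prereqs = {
    "18.02": {"18.01"},
    "6.042": {"18.01"},
    "6.046": {"6.042"},
    "6.840": {"6.046"},
}
order = list(TopologicalSorter(prereqs).static_order())
```

Many valid topological orders exist; static_order simply commits to one of them.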
The longest chain here is of length 5. Now, it's not the only one. There's another chain of length 5 here if you look for it. But no chain is of length longer than 5 and there is one of length 5, and that leads us to the question of how many terms is it necessarily going to take to graduate. Well, we saw that you can graduate in five. But given that there's a maximum chain of length 5, it means that you can't do it in fewer, because those five courses have to be taken consecutively. The third has to be taken in a term after the first two have been taken. The second has to be taken after the first. If you have a chain of any size, actually, the number of terms to graduate has to be at least as big as that chain, which means it has to be at least as many terms as a maximum size chain. So five terms are necessary, and we saw using our minimal strategy of being greedy that you can always do it in maximum chain length. So five are also sufficient. This is providing that you can take an unlimited number of subjects per term. Remember our strategy to graduate in five terms was to take as many subjects as we possibly could each term. So there's the sufficient way to take subjects to graduate in five terms. And of course, one consequence is that in my second term freshman year, I was taking five subjects because it was possible. But that leaves me with a kind of heavily loaded term compared to-- here's a term with two subjects, and there's a term with only one subject at the very end. So it's possible, in fact, to somewhat adjust the term load. Let's just shift taking 18.02 to the third term. It's perfectly feasible to do that, because I will have satisfied all the prerequisites of 18.02 after the first term, but I don't have to take it in the second term. Let's shift it off. So now I've lightened the load in the second term to four subjects, somewhat increasing the load-- I had to do it somewhere-- in the third term to three subjects. 
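The lower bound from a maximum chain can be computed by a longest-path recursion over the DAG. A minimal sketch on an assumed four-subject chain of my own choosing:

```python
# The length of a longest chain ending at a subject is 1 plus the
# longest chain ending at any of its prerequisites; the maximum over
# all subjects is the minimum possible number of terms to graduate.
import functools

prereqs = {
    "18.01": [], "6.042": ["18.01"], "6.046": ["6.042"], "6.840": ["6.046"],
}

@functools.cache
def chain_ending_at(s):
    return 1 + max((chain_ending_at(p) for p in prereqs[s]), default=0)

min_terms = max(chain_ending_at(s) for s in prereqs)
```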
So now I have to take no more than four subjects a term. And as a matter of fact, if you fiddle, you can actually find a graduating schedule in which you can only take three subjects per term. And we will examine what's the minimum number of subjects per term in the next segment.
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
241_RSA_Public_Key_Encryption_Video.txt
The RSA cryptosystem is one of the lovely and really important applications of number theory in computer science. So let's start talking about it. The RSA cryptosystem is what is known as a public key cryptosystem, which has the following really amazing properties-- namely, anyone can send a secret encrypted message to a designated receiver. This is without there being any prior contact, using only publicly available information. Now, if you think about that, it's really terrific because it means that you can send a secret message to Amazon that nobody but Amazon can read even though the entire world knows what you know and can see what you sent to Amazon. And Amazon knows that it's the only one that can decrypt the message you sent. This in fact is hard to believe if you think about it. It sounds paradoxical. How can secrecy be possible using only public info? And in fact, the existence of this public key cryptosystem has some genuinely paradoxical consequences, which are kind of a mind bender. So let me tell you about one of them. I don't know if you've heard of mental chess, but it's a standard thing in the chess world. Chess masters are so talented and have such deep insight into the game that they don't need a chessboard, and they don't need chess pieces. They can just go for a walk on a country lane talking to each other and saying pawn to king 4 and knight to bishop 3 and just talking chess code and play an entire chess game that way. That's known as mental chess. It's quite impressive. In fact, the grand masters can play multiple games of mental chess against opponents who are staring at the chessboard and win the great majority of the games. Of course, these are not against other grand masters, but still. OK. So now, this is what I propose. How about playing mental poker? If you know how to play poker, we deal our cards and we bet and so on. And my only condition is that I'll deal. 
Now, that sounds like a joke and an absurd thing for you to agree to do, but it's amazing. It's actually possible. One of the famous papers of Rivest and Shamir was how to play mental poker using public key crypto. So I once tried to persuade an eminent MIT dean who's a physicist about this, and he just wouldn't believe it. He argued that it was just impossible logically. And what he was thinking about was that if you know how to compute a function, then of course you can figure out how to invert it. That is to say, if I know how to compute some function f of a number, and let's say that the function is one-to-one-- that is, an injection-- then if I know f of n, there's a unique n that it came from. So how can I not be able to find n? And it's an insight of computer science and complexity theory that says it's quite possible. It's not that you can't find the n that produced f of n. It's that the search for it will be prohibitive. There are, in short, one-way functions. That is, functions that are easy to compute in one direction but hard to invert. They're easy to compute but hard to invert. In particular, we're thinking about multiplying and factoring. It's an observation that it's easy to compute the product of two large prime numbers. We all know how to multiply. And in fact, there are faster ways to multiply than you know. But the current state of our knowledge of number theory and complexity theory is that given a number n that happens to be the product of two primes, it seems to be hopelessly hard in general to factor n into the components p and q. Now, this is an open problem. It's similar to the p equals np question-- that famous open problem. It's actually weaker-- it's quite possible that you could factor efficiently, and p would still not equal np. But nevertheless, it's the same kind of problem. And more generally, the existence of one-way functions is closely related to that p equals np question. 
Nevertheless, even though it's an open problem and theoretically has not been settled either way, it's widely believed-- the banks, the governments, and the commercial world have really bet the family jewels on the difficulty of factoring when they use the RSA protocol. So I like to make the joke that my most important contribution to MIT was being involved in the hiring of R, S, and A. So this is S, Adi Shamir, R, Ron Rivest, and A, Len Adleman, back in the late '70s when they first came up with these ideas. So let's look at the way this RSA protocol actually works. So here's what happens. To begin with, you have to make some information public so that people can communicate with you. We're looking at two players here. There's a receiver who's going to get encrypted messages, and there's a sender who is trying to send an encrypted message to the receiver. So what the receiver does beforehand is generate two primes, p and q. Now, in practice, you want these to be pretty big primes-- hundreds of digits. And we'll examine in a moment the question of how you find them. But the receiver's job is to find two quite substantial large primes, p and q, chosen more or less randomly, because if you have any kind of predictable procedure for how you got them, that would be a vulnerability. But if you just choose them at random, then there are enough primes in the hundreds of digits that it's hopeless that people would guess which one you wound up with. OK. What you do to begin with is multiply p and q together, which is easy to do. Let's call that number n. And now the other thing the receiver is going to do is find a number e that's relatively prime to this peculiar number p minus 1 times q minus 1. Now as a hint, you might notice that p minus 1 times q minus 1 is in fact Euler's function of n-- phi of n. But for now, we don't need to understand that this is Euler's function. It's just the recipe of what the receiver has to do. 
Find a number e that's relatively prime to p minus 1, q minus 1. Again, you don't want e to be too small, and we'll discuss in a moment how you find such an e. But the receiver's job is to find such an e. This pair of numbers e and n will be the public key, which the receiver publishes widely where it can easily be found by anyone who cares to look for it. Basically, there's a phone directory where if you want to know how to send somebody a secret message, you look them up, and you find the receiver's name in there. And then you see his public e and n, and that's what you use to send him a message. Now, how do you use it to send him a message? Well, I'll explain that in a minute, but let's look at one more thing that the receiver needs to do to set himself up. The receiver is going to find an inverse of this number e that he's published-- the part of his public key-- modulo p minus 1, q minus 1. That is, this e, since it's relatively prime to p minus 1, q minus 1, will have an inverse in Z star p minus 1, q minus 1. Let's let that inverse be d. And of course, we know how to find d because you can do that with the Pulverizer. d is the private key. That's this crucial piece of information that the receiver has and that the receiver is not going to tell anybody. Only the receiver knows that, because the receiver chose the p and the q and the e more or less randomly-- maybe even as randomly as they can manage-- and then they find the d. And that's their secret. OK. That's what the receiver does. How does the sender send a message? Well, to send a message, what the sender wants to do is choose a message that is in fact a number in the range from 1 to n, where-- we're thinking again of n, if it's a product of two primes of a couple of hundred digits each, then the product is around 400 digits. And so you can pick any message m that can be represented by a 400 digit number. Now, there's a lot of messages that will fit within 400 digits. 
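The receiver's setup can be sketched end to end with toy numbers. The primes p = 61 and q = 53 are my own illustrative choice (far too small for real security), and Python's three-argument pow computes the modular inverse that the Pulverizer would:

```python
# Toy RSA key generation: n = pq is public, and the private key d is
# the inverse of e modulo (p-1)(q-1). Real keys use primes hundreds
# of digits long.
import math

p, q = 61, 53
n = p * q                   # public modulus, here 3233
m = (p - 1) * (q - 1)       # (p-1)(q-1) = 3120
e = 17                      # public exponent
assert math.gcd(e, m) == 1  # e must be relatively prime to (p-1)(q-1)
d = pow(e, -1, m)           # private key: inverse of e mod (p-1)(q-1)
```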
And of course, if it's bigger, you just break it up into 400 digit pieces. So that's the kind of message you're going to send. So the message is going to be a number in this range from 1 to n. And what the sender is going to do is look up the public key e and the other part of the public key n and raise the secret message to the power e in Zn. So we're going to compute m to the e in Zn and send that encoded message m hat. So m hat is what we think of as the encrypted version of the message m. So then we have the problem: if that's what the sender sends to the receiver, how does the receiver decode the m hat? And the answer is the receiver just computes m hat to the power d-- the secret key-- also in the ring Zn. And the claim is that in fact, that's equal to m. Now, you can check in a class problem, and it's easy to see, that the reason why that method of decrypting works is precisely an application of Euler's theorem-- at least when m happens to be relatively prime to n. Now, the odds of finding an m that's not relatively prime to n are basically negligible because if you found such an m, it would enable you to factor n. And we believe factoring is very hard. But in fact, it actually works for all m, which is a nice theoretical result. And you'll work this out in a class problem. OK. That's how it works. The receiver publishes e and n, keeps a secret key d. The sender exponentiates the message to the power e. The receiver simply decodes by raising the received message to the power d and reads off what the original was. OK. So we need to think about the feasibility of all of this because we believe that it's impossible to decrypt, but there's a lot of other stuff going on there that the players have to be able to perform. And let's examine what their responsibilities and abilities have to be. So the receiver to begin with has to be able to find large primes. And how on earth do they do that? 
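The round trip with the toy key above (p = 61, q = 53, so n = 3233, e = 17, d = 2753 -- small illustrative numbers, not real parameters) looks like this:

```python
# Sender: encrypt by raising the message to the power e in Zn.
# Receiver: decrypt by raising the received value to the power d in Zn.
n, e, d = 3233, 17, 2753
msg = 65                         # a "message" in the range 1..n
m_hat = pow(msg, e, n)           # encrypted message the sender transmits
recovered = pow(m_hat, d, n)     # receiver's decryption recovers msg
```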
Well, without going into too much detail, we can make the remark that there are lots of primes. That is to say, by appealing to the prime number theorem, we know that among the numbers up to n, about one in log n of them is going to be prime, so that you don't have to go too long before you stumble upon a random prime. That is, if you're dealing with a 200 digit n and you're searching for a prime of around that size, you're not going to have to search more than a few hundred numbers before you're likely to stumble on a prime. And of course, how do you know that you stumbled on a prime? Well, you need to be able to check whether a number is prime or not-- and efficiently-- in order for this whole thing to be feasible. So we'll have to discuss that briefly-- how do you test whether or not a number is prime in an efficient way? The other thing the receiver has to do is find an e that's relatively prime to p minus 1, q minus 1. But that's easy. Well, it's easy because first of all, if you just kind of randomly guess a medium sized e and then search consecutively from some random number you've chosen somewhere in the middle of the interval up to p minus 1, q minus 1, again, you're very likely to find in a few steps a number e that is relatively prime to p minus 1, q minus 1. How do you recognize that it's relatively prime? Well, you just compute the GCD, which we know how to do using Euclid's algorithm. So that's really quite efficient. Recognizing that it's relatively prime is easy; you just don't have to search very many numbers until you stumble on an e. OK. The other thing you have to do is find the d that is an inverse of e modulo p minus 1, q minus 1. And again, that is the extended Euclidean algorithm, the extended GCD, namely the Pulverizer. So those are the pieces that the receiver has to do. Now, let's look at this a little bit more and think about the information about the primes.
So the famous theorem about the primes is their density, which is, if you let pi of n be the number of primes less than or equal to n, then it's a deep theorem of number theory that pi of n actually approaches a limit in an asymptotic sense-- which we'll discuss in more detail-- that pi of n as n grows gets to be very close to n over log n. That's the natural log of n. Now, that's a deep theorem. But in fact, if we want a self-contained treatment for our purposes, there's an exercise that will be in the text where we can derive Chebyshev's bound, which is weaker than the tight prime number theorem. But Chebyshev's bound can be proved by more elementary means that are within our own ability at this point with the number theory we have-- to be able to show that n over 4 log n is a lower bound on pi of n. So basically that says that if you're dealing with numbers of size n, which means they're of length log n, a few hundred digits, then you only have to search maybe 1,000 numbers before you're very likely to stumble on a prime. And if you search 2,000 numbers, it becomes extremely likely that you'll stumble on a prime. So the primes are dense enough that we can afford to look for them, providing we can have a reasonably fast way to recognize when a number is prime. Well, one simple way that is almost perfect-- and works pragmatically pretty well-- is called the Fermat test. But let me just reemphasize this-- I got ahead of myself-- that if I'm dealing with 200 digit numbers, then about one in 1,000 is prime using just the weaker Chebyshev's bound. And that says that I don't have to search too long-- only a few thousand numbers-- to be able to find a prime. And a few thousand numbers is well within the ability of a computer to carry out, providing that the test for recognizing that a number is prime isn't too time consuming. So one naive way that really almost works as a reliable primality test is to check whether Fermat's theorem is obeyed.
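Before getting to the Fermat test, the density claims above are easy to check numerically at small scale. This is a minimal sketch, not anything from the lecture: the sieve and the sample cutoff of 100,000 are illustrative choices, but they already show pi of n sitting above Chebyshev's n over 4 log n bound and near the n over log n estimate.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: return pi(n), the number of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

n = 100_000
pi_n = primes_up_to(n)
print(pi_n, round(n / math.log(n)))     # pi(n) versus the n / ln n estimate
assert pi_n > n / (4 * math.log(n))     # Chebyshev's weaker lower bound holds
```

Even at this tiny scale, the prime number theorem's estimate is within about 10 percent of the true count.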
Fermat's theorem-- the special case of Euler's theorem-- says that if n is prime, then if I compute a number a to the n minus 1, it's going to equal 1 in Z n. And that's going to be the case for all a that are not 0 if n is prime. Now that means that if this equality fails in Z n, then I immediately know n is not prime. Go on. Search for another one. OK. So suppose I'm unlucky-- or lucky-- and I choose an a to test and it turns out that a to the n minus 1 is 1. Does that mean that n is prime? Unfortunately not. It might be that I just hit an a that happened to satisfy Fermat's equation even though n was not prime. But it's not a very hard thing to prove that if n is not prime, then at least half of the numbers from 1 to n are not going to pass the Fermat test. So if half of the numbers are not going to pass the Fermat test, then what I can do is just choose a random nonzero number in the interval from 1 to n, raise it to the n minus first power, and see what happens. And if n is not prime, the probability that this random number that I've chosen fails this test is at least a half. So I try it 50 times. And if in fact 50 randomly chosen a's in the interval 1 to n all satisfy Fermat's theorem, then there's one chance in 2 to the 50th that n is not prime. That's a great bet. Leap for it. So that basically is the idea of a probabilistic primality test. Now, there's a small complication, which is that there are certain numbers n where this property that half the numbers will fail to satisfy Fermat's theorem doesn't hold. They're known as the Carmichael numbers, and they're known to be pretty sparse. So that really if you're choosing an n at random, which is kind of what we're doing when we choose random primes p and q, the likelihood that you'll stumble on a Carmichael number is another thing that you just don't have to worry about.
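The test just described is short to write down. Here's a hedged sketch: the 50-trial default mirrors the lecture's choice, and, as noted, a Carmichael number can fool it, so a True answer means "prime, up to that caveat and a 1-in-2-to-the-50 fluke."

```python
import random

def fermat_test(n, trials=50):
    """Fermat primality test. False means n is definitely composite;
    True means n passed every trial, so n is prime barring Carmichael
    numbers and a roughly 1-in-2**trials fluke."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(1, n)     # random nonzero a in [1, n)
        if pow(a, n - 1, n) != 1:      # Fermat's theorem fails in Z_n:
            return False               # a is a witness that n is composite
    return True
```

Note that `pow(a, n - 1, n)` does the exponentiation in Z n directly, so each trial is fast even for very large n.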
So really, the Fermat primality test is a plausible pragmatic test that you could use to pretty reliably detect whether or not a number was prime-- which was the last component of the powers that we needed the receiver to have. OK. So now we come to the question of why do we believe that the RSA protocol is secure? And the first thing to notice is that if you could factor n, then it's easy to break. Because if you can factor n, then you have the p and the q. And that means you know what p minus 1 times q minus 1 is. And therefore you can use the Pulverizer in exactly the same way the receiver did to find the inverse of the public key e. You could find d easily. So surely if you can factor, then RSA breaks. No question about that. What about the converse? Well, what you can prove-- and there's an argument that's sketched, not fully, in a class problem in the book-- is that if I could find the private key d, then in fact, I can also factor n. So if I believe that factoring is hard, then in fact finding the secret key is also hard. And we can be confident that our secret key is not going to be found, even given the public key. Now, unfortunately this is not the strongest kind of security guarantee you'd like, because there's a logical possibility that you might be able to decrypt messages without knowing the secret key. Maybe there's some other work-around whereby you can decrypt the secret message m hat by a method other than raising it to the dth power. And what you'd really like is a theorem of security that said that breaking RSA-- reading RSA messages by any means whatsoever-- would be as hard as factoring. That's not known for RSA. It's an open problem. And so RSA doesn't have the theoretically most desirable security assurance, but we really believe in it. And the reason we really believe in it is that for 100 or more years, mathematicians and number theorists have been trying to find efficient ways to factor.
And more pragmatically, the most sophisticated cryptographers and decoders in the world using the most powerful networks of supercomputers have been attacking RSA for 35 years and have yet to crack it. Now, the truth is that in the course of the 35 years, various kinds of glitches were found that required some added rules about how you found the p and the q and how you found the e, but they were easily identified and fixed. And RSA really is a robust public key encryption method that has withstood attack for all these years. That's why we believe in it.
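Putting the pieces together, here is a toy run of the whole protocol. The primes 61 and 53, the exponent 7, and the message 1042 are arbitrary illustrative choices, absurdly small and totally insecure; real keys would use primes of a couple hundred digits each. The Pulverizer appears here as the usual iterative extended Euclidean algorithm.

```python
def pulverizer(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with
    g = gcd(a, b) = a*x + b*y."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

# --- receiver's setup (toy-sized primes for illustration only) ---
p, q = 61, 53
n = p * q                        # public modulus, n = 3233
t = (p - 1) * (q - 1)            # (p-1)(q-1) = 3120
e = 7                            # public exponent, relatively prime to t
g, x, _ = pulverizer(e, t)
assert g == 1                    # GCD check, done with the Pulverizer
d = x % t                        # private key: inverse of e mod (p-1)(q-1)

# --- sender encrypts, receiver decrypts ---
m = 1042                         # message, a number in [1, n)
m_hat = pow(m, e, n)             # sender computes m^e in Z_n
assert pow(m_hat, d, n) == m     # receiver computes m_hat^d and recovers m
```

The last assertion is exactly the claim the lecture attributes to Euler's theorem: raising to the e-th and then the d-th power in Z n gets you back where you started.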
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
441_Bigger_Number_Game_Video.txt
ALBERT MEYER: Today's topic is random variables. Random variables are an absolutely fundamental concept in probability theory. But before we get into officially defining them, let's start off with an example that, in fact, is a game, because that's a fun way to start. So we're going to play the bigger number game, and here's how it works. There are two teams, and Team 1 has the task of picking two different integers between 0 and 7 inclusive, and they write one integer on one piece of paper and the other integer on the other piece of paper. They turn the two pieces of paper face down so the numbers are not visible, and the other team then sees these two pieces of paper, whose other sides have different numbers written on them, sitting on the table. What Team 2 then does is picks one of the pieces of paper and turns it over and looks at the number on it. And then, based on what that number is, they make a decision: stick with the number they have or switch to the other unknown number on the face down piece of paper. And that'll be their final number. And the game is that Team 2 wins if they wind up with the larger number. So they're going to look at the number on the paper that they expose, and they're going to try to decide whether it looks like a big number or a little number. If it looks like a big number, they'll stick with it. If it looks like a little number, they'll switch to the other one that they hope is larger. So which team do you think has an advantage here? Of course, if you've read the notes, you know. But if you haven't been exposed to this before, it's not really so obvious. And what we encourage, and what we used to do when we ran this in real time in classes, is that we would have students in teams split their team in half-- one would be Team 1 and the other would be Team 2-- and they'd play the game a few times, see if they could figure out who had the advantage.
And if you have the opportunity, this might be a good moment to stop this video and try playing the game with some friends if they're around. Otherwise, let's just proceed and see how it all works. So this is the strategy Team 2 is going to adopt. They're going to take this idea about big and small that I mentioned and act on it in a methodical way. So they're going to pick a paper to expose, giving each paper equal probability. So that guarantees that they have a 50/50 chance of picking the big number and a 50/50 chance of picking the little number. Whatever ingenuity Team 1 tried to use on which piece of paper was on the left and which was on the right, it doesn't really matter if Team 2 simply picks a piece of paper at random. There's no way that Team 1 can try to fake out Team 2 on where they put the number. OK. The next step is that Team 2 is going to decide whether the number that they can see, the exposed number, is small. And if so, they'll switch, and otherwise they'll stick. So that is, they're going to define some threshold Z where being less than or equal to Z means small, and being greater than Z means large. The question is, how do they choose Z? Well, a naive thing to do would be to choose Z to be in the middle of the interval from 0 to 7. Let's say you choose Z equals 3. So there would be four numbers less than or equal to Z and four numbers greater than Z. But of course, as soon as Team 1 knew that that was your Z, what would they do? Well, they would make sure that both numbers were on the same side of Z. If your Z was 3, they would always choose their numbers to be, say, 0 and 1. And that way, your Z would tell you that you had a small number and should switch to the other one, and you'd only have a 50/50 chance of winning. So if you fixed that value of Z, Team 1 has a way of ensuring that you have no advantage. You can only win with probability 50/50. And that's true no matter what Z you take.
If Team 1 knew what your Z was, they would just make sure to pick their two numbers on the same side of your Z. And then your Z wouldn't really tell you anything. You'd switch or stick in both cases, and you'd only have a 50/50 chance of picking the right number. So what you do-- and this is where probability comes in-- is you pick Z in a way that can't be predicted or made use of by Team 1. You pick Z at random to be any number from 0 to 7, not including 7 but including 0. That is, your number is either 0, 1, 2, up through 6. And being less than or equal to Z means small, and being greater than Z means large. And when you see a small number, you'll switch, and when you see a large number, you'll stick. But what's going to be large and what's going to be small is going to vary each time you play the game, depending on what random number Z comes out to be. So let's analyze your probability if you're Team 2. What's the probability that you're going to win now? Well, let's suppose that Team 1 picks these two numbers. We don't know what they are, but they have to pick a low number that's less than a high number. So these two numbers are at least 1 apart; they can't have the same number on both pieces of paper. Otherwise, it's clear that you are not going to be able to pick the larger one-- that would be cheating. OK, so there's two different numbers. So one of them has to be less than the other. We don't know how much less-- might be a lot less, might be only one less-- but low is less than high. OK, now we can consider three cases of what happens with your strategy. The most interesting case is the middle case. That is, when your Z, which was chosen at random, happens to fall in the interval between low and high. That is, your Z is strictly less than high and greater than or equal to low. And then in that case, your Z is really guiding you correctly on what to do.
If you turn over the low card, then it's going to look low because it's less than or equal to Z, so you'll switch to the high card and win. If you turn over the high card, it's going to be greater than Z, so it'll look high and you'll know to stick with it. So in this case, you're guaranteed to win. If you were lucky enough to guess the right threshold between low and high, you're going to win. And so the probability that you win, given the middle case occurs, is 1. Now, what about the middle case? How often does that happen? Well, the difference between low and high is at least 1, so there's guaranteed to be 1 chance in 7 that your Z is going to fall between them. And it could be more if low and high are further apart, but as long as they're at least one apart, there's a 1/7 chance that you're going to fall in between them. OK. Now, in case H, that's the case where Z happens to be chosen greater than or equal to the high number that Team 1 shows. In other words, Z is at least as big as both numbers that Team 1 put on the pieces of paper. Well, in that case, Z just isn't telling you anything. So what's going to happen is that both numbers are going to look low to you, because they're both less than or equal to Z. So you'll switch. And that means that you'll win if and only if you happen to turn the low card over first. Well, that was 50/50. So the probability that you win, given that both cards are on the low side of Z, is a half. And symmetrically, if Z is less than the low card-- that is, Z is less than both cards chosen by Team 1-- then they're both going to look high, and so you'll stick. And that means that you'll win if and only if you happened to pick the high card. There's a 50/50 chance of that. So again, in this case where Z itself is low and makes both cards look high, Team 2, you win with probability 1/2.
Well, that's great, because now we can apply total probability. And what total probability tells us is that the probability that Team 2 wins is the probability that they win given case M times the probability of M, plus the probability that they win given not the middle case times the probability of not the middle case. But we figured out what these were-- well, at least inequalities on them. There's probability 1 that you'll win in the 1/7 of the time that the middle case occurs, and there's probability 1/2 that you'll win the rest of the time, the other 6/7 of the time. So you're going to win 4/7 of the time. The probability that you win playing your strategy is 4/7. It's better than 50/50. You have an advantage. And whether that was a priori obvious or not, I don't know. But I think it's kind of cool. OK, you win with probability 4/7. Now, Team 2 has the advantage. And the important thing to understand is it does not matter what Team 1 does. No matter how smart Team 1 is, Team 2 has gotten control of the situation, because they picked which piece of paper to expose at random, 50/50. So it doesn't matter what strategy Team 1 used on where they placed the numbers. And they chose Z randomly, so again, it doesn't matter what numbers Team 1 shows. Team 2 is still going to have their 1/7 chance of coming out ahead, which is enough to tip the balance in their favor. It's interesting that symmetrically, Team 1 also has a random strategy that they can use, which guarantees that no matter what Team 2 does, Team 2 wins with probability at most 4/7. So either team can force the probability that Team 2 wins to be at most 4/7 or at least 4/7. So if they both play optimally, it's going to stay at 4/7. And that's again true: no matter what Team 2 does, Team 1 can put this upper bound of 4/7 on it. So essentially we can say that the value of this game-- the probability that Team 2 wins when both play optimally-- is 4/7. OK, now what has this game got to do with anything, with our general topic of random variables?
Well, we'll be formal in a moment. But informally, a random variable is simply a number that's produced by a random process. And just to give an example before we come up with a formal definition, the threshold variable Z was a thing that took a value from 0 to 6 inclusive, each with probability 1/7. So it was producing a number by a random process that chose a number at random with equal probability. If Team 2 plays properly, picking at random which piece of paper to expose, then the number of the exposed card, or more precisely, whether the exposed card is high or low, will also be a random variable. And if Team 1 plays optimally, the number on the exposed card is going to be a random variable. That is, Team 1, in their optimal strategy that puts an upper bound of 4/7, is in fact going to choose the two numbers randomly. So the exposed card is going to wind up being another random variable-- a number produced by a random process. And likewise the number of the larger card: if Team 1 picks its larger and smaller cards randomly, it's going to be another example of a number produced by a random process. And likewise the number of the smaller card. So that's enough examples. This little game has a whole bunch of random variables appearing in it. And in the next segment, we will look again, officially, at what is the definition of a random variable.
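Before moving on, the 4/7 analysis above is easy to sanity-check by simulation. This is a sketch of Team 2's randomized strategy; the choice of 3 and 4 for the low and high numbers is arbitrary, but adjacent numbers are Team 1's best case, so the win rate should come out right at 4/7 rather than above it.

```python
import random

def team2_wins(low, high):
    """One play of the game with Team 2's randomized strategy.
    Returns True if Team 2 ends up with the high number."""
    z = random.randrange(0, 7)                # threshold Z, uniform in {0, ..., 6}
    exposed, hidden = random.choice([(low, high), (high, low)])  # 50/50 paper pick
    final = hidden if exposed <= z else exposed  # switch if it looks small, else stick
    return final == high

random.seed(2015)
trials = 200_000
wins = sum(team2_wins(3, 4) for _ in range(trials))
print(wins / trials)     # comes out close to 4/7, about 0.571
```

Running the same simulation with low and high further apart shows the win rate climbing above 4/7, matching the observation that 1/7 is only a lower bound on the middle case.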
291_Coloring_Video.txt
PROFESSOR: Graph coloring is the abstract version of a problem that arises from a bunch of conflict scheduling situations. So let's look at an example first and then define the problem. So let's think about a bunch of aircraft that have to be scheduled on the ground at jet ports or gates. Now, if two flights are on the ground at the same time, they need to be assigned to different gates, since a gate serves one airplane. And what we'd like to do is try to figure out how many different gates do we need to be able to service all the planes that might be on the ground. How many gates are needed? So let's look at a sample schedule. There are six flights here, numbered 122, 145, through 99. And the horizontal bar is, say, times during the day. And this blue block indicates that flight 122 is on the ground from, let's say, 3:00 AM to 7:00 AM, and flight 145 is on the ground at a completely disjoint time interval. So is 67. 257 is on the ground from midnight until about 6:00 AM. It does overlap with 122, and so on. So this is the information we have. And what we're trying to figure out is how many gates we need. Well, it's easy to see here that in the worst case, if you just think of this vertical green line sliding across the bar, and you look at the maximum number of blue intervals that the green line ever crosses, it's three. The largest number of planes that are on the ground at any given moment is three, which means we can get by with three gates. So we have to cope with that conflict. So abstractly, what we're going to do is assign each aircraft to be a vertex of a graph. And we're going to put an edge in to indicate not compatibility, but conflict. Compatibility was what we were looking at previously with our examples of matching. Now this line means that 306 and 145 are on the ground at the same time. They conflict. They each need a gate at the same time, and we have to serve them with different gates. And likewise, 99 and 145 are on the ground at the same time. 306 and 99.
And this was the three flights that were on the ground at the same time. And then if I fill in the graph with all the other vertices and draw an edge when two flights are on the ground at the same time, I wind up with this little graph. OK, now we can talk abstractly about the coloring problem, which is: let's assign colors to the vertices in such a way that no two adjacent vertices have the same color. Adjacent vertices should have different colors. And it should be clear from the description of how we derive this graph from the aircraft schedules that the minimum number of distinct colors needed to color the graph corresponds to the minimum number of gates needed to serve the aircraft. So let's try coloring this graph. I'll start with coloring 257 red, and 122 yellow, and 99 green. There's no loss of generality here, because these are the three that are on the ground at the same time, reflected by the fact that they're in a triangle. And I'm going to have to use three different colors, since each one is adjacent to the other two. OK, what next? Well, let's color 145 yellow. I might as well reuse it, since it's not adjacent to a yellow vertex. And then here, I've got another triangle. So if I'm not going to use an extra color, the sensible thing to do would be to color that red. But oops, I didn't do that. I used a red here. There's another triangle, I guess, that allows me to color. And then I color this black, because here I'm stuck: I'm adjacent to a yellow, a red, and a green vertex. So I have to come up with a fourth color. All right, we did it with four colors. It means that we could have gotten away with four gates. And the colors tell us which aircraft to assign to which gate. So 257 and 67 can both be assigned to the red gate, because they are not on the ground at the same time. There's no edge between them. 122 and 145 can be assigned the yellow gate, and so on. Now, this was not the smartest way to color. A better coloring is shown here.
You can check that every two adjacent vertices have different colors. And now I've done it with only three colors-- red, yellow, and green. So now there are three gates and I get a better schedule. Another example of this kind of conflict problem comes up with scheduling final exams. Two subjects conflict if a student is taking both. Because if a student's taking both, I can't have the final exams at the same time. And so I need to assign different time slots during exam period to subjects that overlap, that have a student in common. And then the question is, given this data about which pairs of subjects have a student in common, we want to know how short an exam period we can get away with. Again, it becomes a simple graph model and a coloring problem. So here, we've drawn a graph with some sample subjects. 6.042 and 18.02 have a student in common. That's what that edge means. They need to have final exams scheduled at different times. Likewise, 8.02 and 6.042 have a student in common, so they need to be scheduled at different times. On the other hand, what are some two that are not adjacent? 3.091 and 18.02 have no edge between them, which means that they can be scheduled at the same time. There's no student who's taking both 3.091 and 18.02, at least according to the data in this graph. So let's try coloring it. And again, there's a triangle. I'm going to have to use three different colors for a triangle. And here's another triangle. And to be economical, let's just reuse green. Now, here, I've got another vertex that's adjacent to three different color vertices. And so it's going to have to be colored with a fourth color. This time, it turns out that the four colors are best possible. You can check that. And it corresponds to a schedule in which 6.042 is scheduled on Monday morning at 9:00, and 6.001 is scheduled on Monday at 1:00. But 8.02 and 3.091 can both be scheduled for Tuesday 9:00 AM.
And finally, 18.02 is scheduled on Tuesday at 1:00 PM. OK, so this kind of a conflict modeling situation comes up all the time. Another place where you get these kind of compatibility graphs or incompatibility graphs that you want to color would be if you were running a zoo and you had to have separate habitats for certain kinds of species of animals that you don't want to mix together. Big fish eat little fish. It's a truism in the aquarium world. And so you need to keep big fish separate from little fish. And you don't want the tigers living together with the chimpanzees. So we could again model this problem as how many cages do we need. We create a vertex for each species and put an edge between two species that mustn't share a habitat or share a cage. Another one would be assigning different frequencies to radio stations. And again, if two radio stations are close to each other, they will interfere. So they have to be assigned to different colors or different frequencies. So now, we would be using radio stations as vertices. And radio stations that were near enough to interfere with each other would get connected by an edge, indicating that they needed to be assigned different color frequencies. And one of the classic ones is literally to color a map. If you were trying to take, say, a map of the US and assign colors to it in such a way that you never had two states that shared a border with the same color-- and this is an illustration of doing it with four colors. The question is if I give you some kind of a planar map like this, what's the minimum number of colors that will work? Now, you're allowed to have two countries share the color if they only meet at one point. But if they have a positive length boundary, they have to be different colors. 
OK, the way that this turns into a vertex coloring problem is if you take a planar graph like this-- here's just an arbitrary one-- what I can do is I'm interested in coloring the regions, the countries, with different colors, but I'll just replace each region by a vertex. So I'm going to stick a vertex in the middle of each of the regions. Notice there's an outer region here too that gets a vertex. So one, two, three, four, five, six regions, or six vertices. And then I'll simply connect two vertices when there is a positive length edge that their regions share. So that edge corresponds to the fact that there's this boundary that's shared between this region and this region. If you look at this same triangular-shaped region, it has a boundary with the outside region. So there's going to be an edge from here to the vertex that represents the outside. And there's the rest of the edges. An edge appears between two regions that share a boundary. And now, the question is coloring the countries corresponds to coloring the vertices. And we'd like to color the graph with as few colors as possible. Well, a famous result that was proved in the '70s is that every planar graph is in fact four-colorable. Now, this was first claimed to be proved in the 1850s, but in fact, the published proof was wrong. It sat around in the journal literature for decades before somebody found a bug. Or that is to say that the proof was wrong, not the result. There was a big hole in the proof that had not been adequately justified. The proof did give a correct argument for five coloring, and the four color problem was opened for over 100 years. Then in the 1970s, two mathematicians came up with a proof of the four color theorem that was very controversial because a lot of their proof required a computer program to crank through several thousand sample graphs that needed to be verified for four-colorability. 
They had an argument that showed that there could only be a few thousand counterexamples-- or rather, that if there was any graph that couldn't be four colored, it would be one of these several thousand graphs. And then they went to work on coloring these several thousand graphs, which were generated with the aid of a computer and then colored with the aid of a computer, and also by hand. This did not make the mathematical community happy, because a proof like that is essentially uncheckable. A much improved version was developed in the 1990s, but it still requires, in the end, a computer program to generate about 600 maps and verify their colorability. So it remains to find a proof of the four color theorem that you could say is humanly comprehensible without the aid of a computer. But there's no longer any doubt, really, about this theorem in the mathematical community. In general, if I take an arbitrary graph and I ask what's the minimum number of colors to color it, that's called the chromatic number of the graph. So chi of G is the minimum number of colors to color G. Let's look at some chis for different kinds of graphs. So here we have a simple cycle of length 4. And it's obvious that that can be colored with two colors-- just alternate the colors. And this in fact generalizes, by the way, to any even length cycle: the chromatic number of an even length cycle is simply two. You color the vertices alternately. On the other hand, if the cycle is of odd length, you're going to need a third color. There's no way around it, because even if you try to get by with two colors, then you color things alternately. And then when you wrap around, you discover that you can't alternately color. You're going to need a third color in order to avoid a clash. So in general, the chromatic number of an odd length cycle is 3. The complete graph on five vertices is shown here.
This is a five vertex graph in which every vertex is adjacent to the other four. And since every vertex is adjacent to the other four, you're going to need five colors. You can't do any better. They have to all have different colors. And so the chromatic number of the complete graph on n vertices where every vertex is adjacent to the other n minus 1 is n. Another simple example that comes up is if I take the cycle and I put on an axle in the middle-- we call it a wheel then. A wheel with a cycle of length of 5 around the outside, a perimeter of length 5, is called W5. And we can color it with four colors. And in general, the argument that the chromatic number for an odd length wheel is four is that I know I'm going to need three colors to color the rim. And since the axle is adjacent to everything on the rim, I'm going to need a fourth color for it. On the other hand, again, if the perimeter is even, then I can get by with three colors. One more remark about chromatic numbers is there's an easy argument that shows that if you have a graph, every one of whose vertices is at most degree k-- there are at most k other vertices adjacent to any given vertex-- then that implies that the graph is k plus 1 colorable. And the proof is really constructive and trivial. Basically, you just start coloring the vertices any way you like subject to the constraint that when you color a vertex, it's supposed to not be the same color as any of the vertices around it. But that's easy to do. Because when it's time to color some vertex, even if all the vertices around it are colored, there's only k of them. And so I will always be able to find a k plus first color to assign it and get us a satisfactory coloring. So I can get by with k plus 1 colors. Now, the general setup with colorability is that to check whether a graph is two-colorable is actually very easy. And we may talk about that in some class problems. But three-colorability dramatically changes. 
We're back in the realm of NP-complete problems. In fact, a result of a student of mine almost 40 years ago was that even if a graph is planar, where you know it can definitely be colored with four colors, determining whether or not it can be colored with three colors is as hard as satisfiability. And it is, in fact, an NP-complete problem. In fact, a proof of how you reduce satisfiability to colorability appears in a problem in the text, which we may assign as a problem set problem. So in general, finding chi of G is hard, even for three colors. Now, checking what chi of G is isn't any worse from a theoretical point of view even if chi is a very large number, although pragmatically, three-colorability will be easier to check than n-colorability. And that is our story about colorability.
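To get a feel for why finding chi of G is hard, here is a hypothetical brute-force sketch that tries every assignment of k colors for growing k. It verifies the small examples from the lecture, but its running time blows up exponentially with the number of vertices:

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Brute-force chi: try k = 1, 2, ... colorings until one works.
    Exponential in the number of vertices -- fine for toys, hopeless in general."""
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for assignment in product(range(k), repeat=len(vertices)):
            color = dict(zip(vertices, assignment))
            if all(color[u] != color[v] for u, v in edges):
                return k

cycle = lambda n: [(i, (i + 1) % n) for i in range(n)]
K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
W5 = cycle(5) + [(5, i) for i in range(5)]  # 5-cycle rim plus a hub vertex

assert chromatic_number(range(4), cycle(4)) == 2   # even cycle
assert chromatic_number(range(5), cycle(5)) == 3   # odd cycle
assert chromatic_number(range(5), K5) == 5         # complete graph
assert chromatic_number(range(6), W5) == 4         # odd wheel
```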
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
451_Expectation_Video.txt
PROFESSOR: We ask about averages all the time. And in the context of random variables, averages get abstracted into a lovely concept called the expectation of the random variable. Let's begin with a motivating example which, as is often the case, will come from gambling. So there's a game that's actually played in casinos called Carnival Dice where you have three dice, and the way you play is you pick your favorite number from 1 to 6, whatever it happens to be. And then you roll the three dice. The dice are assumed to be fair, so each one of them has a one in six chance of coming up with any given number. And then the payoff goes as follows. For every match of your favorite number, you get $1.00. And if none of your favorite-- if none of the dice show your favorite number, then you lose $1.00. OK. Let's do an example. Suppose your favorite number was five. You announce that to the house, or the dealer, and then the dice start rolling. Now if your roll happened to come up with the numbers two, three, and four, well, there's no fives there, so you've lost $1.00. On the other hand, if your rolls came out five, four, six, there's one five, you've won $1.00. If it came out five, four, five, there's two fives, you've won $2.00. And if it was all fives, you've actually won $3.00. Now real carnival dice is often played where you either win or lose $1.00 depending on whether there's any match at all, but we're playing a more generous game where, if you double match, you get $2.00. If you triple match, you get $3.00. So the basic question about this is, is this a fair game? Is it worth playing, and how can we think about that? Well, we're going to think about it probabilistically. So let's think about the probability of rolling no fives. If five is my favorite number, what's the probability that I roll none of them? Well, there's a five out of six chance that I don't roll a five on the first die, and on the second die and on the third die.
And since the die rolls are assumed to be independent, the dice are independent, the probability of no fives is 5/6 to the third, which comes out to be 125/216. I'm writing this out because we're going to put all the numbers over 216 to make them easier to compare. OK. What's the probability of one five? Well, the probability of any single sequence of die rolls with a single five is 5/6 of no five times 5/6 of no five times 1/6 of one five. And there are 3 choose 1 possible sequences of dice rolls with one five, and the others non-fives. Likewise, for two fives, there's 3 choose 2 times 5/6 to the 1, which is one way of choosing the place that does not have a five. And 1/6 times 1/6, which is the probability of getting fives in the other places. I didn't say that well, but you can get it straight. OK. The probability of three fives is the probability of 1/6 of getting a five on the first die, 1/6 of getting a five on the second die, 1/6 of getting a five on the third die. It's simply 1/6 cubed. OK, so we can easily calculate these probabilities. This is a familiar exercise. Let's put them in a chart. So what we've figured out is that 0 matches has a probability of 125 over 216. And in that case, I lose $1.00. One match turns out to have a probability of 75 out of 216, and I win $1.00. Two matches is 15 out of 216, I win $2.00. And three matches, there's one chance in 216 that I win the $3.00. So now I can ask about what do I expect to win. Suppose I play 216 games, and the games split exactly according to these probabilities. Then what I would expect is that I would wind up with 0 matches about 125 times. That was the probability of there being no matches. It was 125/216. So if I played 216 games, I expect about 125 are going to-- I'm going to win nothing. Or, I'm going to get no matches, which actually means I'll lose $1.00 on each. One match I expect about 75 times. 2 matches, 15 times. 3 matches, once.
So my average win is going to be 125 times minus 1, 75 times 1, 15 times 2 plus 1 times 3 divided by 216. So these numbers on the top were how the 216 rolls split among my choices of losing $1.00, winning $1.00, winning $2.00, and winning $3.00. And it comes out to be slightly negative. It's actually minus $0.08-- minus 17/216 of $1.00, which is about minus $0.08. So I'm losing, on the average, $0.08 per roll. This is not a fair game. It's really biased against me. And if I keep playing long enough, I'm going to find that I average out a kind of steady loss of about $0.08 a play. So we would summarize this by saying that you expect to lose $0.08, meaning that your average loss is $0.08 and you expect that that's going to be the phenomenon that comes up if you keep playing the game repeatedly and repeatedly. It's important to notice, of course, you never actually lose $0.08 on any single play. So what you-- this notion of your expecting to lose $0.08, it never happens. It's just your average loss. Notice every single play you're either going to lose $1, win $1, win $2, win $3. There's no $0.08 at all showing up. OK. So now let's abstract the expected value of a random variable R. So a random variable is this thing that probabilistically takes on different values with different probabilities. And its expected value is defined to be its average value where the different values are weighted by their probabilities. We can write this out as a precise formula. The expectation of a random variable R is defined to be the sum over all its possible values-- it doesn't indicate what the summation is, but that's over all possible values v-- of v times the probability that v comes up, the probability that R equals v. So this is the basic definition of the expected value of a random variable. 
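That arithmetic is easy to double-check by enumerating all 216 equally likely rolls. This is a sketch using Python's `fractions` module to keep the answer exact:

```python
from fractions import Fraction
from itertools import product

# Exact expected winnings in carnival dice, by enumerating all 6^3 = 216
# equally likely rolls with favorite number 5.

win = {0: -1, 1: 1, 2: 2, 3: 3}                # payoff per number of matches
total = Fraction(0)
for roll in product(range(1, 7), repeat=3):    # every equally likely roll
    matches = roll.count(5)
    total += Fraction(win[matches], 216)

print(total)                                   # -17/216, about -$0.08 per play
assert total == Fraction(-17, 216)
```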
Now let me mention here that this sum works because since we're assuming a countable sample space, R is defined on only countably many outcomes, which means it can only take countably many values. So this is a countable sum over all the possible values that R takes, because there are only countably many of them. And what we've just concluded, then, is the expected win in the carnival dice game is minus 17/216. Check this formal definition of the expectation of a random variable versus the random variable defined to be how much you win on a given play of carnival dice, and it comes out to be that average. Minus 17/216. Now there's a technical result that's useful in some proofs that says that there's another way to get the expectation. The expectation can also be expressed by saying it's the sum over all the possible outcomes in the sample space-- S is the sample space-- of the value of the random variable at that outcome times the probability of that outcome. So this is another, alternative definition compared to saying it's the sum over all the values times the probability of that value. Here, it's the sum over all the outcomes of the value of the random variable at the outcome times the probability of the outcome. It's not entirely obvious that those two definitions are equivalent. This form of the definition turns out to be technically helpful in some proofs, although outside of proofs you don't use it so much in applications. But it's not a bad exercise to prove this equivalence. So I'm going to walk you through it. But if it's boring-- it's kind of a boring series of equations on slides, and you're welcome to skip past it. It is a derivation that I expect you to be able to carry out. So let's just carry out this derivation. I'm going to prove that the expectation is equal to the sum over all the outcomes of the value of the random variable at the outcome times the probability of the outcome. And let's prove it.
In order to prove it, let's begin with one little remark that's useful. Remember that this notation R equals v describes the event that the random variable takes the value v, which by definition of an event is the set of outcomes where this property holds. So it's the set of outcomes omega where R of omega is equal to v. So let's just remember that, that brackets R equals v is the event that R is equal to v, meaning the set of outcomes where that's true. So what that tells us in particular is that the probability of R equals v is, by definition, the sum of the probabilities of the outcomes in the event. So it's the sum over all those outcomes. Now let's go back to the original definition of the expectation of R. The original definition is-- and the standard one is-- it's the sum over all the values of the value times the probability that the random variable is equal to the value. Now on the previous slide, we just had a formula for the probability that R is equal to v. It's simply the sum over all the outcomes where R is equal to v of the probability of that outcome. So I can replace that term by the sum over all the outcomes of the probability of the outcome. OK. So I'm trying to head towards an expression that's only in terms of outcomes, which is kind of the top-level strategy here. So the first thing I did was I got rid of that probability that R equals v and replaced it by the sum of all these probabilities-- of the probabilities of all the outcomes where R is v. Well, next step is I'm going to just distribute the v over the inner sum. And I get that this thing is equal to the sum, again, over all those outcomes in R equals v of v times the probability of the outcome. But look, these outcomes are the outcomes where R is equal to v. So I could replace that v by R of omega. That one slipped sideways a little bit, so let's watch that. This v is simply going to become an R of omega.
I'm still summing over the same set of omegas, but now I've gotten rid of pretty much everything but the omegas. So I've got this inner sum over all possible omegas in the event R equals v of R of omega times the probability of omega. And I'm summing over all possible v. But if I'm summing over all possible v and then all possible outcomes where R is equal to v, I wind up summing over all possible outcomes. And so I've finished the proof that the expectation of R is equal to the sum over all the outcomes of R of omega times the probability of omega. Now I'd never do a proof like this in a lecture, because I think watching a lecturer write stuff on the board, a whole bunch of symbols and manipulating equations, is really insipid and boring. Most people can't follow it anyway. I'm hoping that in the video, where you can go back if you wish and replay it and watch it more slowly, or at your own speed, the derivation will be of some value to you. But let's step back a little bit and notice some top-level technical things that we never really paid attention to in the process of doing this manipulative proof. So the top-level observation, first of all, is that this proof, like many proofs in basic foundations of probability theory and random variables, in particular, involves taking sums and rearranging the terms in the sums a lot. So the first question is, why sums? Remember here we were summing over all the possible variables, all the possible values of some random variable. Why is that a sum? Well it's a sum because we were assuming that the sample space was countable. There were only a countable number of values R of omega 0, R of omega 1, R of omega n, and so on. And so we can be sure that the sum over all the possible values of the random variable is a countable sum. It's a sum, and we don't have to worry about integrals, which is the main technical reason why we're doing discrete probability and assuming that there are only a countable number of outcomes.
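The equivalence just proved is easy to spot-check numerically on a small, made-up sample space (the outcome probabilities and the values of R below are invented for illustration):

```python
from fractions import Fraction

# Check on a toy sample space that the two expressions for expectation agree:
# sum over values v of v * Pr[R = v]  versus  sum over outcomes w of R(w) * Pr[w].

pr = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 8), 'd': Fraction(1, 8)}
R  = {'a': 3, 'b': 7, 'c': 3, 'd': 0}          # R takes the value 3 on two outcomes

by_outcomes = sum(R[w] * pr[w] for w in pr)

values = set(R.values())
by_values = sum(v * sum(pr[w] for w in pr if R[w] == v) for v in values)

assert by_outcomes == by_values == Fraction(29, 8)
```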
There's a second very important technicality that's worth mentioning. All the proofs involved rearranging terms in sums freely and without care. But that means that we're implicitly assuming that it's safe to do that, and that, in particular, the defining sum for expectations needs to be absolutely convergent. And all of these sums need to be absolutely convergent in order for that kind of rearrangement to make sense. So remember that absolute convergence means that the sum of the absolute values of all the terms in the sum converges. So if we look at this definition of expectation, it said it was the sum over all the values in the range. We know that's a countable sum of the value times the probability that R was equal to that value. But the very definition never specified the order in which these terms, v times probability R equals v, got added up. It better not make a difference. So we're implicitly assuming absolute convergence of this sum in order for the expectation to even be well-defined. As a matter of fact, the terrible pathology that happens-- and you may have learned about this in first-term calculus, and we actually have a problem in the text about it-- is that you can have sums like this, that are not absolutely convergent, and then you pick your favorite value and I can rearrange the terms in the sum so that it converges to that value. When you're dealing with sums that are not absolutely convergent, rearranging is a no-no. The sum depends crucially on the ordering in which the terms appear, and all of the reasoning in probability theory would be inapplicable. So we are implicitly assuming that all of these sums are absolutely convergent. Just to get some vocabulary in place, the expected value is also known as the mean value, or the mean, or the expectation of the random variable. Now let's connect up expectations with averages in a more precise way.
We said that the expectation was kind of an abstraction of averages, but it's more intimately connected to averages than that, even. Let's take an example where suppose you have a pile of graded exams, and you pick one at random. Let's let S be the score of the randomly picked exam. So this process, this random process of picking an exam from the pile, is defining a random variable, S, where by picking one at random, I mean uniformly. So S is actually not a uniform random variable, but I'm picking the exams with equal probability, so the outcomes are of uniform probability. But S is not, because there might be a lot of outcomes, a lot of exams with the same score. All right. S is a random variable defined by this process of picking a random exam. And then you can just check that the expectation of S now exactly equals the average exam score, which is the typical thing that students want to know when the exam is done, what's the average score. Actually, the average score is often less informative than the median score, the middle score, but people somehow or other always want to know about the averages. The reason why the average may not be so informative is because-- well, it has some weird properties that I'll illustrate in a second. But the point here is that we got at the average score on the exam by defining a random variable based on picking a random exam. So that's a general process. We can estimate averages in some population of things by estimating the expectations of random variables based on picking random elements from the thing that we're averaging over. That's called sampling, and it's a basic idea of probability theory that we're going to be able to get a hold of averages by abstracting the calculation of an average into defining a random variable and calculating its expectation. Let's look at an example.
It's obviously impossible for all the exams to be above average, because then the average would be above average. That's absurd. So if you translate that into a formal statement about expectations, it translates directly-- by the way, I don't know how many of you listen to the Prairie Home Companion, but one of the sign-offs there is about the town of Lake Wobegon in Minnesota, where all the children are above average. Well, t'ain't possible. That translates into this technical statement that the probability that a random variable is greater than its expected value is less than 1. It can't always be greater than its expected value. That's absurd. On the other hand, it's actually possible for the probability that the random variable is bigger than its expected value to be as close to 1 as you want. And one way to think about that is that, for example, almost everyone has an above average number of fingers. Think about that for a second. Almost everyone has an above average number of fingers. Well, the explanation is really simple. It's simply because amputation is much more common than polydactylism. And if you can't understand what I just said, look it up and think about it.
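Both points-- that the expectation of a uniformly sampled item is the plain average, and that most of a population can still sit above that average-- show up in a tiny made-up data set:

```python
from fractions import Fraction

# Hypothetical data: finger counts in a small population with one amputee.
fingers = [10, 10, 10, 10, 10, 10, 10, 10, 10, 3]
n = len(fingers)

# E[F] via the definition: sum over values v of v * Pr[F = v],
# where Pr[F = v] = (# people with v fingers) / n under uniform sampling.
expectation = sum(Fraction(v * fingers.count(v), n) for v in set(fingers))
assert expectation == Fraction(sum(fingers), n) == Fraction(93, 10)

# 9 out of 10 people have an above-average number of fingers.
above = sum(1 for v in fingers if v > expectation)
assert Fraction(above, n) == Fraction(9, 10)
```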
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
273_Representing_Partial_Orders_As_Subset_Relations_Video.txt
PROFESSOR: So we've seen that partial orders are a set of axioms that capture the positive path relation or the arbitrary path relation in directed acyclic graphs, or DAGs. But there's still another way to understand these axioms that gives a kind of representation theorem for the kind of mathematical objects that are partial orders and that every partial order looks like. So let's look at that example. I'm interested in the proper subset relation. A is a proper subset of B, which, you remember, means that B has everything in it that A has and something extra. So in particular, since B has something extra, B is not a subset of A, certainly not a proper subset of A. So let's look at an example of that. Here are seven sets, and the arrows indicate the proper subset relation. Or more precisely, the positive path relation in this graph represents the proper subset relation where edges are understood to be pointing upwards. So I've left out the arrowheads. This is also known as a Hasse diagram, where the height is an indication of which way the arrows go. So if arrows are pointing up, this is telling me that, for example, this set of two elements, 1 and 5, because there's a path up to the top set, the top set has everything that this lower set has. Namely the top set has 1 and 5, and it's got extra stuff. The set consisting of just 1 is a proper subset of 1 and 5 because the set has 1 in it, but it has an extra thing, 5. And also, there's a path from 1 up to 1, 2, 5, 10 because 1, 2, 5, 10 has a 1 in it and extra stuff. So that's what the picture is illustrating, the proper subset relation on this particular collection of seven sets. Now, let's look at a very similar example of the proper divides relation on some numbers. So proper divides means a properly divides b if a divides b and it's not equal to b. And I'm interested in the proper divides relation on this set of seven numbers, 1, 2, 3, 5, 10, 15, and 30.
And now there's a path from 5 to 30 because 5 is a divisor of 30 and it's not equal to 30. It's a proper divisor of 30. And of course, the point of this picture is to show that the proper divides relation on these seven numbers has exactly the same shape as the proper subset relation on those seven sets. So there's the seven sets and their proper subset relation shown by the picture followed by the proper divides relation on this set of seven numbers. And the precise notion or sense in which these things have the same shape-- obviously they can be drawn and one superimposed on the other. But abstractly what we care about with partial orders and digraphs in general is when things are isomorphic-- is the technical name for the same shape-- and isomorphic means that all we care about are the connections between corresponding vertices. Two graphs where the vertices correspond in a way that where there's a connection between two vertices there's also a connection between the corresponding vertices are isomorphic. And the precise definition of isomorphic is that they're isomorphic when there's an edge-preserving matching between their vertices. Matching means bijection. And the formal definition is G1 is isomorphic to G2 if and only if there's a bijection from V1, the vertices of G1, to V2, the vertices of G2, with the property that if there's an edge between two vertices u and v in the first graph, then there's an edge between the corresponding two vertices f of u and f of v in the second graph. And that's an if and only if relation. There's an edge between f of u and f of v if and only if there's an edge between u and v in the original graph. And that's the official definition of isomorphism for digraphs. And the theorem that we illustrated with that example of proper divides and proper subset is that, in fact, every strict partial order is isomorphic to some collection of subsets partially ordered by less than. So this is a kind of a representation theorem. 
If you want to know what kinds of things are partial orders, the answer is that a strict partial order is something that looks like a bunch of sets under containment. It's isomorphic to a bunch of sets under containment. And the proof, actually, of this is quite straightforward. What I'm going to do to find an isomorphism is you give me your arbitrary strict partial order R, and I'm going to map an element a in the domain of R to the set of all of the elements that are, quote, "below it," that is, all of the elements that are related to a under R. So a is going to map to the set of b's such that bRa or b is equal to a. And that is added as a technical condition. Remember, R is strict, so a is not related to a under R. But I want it to be in the set that a maps to, so I'm throwing that in. Another way to say this is that the mapping f of a is equal to R inverse of a union a. And let's just illustrate that by the example of, how do you turn the divides relation into the subset relation? Well, the smallest element in the proper divides example was the number 1, and I'm going to map it to the set consisting of 1, which is all of the elements that properly divide 1 along with 1. And then I'm going to map the number 3 to all of the elements that properly divide 3 along with 3, and that is 1 and 3. 5 maps to 1 and 5. 2 maps to 1 and 2. And at the next level, I'm going to map 15 to all of the numbers that properly divide 15 along with 15. So 1, 3, 5, and 15 are what the number 15 maps to. That's a set. Likewise, 10 maps to 1, 2, 5, 10, and 30 maps to all of the numbers that are below it, including itself. And this is the general illustration of the way that you take an arbitrary strict partial order and map elements into sets, which are basically their inverse images under the relation. And the sets have exactly the same structure under proper containment as the relation.
So this is, again, a representation theorem that tells us that if we want to understand partial orders, they are doing nothing more than talking about relations with the same structure as the proper subset relation on some collection of sets.
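The map f in the proof is easy to compute for the divides example. This sketch sends each number to its set of divisors (its inverse image under proper-divides, together with itself) and checks that proper-divides matches proper-subset exactly:

```python
# The seven numbers from the lecture's proper-divides example.
numbers = [1, 2, 3, 5, 10, 15, 30]
divides = lambda a, b: b % a == 0          # a divides b

# f(a) = R^{-1}(a) union {a}: everything below a, plus a itself.
f = {a: frozenset(b for b in numbers if divides(b, a)) for a in numbers}

for a in numbers:
    for b in numbers:
        proper_divides = divides(a, b) and a != b
        proper_subset = f[a] < f[b]        # Python's < on sets is proper subset
        assert proper_divides == proper_subset
```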
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
315_Book_Stacking_Video.txt
So, now we'll look at a third kind of sum that comes up all the time called harmonic sums. And we'll begin by examining an example where they come up really directly. So, here's the puzzle. Suppose that I'm trying to stack a bunch of books on a table. Assume all the books are the same size and weight and uniform, and I'd like to stack them up one on top of the other in some way. And try to get them as far out past the end of the table as I can manage. Now, notice in this picture, it seems kind of paradoxical. The top book, the back end of the top book is past the edge of the table. Is it possible to do that? Is it possible to get the top book, the back of the top book past the edge of the table? And how far out can you get the furthermost book to the right? That's the question we want to ask. Well, let's go back and do it for the simplest case, which is one book. So, this amount will be a function of how many books we have. We're interested in the overhang using n books. Overhang is the amount past the edge of the table that the rightmost end of any book can be. What do you do with one book? Well, with one book, assuming that the thing is uniform, the center of mass is in the middle. Let's assume the book is of length 1. So, the center of mass of the book is at halfway down the book. And if that center of mass is not over the table, then you're going to have torque and the book is going to fall. So, you've got to keep the center of mass supported. And the way to get the largest overhang is to have the center of mass right at the edge of the table here. And in that case, you can get the book to stick out half a book length without falling. And what that tells us is that the one book overhang is 1/2. It'll balance with the furthest end out exactly if the center of mass is on the edge, and I get half a book length of overhang. Let's proceed recursively or inductively. Suppose I have n books. How am I going to get them to balance?
Well, let's assume that I figured out how to get a so-called stable stack of n books, which if I completely supported it flat on the table, it wouldn't fall over. And I'm going to show you how to go from n to n plus 1, which is how you construct an arbitrarily large stack of books that won't fall over. Well, if the stack completely resting on the table won't fall over, that means that if I have the center of mass of it past the edge of the table, by definition of the center of mass, there's going to be an equal amount of weight on both sides of the center of mass, and the thing is going to fall off the edge of the table by the same reasoning as we did for one book. So, the stable n stack-- stable in the sense that it won't fall over by itself if it was lying completely over the table. In fact, it won't fall over as long as its center of mass is over the table. And to get it out the furthest amount to the right, what I'm going to do is put it at the edge of the table. So, now I know how to place a stable stack of n books to get the largest overhang out of it. What next? Well, let's consider n plus 1 books. And what do I have to do? So, I'm trying to do the same deal. Suppose that I have a nice stack of n books and I know how to support it so it won't tip over. And I now have n plus 1 books and I want to get out further. What do I have to do? Well, by the basic reasoning that we just went through, now the center of mass of the whole stack of n plus 1 books has to be over the edge of the table. That's the way I'm going to get out furthest. So, I know where the center of mass of n plus 1 books is going to be, at the edge of the table. What about the center of mass of the top n books? Well, I need them to be supported. I need their center of mass to be supported. They'll be supported, providing their center of mass is over the bottom book somewhere. And the way to get it out furthest is to have it over the right edge of the bottom book.
So, I'm going to put the center of mass of the top n books at the edge of the n plus first book here. And that means that the incremental overhang that I get, the increase in overhang that I get by adding one more book, we can call the delta overhang. And it's the distance between the center of mass of n plus 1 books and the center of mass of n books. N here, and n plus 1 here. Well, let's see what's going on. The center of mass of the n books is at some location here. And the center of mass of the bottom book is halfway away, half a book length away from where the n books are balanced on the edge of the bottom book. So, the center of mass of the n books is here. The center of mass of the bottom book is there. The distance between them is 1/2. And I need the table to be at the balance point between the n books and the one book. That's where the center of mass of the n plus 1 books will be. So, think of the edge of the table as the pivot point and it's there. And I need to calculate, where is that pivot point? How do I get this fulcrum or this balance beam to balance with weight n here and weight 1 there, when their total distance apart is 1/2? What's this distance? That distance is the delta that I'm trying to calculate. Well, you just know from physics that the balance point is going to be at the distance 1/2 divided by the sum of the weights n and 1, that is, n plus 1. I need n times this amount to equal 1 times that amount. And if you check that out, it means that delta is 1/2 over n plus 1. Or simplifying, 1 over twice the quantity n plus 1.
You should stare at that diagram a little bit and remember your elementary physics to realize the reasoning behind the formula for delta. Well, now I'm done because basically, I've just figured out that the increase is this delta overhang. And now I know what it is. It's 1 over twice the quantity n plus 1. And so, what I can conclude is that the overhang of n books, B1 is 1/2 and Bn plus 1 is Bn plus 1 over twice the quantity n plus 1. So this is a recursive definition of Bn, but it's easy to see how it unwinds. It means that Bn is 1/2 plus 1 over twice the quantity 1 plus 1, plus 1 over twice the quantity 2 plus 1, plus 1 over twice the quantity 3 plus 1, and so on-- that is, 1/2 plus 1/4 plus 1/6 plus 1/8, out through 1 over 2n. If I factor out the 1/2, Bn is 1/2 times 1 plus 1/2 plus 1/3 out through 1 over n. That sum is the harmonic sum. The sum 1 plus 1/2 up through 1 over n is called Hn-- the harmonic sum, or really the harmonic number, the value of that sum. And what we figured out is that Bn, the amount that I can get n books out past the edge of the table, is Hn over 2.
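The recurrence B1 = 1/2, Bn plus 1 = Bn + 1/(2(n+1)), and the closed form Hn/2 can be checked exactly with a short sketch:

```python
from fractions import Fraction

# Overhang recurrence from the lecture: B(1) = 1/2, B(n+1) = B(n) + 1/(2(n+1)),
# which unwinds to B(n) = H(n)/2, where H(n) = 1 + 1/2 + ... + 1/n.

def overhang(n):
    B = Fraction(1, 2)
    for k in range(2, n + 1):
        B += Fraction(1, 2 * k)
    return B

H = lambda n: sum(Fraction(1, k) for k in range(1, n + 1))

# With 4 books the overhang already exceeds one full book length,
# so the top book is entirely past the edge of the table.
assert overhang(4) == H(4) / 2 == Fraction(25, 24)
```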
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
2112_Matching_Ritual_Video.txt
PROFESSOR: OK. So how do we find these stable marriages? Well, here's our procedure for doing it, which is kind of elegantly described as a day by day mating ritual that the boys and girls cooperate in. So let's see what happens on the first day. On the morning of the first day, each boy looks at his list of girls and picks the one that he likes the best, at the top of his list, and he goes off and serenades her or proposes to her. So here we have Billy Bob and Brad proposing to Angelina. That means that on the first day Angelina was at the top of both Brad's list and Billy Bob's list. And they're both going to be proposing and asking if she's willing to marry them. Well in the afternoon, each girl rejects all but her favorite suitor. So in this case, Angelina likes Brad best of all. So she says to everybody else, if you're not Brad take a hike. And that's what happens at that stage. And then in the evening, here we look at rejected boy, Billy Bob. A boy who's rejected crosses off the girl who rejected him. So Billy Bob is going to cross Angelina off his list. And then the whole ritual is going to repeat starting the next morning. Except now Billy Bob will have a different woman at the top of his list because Angelina's gone forever. This mating ritual continues until nothing changes. And that's going to happen exactly when each girl has at most one suitor. Because if she has more than one suitor, she's going to reject the less favorite ones. So that is the definition of the stopping condition. And on that day, by definition, no girl can have more than one suitor. So she will marry the one suitor she has. And that's the definition of the marriages that result, if and when the mating ritual stops. And we claim that they are going to be stable marriages. Now if we think about this process, it's really a state machine. In fact, if you think about it, what the states are is the set of girls on the boys' lists on any given morning.
And then those states evolve to new lists after the crossing out happens, ready for the next morning. So this is kind of a memorable way to tell a story about the transitions of a state machine. And we can bring our state machine concepts to bear. So the first thing we want to prove is that this state machine terminates. That is to say, there exists a wedding day. Then we want to prove that this state machine is partially correct. And what that means is that when the machine stops, everyone is married and that the marriages are stable. So that's our task. Well termination's easy. Because if you look at the state, the state is the boys' lists on a given morning. And things evolve because boys get rejected and they cross girls off their lists. So what that means is, if we take the total number of names remaining on the boys' lists on any given morning, that is a strictly decreasing and non-negative integer valued variable. So by the well-ordering principle, that strictly decreasing, well-ordered derived variable will reach a minimum value. And by definition, that's when the algorithm has to stop, because it's strictly decreasing, so it can't move once it's reached its minimum. So there's a wedding day. All right. So now let's examine correctness of this procedure and figure out what's supposed to happen on the wedding day. We want to show that everybody's married and that the marriages are stable. And in order to do that, we're going to look at some more derived variables and an invariant that explains why the mating ritual works. So the first remark is that the girls improve day by day, or at least they don't get any worse. Namely, a girl's favorite tomorrow will be at least as desirable to her as her favorite today. That's for any given day. If you look at a girl's favorite on this day, and if there is a tomorrow, then her favorite tomorrow is going to be at least as good as the one she has. And why is that?
Well because today's favorite will keep serenading her until he gets rejected. And he only gets rejected when the girl that he's serenading gets yet a better suitor. And so the girls are always going to improve. We could reformulate this in state machine language by saying that the rank of a girl's favorite, where she rates the boy that's serenading her on her own list of preferences, is a weakly increasing variable. It never gets any worse from one day to the next. By the same reasoning or similar reasoning, a boy's favorite tomorrow is going to be no better than today's. It might be worse. And so the woman that he's going to serenade tomorrow is no more desirable to him than today's, basically because he works straight down his list. If he hasn't been rejected, he'll keep serenading the same woman tomorrow. If he has been rejected, then he's going to be working on somebody lower on his list that's less desirable. So again, the rank of the girl that a boy serenades, on his own list, is a weakly decreasing derived variable of this process. And these observations lead us to an invariant that holds for the mating ritual. And the invariant is that if you look at any girl G, and if there's a boy B and G is not on B's list, that is G must've been crossed off by B at some point, then the invariant is that the girl G has a favorite that is better than B. And the reason is that again, when G rejected B, she had a better suitor than B. And her suitors keep getting better by the weakly increasing property of her suitors. And therefore, she's going to have a suitor that she likes better than B, whose list she's not on. This holds for all G's and B's. And that is an invariant from one day to the next. So let's look at what happens on the wedding day and use the invariant to prove that everybody's married and that stability holds. So the first remark is that each girl has at most one suitor, and we've observed that's by definition of a wedding day.
And now what we want to prove is that each boy gets married. Well what's going on with a boy? A boy is either married because he's serenading the top woman on his list, or maybe all the women on his list have been crossed off, in which case he's not serenading anybody. So that's the only way he could be not married. Now the reason that this is the case is that there's no bigamy here. So boys serenade only one girl at a time. So if a boy is married, there's only one possible woman that he can be married to. And a woman's married to one possible boy. So let's now put these facts together and argue that everybody is married on the wedding day. And the proof is by contradiction. Suppose there's some boy B that's not married. Well that happens exactly when his list is empty. Otherwise, he'd be serenading somebody and be married to her. But if his list is empty, then by the invariant, every girl has a suitor that she likes better than B, which means she's going to be married to somebody that she likes better than B. Every girl's married. But there are the same number of boys and girls. So in fact, given that there's no bigamy, all the boys have to be married too. And that settles that one. So the next crucial property that we're interested in is stability. That in fact this set of marriages, which must come into being on the wedding day, are all stable. And the argument for stability has two cases, both of them trivial, that follow immediately from the invariant. First of all, let's look at an arbitrary boy, Bob. I claim that he won't be in a rogue couple with, case one, any girl G that's on his final list. Because if a girl is on his final list, then he's already married to the best of them. He marries the girl at the top of his list. So he's not going to be tempted to switch-- to be part of a rogue couple with some girl that's still on his list. Case two is, he's not going to be in a rogue couple with a girl that's not on his list.
Because, by the invariant, she's married to somebody she likes better than him. So there's no available girl either way for Bob to be in a rogue couple with. Bob, of course, is an arbitrary boy. And therefore, no boy is in a rogue couple. And indeed, there are no rogue couples.
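The ritual is a state machine whose state is the boys' lists, and it translates almost directly into code. Here's a minimal Python sketch of the day-by-day procedure; the function and variable names are my own, not from the lecture, and it assumes equal numbers of boys and girls with complete preference lists, as in the lecture.

```python
def mating_ritual(boy_prefs, girl_prefs):
    """Day-by-day mating ritual (boys proposing).

    boy_prefs[b] is b's list of girls, most preferred first;
    girl_prefs[g] is g's list of boys, most preferred first.
    Returns a dict mapping each girl to her husband.
    """
    # Each boy's remaining list; he always serenades the girl at its top.
    # The termination argument guarantees no list empties before the
    # wedding day when there are equally many boys and girls.
    lists = {b: list(prefs) for b, prefs in boy_prefs.items()}
    while True:
        # Morning: each boy serenades the top girl on his list.
        suitors = {}
        for b, lst in lists.items():
            suitors.setdefault(lst[0], []).append(b)
        # Stopping condition: every girl has at most one suitor.
        if all(len(s) == 1 for s in suitors.values()):
            return {g: s[0] for g, s in suitors.items()}
        # Afternoon/evening: each girl rejects all but her favorite,
        # and each rejected boy crosses her off his list.
        for g, s in suitors.items():
            favorite = min(s, key=girl_prefs[g].index)
            for b in s:
                if b != favorite:
                    lists[b].remove(g)
```

The total number of names on the lists strictly decreases every time a rejection happens, which is exactly the well-ordering argument for why the loop terminates.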
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
2105_Spanning_Trees_Video.txt
One of the multiple definitions of trees that we saw is that it's a minimum edge simple graph that connects up a bunch of vertices. And that leads to the idea of finding a spanning tree within a simple graph that maintains the same connections. So let's begin with a precise technical definition. A spanning subgraph of a graph, G, is simply a subgraph that has all the vertices of G. So, again, a subgraph of a graph means it's got a subset of the vertices, and a subset of the edges. A spanning subgraph is one that has all of the vertices, but a subset of the edges. And the definition of a spanning tree is a spanning subgraph that is a tree. Now, not all graphs are going to have a spanning tree, because the tree has to be connected. If the original graph is not connected, there's no way you can find a spanning tree using only the edges that are there already. But it's going to turn out that if the graph is connected, it's guaranteed to have a spanning tree. Let's look at an example. Here's a simple graph. And what I want, then, is a spanning tree, a selection of edges, that connect up all the vertices such that we're only using edges in the original graph, and they form a tree. There it is. So if you check on these magenta edges that I've highlighted, they define a tree. I haven't used three of the edges in the original graph. Now this particular choice of spanning tree is kind of arbitrary. In general, there's lots of spanning trees. Here's another one, this time with green edges. Again, I'm using only edges from the original graph-- I've left out three different ones, and used a different set of edges to form the tree. But there it is. It's got no cycles, and it spans the graph because every vertex in the graph is part of it. And of course, it's connected since it's a tree. There's actually some lovely combinatorial theory, which enables you to calculate the number of spanning trees in a simple graph without too much difficulty, just given its adjacency matrix.
But we're not going to go into that for now. First remark is that every connected graph is going to have a spanning tree, and the reason is, you just pick a minimal edge connected spanning subgraph. So G, itself, if it's connected, is by definition a spanning subgraph of itself, because it's got all its own vertices. That means by the well-ordering principle, there's going to be a connected spanning subgraph with a minimum number of edges. And that one, given that it has a minimum number of edges, is guaranteed to be a spanning tree. Now the problem gets more interesting when it has a little more structure-- instead of just trying to find a spanning tree that has a minimum number of edges, it's quite typical in applications that the edges have weights, and we want to find a minimum weight spanning tree. So here's an example where we have a simple graph with a bunch of edges, and a bunch of vertices. And the edges all are assigned, in this case, an integer weight. Now as motivation for this kind of graph, you could think of these weights on the graph as indicating the cost of transporting some quantity of a commodity from this vertex to that vertex, directly by a road. Or the time it takes to transmit a signal over this channel. Lots of ways that simple graphs are used to model issues of communication among various locations. And it's quite typical that the channels and connections between them have different costs. And it's a natural question to say, OK, what's the minimum cost overall tree structure that will enable me to have everything connected to everything else in the same way? I still would like to have the cheapest kind of tree that spans them all. Well, there's a fairly simple way to construct such a minimum weight spanning tree, and that's what we're going to talk about now. How do you find it? Well, the idea is to build it using gray edges.
So what that means is that starting off with the vertices, we're going to start building a tree. And at any point, we will have a bunch of edges that are going to be part of our spanning tree-- that means that the edges don't have any cycles among them, they're a so-called forest, but they're not yet connected. And at each stage in this procedure, we're going to look at the connected components of the graph that we have at this moment, and color them black or white. And then look at the gray edges. So a grey edge is defined to be an edge where one end point is black, and the other end point is white. And what I'm going to do, at any stage in the procedure as I'm growing my minimum weight spanning tree, is I'm going to look at all the gray edges and pick a minimum weight gray edge. Well, let's do an example to get this clear. So to begin with, I don't have any edges. All I have are the isolated vertices. So it means that I have six connected components, each of which is a single vertex with no edges. That says that I'm allowed to color them black and white in any way I choose, and I will do that. The only constraint on the coloring is there has to be at least one black component, and one white component. So there's an arbitrary coloring-- I've colored two of the vertices white, and the other four black. Now, in this particular coloring-- I could've chosen any one, but I chose this one-- where are the gray edges? Well, I've highlighted them by thickening them. So this is a gray edge, because it's black and white. This is a gray edge because it's black and white-- black and white, black and white. This is not a gray edge, because it's white and white. This is not a gray edge, because it's black and black. So that's a simple enough idea. Now what I'm supposed to do is among my gray edges, pick the one with the minimum weight. Well, if you look at the weights of the gray edges, I got a four, a four, a nine, a seven, and a two. The two is the minimum weight gray edge. 
I'm going to choose that to start building my tree. So at this moment, once I've committed to that magenta edge, what I now have is a graph with five components-- namely the component defined by this edge, with two vertices. And the other four isolated vertices, which still don't have any edges connecting them in the structure of magenta edges that I'm building to be my minimum spanning tree. So according to the rules now, with these five components, I can recolor them. And as long as I recolor them in a way that this component gets the same color-- there's a recoloring. I've made both of those vertices in this component black, and the other four vertices-- which are isolated components-- I can color arbitrarily. So here's my new coloring. Now, again, once I have this coloring, I can proceed to identify the gray edges. There they are. And this time there are only two gray edges, because I chose to have only one white vertex. There's a gray edge and there's a gray edge. And of course, the minimum weight among the two gray edges is three. So that's going to be my next edge in my minimum weight spanning tree that I'm growing. What's next? Well now, I have four components left. Here's one component defined by that edge, here's another connected component defined by that edge. And these two vertices are isolated, still, so they're components all by themselves. And the rule is, recolor in such a way that both of these vertices in that component have the same color. All the vertices in this component have the same color-- I could switch them from black to white, in fact I will-- and those can be colored arbitrarily. Let's do that. There's a recoloring. Now this component is all white, that component is all white. These two separate components happen both to be black. I could have colored one of them white, and one of them black. I need to have one black, given the other commitment to colors. 
So now, again, we could find a minimum weight edge, a grey edge I guess it would be. There are two ties for minimum, both of those ones. And I proceed in this way, and I wind up with this minimum weight spanning tree. That's the procedure. Now I haven't discussed why it works yet, and that is explained in the notes. But we're going to hold off on that and just examine applying this algorithm. So there are a bunch of ways, now, to grow a minimum weight spanning tree. One way is to start at any vertex, and then keep building around that vertex. So you start with that vertex and color it black, and everything else white. That means that all the gray edges are going to be connected to that vertex. So pick a minimum weight. Now you have a component with two vertices. Color it black and everything else white, and in that way, you keep working on one component that you're going to grow by always coloring it one color, and everything else the other color. This is a method known as Prim's algorithm for growing a minimum weight spanning tree. Another one is to globally, among all the different connected components, find a minimum weight edge among them. So what that means is that you find the minimum weight edge among all the connected components, and then having identified where that minimum weight edge is, you can color one of its components black, and the other one white, and that will have to conform to our procedure for picking a minimum weight edge between different colored components. That's Kruskal's algorithm. And finally, you can grow the trees in parallel. You can just start choosing the minimum weight edge around each connected component, because you can always take a connected component, color it one color, and color all the other edges another color. And so all of the edges touching a given component will be gray in that color, and you can choose the minimum one and grow that component. 
And if they're not too close to each other-- so that your choice of edges doesn't conflict-- you can grow these trees in parallel. So I call that, jokingly, Meyer's procedure. And that is the application of this coloring approach to finding minimum weight spanning trees.
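As a concrete instance of the gray-edge procedure, here is a minimal Python sketch of the Kruskal-style version: repeatedly take a globally minimum weight edge whose endpoints lie in different components, which is always a legal gray-edge choice because one component can be colored black and everything else white. The names below are mine, not from the lecture.

```python
def min_weight_spanning_tree(vertices, weighted_edges):
    """Kruskal-style instance of the gray-edge procedure.

    weighted_edges is a list of (u, v, weight) triples.
    Returns a list of tree edges for a connected input graph.
    """
    # Each vertex starts as its own connected component.
    component = {v: v for v in vertices}

    def find(v):
        # Follow merge links to the component's representative.
        while component[v] != v:
            v = component[v]
        return v

    tree = []
    # Consider edges in order of increasing weight.
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            # A gray edge: its endpoints are in different components.
            component[ru] = rv  # merge the two components
            tree.append((u, v, w))
    return tree
```

For an n-vertex connected graph this stops with n - 1 tree edges, one merge per edge, which matches the forest-growing picture in the lecture.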
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
112_Intro_to_Proofs_Part_1.txt
PROFESSOR: In 6.042, we're going to be pretty concerned with proofs. We're going to try to help you learn how to do rudimentary proofs and not be afraid of them. The most important skill, in some ways, is the ability to distinguish a very plausible argument that might not be totally right from a proof which is totally right. That's an important skill. And it's a basic understanding of what math is. It's that distinction between knowing when a thing is mathematically, absolutely unarguable and inevitable as opposed to something that's just very likely. It's interesting. Physicists by and large do a lot of math, and they tend not to worry so much about proofs. But all the theoreticians and the mathematicians are in agreement that you don't really understand the subject until you know how to prove the basic facts. Pragmatically, the value of proofs is that there's an awful lot of content in this subject and in many other mathematical subjects. And if your only way to figure out what the exact details are is memorization, you're going to get lost. Most of these rules and theorems that we prove, I can never remember them exactly. But I know how to prove them, so I can debug them and get them exactly right. So let's begin by looking at just examples of proofs before we start to try to get abstract about what they are. And we'll look at a famous theorem that you've all seen from early on in high school, the Pythagorean theorem. It says that if I have a right triangle with sides a and b and hypotenuse c, then there's a relationship between a, b and c-- namely that a squared plus b squared equals c squared. Now, this is, as I said, completely familiar. But is it obvious? Well, every once in a while, students say it's obvious, but what I think they really mean is that it's familiar. It's not obvious. 
Part of the argument for the fact that it's not obvious is that for thousands of years, people have kept feeling the need to prove it in order to be sure that it's true and explain why it's true. There's a citation in the notes of a website devoted to collecting Pythagorean theorem proofs. There's over a hundred of them, including one by a former president of the United States. So let's look at one of my favorite proofs of the Pythagorean theorem. And it goes this way. There are four triangles that are all the same size, four copies of this abc triangle, which we've put in different colors to distinguish them, and a square, which for the moment, is of unknown size. And the proof of the Pythagorean theorem is going to consist of taking these four shapes and reassembling them so that they form a c by c square first, and then finding a second arrangement so that they form two squares-- an a by a square and a b by b square. Then by the theorem of conservation of paper or conservation of area, it has to be that the c by c area is the same as the a by a plus b by b area. And so a squared plus b squared is equal to c squared. Well, let's look at those rearrangements. And probably, you should take a moment to try it yourself before I pop the solution up. But there's the solution to the first one. It's the easier of the two. This is the c by c arrangement. The hint is that if it's going to be c by c, you don't have a lot of choice except to put the c-long hypotenuses on the outside. And then it's a matter of just fiddling the triangles around so they fit together. And you discover there's a square in the middle. And that's just where that extra square that is provided will fit. Also, this enables you to figure out what the dimensions of the square are. Because if you look at it, this is a b side. We're letting b be the longer of the two sides of the triangle. And this is the a side, the shorter side of another triangle. So what's left here has to be b minus a.
So now we know that it's a b minus a by b minus a square from this arrangement. And that's what we've indicated here. Now, the next arrangement is the following. We're going to take two of the triangles and form a rectangle, another two triangles to form a rectangle, line them up in this way, and fit the b minus a by b minus a square there. Now, where are the two squares? Well, I didn't say that the a by a and the b by b squares needed to be separate. In fact, they're not. They're attached. But where are they? Well, let's look at this line. How long is it? Well, it's a plus b minus a long, which means that it's b long. And suddenly, there is a b and there's a b, and I've got a b by b rectangle right there. But wait a second. Here's a b minus a, and it's lined up against a b side. So if I look at what's left, it's b minus (b minus a), which tells me that that little piece is a. And so sure enough, when I add this hidden line-- conceptual line to separate the two squares, this part's a by a, and that part's b by b. And we've proved the Pythagorean theorem. So what about this process? It's really very elegant, and it's absolutely correct. And I hope it's kind of convincing. And so this is a wonderful case of a proof by picture that really works in this case. But unfortunately, proofs by pictures worry mathematicians, and they're legitimately worrisome because there's lots of hidden assumptions. An exercise that you can go through is to go back and think about all of the geometric information that's kind of being taken for granted in this picture. Like over here, how did we know that that was a right angle, that this thing was a rectangle? We needed that to be a right angle because we were claiming that this was a square. Well, how did we know that that was a rectangle? Well, the answers are obvious. We're using the fact that the complementary angles of a right triangle sum to 90 degrees because the angles of a triangle in general sum to 180 degrees.
We're using that in a bunch of other places. We're also using the fact that this is a straight line, which may or may not be obvious. But it's true, and that's why it's safe to add those distances to figure out what it was. My point is that there are really a whole lot of hidden assumptions in the diagram that it's easy to overlook and be fooled by. So let me show you an example of getting fooled by a proof by diagram. And here is how to get infinitely rich. Let's imagine that I have a 10 by 11 piece of gold foil. Actually, they could be slabs of gold, but let's think of this as a rectangular shape that's made out of gold. And it's going to be rectangles. Those are right angles there. And what I'm going to do is mark off the corners. I'm going to mark off a length down of 1, and then I'm going to mark off a length of 1 and shift it so that it touches the diagonal, and do the same thing in this lower corner. And now, let's just shift these shapes, the top one going southwest and the second one going northeast. And what I wind up with is this picture so that I've now got those little red triangles protruding above the shape. OK, cool. Well, what do we know? This is now side 10 because I've subtracted 1 from its length here . And this is side 11 because that used to be 10, and I've added 1 to its length there. So that's cool because now, what I can do is take those protruding triangles out, and they'll form a little 1 by 1 square. And suddenly, I have this little bit of gold that's extra. But look what's here. It's a 10 by 11 rectangular shape of gold foil again. So I just rotate this by 90 degrees, and I start all over again. I can keep generating these little 1 by 1 shapes of gold foil forever. I could get infinitely rich. OK, well there's something wrong with that. It's violating all kinds of conservation principles, not to mention that it would undermine the price of gold. So what's the bug? Well, you probably can spot this, but maybe you've been fooled. 
What's going on is there's an implicit assumption that those little triangles that I cut off were right triangles and that this line that I claimed was of length 11 was a straight line, and it's not. Those triangles have two sides that are of length 1. They're isosceles triangles, but they're lying up against a diagonal that's not 45 degrees. And so they're not right triangles. And that line isn't straight. And 10 and 11 were close enough that it wasn't visually obvious. So this is a way to simply put one over on you with a proof by picture. And if I had been asked to justify how do I know it's a straight line, that bug would have emerged. But you're not likely to notice that if it isn't visually obvious, which is why we worry about some of these proofs by picture.
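You can check the bug numerically. Assuming the rectangle is 10 units one way and 11 the other (my own reconstruction of the picture's coordinates), its diagonal has slope 10/11, while the cut-off isosceles triangles, with two sides of length 1, would only lie flush against a 45 degree line of slope 1:

```python
from fractions import Fraction

# Slope of the rectangle's diagonal: it rises 10 over a run of 11.
diagonal_slope = Fraction(10, 11)

# Each little cut-off triangle has two sides of length 1, so the edge
# that's supposed to lie along the diagonal has slope 1 -- 45 degrees.
triangle_slope = Fraction(1, 1)

# If the claimed length-11 line were straight, these would be equal.
print(diagonal_slope == triangle_slope)  # False: the line bends
```

The two slopes differ by only 1/11, which is exactly why the bend is hard to see in the picture.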
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
425_Bayes_Theorem_Video.txt
PROFESSOR: One of the most important applications of conditional probability is in analyzing the results of diagnostic tests of uncertain reliability. So let's look at a fundamental example. Suppose that I have a diagnostic test for tuberculosis. It really sounds great because it's going to be 99% accurate-- in fact, more than 99% accurate, really, because here are the properties this test has. If you have TB, this test is guaranteed to detect it and say, yes, you have TB. If you don't have TB, 99% of the time, the test says correctly that you don't have TB, and 1% of the time, it gets it wrong. Now, suppose the doctor gives you the test and the test comes up saying that you have TB. That's kind of scary because TB is, in fact, quite a serious disease. It's getting worse because there are now, in Asia, antibiotic-resistant versions of TB against which all the known antibiotics are not very effective-- if effective at all-- at curing it. And this test that was 99% accurate says I have this disease; it sounds really worrisome. But in fact, we can ask more technically, should you really be worried? What is the probability given that this apparently highly accurate test says you have TB? What's the probability that you actually have TB? That's what we want to calculate. What's the probability that you have it? So in other words, I want the conditional probability that I have TB given that the test comes in positive. The test says, yes, you have TB. That test positive is a big word that I won't have room for on other slides, so let's just abbreviate it by plus. Plus means, in green, that the test said, yes, positive-- you have TB. OK, so that's the probability that we're trying to calculate, this conditional probability. What do we know about the test? Let's translate the information we have about the test into the language of conditional probability. And the first thing we said was that the test is guaranteed to get it right if you have TB.
So given that you have TB, the probability that the test will say so-- it will return a positive result-- is 1. Given that you don't have TB, the probability that the test will say that you do have TB is only 1 in 100. Because 99% of the time, it correctly says you don't have TB. And 1% of the time, it says oops, you do have TB. So this is what's called a false positive rate. It's falsely claiming that you have TB when you really don't. And that rate, we're hypothesizing, is only 1%. Now, what we're trying to calculate, again, is the probability that you have TB given that the test came in positive and said you had TB. Well, let's look at the definition of conditional probability. The probability that you have TB given that the test came in positive, that said you do, is simply the probability that both the test comes in positive and you have TB divided by the probability that the test comes in positive. Well, using the definition of conditional probability again, this intersection, this AND of having TB and the test coming in positive, is simply the probability that the test comes in positive given that you have TB times the probability that you have TB. Now, this one we know. It's 1 because the test is perfect. If you have TB, the test is definitely going to say positive. So that lets me simplify things nicely. What I've just figured out is the probability that you have TB given that the test says you do is simply the quotient of the probability that you have TB given no other information and the probability that the test comes in positive. Well, what is that probability that the test comes in positive? How are we going to calculate that? That's the key unknown here. And we're going to use the probability rule, the total probability rule. Total probability says that you do or you don't have TB. So that the way to calculate the probability that the test comes in positive is to look at the probability that the test comes in positive when you do and don't have TB. 
And we know those numbers. So let's look at the total probability formula. The probability that the test comes in positive is simply the probability that it comes in positive if you have TB times the probability you have TB, plus the probability it comes in positive given that you don't have TB times the probability you don't have TB. Well, we know a lot of these terms. Let's work them out. Well, the probability the test comes in positive given that you have TB is 1. And the probability that the test comes in positive given that you don't have TB is 1/100. That's the false positive rate. We figured that already. What about the probability that you don't have TB? Well, that's simply 1 minus the probability that you do have TB. Now I have this nice arithmetic formula in the probability of TB. So I wind up with the probability of TB plus 1/100 minus 1/100 of the probability of TB. It leaves me with 1/100 plus 99/100 of the probability of TB. So that's what this simplifies to. The probability that the test comes in positive given no other information is 99/100 of the probability that a person has TB plus 1/100. We'll come back to this formula. Well, we were working on the probability that you have TB given the test came in positive. We figured out that it was this quotient. And now, I know what the denominator is. The denominator is 99/100 times the probability of TB plus 1/100. Multiply numerator and denominator through by 100, and you get that the probability that you have TB given that the test says you do is 100 times the probability that you have TB divided by 99 times the probability that you have TB plus 1. So let's hold onto this formula. Notice the key unknown here is the probability that you have TB independent of the test, the probability that a random person in the population has TB. If we can figure that out or if we can look that up, then we know what this unknown is, the probability you have TB given that the test says you do.
Well, what is the probability that a random person has TB? Well, there were 11,000 cases of TB reported in 2011, according to the Center for Disease Control in the United States. And you can assume that there's going to be a lot of unreported cases if there are 11,000 reported ones, because a lot of people don't even know they have the disease. So let's estimate, on that basis given that the population of the US is around 350 million, that the probability of TB is about 1/10,000. Let's plug that into our formula. The probability that you have TB given the test is positive is this formula. When I plug in 1/10,000 for the probability of TB, I get 100/10,000 over 99/10,000 plus 1. Well now, I can see that the denominator is essentially 1. It's 1.01. And the numerator is 1/100. And this is basically about 1/100. In other words, it's not very likely that you have TB. Because the false positive rate of 1% was relatively high, it washed out the actual number of TB cases-- the TB rate was only 0.01%-- so that almost all of the reports of TB were caused by the high false positive rate. And that means that when you have a report that you've got TB, you still only have a 1% chance that you actually have TB. So the 99% accurate test was not very useful here for you to figure out what kind of action to take and what kind of medicine to take or treatment to take given that the test came in positive. With a 1 in 100 chance, the odds are you won't do anything, in which case you can wonder why your doctor gave you the test. Well, the 99% test sounds good. We figured out that it isn't. And a hint about why 99% accurate isn't really so useful is that there's an obvious test that's 99.99% accurate. What's the test? Always say no. After all, the probability is only 1 in 10,000 that you're going to be wrong. And that's the 99.99% rate. So it sounds as though this test is really worthless. But no, it's not.
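Plugging in the numbers is easy to do mechanically. This Python sketch just restates the total probability and Bayes' rule computation from the lecture; the function name and parameter names are mine:

```python
def prob_tb_given_positive(p_tb, false_positive_rate=0.01):
    """P(TB | test positive) for a test that always detects TB
    (true positive rate 1) but has the given false positive rate."""
    # Total probability: P(+) = 1 * P(TB) + fp * (1 - P(TB))
    p_positive = p_tb + false_positive_rate * (1 - p_tb)
    # Bayes' rule with P(+ | TB) = 1:
    return p_tb / p_positive

# With P(TB) = 1/10,000, the posterior is only about 1%:
print(round(prob_tb_given_positive(1 / 10_000), 4))
```

The false positives (about 1% of everyone tested) swamp the true cases (0.01% of the population), which is exactly why the posterior comes out near 1/100.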
If you think about it a little bit, it will be useful. And I'll explain that in a minute. I forgot-- I'm getting ahead of myself. Because the basic formula that we used here was we figured out the probability of TB given that the test said you had TB in terms of the inverse probabilities which we knew-- that is, the probability that the test came in positive given that you had TB. This is an example of a famous rule in probability theory. It's called Bayes' rule, or Bayes' law. And this is it. It's just stated in terms of arbitrary events A and B. It expresses the probability of B given A in terms of the probability of A given B and the probabilities of A and B independently. Now, I can actually never remember this law, but I re-derive it right every time I need to do it, as we've done in the previous slides. It's really a quite straightforward law to derive and prove. But let's go back to this 99% accurate test that seemed worthless since there was a trivial test that was 99.99% accurate. But in fact, it's really helpful because it did increase the probability that you had TB by a factor of 100. Before you took the test and before you knew anything, you thought that your probability was the same as everybody else's-- about 1 in 10,000. Now the test says the probability that you have TB is 1 in 100. That's a hundred times larger. What's the value of that? Well, suppose you only had 5 million doses of medicine to treat this American population of 350 million people. Who should you medicate? Well, if you medicated a random 5 million people out of 350 million, the likelihood that you're going to get very many of the real TB cases is small. You'll only get about 1/70 of the cases. But if you use your 5 million doses to medicate the 3.5 million people who would test positive under this 99% accurate test, then when you test all 350 million people, you're going to get about 3.5 million who test positive. 
You have enough medication to treat all of them. And if you treat all of them, you're almost certain to get all of the actual TB cases, all 10,000 of them. So the 99% accurate test does have an important use in this final setting, a lot more so than the 99.99% accurate test that simply always said no-- food for thought.
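The whole calculation above can be sketched in a few lines of Python. The function name is mine, and treating the true positive rate as exactly 1 follows the lecture's simplification; only the 1% false positive rate and the 1-in-10,000 prior come from the lecture:

```python
def p_tb_given_positive(p_tb, p_pos_given_tb=1.0, p_false_pos=0.01):
    # Law of total probability:
    # P(pos) = P(pos | TB) P(TB) + P(pos | no TB) P(no TB)
    p_pos = p_pos_given_tb * p_tb + p_false_pos * (1 - p_tb)
    # Bayes' rule: P(TB | pos) = P(pos | TB) P(TB) / P(pos)
    return p_pos_given_tb * p_tb / p_pos

# With the estimated prior of 1/10,000, a positive test still leaves
# only about a 1 in 100 chance of actually having TB.
print(p_tb_given_positive(1 / 10_000))  # roughly 0.0099
```

Varying the prior shows the lecture's point: the posterior is driven almost entirely by how the prior compares to the false positive rate.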
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
455_Total_Expectation_Video.txt
PROFESSOR: The law of total expectation will give us another important tool for reasoning about expectations. And it's basically a rule like the law of total probability, closely related to it really, for reasoning by cases about expectation. So it requires a definition of what's called conditional expectation. So the expectation of a random variable R, given event A, is simply what you get by thinking of replacing the probability that R equals v by the probability that R equals v given A. So it's the sum over all the possible values that R might take of that value times the probability that R takes that value, given A. OK, with that definition, we can state the basic form of the law of total expectation, which says if you want to calculate the expectation of R, you can split it into cases, according to whether or not A occurs. It's simply the conditional expectation of R given A times the probability of A, plus the conditional expectation of R, given not A times the probability of not A. So it really has the same format as the law of total probability. Now, of course it generalizes to many cases. So the general form would say that I can calculate the expectation of R by breaking it up into the case that A 1 holds times the probability of A 1, the case that A 2 holds times the probability of A 2, through A n. And this could very well be, and typically is, an infinite sum, where the A i's, of course, are a partition of the sample space-- so they're all the different cases, either A 1 or A 2 or A 3, they're disjoint. And altogether, they cover the entire set of possibilities. Well, let's use this to get a nice different and simpler way-- more elementary way-- of calculating the expected number of heads in n flips. So let's let e n be the expected number of heads in n flips-- just shorthand, because the notation will be easier to work with than writing capital E brackets of H n. So what do we know about e n? 
Well, I can express it in terms of the expectation of the remaining flips. So if I have n flips to perform, they're independent. Then if I perform the first flip, something happens. And after that I'm going to do n minus 1 more flips, and the expected number of heads is going to be the expected number on the remaining n minus 1 flips plus what happened now. Well, if I flipped a head first, then I've got a 1 adding to my total number of heads. And then I'm going to do n minus 1 more flips, so the expected number of heads is going to be that 1 plus the expected number on the rest of them. If the first flip was not a head, it was a tail, then the total expected number of heads is simply the expected number of heads on the rest of the flips. And these are two cases where I can apply total expectation. So by total expectation, the expected number in n flips is 1 plus e n minus 1 times the probability of a head, plus e n minus 1 times the probability of a tail. Well, now we can do a little algebra. Multiply through here by p-- that becomes a p, and this becomes a p times e n minus 1. So I've got e n minus 1 times p, and e n minus 1 times q-- remembering that p plus q is 1, this simplifies to being simply e n minus 1 plus p. Well, this is a very simple kind of recursive definition of e n, because you can see what's going to happen. Subtracting 1 from n adds a p. So if I subtract 2 from n, I add another p-- I get 2 p. And continuing all the way to the end, by the time I get to 0, I've gotten n times p. And I've just figured out what I was familiar with already-- which we previously derived by differentiating the binomial theorem-- the expected number of heads in n flips is n times p. But this time I got it in a somewhat more elementary way, by appealing to total expectation.
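The recursion above can be unwound numerically; here is a minimal Python sketch (the function name is mine) that iterates e n = (1 + e n-1) p + e n-1 (1 - p) starting from e 0 = 0:

```python
def expected_heads(n, p):
    # Unwind the total-expectation recursion
    # e_n = (1 + e_{n-1}) p + e_{n-1} (1 - p), starting from e_0 = 0.
    e = 0.0
    for _ in range(n):
        e = (1 + e) * p + e * (1 - p)  # algebraically, this is e + p
    return e

# Agrees with the closed form n * p derived in the lecture.
print(expected_heads(10, 0.25))  # 2.5
```

Each pass through the loop adds exactly p, which is the "subtracting 1 from n adds a p" observation in code.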
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
281_Degree_Video.txt
PROFESSOR: So now we start on another topic in graph theory, namely the topic of simple graphs. So last week we were talking about directed graphs where the arrows have a beginning and an end, as shown here. But simple graphs are simpler. The edges don't have direction. They just correspond to a mutual connection, which is symmetric. So there's a picture of a simple graph, and the edges are shown without an arrowhead. A special thing about directed graphs is that it's possible to have an arrow going in each direction between two vertices. But when we have undirected edges like this, that doesn't happen. So there's only one edge between a pair of vertices in a simple graph. In addition, a directed graph might have a self loop, an edge that starts and ends at the same vertex. And those are also disallowed in simple graphs. Now, you could allow those things. There's a thing called multi-graphs where there are multiple edges between vertices. And there could also be self loops, but we don't need those. Let's not complicate matters. We're talking about simple graphs. OK, so the formal definition of a simple graph is that it's an object G that has a bunch of parts. Namely it has a nonempty set V of vertices, just like directed graphs. And it has a set E of edges, but the edges now are somewhat different since they don't have beginnings and ends. An edge just has two endpoints that are in V, and we don't distinguish the endpoints. So let's just draw a picture. Here's a case where there are six vertices V shown in blue, and there are these undirected edges shown in green. In this case I see seven edges in E. There is an example of an edge that goes between two vertices that I've highlighted in yellow and red. And we've made that particular edge dark green. An edge like that can formally be represented as just the set of its end points, a set of two things, red and yellow. 
In text, we'll often indicate it as the two vertices connected by a horizontal bar. But you have to remember that the order in which the red and the yellow occur doesn't matter, because it's really a set consisting of red and yellow. When two vertices are connected by an edge, they are said to be adjacent. And the edge is said to be incident to its end points, just a little vocabulary that we use here. A basic concept in graph theory, which is what we're going to make a little bit of in this video segment, is the idea of the degree of a vertex. The degree of a vertex is simply the number of incident edges, the number of edges that touch it, the number of edges for which it's an endpoint. So let's look at the red vertex. There are two edges incident to the red vertex, so its degree is 2. OK, let's look at the yellow vertex. Here there are four edges incident to the yellow vertex, so its degree is 4. No surprises yet. So let's examine some properties of vertex degrees that are motivated by a simple example. Suppose I asked the question, is it possible to have a graph with vertex degrees of 2, 2, and 1. So implicitly it's a 3 vertex graph. And one vertex has degree 2, another has degree 2, and one has degree 1. Well let's see what it looks like. If I'm going to have a vertex of degree 1, then I know what it looks like. There's the vertex. It's got one edge out of it. It's going to some other vertex. Now this other vertex must have degree 2, so it's connected to something else. And the something else must be another vertex with degree 2, because those are the only possible spectrum of degrees, 1, 2, and 2. And that means that this last guy has to have an edge out of it, because it's degree 2. And it can't go back to 2, because there's already an edge between these two. And it can't go back to 1, because that has degree 1. So we're stuck. And by this ad hoc reasoning we figured out that there can't be a 3 vertex graph with this spectrum of degrees 2, 2, 1. 
It's impossible. Well, we could have reasoned more generally. And there's a very elementary property of degrees that we're going to actually make something of in a minute. And it's called the handshaking lemma. It says that the sum of the degrees summed over all the vertices is equal to twice the number of edges. There it is written as a formula, twice the number of edges. So that's the cardinality symbol. Absolute value of a set means the size of the set. E here is finite. Twice the number of edges is equal to the sum over all the vertices of the degree of the vertices. Why is that true? Well, if you think about it, in the sum on the right, every edge is counted twice, once for each vertex that it's the end of. So this sum over all of the vertices is really just a way of counting each edge twice. And so the sum is twice the number of edges. And the proof is trivial, but let's make something of this. You might wonder why it's called the handshaking lemma. That will emerge in some problems that we're going to have you do. But let's go on and apply the handshaking lemma in an interesting way. And by the way, of course, since 2 plus 2 plus 1 is odd, we could have without that ad hoc analysis figured out that this degree spectrum is impossible, because the sum of the degrees can't be odd-- it's twice something. All right, so here's the application designed to get your attention. It is an application of graph theory to sex. And we ask the question, are men more promiscuous than women? And there have been repeated studies that are cited in the notes that show again and again that when they survey collections of men and women and ask them how many sexual partners they have, it's consistently the case that the men are assessed to have 30% more, 75% more, sometimes 2 and 1/2 times, 3 times as many partners as the women. And there's got to be something wacky about this. 
We're going to come up with a very elementary graph theoretic argument that says that this is complete nonsense. By the way, the most recent study that we could find was one that's mentioned in the notes in 2007 by the US Department of Health. And the statistician who collected the data knew that the results were impossible. But her job was to report the data, not to explain it or interpret it. And the men reported 30% more partners than the women. And we're going to show that somebody is lying. Here's how we're going to do it. We're going to model the relationships between men and women by having a graph that comes in two parts. It's going to be called a so-called bipartite graph. So there's going to be one set of vertices called M and another set of vertices called F. M for men and F for women, or females. And we're going to have edges going between men and women, between M's and F's, precisely when they have been involved in a sexual liaison. So looking back at this graph, this edge from that blue M to that orange F indicates that they had a sexual liaison. They were partners. OK, so this is a simple graph structure that we can use to represent who got together with whom in any given population of men and women. Now, if you think about the same argument that we used for handshaking, if you sum the degrees of the men, you're counting each edge exactly once. And so the sum of the degrees of the men is equal to the number of edges in this graph. And likewise, if you sum over the females, you're counting each edge once. And so the sum of the female degrees is also equal to the number of edges. In particular, the sum over the degrees of the males is equal to the sum over the degrees of the females. Because every time there's a liaison, it involves one male and one female. All right, now let's just do a little bit of elementary arithmetic. I'm going to divide both sides of this equality by the size of the male population, by the number of men. 
And if I do that, I get this formula. The left hand side is the sum of the degrees of the men divided by the size of the M population. And here I'm doing a little trick. Notice that the F's cancel out. But I've expressed the sum of the female degrees divided by M as the sum of the female degrees divided by F times this factor F over M, which is the ratio of the populations of women to men. Now the reason I'm doing this is that if you look at this thing on the left, this is the average degree of the men. This is the sum of all the degrees of men divided by the number of men. So it's the average number of partners that men have. And likewise, now you can recognize over here that I've got the average number of partners that each woman has. And what we've just figured out then is that there's a fixed relationship between the average number of partners of men, the average degree of the M vertices, and the average degree of the F vertices. And these two average degrees, these average numbers of partners, are simply related by the ratio of the populations. The men's average degree is the female population divided by the male population times the average degree of the females. Now, what this tells us is that these wild figures of twice as many, and 30% more, and so on are completely absurd. Because we know a lot about the ratio of females to males in the population. As a matter of fact, in the US overall there are slightly more women than men. There are 1.035 women for each man in the US population. And that tells us then that if you surveyed the population of all the men and women in the country, you would discover that the men had 3 and 1/2 percent more partners on average than the women. But this has nothing to do with their behavior, or promiscuity, or lack of it. It's simply a reflection of the ratio of the populations. Which gets us to the question of, where do these crazy numbers come from? And the answer seems to be that people are lying. 
One explanation would be that men exaggerate their number of partners, and women understate their number of partners. But the truth is that nobody knows exactly why we get these consistently false numbers. But we do get them consistently in one survey after another. You will no longer be fooled by such nonsense.
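Both degree-sum facts are easy to check mechanically. Here is a minimal Python sketch on a made-up liaison graph (the vertex names and edges are hypothetical, chosen just to exercise the identity):

```python
# Men and women as two vertex sets; each edge is one liaison (made-up data).
M = ["m1", "m2", "m3"]
F = ["f1", "f2", "f3", "f4"]
edges = [("m1", "f1"), ("m1", "f2"), ("m2", "f2"), ("m3", "f3"), ("m3", "f4")]

def degree(v):
    # Number of edges incident to vertex v.
    return sum(1 for e in edges if v in e)

# Summing degrees on either side of a bipartite graph counts every edge once.
assert sum(degree(m) for m in M) == len(edges)
assert sum(degree(f) for f in F) == len(edges)

# So the average degrees differ only by the population ratio |F| / |M|,
# regardless of anyone's behavior.
avg_m = sum(degree(m) for m in M) / len(M)
avg_f = sum(degree(f) for f in F) / len(F)
assert abs(avg_m - (len(F) / len(M)) * avg_f) < 1e-9
print(avg_m, avg_f)
```

No matter how the edges are rearranged, the final assertion holds, which is the lecture's point: the ratio of average partner counts is fixed by the populations alone.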
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
1113_Countable_Sets_Video.txt
PROFESSOR: OK. So we come to the idea of countable sets, which are the most familiar kind of infinite sets. And a countable set is one where you can list the elements-- a0, a1, a2 and so on. So there's a list of all of the elements of A in which every element in A appears at some point. You can count up to any given element of A, and every element of A you will eventually get to, you'll be able to count up to it. So it's just a matter of listing it. And the technical definition of "A is countable" is if there's a bijection between A and the non-negative integers. Because this listing, in effect, really is a mapping from the non-negative integers to A. 0 maps to a0, 1 maps to a1, 2 maps to a2, and implicitly there's a bijection being indicated here. That's assuming that all of the a's are distinct for it to be a bijection. So we also have, as a special case, the finite sets are also considered to be countable. So really, if there's a bijection from the non-negative integers N to A, then A is called countably infinite. The other possibility is that A is finite. And the two together, I just say A is countable. So what we've just figured out, then, from the previous examples, is that the positive integers are countable. And all the integers are countable, because in both cases we exhibited bijections to the non-negative integers. Another important and not very hard example is the set of finite binary words. So we use this notation, "0, 1 star," meaning all the finite-- star means all the finite sequences of these elements, 0 and 1. So this is just the finite binary words. How are they countable? Well, I need a way to be able to list them in some orderly way. Well, let's just do it by length. Let's begin by listing the empty word, or string, of length zero. And then I'm going to list all the one-bit strings, the strings of length one. And there are two of those. So let the second element, the next element of the list after the empty string, be 0, and then let the next element after that be 1. 
Then let's list all the length two strings. Well, there's four length two binary strings. And let's just list them in some sensible order-- say, by their binary representation. And then keep going. List all the length three binary strings-- there's eight of those. And finally, keep going up until you get to the length n binary strings, of which there are 2 to the n. And this is a description of a way to list, one after another, all of the finite binary words, or finite binary strings. And that listing is implicitly a description of a bijection from the non-negative integers n to the nth element in my listing. And that's a bijection, so the binary words are countable. Another example of a countable set is the pairs of non-negative integers. So how can-- now I've got the non-negative integers. I've got to find a bijection of pairs of non-negative integers. How am I going to do that? Well, it's the same idea as we used with binary strings. There's a bunch of ways to prove it, but let's just propagate the binary string idea. Let's start listing the pairs of non-negative integers. And after 0, 0, I'm going to list two pairs-- 0, 1 and 1, 0. And after them, I'm going to list three pairs-- 0, 2, 2, 0, and 1, 1. And after them, 0, 3, 3, 0, 1, 2, 2, 1. And if you can see what I'm doing, I'm basically listing the pairs in the order of the sum of their coordinates. So the nth block of pairs that I'm going to list will be the pairs the sum of whose two coordinates is n. There'll be n plus one of those. And I keep going in this way. This is a nice orderly description of-- or a description of a nice orderly way to list all of the pairs of non-negative integers. Within a block, invent some alphabetical rule for listing the pairs. So I'm going to-- I've hinted at a rule here for listing the finite set of pairs whose sum is n, and you can invent-- any one will do. So that tells us that we have a bijection between the non-negative integers and the pairs of non-negative integers. 
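The block-by-block listing of pairs described above can be written directly as a generator; a minimal Python sketch (function name is mine):

```python
from itertools import islice

def pairs():
    # Enumerate N x N block by block: the nth block lists the n + 1
    # pairs whose two coordinates sum to n.
    s = 0
    while True:
        for i in range(s + 1):
            yield (i, s - i)
        s += 1

# The first three blocks: coordinate sums 0, 1, and 2.
print(list(islice(pairs(), 6)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

Every pair (i, j) appears exactly once, in block i + j, so the generator implicitly defines the bijection between the non-negative integers and the pairs.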
So that's another important bijection. Now, when you're trying to prove countability, it's very useful to have the following lemma, which gives an alternative characterization of countability-- namely, a set A is countable if and only if you can list A allowing repeats. Remember, our original definition is that you can list A without repeats if it's infinite, or else it's finite. So the bijection between the non-negative integers and A, in effect, is saying that that's a listing of all of an infinite set A with no repeats, because the mapping is a bijection. If an element appeared twice, we'd have two different non-negative integers mapping to it, which would break the bijection property, the injection property. And so suppose we allow repeats. And the claim is that that's fine, because you can fix that. So the lemma says that, if there's a surjective function from the non-negative integers to A, then A is countable. Well, let's just check quickly in one direction. If A is finite, then there's clearly a surjective function from the non-negative integers to A. There's lots of extra non-negative integers you don't need. If it's a finite set, like 10 elements in A, map 0 through 9 to those 10 elements, and map every other non-negative integer, say, to the 10th element, or last element, of A. So there's certainly a surjection if A is finite. Now, suppose that A is infinite, and I have a surjection from the non-negative integers to A. So I'm listing A with repeats. And I'm supposed to have a bijection if it matches the other definition. How do you do that? Well, if you're a computer scientist, you know how to change a sequence with repeats into a sequence without repeats. You just filter it for duplicates, going from left to right. Take this infinite sequence of elements of A in which there are repeats, and keep only the first occurrence of each element. That will define a bijection with the non-negative integers if A is infinite. 
And that's how we prove this lemma, which I'm just going to settle for talking through. So now we have another convenient way to show that a set is countable, just by describing, not a bijection, but a surjection between the non-negative integers and A. Surjections are often easier to describe than bijections, which is why this is a useful lemma. A corollary of this is that, if I'm trying to show that a set A is countable, all that I really need to do is find some other set that I know to be countable and describe a surjection from that other set C to A. Because I know that if C is countable, then there'll be a bijection between the non-negative integers and C. And since when you combine a bijection with a surjection, you wind up with a surjection, that will implicitly define a surjection from the non-negative integers to A, which by the lemma tells me that A is countable. So the general way to prove something is countable is just describe a surjection, from something you know to be countable, that hits your target. And let's look at an example of that. I claim that the rationals are countable, the rational numbers are countable. Well, this is kind of a little bit more striking at first, because you can see how you can count the non-negative integers, the positive integers, all the integers, because there's a nice sensible way to have one come after another. But with the rationals, it's messy. In between any two rationals, there's another rational. There isn't any first rational. There isn't any obvious way to list them all. But really, if you stop thinking about how the rationals are laid out on the real line, and just think of them as pairs of integers, then it becomes clear how to list them, because we already know that the pairs of non-negative integers are countable. So I'm just going to map a pair of non-negative integers m, n to the rational number m divided by n. 
Well, n might be zero, so if n is zero, just map all of those pairs to your favorite rational number. Call it a half. And that gives us a nice surjective mapping, because every rational number can be expressed as m over n-- at least every non-negative rational number. So in effect, what we have is a surjection from the pairs of non-negative integers, which we know is countable, onto the non-negative rational numbers, quotients of integers. Which means that the rationals, sure enough, are countable, even though they seem to be spread out all over the line. So, again, we saw that N cross N is countable, and there's a surjection, described above, to the non-negative rationals, so they're countable. Well, just looking ahead a little bit, it's going to turn out that, in contrast to the rational numbers, the real numbers are not countable. And in fact, neither are the infinite binary sequences that we saw-- there was a bijection between the infinite binary sequences and the power set of the non-negative integers. And both of these are going to be basic examples of uncountable sets, sets that are not countable, which we will be examining in the next lecture.
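The surjection onto the non-negative rationals, combined with the duplicate-filtering trick from the lemma, can be sketched in Python (the function names are mine; mapping the n = 0 pairs to 1/2 follows the lecture's "favorite rational" choice):

```python
from fractions import Fraction
from itertools import islice

def rationals_with_repeats():
    # Surjection N x N -> non-negative rationals: (m, n) goes to m/n,
    # and the pairs with n == 0 all go to the favorite value 1/2.
    s = 0
    while True:
        for m in range(s + 1):
            n = s - m
            yield Fraction(m, n) if n != 0 else Fraction(1, 2)
        s += 1

def rationals():
    # Filter the listing for duplicates, going left to right and keeping
    # only first occurrences -- the fix-up from the lemma above.
    seen = set()
    for q in rationals_with_repeats():
        if q not in seen:
            seen.add(q)
            yield q

print(list(islice(rationals(), 5)))
```

The filtered generator lists every non-negative rational exactly once, which is exactly the bijection the lemma promises.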
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
221_Congruence_mod_n_Video.txt
PROFESSOR: The idea of congruence was introduced to the world by Gauss in the early 19th century. You've heard of him before, I think. He's responsible for some work on magnetism also. And it turns out that this idea, after two centuries, remains an active field of application and research. And in particular, in computer science it's used significantly in crypto, which is what we're going to be leading up to now in this unit. It plays a role in hashing, which is a key method for managing data in memory. But we are not going to go into that application. Anyway, the definition of congruence is real simple. Congruence is a relation between two numbers, a and b. It's determined by another parameter n, where n is considered to be greater than one. All of these, as usual, are integers. And the definition is simply that a is congruent to b mod n if n divides a minus b, or a minus b is a multiple of n. So that's a key definition to remember. There's other ways to define it. We'll see very shortly an equivalent formulation that could equally well have been used as a definition. But this is the standard one. A congruent to b means that a minus b is a multiple of n. Well let's just practice. 30 is equivalent to 12 mod 9 because 30 minus 12 is 18, and 9 divides 18. OK. An immediate application is that this number with a lot of 6's ending in a 3 is equivalent to 788253 modulo 10. Now why is that? Well, there's a very simple reason. If you think about subtracting one of these numbers ending in 3 from the other, what you can immediately see without doing much of any of the subtraction-- just do the low order digits-- is that you're going to get a number that ends in 0. Which means that it's divisible by 10. And therefore those two numbers are congruent. It's very easy to tell when two numbers are congruent mod 10 because they just have the same low order digit. OK. 
Another way to understand congruence and what it's really all about is the so-called remainder lemma, which says that a is congruent to b mod n, if and only if a and b have the same remainder on division by n. So let's work with that definition. We can conclude, using this equivalent formulation, that 30 is equivalent to 12 mod 9, because the remainder of 30 divided by 9-- well, 3 times 9 is 27, remainder 3-- and the remainder of 12 divided by 9 is 3. So they do indeed have the same remainder, 3. And they're congruent. By the way, this equivalence sign with the three horizontal bars is read as both equivalent and congruent. And I will be bouncing back between the two pronunciations indiscriminately. They are synonyms. OK, let's think about proving this remainder lemma just for practice. And in order to fit on the slide, I'm going to have to abbreviate this idea of the remainder of b divided by n with a shorter notation, r sub b n, just to fit. OK. So the if direction of proving the remainder lemma-- that they're congruent if and only if they have the same remainder. The if direction here in an if and only if is from right to left. I've got to prove that if they have the same remainder, then they're congruent. So there are the two numbers, a and b. By the division theorem, or division algorithm, each can be expressed as n times a quotient plus a remainder: a is n times the quotient q sub a, plus the remainder of a divided by n. And likewise, b can be expressed in terms of its quotient and remainder. And what we're given here is that the remainders are equal. But if the remainders are equal, then clearly when I subtract a minus b, I get qa minus qb times n. Sure enough, a minus b is a multiple of n. And that takes care of that one. The only if direction now goes from left to right. So in the converse, I'm going to assume that n divides a minus b, where a and b are expressed in this form by the division algorithm or division theorem. 
So if n divides a minus b, looking at a minus b in that form, what we're seeing is that n divides this qa minus qb times n, plus the difference of the remainders. That's what I get just by subtracting a and b. But look at this: n divides that term, the quotient difference times n. And it therefore has to divide the other term as well. Because the only way that n can divide a sum, when it divides one of the summands, is if it divides the other summand. So n divides ra minus rb, the difference of the remainder of a divided by n and the remainder of b divided by n. But remember, these are remainders. So that means that they're both in the interval from 0 to n minus 1 inclusive. And the distance between them has got to be less than n. So if n divides a number whose magnitude is less than n, that number has to be 0. Because it's the only number that n divides in there. So in fact, the difference of the remainders is 0. And therefore, the remainders are equal. And we've knocked that one off. So there it is restated. The remainder lemma says that they're congruent if and only if they have the same remainders. And that's worth putting a box around to highlight this crucial fact, which could equally well have been used as the definition of congruence. And then you'd prove the division definition that we began with. Now some immediate consequences of this remainder lemma are that congruence inherits a lot of properties of equality. Because it means nothing more than that the remainders are equal. So for example, we can say that congruence is symmetric, meaning that if a is congruent to b, then b is congruent to a. And that's obvious because a congruent to b means that a and b have the same remainder. So b and a have the same remainder. One that would actually take a little bit of work to prove from the division definition-- not a lot, but a little bit-- would be that if a is congruent to b, and b is congruent to c, then a is congruent to c. But we can read it as saying: the first says that a and b have the same remainder. 
The second says that b and c have the same remainder. So obviously a and c have the same remainder. And we've proved this property that's known as transitivity of congruence. Another simple consequence of the remainder theorem is a little technical result that's enormously useful, called the remainder lemma, which says simply that a number is congruent to its own remainder, modulo n. The proof is easy. Let's prove it by showing that a and the remainder of a have the same remainder. Well, if I take remainders of both sides, the left hand side becomes the remainder of a divided by n. The right hand side is the remainder of the remainder. But the point is that the remainder is in the interval from 0 to n. And that means when you take its remainder mod n, it's itself. And therefore the left hand side is the remainder of a divided by n, and the right hand side is also the remainder of a divided by n. And we have proved this corollary that's the basis of remainder arithmetic. Which will basically allow us, whenever we feel like it, to replace numbers by their remainders, and that way keep the numbers small. And that also merits a highlight. OK. Now, in addition to these properties like equality that congruence has, it also interacts very well with the operations. Which is why it's called a congruence. A congruence is an equality-like relation that respects the operations that are relevant to the discussion. In this case, we're going to be talking about plus and times. And the first fact about congruence says that if a and b are congruent, then a plus c and b plus c are congruent. The proof of that follows trivially from the definition. Because a congruent to b mod n says that n divides a minus b. And if n divides a minus b, obviously n divides a plus c minus b plus c. Because a plus c minus b plus c is equal to a minus b. That one is deceptively trivial. It's also the case that if a is congruent to b, then a times c is congruent to b times c. 
This one takes a one line proof. We're given that n divides a minus b. That certainly implies that n divides any multiple of a minus b. So multiply it by c and then apply distributivity, and you discover that n divides ac minus bc, which means ac is congruent to bc modulo n. It's a small step, which I'm going to omit, to go from adding the same constant to both sides to adding any two congruent numbers on the two sides. So if a is congruent to b and c is congruent to d, then in fact, a plus c is congruent to b plus d. So again, congruence is acting a lot like ordinary equality. If you add equals to equals, you get equals. And of course the same fact applies to multiplication. If you multiply equals by equals, you get equals. A corollary of this is that if I have two numbers that are congruent modulo n, then if I have any kind of arithmetic formula involving plus and times and minus-- and what I want to know is what it's congruent to modulo n-- I can figure that out by freely substituting a by a prime or a prime by a. I can replace any number by a number that it's congruent to, and the final congruence result of the formula is going to remain unchanged. So overall what this shows is that arithmetic modulo n is a lot like ordinary arithmetic. And the other crucial point, though, that follows from this fact about remainders is that because a is congruent to the remainder of a divided by n, when I'm doing arithmetic on congruences, I can always keep the numbers involved in the remainder interval. That is, in the remainder range from 0 to n minus 1. And we use this standard closed-open interval notation to mean the interval from 0 to n. It's sometimes used in analysis to mean the interval of reals, but we're always talking about integers. So this means the integers in that range: the square bracket means 0 is included, and the round parenthesis means that n is excluded. So that's exactly a description of the integers that are greater than or equal to 0 and less than n.
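These congruence facts are easy to sanity-check mechanically. Here is a short sketch in Python (the function name `congruent` is just for illustration; Python's `%` operator returns the remainder in the range from 0 to n for a positive modulus n):

```python
def congruent(a, b, n):
    """a is congruent to b mod n, by the division definition: n divides a - b."""
    return (a - b) % n == 0

# The remainder lemma: a and b are congruent mod n iff they have the same remainder.
for a in range(-20, 20):
    for b in range(-20, 20):
        assert congruent(a, b, 7) == (a % 7 == b % 7)

# Congruence respects addition and multiplication:
# if a ≡ b and c ≡ d (mod n), then a + c ≡ b + d and a * c ≡ b * d.
a, b, c, d, n = 13, 6, 9, 2, 7        # 13 ≡ 6 and 9 ≡ 2 (mod 7)
assert congruent(a + c, b + d, n)
assert congruent(a * c, b * d, n)
```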
Let's do an application of this remainder arithmetic idea. Suppose I want to figure out what's 287 to the ninth power modulo 4? Well, for a start, if I take the remainder of 287 divided by 4, it's not very hard to check that that's 3. And that means that 287 to the ninth is congruent mod 4 to 3 to the ninth. So already I got rid of the three digit number, the base of the exponentiation, and replaced it just by a one digit number, 3. That's progress. Well, we can make more progress because 3 to the ninth can be expressed as 3 squared, squared, squared, times 3, right? Because when you iterate taking powers, the exponents multiply. So this is 3 to the 2 times 2 times 2, or 8, times 3-- which adds 1 to the exponent-- or 9. So that's simple exponent arithmetic. But notice that 3 squared is 9. And 9 is congruent to 1 mod 4. So that means I can replace 3 squared by 1, and the outer squarings stay. It becomes 1 squared, squared, times 3, but that's just 1 times 3. And the punchline is that 287 to the ninth is congruent to 3 mod 4 by a really easy calculation that did not involve taking anything to the ninth power.
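This repeated-squaring-and-reducing trick is exactly what fast modular exponentiation does: square repeatedly, replacing each intermediate result by its remainder so the numbers stay small. A minimal sketch in Python (the name `mod_exp` is ours; Python's built-in three-argument `pow` does the same job):

```python
def mod_exp(base, exp, n):
    """Compute base**exp mod n by repeated squaring, reducing mod n at every step."""
    result = 1
    base %= n               # replace the base by its remainder right away
    while exp > 0:
        if exp & 1:         # low bit of the exponent set: multiply this power in
            result = (result * base) % n
        base = (base * base) % n
        exp >>= 1
    return result

print(mod_exp(287, 9, 4))        # 3, matching the hand calculation
print(pow(287, 9, 4))            # Python's built-in modular pow agrees
```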
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
411_Tree_Model_Video.txt
PROFESSOR: So now we start on a unit of about a half a dozen lectures on probability theory, which most students have been exposed to, to some degree, in high school. We'll be taking a more thorough and theoretical look at the subject in our six lectures but, before we begin, let's give a little pitch for the significance of it. There's been extensive debate among the faculty about whether probability theory belongs right up there with physics and chemistry and math as something that should be a fundamental requirement for all students to know. It plays an absolutely fundamental role in the hard sciences, and the social sciences, and in engineering, pervading all those subjects. And it's hard to imagine somebody legitimately being called fully-educated if they don't understand the basics of probability theory. Historically, probability theory starts off in a somewhat disreputable way in the 17th and early 18th centuries with the analysis of gambling, but then it goes on to be the basis for the insurance industry and underwriting, predicting life expectancies so that you could understand what kind of premiums to charge. And then it goes on to allow the interpretation of noisy data with errors in it and the degree to which it confirms scientific and social science hypotheses. But true to the historical basis, let's begin with an example from gambling that illustrates the first idea of probability, and then we're going to be working up to a methodology for inventing probability models, called the tree model. So let's begin with an example from poker, and I'd like to ask a question. If I deal a hand of five cards in poker, what's the probability of getting exactly two jacks? So there are 13 ranks and there are four suits of jacks-- spades, hearts, diamonds, clubs-- what's the probability that, among my five cards, I'm going to get two of them? Well, that's really a counting problem because I'm going to ask, first of all, how many possible five-card hands are there?
We can think of these as the outcomes of a random experiment of just picking five cards. And there are 52 choose 5 five-card hands in a 52-card deck. Then, there are 4 choose 2 ways of picking the suits for the two jacks that we have, and so the total number of hands that have two jacks is simply 4 choose 2 times 52 minus 4-- the remaining 48 cards-- choose the remaining 3 cards in the five-card hand. And then what we would say is that the probability of two jacks is basically the number of hands with two jacks divided by the total number of hands. It turns out to be about 0.04 and, under this interpretation, basically, what we're thinking of probability as telling us is, what fraction of the time do I get what I want? What's the fraction of the time that I, quote, "win," if winning consists of getting a pair of jacks? And, by symmetry and the fact that we think of one hand as as likely to come up as another, this fraction of hands that contain two jacks-- it makes sense to think of that as the probability that we'll get such a hand. If we think of all the hands as being equally likely, and we yank one out of the deck, the fraction of time that we would expect to get two jacks is this number. About 0.04. So, the general setup of probability, the first idea based on this illustration with a pair of jacks, is that, abstractly, we have some random experiment that's capable of producing outcomes. These are mathematical black boxes called outcomes. Now, a certain set of the outcomes we will think of as an event that we're interested in whether or not it happens. We could think of it as the event of getting two jacks or the event of winning some game. Then we define the probability of an event as simply the number of outcomes in the event divided by the total number of outcomes. Among all the outcomes, what fraction of outcomes are in the event? And we define that to be the probability of the event.
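The counting argument above is a one-liner to check with Python's `math.comb`:

```python
from math import comb

total_hands = comb(52, 5)                  # all five-card hands from a 52-card deck
two_jack_hands = comb(4, 2) * comb(48, 3)  # pick 2 of the 4 jacks, then 3 of the other 48 cards
p = two_jack_hands / total_hands
print(round(p, 4))                          # ≈ 0.0399, the "about 0.04" from the lecture
```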
That's the first naive idea about probability theory and it applies to a lot of cases, but not always. So now, let's begin with an example which illustrates why this first idea needs to be refined and it doesn't really give us the kind of theory of probability that we'd like. So let's turn to a game that was really famous in the 1970s. An enormously popular TV game hosted by a man named Monty Hall. The actual name of the TV show was called Let's Make a Deal, but we'll refer to it as the Monty Hall game, and the way that this Let's Make A Deal show worked was, roughly, that there were three doors. This is an actual picture of the stage set. Door 1, door 2, door 3. And by the way, this game show still has a fan base. There's a website for it that you can look at. Even 40 years later, people are still caught up in the dynamics of the game. So there are these three doors and the idea is that behind the doors, they're going to have a prize behind one of them and some kind of booby prize, often a goat held by a beautiful woman holding a goat on a leash just to keep things visually interesting, and that's what you got if you lost. And contestants were going to somehow or other pick a door and hope that the prize was behind it. There's a picture of the staff. There's Monty Hall and the woman who was his assistant, Carol Merrill. Her job was to pick doors to open and show them to contestants to see what was behind them. OK. So here are the rules for the Monty Hall game. The actual quiz show had more flexible rules but, for simplicity, we're going to define a simple, precise, and fixed set of rules. The rules are that, behind the three doors, two of the doors are going to have goats and one of the doors is going to have a prize behind it. Often the prize is something like an automobile. Something really desirable. 
So we can assume that the staff, on purpose, will place the prize at random behind the three doors, because they don't want anybody to be able to guess that some doors are more likely than others to have the prize. And they're not allowed to cheat. That is, once they've decided which door is going to have the prize, it's supposed to stay there throughout the game. They can't move it in response to which door the contestant picks. That would be cheating. OK. Next, the contestant is given an opportunity to pick one of the doors. They're all closed and it's hard to understand how the contestant would make a choice, but if the contestant was worried about the staff trying to outguess him on where to put the goat and where to put the prize, the contestant should just pick among the doors with equal likelihood. Then he can't be beaten by their trying to outguess him. He can only be beaten if they cheat him by moving the goat after he picked or moving the prize after he picked. At this point, once the contestant has picked a door-- let's say he picks door 2-- then Monty instructs Carol to open a door with a goat behind it. So she's going to choose an unpicked door. If the contestant has picked door 2, that means that door 1 and door 3 are unpicked doors, and Monty tells Carol, open either door 1 or door 3, whichever one-- or perhaps both-- have a goat behind them. And so Carol is going to open one of those doors and show a goat, and everybody knows that they're going to see a goat because that's the way the game works. And then at this point, when the contestant has seen that there's a door that has a goat behind it and they're sitting on a picked door and there's another unopened door that hasn't been picked, the contestant's job is to decide whether to stick with the door that they originally picked or switch to the other unopened door.
So if they picked door 2 and Carol opened door 3, they could stick with door 2 or they could switch to the closed door 1 and hope that maybe 1 has the prize behind it. Those are the rules of the game. Now, the game got a lot of prominence in a magazine column written by a woman named Marilyn vos Savant. The name of the magazine column was Ask Marilyn, and she advertises herself as having the highest recorded IQ of all time, some IQ of 200, and so she runs a popular science and math column with various kinds of puzzles. And she took up the analysis of the Monty Hall game and came to a conclusion, and the conclusion caused a firestorm of response. Letters from all sorts of readers, even quite sophisticated PhD mathematicians, who were arguing with her conclusion about the way the game worked and the probability of winning according to how the contestant behaved. The debate basically came down to these two positions. Position 1 said that sticking and switching were equally good. It really didn't matter what the contestant did, whether they stuck with the door that they originally picked or switched to the unpicked door after the third door had been opened, and that their likelihood of finding the prize was the same. And the other argument, very emphatically, said switching is much better. You should really switch no matter what. And how can we resolve this question? Well, the general method that we're proposing for dealing with problems like this, where we're really trying to figure out what is the probability model, is to draw a tree that shows, step-by-step, the progress of the process or experiment that's going to yield a random output, and try to assign probabilities to each of the branches of the tree as you go, and use that as a guide for how to assign probabilities to outcomes. So let's begin, first of all, by finding out what the outcomes are, and we're going to be analyzing the switch strategy.
So, just for definiteness, let's suppose that the contestant adopts the strategy that they pick a door, Carol opens a door that shows a goat, and they're going to switch to the non-goat closed door that they did not originally pick. They're going to switch to the other door that they can switch to and we're going to ask about, what are the outcomes and consequences of winning or losing if you adopt that strategy? Well, the tree of possibilities goes like this. The first step in this process that we've described is that the staff picks a prize location, a door to put the prize behind, and so there are three possibilities. They could put the prize behind door 1, door 2, and door 3. OK Well, let's examine the possibility that they put the prize behind door 1. So the next stage is they pick a door and if the prize is behind one and they pick a door, again, there are three possible doors that the contestant might pick. The contestant has no idea where the price is and so the contestant can choose either door 1 or door 2 or door 3. At that point, the third event in this random process, or experiment, is that Carol opens a door that has a goat behind it. So let's examine those possibilities. So, one possibility is that the prize is behind one and the contestant picks door one, initially. Well that means that Carol can open either door 2 or door 3 in that circumstance because both of them have goats behind them. On the other hand, if the prize is at 1 and the contestant picks door 2, the two closed doors have-- one has the prize, 1, and the other doesn't have the prize, 3. Carol has to open door three. Likewise, if the contestant picks door 3 when the prize is behind door 1, Carol has to open door 2. Here she's got a two-way branch. She can choose to open either of the two goat doors, 2 or 3. Here there's only one unopened door with a goat, she's got to open 3 there, too. OK. And that describes the outcomes of the experiment. 
That's the process of the experiment, and these nodes at the end, these leaves of the tree, describe the final outcomes on this branch. Now, if you look at the classification of these outcomes according to winning and losing-- well, we're looking at the switch strategy. So if the prize was behind 1 and the contestant picked door 1 initially, then their strategy is to switch, and they're going to switch away from the prize door. So whichever door Carol opened to reveal the goat, 2 or 3, the contestant is going to switch to the other one and they're going to lose. So both of these outcomes count as losses for the contestant. On the other hand, if the prize was behind door 1 and the contestant picked door 2, then Carol opens the non-prize door, 3, and the contestant switches from 2. The only choice they have is to switch to 1, the prize door. They win. And this other case is symmetric. And that summarizes the wins and losses in this branch of the tree. Now, of course, the rest of the tree is symmetric, so we don't need to talk it through again. This is just simply the case where the prize is behind 2. The contestant has the same choices and Carol has the same choices of which unopened door to choose, and likewise if the prize is behind 3. So if we look at this tree, the tree is telling us that this is an experiment which we think of as having twelve outcomes, four in each of these major branches. So there are twelve outcomes of this random experiment, of which six are losses and six are wins for the contestant, and so we discover that there are six wins and six losses. Now, the way that this game works, if you think about it-- if the switching strategy wins, that means that the sticking strategy would have lost, because if switching wins, it meant that you switched to the door that had the prize, and so if you hadn't switched, you must have been at a door that didn't have the prize, and likewise.
If switching loses, then you must have switched from the door with the prize to a door that didn't have the prize, and that means if you'd stuck, you would have won. So what we can say is that analyzing the switch strategy enables us to analyze the stick strategy simultaneously, because you win by sticking if and only if you lose by switching. Now this simplification doesn't hold when there are more than three doors, and that's another exercise, but for now, it's telling us that if we analyze the switch strategy, we also understand the stick strategy. And of course, that means that if you use the stick strategy then the six wins become losses and the six losses become wins and, again, there are six ways to lose and six ways to win. So the first false conclusion from this comes from reasoning about these outcomes as though they were poker hands, and the false conclusion says, look, sticking and switching win with the same number of outcomes and lose with the same number of outcomes. So it really doesn't matter whether you stick or switch, because the probability of winning, in both cases, is half: 6 outcomes out of 12. It makes no difference whether you stick or switch. And that's wrong, and we will see why soon. The other false argument is that we think about what happens after Carol has opened a door. So, where are we? The contestant has picked a door, has no idea where the goat or the prize is. Carol opens a door and shows the contestant a goat. What's left? Well, there are two closed doors left. One is the door with the prize and the other is the door without the prize that has a goat behind it and, by symmetry of the doors, the contestant has no idea what's behind the door that he picked or the remaining unopened door.
They're equally likely to contain the prize, and so the argument is, again, that whether you stick or switch between those two doors that haven't yet been opened, it doesn't really matter, and so, again, the stick strategy and the switch strategy each win with the same 50-50 probability. And that's wrong, too. What's wrong? Well, let's go back and look at this tree a little bit more carefully to understand what's going on. And the first thing to notice about the tree is that the structure of the tree leading to the leaves is not the same. Here's a leaf that has degree 2-- one way to get in and only one way out-- and here's a leaf that has degree 3-- one way in and two ways out, if we think of going from the root to the leaf. And so it's not clear that these branches, these leaves, should be treated the same way. Well, let's think more carefully about how we're going to assign probabilities to the various steps of the experiment. What we're going to assume, for simplicity, is that the staff chooses a door at random to place the prize. So that means that each of these branches occurs with probability 1/3. 1/3 of the time, they put the prize behind door 1, 1/3 behind door 2, and 1/3 behind door 3. OK. Let's continue exploring the branch where they put the prize behind door 1. At that point, the contestant is going to pick a door, and they can pick either door 1, 2, or 3, absent any knowledge. And also, to be sure that they can't be outguessed by the staff-- if the staff realized that contestants mostly preferred, say, door 1, they could place the prize to exploit that-- the contestant's protection is to pick a door at random. Choose door 1 1/3 of the time, and door 2 1/3 of the time, and door 3 1/3 of the time, in a completely unpredictable way. And so the contestant is going to choose each of those possible doors as their first choice with probability 1/3. Now what happens next?
Well, the next thing that happens is that Carol opens a door. Now this is the case where Carol has a choice. The prize is behind 1 and the contestant happened to pick door 1. That means doors 2 and 3 both have goats and, again, for simplicity, let's assume that Carol, when she has a choice-- she can open either door 2 or door 3 here-- does so with equal probability. So we're going to assign probability 1/2 to her opening door 2 when she has the choice between 2 or 3, and probability 1/2 that she'll open door 3 and, by the way, we saw that those were losing outcomes for the contestant. But here, things are a little different. If the prize is behind door 1 and the contestant has chosen door 2, Carol has no choice but to open the only other unchosen door with a goat behind it, namely, door 3. So we could say that this choice, really, is probability 1 and I got a little bit ahead of myself here but, having filled in the probabilities on these edges, what we figured out is that the probability of this topmost branch of losing is-- we said, well, 1/3 of the time you go here, and 1/3 of that third you go here, and 1/2 of that time you go to this vertex. So it's 1/3 of 1/3 and 1/2 of that, or a weight of 1/18 and, by symmetry, this gets weight 1/18. But this way, 1/3 of the time, the prize is behind door 1, 1/3 of the time the contestant picks door 2, and after that, Carol is forced to open door 3. So this branch occurs with certainty, that is, with probability 1, which means that we wind up at this leaf 1/3 of 1/3 of the time for sure, and its weight is 1/9. And of course, by symmetry, similar weights get assigned to the other winning and losing outcomes. So what we've concluded is that, although there are six wins, the weight of the wins is 6/9, because they're each worth 1/9, and winning will occur 2/3 of the time. Likewise, there are six losses, but they each only occur 1/18 of the time, and so we lose 1/3 of the time by the switch strategy.
The summary, then, is that the probability of winning if you switch is 2/3 and, by the remark that you win with switching if and only if you lose with sticking, it follows that you lose by sticking 2/3 of the time. And so sticking is really a bad strategy and switching is the dominant way to go. Now, in class, we back up this theoretical analysis. It's very logical, but the question is, is it true? And you can do statistical experiments and have students pick doors and goats and prizes and, sure enough, it turns out that roughly 2/3 of the time-- and closer and closer to 2/3 the more times you play the game-- the switching strategy wins. So, the second key idea in probability theory is that the outcomes may have different probabilities. They may have different weights. Unlike the poker hand case, when we look more closely at a random experiment with different outcomes, we will agree that, for various kinds of reasons of symmetry or logic and so on, it makes sense to assign different probability weights to the different outcomes. It's not the case that the outcomes have uniform probability, that they're all equally likely. So, to summarize-- and this example illustrates the confusion about probability theory that ensnared even some serious experts-- in general, intuition is very important, as in any subject, but it's also dangerous in probability theory. Particularly for beginners who aren't experienced with some of these traps that you can fall into, and so our proposal is that you be very wary of intuitive arguments. They're valuable, but you need another, disciplined way to check them, and we propose that you stick with what we call the four-step method when you're trying to devise a probability model for some random experiment. So, the steps are, first, that you try to identify the outcomes of the random experiment, and this is where the tree structure comes up.
If you try to model, step-by-step at each stage of the tree, what the possible sub-steps are in the overall process that yields the random outcome, that's where the tree comes in, as we illustrated with Monty Hall. The next thing to do is, among the outcomes, identify the ones in the event that you're concerned about whether or not it happens-- the winning outcomes, say. Getting two jacks, picking the door with the prize. So you need to identify the target event whose probability you're interested in. We could call it the winning event, the probability of winning. The third key step is to try to use the tree and the logic of it to assign probabilities to the outcomes, and the fourth step, then, is simply to compute the probability of the event, which you do in a very straightforward way by basically adding up the probabilities of each of the outcomes in the event. That is the four-step method. Now, this Monty Hall tree that we came up with was very literal and wildly, unnecessarily complicated. So let's take another look at that and a simpler argument that will lead us to the same conclusion about how the Monty Hall game works, and we'll do that in the next video.
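The statistical check mentioned above is easy to run as a simulation. This is one way to code the rules as a sketch, not the class's actual experiment; note that when Carol has a choice of goat doors, this sketch just opens the lowest-numbered one, which doesn't change the win probabilities:

```python
import random

random.seed(2015)

def play(switch):
    """One round of the Monty Hall game; returns True if the contestant wins."""
    prize = random.randrange(3)            # staff hides the prize at random
    pick = random.randrange(3)             # contestant picks a door at random
    # Carol opens a goat door that the contestant did not pick.
    goat = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != goat)
    return pick == prize

trials = 100_000
wins_switch = sum(play(True) for _ in range(trials)) / trials
wins_stick = sum(play(False) for _ in range(trials)) / trials
print(round(wins_switch, 2), round(wins_stick, 2))   # ≈ 0.67 and ≈ 0.33
```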
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
453_Expected_Number_Of_Heads_Video.txt
So for practice with expectation, let's calculate the expected number of heads in n coin flips, working directly from the definition, because we have tools to do that. So we're imagining n independent flips of a coin with bias p. So the coins might not be fair. The probability of heads is p. It would be biased in favor of heads if p is greater than 1/2 and biased against heads if p is less than 1/2. And we want to know how many heads are expected. This is a basic question that will come up again and again when we look at random variables and probability theory. So what's the expected number of heads? Well, we already know-- we've examined the binomial distribution B n,p. B n,p is telling us how many heads there are in n independent flips. So we're asking about the expectation of the binomial variable B n,p. Well, let's look at the definition. The expectation of B n,p is the sum, over all the possible values k of B n,p-- namely all the numbers from 0 to n-- of k times the probability of getting k heads. And this formula here is the probability of getting k heads, which we've worked out previously: n choose k, times p to the k, times 1 minus p to the n minus k. Well, let's introduce a standard abbreviation. Let's replace 1 minus p by q, so that p plus q equals 1, and both are between 0 and 1. And when I express the expectation this way, it starts to look like something a little bit familiar. And our strategy is going to be to use the binomial theorem, and then the trick of differentiating it is going to wind up giving us a closed formula for this expression for the expectation of the binomial random variable. So let's remember the binomial theorem says that the nth power of x plus y is the sum from k equals 0 to n of n choose k, x to the k, y to the n minus k. And if I differentiate this with respect to x, what happens is that the left hand side becomes n times x plus y to the n minus 1.
And if I differentiate the right hand side-- let's differentiate it term by term. Differentiating with respect to x is going to turn this n choose k, x to the k, y to the n minus k into a k times x to the k minus 1 term. But I'd like to keep the k here and the k there matching. So after differentiating, where that becomes an x to the k minus 1, let's multiply it by x to make it x to the k. And of course, I have to undo that multiplication by dividing the whole thing by x. So by differentiating the binomial formula, we get the following formula for this sum that is starting to look just like the expectation of B n,p: 1/x times the sum from k equals 0 to n of k times n choose k, x to the k, y to the n minus k. Well, let's compare the two terms. So here's this term and there's this one. I'm going to replace this line by the formula for the expectation of the binomial random variable. So this is what we're trying to evaluate, and I have this great theorem. You can see how they match up. So what I'm going to do is replace x and y in this general formula that I got by differentiating the binomial theorem with p and q. And what happens? So I just plug in the p and q. Now, the left hand side: p plus q is 1, so the left hand side is going to become n. And this right hand side now is exactly the expectation of B n,p-- this part of it, anyway. So what I wind up with is that n is equal to 1/p times the expectation of B n,p. In other words, the expectation of B n,p is n times p, and that is the basic formula that we were deriving by first principles, without using any general properties of expectation-- just the definition of expectation and the stuff that we had already worked out in terms of the binomial theorem.
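The closed formula E[B n,p] = np can be double-checked by summing the defining series directly (the function name here is just for illustration):

```python
from math import comb

def expected_heads(n, p):
    """E[number of heads] from the definition: sum over k of k * Pr[k heads]."""
    q = 1 - p
    return sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

# The sum matches n*p for several choices of n and p.
for n, p in [(10, 0.5), (20, 0.3), (7, 0.9)]:
    assert abs(expected_heads(n, p) - n * p) < 1e-9

print(expected_heads(10, 0.5))   # 5.0
```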
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
265_Time_versus_Processors_Video.txt
PROFESSOR: The example of scheduling courses in terms is really a special case of a general problem, as you can probably see, of scheduling a bunch of tasks or jobs under constraints about which ones have to be done before other ones, which is a topic that comes up in lots of applications. You can see applications in computer science, where you might have a complex calculation, pieces of which could be done in parallel, and other parts had to be done in order because later results depended on the results of an earlier computation. It leads us to the general discussion of parallel scheduling. And we've already worked out some theory of that really just from the example. Namely, if we look at the minimum number of terms to graduate, this corresponds to the minimum number of stages, or the minimum amount of time, that it takes to process a bunch of tasks, assuming that you can do tasks in parallel-- and as many in parallel as you need to, that there's no limit on the amount of parallelism allowed. In that case, what we can say is that the minimum parallel time for a bunch of constrained tasks is simply the maximum chain size in the constraint graph. We saw that example with the course prerequisites, where we had five. And in general, this is the theorem: minimum parallel time is exactly equal to maximum chain size, for chains in the graph that constrains the order in which tasks can be completed. Now what about the maximum term load? Well, that corresponds to the number of processors you need to be doing tasks in parallel. So for the course scheduling example, it means how many subjects can you take in one term?
But if you were, say, doing computations, how many separate CPUs would you need in order to be able to fully utilize the parallelism-- to do as much in parallel as you possibly could? And a bound on the number of processors that are needed for minimum time is simply the maximum antichain size, which, in the example from the previous segment on course scheduling, turns out to be five: there were five courses you could take in one term, the second term, and that was, in fact, the maximum antichain size. So that's an upper bound on the number of processors that you need to achieve minimum time. But in fact, it's a coarse upper bound, because although the number of processors needed to achieve minimum parallel time is at most the maximum antichain size, in the previous example it turns out you could get away with three processors. It was possible to schedule the subjects so you only took three courses a term and still finished in minimum time. So can you do better than three subjects? Well, there's a trivial argument that says, no, you can't. Because in that previous example, we had 13 subjects to schedule. The maximum chain size was 5. So it was going to take at least five terms. So that means you have to distribute these 13 subjects among five terms. There has to be some term that has at least the average number of subjects, namely 13 divided by 5. So that means there has to be a term in which you're taking at least 13 divided by 5 subjects-- of course, you round up because it has to be an integer. So the maximum term load, if you're finishing these 13 subjects in five terms, is at least 3, because 13 divided by 5 rounded up is 3. And this is a general phenomenon that applies.
And what we can say is that if you have a DAG with n vertices and the maximum chain size is c-- so that's how deep it can be at most-- and the maximum antichain size is a-- that's the largest number of things that you could ever possibly do in parallel-- then clearly, the total number of vertices is c times a, at most. So the total number of tasks that you can do, where you are going to finish in c steps using at most a processors, is bounded by c times a. So what that tells you is that you can't have both the antichain size and the chain size be too small, because their product has to be at least n. That can be rephrased as a lemma that is credited to a guy named Dilworth. Dilworth is actually famous for Dilworth's theorem, of which this Dilworth's lemma is a special case, but we don't need the general theorem. Dilworth's lemma says that if you have an n-vertex DAG, then for any number t, it either has a chain of size bigger than t, or it has an antichain of size greater than or equal to n over t. And we proved this on the previous slide: the product of these two things has to be at least n, and the general case is t times n over t is greater than or equal to n. And this holds for all t between 1 and n. Well, let's think of a simple application of that. If I choose the t that balances antichain size against chain size, then I choose t to be the square root of n. So over here, I have square root of n, and here I have n divided by the square root of n, which is also square root of n. And what we can conclude is that every n-vertex DAG has either a chain of size at least the square root of n or an antichain of size at least square root of n. This turns out to actually have a few applications, but we're just going to look at a fun application of this remark that you have to have a chain or an antichain of size at least square root of n. You might have only one of these. You might have both. But one or the other has to be at least as big as square root of n.
Let's think of a new DAG that I'm going to construct as follows. I'm going to draw an edge between students in the class, and I'm going to think of one student as having a direct edge to another student if the first student is both shorter and younger-- actually meaning no taller than and no older than the other. Let's just say shorter-- meaning shorter or possibly the same height-- and younger-- or possibly the same age. And so the rule is, if I think of a student as being represented by their shortness s and their age a, then a student with a height s1 and age a1 has a direct arrow to another student with height s2 and age a2, providing that the first pair is less than or equal to the second pair in both coordinates: s1 is less than or equal to s2, and a1 is less than or equal to a2. Now, we don't want ties here, because that would break the DAG property if I have two students with exactly the same age and height. So let's assume that we're measuring age in microseconds and height in micrometers. And with that kind of fineness, the likelihood of a tie is pretty low. So then it becomes a DAG again. So this is the definition of taking a DAG built out of pairs-- there's a DAG for height, and there's a DAG for age-- and I combine them into pairs, and I get a new DAG by looking at how the coordinates behave together. This is called the product graph. It's a general construction that comes up, and we will talk a little bit more about it when we reexamine DAGs in the context of the language of relations and partial orders. Anyway, this is the product graph. According to Dilworth's lemma, in a class like ours of 141 students, it means that we're going to have a chain or an antichain in this product DAG of size square root of 141 rounded up, or 12. So in this particular age-height graph, what does it mean for this to be an antichain?
Suppose I take a bunch of students and I line them up in order of size, with the tallest on the left and the shortest on the right. If this is going to be an antichain, it means that they have to be getting older as they get shorter. Because if I ever had a case where somebody to the right was both shorter and younger than somebody to the left, it wouldn't be an antichain, because they'd be comparable. So in an antichain, if you sort the students by height, they have to be getting older as they get shorter. If it was a chain, they would be getting younger as they got shorter. But the more interesting one is the antichain in this height-birthday example. So we should be looking at-- we'll either have a chain or an antichain in this class, according to this product DAG. As a matter of fact, we really had an antichain. Here's a quick list of a dozen students. And indeed, if you look at the birthdays, there's somebody who's 6'1" and was born in August '94, and then somebody who was born in April '94 and is 6'0", all the way down to somebody who was born in 1991 and who's five feet tall. So we lucked out. We could have had only the chain, but we actually had the antichain in this case.
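The square-root consequence of Dilworth's lemma can be checked numerically. Here is a sketch using random (height, age) pairs as hypothetical students; it finds a longest chain and a longest antichain in the product DAG by a simple quadratic dynamic program and confirms that one of them has size at least the square root of n:

```python
import math
import random

random.seed(0)
n = 16
# Hypothetical students: (height, age). Random reals, so ties are negligible,
# like measuring in micrometers and microseconds.
students = [(random.random(), random.random()) for _ in range(n)]

def longest_chain(pts):
    """Longest sequence increasing in BOTH coordinates (a chain in the product DAG)."""
    pts = sorted(pts)
    best = [1] * len(pts)
    for i in range(len(pts)):
        for j in range(i):
            if pts[j][0] < pts[i][0] and pts[j][1] < pts[i][1]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)

def longest_antichain(pts):
    """With distinct coordinates, an antichain is a run that is
    increasing in height but DECREASING in age."""
    pts = sorted(pts)
    best = [1] * len(pts)
    for i in range(len(pts)):
        for j in range(i):
            if pts[j][0] < pts[i][0] and pts[j][1] > pts[i][1]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)

c, a = longest_chain(students), longest_antichain(students)
# Dilworth's lemma with t = sqrt(n): one of the two is at least ceil(sqrt(n)).
assert max(c, a) >= math.ceil(math.sqrt(n))
print(c, a)
```

Rerunning with other seeds never violates the assertion-- that is exactly the guarantee the lemma provides.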
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
2103_Tree_Coloring_Video.txt
PROFESSOR: Now we can use the unique path characterization of trees to very quickly figure out that every tree is 2-colorable. So we know that a tree is a graph with unique paths between every pair of vertices. And as a consequence, the chromatic number of a tree with two or more vertices is 2. The proof is just to show you how to color it. You clearly can't get by with one color if you've got any two adjacent vertices. The way to 2-color it is that you just choose an arbitrary vertex and call it the root-- the choice of root is arbitrary. And there's a unique path from the root to every vertex, using this unique path characterization. And so we're just going to color vertices by whether the path from the root is of odd or even length. If it's of even length, color it red. And if it's odd length, color it green. And so we wind up alternating red and green. And the fact is that of any two adjacent nodes, one is at an odd distance from the root and one is at an even distance, which is why this method of coloring is going to work. A general way to figure out whether or not a graph is 2-colorable, and how to do it, is you just start: pick an arbitrary vertex, color it red. And then color all the vertices adjacent to it green. And keep going in that way, coloring each vertex with a color different from an adjacent vertex that's already colored, until you get stuck. If you don't get stuck, it's 2-colorable. And if it's not 2-colorable, you're guaranteed to get stuck. So it's a very easy way to figure out if a graph is 2-colorable. Another characterization of 2-colorability in general is that a graph is 2-colorable providing that all the cycles that it has, if any, are of even length. Of course, a tree has no cycles, so that makes it 2-colorable for sure.
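The coloring procedure described here is just a breadth-first search: spread out from an arbitrary start vertex, always giving the opposite color, and report failure if you ever get stuck. A minimal Python sketch (the graphs are made-up examples):

```python
from collections import deque

def two_color(adj):
    """Try to 2-color a graph given as {vertex: set of neighbors}.
    Returns a color map, or None if the graph is not 2-colorable."""
    color = {}
    for start in adj:                        # handle disconnected graphs too
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # opposite color, like red/green
                    queue.append(v)
                elif color[v] == color[u]:
                    return None              # stuck: we've hit an odd cycle
    return color

# A small tree (trees always succeed):
tree = {1: {2, 3}, 2: {1, 4, 5}, 3: {1}, 4: {2}, 5: {2}}
print(two_color(tree))       # e.g. {1: 0, 2: 1, 3: 1, 4: 0, 5: 0}

# A triangle (odd cycle) fails:
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(two_color(triangle))   # None
```

On a tree, the colors produced are exactly the odd/even distances from whichever vertex BFS starts at-- the root in the proof above.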
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
182_Bogus_Induction_Video.txt
PROFESSOR: Understanding proofs includes the ability to spot mistakes in them. And as a test of that skill, and also your understanding of induction, let me see if I can put one over on you. I'm going to show you a bogus proof by induction. And I'm going to prove something that's patently absurd: namely, that all horses have the same color. Say they're all black. So, there's a bunch of black horses with maybe some highlighted brown manes. I'm going to prove this by induction. And for a start, there's no n mentioned in the theorem, which is common for various kinds of things that you need to prove by induction. So I have to rephrase it in terms of n. It's going to be by induction on n. The induction hypothesis is going to be that any set consisting of exactly n horses will all have the same color. Let's proceed to prove this. Now, I'm going to use the base case n equals 1, just because I don't want to distract you with suspicions that the base case n equals 0-- that is, no horses-- is cheating somehow. It would, in fact, be perfectly legitimate to start with n equals 0 and argue that all the horses in the empty set have the same color, because there's nothing in the empty set. However, let's not get into that. We'll start with n equals 1. And indeed, if you look at any set of one horse, it is the same color as itself. And in fact, I've proved the base case n equals 1. Let's keep going. Now, in the inductive step, I'm allowed to assume that n horses have the same color, where n is any number greater than or equal to 1. Now right here, students complain that that's not fair, because you're already assuming something false, and that's absurd. Well, yeah, it is absurd. But that can't be the problem. I'm just allowed to assume an induction hypothesis. All I have to do is prove that n implies n plus 1. Since it's absurd, there ought to be some problem with the proof. Well, let's watch and see if there's a problem with the proof.
So again, I can assume that any set of n horses have the same color. I have to prove that any set of n plus 1 horses have the same color. How will I do that? Well, there's a set of n plus 1 horses, and let's consider the first n of those horses. Now by induction hypothesis, the first n of them have the same color. Black, maybe. Also by induction hypothesis, the second set of n horses-- that is, all but the first horse-- have the same color. And what that tells me is that the first and the last horse have the same color as all of the horses in the middle. And therefore, in fact, they all have the same color. End of proof, QED. So, there had better be something wrong. And what's wrong? Well, what's wrong is that the proof that P of n implies P of n plus 1 is wrong. It looked good, but the proof that P of n implies P of n plus 1 has to work for all n greater than or equal to the base case. And if you look at this proof, it doesn't work in the base case. When n is 1, and you're trying to go from the base case to 2 and so on by implication, the proof breaks down. Because what happens with our argument in the case that we're trying to prove that P of n implies P of n plus 1, when n equals 1? Well, in that case, there aren't any middle horses. n plus 1 is 2, so there's a first horse-- and that's the first n horses. And then the second set of n horses is the other horse, and there are no middle horses that they both have a color in common with. So, the proof just broke there. But you might not have noticed, because that was the only place it was broken. This is a case where we were misled by ellipsis, by the way. Because I was drawing n plus 1 horses-- showing a lot of horses with dots in the middle-- it looked like there were some middle horses, but there weren't. And again, as I said, the point, though, is that the only fallacy in this proof was that it didn't work when n was 1.
But it certainly worked for implying that if all sets of two horses are the same color, then all sets of three horses are the same color. And again, it's a "false implies anything" kind of example. But even here, the proof was logically OK. But if it breaks in one place-- if there's one domino that's missing from the line when the one before it falls-- the rest of them stop falling, and the proof breaks down.
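The gap in the argument is easy to see concretely: the set of "middle horses" shared by the first n and the last n of n plus 1 horses is empty exactly when n equals 1. A tiny Python illustration:

```python
def middle_horses(n):
    """For a set of n+1 horses numbered 0..n, the horses common to
    'the first n' (0..n-1) and 'the last n' (1..n)."""
    first_n = set(range(0, n))
    last_n = set(range(1, n + 1))
    return first_n & last_n

print(middle_horses(1))  # set(): no middle horses -- the inductive step breaks here
print(middle_horses(4))  # {1, 2, 3}: the overlap the argument silently relies on
```

For every n greater than or equal to 2 the overlap is non-empty and the step is sound; the one missing case at n equals 1 is what brings the whole line of dominoes to a halt.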
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
135_Well_Ordering_Principle_3_Video.txt
PROFESSOR: So let's look at a last example of applying the well ordering principle, this time to something that we actually care about-- a theorem that really does require some-- The theorem is the following famous formula for the sum of a geometric series, or a geometric sum. So the numbers on the left are the powers of r starting at 1, which is r to the 0, followed by r, which is r to the 1, followed by r squared, up through the nth power of r. You add all those numbers up, and it turns out that there's a nice, simple, closed formula-- one that doesn't have those three dots in it-- that tells you exactly what the value of that sum is. As you can read in the formula, r to the n plus 1 minus 1 is the numerator, and r minus 1 is the denominator. And the claim is that this identity holds for all non-negative integers n and for all real numbers r that aren't 1, because I don't want the denominator to be 0. So how are we going to prove this? Well, I'm going to prove it by using the well ordering principle. Let's suppose that this identity didn't hold for some non-negative integer n. So we'll apply the well ordering principle, and we'll let m be the smallest number n where this equality fails-- it becomes an inequality. Now, what I know about m immediately is that if you look at this equality, when n is 0 the left-hand side degenerates to just r to the 0, or 1. The right-hand side, if you check it, is r minus 1 over r minus 1, which is also 1. So equality holds when n is 0, and that means that the least m for which equality doesn't hold has to be positive. So, what we know about the least number where this equality fails is that it's positive. And that means in particular, since it's the least one where it fails, if you go down one to m minus 1, the equality holds.
So we can assume that the sum of the first m powers of r, starting at r to the 0 and ending at r to the m minus 1, is equal to the formula where you plug in m minus 1 for n, and you get that formula on the right, which I'm not going to read to you. Well, we can simplify it a little bit. If you look at the exponent, r to the m minus 1 plus 1 is after all just r to the m. So repeating what I've got: the sum of those first powers of r up to r to the m minus 1, we can assume, is equal to the formula r to the m minus 1 divided by r minus 1-- because the identity fails first at m, so at m minus 1, one step down, it has to succeed. So now we take the obvious strategy. What I'm interested in is properties of the sum of the powers up to r to the m. Now, the left-hand side is the powers up to r to the m minus 1, so there's an obvious strategy for turning the left-hand side into what I'm interested in: namely, let's add r to the m to both sides. So the left-hand side becomes just the sum that I want, and the right-hand side becomes this messy thing, r to the m minus 1 over r minus 1, plus r to the m. Well, let's just simplify a little bit. Let's put r to the m over the denominator, r minus 1, which I do by multiplying it by r minus 1. And then it comes out to be r to the m plus 1 minus r to the m, over r minus 1. And I collect terms, and look what I got. I've got the formula r to the m plus 1 minus 1, over r minus 1, which means that the identity that I was originally claiming in fact holds at m, contradicting the assertion that it didn't hold at m. In other words, we've reached a contradiction assuming there was a least place where equality fails. That means there's no counterexample, and the equality holds for all non-negative integers n. So here's the general organization of a well ordering proof, which we've been using. Let's just summarize it into a kind of template for proving things. So what you have in mind is that there's some property, P of n, of non-negative integers.
And what you'd like to prove is that it holds for every non-negative integer. So for all n in the non-negative integers, P of n holds. And we're going to try to prove this by the well ordering principle, which means that we're going to define the set of numbers for which P doesn't hold-- that is, the set of counterexamples-- and call that C. So C is the set of non-negative integers n for which not P of n holds. Now, by the well ordering principle, there's got to be a minimum element, call it m, that's in C. And at this point the job-- by assuming that m is the smallest counterexample, we have to reach a contradiction somehow. Now, up to this second bullet it's the template, but the third bullet is where the real math starts, and there isn't any template anymore. How you reach a contradiction is by reasoning about properties of P of n, and there's no simple recipe. But the usual organization of the contradiction is one of two kinds. You find a counterexample that's smaller than m-- that is, you find a c in the set of counterexamples with c less than m-- and that would be a contradiction, because m is the smallest thing in C. Or you reach a contradiction by proving that P does hold for m, which means it's not a counterexample. And those are the two standard ways to organize a well ordering proof.
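The geometric-sum identity proved above is easy to sanity-check numerically before (or after) proving it. A small Python spot-check (floating point, so we allow for rounding error):

```python
def geometric_sum(r, n):
    # Left-hand side: 1 + r + r**2 + ... + r**n
    return sum(r**k for k in range(n + 1))

def closed_form(r, n):
    # Right-hand side: (r**(n+1) - 1) / (r - 1), valid for r != 1
    return (r**(n + 1) - 1) / (r - 1)

# Spot-check the identity for a few values of r and n.
for r in (2.0, 0.5, -3.0):
    for n in range(6):
        assert abs(geometric_sum(r, n) - closed_form(r, n)) < 1e-9
print("identity checked")
```

Of course a finite spot-check is no substitute for the well ordering argument-- it only checks finitely many n-- but it is a cheap way to catch an algebra slip in the formula.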
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
2117_Bipartite_Matching.txt
PROFESSOR: The stable matching problem that we just looked at is one example of a bipartite matching problem. So the setup with a bipartite matching problem is you've got a simple graph. And the vertices are split into two groups-- as in the stable matching problem, we can call them the girls and the boys, the G's and the B's. So the definition of a bipartite graph is a graph where there are some vertices called the left vertices and a disjoint set of vertices called the right vertices. And every vertex is either left or right. And edges only go between a left vertex and a right vertex. Now, in this case, the matching problem that we want to consider is that there is a specification that each girl is willing to be paired with certain boys, but not all of them. So we can specify that by adding edges, where, if this is the first girl on the list, and she is willing to be paired with the second boy and the last boy, that's what those two edges indicate. So edges are signaling compatibility constraints on matching up the girls and the boys. And what we're trying to accomplish is getting all of the girls matched with a unique boy-- match each girl to a unique compatible boy. So there's an example of a match, where there is one highlighted magenta edge out of each girl. And they go to different boys. So formally, we want a bijection from the girls to the boys that follows edges. Well, let's look at a case where I can't find a match. Suppose that that edge was missing. We used that edge in the match. But let's suppose it was not there. Let's get rid of it. And what we find now is that this last girl no longer can be matched to this second boy, which is what we previously had. So let's try to find some other match. And there isn't any.
And the reason is that if you look at this group of three girls on the left, and you look at all of the boys on the right that they are collectively compatible with-- that is, the boys that at least one of these three girls is willing to be paired with-- there are only two boys that have to be shared among three girls. And that is one example of what's called a bottleneck. So we have three girls, and collectively, they only like two boys. There just are not enough boys to go around for these girls. That proves that a match is not going to be possible. So more generally, if you have a set S of girls on the left and you look at the image of S under the edge relation-- that is, E of S, which is collectively the set of all of the boys that are compatible with one or more of the girls in S-- then whenever you have-- well, we previously just had an example where the size of S was 3 and the size of E of S was 2. And because 3 was greater than 2-- because S was bigger than E of S-- we were bottlenecked, and we couldn't possibly find a match. And more generally, the definition of a bottleneck is that if you have a set S where the size of S is greater than the size of the image of S, then that's called a bottleneck. And the first observation we can make, the bottleneck lemma, says that a bottleneck is a set S of girls without enough boys: if the size of S is greater than the size of E of S, that's a bottleneck, and when there is one, no match is possible, obviously. So this is one reason why there might not be a match: there is a bottleneck. Now, a rather deep theorem says that, conversely, if there are no bottlenecks, then in fact there is a match. This is known as Hall's theorem. It's not obvious, although we'll find an understandable proof of it. And that's what we're going to do in the next segment.
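On graphs this small, both the bottleneck condition and the existence of a match can be checked by brute force. Here is a Python sketch; the compatibility lists are hypothetical, not the ones drawn on the slide:

```python
from itertools import combinations, permutations

# Hypothetical compatibility graph: girl -> set of compatible boys.
compat = {
    "g1": {"b2", "b5"},
    "g2": {"b2", "b3"},
    "g3": {"b2", "b3"},
    "g4": {"b1", "b4"},
    "g5": {"b1", "b5"},
}

def find_bottleneck(compat):
    """Return a set S of girls with |S| > |E(S)|, or None if no bottleneck exists."""
    girls = list(compat)
    for k in range(1, len(girls) + 1):
        for S in combinations(girls, k):
            image = set().union(*(compat[g] for g in S))   # E(S)
            if len(S) > len(image):
                return set(S)
    return None

def find_match(compat):
    """Brute-force search for a match covering all girls (fine for tiny examples)."""
    girls = list(compat)
    boys = sorted(set().union(*compat.values()))
    for perm in permutations(boys, len(girls)):
        if all(b in compat[g] for g, b in zip(girls, perm)):
            return dict(zip(girls, perm))
    return None

print(find_bottleneck(compat))   # None: Hall's condition holds...
print(find_match(compat))        # ...so a match exists

# Delete one edge and a bottleneck appears, just as in the lecture:
compat2 = dict(compat, g1={"b2"})
print(find_bottleneck(compat2))  # {'g1', 'g2', 'g3'}: three girls, only two boys
```

The exponential subset search is only a sanity check for toy inputs; real matching algorithms avoid it, which is part of what makes Hall's theorem interesting rather than obvious.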
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
186_WOP_vs_Induction_Video_optional.txt
PROFESSOR: So we come to the part that a lot of students have been asking about, but which in fact is entirely optional-- so if you care to skip this little piece of video, you're welcome to. It's not going to appear on any exam or anything. But people have consistently asked how to choose which method of proof to use among ordinary induction, strong induction, or well-ordering. And the answer is that it's hard to tell them apart, because in an easy technical sense, they're really all equivalent. So let's look at them one by one. First of all, it's clear that ordinary induction is a special case of strong induction. In ordinary induction, you're allowed to assume only P of n. In strong induction, you can assume everything from P of 0 up to P of n to prove P of n plus 1. But you don't have to use all the extra assumptions. You could just use P of n, so that any ordinary induction can be seen as just a special case of a strong induction. It would be a little misleading to call it strong induction, but it is strong induction. So why bother with it? Well, the answer, basically, is that it's an expository difference. It helps your reader to know that the proof for n plus 1 is only going to depend on n, not on the k's that are less than n, as it typically would in a genuine strong induction proof. Second, there's some argument that an ordinary induction going from n to n plus 1 is more intuitive than strong induction that goes from anywhere less than or equal to n up to n plus 1. I'm not sure that I subscribe to that, but I've heard people make that claim. All right. There's another perspective, which is interesting and maybe surprising, which is, why not always use ordinary induction? Oh, wait a minute-- how do you replace strong induction with ordinary induction? Well, it's easy. Suppose that you've proved for all m, P of m, using strong induction with induction hypothesis P of m. What have you done? Well, it's the same base case whether you're using ordinary or strong.
But in strong induction, you would do an inductive step where you actually assumed not just P of n, but P of k for all k less than or equal to n. And then, using all those hypotheses about P of k, you prove P of n plus 1 in the strong induction. Well, how do you turn it into an ordinary induction? Just let Q of n be that assumption: that for all k less than or equal to n, P of k. And if you think about it for a moment, just revising the induction hypothesis to include that universal quantifier, "for all k less than or equal to n," means that the strong induction on P of k becomes an ordinary induction on Q of n. And with a trivial change-- decorating a bunch of occurrences of formulas with "for all"-- we have converted a strong induction into an ordinary induction. So we see that strong induction adds no power above and beyond ordinary induction. It just lets you omit a bunch of universal quantifiers that would otherwise have to be made explicit if you were going to do it by ordinary induction. Then why use strong induction? Precisely because it's cleaner: you don't have to write those "for all k less than or equal to n's" all over. And now we come to the final question, about the relation between the well-ordering principle and induction. Well, it's basically the same deal. You can easily rephrase an induction proof-- just transform its template-- to fit the template of a well-ordering proof, and vice versa. We're not going into the details of exactly how, because it's not important, but it is routine. It follows that the well-ordering principle is not adding any new power, or even a new perspective on the mathematics of any given proof. It's just a different way to organize and tell the same story. And it also means, conceptually-- which is nice-- that of these apparently different inference rules, strong induction, ordinary induction, and the well-ordering principle, there's really only one. And the others can be justified in terms of it and explained as variations of it.
So that's intellectually economical: we don't have a proliferation of different reasoning principles. Which brings us to the question of which one to use. And all I can say is that it's a matter of taste. The truth is that when I'm writing up proofs, I will often try different versions. I'll try it by ordinary induction, and I'll try it by well-ordering. And I'll read the two and decide which one seems to come out the more cleanly, and I'll go with that one. So there isn't any simple rule about which to choose. But in a certain sense, it really doesn't matter-- just pick one. The only exception to that, of course, is when, on an exam or in a similar setting, you're told to use one of these particular methods as a way to demonstrate that you understand it; then, of course, you can't pick and choose. So finally, we come to a pedagogical question about why it is that in 6.042 we taught the well-ordering principle first-- in fact, in the second lecture-- and are only now, at the end of the third week, getting to the induction principle, which is much more familiar, and which people argue they like better, at least most of them. Well, the answer is it's a pedagogical strategy. And it's one, in fact, on which the authors are not united. My view is that we're better off doing the well-ordering principle first. And the reason is that our impression from conversations with students, and surveys, and from exam performance shows that only about 20% of the students get induction, no matter how hard we try to explain and teach it. They report worrying that assuming P of n to prove P of n plus 1 is somehow circular. And it's certainly measurable that 20% or so of the class just can't reliably do proofs by induction. Now, this baffles the 80% to whom it's obvious and who know how to do it easily. And it baffles us instructors. We can't figure out what the problem is that those 20% have. And we've been trying to teach induction lots of different ways.
On the other hand, nobody has trouble believing the well-ordering principle and working with it. And they certainly don't have any harder time using it than they do using ordinary induction or strong induction. And this conceptual problem about whether it's safe and whether you really believe in it just doesn't come up with the well-ordering principle. Everybody agrees that it's obvious that a non-empty set of non-negative integers is going to have a least element. And so we chose to do well-ordering right away, because there's no overhead in explaining it. And it lets us get going on interesting proofs from the get-go, as opposed to waiting a while, spending a couple of lectures working through induction, and leaving that as the main, if not only, method that people have for proving things about non-negative integers.
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
1104_Structural_Induction_Video.txt
PROFESSOR: So whenever you define a recursive data type, implicit in the definition is a method for proving things about it, called structural induction. And the way structural induction works is that if you want to prove that every element in some recursively defined data type R has a particular property P, then you proceed by showing that every one of the base case elements of R has property P, and moreover, that if you apply a constructor to an element x, the result has property P whenever x has property P. That is, you can assume as a structural induction hypothesis P of x, and then you need to show that P of c of x holds. And this applies for each constructor c. Some constructors take more than one argument, but this is meant to illustrate the general pattern. Let's do an easy example first. This is one we've actually seen: we took this method of proof for granted, without highlighting it, when we argued that the set E that was recursively defined in the last presentation contained only even numbers. So remember, the definition of E was that 0 is in E. And we're going to be proving that x is even by induction. So we need to check the base case-- yes, 0 is even. And then we need to show, assuming the structural induction hypothesis that n is even, that when we apply the constructor n plus 2, the result is even-- well, obviously it is-- or if we apply the constructor minus n, that's also even. And it is as well. And that's why structural induction tells us that, in fact, every number in the set E is even. Now let's look at a somewhat more interesting example, which was the set M of matching right and left brackets. And what I want to prove by structural induction is that every string in M has the same number of left brackets and right brackets. I can restate this by defining EQ to be the set of strings with the same number of right and left brackets. And what I'm really trying to say is that M is a subset of EQ. All right.
Now, the way I'm going to prove this is by defining my induction hypothesis P of s to be that s is in EQ-- that is, s has an equal number of left and right brackets. Well, let's remember what the definition of M was. The base case of M was the empty string, with no brackets at all. And does the empty string satisfy P of s? Well, yeah. It has 0 right brackets and 0 left brackets, so it does have an equal number of left and right brackets. So we've established that the base case, P of the empty string, is true. Now we have to consider the constructor case. In the case of M, there's only one constructor: namely, if r and t are in M, then so is s, which you get by putting brackets around r and following it by t. Well, here's the argument. We're trying to prove that s has an equal number of left and right brackets, and we're allowed to assume that r does, and so does t. So let's look at the number of right brackets in s. Well, where do they come from? The right brackets in s consist of-- well, the first symbol in s is a left bracket, so that doesn't matter. Then there are the right brackets in r. And then there is a new right bracket that gets added. And then there are the right brackets in t. So what I can say is that the number of right brackets in s is simply the number in r, plus the number in t, plus one more, because the constructor threw in one more right bracket. By exactly the same reasoning, the number of left brackets in s is the number of left brackets in r, plus the number in t, plus 1. Now, because of hypothesis P of r, the numbers of right and left brackets in r are equal. And likewise, by the induction hypothesis P of t, the numbers of right and left brackets in t are equal. And so the right-hand sides of both of these equations are equal. And that means that the left-hand sides are equal. We've just proved that the number of right brackets in s and the number of left brackets in s are the same, so P of s is true. The constructor case is covered.
And we can conclude by structural induction that every s in M, the recursively defined set of strings of matched brackets, in fact has an equal number of left and right brackets, which means that M is a subset of EQ, as claimed. Well, those were pretty easy structural inductions. And as with regular induction proofs, when you get the right induction hypothesis, the proofs tend to be easy. And we are going to work on an interesting example having to do with the F18 functions. One of the reasons why the F18s are what's considered in first term calculus is that if you look at all of those functions-- remember, you got them by taking constant functions, the identity function, and the function sine x, and then you could combine them in various ways by adding, multiplying, exponentiating, composing, and taking inverses-- it turns out that we didn't need to add a constructor for taking the derivative. Because you can prove by structural induction that the F18s are closed under taking derivatives. And that makes a lovely class problem.
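The recursive definition of M translates directly into code, and the property just proved by structural induction can be checked on every member produced by a few rounds of applying the constructor. A small Python sketch:

```python
def generate_M(rounds=3):
    """Build elements of the matched-bracket set M by repeatedly
    applying the constructor s = '[' + r + ']' + t to known members."""
    M = {""}                                  # base case: the empty string
    for _ in range(rounds):
        # One application of the constructor to every pair already in M.
        M |= {"[" + r + "]" + t for r in M for t in M}
    return M

strings = generate_M()
# The property proved by structural induction: equal numbers of '[' and ']'.
assert all(s.count("[") == s.count("]") for s in strings)
print(sorted(strings, key=len)[:5])  # a few of the shortest members
```

Checking finitely many members is not a proof, of course-- the structural induction is what covers every element M will ever contain-- but it's a nice way to see the invariant propagate through the constructor.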
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
326_Asymptotic_Blunders.txt
ALBERT R. MEYER: Let's take a quick look at some blunders that people regularly make in dealing with asymptotic notation, in particular with big O notation, which tends to confuse people. So the most important thing to remember is that this notation, something equals O of something else-- 1/x equals O of 1, say-- is actually to be understood as just a not-so-attractive, misleading notation for a binary relation between two functions. This is supposed to be a function there, and this is supposed to be a function there. And this is saying that there's a relation between the growth rates of these two functions. O of f is not a quantity, and you mustn't treat it as such. And the equality sign, of course, is not a real equality. Once upon a time, we tried to get the equality replaced by an epsilon-- that is, a membership symbol-- which makes much better sense. But there was a sense that this notation was so deeply embedded in the mathematical culture-- multiple mathematical communities-- that there was no way we were going to change it. In particular, one confusion, where you think that that equality sign means some kind of equality, is to write O of g equals f instead of f equals O of g-- after all, if f equals O of g, then by symmetry O of g ought to equal f. Don't write that. The reason is that it's a recipe for confusion, because look at this. I know that x is O of x trivially, which would suggest that O of x is equal to x, if you believe in symmetry and you think of O of x as being a quantity. Well, remember, though, that 2x is also equal to O of x by definition of O. So combining 2x equals O of x with O of x equals x, I get that 2x is equal to this thing, which is equal to x. I conclude that 2x is equal to x, which is absurd. So that's nonsense. It's the kind of trouble that you can get into if you start thinking of this equality as meaning equality between two quantities, as opposed to just being part of the name of a relation.
Another mistake that people make-- less serious, but sloppy-- is to think that big O corresponds to a lower bound, so that people will say things like f is at least O of n squared. Well, again, at least O of n squared is starting to treat O of n squared like a quantity. You could say that f is equal to O of n squared, but that means that n squared is an upper bound on f to within a constant factor after a certain point. If you want to say intuitively that n squared is a lower bound on f, then all you have to do is say that n squared is O of f. And that is a proper use of O for getting a lower bound on a function: you say that the lower bound is O of the function. Another example of the kind of nonsense that you see-- this is a stretch, but let's look at it as a reminder of things not to do. I'm going to prove to you that the sum from i equals 1 to n of i-- that is, 1 plus 2 plus 3 up to n-- is O of n. Now, of course, it's not. We know that the sum of the first n integers is n times n plus 1 over 2, which is O of n squared-- theta of n squared, actually. So I'm going to prove something false. Watch carefully how I do it. Here's the false proof. Let's, first of all, notice that any constant is O of 1. So 0 is O of 1, 1 is O of 1, 2 is O of 1, and so on. Any constant function is O of the constant function 1. OK, that's true. So that means that each i in this sum-- i is a number, so it might be 1, it might be 2, it might be 3, it might be 50-- whatever it is, it's O of 1. And that means that I could think of this sum from i equals 1 to n as O of 1 plus O of 1 plus O of 1. And that's, of course, n times O of 1, which is O of n. Now, there are all kinds of weird arithmetic rules being used here, none of which are justified. But it's just a heads up. You do see stuff like this from inexperienced students. And I hope that you won't fall into this kind of a sloppy trap. O of something is not a quantity. It's part of the name of a relation.
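A quick numerical sanity check-- my own addition, not from the lecture-- makes the falseness of the claim concrete. If the sum 1 through n really were O of n, the ratio of the sum to n would stay below some fixed constant; instead it grows without bound, while the ratio to n squared settles near 1/2, consistent with theta of n squared:

```python
def triangle_sum(n):
    """Sum of 1 + 2 + ... + n, via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

sizes = [10, 100, 1000, 10000]

# If the sum were O(n), these ratios would be bounded by a constant;
# in fact they grow linearly with n.
ratios_to_n = [triangle_sum(n) / n for n in sizes]

# The ratio to n**2 approaches 1/2, matching theta(n**2).
ratios_to_n2 = [triangle_sum(n) / n**2 for n in sizes]
```

Running this, ratios_to_n is 5.5, 50.5, 500.5, 5000.5-- no constant bounds it-- while ratios_to_n2 hovers around 0.5.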
133_Well_Ordering_Principle_2_Video.txt
PROFESSOR: So let's look at two examples of using the well-ordering principle. One of them is pretty obvious, and the other one is not hard but a little bit more interesting. So what we're going to prove is that every integer greater than one is a product of primes. So remember, a prime is an integer greater than 1 that is only divisible by itself and the number 1. It can't be expressed as the product of other numbers greater than 1. So the way we're going to prove this is by contradiction, and we're going to begin by supposing that there were some numbers that were non-products of primes. OK. That is to say, the set of non-products is non-empty. So applying the well-ordering principle to this non-empty set of non-products, there's got to be a least one, m. So m is a number greater than 1 that is not a product of primes. Now, by convention, if m itself was a prime, it's considered to be a product of one prime, so we know that m is not a prime. Now look. M is not a prime-- if it was a prime, it would be a product of just itself-- so that means that it must be a product of two numbers, call them j and k, where j and k are greater than 1 and less than m. That's what it means to be a non-prime-- it's a product of j and k. Well, j and k are less than m, so that means that they must be products of primes, because they're less than m and greater than 1, and m is the smallest such number that's not a product of primes. So we can assume that j is equal to some product of primes, say, p1 through p94, and k is some other product of primes, q1 through q13, so you can see where this is going. Now what we have is that m, which is jk, is simply the product of those p's followed by the product of those q's and is, in fact, a product of primes, which is a contradiction. So what did we assume that led to the contradiction?
We assumed that there were some counter-examples, and there must not be any, and no counter-examples means that, in fact, every single integer greater than 1 is indeed a product of primes, as claimed. Let's start looking at a slightly more interesting example, using the well-ordering principle to reason about postage. So suppose that we have a bunch of $0.05 stamps and $0.03 stamps, and what I want to analyze is what amounts of postage can you make out of $0.05 stamps and $0.03 stamps. So I'm going to introduce a technical definition for convenience. Let's say that a number n is postal if I can make n plus $0.08 postage from $0.03 and $0.05 stamps. So this is what I'm going to prove. I claim that every number is postal. In other words, I can make every amount of postage from $0.08 up. I'm going to prove this by applying the well-ordering principle, and as usual with the well-ordering principle, we'll begin by supposing that there was a number that wasn't postal. That would be a counter-example, so if there's any number that's not postal, then there's a least one, m, by the well-ordering principle, because the set of counter-examples is non-empty. So what we know, in other words, is that this least m that's not postal has the property that it's not postal, and any number less than it is postal. Let's see what we can figure out about m. First of all, m is not 0-- 0 is postal, because 0 plus $0.08 can be made with a $0.03 stamp and a $0.05 stamp, and m is supposed to be not postal. As a matter of fact, by the same reasoning, m is not 1, because you can make 1 plus $0.08 with three $0.03 stamps, and m is not 2, because you can make 2 plus $0.08-- $0.10-- using two $0.05 stamps. So we've just figured out that this least counter-example has to be greater than or equal to 3, because 0, 1, and 2 are not counter-examples.
So we've got that m is greater than or equal to 3, the least non-postal number. So if I look at m minus 3, that's a number that's greater than or equal to 0, and it's less than m, so it's postal, because m is the least non-postal one. So, in other words, out of $0.03 and $0.05 stamps I can make m minus 3 plus $0.08. But, look, if I can make m minus 3 plus $0.08, then obviously m is postal also, because I just add a $0.03 stamp to that m minus 3 amount, and I wind up with m plus $0.08, which says that m is postal, and that is a contradiction. So assuming that there was a least non-postal number, I reached a contradiction, and therefore there is no non-postal number. Every number is postal-- 0 plus 8 is postal, 1 plus 8 is postal, 2 plus 8 is postal. Every amount greater than or equal to $0.08 can be made out of $0.03 and $0.05 stamps.
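The well-ordering argument above is actually constructive, and it's easy to turn into code. This Python sketch-- not from the lecture; the function name is mine-- solves the three base amounts of 8, 9, and 10 cents directly, and peels off one $0.03 stamp at a time for anything larger, mirroring the m minus 3 step of the proof:

```python
def stamps(cents):
    """Return (threes, fives) with 3*threes + 5*fives == cents.

    Works for any cents >= 8, following the well-ordering argument:
    amounts 8, 9, 10 are solved directly, and any larger amount
    reduces by a single 3-cent stamp.
    """
    base = {8: (1, 1), 9: (3, 0), 10: (0, 2)}
    if cents < 8:
        raise ValueError("no guarantee below 8 cents")
    extra_threes = 0
    while cents > 10:          # reduce to a base case, one 3 at a time
        cents -= 3
        extra_threes += 1
    threes, fives = base[cents]
    return threes + extra_threes, fives
```

For example, stamps(11) peels off one 3-cent stamp, lands on the 8-cent base case, and returns two 3s and one 5.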
193_Derived_Variables_Video.txt
The technique of derived variables comes up in analyzing state machines. So let's just take a quick look at it together. So a derived variable is simply a function on the states of a state machine that assigns some value to the states. So it's just that kind of a function mapping. If the values happen to be, say, the non-negative integers, it's called non-negative integer valued, but it could be real valued, complex valued, and even take on other odd kinds of values, not necessarily numerical. No pun there-- not odd numbers, but unusual values. So let's look at the example of the robot on the grid. The states were pairs of non-negative integers giving the coordinates of where the robot was. And one of the derived variables that we found was real useful was the sum value, sigma, of a state, which is defined to be x plus y. And this would be a non-negative integer valued derived variable. So the word "derived" comes in because we're making it up. It's not part of the specification of the state machine or part of the program that defines the state machine. So in the robot example, the actual states were composed of the two coordinates x and y, but the derived variable that we made up was their sum, sigma. Another useful derived variable for that robot example was the parity of sigma, whether the sum was even or odd. So this parity, pi, is a 0, 1 valued variable, which takes the value 0 if the sum is even and 1 if the sum is odd. In the case of fast exponentiation, we looked at the actual variable z, which was part of the invariant and a crucial part of the program. And what we noticed about z was that z was a strictly decreasing, natural number valued variable. As a matter of fact, we noticed that it halved at each step. But its values were non-negative integers, and it's strictly decreasing at every step. So that implies by the Well Ordering Principle that it will take a minimum value.
And what we know about the minimum value of a strictly decreasing variable is that when it's reached, the algorithm is stuck, because once z has reached its minimum value, if the machine took another step, then z would get smaller. So it means that the algorithm has to terminate. So this gives you a general methodology for proving termination-- finding a non-negative integer valued, strictly decreasing variable guarantees that the program stops. As a matter of fact, we can sometimes say how long it will take for the program to stop. As we saw with fast exponentiation, it took not z steps, which was the obvious bound, but in fact log of z, because z not only went down at every step, it got halved at every step. So in general, a strictly decreasing variable is one-- as shown here-- that at every step of the state machine, at each transition, gets strictly smaller. A related idea is a weakly decreasing variable. These are not necessarily useful for proving termination, but they are often useful, as you'll see as we progress through the term-- examples where it helps you analyze the behavior of the algorithm. So a weakly decreasing variable is one which goes down or stays constant. It never gets larger. If we look at the example of sigma, the sum of the coordinates, that's up and down all over the place. It's neither increasing nor decreasing. The other extreme is the parity variable pi, which was 0 or 1 according to whether the sum of the coordinates was even or odd. And pi is a constant, and that means that it's both weakly increasing and weakly decreasing, in the degenerate sense that weakly increasing is allowed to stay the same. In fact, something is weakly increasing and weakly decreasing if and only if it's a constant. By the way, we used to call weakly decreasing variables "non-increasing," which is the standard terminology in the field. In calculus, you talk about non-increasing functions.
And we just found that it caused a lot of confusion, because you have to remember that non-increasing is not the same as not increasing. So there's an example of a function that is not increasing, but it's certainly not non-increasing. And if that didn't register, I'll let you think about it. By the way, this method of proving termination by finding a strictly decreasing, natural number valued variable generalizes straightforwardly to a variable which takes on values from a well-ordered set of real numbers. Remember, one of the definitions of a well-ordered set of real numbers is that it's a set of numbers in which it's impossible to find an infinite decreasing sequence of values-- w0 greater than w1 greater than w2, going on forever. If that can't happen, then the set is called well ordered. Of course, the non-negative integers are the most obvious basic case, but there are a bunch of others described in the notes. And in general, the termination principle is that if you can find a strictly decreasing derived variable whose values always come from a well-ordered set, that also is a way to prove termination. That's going to guarantee termination for the same reason: the variable will have to take a minimum value. That's the other definition of well ordered. And when it does, the machine can't move anymore.
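The termination method is easy to see in code. Below is a Python sketch of fast exponentiation-- one common iterative version, not necessarily the exact program from lecture-- with an assertion checking at every transition that the derived variable z is a strictly decreasing natural number, which is what guarantees the loop stops, in about log of z steps:

```python
def fast_exp(x, n):
    """Compute x**n by repeated squaring, counting loop iterations."""
    y, z, r = x, n, 1              # invariant: r * y**z == x**n
    steps = 0
    while z > 0:
        z_before = z
        if z % 2 == 1:
            r = r * y
        y = y * y
        z = z // 2                  # the derived variable z is halved...
        assert 0 <= z < z_before    # ...so it strictly decreases
        steps += 1
    return r, steps
```

Here z plays exactly the role described above: it's natural number valued and halves at each transition, so after roughly log base 2 of n transitions it hits its minimum, 0, and the machine can't move anymore.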
427_Monty_Hall_Problem_Video.txt
Now, conditional probability will let us explain a lot of the confused arguments that people brought up about Monty Hall. And we'll see that it is a little bit confusing, and there are some correct sounding arguments that give you the wrong answer. So let's go back and look at our Monty Hall tree that allowed us to derive the sample space and probability space for the whole process of the prize being placed, the contestant picking a door, and Carol opening a door. Now, this tree was way more complicated than we needed if all we were trying to do was figure out the probability of winning if you switch. But having the tree will allow us to discuss a whole bunch of other events and their probabilities, which will get us a grip on some of the arguments that gave the wrong answer. So let's look at the event, first of all, that the goat is at 2. Now, this is the branch where the prize is at 2, and so in all the other branches the goat is at 2, which means that we have these eight of the 12 outcomes in the event-- goat is at 2. Now, let's also look at the event that the prize is at 1. That's just this branch of the tree, OK? So one of the arguments is that when the contestant is at the point where they've seen the open door and they're trying to decide whether to stick or switch, they know that the goat is at door 2-- say, without loss of generality, that that was the door Carol opened and that they got to look behind. And so we want to ask, supposing that he picked 1, what's the probability that the prize is at 1 given that the goat is at 2? That means that if you're at door 1, then you should stick if that probability is high, and otherwise you shouldn't. So we can look at this event, the prize at 1 within the goat at 2, and what we can see is that it's taking up exactly half of the outcomes for goat at 2, and the same kinds of outcomes-- red ones and green ones.
The red ones are worth a 1/18 and the green ones are worth a 1/9 in probability, and that implies that the probability that the prize is at 1 given that the goat is at 2 is 1/2. It really is. And that's the argument that people were making. They said, look, when the contestant sees that the goat is at door 2, and they're trying to decide whether the prize is at door 1 or at the other door, it's equally likely. And so it doesn't matter whether they stick or switch. That's a correct argument, but it's not calculating the probability of the stick strategy winning. Why? Well, because there's more information available than goat is at 2. The contestant not only knows that the goat is at 2 while trying to figure out the probability that the prize is at 1, but the contestant also knows what door he picked. So let's suppose that the contestant did pick door 1 and learned that the goat was at door 2. That's a different event. The blue marks are at the places where the contestant picks door 1-- here's one, here's one, and here's one. This pick of 1 leads to just one outcome, this one leads to just one outcome, but this choice of 1 splits into two outcomes. And so when we look at the event that both the goat is at 2 and the contestant picked 1, which is what the contestant really knows when they get to see that there's a goat at door 2, we wind up with the overlap of just three outcomes: two outcomes that have probability 1/18 and one outcome that has probability 1/9. It's just those three. And the result is that the probability that the prize is at 1 given that you picked 1 and the goat is at 2-- so this is the event, goat at 2 and picked 1, these three outcomes; the prize is at 1 is these two outcomes, which are each worth a 1/18, and this is an outcome that's worth a 1/9-- so the prize at 1 outcomes amount to 1/2 of the total probability of this event, goat at 2 and picked 1.
So, again, the probability that the prize is at 1 given that the contestant picked 1 and saw the goat at 2 is a 1/2 also. That's confusing. So it seems as though the contestant may as well stick, because at the point that he has to decide whether to stick or switch, he sees where the goat is and he knows what door he's picked, and it's 50-50 whether he should stick or switch. The probability that the prize is at door 1, which he picked, is a 1/2, so it really doesn't matter if he stays there or if he decides to switch to the unopened door. But wait a minute, that's not right, because the contestant not only knows what door he picked, not only knows that there's a goat behind a given door that Carol has opened, but he knows that Carol has opened that door. That's how he got to know that the goat was there. So let's go back and look at the tree. Basically, the previous two arguments are conditioning on the wrong events. It's a typical mistake, and one that you really have to watch out for. So if you use the correct event, what we're looking at is that the contestant knows that they've picked door 1-- the outcomes of picked door 1 are marked here. In addition, the contestant will get to know, for example, in a play of the game, that Carol has opened door 2. Carol opening door 2 is quite a different event from the goat being at 2. This is a picture of the outcomes in Carol opening door 2, and we're interested in the intersection of them. That is, just this guy that's in both and this guy that's in both. There they are. And so what we can do is identify that the event that you picked 1 and that Carol opened door 2 consists simply of two outcomes-- one worth a 1/18 and one worth a 1/9. Now, of these two outcomes, which one has the prize at 1? Well, only that one. Remember, the first component here is where the prize is. And so the prize at 1 event, within the event picked 1 and opened 2, is just this red outcome.
Now, the red outcome has probability 1/18, and the green outcome has a probability that's twice as much. So that means that, relative to this event, the probability that the prize is at 1 given that you picked 1 and opened 2 is actually 1/18 over 1/18 plus 1/9, or 1/3. So given that you picked 1 and you get to see what Carol did, the probability that the prize is at the door you picked is only 1/3, which means that if you stick you only have a 1/3 chance of winning. You should switch. And if you do, you'll have a 2/3 probability of winning. So when we finally condition on everything that we know-- the contestant knows what door he picked and what door Carol opened-- then we correctly discover, as we deduced previously, that the probability that switching wins is 2/3. So we're not trying to rederive the fact that the probability that switching wins is 2/3. We're trying to illustrate a very basic blunder that you have to watch out for: when you're trying to reason about some situation and you condition on some event that you think summarizes what's going on, if you don't get the conditioning event right, you're going to get the wrong answer. So it's easy to see how many people got confused, and, in fact, finding the right event can be tricky. When in doubt, the 4 step method of constructing the tree-- where you're not even thinking about conditional probabilities but just examining the individual outcomes-- is a good fallback to avoid these kinds of confusing situations.
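All three conditional probabilities above can be checked mechanically by rebuilding the 12-outcome tree in code. This Python sketch is my own reconstruction of the lecture's tree, using exact fractions: outcomes where Carol has a choice of two goat doors get probability 1/18, and outcomes where she has no choice get 1/9:

```python
from fractions import Fraction

# Outcomes are (prize, pick, opened); Carol opens a goat door different
# from the contestant's pick, choosing uniformly when she has two options.
outcomes = {}
for prize in (1, 2, 3):
    for pick in (1, 2, 3):
        options = [d for d in (1, 2, 3) if d != pick and d != prize]
        for opened in options:
            outcomes[(prize, pick, opened)] = (
                Fraction(1, 3) * Fraction(1, 3) * Fraction(1, len(options)))

def prob(event):
    return sum(p for o, p in outcomes.items() if event(o))

def cond(a, b):  # Pr[a | b]
    return prob(lambda o: a(o) and b(o)) / prob(b)

prize_at_1 = lambda o: o[0] == 1
goat_at_2  = lambda o: o[0] != 2
picked_1   = lambda o: o[1] == 1
opened_2   = lambda o: o[2] == 2
```

Conditioning only on goat at 2, or on goat at 2 and picked 1, gives 1/2 in both cases; conditioning on what the contestant actually knows-- picked 1 and Carol opened 2-- gives 1/3, so switching wins with probability 2/3.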
351_The_Pigeonhole_Principle_Video.txt
ALBERT MEYER: The pigeonhole principle is a counting principle. It's so obvious that you may not have noticed that you're using it. In simple form, it says that if there are more pigeons than pigeonholes, then you have to have at least two pigeons in the same hole. OK. We'll get some mileage out of that shortly. But let's remember that this is actually just an informal way of saying something that we've seen formally already. One of the mapping rules is that if you have a total injection from a set A to a set B, that implies that the size of A is less than or equal to the size of B. And taking the contrapositive of that, it means that if the size of A is greater than the size of B, then no total injection from A to B is possible. No total injection means that there's no relation that has an arrow out of everything in A and at most one arrow into each element of B. If everything in A has an arrow out of it, there have to be at least two arrows-- two pigeons-- coming to the same pigeonhole in B. So we know this rule already. And the only thing that's surprising about it is how you make use of it. We're not going to make elaborate uses of it in this little video. You can read in the text about some amusing applications, like proving that there have to be three people in the Boston area with more than 10,000 hairs on their heads but exactly the same number of hairs, or that a set of 90 twenty-five-digit numbers has to have two different subsets with the same sum. But we will take a much more modest application of the pigeonhole principle. Namely, if I have a set of five cards, then I have to have at least two cards with the same suit. Why? Well, there are four suits-- spades, hearts, diamonds, clubs-- indicated here. And if you have five cards, there are more pigeons (cards) than holes (suits). So if you're going to assign a pigeon to a hole, again, the pigeons are going to have to crowd up.
There are going to have to be at least two pigeons in the same hole-- at least two cards of the same suit, maybe more. OK. A slight generalization. Suppose I have 10 cards. How many cards must I have of the same suit? What number of cards of the same suit am I guaranteed to have, no matter what the 10 cards are? Well, now, if I have the four slots and I'm trying to distribute 10 cards, is it possible that I have less than three cards in every hole? No, because if I have only two cards in every hole, then I have at most 8 cards, and I've got 10 to distribute in the four slots. I have to bunch them up and have at least three cards of the same suit. You could check that I needn't have any more, of course. So the reasoning here is that the number of cards with the same suit is going to be at least what you get by dividing up the 10 cards that you have by the four slots, because at least one of the slots has to have the average number of cards, namely 10 over 4-- they can't all be below average. And of course, since there are an integer number of cards, you can round this up-- remember, these corner braces mean round up to the nearest integer. So 10 divided by 4 rounded up is 3, and that's a lower bound on the number of cards that have to bunch up in one slot. More generally, if I have n pigeons, and I'm going to be assigning each pigeon to a hole, and if I have H holes, then some hole has to have at least n divided by H, rounded up, pigeons. Again, n divided by H can be understood as the average number of pigeons per hole. And the pigeonhole principle can be formulated as saying at least one hole has to have greater than or equal to the average number. And that is the generalized pigeonhole principle.
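The generalized bound is small enough to check exhaustively in code. This Python sketch-- my own, not from the lecture-- computes the ceiling bound and verifies it against every possible assignment of 5 cards to 4 suits, then shows the 10-card bound of 3 is achieved:

```python
import math
from collections import Counter
from itertools import product

def pigeonhole_bound(pigeons, holes):
    """Some hole must hold at least ceil(pigeons / holes) pigeons."""
    return math.ceil(pigeons / holes)

def max_occupancy(assignment):
    """Largest number of pigeons sharing a single hole."""
    return max(Counter(assignment).values())

# Exhaustively verify the bound for every way to deal 5 cards over
# 4 suits (4**5 = 1024 assignments).
for hand in product(range(4), repeat=5):
    assert max_occupancy(hand) >= pigeonhole_bound(5, 4)

# The bound is tight: a 10-card hand with suit counts 3, 3, 2, 2
# achieves exactly ceil(10/4) = 3, so no larger guarantee is possible.
tight_hand = (0, 0, 0, 1, 1, 1, 2, 2, 3, 3)
assert max_occupancy(tight_hand) == pigeonhole_bound(10, 4) == 3
```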
423_Law_of_Total_Probability_Video.txt
The law of total probability is another probability law that gives you a way to reason about cases, which we've seen is a fundamental technique for dealing with all sorts of problems. So the point of cases, of course, is that you can prove a complicated thing by breaking it up into, if you're lucky, easy sub-cases. So here's the way to understand the law of total probability abstractly. It starts off with set-theoretic reasoning. Suppose that I have a set A embedded in some larger sample space S. So A is really an event, but we're just going to think of it as a set. Now suppose that I have three sets, B1, B2, and B3, that partition the sample space. That is, B1, B2, and B3 don't overlap-- they're disjoint-- and everything is in one of those three sets. So there's a picture of B1, B2, and B3 cutting up the whole sample space S, represented by the square or rectangle. Now of course, these three sets that cut up the whole space willy-nilly cut up the set A into three pieces. The first piece is the points in A that are in B1. The second piece is the points in A that are in B2. And the third is the points in A that are in B3. So we have a basic set-theoretic identity that says that as long as B1, B2, and B3 have the property that their union is the whole universe, and they are pairwise disjoint, then any set A is equal to the union of the part of A that's in B1, the part of A that's in B2, and the part of A that's in B3. And this is a disjoint union, because the B's don't overlap. That means that if I was talking about cardinality, I could add them up. But in terms of probability, I can apply the sum rule for probabilities and discover that the probability of A is simply the sum of the probabilities of B1 intersection A, B2 intersection A, and B3 intersection A.
Now the most useful form of the law of total probability is when you replace each term, like the probability of B1 intersection A, by a conditional probability using the product rule-- so let's replace it by the probability of A given B1 times the probability of B1. That's another formula for the probability of B1 intersection A. And if I do that with the rest of them, I now have the law of total probability stated in the usual way, in terms of conditional probabilities, where it's most useful. Now I did it for three sets, but it obviously works for any finite number of sets. As a matter of fact, it works fine for any countable partition. If I have a partition of the sample space S into B0, B1, and so on-- a partition with a countable number of blocks-- then it's still the case that the probability of A is equal, by the sum rule, to the sum of the probabilities of these disjoint pieces, the parts of A that are in each of the different blocks of the partition. And reformulating that with conditional probabilities, I get the rule that the probability of A is the sum over all possible i of the probability of A given Bi times the probability of Bi. And that basic rule is one that we're going to get a lot of mileage out of when we turn, in the next segment, to analyzing and understanding the results of tests that may be unreliable.
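Here's a small Python check of the law on a concrete sample space-- my own example, not the lecture's: two fair dice, with the event "the sum is 7" decomposed by cases on the first roll:

```python
from fractions import Fraction

# Sample space: two rolls of a fair die, each outcome probability 1/36.
space = {(i, j): Fraction(1, 36)
         for i in range(1, 7) for j in range(1, 7)}

def prob(event):
    return sum(p for o, p in space.items() if event(o))

def cond(a, b):  # Pr[a | b]
    return prob(lambda o: a(o) and b(o)) / prob(b)

A = lambda o: o[0] + o[1] == 7                               # event to analyze
partition = [lambda o, k=k: o[0] == k for k in range(1, 7)]  # cases on first roll

# Law of total probability: Pr[A] = sum over i of Pr[A | B_i] * Pr[B_i]
total = sum(cond(A, B) * prob(B) for B in partition)
```

Each conditional here is easy-- given any first roll, exactly one second roll makes the sum 7-- and summing the cases recovers the probability of A exactly.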
217_Prime_Factorization_Video.txt
PROFESSOR: Now we come to a more serious application of the fact that the GCD is a linear combination. We're going to use it to prove the prime factorization theorem, which we've talked about earlier. This is the unique prime factorization theorem. So let's begin by looking at a technical property of primes, which is familiar, but we're going to need to prove it. If you believe in prime factorization, then this Lemma-- which says that if p divides a product, it divides one or the other of the components of the product-- is an immediate consequence of the prime factorization theorem. But we mustn't prove it that way, because we're trying to use this to prove prime factorization. So how can I prove, based on the facts that we know about GCDs, without appealing to prime factorization, that if p is a prime and p divides a product, then it divides one of the components of the product, either the multiplier or the multiplicand? OK, well, here's how to prove that. Suppose that p divides ab. If it divides a, I'm done, so we may as well assume that it doesn't divide a. Now, since the only positive divisors of p are p and 1, if p doesn't divide a, the GCD of a and p is 1. All right, now comes the linear combination trick. Given that the GCD of p and a is 1, that means that I have a linear combination of a and p that's equal to 1-- sa plus tp is equal to 1, for some coefficients s and t. Cool-- multiply everything by b on the right. So that means that sab plus tpb is equal to 1 times b. But look at what we have now. The first term on the left is something times ab, and p divides ab, so that first term is divisible by p. The second term explicitly has a p in it, so it's certainly divisible by p.
So the left hand side is a linear combination of multiples of p, and therefore is itself a multiple of p-- which means the right hand side is a multiple of p, and the right hand side is b. So sure enough, p divides b. We're done-- a very elegant little proof that follows immediately from the fact that you can express the GCD of two numbers as a linear combination of those numbers. Now this is the key technical Lemma that we need to prove unique factorization. A corollary of this, which I'm actually going to need, is that if p divides a product of more than two things-- a product of a lot of things-- it has to divide at least one of them. And this you could prove by induction, with the base case being that it works for m equals 2. But it's not very interesting, and we're going to take it for granted: if p divides a product of any size, it divides one of the components of the product. All right, now we're ready to prove what's called the fundamental theorem of arithmetic, which says that every integer greater than one factors uniquely into a weakly decreasing sequence of primes. Now the statement about weakly decreasing is a little bit technical and unexpected. What we want to say is that a number factors into the same set of primes. Well, that's not quite right, because the set of primes doesn't take into account how many times each prime occurs. You could try to make a statement that every number is uniquely a product of a certain number of copies of each prime. But a slick way to say that is simply: take all the prime factors, including multiple occurrences of a prime, and line them up in weakly decreasing order. When you do that, the sequence is unique. This fundamental theorem of arithmetic is also called the prime factorization theorem. And here's what it says when we spell it out, without using the words weakly decreasing.
It says that every integer, n, greater than 1 has a unique factorization into primes-- namely, it can be expressed as a product p 1 through p k equal to n, with p 1 greater than or equal to p 2, greater than or equal to each successive prime in the sequence, with the smallest one last. Let's do an example. So there's a number that was not chosen by accident, because I worked out what the factorization was. And it factors into the following weakly decreasing sequence. You start with the prime 53, followed by three occurrences of 37, two 11s, a 7 and three 3s. And the point is that if you try to express this ugly number as a weakly decreasing sequence of primes, you're always going to get exactly this sequence-- it's the only way to do it. All right, how are we going to prove that? Well, let's suppose that it wasn't true. Suppose that there was some number that could be factored in two different ways. Well, by the well-ordering principle, there's at least one. So we're talking about numbers that are greater than 1, so there's a least number greater than 1 that can be factored in two different ways. Suppose that it's n. So what I have is that n is a product p 1 through p k. And it's equal to another product, q 1 through q m, where the p's and the q's are all prime. And these two weakly decreasing sequences are not the same. They differ somehow. So we can assume that the p's are listed in weakly decreasing order, and the q's are likewise listed in weakly decreasing order. Now the first observation-- suppose that q 1 is equal to p 1. Well that's not really possible, because if q 1 is equal to p 1, then I could cancel the p 1 from both sides, and I would get that p 2 through p k is equal to q 2 through q m, and these would still be different. Since they were different, and I took the same thing from their beginning, I'm left with a smaller number that does not have unique factorization, contradicting the minimality of n. 
So in short, it's not possible for q 1 to equal p 1. So one of them has to be greater. We may as well assume that q 1 is bigger than p 1. So q 1 is bigger than p 1, and p 1 is greater than or equal to all the other p's, so in fact, q 1 is bigger than every one of the p's. Well, that's going to reach a contradiction because of the corollary. What I know is that q 1 divides n, and n is a product of the p's. And since q 1 divides the product of the p's, by the corollary, it's got to divide one of them-- q 1 must divide p i for some i, but that contradicts the fact that q 1 is bigger than p i. It's not possible for the larger number to divide the smaller number. And we're done. And we have proved the unique factorization theorem.
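The two ingredients of the proof-- the GCD as a linear combination, and the weakly decreasing prime sequence-- can be checked numerically. Here's a sketch in Python; it is not from the lecture, and the function names `extended_gcd` and `prime_factors_decreasing` are my own.

```python
# Sketch (not from the lecture): the extended Euclidean algorithm produces
# coefficients s, t with gcd(a, b) = s*a + t*b, and trial division produces
# the weakly decreasing prime sequence that the theorem says is unique.

def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def prime_factors_decreasing(n):
    """Weakly decreasing sequence of primes whose product is n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return sorted(factors, reverse=True)

# Euclid's Lemma in action: p = 7 divides 10 * 21 but not 10, so it divides 21.
p, a, b = 7, 10, 21
g, s, t = extended_gcd(a, p)
assert g == 1 and s * a + t * p == 1   # gcd(a, p) = 1: p is prime, p doesn't divide a
assert s * (a * b) + t * p * b == b    # multiply the linear combination by b
assert b % p == 0                      # both terms are multiples of p, so p divides b

# The lecture's example sequence: 53, three 37s, two 11s, a 7, three 3s.
n = 53 * 37**3 * 11**2 * 7 * 3**3
assert prime_factors_decreasing(n) == [53, 37, 37, 37, 11, 11, 7, 3, 3, 3]
```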
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
143_Digital_Logic_Video.txt
PROFESSOR: Propositional operators play a basic role in the design of digital circuitry, and we're going to illustrate that in this section by designing a little binary addition circuit. So let's begin with a review of binary notation and addition in binary. So the way binary works is like decimal, except instead of using powers of 10, you're using powers of 2. So here is the binary representation of the number 39. The way to understand that is this is the ones place. That's the twos place. That's the fours place. So 1 plus 2 plus 4 is 7. Then this is the eights place with nothing, this is the 16s place with something, and this is the 32s place with 1. So we get 32 plus 7, and get 39. Likewise, the binary representation of 28 is 011100. I'll let you check how that works with contributing 1, 2, 4, 8, 16, and 32. And finally, let's add these two numbers in binary. Now, binary addition works just like decimal addition except that the only digits are ones and zeros, so that when you get 1 plus 1, you have to carry 1. Let's do that. So 1 plus 0 is 1. That fills in the first column. Now we have another 1 plus 0 is 1. That's fine. Now we have a 1 plus 1. And that's going to do a 0 here and contribute a carry of 1 to the next column. Now, the next column has two ones. So it becomes a 0 and contributes another carry. Now we have two ones. We get a 0 and contribute another carry. And now we have two ones, and we finally get a 1 0. So this is the binary representation of the sum, and you can check that this is 1 plus 2 is 3, plus 64. So the answer should be 67. And you can check that it is. So that's how binary addition works. So now let's try to design a bit of circuitry using digital logic signals of 0 and 1, which will do addition. And so we're going to try to design a little six bit binary addition circuit. So I'm going to have as inputs, the six digits of the first binary number-- a 5 down through a 0 and then the second binary number. Let's call it b 0 through b 5. 
So these are two binary numbers that are six digits long, and I'm going to add them up by thinking of a 1 as a 0 or 1 signal. a 0 is a 0 1 signal. b 0 is a 0 1 signal. And these can be transmitted down wires into some boxes that contain digital operators that will cause the right signals to come out. And what we want to come out of here is the possibly seven digit representation of their binary sum. So d 0 is the sum of a 0 and b 0-- the lower digit, possibly with carry, and so on. And then c 5-- if the sum of two six digit numbers runs to seven digits, which it might as we saw in the previous example, then c 5 would become 1, otherwise 0. So this is the specification. I want a and b to come in, and I want their binary sum to come out as d's with a high order c if need be. Now, the way I'm going to do that-- it's clear that the behavior of the inputs for a and b, which produced the lower digit, might produce a carry, and that carry has to be transmitted to the next column if it exists. So I'm going to need a wire that sends a 0 1 signal from this box over to that one that carries 0 or 1, and likewise for all of the others. So this is the kind of basic structure of my binary addition circuit. This is called a ripple carry organization. It's mimicking exactly the way that we added up the two numbers column by column, possibly propagating a carry of 0 or 1-- or really a carry of just 1-- to the next column. And I've got all the wires in place that I need. What we need to do is design the digital circuitry that's in those boxes. Well, this box is different from the others because it's only got two inputs. All the others have three inputs. So the three input boxes we'll call full adders and the two input box a half adder. And the specification of a half adder, again, is that the output is the binary representation of a 0 plus b 0. So it's a two digit binary representation-- it'll never be bigger than 2 because there are only two numbers. 
The output of a full adder-- it gets three inputs in this case-- b 1, a 1, and the carry, c 0. And it produces the binary representation of the sum of those three numbers, which is a two digit binary representation that might be anything from 0 to 3. OK. Well, let's start with the easy case. What's a half adder? Well, a half adder, again, has inputs b and a, and it's supposed to produce as output the binary representation of b plus a. So d is the lower digit in the ones place, and c is the high order digit, namely the twos place. Well, what does that look like? Well, here's the circuit. This is the digital designer's symbol for an exclusive OR gate. So d is going to be the exclusive OR of a and b according to this pictorial diagram. Notice I'm using this colon colon equal symbol, which is convenient as a reminder that I'm defining the thing on the left. You could replace it by equal, but it's informative to realize that it's not an equality that you've proved or that some derivation shows the two interesting things are proven to be equal, but rather that I'm just defining what the d is. This output d is defined to be a XOR b. And likewise, this is an AND gate. So the output c is a AND b. And let's check that. The low order digit is definitely the mod 2 sum, the XOR of a and b. And when is there a carry? Well, the only way there's a carry is when the value is 2, in which case the output c would be 1 and d would be 0. And that's exactly when both a and b are 1, that is, c is a AND b. So that's a half adder. That was easy. Well, a full adder looks like this. It's a little bit more complicated, and I'm going to write out the equations without trying to justify them completely. But I need a name in order to describe this with propositional operators. I need a name for that important signal. Call it s, which is what we were calling d in the previous one. 
But now this is a half adder with inputs a and b and outputs s, which is a XOR b, and another output here, which we know is just going to be a AND b. OK. How do I express this set of connections as formulas? Well, first of all, s is the output of this first half adder, which is a XOR b. OK. The output d I get by taking s, and it's the first output of the second half adder, which means it's c in XOR s. That's easy. And what about c out? Well, c out is getting-- this is an OR gate, by the way. So c out is going to be an OR of what comes out of this half adder, which is c in AND s, OR'd with the output of this half adder, which is just a AND b. So there are a bunch of equations that completely characterize the structure of this little bit of digital logic and how it is wired up and fits together. Now, let's go back to describing our ripple carry circuit and what was going on here. Now that we have the equations that characterize the behavior of these full adders and half adders, I can explain to you what the formulas are for all of these outputs-- the c's and the d's. And that goes as follows. So the first one, looking at this half adder with a 0, b 0 coming in and c 0, d 0 coming out, I know that d 0 is a 0 XOR b 0 and c 0 is a 0 AND b 0. That's just the formulas that we have for the half adder when the inputs are a 0 and b 0 and I call the outputs d 0 and c 0. Now, the more general case of the full adder-- what's coming in here is an a and a b with the same subscript-- a i and b i. And what's coming out is the ith digit of the binary sum, d i, and the carry, c i. And I could describe those just by using the formulas for the full adder. So what it means is that I'm going to introduce a new convenient variable, s i, which I'm going to define to be a i XOR b i. d i is then going to be c i minus 1-- the carry from the previous place-- XOR with s i. 
And the new carry, c i, is going to be the output of the second half adder, which is c i minus 1 AND s i, OR the output of the first half adder, which is a i AND b i. So the point is that I've just taken the wiring and translated it into equations like this, and you can see how these equations might be better to use than the particular way that you drew the picture with all the wires connected, because the logical behavior of the circuit doesn't depend on how it's laid out. It just depends on these logical connectives between the values of these different variables.
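The half adder and full adder equations translate directly into code. Here is a hedged sketch in Python (the function names are mine, not the lecture's), wiring six full adders into a ripple-carry chain and checking it against the earlier worked example, 39 plus 28.

```python
# Sketch of the lecture's equations: s = a XOR b, d = c_in XOR s,
# c_out = (c_in AND s) OR (a AND b), chained in ripple-carry fashion.

def full_adder(a, b, c_in):
    s = a ^ b                      # first half adder's XOR output
    d = c_in ^ s                   # second half adder's XOR output
    c_out = (c_in & s) | (a & b)   # OR of the two half adders' carry outputs
    return c_out, d

def ripple_carry_add(a_bits, b_bits):
    """a_bits, b_bits: lists of 0/1 signals, index 0 = low-order bit."""
    result, carry = [], 0
    # The first box in the circuit is a half adder; a full adder with
    # c_in = 0 behaves identically, so we reuse it for bit 0.
    for a, b in zip(a_bits, b_bits):
        carry, d = full_adder(a, b, carry)
        result.append(d)
    result.append(carry)           # c5: the possible seventh output digit
    return result

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

# Check against the worked example: 39 + 28 = 67, a seven-digit result.
bits = ripple_carry_add(to_bits(39, 6), to_bits(28, 6))
assert sum(bit << i for i, bit in enumerate(bits)) == 67
```

Because the behavior is captured by the equations rather than the layout, the same three lines inside `full_adder` describe every box in the chain.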
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
173_Relational_Mappings_Video.txt
PROFESSOR: So in this short segment, we'll talk about some relational properties that I call mapping properties. They can also be referred to as the archery of relations. This segment is mostly vocabulary. There are a half a dozen concepts and words that are standard in the field and that one needs to know to be able to do discrete math and so on. The applications will come in the next short segment, where we start applying these properties to counting. Although, there'll be a punchline about counting at the end of this segment. So let's go back, or proceed, and remember that a binary relation is a thing with three parts. It's got a domain, illustrated as A here, a codomain, illustrated as B here, and a relationship-- an association between domain elements and codomain elements indicated by the arrows, the arrows being called the graph of the relation. And we already observed one aspect of archery and arrows: that the concept of a function could be captured by saying that there was less than or equal to 1 arrow out of every element in the domain. That implied that there was a unique other end of an arrow out of a domain point, called the value of that point under the relation, which is in fact a function F. So F of green equals magenta where there is an arrow out of a green element. But in this picture-- as is typical-- not every domain element-- not every green dot-- has an arrow out of it. So this would be an illustration of a partial function, where F of a green element isn't always defined if there's no arrow out. Well, the general idea of archery relations pursues this function idea: basically, we're going to classify relations according to-- first, how many arrows come out of domain elements? Really, in three categories. The relations where there's at most one arrow out of every domain element, there's exactly one arrow out of every domain element, or there's at least one arrow out of every domain element. 
And symmetrically, we're going to classify relations with respect to the codomain in the same way-- relations where every codomain element has greater than or equal to 1 arrow in, has exactly one arrow in, or has at most one arrow in. That's the other part of the classification. And various combinations of these things have standard names, which it turns out that you'll need to know. So we'll lead you through them. OK. So let's begin with the idea of a total relation. Total relation means there's at least one arrow out of every domain element. So if you look at this picture, it's not quite total yet because there are two green domain elements with no arrows out of them. So I've just highlighted them in red, and we can fix this by making them disappear. Now I'm left with a total relation. Every domain element has at least one arrow coming out of it. So that's what makes it total. Another way to say total is to say that if you look at the inverse image of the codomain, it is equal to the domain. That means if you take all the arrows that are coming out of the domain and you turn them around and you look at all the things that have arrowheads into them, it's the entire domain. So R inverse of B equals A-- a nice, slick way to say it using relational operators and sets related to applying relations. So total and function means that there's exactly one arrow out, and that's probably the most familiar case of functions. And lots of fields just assume that functions are total, but the truth is that they often are not total, and people aren't careful about it. So let's look at a calculus-like example. Here's a function g that takes a pair of reals and returns a real. It maps the real plane into the real line. And the definition of it is g of x, y is 1 over x minus y. Now, the domain of this function g is in fact all the pairs of reals. That's what it means to say that it goes from R cross R-- shorthand R squared-- to the codomain R. The codomain is the set of all reals. 
But this g is obviously not total, because 1 over 0 is not defined, which means that on the 45 degree line, g is not defined. g of r, r is not defined. So g, in fact, is not a total function even though it's familiar. And you don't worry about partial functions normally. You wouldn't notice that this was partial because you're not used to paying attention to that. OK. Let's look at a slight variation. This is a function g 0 that goes from some unspecified domain-- I'll specify it in a minute-- to the reals. It has exactly the same formula: g 0 of x, y is 1 over x minus y. But now, I'm going to tell you that the domain-- instead of being all the reals-- is the reals except for that 45 degree line. I just want to get rid of the bad points and not worry about them. The minute I do that, I have these two functions-- relations that have the same graph but different domains. And the result is that I've removed from the domain of g all the bad points. I'm left with a total function g 0. OK. Let's keep going. The next concept is of a surjection, and that's a relation where there is at least one arrow into every point in the codomain. There's at least one arrow into every point in B. Well, again, this is a picture where that doesn't quite work, because there's at least one bad point there-- there it is in red-- that doesn't have an arrow in. So let's fix things again by making it disappear. Now I'm left with a surjective relation, or a surjection, because in fact, everything in the codomain B has at least one arrow coming in. Everything's the endpoint of an arrow. So likewise, we can say in terms of set operations that R is a surjection if and only if the image of the domain is the codomain. Or still another way to say it is-- if and only if the range of the function is its entire codomain. Remember, the range is the set of points that are hit-- that's R of A-- and it's not always equal to the codomain. But when it is, that is what makes it a surjection. All right. 
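The calculus-like example from a moment ago can be sketched in code. This is my illustration, not the lecture's: g is partial, with no value on the 45 degree line x = y, while g 0 is total on the restricted domain.

```python
# Sketch: g(x, y) = 1 / (x - y) as a partial function on pairs of reals.

def g(x, y):
    if x == y:
        return None         # no value on the 45 degree line: g is partial
    return 1 / (x - y)

assert g(3.0, 1.0) == 0.5
assert g(2.0, 2.0) is None  # removing these points from the domain gives g0
```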
Injections-- another variation on the theme-- an injection is a relation where there is at most one arrow into every element in the codomain. So looking at this picture now, this is not quite an injection, because there are at least two points here that have more than one arrow coming into them. That's what keeps it from being an injection. So let's fix that by deleting a couple of those edges that are crowding up points, and now I'm left with a situation where, in fact, everything in B has at most one arrow coming in. And so I'm showing you a picture of an injection. And the final concept is when you have all the good properties. A bijection is when you have exactly one arrow out and exactly one arrow in. It's a total function that is an injection and a surjection, because it's got greater than or equal to 1 and less than or equal to 1-- and therefore exactly 1-- for all of the domain and codomain elements. Now, there's an obvious thing, though, about bijections, which we'll wrap up with, which is why they're useful in counting theory: it's clear that since there's exactly one arrow out of every element in A, the number of arrows is the same as the size of A; and since there's exactly one arrow coming into every element of B, the number of arrows is the same as the size of B. And guess what. That means that where there's a bijection, the sets are of equal size. If there's a bijection between two finite sets A and B, that means that they're the same size.
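For finite relations, the whole classification can be checked mechanically by counting arrows. A sketch (the names and the representation of a relation as a set of arrow pairs are my own, not the lecture's):

```python
# Sketch: classify a finite relation, given as a set of (domain, codomain)
# arrow pairs, by counting arrows out of and into each element.

def is_total(A, arrows):       # >= 1 arrow out of every domain element
    return all(any(a == x for x, y in arrows) for a in A)

def is_function(arrows):       # <= 1 arrow out of every domain element
    sources = [x for x, y in arrows]
    return len(sources) == len(set(sources))

def is_surjection(B, arrows):  # >= 1 arrow into every codomain element
    return all(any(b == y for x, y in arrows) for b in B)

def is_injection(arrows):      # <= 1 arrow into every codomain element
    targets = [y for x, y in arrows]
    return len(targets) == len(set(targets))

def is_bijection(A, B, arrows):
    return (is_total(A, arrows) and is_function(arrows)
            and is_surjection(B, arrows) and is_injection(arrows))

A, B = {1, 2, 3}, {'a', 'b', 'c'}
arrows = {(1, 'b'), (2, 'c'), (3, 'a')}
assert is_bijection(A, B, arrows)
assert len(A) == len(B)        # the punchline: a bijection forces equal sizes
```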
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
181_Induction_Video.txt
PROFESSOR: So we come now to the topic of induction, which is a standard part of a high school curriculum and you've probably seen before, but is nevertheless worth looking at at the level that we look at things in this class. So the idea of induction can be-- one way to explain it is this way. Suppose that I plan to be assigning colors to the non-negative integers like, say, in this example, I color zero blue and one red and two blue and three red and four and five green, and it goes on somehow. OK. Now I'm going to describe to you a coloring that I have in mind, and we'll see whether you can figure out what it is. Here are the properties that my coloring has. First of all, I've colored zero red, and I've also continued the coloring satisfying the following rule. If I have an integer that's next to a red integer, then it's red also. Any integer next to a red integer is red also. So what's my coloring? Well, you obviously realize that they're all red. They have to be. And there they are. OK. This is actually a statement. It can be read as a statement of the rule of induction. It's kind of a self-evident axiom about numbers, but let's state it abstractly. So first of all, what induction is assuming is that you have some property of numbers. Call it red, R. And R of zero you're told, and you're also told that R of zero implies R of 1, and that R of 1 implies R of 2, and R of 2 implies R of 3, and in general R of n implies R of n plus 1, and so on. So we've written it out this way as an infinite set of implications to emphasize that that's what the rule that I stated, that if an integer is next to a red integer then it's red, is shorthand for. It's really a shorthand for this infinite number of different implications, each of which has to hold in order for you to be able to apply the rule of induction. Well, what can you conclude if all of these things hold? Well, then you can conclude that zero is red and one is red and two is red and n is red, and so on. OK. 
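The red-coloring rule can be mimicked computationally: start with 0 red and repeatedly apply "the integer next to a red integer is red." A sketch of that propagation (mine, not the lecture's), up to a finite bound:

```python
# Sketch: propagate "R(0), and R(n) implies R(n+1)" up to a bound N.

N = 20
red = [False] * (N + 1)
red[0] = True                 # base case: zero is colored red
for n in range(N):
    if red[n]:
        red[n + 1] = True     # the integer next to a red integer is red

assert all(red)               # every integer up to N ends up red
```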
Now of course, there's a much more concise way to express both these antecedents above the line and the conclusion below the line using quantifiers, namely the antecedents could simply be said by two predicate formulas, R of 0, comma, for all n, R of n implies R of n plus 1. That's really a summary of what we said on the first slide, that if an integer is next to a red integer, then it's red. That is, n plus 1 is next to n. If n is red, then n plus 1 is red. Similarly, the stuff below the line, that R of zero, R of one and so on hold, is simply expressed as for all m, R of m holds. And this is the form of the induction rule. It's read that if you've proved R of 0 and you've proved that, for every n, R of n implies R of n plus 1, then you can conclude that for every m, R of m holds, where the variables are all ranging over the non-negative integers. By the way, notice that I used n for the variable name above the line in the antecedent, and m for the variable name below in the consequent. I can use any names that I like for bound variables, just as when you define a procedure you can name the parameters of the procedure anything you like, because they're local variables. And I've used an m in the bottom and an n in the top just to emphasize that those variables have nothing to do with each other, which is a point that sometimes confuses students. OK. Sometimes the rule of induction is explained in terms of dominoes. You have all these dominoes lined up next to each other. You knock one over, it knocks over the next one, and so on. If that helps you think about and remember induction, that's fine. OK. Let's apply induction-- maybe one of the most basic and standard applications would be to prove a numerical identity. So let's prove one that we've actually seen before. This is the formula that we've previously proved using the well-ordering principle for a geometric sum. The sum of R to the 0, R to the 1, up to R to the n. 
And the claim is that that's equal to R to the n plus 1 minus 1 divided by R minus 1. So this sum of n plus 1 terms actually can be expressed concisely with a single, simple term. Of course, this only works if R is not 1, because I can't have the denominator be zero. All right. How do we prove it? Well, I'm going to do the proof. And at the same time that I do the proof, I'm going to show you kind of a standard template that you can pull out and use for induction proofs. So the template-- it's just an organizational method to do the proof. I'm doing the template in magenta. So that's the part that really is form, not substance. There's no math in it; it's just the structure that we're going to organize the proofs in, at least in the beginning. So here we go. The first thing you do is tell your reader that you're going to be using proof by induction. That helps them understand what's coming. So you begin with the line: proof by induction on n. Now n is not in magenta because sometimes you use different variables, and sometimes there'll be many variables in the assertion. So you need to tell the reader just which one is the one that you're going to be applying induction to. All right. That said, the most important part of the proof, the part where there's usually a mistake-- if there's a mistake anywhere, it's usually in the absence of the statement of an induction hypothesis, or a misguided induction hypothesis. So the next part of the template says the induction hypothesis P of n is-- and in this case, our induction hypothesis is that this equality holds. That's what we're trying to prove. So the induction hypothesis is P of n. The objective, then, implicitly, when we're doing induction with this induction hypothesis, is to prove that for all n, P of n holds. This identity works for all non-negative integers n. OK. Having stated the induction hypothesis, the first thing we have to do is work on the base case. That is, prove it for n equals 0. 
Now we're telling the reader that it's n equals 0 because sometimes it's convenient to start at n equals 1 or n equals 2, and then you're just concluding that the property holds for all n greater than or equal to 1, or however-- all n greater than or equal to wherever you started. So we're going to start at 0, which is the standard place. And what do we have to check? We have to check that the sum on the left, when n is 0, is equal to the formula on the right when n is 0. Well, the sum on the left, when n is 0, is really just 1, because it's going from R to the 0 to R to the 0. The R and the R squared are a little misleading because they're not really there when n is 0. So the left-hand side is 1, and the right-hand side is R minus 1 over R minus 1, which is 1 since R is not 1. So sure enough, it checks, and we're OK. The case n equals 0 has now been proved. So the next thing we have to do in the template is to go to the inductive step. And that's where we assume that P of n holds. And we're allowed to use the P of n assumption in order to prove that P of n plus 1 holds, where the only thing that we know about n besides that P of n holds is that n is greater than or equal to 0. And our proof has to work for all possible n that are greater than or equal to 0. All right. Well, now we can start doing the non-template work that has to do with the content of what we're actually trying to prove. This is what I want to prove. This is P of n plus 1. It's gotten by replacing the n's in the previous equation by n plus 1's. I'd like to have that be true. OK. How do I get to that? Well, I can assume P of n, which kind of looks like it already. It's a good head start to getting to P of n plus 1. So I'm allowed to assume that this equality holds for n. I don't know what n is except that it's a non-negative integer, but this equality holds for n. And I have to prove that it holds for n plus 1. 
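Before pushing through the algebra, the identity P(n) itself can be spot-checked numerically. A sketch (not part of the proof, and no substitute for it), using exact rational arithmetic so there is no floating point noise:

```python
# Sketch: check 1 + r + r^2 + ... + r^n == (r^(n+1) - 1) / (r - 1), r != 1.
from fractions import Fraction

def geometric_sum(r, n):
    return sum(r**i for i in range(n + 1))

def closed_form(r, n):
    return (r**(n + 1) - 1) / (r - 1)

for r in (Fraction(2), Fraction(3), Fraction(1, 2), Fraction(-5)):
    for n in range(8):            # includes the base case n = 0
        assert geometric_sum(r, n) == closed_form(r, n)
```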
Well, if you look at this, what I'm trying to prove is something about the sum that goes up to R to the n plus 1. So given this equation, I can turn the left-hand side into the sum that I'm interested in. That is, the sum of powers of R up to the n plus 1st power of R simply by adding R to the n plus 1 to both sides, an obvious strategic move, or tactical move. OK. So doing that, I get this equality, which I've now proved from the induction hypothesis. Namely, the sum up to R to the n plus 1, which is what I'm interested in, is equal to this algebraic expression on the right-hand side. And if I'm lucky, and of course I will be, the right-hand side is going to simplify to be the target expression, with n replaced by n plus 1. So what happens is-- let's put R to the n plus 1 over this common denominator, R minus 1. And I get the second term, and then you can do a little bit of algebraic simplification, trivial, and you'll realize that, sure enough, it simplifies to R to the n plus 1 plus 1 minus 1 over R minus 1, which was exactly the equality that I was hoping to prove. So in fact, at this point we can say that we've proved P of n plus 1, and we've completed the induction proof. We're done. OK. That is the first basic example of an induction proof. And the whole template is now visible, except maybe there should have been a QED or a Done statement. All right. By the way, as an aside, and we already saw a little problem with this, the three dots that appeared in the sum are called an ellipsis. Plural is ellipses. And they're used where the writer is trying to tell the reader that there's an obvious pattern that the reader is expected to see, which I think is fairly clear in this case. You go-- you know, it's R to the 0, R to the 1, R to the 2, R to the 3, up to R to the n. The difficulty is that sometimes the ellipsis can cause some confusion. For example, we had to figure out that when n is 0, the left-hand side actually just meant 1. 
It was just R to the 0, so the R and the R squared weren't really there. One way to really avoid that kind of fence post problem-- where, in order to make clear what the pattern is, you've shown more of a pattern than may always be there-- is to use a precise mathematical notation where I actually tell you the pattern of the i-th term, and tell you that you should sum from i equals 0 to n. So the sigma notation is shorthand for sum, and I'm telling you that the i-th term in the sum is R to the i, and it's going to run from i equals 0 to n. So this is a sort of mathematical notation for a for loop or a do loop-- do from i equals 0 to n, add R to the i to the accumulator. And the sum notation is certainly more precise. But sometimes, it's actually harder to read than simply showing you the pattern, because the pattern often is visible visually. OK, now let me tell you a little story. And it's a made-up story, but it's kind of fun to tell. This is the familiar building, the Stata Center. And this is actually a design mock-up that the architects produced for the MIT team that was overseeing the construction and design of the building to show what the student lobby would look like, the student street. Now the story goes that part of the plan for the student street was to have a plaza that was going to be built out of unit size squares, but an uncertain number of them. There was going to be a parameter that determined the size of the square. And the size of the square was actually going to be a power of two by a power of two made out of unit size tiles. So there would be 2 to the n times 2 to the n unit size tiles filling up this square plaza. And the plaza was to be tiled with these unit tiles, but one tile space was to be left blank so that a statue of the then-potential donor, Bill, could be placed in the middle as an incentive for him to donate funds for the completion of the building, which indeed he did. 
So the puzzle, then, was put forward by the architect Frank Gehry, who many regard, after Frank Lloyd Wright, as the greatest architect of the 20th century. Gehry specified for aesthetic reasons that he wanted the square to be tiled with L-shaped tiles that were made out of three unit squares. He thought that that would give a pretty design, and it actually does. So here's an example of tiling the n equals 3 case, 2 cubed by 2 cubed-- an 8x8 plaza-- with Bill in the middle. There is the 8x8 plaza tiled with these L-shaped tiles, each consisting of three unit tiles. So the question was that the exact size of the square was to be determined by other architectural considerations. So it was parametrized by n; it's going to be 2 to the n by 2 to the n. The question was, can you always find such a tiling no matter how big the square is, and leave Bill in the middle? Well, let's try to prove it by induction. The induction hypothesis-- we're trying to prove a theorem that, for any 2 to the n by 2 to the n plaza, we can make Bill and Frank happy. That is, Bill's happy when he's in the middle, and Frank is happy when the rest of the square is covered with L-shaped tiles. By the way, middle is a little bit ambiguous, because there are really four middle squares. But of course, it doesn't matter which one you fill, because if you wanted a different one you could just rotate the whole square and get any one of the four middle squares empty for the Bill statue. So an induction proof would proceed by induction on something or other. And the obvious thing is the n that's in the statement of the theorem. And the induction hypothesis would straightforwardly be that we can tile the 2 to the n by 2 to the n plaza with Bill in the middle. OK. The base case is n equals 0. That's a 2 to the 0 by 2 to the 0-- it's a 1x1 square. OK, well, not a problem. You just put Bill in the one square, and you tile the rest with no L-shaped tiles. That fits the rules. 
So the base case, n equals 0, is covered. All right. So now we come to the double size square, the square that's of size 2 to the n plus 1 by 2 to the n plus 1. I have to tile that with Bill in the middle, but I have a fairly powerful induction hypothesis that I'm allowed to assume, namely that I can tile the half size square, the 2 to the n by 2 to the n square, and get Bill in the middle. So obviously, the double size square is made out of four half-sized squares. And so I can try to fill up the whole square-- the full-sized square, or double-sized square, 2 to the n plus 1 by 2 to the n plus 1-- by working with my ability to tile each of those four subsquares with L-shaped tiles, leaving Bill in the middle. So I can assume that, and now I'm stuck really. What do I do? How do I use this ability to put Bill in the middle of each of those four quadrants in order to fill in the whole thing with L-shaped tiles? I'm stuck. And the point of this example is to show you the way to get unstuck, which is kind of unexpected. I'm actually going to get unstuck by proving something stronger. I'm actually going to prove that we can find a tiling using L-shaped tiles with Bill placed in any specified square that you like. Wherever you want to put him, I can tile the rest with L-shaped tiles and leave the specified single square blank for a statue of Bill to be put there. So what's unintuitive about this is that I'm proving something stronger. It ought to be harder to prove, right? But because I'm trying to prove a conclusion that's stronger, I also have a stronger induction hypothesis to work with in conducting the proof. And the net proof actually, as you'll see, is going to be easier. So let's do it with the stronger induction hypothesis. The theorem is, again, for any 2 to the n by 2 to the n plaza, we can make Bill and Frank happy. Prove by induction on n.
But with a revised induction hypothesis-- I'm calling it P of n again-- which is I can tile the square with Bill anywhere. So the base case, n equals 0, is the same as before. It's just 1x1. So I put Bill in the only tile that there is, which is both the middle and the corner and everything else. And the base case doesn't change. For the inductive step, now I have a more powerful thing that I can assume as the induction hypothesis. I can assume that, in any given tile location-- any given unit square-- of a 2 to the n by 2 to the n plaza, I can put Bill where I want him to be and tile the rest with L-shaped tiles. And I have to use that hypothesis to show that I can get Bill anywhere that's required in a 2 to the n plus 1 by 2 to the n plus 1 square. So suppose that we want to place Bill in some designated, arbitrary square of the 2 to the n plus 1 by 2 to the n plus 1 plaza, and say the location we're given happens to put Bill in the upper right quadrant, the upper right half size square. All right. So by hypothesis, I can fill in the purple square, that quadrant, with L-shaped tiles, leaving Bill in the place that he's supposed to be. Well, here's the trick. With the other three, since I can tile them with Bill anywhere, I'm going to tile them with Bill in the respective corners of those three other subsquares, which meet in the center of the full-size plaza, as shown here. And having done that, now it's obvious how to fill up the whole 2 to the n plus 1 by 2 to the n plus 1 plaza, because I pull those four separate pieces together to form the full-size 2 to the n plus 1 by 2 to the n plus 1 plaza. And look, I just put an L-shaped tile in the middle to fill up those three empty corner squares, and I'm done. And the proof is complete. We have just proved by induction that indeed you can tile any power of 2 by power of 2 square, leaving Bill wherever you want him, and the rest filled with L-shaped tiles.
Now notice that a part of this process actually is implicitly defining a recursive procedure to actually do the tiling. If you watch the way the proof went, if I was going to write a recursive procedure to do the tiling, what I would do is say OK, you give me input n, which specifies the dimensions of the plaza-- input n means it's 2 to the n by 2 to the n. How do I do that? Well, you tell me where you want Bill to be as another parameter, and then I will call myself recursively on the four half size squares. That is, call myself with dimension parameter n minus 1 four times, once for each quadrant, each time specifying in those quarters where I want Bill to be. The recursive procedure will return an L-shaped tiling of those four pieces, and then I take those, fit them together, tile that middle, and I wind up with a tiling of the whole region. So what I've just talked through is the description of a very easily written recursive procedure that would print out a picture of an L-shaped tiling given, as input, any number n. And that's, in fact, how we got the 8x8 tiling, although we did it by hand rather than writing a program. And that's enough of two examples of basic mathematical induction.
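That recursive procedure can be sketched in Python. This is our own illustration, not code from the course: `tile` recurses on the four quadrants, placing one central L-tromino whose three arms become the "Bill" cells of the quadrants that don't contain the real hole.

```python
from itertools import count

def tile(board, size, top, left, miss_r, miss_c, ids):
    """Tile the size-x-size subsquare at (top, left) with L-trominoes,
    leaving cell (miss_r, miss_c) untouched.  size is a power of 2."""
    if size == 1:
        return
    t = next(ids)           # id of the central tromino at this level
    half = size // 2
    for dr in (0, 1):
        for dc in (0, 1):
            qr, qc = top + dr * half, left + dc * half   # quadrant origin
            if qr <= miss_r < qr + half and qc <= miss_c < qc + half:
                # quadrant containing the hole: recurse with the same hole
                tile(board, half, qr, qc, miss_r, miss_c, ids)
            else:
                # cover this quadrant's corner nearest the center with one
                # arm of the central tromino; that cell is the new hole
                cr, cc = top + half - 1 + dr, left + half - 1 + dc
                board[cr][cc] = t
                tile(board, half, qr, qc, cr, cc, ids)

n = 3
size = 2 ** n
bill = (3, 4)               # Bill can go in ANY square, per the stronger hypothesis
board = [[0] * size for _ in range(size)]
board[bill[0]][bill[1]] = 'B'
tile(board, size, 0, 0, bill[0], bill[1], count(1))
```

Each call at size greater than 1 lays down exactly one tromino, so the 8x8 case uses (4 cubed minus 1) / 3 = 21 trominoes plus Bill's square.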
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
442_Random_Variables_Independence_Video.txt
PROFESSOR: We just saw some random variables come up in the bigger number game. And we're going to be talking now about random variables, just formally what they are, and the definition of independence for random variables. But let's begin by looking at the informal idea. Again, a random variable is a number that's produced by a random process. So a typical example that comes up where you get a random variable is you've got some system that you're watching, and you're going to time it to see when the next crash comes, if it crashes. So assuming that it happens in some unpredictable, random way, then the number of hours from the present until the next time the system crashes is a number that's produced by this random process of whether the system works or not. Number of faulty pixels in a monitor. When you're building the monitors and delivering them to the actual computer manufacturers, there's a certain probability that some of the millions of pixels in the monitor are going to be faulty. And you could think of that number of pixels as also being produced by an unpredictable randomness in the manufacturing process. One that really is modeled in physics as random is when you have a Geiger counter measuring alpha particles. The number of alpha particles that are detected by a given Geiger counter in a second is believed to be a random number. There's a distribution that it has, but the number of alpha particles is not always the same from second to second, and so it's a random variable. And finally, we'll look at the standard abstract example of flipping coins. If I flip a coin n times, the number of heads in those n flips will be another rather standard random variable. OK, what is abstractly a random variable? Oops, I'm getting ahead of myself again. Let's look at that example of three fair coins. So each coin has a probability of a half of coming up heads and a half of coming up tails.
I'm going to flip the three of them. And I'm going to assume that they're distinguishable. So there's a first coin, a second coin, and a third coin. Or alternatively you could think of flipping the same coin three times. So the number of heads, C, is a number that comes out of this random process of flipping the three coins. So it's a number from 0 to 3. There could be no heads or all heads. So it is a basic example of a random variable where you're producing this integer based on how the coins flip. Another one, M, is simply a 0-1 valued random variable that signals 1 if all 3 coins match in what they come up with, and 0 if they don't match. Now once I have these random variables defined, one of the things that's a convenient use of random variables is to use them to define various kinds of events. So the event that C equals 1-- that's a set of outcomes where the count is 1, and it has a certain probability. This is the event of exactly 1 head. There are 3 such outcomes among the 8 outcomes of heads and tails with 3 coins. So it has probability 3/8. I could also just talk about the event that C is greater than or equal to 1. Well, C is greater than or equal to 1 when there is at least 1 head. Or put another way, the only time that C is not greater than or equal to 1 is when you have all tails. So there's a 7/8 chance-- 7 out of 8 outcomes involve 1 or more heads. So the probability that C is greater than or equal to 1 is 7/8. Here's a weirder one. I can use the two variables C and M to define an event. What's the probability that C times M is greater than 0? Well, since C and M are both non-negative variables, the probability that their product is greater than 0 is equal to the probability that each of them is greater than 0. OK, what does it mean that M is greater than 0 and C is greater than 0? Well, C greater than 0 says there's at least 1 head. And M greater than 0 means all the coins match.
This is an obscure way of describing the event all heads, and it has, of course, probability 1/8. Now we come to the formal definition. So formally, a random variable is simply a function that maps outcomes in the sample space to numbers. We think of the outcomes in the sample space as the results of a random experiment. They are an outcome and they have a probability. And when the outcome is translated into a real number that you think of as being produced as a result of that outcome, that's what the random variable does. So formally, a random variable is not really a variable at all; it's a function that maps the sample space to the real numbers. And it's got to be total, by the way. It's a total function. Usually it's a real valued random variable-- the codomain is the real numbers, or possibly a subset of the real numbers, as with integer valued random variables. Occasionally we'll use complex valued random variables. Actually, that happens a good deal in quantum mechanics, but not for our purposes. We're just going to mean real valued from now on when we talk about random variables. So abstractly or intuitively, what the random variable is doing really is just packaging together in one object R, the random variable, a whole bunch of events that are defined by the value that R takes. So for every possible real number a, if I look at the event that R is equal to a, that's an interesting event. And it's one of the basic events that R puts together. And if you knew the answer to all of these R equals a's, then you really know a lot about R. And with this understanding that R is a package of events of the form R is equal to a, then a lot of the event properties carry right over to random variables directly. That's why this little topic of introducing random variables is also about independence, because the definition of independence carries right over.
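The three probabilities just computed can be checked by brute-force enumeration of the 8 equally likely outcomes. A small sketch in Python (the names `C`, `M`, and `pr` are ours, matching the lecture's variables):

```python
from fractions import Fraction
from itertools import product

# sample space: 8 equally likely outcomes of three distinguishable coins
outcomes = list(product('HT', repeat=3))

def pr(event):
    # probability of an event = (number of outcomes in it) / 8
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

C = lambda w: w.count('H')                    # number of heads
M = lambda w: 1 if len(set(w)) == 1 else 0    # 1 iff all three coins match

assert pr(lambda w: C(w) == 1) == Fraction(3, 8)        # exactly one head
assert pr(lambda w: C(w) >= 1) == Fraction(7, 8)        # at least one head
assert pr(lambda w: C(w) * M(w) > 0) == Fraction(1, 8)  # all heads
```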
Namely, a bunch of random variables are mutually independent if the events that they define are all mutually independent. So R1 through Rn are mutually independent if and only if the events defined by R1 and R2 through Rn are mutually independent, no matter what values we choose to look at for R1 and R2 through Rn. And of course there's an alternative way-- we could always express independence in terms of products instead of conditional probabilities. So instead of invoking the idea of mutually independent events, we could say explicitly where it comes from as an equation. It means that the probability that R1 is equal to a1 and R2 is equal to a2 and Rn is equal to an is equal to the product of the individual probabilities-- the probability that R1 is a1 times the probability that R2 is a2, and so on. And the definition of mutual independence of the random variables R1 through Rn is that this equation holds for all possible values, little a1 through little an. So let's just practice. Are the variables C, which is the count of the number of heads when you flip three coins, and M, the 0-1 valued random variable that tells you whether there's a match, independent? Well, certainly not, because there's definitely a positive probability that the count will be exactly 1, and there's a positive probability that they all will match-- that's a probability of a quarter. So the product of those 2 is positive, but of course the probability that you match and you have exactly 1 head is 0, because if you have exactly 1 head you must have 2 tails and there's no match. So without thinking very hard about what the probabilities are, we can immediately see that the product is not equal to the probability of the conjunction, the and, and therefore they're not independent. Well, here's one that's a little bit more interesting.
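The dependence argument above is exactly one failing instance of the product rule, which enumeration confirms directly (a sketch with our own names, assuming fair, distinguishable coins):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product((0, 1), repeat=3))   # 1 = head

def pr(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

C = lambda w: sum(w)                          # count of heads
M = lambda w: 1 if len(set(w)) == 1 else 0    # 1 iff all three match

# product of the marginals is positive...
assert pr(lambda w: C(w) == 1) * pr(lambda w: M(w) == 1) > 0
# ...but the joint probability is 0, so C and M are NOT independent
assert pr(lambda w: C(w) == 1 and M(w) == 1) == 0
```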
In order to explain it I've got to set up the idea of an indicator variable, which itself is a very important concept. So if I have an event A, I can package A into a random variable. Just like the match random variable was really packaging the event that the coins matched into a 0-1 valued variable, I'm going to define the indicator variable for any event A to be 1 if A occurs and 0 if A does not occur. So now I'm able to capture everything that matters about event A by the random variable IA. If I have IA I know what A is, and if I have A I know what IA is. And it means that really I can think of events as special cases of random variables. Now when you do this you need a sanity check. Because remember, we've defined independence one way for random variables, and we have another concept of independence that holds for events. The definition for random variables was motivated by the definition for events, but it's a different definition of independence for different kinds of objects. Now, if this correspondence between events and indicator variables is going to make sense and not confuse us, it should be the case that two events are independent if and only if their indicator variables are independent. That is, IA and IB are independent if and only if the events A and B are independent. And this is a lovely little exercise. It's like a three-line proof for you to verify. I'm not going to bother to do it on the slide because it's good practice. So this would be a moment to stop and verify that, using the two definitions of independence-- the definition of what it means for IA and IB to be independent as random variables, compared to the definition of what it means for A and B to be independent as events-- they match.
If we look at the event of an odd number of heads, we can ask now whether the random variable M, the indicator variable for a match, and the indicator variable IO are independent or not. Now both of these depend on all three coins. IO is looking at all 3 coins to see if there are an odd number of heads; M is looking at all 3 coins to see if they're all heads or all tails. And with all that common basis for what value they return, it's not immediately obvious that they're independent. But as a matter of fact they are. And again this is absolutely something that you should check out. If you don't stop the video now to work it out, you should definitely do it afterward. It's an important little exercise and it's easy to check. All you have to do is check that the event of an odd number of heads and the event that all coins match are independent as events. Or you could use the random variable definition and check that these two random variables are independent by checking 4 equations, because each of them can take the values 0 and 1. Remember, with independent events we had the idea that if A was independent of B, it really meant that A was independent of everything about B. In particular it was independent of the complement of B as well. And a similar property holds for random variables. So intuitively, if R is independent of S then R is really independent of any information at all that you have about S. And that can be made more precise by picking an arbitrary total function f from the reals to the reals. So I can think of f as giving me some information about the value of S. If R is independent of S, then in fact R is independent of f of S, any transformation of S by a fixed non-random function. And of course the notion of k-way independence carries right over from the event case.
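The "4 equations" check suggested above is short enough to automate; this sketch (our code, not the lecture's) verifies the product rule for all four value pairs of M and IO:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product((0, 1), repeat=3))   # 1 = head

def pr(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

M = lambda w: 1 if len(set(w)) == 1 else 0   # all three coins match
I_O = lambda w: sum(w) % 2                   # indicator of an odd number of heads

# product rule holds for all four (a, b) pairs, so M and I_O are independent
for a in (0, 1):
    for b in (0, 1):
        joint = pr(lambda w: M(w) == a and I_O(w) == b)
        assert joint == pr(lambda w: M(w) == a) * pr(lambda w: I_O(w) == b)
```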
If I have a bunch of random variables-- possibly many more than k-- they're k-way independent if every set of k of them are mutually independent. And of course, as with events, we use the 2-way case to call them pairwise independent. Again, we saw an example of this in terms of events already, but we can rephrase it now in terms of indicator variables. Let Hi be the indicator variable for a head on the i-th flip of a coin, where i ranges from 1 through k. If we have k coins and Hi is the indicator variable for how coin i came out-- whether or not it's a head-- now O can be nicely expressed. The notion that there's an odd number of heads is simply the mod 2 sum of the Hi's. And this, by the way, is a trick that we'll be using regularly: events can be defined rather nicely in terms of doing operations on the arithmetic values of indicator variables. So O is nothing but the mod 2 sum of the values of the indicator variables Hi from 1 to k. And what we saw when we were working with the event version is that any k of these variables are independent. I've got k plus 1 of them-- there's k Hi's and there's O, which makes k plus 1. And the reason why any k of them are independent was discussed in the previous slide, when we were looking at the events of there being an odd number of heads and a head coming up on the i-th flip. The reason why pairwise independence gets singled out is that we'll see that for a bunch of major applications, pairwise independence is sufficient, rather than needing to verify mutual independence. It's harder to check mutual independence-- you've got a lot more equations to check. And in fact it often doesn't hold in circumstances where pairwise independence does hold. So this is good to know. We'll be making use of it in an application later when we look at sampling and the law of large numbers.
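The claim that any k of the k+1 variables H1, ..., Hk, O are mutually independent, while all k+1 together are not, can be checked exhaustively for small k. A sketch for k = 3 (our helper names; `mutually_independent` just tests the product rule on every value assignment):

```python
from fractions import Fraction
from itertools import product

k = 3
outcomes = list(product((0, 1), repeat=k))   # 1 = head

def pr(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

def mutually_independent(vars_):
    # product rule must hold for EVERY assignment of 0-1 values
    for vals in product((0, 1), repeat=len(vars_)):
        joint = pr(lambda w: all(v(w) == a for v, a in zip(vars_, vals)))
        prod = Fraction(1)
        for v, a in zip(vars_, vals):
            prod *= pr(lambda w, v=v, a=a: v(w) == a)
        if joint != prod:
            return False
    return True

H = [lambda w, i=i: w[i] for i in range(k)]   # indicator of head on flip i
O = lambda w: sum(w) % 2                      # mod 2 sum of the H_i's

assert mutually_independent([H[0], H[1], O])  # any k of the k+1 variables
assert not mutually_independent(H + [O])      # but not all k+1 together
```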
MIT_6042J_Mathematics_for_Computer_Science_Spring_2015
251_Digraphs_Walks_Paths_Video.txt
PROFESSOR: In this video lecture, we're going to introduce the idea of directed graphs, or digraphs for short. So normally, and before this class, you might have thought of graphs as being something like this: y as a function of x, graphed on the xy plane. But that's not what we want to be thinking about. Instead, we want to think about something like this. This is a graph to a computer scientist. It shows a bunch of vertices, which are those points that you see, and edges, which connect vertices. Being more specific about this, a digraph is composed of a set V of vertices and a set E of edges, where each edge is an ordered pair of vertices. The way you write that out, an edge is v comma w, and that specifies an edge going from v to w. And in the graph, it would look something like this. Note that the edges are directed: an edge from v to w is not the same thing as an edge from w to v in a directed graph. For example, here we have one directed graph, and you write out the vertices as the set of all the vertices you see there, and the edges as pairs of vertices. You can also realize that a digraph is the same thing as a binary relation on the vertices, because each edge just defines a relationship from one vertex to another. So every binary relation can be drawn out as a graph: you just make the elements the vertices, and the edges are the pairs of elements that are related. So now we're going to define walks and paths. A walk follows successive edges, but it can repeat vertices or edges. For example, I can start at the black vertex there, and we can go to red, blue, yellow, red. And we can go back to blue again. There's nothing stopping us. And the length of a walk is not how many vertices we've gone through, but the number of edges that we've gone through. So here, the length would be five, because we went from white to black, black to blue, blue to yellow, yellow to red, red to blue. It's not the six vertices that we went through.
And you have to be careful about that, because that difference of one kind of gets you. A path, on the other hand, walks through vertices without repeating a single vertex. So, for example, start at blue: you can go to yellow, you can go to red, you can go to pink, you can go to green, but then we're stuck. We can't go back to red. We've already been there. So that's it. That would be the end of our path. If we went to red again, then it wouldn't be a path anymore-- it would not be a valid path. And here, the length is four edges, not five vertices. And every graph has a matrix representation. You draw it out like this. And what we're going to do is, if there's an edge that goes from one of the vertices on the side over to one of the vertices on the top, we'll put a one at that position. For example, there's an edge that goes from the black to the red. So, in the black row, in the red column, we're going to put in a one. Same thing: there's one that goes from black to green, so in the black row, green column, we put in another one. And so on for all the edges that we have in our graph. And the rest we just fill with zeroes. And this is called an adjacency matrix. And as you can see, it uniquely defines a graph. Every edge is represented here, and every one of the vertices is represented here. So, any graph can be drawn up this way.
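The adjacency matrix construction and the walk-versus-path distinction can be sketched in a few lines of Python. The small digraph used below is our own example, not the one in the lecture's figure:

```python
def adjacency_matrix(vertices, edges):
    """0-1 adjacency matrix of a digraph: entry [v][w] is 1 iff (v, w) in E."""
    idx = {v: i for i, v in enumerate(vertices)}
    A = [[0] * len(vertices) for _ in vertices]
    for v, w in edges:
        A[idx[v]][idx[w]] = 1     # directed: (v, w) is not the same as (w, v)
    return A

def is_walk(seq, edges):
    # a walk follows successive edges; repeated vertices/edges are allowed
    es = set(edges)
    return all((seq[i], seq[i + 1]) in es for i in range(len(seq) - 1))

def is_path(seq, edges):
    # a path is a walk with no repeated vertex
    return is_walk(seq, edges) and len(set(seq)) == len(seq)

# a small hypothetical digraph
V = ['a', 'b', 'c']
E = [('a', 'b'), ('b', 'c'), ('c', 'a')]
walk = ['a', 'b', 'c', 'a']   # its length is 3 = number of EDGES, not 4 vertices
```

Note that `walk` is a valid walk but not a path, since the vertex `a` repeats.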
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_22_The_Spectral_Theorem_for_a_Compact_SelfAdjoint_Operator.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so let's continue our discussion about spectral theory for self adjoint compact operators. So let me just briefly recall the spectrum of a bounded operator, which is supposed to be a generalization of the eigenvalues of a matrix. So we defined the resolvent set of A to be those complex numbers lambda such that A minus lambda times the identity, which I just write as A minus lambda, is an invertible, bounded linear operator, meaning it is bijective, which, by the open mapping theorem, tells you that the inverse is also continuous. And the spectrum of A is simply those lambda so that A minus lambda is not invertible-- the complement of the resolvent set of A. So from linear algebra, you have the following characterization of what the spectrum is: if H is CN, and A is therefore just a matrix on CN, or RN if you like, then the spectrum is just the set of eigenvalues of A. And if we restrict our attention to Hermitian matrices-- which are also referred to as self adjoint in linear algebra, or symmetric if we're just looking at real vector spaces RN-- then the eigenvalues are real. And you can find an orthonormal basis of the space CN or RN, depending on what you're looking at, for this symmetric matrix, so that in that basis the matrix is completely diagonalized. And what are the diagonal elements there? The eigenvalues of the matrix. And what we're going to end up proving is that this picture-- where for a self adjoint matrix on CN, the spectrum is given by the eigenvalues, and you can essentially diagonalize this operator, meaning you can find an orthonormal basis consisting entirely of eigenvectors of the operator-- is also true for self-adjoint compact operators. This shouldn't come as too much of a surprise, since compact operators are limits of finite rank operators, i.e. matrices, in the space of bounded linear operators.
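In symbols (our summary, not part of the spoken lecture), the definitions recalled here are:
\[
\rho(A) = \{\lambda \in \mathbb{C} : (A - \lambda)^{-1} \in \mathcal{B}(H)\}, \qquad \sigma(A) = \mathbb{C} \setminus \rho(A),
\]
and the finite-dimensional picture for a Hermitian matrix $A$ on $\mathbb{C}^N$ is an orthonormal basis $u_1, \dots, u_N$ with
\[
A u_j = \lambda_j u_j, \qquad \lambda_j \in \mathbb{R}, \qquad \sigma(A) = \{\lambda_1, \dots, \lambda_N\}.
\]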
So that's where we're headed. Now, just to follow up on this and for a little bit of review: in the finite dimensional case, the spectrum is always just the eigenvalues of the matrix A-- not so in the infinite dimensional setting, for example, if we're looking at little l2, and A times a is equal to, let's say, a1 over 1, a2 over 2, and so on, for a sequence a in l2. Then what you can prove is that 0 is in the spectrum of A. One way to see that is that each of the basis vectors of H-- where you just have 1 in the n-th slot and 0 otherwise-- is an eigenvector of this operator A with eigenvalue 1 over n, where n tells you where the 1 is. So 1 over n is an eigenvalue of this operator for each n. And 1 over n converges to 0. And since the spectrum of a bounded linear operator is a compact set, in particular closed, 0, which is the limit of that sequence, has to also be in the spectrum. And, in fact, what we're going to show is that in the nondegenerate case, what we see here is what in general happens for compact self-adjoint operators: if it's not a finite rank operator, then it has countably infinitely many eigenvalues which converge to 0. And 0 may be an eigenvalue, may not be. And what's more, that's going to completely characterize the spectrum. So let me just write here for this example that, in fact, the spectrum of A is equal to the point 0 union the set of 1 over n for n a natural number. And the 1 over n's are the eigenvalues of this operator, while 0 is in the spectrum but is not an eigenvalue. Now, this is not the fully general picture: for a compact self-adjoint operator you may have infinitely many eigenvalues with 0 not an eigenvalue, as here, but you could have 0 as an eigenvalue as well.
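Writing the example out (our notation): the operator acts by
\[
A(a_1, a_2, a_3, \dots) = \left(\tfrac{a_1}{1}, \tfrac{a_2}{2}, \tfrac{a_3}{3}, \dots\right), \qquad A e_n = \tfrac{1}{n}\, e_n,
\]
so each $1/n$ is an eigenvalue, and since $\sigma(A)$ is closed,
\[
\sigma(A) = \{0\} \cup \left\{\tfrac{1}{n} : n \in \mathbb{N}\right\}.
\]
Here $0 \in \sigma(A)$ is not an eigenvalue: $Aa = 0$ forces $a_n/n = 0$ for every $n$, i.e. $a = 0$.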
But in general, the picture is that anything in the spectrum that's not 0 has to be an eigenvalue. And that's essentially what we'll prove; our first result of this lecture is the following. So this is the Fredholm alternative. Let A be a self-adjoint compact operator and lambda be a nonzero real number. Then the range of A minus lambda is closed. So this is the conclusion. And thus the range of A minus lambda is equal to the orthogonal complement of the orthogonal complement of itself, and the orthogonal complement of the range of A minus lambda is equal to the null space of the adjoint. And since A is self-adjoint and lambda is a real number, the adjoint is just A minus lambda. So this right here is the main conclusion from which you get this. And therefore, just to spell out the consequence of this equality, one of two alternatives happens-- so that's the name, alternative. Either A minus lambda is bijective, or the null space of A minus lambda, which is just the eigenspace corresponding to lambda, is non-trivial and finite dimensional. Moreover, this equality here tells you when you can solve the equation A minus lambda u equals f. You can solve A minus lambda u equals f if and only if f is in the range of A minus lambda, which is if and only if f is orthogonal to the null space of A minus lambda. So let me make a little remark here. First off, the fact that this space has to be finite dimensional we proved last time. We proved at the end of the last lecture that for a compact self-adjoint operator, the eigenspace corresponding to a given nonzero eigenvalue is finite dimensional. And we also proved that the eigenspaces corresponding to two different eigenvalues are orthogonal, and that any eigenvalue has to be real. So now let me make a couple of remarks.
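The statement can be compressed as follows (our notation):
\[
\operatorname{Ran}(A - \lambda) = \big(\operatorname{null}(A - \lambda)\big)^{\perp},
\]
so either $A - \lambda$ is bijective, or $\operatorname{null}(A - \lambda) \neq \{0\}$ is finite dimensional; and in either case
\[
(A - \lambda)u = f \ \text{ is solvable} \iff f \perp \operatorname{null}(A - \lambda).
\]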
The first is just a rephrasing of what's in the theorem. f is in the range of A minus lambda-- meaning you can solve the equation A minus lambda u equals f-- if and only if f is orthogonal to the null space of A minus lambda. So what this says is that you can solve for u, given that f satisfies finitely many linear conditions, because the null space of A minus lambda is finite dimensional. f being orthogonal to that null space means: pick a finite orthonormal basis of the null space of A minus lambda; then f's inner product with each of those finitely many vectors has to be 0. So you have finitely many conditions on f to be able to solve for u. And not only that, the solution that you compute is also unique up to a finite dimensional subspace, again because the null space of A minus lambda is finite dimensional. The second remark is that for a self-adjoint operator, the spectrum is a subset of the real numbers. This is something we proved last lecture-- the operator doesn't have to be compact, just self-adjoint. So since the spectrum is a subset of the reals, this proves that for a compact self-adjoint operator, the nonzero spectrum of A is equal to the set of nonzero eigenvalues of A. Let's write it this way: if I look at what's in the spectrum other than possibly 0, then the nonzero numbers that are in the spectrum have to be eigenvalues. So for a compact self-adjoint operator, just as in the case of matrices, the nonzero spectrum consists of nothing but eigenvalues. And last time, remember, we proved that the eigenvalues are countable-- they're either finite or countably infinite. And if they're countably infinite, they converge to 0.
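To see the solvability condition concretely, here is a small Python sketch on a finite truncation of the diagonal operator from the earlier l2 example. The truncation size, the choice of lambda, and the right-hand sides are our own illustrative choices:

```python
from fractions import Fraction

# truncation of the diagonal operator A a = (a1/1, a2/2, ...) to N coordinates
N = 6
diag = [Fraction(1, n) for n in range(1, N + 1)]
lam = Fraction(1, 2)            # lam is an eigenvalue: diag[1] == 1/2

def solvable(f):
    # (A - lam) u = f is solvable iff f is orthogonal to the lam-eigenspace,
    # i.e. f vanishes on every coordinate where diag equals lam
    return all(fn == 0 for d, fn in zip(diag, f) if d == lam)

def solve(f):
    assert solvable(f)
    # coordinates inside the eigenspace are free; choose 0 there,
    # so the solution is unique only up to that (finite dimensional) subspace
    return [fn / (d - lam) if d != lam else Fraction(0)
            for d, fn in zip(diag, f)]

f_good = [Fraction(1), Fraction(0), Fraction(1),
          Fraction(0), Fraction(0), Fraction(0)]
u = solve(f_good)
assert all((d - lam) * un == fn for d, un, fn in zip(diag, u, f_good))

f_bad = [Fraction(0), Fraction(1), Fraction(0),
         Fraction(0), Fraction(0), Fraction(0)]
assert not solvable(f_bad)      # f_bad is not orthogonal to the eigenspace
```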
So, again, by last lecture, we conclude that the spectrum of A take away 0 equals either finitely many eigenvalues or countably infinitely many eigenvalues that converge to 0. So from the Fredholm alternative we get a lot of information about when we can solve equations. And from that ability to say when we can solve the equations, we can also characterize the nonzero spectrum of a self-adjoint compact operator. Now, remember, all of this just followed from stuff we had proven and the main conclusion of the theorem, which is that the range of A minus lambda is closed. So we need to prove that the range of A minus lambda is closed when lambda is a nonzero real number. So suppose that you have a sequence in the range, which I'll write as A minus lambda times u sub n, converging to some element f in H. What we'd like to be able to show is that f is in the range. So we want to show f is in the range of A minus lambda. Now, we're only assuming that A minus lambda applied to u sub n converges to f. We're not a priori assuming the u sub n's converge. In fact, we can't. But in the end we would like to come up with a part of the u sub n's which, up to a subsequence, does converge, and then conclude that f is in the range. So first I want to get rid of the useless part of the u sub n's. So let v sub n be the projection of u sub n onto the orthogonal complement of the null space of A minus lambda. Every u sub n can be written as something in the null space of A minus lambda plus something in the orthogonal complement of that null space. Since the null space of A minus lambda is a closed subspace of H, it has an orthogonal complement, and the direct sum of the two gives you H.
So why am I saying that? Because then if I take A minus lambda of u sub n, this is equal to A minus lambda applied to the projection of u sub n onto the null space, plus A minus lambda applied to the projection onto the orthogonal complement of the null space, which I defined as v sub n. Now, the first piece is in the null space of A minus lambda, so when A minus lambda hits it, I get 0. So I get A minus lambda applied to v sub n. Therefore A minus lambda v sub n equals A minus lambda u sub n, which converges to f. Basically, I've taken away some noise-- the part that gives 0 when A minus lambda hits it. So now I just have these v sub n's, which lie in the orthogonal complement of the null space of A minus lambda. My first claim is that the sequence v sub n is bounded. Basically, once I can show this, then I'm done, because if v sub n is bounded, then since A is a compact operator, A applied to v sub n converges up to a subsequence. Now, A minus lambda applied to v sub n converges, and therefore lambda times v sub n converges along that subsequence. Lambda is nonzero, so v sub n converges, up to a subsequence, to something. And therefore A minus lambda v sub n converges to A minus lambda v for some v, which shows that f is in the range of A minus lambda. So this is really the whole ball game. And we'll use crucially that we threw away the part of the u sub n's that is useless, at least to this argument. So I claim this sequence is bounded. Suppose not. Then there exists a subsequence v sub n sub j such that the norm of v sub n sub j goes to infinity as j goes to infinity. Now, if I look at A minus lambda applied to v sub n sub j over the norm of v sub n sub j-- first off, since A minus lambda is a linear operator, this is equal to 1 over the norm of v sub n sub j times A minus lambda applied to v sub n sub j. The scalar 1 over the norm of v sub n sub j converges to 0, and A minus lambda v sub n sub j converges to f. So the product converges to the 0 vector in the Hilbert space H. Now, why is that bad?
Because essentially what this is going to say is that this sequence converges, at least up to a subsequence, to an element v with norm 1-- because these vectors all have norm 1-- such that A minus lambda v equals 0. But all of these are in the orthogonal complement of the null space of A minus lambda, and we get a contradiction. So, since A is a compact operator, there exists a further subsequence-- I'm just going to call it n sub k instead of n sub j sub k-- such that the sequence A applied to v sub n sub k over the norm of v sub n sub k converges. These all have norm 1, and the image by A of the closed unit ball has compact closure, so every such sequence has a convergent subsequence. Then v sub n sub k over the norm of v sub n sub k is equal to 1 over lambda times A applied to v sub n sub k over the norm of v sub n sub k, minus A minus lambda applied to v sub n sub k over the norm of v sub n sub k. Now, the second sequence converges to 0-- that's, in fact, what we just proved. And the first converges by how we've taken the subsequence, because A is a compact operator. So this sequence of vectors is equal to-- and here we can divide by lambda because lambda is nonzero-- a linear combination of two sequences which converge. And therefore v sub n sub k over the norm of v sub n sub k converges to an element v. Now, each of these is in the orthogonal complement of the null space of A minus lambda, and since the sequence is converging to an element and this set is closed, the limit has to be in the same set. So the fact that v is in here is because this set is closed-- the orthogonal complement of any subset of a Hilbert space is closed.
Then, by continuity of the norm, the norm of v has to be equal to the limit, as k goes to infinity, of the norms of the elements converging to it, which all equal 1. And if I compute A minus lambda applied to v, this is equal to the limit of A minus lambda applied to v sub n sub k over the norm of v sub n sub k, since those are converging to v. And, remember, this is a subsequence of the v sub n sub j's, and when A minus lambda hits those, they converge to 0-- all of this predicated on the assumption that the norms of the v sub n sub j's converge to infinity, or that the sequence is unbounded. So we have this element in the orthogonal complement of the null space that has norm 1 but that A minus lambda sends to 0. Therefore v is in the null space of A minus lambda, by this computation, and also in its orthogonal complement. But the only vector in a subspace and its orthogonal complement is the zero vector. Therefore v equals 0, which is a contradiction to the fact that the norm of v equals 1. So we started off with the sequence of v sub n's, assuming they are unbounded-- then v sub n over the norm of v sub n is a sequence of unit vectors. Maybe you got lost in the subsequences, but let's just pretend I'm talking about the entire sequence. Then v sub n over the norm of v sub n, when A minus lambda hits it, converges to 0. Since A is a compact operator, we can show that A applied to v sub n over the norm of v sub n, because those have unit length, converges to something along a subsequence. And since lambda is nonzero, we can then conclude that those vectors themselves converge to something-- not just their images by A or by A minus lambda-- again, because lambda is nonzero. And since they all have unit length, their limit must have unit length.
And since, when A minus lambda hits these guys, they go to 0, the limit must also be sent to 0 by A minus lambda. And that gives us our contradiction, because this limit v had to be in the orthogonal complement of the null space, but then also in the null space, and have unit length. Those three things cannot all happen at once. So thus the sequence v sub n is bounded. Remember, what were the v sub n's to start with? They were such that A minus lambda v sub n converged to this element f, and we wanted to show that f is in the range. So, coming back over here: we had these v sub n's so that A minus lambda v sub n converges to f, and we want to show f is in the range, to conclude that the range of A minus lambda is closed. But now, since the sequence is bounded and we've done this argument already, this is essentially the whole ball game. Since the sequence v sub n is bounded and A is a compact operator, we conclude that there exists a subsequence v sub n sub j-- this has nothing to do with the previous argument now, but I just don't feel like using different letters-- such that A applied to v sub n sub j converges. Remember, A is a compact operator, which we stated in terms of the closure of the image of the closed unit ball being compact. Equivalently, by scaling the unit ball, it means that A takes any bounded sequence to a sequence that has a convergent subsequence. We showed v sub n is bounded, and therefore, since A is a compact operator, we can find a subsequence so that when A hits it, we have a convergent sequence. And by that same trick we used a minute ago-- and we can do this because lambda is nonzero, so we can divide by it-- v sub n sub j, which equals 1 over lambda times A v sub n sub j minus A minus lambda v sub n sub j, is a linear combination of two convergent sequences: the first converges by choice of the subsequence, and the second converges to f. So v sub n sub j converges to an element v.
And therefore I get that f, which is the limit of the A minus lambda v sub n's-- and convergence still holds if I pass to the subsequence A minus lambda v sub n sub j-- equals A minus lambda v, since A is a bounded linear operator and lambda times the identity is a bounded linear operator. And therefore f is in the range of A minus lambda. OK, so the Fredholm alternative tells you that the range of A minus lambda, for a compact self-adjoint operator, is closed. So where did we really use the fact that it was self-adjoint? Nowhere in this argument. The fact that the range of A minus lambda is closed is still true if A is just a compact operator and lambda is just some nonzero complex number. Where we use self-adjointness is in the rest of the conclusion: that the range of A minus lambda is therefore equal to the orthogonal complement of the null space of A minus lambda, and therefore either A minus lambda is bijective, or the null space of A minus lambda is nontrivial and finite-dimensional, by what we did in the previous lecture. OK, now, again, this is a very powerful theorem. What this says is that the nonzero spectrum of a compact self-adjoint operator consists entirely of eigenvalues of A. Now, earlier we proved that plus or minus the norm of A has to be in the spectrum of a self-adjoint operator. And therefore we can conclude that a nontrivial self-adjoint compact operator has at least one nonzero eigenvalue-- and we can characterize that eigenvalue. So we have the following theorem. Let A be a nontrivial compact self-adjoint operator, A equals A star. Then A has a nonzero eigenvalue lambda 1, and we can characterize the absolute value of lambda 1 as the supremum, over all u with norm u equals 1, of the absolute value of the inner product of Au with u. This supremum is actually achieved at u1, a normalized eigenvector corresponding to lambda 1.
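In finite dimensions, a real symmetric matrix plays the role of a compact self-adjoint operator (on a finite-dimensional space every operator is compact), so this variational characterization can be checked numerically. A minimal sketch, assuming NumPy is available; the random matrix is illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# A real symmetric matrix stands in for a compact self-adjoint operator.
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2  # symmetrize, so A = A^T

eigvals, eigvecs = np.linalg.eigh(A)
i = int(np.argmax(np.abs(eigvals)))
lam1, u1 = eigvals[i], eigvecs[:, i]

# |lambda_1| equals the operator norm of A (largest singular value) ...
assert np.isclose(abs(lam1), np.linalg.norm(A, 2))

# ... the supremum of |<Au, u>| is attained at the normalized eigenvector u1 ...
assert np.isclose(abs(u1 @ A @ u1), abs(lam1))

# ... and no unit vector exceeds it.
for _ in range(200):
    u = rng.standard_normal(5)
    u /= np.linalg.norm(u)
    assert abs(u @ A @ u) <= abs(lam1) + 1e-12
```

The last loop is, of course, only random sampling of the unit sphere; the two equalities above it are the actual content of the theorem in this finite-dimensional setting.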
So why is this? First off, we've shown that plus or minus the norm of A is in the spectrum of A for any self-adjoint bounded linear operator, not necessarily compact-- not necessarily both of them, but at least one of plus or minus the norm of A-- since A is self-adjoint, meaning A star equals A. Then lambda 1 is an eigenvalue of A by the Fredholm alternative, where lambda 1 is either plus the norm of A or minus the norm of A, depending on which one is in the spectrum-- say plus if plus is in the spectrum, minus if plus is not. And the fact that we can identify its absolute value as this quantity is because we have, for self-adjoint operators, that the norm of A-- which is the absolute value of whichever of plus or minus the norm of A is in the spectrum-- is equal to the supremum, over all u with norm u equals 1, of the absolute value of the inner product of Au with u. So that's the end of the proof. And now what we're going to do is keep going. The end result will be that we can determine all of the eigenvalues via a certain maximum principle and build up a sequence of normalized eigenvectors which are pairwise orthogonal, simply because of how they're built up. And then we'll show that that set of eigenvectors, along with an orthonormal basis chosen for the null space of A, forms an orthonormal basis for a separable Hilbert space H. So that's where we're going. We can basically take this theorem and keep applying it. So we have the following maximum principle-- if you like, this theorem is the first step in a maximum principle.
This tells you how to find the largest eigenvalue, lambda 1-- largest in the sense of absolute value. Why largest? Because, remember, the spectrum is contained in the interval from minus the norm of A to plus the norm of A, so anything in the spectrum has absolute value less than or equal to the norm of A. And anything in the spectrum other than 0 has to be an eigenvalue, so we get this. So what's this maximum principle? And why was I saying all that? Oh-- this gives you a way, if you like, to try to find, or at least approximate, the first eigenvalue. It's a maximization problem with a constraint, so you could use the method of Lagrange multipliers. That maybe doesn't seem possible on an infinite-dimensional Hilbert space, but suppose you choose a large but finite subset of a basis of your Hilbert space, restrict to the span of that finite set, and solve the approximate problem there. Then you should get close to the eigenvalue and get an approximate eigenvector, because, as I've said, the eigenvector achieves this supremum. So the maximum principle is the following. Let A be a compact self-adjoint operator-- again, we're only looking at compact self-adjoint operators. Then the nonzero eigenvalues of A can be ordered so that the absolute value of lambda 1 is greater than or equal to the absolute value of lambda 2, which is greater than or equal to the absolute value of lambda 3, and so on, counted with multiplicity-- meaning that if lambda 1 has a two-dimensional eigenspace, then lambda 2 will equal lambda 1. So we repeat each eigenvalue according to its multiplicity. Now, part of this we already know: there are at most countably many distinct eigenvalues, and we can order them.
We know we can order them, but maybe it's not clear that you can order them with multiplicity, with corresponding orthonormal eigenvectors u sub k. So u1 is a normalized eigenvector for lambda 1; u2 is a normalized eigenvector for lambda 2, orthogonal to u1; and so on-- these are pairwise orthonormal. And we obtain the eigenvalues in this order, and these eigenvectors, via the following process: the absolute value of lambda j is equal to the supremum, over all unit vectors u orthogonal to u1 through u sub j minus 1, of the absolute value of the inner product of Au with u, and this supremum is achieved at u sub j. So we have the first step, if you like: we built up lambda 1 and u1. Now, what this maximum principle theorem is saying is that we can repeat this: if we look at the supremum over all norm-1 vectors orthogonal to u1, then we pick up the next largest eigenvalue counted with multiplicity. That is, it will be the absolute value of lambda 2, which will be less than the absolute value of lambda 1 if lambda 1 has only a one-dimensional eigenspace, or equal to the absolute value of lambda 1 again if lambda 1 has, let's say, a two-dimensional eigenspace. And if lambda 1 had a three-dimensional eigenspace, we would get the absolute value of lambda 1 for this supremum and the next one as well. So, again, the nonzero eigenvalues can be ordered in this way. Oh, and I left one thing off: the absolute values of the lambda j's go to 0.
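The j-th step of this maximum principle also has a finite-dimensional analogue: restricting to unit vectors orthogonal to the first j minus 1 eigenvectors drops the supremum of the absolute value of the inner product of Au with u down to the absolute value of lambda j. A sketch assuming NumPy; the random symmetric matrix is just for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2  # symmetric, i.e. self-adjoint

# Order the eigenpairs by decreasing |lambda|, counted with multiplicity.
eigvals, U = np.linalg.eigh(A)
order = np.argsort(-np.abs(eigvals))
lam, U = eigvals[order], U[:, order]

# The ordering |lambda_1| >= |lambda_2| >= ... from the theorem:
assert all(abs(lam[j]) >= abs(lam[j + 1]) - 1e-12 for j in range(4))

for j in range(5):
    # The supremum over unit u orthogonal to u_1, ..., u_{j-1} is attained at u_j:
    assert np.isclose(abs(U[:, j] @ A @ U[:, j]), abs(lam[j]))
    # and random unit vectors in that orthogonal complement never exceed |lambda_j|.
    for _ in range(100):
        u = rng.standard_normal(5)
        u -= U[:, :j] @ (U[:, :j].T @ u)  # project out u_1, ..., u_{j-1}
        u /= np.linalg.norm(u)
        assert abs(u @ A @ u) <= abs(lam[j]) + 1e-10
```

For j = 1 this is exactly the previous theorem; each further j constrains the maximization to the orthogonal complement of the eigenvectors found so far.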
We do know that, for each capital N, the number of eigenvalues with absolute value bigger than 1 over N has to be finite. So the fact that we can order them is not really much new information. And the fact that they go to 0, if the sequence is infinite, is also not new information-- we proved that in a previous lecture for a sequence of distinct eigenvalues. But because we now know that each eigenvalue has a finite-dimensional eigenspace, if you look back at that same proof, you can make a small adjustment to conclude that if you count the eigenvalues with multiplicity, then that sequence also has to go to 0, not just the sequence of distinct eigenvalues. So let me make that small remark. The new piece is that we can compute the eigenvalues in this way and choose the eigenvectors in this way. That's the new piece, and that's what, in the end, we'll use to show that A can basically be diagonalized-- that you can find an orthonormal basis of a separable Hilbert space consisting entirely of eigenvectors of A. So the construction proceeds inductively. At one point I said, if you can do it for one, then you can do it for the next, and I won't be so formal every time I do an inductive construction, but in this case it pays to be careful. So we'll construct the sequence of lambda j's and u sub j's via an inductive argument. The case j equals 1 is the previous theorem: we found an eigenvalue with the largest absolute value via that theorem, and we obtained an eigenvector, basically as a consequence of the Fredholm alternative and the fact that plus or minus the norm of A has to be in the spectrum.
So the fact that we can find the first eigenvalue lambda 1 follows from the previous theorem. Now we want to do the inductive step, i.e., we suppose we have found lambda 1 up to lambda n-- so j equals n-- with orthonormal eigenvectors u1 up to u sub n satisfying this maximum principle. So for j starting at 1 and going up to n, we've found lambda 1, lambda 2, up to lambda n, and u1 up to u sub n, satisfying that maximum property. Then there are two cases. Case 1 is that A is given by the finite sum: Au equals the sum from k equals 1 to n of lambda k times the inner product of u with u sub k, times u sub k. In that case we have found all the eigenvalues, and the process terminates. This is the degenerate case where A is a finite-rank operator. I could have started this whole theorem off by assuming A is not a finite-rank operator, and then we wouldn't have to deal with this case, but I stated it for an arbitrary A. So there is the possibility that the sequence stops: we found all of the eigenvalues, with multiplicity, in this process, and the construction is done in that case. Case 2 is that A is not of that form, i.e., not equal to that sum from k equals 1 to n. Now we have to show how to find lambda sub n plus 1 and the eigenvector u sub n plus 1. Let A sub n be the operator defined by A sub n u equals Au minus the sum from k equals 1 to n of lambda k times the inner product of u with u sub k, times u sub k. Note that since we are in case 2, this operator is nonzero. We're basically going to apply the previous theorem to this operator, but let me first make a few remarks. A sub n is a self-adjoint compact operator-- this is something you can check. Why is it self-adjoint? Basically because A is self-adjoint, these lambda k's are real numbers-- eigenvalues of a self-adjoint operator have to be real-- and the u sub k's are orthonormal.
So that is why it's self-adjoint. Why is it a compact operator? Well, it's the sum of a compact operator, A, and a finite-rank operator, so it's also a compact operator. And it's a nontrivial one, i.e., it is not identically the zero operator. So here are a couple of facts. First: if u is in the span of u1 up to u sub n, then A sub n applied to u is the zero vector. Why is that? It suffices to check that the formula gives 0 when u is one of the u sub j's. If u is u sub j, then the inner product of u sub j with u sub k is 1 only when j equals k, so the sum picks up lambda j times u sub j. And A hitting u sub j spits out lambda j times u sub j, because u sub j is an eigenvector of A. So I get the same thing in both terms, and subtracting them gives 0. Second: if u is orthogonal to the span of u1 up to u sub n, then A sub n of u equals Au. You just see this: if u is orthogonal to the u sub k's, then the whole sum is 0, and I just pick up A sub n u equals Au. Third: for all u in H and all v in the span of u1 up to u sub n, if I compute the inner product of A sub n u with v, this is equal to the inner product of u with A sub n v, since A sub n is self-adjoint. And since v is in the span of u1 up to u sub n, by the first property A sub n v is 0, so this inner product equals 0. And therefore, what have I shown? Here u is fixed and v is the thing that's changing: the inner product of A sub n u with v is 0 for all v in the span. And therefore A sub n u has to be in the orthogonal complement of the span of these guys.
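These properties of the deflated operator A sub n are easy to see concretely in finite dimensions, where the rank-one terms lambda k times the inner product of u with u sub k, times u sub k, become the matrices lambda k times u sub k u sub k transpose. A sketch assuming NumPy; the 4-by-4 matrix and the choice n = 2 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2  # symmetric: a finite-dimensional self-adjoint operator

eigvals, U = np.linalg.eigh(A)
order = np.argsort(-np.abs(eigvals))  # decreasing |lambda|
lam, U = eigvals[order], U[:, order]

n = 2  # deflate the first two eigenpairs
A_n = A - sum(lam[k] * np.outer(U[:, k], U[:, k]) for k in range(n))

# A_n is still self-adjoint (the lambda_k are real, the u_k orthonormal) ...
assert np.allclose(A_n, A_n.T)

# ... it annihilates the span of u_1, ..., u_n ...
for k in range(n):
    assert np.allclose(A_n @ U[:, k], 0)

# ... and it agrees with A on the orthogonal complement of that span.
for k in range(n, 4):
    assert np.allclose(A_n @ U[:, k], A @ U[:, k])
```

The remaining eigenvalues of A_n are exactly the undeflated ones, which is what the inductive step exploits.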
So this proves that the range of A sub n is a subset of the orthogonal complement of the span of u1 up to u sub n. And from this we get one last property: if A sub n u equals lambda u with lambda nonzero-- meaning we have a nonzero eigenvalue of A sub n-- then u is equal to 1 over lambda times A sub n applied to u; in other words, u equals A sub n u over lambda. That implies u is in the range of A sub n, which, again, is contained in the orthogonal complement of u1 up to u sub n. And since u is in that orthogonal complement, A sub n u equals Au. So any nonzero eigenvalue of A sub n has to be a nonzero eigenvalue of A. Now we apply the previous theorem to A sub n to get the next eigenvalue lambda sub n plus 1 and eigenvector u sub n plus 1. By the previous theorem, A sub n has a nonzero eigenvalue, which I will call lambda n plus 1, with a unit eigenvector u sub n plus 1, so that the absolute value of lambda n plus 1 is equal to the supremum, over all unit-length vectors u, of the absolute value of the inner product of A sub n u with u. Now, this sup is the same as the sup over all unit vectors in the orthogonal complement of u1 up to u sub n. Why? If u is in the span of u1 up to u sub n, then A sub n u equals 0, so the supremum over all unit vectors is the same as the supremum over unit vectors in the orthogonal complement of these guys-- the span contributes nothing. But also, when u is in that orthogonal complement, A sub n u equals Au. So this supremum equals the supremum, over unit vectors u in the orthogonal complement of u1 up to u sub n, of the absolute value of the inner product of Au with u. And why can the eigenvector u sub n plus 1 be chosen in the orthogonal complement of u1 up to u sub n? Because any eigenvector for a nonzero eigenvalue of A sub n lies in the range of A sub n, which is contained in that orthogonal complement. So I should say the absolute value of lambda n plus 1 is also equal to the absolute value of the inner product of A sub n applied to u sub n plus 1, with u sub n plus 1.
But that's the same as the absolute value of the inner product of A applied to u sub n plus 1, with u sub n plus 1. And this is less than or equal to the supremum, over unit vectors u in the orthogonal complement of the span of u1 up to u sub n minus 1, of the absolute value of the inner product of Au with u, which equals the absolute value of the n-th eigenvalue counted with multiplicity. So we found the next eigenvalue in this sequence of eigenvalues counted with multiplicity. So now we can conclude the following spectral theorem for compact self-adjoint operators. Let A be a self-adjoint compact operator on a separable Hilbert space H. Let the absolute value of lambda 1, greater than or equal to the absolute value of lambda 2, and so on, be the nonzero eigenvalues of A, counted with multiplicity, as constructed in this theorem which I called the maximum principle, with corresponding orthonormal eigenvectors u sub k. The conclusion is, first, that this set of orthonormal eigenvectors is an orthonormal basis for the range of A-- in fact, we can upgrade this: the u sub k's are an orthonormal basis for the closure of the range of A. And, second, there exists an orthonormal basis, call it f sub j, of the null space of A, if the null space is nonzero, such that the u sub k's together with the f sub j's form an orthonormal basis of H. First off, the union of these two sets of orthonormal vectors is going to be an orthonormal set, because all the f sub j's correspond to the eigenvalue 0, the u sub k's correspond to nonzero eigenvalues, and, by what we proved last time, eigenvectors for two distinct eigenvalues have to be orthogonal. So this set is orthonormal. But, moreover, it is not just orthonormal: it is an orthonormal basis for H. In other words, I can find an orthonormal basis for H consisting entirely of eigenvectors of this self-adjoint compact operator, OK.
So I will have one piece of this basis coming from the null space and the other piece corresponding to nonzero eigenvalues. Really, 2 follows from 1-- not sure why I decided to state them separately, but here we are. So, for the proof of 1, we will show that the u sub k's are an orthonormal basis for the range of A. First off, note, as in the previous proof, that the process of obtaining the eigenvalues and orthonormal eigenvectors terminates if and only if A is finite rank-- in other words, if and only if there exists an N so that Au equals the sum from k equals 1 to N of lambda k times the inner product of u with u sub k, times u sub k. In this case, if A is this finite-rank operator, then the range is contained in the span of u1 up to u sub N, which is what we wanted to prove in this case. So suppose otherwise-- the process does not terminate-- so that we have countably infinitely many nonzero eigenvalues counted with multiplicity and corresponding orthonormal eigenvectors u sub k. This is the more interesting case: the eigenvalues are countably infinite. As in the remark I made afterwards, we know that these lambda k's have to go to 0. Now we want to prove that the u sub k's are an orthonormal basis for the range of A. What does that mean? By definition, it means that they are a maximal orthonormal subset of the range of A. So we have to show that if something is in the range and is orthogonal to every one of these eigenvectors, then that thing has to be 0. So the claim is: if f is in the range of A, and for all k the inner product of f with u sub k equals 0, then f is the zero vector. So suppose we have something in the range-- that means we can write it as f equals Au-- and the inner product of f with u sub k equals 0 for all k. Then, for all k, if I look at lambda k times the inner product of u with u sub k: lambda k is a real number, so I can bring it all the way inside and get the inner product of u with lambda k u sub k, which is equal to the inner product of u with A applied to u sub k.
Now, A is self-adjoint, so I can move this A over to the other side, and this equals the inner product of Au with u sub k, which is the inner product of f with u sub k, which equals 0, for all k. So all those coefficients vanish, and therefore the norm of f, which is equal to the norm of Au, is equal to the norm of Au minus the sum from k equals 1 to n of lambda k times the inner product of u with u sub k, times u sub k-- because every one of those terms is 0, so I haven't subtracted off anything. So I can write this as the norm of A sub n applied to u, where A sub n is the operator from before, which, by the proof of the maximum principle, is less than or equal to the absolute value of lambda n plus 1 times the norm of u-- because, remember, the absolute value of lambda n plus 1 is the supremum over all unit-length u of the absolute value of the inner product of A sub n u with u, so for any u this quantity divided by the norm of u is at most the absolute value of lambda n plus 1. So I have a fixed quantity, the norm of f, shown to be less than or equal to the absolute value of lambda n plus 1 times the norm of u, and the lambda n's are converging to 0. Thus I started off with something non-negative, less than or equal to something converging to 0, and therefore that thing had to be 0. Therefore f is 0. So this proves 1: these eigenvectors are a maximal orthonormal subset of the range of A. For 2, we simply note that, by 1, the closure of the range of A is contained in the closure of the span of the u sub k's-- here the span means finite linear combinations-- which, remember, by an assignment exercise, is equal to the set of sums of c sub k times u sub k with the c sub k's square-summable. Therefore the u sub k's are an orthonormal basis for the closure of the range of A. And the closure of the range of A is equal to the orthogonal complement of the orthogonal complement of the range of A, which is equal to the orthogonal complement of the null space of A star.
A star equals A, so we get that the eigenvectors form an orthonormal basis for the orthogonal complement of the null space of A. So once we've chosen an orthonormal basis for the null space of A, that's it. Since H is separable and the null space of A is a closed subspace of H, the null space of A is separable-- and, being closed, it is a Hilbert space in its own right. And we've proven that every separable Hilbert space has an orthonormal basis. So choose one, and call it f sub j: this is an orthonormal basis for the null space of A. And since the null space of A, direct sum with its orthogonal complement, equals H, the f sub j's together with the u sub k's form an orthonormal basis of H. And just in the nick of time, I'm finished. So next time, where do we go from here? We'll see some of this applied in a concrete setting of differential equations, and also discuss the functional calculus.
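The whole statement-- eigenvectors for the nonzero eigenvalues spanning the closure of the range, a separate orthonormal basis for the null space, and the two together diagonalizing A-- can be seen in a small finite-dimensional example. A sketch assuming NumPy; the rank-3 matrix below is constructed just so the null space is nontrivial:

```python
import numpy as np

rng = np.random.default_rng(3)

# A symmetric matrix of rank 3 on R^5, so its null space is 2-dimensional.
C = rng.standard_normal((5, 3))
A = C @ C.T

eigvals, U = np.linalg.eigh(A)    # columns of U are orthonormal eigenvectors
nonzero = np.abs(eigvals) > 1e-8  # split eigenpairs: range part vs null space part

# Eigenvectors for nonzero eigenvalues plus a basis of the null space
# together form an orthonormal basis of the whole space:
assert np.allclose(U.T @ U, np.eye(5))

# Null-space basis vectors are genuinely killed by A ...
for k in range(5):
    if not nonzero[k]:
        assert np.allclose(A @ U[:, k], 0)

# ... and A is diagonalized by the eigenvectors for the nonzero eigenvalues:
# A u = sum over k of lambda_k <u, u_k> u_k.
A_rebuilt = sum(eigvals[k] * np.outer(U[:, k], U[:, k])
                for k in range(5) if nonzero[k])
assert np.allclose(A, A_rebuilt)
```

In infinite dimensions the finite sum becomes a norm-convergent series, which is exactly what the estimate with the lambda n plus 1's tending to 0 provides.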
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 6: The Double Dual and the Outer Measure of a Subset of Real Numbers
PROFESSOR: So let's finish our discussion of the Hahn-Banach Theorem. Let me just recall: this was the theorem that stated that if V is a normed space, M is a subspace of V, and u is a bounded linear functional on M-- so it's a linear map that also satisfies, for all t in M, that the modulus of u of t is less than or equal to a constant times the norm of t-- then there exists a bounded linear functional on the whole space which extends it. That is, there exists a capital U so that when U is restricted to M, this gives me little u, and it's extended continuously in the sense that it satisfies the same bound that little u does, now for all t in V: the modulus of U of t is less than or equal to that constant times the norm of t. So last time, we went through the proof of this using Zorn's lemma and the lemma that we ran up against at the end of class. When writing my notes for the lecture, I got a little bit flippant towards the end and made a small error, which can be remedied if you look at the lecture notes, which I have online, or at Richard's lecture notes, just for the very end of concluding that lemma we needed. So, minus epsilon, we proved the Hahn-Banach Theorem-- you can fill in the epsilon by just looking at the notes. But the proof of the Hahn-Banach Theorem is not the main point. It's important to know it, or at least have seen it, but what is important is the theorem as a tool. I mentioned last time that one application is to show that the dual of little l infinity is not little l 1. Another application is the following. Let V be a normed space. Then for all v in V take away 0, there exists a bounded linear functional f, an element of the dual space, such that the norm of f equals 1 and f of v gives me the norm of v. So there is a bounded linear functional that, when you apply it to v, you get the norm of v.
And we'll use this just in a second to say something a little bit deeper about the relation between the dual and then taking the dual of that. But let's prove this simple theorem. So first, I define. So I have a little v that's non-zero in the space, and I define a bounded linear-- or a linear map from the span of v to C by u of lambda v equals lambda times the norm of v. Now, every element that's a scalar multiple of v can be written uniquely as this. So this function is well-defined, and it's clear that it's linear because linearity is occurring in lambda. v is fixed. Remember? So, in particular, I get that u of t is less than or equal to the norm of t for all t in the span of this fixed vector v. And, I should say, u of v, which, remember, is just 1 times the norm of v, gives me the norm of v. And so, then, by the Hahn-Banach Theorem, there exists an f element of the dual extending u, such that now, for all t in V, f of t in modulus is less than or equal to the norm of t. And in particular, since f extends u, f of v must equal the norm of v. And by that inequality over there, the norm of little f must be less than or equal to 1. Well, first, let me finish with what this inequality tells you. Since f of t is less than or equal to the norm of t for all t in V, this implies that the norm of f-- which, remember, is the sup of f of t with t unit length-- must be less than or equal to 1. But I have that 1 is equal to f of v over norm of v, which must be less than or equal to the norm of f. And, therefore, the norm of f equals 1. So f is this desired continuous linear functional on V that, when it hits little v, spits out the norm of little v. If you can take the dual of a space, you can take the dual of that space. The double dual of a normed space V is, by definition-- we denote it by V double prime-- the dual of the dual of V.
So, remember, the prime is the space of bounded linear functionals on V. And so, then, V double prime is the space of bounded linear functionals on the space of bounded linear functionals of V. Now, let me just give you an example of-- so, a quick little example of an element that you can associate in V double prime. Fix an element in capital V and define-- now, let's call it something. Let's call it T of v. So this will be an element of the dual space of V prime in the end. So this thing has to eat an element of the dual space and spit out a number. Now, think about the two pieces of data I have here. I have some fixed little v in capital V, and I would like to use that to define some bounded linear functional on capital V prime. Now, capital V prime eats elements of capital V. So I can define a map from V prime to C by that. Then I claim that T of v is an element of the double dual, so the dual space of V prime. And why is that? So first off, it has to be linear in the argument. So little v prime is in capital V prime. It has to be linear in the argument v prime. And this is clear. So remember, little v is fixed. So if I take a linear combination of two elements of capital V prime, then this expression is clearly linear in v prime. So T sub v is clearly linear. Now we just need to check that it's bounded. So T sub v is linear-- that's a check, just as we talked through it. And T sub v is bounded because if we take T sub v of v prime, if we take its modulus, this is by definition equal to v prime of v. Now, v prime is a bounded linear functional on v. So this is less than or equal to the norm of v prime times the norm of v. And in fact, let me write it this way: the norm of v times the norm of v prime. So we've shown that this linear operator from V prime to C in modulus is bounded by this constant times the norm of v prime. And therefore, we conclude that T sub v is in the dual of V prime, the double dual.
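The example just worked through, written out in symbols: the map T sub v and the bound on its operator norm.

```latex
% The evaluation functional associated to a fixed $v \in V$:
\[
  T_v : V' \to \mathbb{C}, \qquad T_v(v') = v'(v).
\]
% It is bounded, with operator norm at most $\|v\|$:
\[
  |T_v(v')| = |v'(v)| \le \|v'\| \, \|v\|
  \quad \Longrightarrow \quad
  \|T_v\|_{V''} \le \|v\| .
\]
```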
And the norm of this as an operator going from V prime to C, which we've just shown, is bounded by this constant. Remember, the operator norm is the best constant that appears here. And therefore, this is less than or equal to that. So we've shown that to every element of V, we can associate an element of the double dual, the dual space of the dual space, via this relation. So an element of V can be viewed as an element of V double prime by letting it act on V prime by this formula here. But we can say a little bit more. And let me just introduce some terminology. So let v be in capital V. Or, no, I'm not quite there. If V and W are normed spaces, then we say that a bounded linear operator from V to W is isometric-- so isometric meaning it doesn't change lengths, doesn't change distances-- if for all v in capital V the length of T of v is equal to the length of v. And now, the next theorem is that this map, this relation I've given between elements of V and the double dual, is in fact isometric. So let v be in V. And as before, define T sub v as the map from V prime to C via T sub v of v prime equals v prime of v. It eats elements of the dual and spits out numbers, so it's defined in this way. Then the map T from V to V double prime, where this map T is taking v to T sub v, is isometric. So it's a bounded linear operator from V to its double dual. And the length of a vector in V is the same as the length of its image in the double dual, where the length in the double dual is defined in terms of the operator norm. So we've done basically all of the work when I was discussing the example. So we've shown already that the map v to T sub v is a bounded linear operator from V to V double prime. So we showed it's bounded, namely that the norm of the image is bounded by the norm of v. But it's also clear that this is linear in little v. So a minute ago, we showed it's linear in v prime. But it's also linear in v. Because for each fixed v prime, this is linear in v.
So this is a bounded-- this map from v taking you to this element in T of v, this is a bounded linear operator from v to v double prime. So what's left is to show that it's isometric. So we've shown this and that the norm of the image in the double dual is less than or equal to the norm of v. Now to show it's isometric, we must show that the norm of v equals T of v. So by the theorem we just proved a minute ago, so first off, so as in the statement of the theorem, let me denote this map by this is T. So we've shown that the norm of T is less than or equal to 1. Because we've shown that the norm of the image is always less than or equal to the norm of the input. And now to show that it's isometric we just need to show the norm of T equals 1. So let v be in v non-zero. with norm v equal to 1. Well, so I just said something really stupid a minute ago. Let me go back to what I had written down there a minute ago. So not that-- so remember, we're trying to show not that the norm is one but that the norm of the image is equal to the norm of the input. I got backwards there for a minute, sorry. And the norm of the image is less than or equal to the norm of the input. So now we just want to show the reverse inequality for all little v in capital V. Now we show all v in capital V, T of v is equal to the length of v. And to do that, we use this theorem we just proved a minute ago. So this is clear if v is 0. So suppose v is non-zero, then there exists an element of the dual space by the previous theorem such that the norm of f equals 1 and f of v equals the norm of v. Then the norm of v is equal to f of v. And I can even put modulus on that. And this is less than or equal to though, thinking of this as a-- so now thinking of this as an operator from bounded linear functional, so [INAUDIBLE], so as an element of the dual this is less than or equal to the norm of T sub v times the norm of f. And now the norm of f is 1. So this is equal to the norm of T sub v. 
And therefore, the norm of v is less than or equal to the norm of T sub v. And since I already had the reverse inequality, I conclude that the norm of T sub v equals the norm of v. And thus, this map going from V to the double dual of V described this way is, in fact, isometric. It's a bounded linear operator that preserves lengths. Now we have a special name. So first off, it should be clear that isometric bounded linear operators are one to one. So for something to be one to one for a linear operator, that means the only thing that gets sent to 0 is 0. And from this equality here we have that if this equals 0, then the vector had to be 0. So isometric bounded linear operators are always one to one. So what this theorem tells you is that this map defined in this way gives you an isometric injection from V to V double prime, from V to its double dual. So I have this map that goes into the double dual that's isometric, meaning it doesn't change distances. Is it onto? Is the double dual always equal to the original space itself? So we have a name for spaces that satisfy that. A Banach space V is reflexive if this map is onto. If V equals V double prime, in the sense that this map taking an element of capital V to an element of the double dual, as we defined earlier, is onto. Now for example-- and you can check that; this may seem like abstract stuff-- for little lp spaces, remember, we identify the dual space of little lp with little lq, where 1 over p plus 1 over q equals 1. So little lp is reflexive for p between 1 and infinity, because the dual of little lp is going to be little lq, where 1 over p plus 1 over q equals 1. And the dual of little lq is going to be little lp as long as q is between 1 and infinity. There is that one case where the dual is not given by what you think it should be. So little l1 is not reflexive.
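The duality chain behind the reflexivity claim for little lp, in symbols:

```latex
% Identification of duals of sequence spaces (H\"older duality):
\[
  (\ell^p)' \cong \ell^q, \qquad \frac{1}{p} + \frac{1}{q} = 1,
  \qquad 1 < p < \infty ,
\]
% so applying the identification twice,
\[
  (\ell^p)'' \cong (\ell^q)' \cong \ell^p ,
\]
% and $\ell^p$ is reflexive for $1 < p < \infty$.
```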
Since if I take a look at the dual of little l1, this is equal to little l infinity. And as you'll show in the homework, the dual of little l infinity is not equal to little l1. So the dual of the dual is not going to give you back the space you started with. And I don't know if I'm going to put this in the assignment or eventually on a midterm. The space C0, which was the space of all sequences converging to 0, this is not reflexive either. The reason is because you can, in fact, identify the dual of C0 with little l1. And the dual of little l1 is little l infinity, which does not equal C0. So here also, you can see how the space is a subspace of the double dual. The space of all sequences converging to 0, that's a subspace of the space of all bounded sequences. So you see, I shouldn't say by hand, but OK, by hand, that the original space is a subspace of the double dual, which in this case is little l infinity. So that concludes the general stuff about Banach spaces for now. And we're going to turn to Lebesgue measure and integration. Because so we've been talking about little lp spaces. These are spaces of sequences. And so you might think, well, maybe we can define big LP spaces to be, let's say, Riemann integrable functions whose p'th power is integrable or something like that, just how these are p'th power summable. So moving on now to Lebesgue measure and integration. Now, why Lebesgue measure and integration, why not just stick to Riemann integration? There's a couple of reasons. One is Lebesgue integration has much better convergence theorems. So you only have really one convergence theorem for Riemann integration, which is the uniform limit of Riemann integrable functions is Riemann integrable. And the integral of the limit is the limit of the integrals. But there are much better limit theorems.
And in the sense that they're more useful, you can use them more often, and therefore prove better things. And maybe you're just like, OK, so what? But there's even bigger reasons why you consider Lebesgue measure and integration. And this is because that if you look at the space of Riemann integrable functions on a, b, say, 0, 1, say, let's make it concrete, then let me in fact write this down. L1-- and I'm going to put an R here. This is a set of all f from 0, 1 C such that f is Riemann integrable on 0, 1. Now in 18.100 and 100A, these are usually real-valued. But when I mean Riemann integrable and its complex valued, I just mean the real part and the imaginary part are Riemann integrable. So you don't need to know anything fancy. And if I define a norm on capital L1 by-- so I have this space I have this norm. Now, it's not quite a norm. Because you can have Riemann integrable functions which are not 0 everywhere, whose integral is 0. If you have a function that's 0 except for at one point, and at that one point it's 5, well, the function's non-zero, but the integral is 0. So this is a semi-norm. But let's imagine it's a norm. So the problem is, is that even if I mod out by those things that give me a semi-norm equal to 0 so that I get an actual norm space, but ignore that for a minute. Let's just imagine that this is an actual honest to God norm. Then this thing ends up being this is not a Banach space. So one reason to consider more general integration is because Riemann integration, or at least only restricting to those functions which are Riemann integrable does not give you a Banach space. This is not-- this is a norm space modulo this little bit that there are functions that have norm 0 that aren't exactly identically 0. But it's not a Banach space. It's not complete. And in functional analysis, we're interested in spaces that are complete. 
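A small numerical sketch of the semi-norm degeneracy just mentioned: a function that is 0 except at a single point is Riemann integrable with integral 0, so its L1 "norm" vanishes even though the function is not identically 0. The helper `riemann_sum` below is my own illustration, not anything defined in the lecture.

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# f is 0 on [0, 1] except at the single point 0.5, where it equals 5;
# its Riemann integral is 0, so the L^1 semi-norm of f is 0 although f != 0.
f = lambda x: 5.0 if x == 0.5 else 0.0

# At most one left endpoint can equal 0.5, so each sum is at most 5*h,
# which tends to 0 as the partition is refined.
sums = [riemann_sum(f, 0.0, 1.0, n) for n in (10, 100, 1000)]
assert all(0.0 <= s <= 5.0 / n + 1e-12 for s, n in zip(sums, (10, 100, 1000)))
```

So modding out by functions like this one is exactly the price of turning the semi-norm into an honest norm.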
And not just in functional analysis for the abstract love of it, but in problems where, as I said at the beginning of this course, you have differential equations or some functional defined on some space of functions, you want to deal with complete spaces. And a lot of functionals are defined on not just L1 but say LP, where now I replace this by a p'th power. So in order to be able to say certain things exist or get to the heart of the subject, I need these spaces to be complete. And if I just restrict to Riemann integrable functions, it's not complete. So we have to do Lebesgue integration. Now, what we'll find out is that-- and this is how it's done in the original notes for this class-- this is not a Banach space, but I can take its completion. Just like the set of rational numbers is not complete, and I can take what's called its completion and get the set of real numbers, I could take this space, take its completion, and what I get is an abstract space, which I can actually identify with the space of Lebesgue integrable functions. That's how it's done in the notes. But I don't want to do it that way. Instead, we're just going to build up Lebesgue measure and integration from the ground up. And we will see at one point that these functions are in fact dense in the space of Lebesgue integrable functions. And therefore, we conclude that the abstract completion-- if you haven't covered completions, that's fine-- of this space is, in fact, the space of Lebesgue integrable functions. So summing it up is that we have to do Lebesgue measure and integration because these spaces are not complete, and we have better convergence theorems in the new spaces. But it just takes a little bit of effort to learn. Not much, but I mean, it's quite intuitive once you start seeing the arguments and going through it. So we're going to be defining a new notion of integration that's more general than Riemann integration.
And so, integration should somehow be the theory of area underneath the curve. So the simplest type of functions you should consider first is, if I have a subset E, maybe it's crazy, or maybe it's simple. And I have a function, which is just a 1 when I'm on it and 0 otherwise. So I'll denote this function by 1 with an E beneath it. Then in some sense, I want to be able to integrate this function. So 1E is the function that's 1 if x is in E and 0 if x is not in E. So if E is just the interval a, b, then I just have the function that 0 outside of a, b and 1 over a, b. So the question is, how do we integrate such functions? Or our first task is to define the integral for these types of functions. Well, since the integral should be a theory of the area underneath the curve, for example, if E equals-- so this is all just discussion right now, I should add. So this is motivation. In a minute I'll get to definitions and theorems. But this is just a discussion. If E equals a, b so that I'm just-- I have a, b, 1, 0 outside, then the integral of this guy in whatever sense this integral is, this should be at least the area underneath this curve. So it should be b minus a. Which is the length of the interval a, b. So and therefore, if we have a general set E, we should expect that our integral over the function, which is 1 on E and 0 off it, should somehow-- this should be the length of E. But length is not a very good word because length applies to an interval a, b because there's a start and there's a stop. And everything in between is in the set. So rather than write the length of E, I'll write m of E. And m of E being the what I'll say is a measure of E. A measure of how much of E there is. And so, our first task, if we're hoping to develop a notion of integration more general than Riemann, in which the resulting spaces are a Banach space, that's really the goal in the end we should start by defining what does this mean? What is a measure of a set? 
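The indicator function and the model computation from this discussion, in symbols:

```latex
% The indicator function of a set $E \subseteq \mathbb{R}$:
\[
  \mathbf{1}_E(x) =
  \begin{cases}
    1, & x \in E, \\
    0, & x \notin E,
  \end{cases}
\]
% and for an interval the integral should be the length,
\[
  \int \mathbf{1}_{[a,b]} = b - a = \ell([a,b]),
  \qquad \text{suggesting} \qquad \int \mathbf{1}_E = m(E)
\]
% for a general set $E$, once $m(E)$ is defined.
```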
Which sets are we measuring? So our task right now is: before we even get to integration, we should be able to integrate the simplest types of functions. And this requires us to be able to define the measure of subsets of R. And this is Lebesgue measure that we'll be developing. So what are some properties that we want a reasonable measure of sets to have? So the first one we would like is, we should be able to measure everything. What's awful about Riemann integration is we can't really integrate every function. We can't even integrate this function when E is, say, the rational numbers between 0 and 1. When I have the function which is 1 on the rationals and 0 off of that, that's not Riemann integrable. So I would like to be able to define the measure of any subset of real numbers. And the second property I would like is kind of a sanity check, that if I is an interval, then the measure of I should be the length of I. And by I, I mean open, closed, or half open-- I guess, a half open is half closed. The measure of, say, the closed-open interval should not care about missing an endpoint, and I should just get out the length of that interval, b minus a. A third property is a measure of the whole should be the sum of the measures of its parts. If I have some set which can be written as a union of disjoint chunks, then the measure of that whole set should be the sum of the measures of the individual chunks. So that's stated as: if En is a collection of disjoint sets-- so think of these as making up a bigger set, and this is, I should say, a countable collection of disjoint sets-- then I would like for the measure of their union to be the sum of the measures. This is a reasonable thing to ask for. The measure of the whole is the sum of the measure of the parts.
And in the last one, which is kind of specific to how we view R is, if I take a set here next to where I'm standing, and then I take that set, don't do anything to it, I just walk it over here and take it's measure, that measure, the measure of this set which I've now walked over here should be the same as the measure of that set over when I was standing over here. So we should like m to be translation invariant. Meaning if E is in R and x is an element of r, then the measure of the set x plus E, which means the measure of the set of elements x plus y, where y is in E. So just take E and shift it by x. This should be the same as the measure of the original set E. So unfortunately, this is impossible to have a function which is defined on every subset of real numbers satisfying these three properties. So that's very unfortunate. Such a function m going from the power set of R, meaning the set of all subsets of R, and of course this thing should be non-negative since it's a measure of how much is there, does not exist. Meaning what? If you assume all of these four properties, you'll be able to come up with a set which has finite measure and then also simultaneously has infinite measure. So it's impossible to have a function which is defined for every subset of real numbers satisfying the conditions two, three, and four. It's just not logically-- it's just not possible. So that's unfortunate. But what we can do is drop the assumption that measure is defined for every subset of real numbers and focus on trying to find a set function-- I mean, defined on subsets of real numbers. So that's why I'm calling it a set function-- m which satisfies two, three and four on a big collection of sets, that's defined on a large collection of sets. And these sets for which such a measure will be defined on satisfying two, three, and four will be quite large in the end. This is the set of Lebesgue measurable sets. And m is the Lebesgue measure. 
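Collecting the wish list from the discussion above, in the lecture's numbering:

```latex
\begin{enumerate}
  \item $m(E)$ is defined for every $E \subseteq \mathbb{R}$;
  \item $m(I) = \ell(I)$ for every interval $I$;
  \item $m\Bigl(\bigcup_{n=1}^{\infty} E_n\Bigr) = \sum_{n=1}^{\infty} m(E_n)$
        for every countable collection of disjoint sets $E_n$;
  \item $m(x + E) = m(E)$ for all $x \in \mathbb{R}$ and $E \subseteq \mathbb{R}$.
\end{enumerate}
% No function $m : \mathcal{P}(\mathbb{R}) \to [0, \infty]$ satisfies all
% four; Lebesgue measure keeps 2--4 and weakens 1.
```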
And so, our goal is to construct the Lebesgue measure and the Lebesgue measurable sets. So this is our task. Or I guess we had that task before. So what is the plan now? Construct an m defined for a large class-- so defined for many different sets, but not necessarily every set. And these sets we will call Lebesgue measurable sets, such that the conditions two through four hold. So first off, this class of sets should contain all intervals, and the measure of an interval gives me the length of the interval. And if I have a countable collection of sets which I can measure, then their union is measurable, and the measure of that disjoint union is equal to the sum of the measures. And then it's also translation-invariant. So that's the plan. So it won't be defined for every set. But it will be defined for a large class of sets which contains most reasonable sets. And how we're going to do this is due to Carathéodory. And we'll go about it like this. So how we'll construct this: we'll first construct a function m star, which is defined for every subset of real numbers. This we'll call the outer measure, which satisfies two-- namely that the m star of an interval is the length of the interval-- and, I shouldn't say three, but almost three, and four. Then we restrict m star-- so, satisfying two, almost three, and four-- to well-behaved subsets of R. These will be the class of the Lebesgue measurable sets. And m will just be m star restricted to these subsets. So this is the plan of this chapter, this part of the course. Again, this is all discussion. So our first topic will be m star, which is called outer measure. So this was all game plan discussion. If you didn't follow everything I just said, that's fine. You could start listening now. Because then I'm just going to start defining things and proving theorems about them.
But you should know the path that we're on so that you don't lose the forest through the trees, or something like that. I think it goes something like that. Anyways, so outer measure. So let me just note a little notation, as I used a minute ago. If I is an interval-- meaning open, closed, half open, infinite-- little l of I denotes its length. If it's unbounded, then this is just a stand-in for infinity. And the length of a, b-- including both a and b or not, or one endpoint and maybe not the other-- is equal to b minus a, regardless if it's an open, closed, or half-open interval. So for a subset of real numbers, we define its outer measure-- we define the outer measure of a. This is m star of a. This is equal to the infimum of the sums of lengths of I sub n, where the I sub n form a countable collection of open intervals such that a is contained in their union. So how I compute the outer measure: I can cover any subset of the real numbers by a union of open intervals. And so then, the outer measure is defined as the infimum of the sum of the lengths of these open intervals. So you have your set. You cover it by open intervals. You sum up the lengths of the open intervals. This gives you some rough approximation to the size of a. And now if you make those intervals smaller, and smaller, and smaller, then you should be picking up more detailed information. And so, the infimum of the sum of the lengths of these intervals that are covering a, this gives you the outer measure. So let me give you the stupidest example ever, which is the outer measure of a point. So let's say, I don't know, the set containing just 1-- or the set containing 0. So I claim that the outer measure of this is just 0. So the set with just one point doesn't fill anything. So it shouldn't have a positive measure.
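The definition just given, written out:

```latex
% Outer measure of a subset $A \subseteq \mathbb{R}$:
\[
  m^*(A) \;=\; \inf \left\{ \sum_{n=1}^{\infty} \ell(I_n) \;:\;
  A \subseteq \bigcup_{n=1}^{\infty} I_n, \ \
  I_n \text{ open intervals} \right\}.
\]
```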
At least if we're wanting to have measure be related to length. Now, why is this? Well, let epsilon be positive. What I'm going to show is that the outer measure of this set is less than epsilon. And since this is always a non-negative number-- it's the infimum over a set of non-negative numbers, so that infimum is always non-negative-- I should note that this is always bigger than or equal to 0. So I'm going to show that the outer measure is less than epsilon. Then the set containing only 0 is contained in the open interval minus epsilon over 2 to epsilon over 2. And therefore, the outer measure of the set containing 0, which is the infimum of the sum of lengths over all collections of intervals covering it, is less than or equal to what I get if I just take this one interval: the length of minus epsilon over 2 to epsilon over 2, which equals epsilon. Now, the outer measure is less than or equal to epsilon for all epsilon positive. So then I can send epsilon to 0, and I get that this outer measure is 0. So again, the outer measure is the infimum of the sum of lengths of open intervals, where these open intervals cover a. The simplest example is the set containing a single point. It has outer measure 0. Now I bet if you sit and think for a little bit, you can then prove that the outer measure of a set containing finitely many points is also 0. And if you work a little bit harder, you can even prove that the outer measure of a countable set is also 0, just based on what we've done here. You know what? Let's do that. That's a fun exercise. So if a is countable, then the outer measure of a equals 0. So for example, the rational numbers are countable. There's a lot of rational numbers throughout R. But they have measure 0. They don't fill anything, in a certain sense. Not feel as in F-E-E-L; fill as in F-I-L-L. So what's the proof of this? So if a is countable, then I can list the elements of a. I mean, a being countable means there's a bijection from a to the natural numbers.
And I'll do the countably infinite case and leave the finite case to you. So that means there exists a bijection from a to the natural numbers. But that just means that I can list the elements of a: a1, a2, a3, a4, and so on. OK. Now, just like I did a minute ago, I'm going to show that the outer measure of this set is less than or equal to epsilon, where epsilon is an arbitrary positive number. And therefore, the outer measure has to be 0. Let epsilon be positive. We will show the outer measure of a, which is, again, a non-negative number, is less than or equal to epsilon. And then, any non-negative number that's less than or equal to an arbitrarily small number has to be 0. For each natural number n, let I sub n be the open interval from a sub n minus epsilon over 2 to the n plus 1 to a sub n plus epsilon over 2 to the n plus 1. So I sub n is an open interval that contains a sub n for each n. And so, a is contained in this countable union of open intervals. And since the outer measure is the infimum of the sum of lengths of open intervals covering a, this implies that the outer measure of a-- which again is the infimum, so it's smaller than any sum of lengths of open intervals covering a-- is less than or equal to the sum from n equals 1 to infinity of the length of I sub n. Now, the length of each of these intervals is epsilon over 2 to the n. And the sum from n equals 1 to infinity of epsilon over 2 to the n, this is just epsilon. And therefore, the outer measure is less than or equal to epsilon. Since epsilon was arbitrary and the outer measure is non-negative, we conclude that the outer measure has to be 0. I mean, we can generalize this argument a little bit to prove something that-- what I'm about to do is slightly out of order. But we can generalize the argument we just gave.
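The covering bookkeeping from this proof can be checked concretely. Below is a small sketch (the helper name `cover` is mine, not from the lecture): enumerate finitely many points of a countable set, cover the n-th point by an open interval of length epsilon over 2 to the n, and verify that the total length stays below epsilon.

```python
from fractions import Fraction

def cover(points, eps):
    """Cover the n-th point a_n (n = 1, 2, ...) by the open interval
    (a_n - eps/2**(n+1), a_n + eps/2**(n+1)), which has length eps/2**n."""
    eps = Fraction(eps)
    return [(a - eps / 2**(n + 1), a + eps / 2**(n + 1))
            for n, a in enumerate(points, start=1)]

# The first few terms of an enumeration of the rationals in [0, 1].
points = [Fraction(p, q) for q in range(1, 5) for p in range(q + 1)]
eps = Fraction(1, 100)
intervals = cover(points, eps)

# Each enumerated point lies inside its interval ...
assert all(lo < a < hi for a, (lo, hi) in zip(points, intervals))
# ... and the total length is a partial sum of eps/2**n, strictly below eps.
assert sum(hi - lo for lo, hi in intervals) < eps
```

Exact `Fraction` arithmetic is used so the geometric-series bound holds exactly rather than up to floating-point error.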
And I want to do it now, since it's there and we just did it, to prove that outer measure satisfies something that's almost like three. And it's kind of a generalization of this argument, except in this argument we were able to give an explicit I sub n. So we have the following theorem. Well, first off, let me state a very easy property of outer measure: if I have two subsets of real numbers, and a is contained in b, then the outer measure of a is less than or equal to the outer measure of b. This just follows from the definition. Any covering of b by open intervals is a covering of a by open intervals. And therefore, the outer measure of a has to be less than or equal to the sum of those lengths. And thus, the infimum over all of those coverings of b, which is the outer measure of b, has to be bigger than or equal to the outer measure of a. So just think about it for a minute. But this should be pretty much clear from the definition. So the next theorem is that we have something that's like three. Let A sub n, n a natural number, be a countable collection of subsets of R. Then the outer measure of their union-- so these are just arbitrary subsets; I mean, they don't have to be pairwise disjoint like in three-- if I take the outer measure of the union, this is less than or equal to the sum of the outer measures. So in particular, if this collection is a collection of pairwise disjoint sets, then the measure of the whole is less than or equal to the sum of the measures of the parts, which is half of the three that we want. So we're getting there. But anyways, so let's prove this. First off, if there exists an n such that A sub n has infinite outer measure, just meaning that that infimum that I'm taking is unbounded, or-- no, no, no.
That's not what I mean. But if that infimum is infinite, meaning I can't cover the set by a collection of intervals with sum of lengths a finite number, or if the sum of the outer measures is infinite, meaning it's not convergent, then this inequality is true. I mean, then I'm just saying something is less than or equal to infinity. So now I'm treating infinity like an element of the extended real numbers. So if one of the measures is infinite, or if this series diverges and I just get an infinite number, then this inequality holds trivially. So we just need to consider the case when all of the measures are finite and the sum of the measures is finite. So suppose for all n the outer measure of A sub n is finite, and the sum of the outer measures converges, so it's finite. Now, this is an argument you have to get used to: instead of proving the inequality that you want to prove, you prove this inequality plus epsilon, where epsilon is an arbitrary positive number, just to give yourself a little room. And then if you're able to prove the inequality you want plus epsilon, where epsilon is an arbitrary positive number, then the inequality holds by letting epsilon go to 0. So that's the goal: we're going to prove that inequality with an epsilon on the right-hand side, so plus an epsilon. Let epsilon be positive. For each n, let the collection I n sub k, k a natural number, be a collection of open intervals covering A sub n. And so, remember, the outer measure is an infimum. So if I go above this infimum a little bit, I can always find a collection of open intervals covering the set whose sum of lengths is less than that infimum plus the small number. So the sum over k equals 1 to infinity of the lengths of I n sub k is less than m star of A sub n plus epsilon over 2 to the n.
Again, this works just because the outer measure is defined as an infimum: for each A sub n, if I go a little above it, I can find a collection of open intervals so that the sum of the lengths is within that allowance. Now, for each n these collections cover the A sub n's, and therefore the union of the A sub n's is contained in the union over n in N and k in N of I sub n comma k. And this is a countable collection of open intervals, because it's in one-to-one correspondence with N cross N, and N cross N is again a countable set. So this is a countable collection of open intervals covering the union of the A sub n's. And therefore the outer measure of that union is less than or equal to the sum of the lengths, which is equal to the sum over n of the sum over k of the lengths of I sub n comma k. And the sum in k is less than the outer measure of A sub n plus epsilon over 2 to the n; now I'm summing in n, with n going from 1 to infinity. And so this is less than the sum over n of the outer measure of A sub n, plus epsilon. So, to recap: I started with the outer measure of the union of the A sub n's. It's less than or equal to the sum of the lengths of these open intervals that cover the union, because the outer measure is the infimum over all such sums. That's equal to this double sum. How did we choose these open intervals? We chose them so that the sum in k of these lengths gives me approximately the outer measure of each A sub n, plus a little error. And I chose this error so that it's summable -- if I had just put epsilon here, I would have been summing something that can't be summed. And in the end, I get this.
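As a sanity check on the epsilon over 2 to the n trick, here is a small numerical sketch (my own illustration, not part of the lecture): giving the n-th covering a slack of epsilon over 2 to the n keeps the total added error strictly below epsilon, which is exactly why a flat slack of epsilon per set would not have worked.

```python
# Numerical sketch of the epsilon/2^n trick: the n-th covering is allowed an
# error of eps / 2**n, and the total error over all n stays below eps.
def total_slack(N, eps):
    """Sum of the per-set slacks eps/2^n for n = 1, ..., N."""
    return sum(eps / 2 ** n for n in range(1, N + 1))

eps = 0.01
for N in (1, 10, 100):
    assert total_slack(N, eps) < eps   # every partial sum stays strictly below eps
# the partial sums increase toward eps but never reach it
assert abs(total_slack(60, eps) - eps) < 1e-12
```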
So I've shown that for all epsilon positive, the outer measure of the union is less than the sum of the outer measures plus epsilon. And now I just let epsilon go to 0 and conclude the bound that I want. This also gives a second proof, if you like, of what we proved a minute ago, that countable sets have outer measure 0. What we did right there was for one element: we proved that a set containing one element has outer measure 0. Now, a countable set is equal to a countable union of such sets. We've shown that the outer measure of this union, which is the countable set, is less than or equal to the sum of the outer measures of the individual points in that set. In the example we did it for the point 0, but it doesn't matter that it was 0: the outer measure of a singleton, a set with a single point, is 0. And therefore the sum would be 0, and we conclude that the outer measure of a countable set is 0. So that's a second proof that the outer measure of a countable set is 0. So we've shown that outer measure is something that's defined for every subset of the real numbers and that it satisfies this almost-version of three. Now, I'm going to leave it -- it's actually going to be an exercise in the assignment -- that outer measure also satisfies four: the outer measure of a set shifted is equal to the outer measure of the original set. Why is this? A way to think about it is that if something holds for open intervals, it should hold for outer measure, because outer measure is defined in terms of sums of lengths of open intervals. And if I take an open interval and shift it, its length does not change: the length of the interval from a to b is the same as the length of the interval from a plus x to b plus x. So it will be an exercise in the assignment that outer measure in fact satisfies four. What's left, and what we'll do next time, is to show that if I is an interval, then its outer measure is the same as the length of the interval.
And as intuitive as that is, it takes just a little bit of work to show -- not too much, but just a little. And that will complete our construction of outer measure, which is this set function that satisfies the properties we wanted, except not quite three; it satisfies almost three. And then, once we restrict that outer measure to a certain class of well-behaved subsets, we'll be able to get three. And that class of well-behaved subsets we will call Lebesgue measurable sets. All right, so I think we'll stop there.
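To make the covering argument for countable sets concrete, here is a small numerical sketch (my own illustration, not from the lecture): cover the first N points of an enumeration of a countable set with open intervals whose lengths sum to less than epsilon, using exact rational arithmetic so the strict inequalities survive.

```python
from fractions import Fraction

# Sketch: cover an enumeration q_1, q_2, ... of a countable set by open
# intervals, the n-th of length eps/2^n, so the total length stays below eps.
eps = Fraction(1, 1000)
points = [Fraction(k, 10) for k in range(100)]   # first 100 points of an enumeration

covers = []
for n, q in enumerate(points, start=1):
    half = eps / 2 ** (n + 1)                    # interval (q - half, q + half) has length eps/2^n
    covers.append((q - half, q + half))

assert all(a < q < b for q, (a, b) in zip(points, covers))   # every point is covered
assert sum(b - a for a, b in covers) < eps                   # total length below eps
```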
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 8: Lebesgue Measurable Subsets and Measure
CASEY RODRIGUEZ: OK, let's continue our discussion of measurable sets. Let me briefly recall the end of last time: we discussed some general notions -- some special collections of subsets of R, one of those being an algebra, which is closed under taking complements and taking finite unions. And then we said a collection of subsets of R is a sigma algebra if it's also closed under taking countable unions. And not every algebra is a sigma algebra. Why did we bring all this up? Well, we were in the middle of discussing Lebesgue measurable sets. So recall that we say E is Lebesgue measurable if for all subsets A of R, the outer measure of A is equal to the outer measure of A intersect E plus the outer measure of A intersect the complement of E. So in some sense, a measurable set is one that divides sets nicely with respect to outer measure. And we denoted by script M the set of all E such that E is Lebesgue measurable. And I'm going to stop saying Lebesgue measurable and just say measurable from now on. And what we showed last time: we showed that M, the set of measurable subsets of R, does form an algebra. It follows from the definition that if E is measurable, then E complement is measurable. But we also showed that if I take a finite collection of measurable sets, their union is measurable. And what we're going to do today is show that the collection of Lebesgue measurable subsets of R forms a sigma algebra -- they have this stronger property. And recall the notation from last time: B is the Borel sigma algebra, which we gave as an example of a sigma algebra at the end of last time. This is the smallest sigma algebra containing all open sets: any other sigma algebra that contains all open sets contains the Borel sigma algebra B.
So like I said, our goal for this lecture is to show that this collection of Lebesgue measurable sets is a sigma algebra, and that it contains the Borel sigma algebra. And then, what is Lebesgue measure? I can even state it now, at least in words: the Lebesgue measure of a measurable set will simply be the outer measure of that set, as long as the set is Lebesgue measurable. So first, we're going to show that M is a sigma algebra. One preliminary lemma that we're going to prove: condition 3 says that to ensure an algebra is a sigma algebra, you need to check that it's closed under taking countable unions. But in fact, you don't have to check this for an arbitrary countable collection; you really only need to check it for countable collections of disjoint subsets. So the lemma that we'll use when we prove that the collection of Lebesgue measurable sets is a sigma algebra is the following -- and this is general; it has nothing to do, necessarily, with Lebesgue measure. Let A be an algebra and let E sub n, where n is a natural number, be a collection of elements of A, so each of these is a subset of R. Then there exists a countable collection F sub n of elements of A that are disjoint -- if I take F sub n and F sub m with n not equal to m, then their intersection is empty -- such that the union of the E n's equals the union of the F n's. So what does this say? Let me make this a remark. The conclusion is: we only need to check the third condition for being a sigma algebra -- that the algebra is closed under taking countable unions -- for disjoint collections of elements.
This follows immediately from the lemma, because if I have any arbitrary countable collection of elements of the algebra, then their union is equal to a union of elements of A that are disjoint from each other. So that's the point: I only need to check the condition for an algebra to be a sigma algebra by checking that countable unions of disjoint sets remain in the algebra. The proof is not very hard. Let G sub n be the union from k equals 1 to n of E sub k. Then these are growing: G sub n contains the first n E k's, and for G sub n plus 1 I tack on another one, so G1 is contained in G2 is contained in G3, and so on. And it's easy to check -- I'll leave it to you -- that the union of the E n's is equal to the union of the G n's; the G n's are really just finite unions of the E k's, so this is pretty clear. If you actually want to sit down and do the argument: every G n is the union of E1 up to E n, so every G n is contained in the union of the E n's, and therefore the union of the G n's is contained in that union. And then it's easy to go the other way around, showing the union of the E n's is contained in the union of the G n's, so the two are equal. Now I take F1 to be G1, and F sub n plus 1 to be G sub n plus 1 take away G sub n, for n bigger than or equal to 1. So what is this? I take the first set to be G1; then F2 will be what's in G2 but not in G1, F3 will be what's in G3 take away G2, and so on. And what do we get? Because at each stage I'm keeping everything that came before, the union of the F n's is equal to the union of the G n's, which is the union of the E n's.
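The construction in this proof -- the growing unions G sub n and the differences F sub n plus 1 equals G sub n plus 1 take away G sub n -- can be sketched for finite sets (my own illustration of the lemma, not code from the lecture):

```python
def disjointify(sets):
    """Given sets E_1, E_2, ..., return F_1, F_2, ... with F_1 = G_1 and
    F_{n+1} = G_{n+1} \\ G_n, where G_n = E_1 ∪ ... ∪ E_n.  The F_n are
    pairwise disjoint and have the same union as the E_n."""
    F, G = [], set()             # G holds the running union G_n
    for E in sets:
        F.append(set(E) - G)     # keep only the genuinely new elements
        G |= E
    return F

E = [{1, 2}, {2, 3}, {1, 4}]
F = disjointify(E)
assert F == [{1, 2}, {3}, {4}]               # pairwise disjoint pieces
assert set().union(*F) == set().union(*E)    # same union as the original sets
```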
And again, I'm leaving some details out, but you can check very easily that each union is contained in the other. I hope the idea is pretty clear: at each stage, you're taking whatever is in the current set and cutting out what appeared in the sets before it. So now, let's go back to measurable sets. We are going to prove that the collection of Lebesgue measurable sets is a sigma algebra, but first we need the following theorem, which is essentially the whole game, as we'll see. Let A be a subset of R, and let E1 up to E n be disjoint measurable sets -- not just measurable, but disjoint -- and note I only have finitely many of them. Then the outer measure of A intersect the union of these sets is equal to the sum from k equals 1 to n of the outer measure of A intersect E k. This shouldn't come as too much of a surprise, because it's close to something we already know: if you just have two sets, with E1 equal to E and E2 equal to E complement, then this is just the definition of being measurable. And you build up from that by using induction -- that's the basic argument. So the proof is by induction. For n equals 1, that's just one set, and the statement is clear: the outer measure of A intersect E1 is equal to the outer measure of A intersect E1. So I'm going to put a check there without writing anything down. Now, let's label this statement by star, and let's do the induction step. So suppose star holds for n equals m.
Meaning: if I have E1 up to E m disjoint measurable sets, then star holds. Now I want to show the statement holds for m plus 1. So let E1 up to E m plus 1 be measurable, pairwise disjoint sets. Before I use measurability, let me just note something. We want to verify star for an arbitrary A, so let A be a subset of R. Since E m plus 1 is disjoint from E1 up to E m -- that is, E k intersect E m plus 1 equals the empty set for all k from 1 to m -- we get that A intersect the union from k equals 1 to m plus 1 of E k, intersected with E m plus 1, is simply equal to A intersect E m plus 1. Because when I distribute, I can bring the intersection with E m plus 1 inside to each of the E k's, and E k intersect E m plus 1 is empty when k is not equal to m plus 1, so I just pick up E m plus 1. But also, if I intersect with the complement of E m plus 1 instead, what do I get? The E k's for k going from 1 to m are all contained in the complement of E m plus 1, because they're disjoint from it. So A intersect the union from k equals 1 to m plus 1 of E k, intersected with E m plus 1 complement, is equal to just A intersect the union from k equals 1 to m of E k. Now, since E m plus 1 is measurable, the outer measure of A intersect the union from k equals 1 to m plus 1 of E k is equal to the outer measure of that set intersect E m plus 1, plus the outer measure of that set intersect E m plus 1 complement -- just using that E m plus 1 is measurable, with A intersect the union as the test set. And now we plug in what these two things are. The first is A intersect E m plus 1, so that term is the outer measure of A intersect E m plus 1. And the second is A intersect the union only going up to m.
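The two set identities and the measurability split from this step, written out in symbols:

```latex
\Bigl(A \cap \bigcup_{k=1}^{m+1} E_k\Bigr) \cap E_{m+1} = A \cap E_{m+1},
\qquad
\Bigl(A \cap \bigcup_{k=1}^{m+1} E_k\Bigr) \cap E_{m+1}^{\,c}
= A \cap \bigcup_{k=1}^{m} E_k,
```

and hence, since E sub m plus 1 is measurable,

```latex
m^{*}\Bigl(A \cap \bigcup_{k=1}^{m+1} E_k\Bigr)
= m^{*}\bigl(A \cap E_{m+1}\bigr)
+ m^{*}\Bigl(A \cap \bigcup_{k=1}^{m} E_k\Bigr).
```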
And now, this is where we use the induction hypothesis, because we have a union of m disjoint measurable sets. So the second term is equal to the sum from k equals 1 to m of the outer measure of A intersect E k, by our induction hypothesis. And combining these two terms gives exactly what we want for n equals m plus 1: the sum from k equals 1 to m plus 1 of the outer measure of A intersect E k. And that's the proof. So now, using this theorem and the lemma before it, we will prove that the collection M of Lebesgue measurable sets is a sigma algebra. We already know it's an algebra up to this point; we just need to verify it's a sigma algebra. So what's the proof? Based on the remark from before, we only need to verify that if I have a countable collection of disjoint measurable sets, then the union is measurable -- I don't have to check it for every countable collection of measurable sets, just disjoint ones. We've already checked M is an algebra, so by the lemma -- the first thing we proved during this lecture -- we just need to show M is closed under taking countable disjoint unions. So let E sub n, n a natural number, be a countable collection of disjoint measurable sets. We need to verify the definition of being measurable; but remember, that really reduces to one inequality, since one of the two inequalities always holds. So let A be a subset of R, and let's denote by E the union of the E n's. What do we need to show? We want to show that the outer measure of A intersect E complement plus the outer measure of A intersect E is equal to the outer measure of A. But we always have that the outer measure of A is less than or equal to this sum, so we just need to show the reverse inequality. So let's do this. Let N be a natural number.
Since M is an algebra, the finite union from n equals 1 to capital N of E sub n is measurable. And therefore, the outer measure of A is equal to the outer measure of A intersect the union from n equals 1 to N of E n, plus the outer measure of A intersect the complement of that union. Now, this finite union is contained in E, the total union, and therefore the complement of this finite union contains the complement of E. So A intersect this complement is a bigger set than A intersect the complement of E, and the whole thing is bigger than or equal to the outer measure of A intersect the union from n equals 1 to N of E n, plus the outer measure of A intersect E complement -- again, because the finite union is contained in the whole union, and taking complements switches around what's contained in what, so A intersect E complement is contained in A intersect the complement of the finite union. And that's good: we wanted this to be smaller than the outer measure of A. Now what do we have? We have the outer measure of A intersected with a finite union of disjoint measurable sets, which, by the previous theorem, we can write as the sum from n equals 1 to capital N of the outer measure of A intersect E n, plus the second term, which we keep. Now, this holds for every N; N was arbitrary. So I can let capital N go to infinity. Remember, I've shown that the outer measure of A is bigger than or equal to this quantity, so letting N go to infinity, I conclude that the outer measure of A is bigger than or equal to the sum from n equals 1 to infinity of the outer measure of A intersect E n, plus the outer measure of A intersect the complement of the union. Now, remember what we proved about outer measure: the sum of the outer measures is bigger than or equal to the outer measure of the union.
So this thing is bigger than or equal to the outer measure of the union over all n of A intersect E n, plus the outer measure of A intersect E complement -- and that union is just A intersect E. So we've shown that the collection of all Lebesgue measurable subsets of R forms a sigma algebra. Let me just pause here for a second; maybe I should have said this at the start. Maybe you're wondering: why all this sigma algebra business anyway? Am I just making this definition up as I go along? But it's a condition that's forced upon us by our expectations, in the following sense. Remember, one of the properties that we wanted of Lebesgue measure, or measure in general, was that the measure of a countable union of disjoint sets is equal to the sum of the measures. And I stated without proof -- but there is a proof in the textbook -- that you cannot have a measure defined on every subset satisfying those properties that we outlined. So you have to have some collection of subsets on which you have these properties: that the measure of an interval is the length of the interval, that the measure is translation invariant, and that the measure of a disjoint union is the sum of the measures. In that last statement, though, there's a subtle hidden assumption: if you have a measure defined on some collection of subsets, then that collection had better be closed under taking countable unions. If I'm to be able to make the statement that I want -- that the measure of a countable union of disjoint sets is equal to the sum of the measures -- then hidden in there is that my measure has to be defined on countable unions of measurable sets. In other words, if I have a countable collection of measurable sets, then for the statement to even make sense, the union of this countable collection must be contained in the class of sets that I'm measuring.
And so even before we discussed outer measure and all that, you could maybe have seen coming that the class of sets we're going to measure would have to satisfy some condition like being a sigma algebra -- being closed under taking countable unions. So maybe I was rambling, but hopefully you got something out of that. We've shown that the set of measurable sets is a sigma algebra. Now I'm going to show that it contains the Borel sigma algebra. Remember, the Borel sigma algebra is the smallest sigma algebra that contains all open sets. If I have any other sigma algebra containing all open sets, then it must contain the Borel sigma algebra, because the Borel sigma algebra is the smallest -- and I should quantify smallest, meaning with respect to inclusion: if there's any other sigma algebra that contains all open sets, then the Borel sigma algebra B is contained in that sigma algebra. But first, let's prove a simpler case. Let me state it, and then I'll explain. For all a in R, the open interval from a to infinity is measurable. In the end, we want to show that every open set is measurable. We already know M is a sigma algebra, so if we can show that every open set is measurable, then M is a sigma algebra containing all open sets, and therefore it must contain the Borel sigma algebra, the smallest sigma algebra containing all open sets. So, to build up to showing that every open set is measurable, let's start with a very simple type of open set: a half-infinite open interval. Let A be a subset of R, and let's write A1 for A intersect the interval from a to infinity, and A2 for A intersect its complement, which is just the interval from minus infinity to a, closed at a. So what do we want to show?
To show that the interval from a to infinity is measurable, we want to show that the outer measure of A1 plus the outer measure of A2 is less than or equal to the outer measure of A, because A1 is A intersect the set and A2 is A intersect its complement. Now, if the outer measure of A is infinite, this holds regardless; so if the outer measure of A is infinite, we're done. So suppose the outer measure of A is finite. What we're going to do is show that the sum on the left-hand side is less than or equal to the outer measure of A plus epsilon, where epsilon is arbitrary, and then send epsilon to 0 to get the inequality. And everything will reduce to what we've done with intervals, as you'll now see. So let epsilon be positive, and let I sub n be a collection of open intervals covering A such that -- remember, the outer measure of A is the infimum of the sums of lengths of intervals covering A -- the sum over n of the lengths of the I n is less than or equal to the outer measure of A plus epsilon. Define J sub n to be I sub n intersect the interval from a to infinity, and K sub n to be I sub n intersect the complement of that interval. First off, each of the sets J n and K n is an interval -- the intersection of two intervals, one open and the other closed -- or empty. Now, the union of the I n's covers A, so the union of the J n's covers A intersect the interval from a to infinity: A1 is contained in the union of the J n's, and similarly A2 is contained in the union of the K n's. And one more thing: each I n is a finite interval, and it's simple to check that the length of I n is equal to the length of J n plus the length of K n. I take each I n and split it into two subintervals -- they're not both necessarily open; J n will be an open interval, K n not necessarily -- but the lengths of these two pieces add up to the length of the interval I n. That's clear.
And now we're almost home free. A1 is contained in the union of the J n's and A2 is contained in the union of the K n's. So the outer measure of A1 plus the outer measure of A2 is less than or equal to the sum of the outer measures of the J n's plus the sum of the outer measures of the K n's. Bringing these together into one sum over n, and using the fact that the outer measure of an interval is equal to its length -- which we proved last time -- this equals the sum over n of the lengths of the I n. And remember how we chose the I n: we chose them so that the sum of the lengths is less than or equal to the outer measure of A plus epsilon. So we've shown that for arbitrary epsilon positive, the outer measure of A1 plus the outer measure of A2 is less than or equal to the outer measure of A plus epsilon. Sending epsilon to 0, the outer measure of A1 plus the outer measure of A2 is less than or equal to the outer measure of A, which is what we wanted to show. So these open intervals from a to infinity are measurable. It's now not a long trip to saying that every open set is Lebesgue measurable. So, theorem: every open set is Lebesgue measurable, and thus the Borel sigma algebra, the smallest sigma algebra containing all open sets, is contained in the sigma algebra of measurable sets. We've shown that the half-infinite open intervals from a to infinity are measurable.
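The length bookkeeping in that proof -- splitting each covering interval I n at the point a into the pieces J n and K n, whose lengths add back up -- can be checked with a small sketch (my own illustration, not code from the lecture):

```python
def split_length(I, a):
    """Split the finite interval I = (lo, hi) at the point a into
    J = I ∩ (a, ∞) and K = I ∩ (−∞, a]; return (length of J, length of K)."""
    lo, hi = I
    J = (max(lo, a), hi) if hi > a else None   # open piece above a (may be empty)
    K = (lo, min(hi, a)) if lo < a else None   # piece at or below a (may be empty)
    len_J = J[1] - J[0] if J else 0.0
    len_K = K[1] - K[0] if K else 0.0
    return len_J, len_K

# the two pieces always add back up to the length of I, as used in the proof
for a in (-1.0, 0.0, 1.5, 3.0, 4.0):
    len_J, len_K = split_length((0.0, 3.0), a)
    assert len_J + len_K == 3.0
```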
Let's now show that finite open intervals are measurable. For all b in R, the open interval from minus infinity to b is equal to the union over n from 1 to infinity of half-closed intervals from minus infinity to b minus 1 over n, each of which is the complement of an open interval of the type we just handled: the complement of the open interval from b minus 1 over n to infinity. We just showed those open intervals are measurable; therefore their complements are measurable, and since the collection of measurable sets is a sigma algebra, countable unions of measurable sets are measurable. So we conclude that the interval from minus infinity to b is measurable. And remember, a sigma algebra is closed under taking complements and countable unions; by De Morgan's laws, that means it's also closed under taking intersections. We conclude that for all a and b in R, the finite open interval from a to b, which is equal to the intersection of the interval from minus infinity to b with the interval from a to infinity, is measurable: the first factor by what we just proved, the second by the theorem before. Now, maybe you covered this in 100B, maybe you didn't -- it's going to appear on the assignment -- but every open subset of R is a countable union of disjoint open intervals, where an open interval could be a finite one, or one of the form from a to infinity, or from minus infinity to b. And since open intervals, we've now concluded, are always measurable, their countable unions are measurable. We conclude that every open set is measurable. So we now have a collection -- the measurable sets -- forming a sigma algebra that contains all open sets, and hence contains the Borel sigma algebra. So now, we define Lebesgue measure.
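Before moving on: the structural fact used above -- every open subset of R is a countable disjoint union of open intervals -- can be illustrated in the finite case by merging overlapping intervals (my own sketch; the function name is made up):

```python
def merge_open_intervals(intervals):
    """Rewrite a finite union of open intervals (a, b) as the disjoint open
    intervals with the same union.  Abutting open intervals such as (0,1) and
    (1,2) are NOT merged, since their union misses the shared endpoint."""
    out = []
    for a, b in sorted(intervals):
        if out and a < out[-1][1]:                     # genuinely overlaps the last piece
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

assert merge_open_intervals([(0, 2), (1, 3), (5, 6)]) == [(0, 3), (5, 6)]
assert merge_open_intervals([(0, 1), (1, 2)]) == [(0, 1), (1, 2)]
```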
If E is a measurable set, the Lebesgue measure of E is denoted by m of E, and it's just given by the outer measure of E. So, like I said when we first started all this business, Lebesgue measure is simply outer measure restricted to a collection of well-behaved sets. And you see that here: Lebesgue measure is nothing but outer measure restricted to those sets which we call measurable, where measurable meant having this property of splitting sets evenly with respect to outer measure. Immediately, we get a few simple things. Theorem: if A and B are measurable and A is contained in B, then the measure of A is less than or equal to the measure of B. Why? Because outer measure satisfies that: m of A is just the outer measure of A, which is less than or equal to the outer measure of B, which is, by definition, the measure of B. So the point is that Lebesgue measure inherits many properties from outer measure. In particular, we have this. And let me state one more: if I is an interval, then I is measurable and the measure of I equals the length of I. Once I've shown that every interval is measurable, then, since Lebesgue measure is just a restriction of outer measure and the outer measure of an interval is the length of the interval, this is immediate. So I just need to verify that every interval is measurable. Now, we've shown every open interval is measurable, and from there it's not difficult to get that every interval is measurable. For example, if I take a closed and bounded interval from a to b, this is equal to the complement of the open interval from b to infinity, intersected with the complement of the open interval from minus infinity to a. The first complement gives me everything up to and including b.
The complement of the interval from minus infinity to a is the interval from a to infinity, including a, and the intersection of the two gives me the closed interval from a to b. Open intervals are measurable, therefore their complements are measurable, and therefore the intersection is measurable. So the closed interval is measurable. And it's the same game for the half-open intervals. Let me do one of them: the half-open interval from a to b, including a but not including b, is equal to the open interval from minus infinity to b intersected with the complement of the open interval from minus infinity to a -- because that complement gives me the interval from a to infinity, including a. This is measurable, that is measurable, the complement is measurable, and therefore the intersection is measurable. So this is measurable. Those are the only two examples I'm going to do; by taking complements and intersections like these, you get the other types of intervals. So this is a good one, because this is one of the properties, remember, that we wanted of measure: we at least wanted to be able to measure intervals, and we wanted the Lebesgue measure of an interval to be the length of that interval. Now let's verify that other condition that we wanted: that the measure of a countable disjoint union is the sum of the measures. So suppose E sub n is a countable collection of disjoint measurable sets. Then the Lebesgue measure of the union is equal to the sum of the Lebesgue measures of the sets. We always have that the measure of the union is less than or equal to the sum, simply because that follows from outer measure -- remember, outer measure satisfied this with an inequality, not equality. Specializing to these well-behaved sets gives us equality, as we'll see.
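In symbols, the countable additivity claim just stated (the less-than-or-equal direction being the countable subadditivity of outer measure, proved last lecture):

```latex
m\Bigl(\bigcup_{n=1}^{\infty} E_n\Bigr) \;=\; \sum_{n=1}^{\infty} m(E_n)
\qquad \text{for pairwise disjoint measurable } E_n.
```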
So let me reiterate: we'll prove this by showing each side is less than or equal to the other, and we always have one of the inequalities from outer measure. First off, the countable union is measurable because M is a sigma algebra -- we showed that already. And therefore the measure of the union is, by definition, equal to the outer measure of the union, which is less than or equal to the sum of the outer measures of the E n's; and since these are all measurable, the outer measure of each is, by definition, its measure. So that's what I meant by saying we already have one side of the inequality from outer measure. So now we show the opposite inequality: the sum over n of the measures of the E n is less than or equal to the measure of the union. How do we show this? Let capital N be a natural number. What is the measure of a finite union of these sets? This finite union is measurable, and I can write its measure as the outer measure of R intersect the union -- a somewhat silly way to write it, but I'm doing it this way so that it looks like something we've already proven. Earlier, we proved that for disjoint measurable sets, the outer measure of a set intersected with a finite union of them is equal to the sum of the outer measures of the intersections; taking the set to be all of R, that gives the sum from n equals 1 to capital N of the outer measure of E sub n. And the outer measure of a measurable set is, by definition, the measure of the set, so this is the sum from n equals 1 to capital N of the measure of E sub n. So what we've shown is that for a finite disjoint union, the measure is equal to the sum of the measures -- which we had already essentially proved, except with an A there. But this gives us the opposite inequality once we notice that this finite sum equals the measure of the finite union, which is less than or equal to the measure of the big total union, because the finite union is contained in the total union. And N was arbitrary.
I had that thing on the left-hand side is less than or equal to this thing on the right-hand side for arbitrary N. So now I let N go to infinity to conclude that-- as desired. So this is that other property that we wanted, that the measure of a disjoint union is the sum of the measures. Now, there was that last property we wanted that measure is translation invariant. That will be in the assignment, so let me state it here. So what you'll prove in the assignment is if E is a measurable set and x is in R, then the shift of the set x E plus x, which is the set of all y plus x such that y is in E, is measurable. And measure of E equals measure of E plus x. So this will be in the next assignment, which will be assignment, I think, 4, which is the third property that we desired. So Lebesgue measure, which is just outer measure restricted to the class of measurable subsets, which is a sigma algebra, satisfies the three major things we wanted of a measure. Unfortunately, the measure is not defined on all subsets, but it's defined on a large class of-- a very rich class of subsets of real numbers because it contains open sets, closed sets, and like I said, sigma algebras are closed under taking countable intersections and complements. So you could take a collection of open sets and take its intersection, which is not necessarily an open set, but that would be in the sigma algebra. And then you could take a countable union of those types of sets and still remain in the sigma algebra. And then you could take complements of those types of sets and stay in the sigma algebra. So like my instructor said, if you can write down the set, chances are it's measurable. So one last theorem we'll prove about measure, and then we'll call it a day, and call it for the theory of measure by itself. Then we'll move on to measurable functions, and then Lebesgue integration of measurable functions is the following, if you like continuity of measure, which is the following. 
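Translation invariance is easy to check by hand in the toy case of a finite disjoint union of intervals, where the measure is just the sum of the lengths. A minimal Python sketch, standing in for (not replacing) the general assignment problem; the names here are illustrative:

```python
# Illustrative: E is a finite disjoint union of intervals, a toy stand-in
# for a general measurable set.
def shift(intervals, x):
    """The translate E + x = { y + x : y in E }."""
    return [(a + x, b + x) for (a, b) in intervals]

def total_length(intervals):
    """Measure of a finite disjoint union of intervals: sum of the lengths."""
    return sum(b - a for (a, b) in intervals)

E = [(0.0, 1.0), (2.0, 2.5)]
# shifting does not change the total length
assert abs(total_length(shift(E, 3.7)) - total_length(E)) < 1e-9
```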
So suppose E k is a collection of measurable sets such that E1 is contained in E2 is contained in E3, and so on. Then the measure of the union of k from 1 to infinity of E k-- this is equal to the limit as n goes to infinity of the measure of the union of k equals 1 to n of E k, which equals the limit as n goes to infinity of the measure of E n. Now, I just need to show this is equal to this. And that's what I'll show, is that this is equal to this. The fact that this is equal to this follows from the assumption that they're nested. E1 is contained in E2 is contained in up to E n, so the union is equal to E n. So I'm just going to show that these two things underlined in yellow are equal to each other. And then the fact that this is equal to that just follows from this assumption here. So for the proof, I'm going to do this trick again where we're going to write the union, this countable union, as a union of disjoint sets. So we let F1 be equal to E1, and F k plus 1 equal to E k plus 1 take away E k for k bigger than or equal to 1. And let me just remark, this is equal to E k plus 1 intersect E k complement, since they're nested, since E1 is contained in E2 is contained, and so on. And note that since the E k's are measurable, this is measurable, this is measurable. Its complement is measurable. The intersection is measurable. So each of these are measurable. Then, F k is a disjoint collection of measurable sets. And for all n in N, if I look at the union k equals 1 to n of F k-- so how am I building these guys? I take F1 to be E1. F2 is going to be E2 take away whatever was already contained in E1. F3 is whatever E3 is take away everything that was already appearing in E2. So you can check that the union from k equals 1 to n of F k is equal to E n, and therefore also, that the full union of the F k's is equal to the full union of the E k's. And now, we conclude that if I take the measure of the union of k equals 1 to infinity of E k, this is equal to the measure of this union here, which is a union of disjoint measurable sets.
So this is equal to the sum of k equals 1 to infinity of the measure of F sub k, which is equal to the limit as n goes to infinity k equals 1 to n measure of F k. And again, if you like, I can rewrite this as limit as n goes to infinity, this finite sum of measures of these disjoint sets, as the measure of k equals 1 to n of F k which equals, as we noted right here, is equal to the measure of E n. So that takes care of the definition of Lebesgue measure. Next time, we will define Lebesgue measurable functions, which are, in a certain sense-- with respect to integration-- the analog of continuous functions. So continuous functions have the property that if I have an open set in the target, so if F is a function going from x to y, if I have a open set in y, then the inverse image of that open set is an open set in x. Measurable functions will be similar to that, except now with measurable sets, but not quite. We're not going to ask that they take Lebesgue measurable sets to Lebesgue measurable sets, but Borel measurable sets to the inverse image should be a Lebesgue measurable set. We'll stop there.
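The disjointification trick used in this proof (F_1 = E_1 and F_{k+1} = E_{k+1} take away E_k for nested sets) can be sketched for finite sets in Python; finite sets stand in for measurable subsets of R, and the function name is illustrative:

```python
def disjointify(Es):
    """Given nested sets E_1 contained in E_2 contained in ..., return the
    disjoint sets F_1 = E_1, F_{k+1} = E_{k+1} minus E_k.  Their union up
    to n equals E_n, so the full unions agree."""
    Fs, prev = [], set()
    for E in Es:
        Fs.append(E - prev)   # strip off whatever already appeared
        prev = E
    return Fs

Es = [{1}, {1, 2}, {1, 2, 3}]
Fs = disjointify(Es)   # [{1}, {2}, {3}]: pairwise disjoint, union is E_3
```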
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 10: Simple Functions
CASEY RODRIGUEZ: All right, so let's continue with our discussion of measurable functions. So last time we introduced the notion of measurable functions. So if I have a measurable set E, then f from E to the extended real numbers is measurable, or is a measurable function, if for all alpha in R, the inverse image of the half-infinite open interval alpha to infinity is measurable. And then we proved that if we have a function which is measurable, then not only is the pre-image of these intervals measurable, but the pre-image of any Borel set, any member of the Borel sigma algebra, which includes open sets, closed sets, and so on-- the pre-image of those sets is also measurable. And we proved that being measurable is closed under taking sups, infs, lim sups, lim infs, limits in particular, and that changing a function on a set of measure 0 also preserves measurability. And also being measurable is closed under the algebraic operations of taking linear combinations and products. Now, this definition and the properties we've worked out were for extended real-valued functions. Quite often we'll be dealing with, or in general, functions that take values in the complex numbers. So let me define what it means for a complex-valued function to be measurable. It's not too crazy. Let E, a subset of R, be measurable. We say a function f from E now to the complex numbers is measurable if the two functions given by the real part of f, which now goes from E to R, and the imaginary part of f, which now is a function from E to R, are measurable. So a function into the complex numbers you can always write as f equals the real part of f plus the imaginary part of f times i. So to say that f is measurable, we just require that its real and imaginary parts are measurable, OK. And then you can verify the following simple theorem.
Maybe I'll put it on the assignment or not, haven't decided yet-- or just parts of it. If f and g are measurable-- and I didn't say so, but when we're talking about measurable functions, the domain E is always assumed to be a measurable subset of R. So if these are measurable functions and alpha is a complex number, then the functions alpha times f, f plus g, f times g-- and then we can do a few other things to complex numbers that change them-- the complex conjugate of f and the modulus of f are measurable functions. And then we have the following theorem. If fn from E to C is measurable for all n, and these functions converge pointwise to a function f, then the limiting function is measurable, OK. And this follows, again-- so both of these theorems follow immediately from what we know about measurable extended real-valued functions. So, for example, this one here follows from the fact that the limit as n goes to infinity of fn of x equals f of x if and only if the limit as n goes to infinity of the real part of fn of x equals the real part of f of x, and the limit as n goes to infinity of the imaginary part of fn of x equals the imaginary part of f of x. And if we're assuming fn is measurable for all n, then the real parts and imaginary parts of the fn are measurable for all n. And therefore their pointwise limits, which are the real part of f and the imaginary part of f, are measurable by the theorem we proved about extended real-valued measurable functions. And so we conclude that the real part of f and the imaginary part are measurable, and therefore f is measurable. So you don't have to work very hard. You can just use what we proved in the previous lecture about extended real-valued functions which are measurable. All right, so now as far as measurable functions go, we've shown that if I have a continuous function, then that is-- so this is from the previous time, from the previous lecture-- continuous functions are measurable.
And if I have a measurable subset of E, then the indicator function of that set is measurable. And we know that linear combinations of measurable functions are measurable. So if I take linear combinations of indicator functions with, say, complex coefficients, then that will remain measurable as well. And those functions are the simplest type in that they only take finitely many values. They're so simple that we glorify them by giving them that name. And we'll show that every measurable function is, in a sense, approximately a simple function. So we have the following definition, if E is measurable, a measurable function phi from E to C is simple, or we call it a simple function. So we'll say is a simple function if the range of E phi of E is equal to finitely many values. So a measurable function is a simple function if its range is finite. So let me make a general remark about simple functions. And also when I write that phi of E is equal to a1 up to a n, this is a set. And when I write a1, a2, up to a n, I am implicitly writing here. I'm not saying the set-- I'm saying each of these elements are different from each other. So a simple function that just takes the value 1, I would not write its range as 1, 1, 1, 1. So although that's a kind of simple and silly remark to make, I just want to make it now. So if phi is a simple function, then we can write it in a canonical way with-- so if phi is a simple function with phi of E equals a1 up to a n, then for all i, the set a sub i, which is equal to the inverse image of-- so here phi is going from E to C. The inverse image of the single set, singleton, which is a closed-- well, this is going to be-- so the inverse image of this guy here is measurable because it's equal to the intersection of two measurable sets. It's equal to the intersection of when the real part of phi equals the real part of a sub i intersect the set of all x's where the imaginary part of phi equals the imaginary part of a sub i. 
So that's why the set A sub i is measurable. And we have a few properties of this guy. If I take two different elements in the range-- again, this is why I made that comment that when I write the range this way, I'm listing the distinct elements of the range, or the image of E-- for all i not equal to j, these two sets are disjoint. And if I take the union of i equals 1 to n of the A i's, this is equal to the total set E, because this is just the inverse image of the set a1 up to a n, which is just E, since phi of E equals that, OK. And finally, for all x in E, I can write phi of x as the sum from i equals 1 to n of ai chi of A sub i of x. So for a simple function, I can write it in a canonical way where it's just a linear combination of indicator functions where the sets those indicator functions are non-zero on are disjoint from each other. And their union gives me E, the domain, OK. So these three are the simple but important properties of how to represent a simple function. So it's not difficult to verify, again, just from the definition of a simple function, that scalar multiples, linear combinations, and products of simple functions are, again, simple functions. So I've said that these functions are so simple that that's what we call them, and that they are somehow universal in that, in a sense, any measurable function is almost a simple function. So in what sense do I mean that? So that's the content of the following theorem, which is that if f from E to-- and let's first-- we're going to do this for extended real-valued functions. And then the proof will essentially carry over to the complex-valued case. So I'm just going to do it for the extended real-valued non-negative functions. And, again, I'll indicate what the difference is once we go to complex-valued measurable functions. So if f is from-- so now this is an extended real-valued function, but it's non-negative.
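The canonical representation just described can be sketched for a toy simple function on a finite domain: grouping points by their value recovers the disjoint sets A_i = phi^{-1}({a_i}) whose union is E. This Python sketch is purely illustrative, with finite sets standing in for measurable ones:

```python
def canonical(phi):
    """Recover the canonical form of a simple function given as a dict
    x -> phi(x) on a finite toy domain: returns {a_i: A_i} with
    A_i = phi^{-1}({a_i}).  The A_i are disjoint and their union is E."""
    rep = {}
    for x, a in phi.items():
        rep.setdefault(a, set()).add(x)
    return rep

# a toy simple function taking three distinct values on E = {0, 1, 2, 3}
phi = {0: 1 + 2j, 1: 1 + 2j, 2: -3.0, 3: 0.0}
rep = canonical(phi)   # {1+2j: {0, 1}, -3.0: {2}, 0.0: {3}}
```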
So if this is a measurable function, then there exists a sequence of simple functions phi sub n such that three things hold. We can do this-- let's write down-- let's go over here. three things hold. For all x in E-- or let me hold off on stating that. F dominates these simple functions. And these simple functions are pointwise increasing. For all x in E, 0 is less than or equal to phi 1 of-- phi 0 of x is less than or equal to phi 1 of x is less than or equal to phi 2 of x and so on. And they all sit below f of x. For all x in E, these phi's are converging to f of x. So A and B says that there exists a sequence, a measure of simple functions that increase to f of x. The last part is that if this function, if this measurable function is bounded, or wherever it's bounded, this convergence is not just pointwise but uniform. For all B bigger than or equal to 0, the sequence phi n converges to f uniformly on the set where f is bounded. So, again, the take home is that for every non-negative extended real-value measurable function, we can find a sequence of simple functions that well approximates f. And in what sense does this will approximate f, they increase to f. And if f is bounded, then that convergence is, in fact, uniform. Or more precisely, wherever f is bounded, convergence is uniform. So this is in what sense every measurable function is almost a simple function. So let me-- let's get started with the proof. And first I'm going to draw a few pictures so that you can get the idea of how we're going to do this, or how we're going to construct this sequence of simple functions, and then turn that into math which might be a little bit jarring if I went that route first and then drew pictures. And this picture I'm going to draw kind of looks like what I was talking about where we split the range up rather than the domain when we were motivating why we would even introduce the concept of a measurable function. So let's say we have our function f. 
Now how I'm going to build these phi's are that-- what I'm going to do is-- so here I'm going to draw phi 0 now. Where's the yellow chalk. What I do is this will indicate the power of 2, how high up I'm going, and also the resolution, how much of my dividing how high up I'm going into smaller parts. So phi 0 you should think that my height will be-- well, my height is going to be 1. And so what I do is, how I define this simple function phi 0 is I look at where f is above the final value 1. And my simple function will be that final value 1 there. And move this over a little bit. And that just leaves where f is less than 1. And there I set the value of my simple function to be 0, and 0 simply because that's the lower bound, or that's the smallest value of this interval 0, 1. And that's how I define phi 0. It goes up to height 1. And I only split each part into 1. So there's only one part here. Now maybe some of that didn't make any sense to you. That's OK. We're going to phi 1 and then I'm going to stop there because this is going to grow exponentially, which means I'll probably draw an exponentially worse picture each time. So now we're going to draw phi 2. And what I do is-- there's one-- or phi 1, sorry. One, again, should indicate the power of 2 that I'm both going up and resolving the axis. So now I go up to-- let's make this a-- so, again, this will not be quite the scale, but hopefully that's OK. So let's make this a little closer to scale. OK, so one, now I have two parts, 2 to the 1. So this is parameterized how-- tallest I'm going and cutting up the range of my function f. And then I'm going to now resolve each part in 1/2, 2 to the minus 1. So now I take-- this is now 3/2, and this is now 1/2, 2 to the minus 1. So why am I writing to 2 to the minus 1? Again because this one here corresponds to-- and that one here corresponds to-- I'm cutting the whole increments into halves. 
If I go on to phi 2, I'm going to be going up to 2 to the 2, which is now 4; and from 4 to 3; 3 to 2; 2 to 1; I'm going to be cutting those into fourths. And, again, what I do is I look at-- I cut-- and now I look at where the function is in these widths. So if it's above my highest bound, then my simple function will be 2 on that set of x's where it's at its highest, or where f goes past the largest number which I'm resolving the axis in. And then now here, for example-- so you should try and draw your own picture on not just go off mine because, again, mine is looking kind of rough already. But now I look at the function-- the set of X's where the function is between 2 and 3/2. So that's going to be this piece and this piece. And then I set my simple function to be equal to the value, the lower bound on this interval that I've cut the range into. And then I do that from 3/2 to 1. And, for example, that's happening here on this x. And I set it equal to-- I set my simple function equal to, again, the lower bound on this interval that I've cut the range up into, which is 1, and so, and so on. And now the function between 1 and 1/2, so this is kind of only the last piece. Everywhere else is filled in. And there's no f in between here, so I don't assign a value there. All right, now that was me talking my way through how this function how the sequence of functions look. Now I'll just write down how these functions are defined. So for n equals 0, 1, 2, and so on for k between 2 to the 2n minus 1, I define sets E kn. This is equal to the set of all X's in E such that f of x is between k times 2 to the minus n is less than or equal to k plus 1 2 to the minus n, which is just me explicitly writing out that this is the inverse image of k times 2 to the minus n k plus 1 2 to the minus n closed. And this is an interval. And since f is measurable, assumed to be measurable, this is a measurable-- the inverse image of that interval is a measurable set, OK. 
And then I define Fn to be the inverse image of where f exceeds my top value of how I'm cutting up the range, 2 to the n-- again, which is measurable. And, finally, I will take my simple function phi n to be the sum from k equals 0 to 2 to the 2n minus 1 of k times 2 to the minus n-- again, this would correspond to the lower part of the E kn that I'm looking at; so for this example, if this E kn is the piece from 3/2 to 2, then 3/2 is the lower part-- times chi of E kn, plus 2 to the n times the indicator function of where f exceeds 2 to the n. So I encourage you to maybe write out phi 1, what that actually is. In fact, here, I'll do that. I drew the picture that goes over here. Let me write out what phi 1 actually looks like. Phi 1 is equal to 0 times the indicator function of where f is between 0 and 1/2, plus 1/2 times the indicator function of where f is between 1/2 and 1, plus 1 times the indicator function of where f is between 1 and 3/2, plus 3/2 times chi of f inverse of where f is between 3/2 and 2-- so 2 is how high up I break up the range-- and then plus this last part, this Fn part, which is 2 times the indicator function of where f is bigger than 2 to the n, so bigger than 2. So all of these sets are disjoint. The E kn's for k different from k prime and fixed n are disjoint, and they are disjoint from the set Fn. So this is a simple function; the finitely many values it takes are 2 to the n along with the k times 2 to the minus n. Or at least the finitely many values it can take are a subset of that. So this is a simple function for each n. And by design it's always sitting below f. So let me make the statement and then I'll say why. So by definition, for all x in E, phi n of x is non-negative and always sits below f of x. So how do we see that?
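The construction can be written compactly: for f(x) below the cap 2^n, round f(x) down to the nearest multiple of 2^{-n}; where f exceeds the cap (the set F_n), output 2^n. Here is a hedged Python sketch of that idea (the endpoint convention uses intervals closed on the left rather than the lecture's half-open intervals facing the other way, but the monotonicity and the 2^{-n} error bound come out the same; names are illustrative):

```python
import math

def approx(f, n):
    """Sketch of the approximating simple function phi_n: cut the range
    into dyadic pieces of width 2^-n up to height 2^n; on E_kn output the
    lower endpoint k * 2^-n, and on F_n (where f exceeds the cap) output
    2^n."""
    def phi_n(x):
        fx = f(x)
        if fx >= 2**n:                 # x is in F_n
            return float(2**n)
        k = math.floor(fx * 2**n)      # x is in E_kn
        return k / 2**n                # value is the lower endpoint
    return phi_n

f = lambda x: x * x                    # a sample non-negative function
p2, p3 = approx(f, 2), approx(f, 3)
for x in [0.0, 0.3, 0.7, 1.9]:
    assert 0 <= p2(x) <= p3(x) <= f(x)   # part A: increasing, below f
    assert f(x) - p3(x) <= 2**-3         # the claim, since f(x) <= 2^3 here
```

Doubling n both raises the cap and halves the resolution, which is exactly why the phi_n increase pointwise.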
You can see it from the general formula, but I'll just indicate why just for safety. One, let's say-- so x has to be in one of these sets. Let's say it's here where f is between 1/2 and 1. Then phi-- 1 of x is equal to 1/2 which is less than f of x because f takes on the value between 1/2 and 1. In fact, here, I'll give a the brief argument here. If x is in Ekn then by definition this means that k2 to the minus n is less than f of x is less than or equal to k plus 1 2 the minus n, which implies that phi n of x, which is by definition k times 2 to the minus n-- just, again, by how we've defined the phi sub n's-- where did we define the phi sub n's-- there is less than f of x. So that's for x and Ekn. And if x is in f sub n, then that means f of x is greater than 2 to the n, which is always bigger than or equal to the phi n of x-- well, I mean this is actually equal to phi of x. So we always have-- the phi n's are non-negative. And they always sit below f of x. So now let's prove that they are, in fact, increasing, so part A. So now we're proving part A, the phi n's increase to f. And, again, for fixed n, Ekn and fn, the Ekn's, and fn form a disjoint union of E. I just need to check that what I want holds on each of the Ekn's and fn's. So suppose x is in Ekn, then f of x is less than or equal to k plus 1 times 2 to the minus n. n is bigger than k 2 to the minus n, and which by a silly trick of just multiplying and dividing by 2 tells me that f is between 2 times k times 2 to the minus n minus 1 is less than f of x is less than or equal to 2k plus 2 times 2 to the minus n minus 1, which implies that x is in the union of E2k n plus 1, 2k n plus 1 union 2K plus 2 n plus 1. If x is in E2k n plus 1, then I get that phi n of x is equal to, by definition, k times 2 to the minus n, which equals 2k 2 to the minus n minus 1, which because x is in E 2k n plus 1, this is equal to phi n plus 1 of x. And if x is in E2k plus 2 n plus 1-- no, I shouldn't-- this shouldn't be a 2. 
This should be a 1, I'm sorry-- because this goes from 2k up to 2k plus 1, and then from 2k plus 1 up to 2k plus 2. So it's either in E2k or 2k plus 1. And if E is in a 2k plus 1, then the n of x is still-- I mean, x is still in Ekn, so the n of x is equal to still k times 2 to the minus n, which is equal to 2k times 2 to the minus n minus 1, which is less than 2k plus 1 times 2 to the minus n minus 1, which is by definition, since x is in E2k plus 1, n plus 1 n plus 1 of x. And, similarly, if x is in Fn, then the n of x is less than or equal to phi n plus 1 of x. So we verified for all x since E is equal to this union over k equals 0 2 to the 2n minus 1 Ekn union Fn. This implies that for all x in E, phi n of x is less than or equal to phi n plus 1 of x. And this proves A. Now, how to prove B and C-- these things will follow from part A in a simple estimate that we're going to prove. So B and C will follow immediately from the following claim in part a, which the claim is that for all x in set y in E that f of y is less than or equal to 2 to the n. This part we already know that f of x minus phi n of x is non-negative. But, in fact, this is bounded by 2 to the minus n. So then B and C from A and this claim. Why does B follow from this claim? Well, wherever x is-- I don't want to have to also explain what happens if f is equal to infinity. That also follows basically from the definitions, not necessarily from this estimate. But the more important part is when f is finite. So let's assume f is just finite for every x. So then every x in E is eventually in one of these sets. So there exist let x be fixed. Then for n sufficiently large, x is in one of these sets since f of x is finite. And therefore for all n-- for all capital N sufficiently large, this minus phi n of x is less than or equal to 2 to the minus n. But then f of x minus phi to the m is also less than or equal to 2 to the minus n for every m bigger than or equal to n because the phi n's are increasing. 
If phi n is this close to f, then phi m is also that close to f if m is bigger than or equal to n, again, because they're increasing. So that proves-- that's why pointwise convergence B follows from this estimate. As far as part C, that also follows from this estimate since if I have a fixed B, then I can choose a natural number just depending on what B is so that that's set depending on B and the original statement I'm pointing at it, but I don't think you can see it from the camera. That set which depends on B is contained in one of these sets, and therefore for all x in the set where f is bounded by B, this holds uniformly in x. And that's where the uniform convergence comes from. But the whole point is that this is the estimate that gives us B and C once we've proved A. So let's prove this claim. It's not hard. It's basically because we are cutting up the range not only to height 2 to the n, but with resolution 2 to the minus n at each stage, at each n. So to prove the claim we have that the set of y's in E such that f of x is less than or equal to 2 to the n, this is equal to the union k equals 0 2 to the 2n minus 1 of the Ekn's. So if I want to check that bound, I just have to check it for each, if x is in one of these. So suppose x is in E to the kn, then-- I mean, it really is just following from the fact that we're cutting up the range with resolution 2 to the minus n. So I'm just going to draw a small little picture here. We have the axis here. And here's k plus 1 times 2 to the minus n. Here's k times 2 to the minus n. If x is in there, that means there's a-- we're looking at the portion of f that sits between k 2 to the minus n and k plus 1 2 to the minus n, then k times 2 to the minus n is less than f of x is less than or equal to k plus 1 2 to the minus n. And remember the simple function on this piece evaluated in here-- so here's the x-- gives the value at the lower bound. 
And therefore we get that f of x minus phi n of x-- this is equal to f of x minus k times 2 to the minus n. Now x, again, is in E kn, so f of x is between these two numbers. So this is less than or equal to k plus 1 times 2 to the minus n minus k times 2 to the minus n, and this equals 2 to the minus n. So whenever we have an x in one of these sets, it has to be within 2 to the minus n-- the simple function evaluated at that x has to be within 2 to the minus n of f. This is just by construction, by how we've cut up the y-axis, if you like, the range. We're cutting it up with resolution 2 to the minus n. All right, so that proves the claim which, as I said, along with part A, proves B and C. So that proves that every measurable function-- at least every non-negative extended real-valued measurable function-- is a limit of a sequence of simple functions. Now, this theorem carries over without difficulty to complex-valued functions after I just introduce a breakup of a function in general. So if f is a function from E to minus infinity to infinity, so an extended real-valued function now, we define its positive and negative parts: f plus of x is equal to the max of f of x and 0-- this is the positive part of f-- and f minus of x is equal to the max of minus f of x and 0. This is the negative part of f. So what about these positive and negative parts? Then f is equal to f plus minus f minus. You can just check that. I mean, take any x: if f is non-negative there, then I get f of x out. If f is negative there, then f plus is 0 and f minus is minus f of x, so the difference gives me back f of x. And the absolute value of f is equal to f plus plus f minus. So let me make a comment-- this is just a definition for an arbitrary function from E to the extended real numbers-- that if f is measurable, then each of these functions is measurable, because the positive part is, if you like, the supremum of the sequence of functions given by f and then 0 afterwards.
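A quick Python sketch of the positive and negative parts, checking the two identities f = f+ minus f- and |f| = f+ plus f- pointwise (the function names here are illustrative, not from the lecture):

```python
import math

def pos(f):
    """Positive part f+ = max(f, 0)."""
    return lambda x: max(f(x), 0.0)

def neg(f):
    """Negative part f- = max(-f, 0); note it is itself non-negative."""
    return lambda x: max(-f(x), 0.0)

f = math.sin                           # a sample sign-changing function
for x in [-2.0, -0.5, 0.0, 1.0, 3.5]:
    fp, fm = pos(f)(x), neg(f)(x)
    assert fp >= 0.0 and fm >= 0.0
    assert abs((fp - fm) - f(x)) < 1e-12        # f = f+ - f-
    assert abs((fp + fm) - abs(f(x))) < 1e-12   # |f| = f+ + f-
```

At each x exactly one of the two parts is nonzero (both vanish where f does), mirroring the disjointness in the lecture's check.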
And this is given by the supremum of the functions given by minus f and 0 afterwards. So if f is measurable, its positive and negative parts are also measurable functions. And they're also non-negative. So maybe that wasn't clear, or at least it should be clear: each is a max always involving 0, so it's always non-negative. So now the construction we did a minute ago essentially carries over to the case of complex-valued measurable functions. So let E be measurable and f from E to C be measurable. Then there exists a sequence of simple functions phi n such that analogs of those three properties hold. A: for all x in E, these are increasing in modulus, in absolute value-- 0 is less than or equal to the absolute value of phi 0 of x, which is less than or equal to the absolute value of phi 1 of x, and so on (absolute value and modulus are the same thing here), all less than or equal to the absolute value of f of x. B: these phi n's are converging to f pointwise. And, finally, C: the convergence is uniform on sets where f is bounded. So for all B bigger than or equal to 0, phi n converges to f uniformly on the set of x in E such that the absolute value of f of x, or the modulus of f of x, is less than or equal to B. So this theorem follows immediately from the previous theorem, because now what I do is I just take f, I split it into its real and imaginary parts, and then I split the real and imaginary parts into both positive and negative parts. So I will leave it to you to actually fill in the details, but you apply the previous theorem to the positive and negative parts of the real part of f, which are now non-negative measurable functions, and to the positive and negative parts of the imaginary part of f, again, which are non-negative measurable functions, and then you just take linear combinations of these simple functions that add up to f.
You will take the sequence of simple functions corresponding to the real part of f and subtract the sequence of simple functions that you got for the minus for the negative part of f, real part of f. And you will take that and add i times the positive part or the sequence of simple functions converging to the positive part of the imaginary part of f minus the sequence of functions, simple functions converging to the negative part of the imaginary part of f. So that's what I mean by apply previous theorem to the positive and negative parts of the real and imaginary parts of f. So what's the significance of this theorem? Not only-- let me just say if-- also showing that somehow measurable functions are well approximated or almost simple functions, this also gives us a way of maybe defining the integral at least of non-negative functions that way we don't have to deal with possible deals with subtracting infinity from infinity, but by simply defining its integral to be the limit of these integrals of these simple functions. And for a simple function, we would presumably know how to define an integral. It would just be the numbers time the measure of the sets that appear in the indicator functions for these simple functions. And like I said last lecture, if you wanted to define the Lebesgue integral that way, you would run up against, well, does this number depend on the sequence of simple functions you chose to approximate f? But we're not going to define the Lebesgue integral in that way. We're going to define it a little bit differently, which is what we're going to move on to now, which is the Lebesgue integral of a non-negative function. And then we will define the Lebesgue integral, or Lebesgue integrable functions. These will be complex value functions now for which we can define an integral for. And that is the full theory of-- and that's the end of the game for as far as defining Lebesgue integral. 
And then we'll prove some convergence theorems along the way, which make the Lebesgue integral stronger than the Riemann integral. So now we're moving on to the Lebesgue integral of a non-negative function. Why start with a non-negative function? Because of the trick I just pulled a minute ago: if we know how to do things for non-negative measurable functions, then by taking real and imaginary parts and splitting each into positive and negative parts, we can hopefully do something for general functions. So that's why we start by defining the Lebesgue integral of a non-negative function. Definition: if E is a measurable subset of R, we define L plus of E to be the set of all extended real-valued, non-negative functions on E that are measurable. Now the goal is to define the integral of a function in this class. It may be an infinite number; it may not be. In order to do that, we're first going to define how to integrate the simplest type of functions-- simple functions. So let phi be a simple function, and let's write phi in the canonical way: phi is equal to a sum of a_j chi A_j, where, first off, for all j, A_j is a subset of E; for all i not equal to j, A_i intersect A_j is empty, so these sets are disjoint; and their union gives me the set E. The Lebesgue integral of the simple function phi is the number which, as I said, is the simplest, or what you would expect, as the definition of the integral of a simple function. We know how to measure sets. And remember, we initially built up measure so that the integral, which should be a theory of area underneath the curve, does the right thing on indicator functions: the integral of an indicator function should be the area underneath a graph of height 1, that area being the measure of the set where the indicator function is 1.
And therefore, by linearity, this is how we define the Lebesgue integral of a simple function. The Lebesgue integral of phi over E is defined to be the number: sum from j equals 1 to n of a_j times the measure of A_j. And this number could be infinite. Instead of writing just the integral over E of phi, I might add a dx in there-- I'm just warning you ahead of time. So this is the Lebesgue integral of a simple function: we split it up in this canonical way, as a combination of indicator functions of disjoint sets whose union gives the domain, with the coefficients out front giving the elements of the range. For example, say E is the interval from a to b. You can check from the definition that if the sets in my simple function are in fact intervals-- so my simple function is a step function, taking finitely many values, one on each of finitely many intervals whose disjoint union forms the interval from a to b-- then, as I've defined the integral, and since the measure of an interval equals the length of the interval, this integral equals the sum over j of a_j times the length of the j-th interval. So as I've defined the Lebesgue integral, it spits out the area underneath this step function taking these values on these intervals. I hope that's clear. It's toward the end of the day, so maybe my explanations are getting a bit wonky, but I hope it's clear. And then we're going to use this definition of how to integrate simple functions and extend it to general elements of L plus of E, the non-negative measurable functions. But first let's prove a few properties of how we've defined the integral for simple functions. Let's take two simple functions.
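As a concrete check of the definition-- the integral of a simple function is just the sum of a_j times the measure of A_j-- here is a tiny sketch of my own; the step function is an arbitrary example, not one from the lecture.

```python
def simple_integral(pieces):
    """Lebesgue integral of a simple function, given as (value a_j, measure of A_j)
    pairs over disjoint measurable sets A_j: sum_j a_j * m(A_j)."""
    return sum(a * m for a, m in pieces)

# step function on E = [0, 4]: value 2 on [0,1), 5 on [1,3), 1 on [3,4];
# for intervals, measure = length, so the integral is the area under the steps
pieces = [(2, 1.0), (5, 2.0), (1, 1.0)]
assert simple_integral(pieces) == 2 * 1 + 5 * 2 + 1 * 1  # = 13
```

Note that only the pairs (value, measure) enter the definition: the sets A_j themselves could be intervals, as here, or any disjoint measurable sets with those measures.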
First: if c is bigger than or equal to 0, then the integral of c times phi over E is equal to c times the integral over E of phi. Two: the Lebesgue integral over E of phi plus psi is equal to the Lebesgue integral of phi plus the Lebesgue integral of psi. And the third property: if phi is less than or equal to psi on E-- that shorthand means for all x in E, phi of x is less than or equal to psi of x-- then, as you expect, the integral of phi is less than or equal to the integral of psi. And let me include one more very simple property. If F is a measurable subset of E, then phi, which is a simple function on E, is also a simple function on F-- it takes only finitely many values on F as well-- and therefore it has an integral over F. My claim is that the integral over F of phi is equal to the integral over E of phi times the indicator function of F, and that's less than or equal to the integral over E of phi. I will leave property 4 to you; I might even put it on the assignment. It will follow once you've seen how we prove 1, 2, and 3-- you'll be like, OK, I know how to do this. Number 1 is pretty easy, simply because multiplying by a non-negative constant just carries through and changes the coefficients, but not the sets: c times phi is equal to the sum of c times a_j times chi A_j. And therefore the integral of c times phi over E is, by definition, the sum from j equals 1 to n of c times a_j times the measure of A_j. This is equal to c times the sum from j equals 1 to n of a_j times the measure of A_j, and this equals c times the integral of phi over E. All right, so to prove 2, we write phi in the canonical form, the sum of a_j times the indicator function of A_j, where, again, these sets are disjoint and their union gives me E.
And then I do the same thing for psi: the sum from k equals 1 to m-- psi may take a different number of values-- of b_k times chi B_k, where again the B_k's are disjoint and their union gives me E. Since the union of the A_j's gives me E and the union of the B_k's gives me E, each A_j is equal to the union over k equals 1 to m of A_j intersect B_k-- because this is just A_j intersect the union of the B_k's, and the union of the B_k's gives me E-- and similarly each B_k is the union over j of A_j intersect B_k, because the union of the A_j's gives me E. And again, these unions are disjoint: the A_j's are disjoint from each other, and the B_k's are disjoint from each other, so A_j intersect B_k is disjoint from A_j-prime intersect B_k-prime whenever the pairs of indices differ. Since these are disjoint unions, from the additivity property of Lebesgue measure we get that the integral over E of phi plus the integral over E of psi is, by definition, the sum from j equals 1 to n of a_j times the measure of A_j, plus the sum from k equals 1 to m of b_k times the measure of B_k. Now, A_j is written as a disjoint union, and therefore the measure of A_j is the sum over k of the measures of A_j intersect B_k-- and the same thing for B_k: its measure is the sum over j of the measures of A_j intersect B_k. So both sums now run over j and k, and I can rewrite the whole thing as the sum over j, k of a_j plus b_k times the measure of A_j intersect B_k. But the point is that the sum of the two simple functions, you can check, is equal to the sum over j, k of a_j plus b_k times the indicator function of A_j intersect B_k: at those x's where phi equals a_j and psi equals b_k, I'm in this set and the two sides agree. This implies that the integral over E of phi plus psi equals the sum over j, k of a_j plus b_k times the measure of A_j intersect B_k, which, as we just saw, is equal to the sum of the two integrals. So that was 2. 3 is not too difficult either.
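The common-refinement trick in this proof-- rewriting both simple functions over the sets A_j intersect B_k-- is easy to mimic for step functions on an interval. This is a hypothetical sketch of my own (the breakpoints and values are made up, not from the lecture):

```python
def integral(breaks, vals):
    """Integral of a step function taking vals[i] on [breaks[i], breaks[i+1])."""
    return sum(v * (r - l) for v, l, r in zip(vals, breaks, breaks[1:]))

def value_at(breaks, vals, x):
    for i, v in enumerate(vals):
        if breaks[i] <= x < breaks[i + 1]:
            return v
    raise ValueError("x outside domain")

def add_steps(b1, v1, b2, v2):
    """Sum of two step functions on the same interval via the common
    refinement of their breakpoints; the intervals between consecutive
    cuts play the role of the sets A_j intersect B_k."""
    cuts = sorted(set(b1) | set(b2))
    vsum = [value_at(b1, v1, l) + value_at(b2, v2, l) for l in cuts[:-1]]
    return cuts, vsum

b1, v1 = [0, 1, 3], [2, 5]   # phi: 2 on [0,1), 5 on [1,3)
b2, v2 = [0, 2, 3], [4, 1]   # psi: 4 on [0,2), 1 on [2,3)
bc, vc = add_steps(b1, v1, b2, v2)
# additivity: integral of (phi + psi) equals integral of phi plus integral of psi
assert integral(bc, vc) == integral(b1, v1) + integral(b2, v2)
```

On each refinement interval both functions are constant, which is exactly why the pointwise sum is again a simple function and why the measures add up the way the proof uses.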
So, again, let's assume phi and psi are written in this canonical way. Then the statement that for all x in E, phi of x is less than or equal to psi of x is equivalent to: a_j is less than or equal to b_k whenever A_j intersect B_k is nonempty. Thus, again using the additivity of Lebesgue measure and the fact that the union of the B_k's gives me E: the integral over E of phi is the sum from j equals 1 to n of a_j times the measure of A_j, which is equal to the sum over j, k of a_j times the measure of A_j intersect B_k, because A_j is the disjoint union over k of A_j intersect B_k. Now, whenever the measure of A_j intersect B_k is nonzero, that set is in particular nonempty, and therefore the a_j appearing there is less than or equal to b_k; and whenever the measure is 0, the term contributes nothing either way. So this is less than or equal to the sum over j, k of b_k times the measure of A_j intersect B_k. And now we just reverse course: this is equal to the sum from k equals 1 to m of b_k times the measure of B_k, because the union over j of the sets A_j intersect B_k gives me B_k, and this is equal to the integral over E of psi. And, again, I will leave 4 to you as a very simple exercise. OK, I'm about out of time. So what we've done is we've defined the Lebesgue integral of a simple function. And as that picture shows-- I hope, or at least it should convince you-- if the simple function takes the form of a step function, meaning the A_j's are just intervals, then the Lebesgue integral of that step function will, in fact, be the area underneath phi.
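Property 3, monotonicity, can be spot-checked the same way. This is my own toy example over a common system of disjoint sets, not from the lecture:

```python
def simple_integral(pieces):
    """sum_j a_j * m(A_j) for (value, measure) pairs over disjoint sets."""
    return sum(a * m for a, m in pieces)

# phi <= psi pointwise: compare values over the same disjoint sets,
# whose measures here are 1.0, 2.0, 1.0
phi = [(1, 1.0), (2, 2.0), (0, 1.0)]
psi = [(1, 1.0), (4, 2.0), (3, 1.0)]
assert all(a <= b for (a, _), (b, _) in zip(phi, psi))
assert simple_integral(phi) <= simple_integral(psi)
```

Writing both functions over the same disjoint sets is exactly what the refinement by A_j intersect B_k accomplishes in the proof; once that's done, the inequality holds term by term.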
So, again, you can think of that in two ways. One: the Lebesgue integral gives a theory of the area underneath the curve. Two: it's a first indication that a Riemann integrable function will also be Lebesgue integrable-- because if I have a step function like the phi in the picture, that is a Riemann integrable function, and its Riemann integral is the area underneath the curve, which agrees with the definition of the Lebesgue integral. So that should maybe indicate that the Lebesgue integral reduces to the Riemann integral whenever we're integrating a Riemann integrable function. And so next time we will define the integral of a non-negative measurable function using how we've defined the integral of simple functions, and prove some basic properties, including two of the main convergence theorems that go along with this theory of integration. So we'll stop there.
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_3_Quotient_Spaces_the_Baire_Category_Theorem_and_the_Uniform_Boundedness_Theorem.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK, so let's continue our discussion of normed spaces and Banach spaces. Last time, I introduced the space of bounded linear operators from one normed space to another. And we saw that when the target is a Banach space, this space of bounded linear operators is also a Banach space. So now we'll look at another way to obtain a normed space from a given normed space: I'll be talking about subspaces and quotients. So let me briefly recall what a subspace is; you should remember this from linear algebra. V will always be at least a vector space, usually a normed space. We say a subset-- and I've been using this notation; it doesn't mean a strict subset. I'm just used to using this notation from 18.100A, and the textbook uses it for just being a subset, not necessarily a strict subset. So, a quick note about that. A subset W of V is a subspace if for all pairs of scalars and all pairs of elements of W, the corresponding linear combination is also in W. So W is closed under taking linear combinations. And it's quite easy to prove-- I'll leave the details to you-- that a subspace of a Banach space is itself a Banach space, meaning it's complete with respect to the norm it inherits from V, if and only if it's a closed subset of V with respect to the metric induced by the norm. Let me talk our way through it real quick. Assuming W is a Banach space, we want to show W is a closed subset of V. One equivalent way of showing this is that every convergent sequence of elements from W converges to an element of W. So take a sequence in W that converges to some element of V; we want to show that element is in W. That sequence converges in V, hence is Cauchy in W.
And since W is a Banach space, there must be an element of W which it converges to. And therefore, by uniqueness of limits, the limit of the original sequence must lie in W. Now, going the opposite direction, assuming the subspace is closed, we want to show the subspace is a Banach space, meaning it's complete. Take a Cauchy sequence in W. This is also a Cauchy sequence in V, and therefore it has a limit. This limit has to be in W, since W is closed. And therefore every Cauchy sequence in W has a limit in W, so W is a Banach space. OK, I didn't write out the details, but we talked our way through it. Since we're taking a trip down memory lane anyway-- I just recalled what a subspace is-- let me recall that, given a subspace, we can obtain another vector space from the data of the space and the subspace, called the quotient. So take a subspace W of V. We define an equivalence relation on V by saying that v is related to v-prime if and only if v minus v-prime is in W. Now, what do I mean by equivalence relation? First, v is always related to v, because v minus v equals 0, which is an element of W-- W being a subspace and therefore a vector space in its own right. That's reflexivity. Second, if v is related to v-prime, then v-prime is related to v, simply because if v minus v-prime is in W, then its negative is also in W. That's symmetry. And then we also have transitivity: if v is related to v-prime and v-prime is related to v-double-prime, then v is related to v-double-prime. Just take v minus v-double-prime and add and subtract v-prime; you obtain the sum of two elements of W, which must be in W. So this is an equivalence relation. And so we define the equivalence class of v: this is the set of all v-prime in V such that v-prime is related to v.
And then we define a new set, V-mod-W, to be the set of all equivalence classes. Instead of writing the equivalence class of v, we typically write v plus W. So you think of the equivalence class of v as being v plus all elements of W; two elements are the same if they differ by an element of W. So this is a set, but it becomes a vector space with addition and scalar multiplication defined in a natural way: v1 plus W, plus v2 plus W, is the equivalence class v1 plus v2 plus W, and lambda times v plus W is just the equivalence class given by lambda times v. Now, you have to take a second and make sure these operations are well-defined, which I'm sure you did in linear algebra. What do I mean by well-defined? Well, these are equivalence classes. So if I take two other representatives-- say v1-prime and v2-prime-- do I get the same equivalence class as I did with the unprimed ones, meaning, is v1-prime plus v2-prime plus W the same equivalence class as v1 plus v2 plus W? And this is not too hard to see. So V-mod-W-- which is typically how you pronounce this, V slash W-- is a vector space. And note, we can identify W with the equivalence class 0 plus W, which is also equal to little w plus capital W for any little w in W. OK, so when we first started talking about norms, I also introduced what was called a seminorm. Now, a seminorm, recall, satisfies not all three properties a norm satisfies. It satisfies homogeneity, meaning how scalars pull out, and also the triangle inequality. But it doesn't satisfy positive definiteness, meaning the seminorm can be 0 for certain nonzero vectors, OK? How could a seminorm arise? Well, think of taking the maximum of the derivative of a function as a potential norm. It satisfies homogeneity and the triangle inequality, but it is not a norm, because the derivative of a constant function is 0.
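Here's a small numerical illustration of well-definedness, using a toy example of my own (not from the lecture): V = R-squared, W = span of (1, 1). Each class v plus W has a canonical representative obtained by subtracting v2 times (1, 1), and changing a representative by an element of W doesn't change the class of a sum or a scalar multiple.

```python
# V = R^2, W = span{(1, 1)}: v and v' are equivalent iff v - v' is a
# multiple of (1, 1).  Subtracting v[1]*(1, 1) leaves (v[0] - v[1], 0),
# so the number v[0] - v[1] labels the equivalence class v + W.
def cls(v):
    return v[0] - v[1]

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

v1, v2 = (3.0, 1.0), (0.0, 2.0)
w = (5.0, 5.0)              # an element of W
v1_alt = add(v1, w)         # another representative of the class v1 + W

# addition on V/W is well-defined: the class of the sum is independent
# of which representatives we picked
assert cls(add(v1, v2)) == cls(add(v1_alt, v2))
# scalar multiplication likewise
lam = 4.0
assert cls((lam * v1[0], lam * v1[1])) == cls((lam * v1_alt[0], lam * v1_alt[1]))
```

The general proof is the same calculation in the abstract: swapping a representative changes the result by an element of W, which is absorbed into the class.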
But constant functions aren't identically 0, if you think of them as elements of, say, the space of continuously differentiable functions. This next theorem says that if you mod out by the elements of the vector space on which the seminorm is 0, then you get a normed space where the norm is given by the seminorm. So: if you mod out by the elements with 0 seminorm, you get an actual normed space. The statement is: take a seminorm on a vector space V, and define E to be the set of all v in capital V such that the seminorm of v is 0. Then E is a subspace of V, and if I define the following function on V-mod-E-- the norm of the class v plus E is simply the seminorm of the representative v-- then this function defines a norm on the space V-mod-E. In short, if I have a seminorm and I mod out by all the elements with seminorm equal to 0, I get an actual normed space, where the norm is essentially the seminorm. OK, so let's prove this. Why is E a subspace? Well, take two elements with 0 seminorm and two scalars; I want to show the seminorm of the linear combination is equal to 0. This follows from homogeneity and the triangle inequality: applying the triangle inequality and then pulling the scalars out, the seminorm of lambda 1 v1 plus lambda 2 v2 is less than or equal to the absolute value of lambda 1 times the seminorm of v1, plus the absolute value of lambda 2 times the seminorm of v2. And since both those seminorms are 0, this is equal to 0. And remember, a seminorm is always non-negative, so if I show something is less than or equal to 0, it must be equal to 0. So first we define this function. Now let's show it defines a norm on V-mod-E. We need to first show it's well-defined, because I'm defining it in terms of a representative of the equivalence class. And what does this mean? i.e.
If I have the same equivalence class with two different representatives, v and v-prime-- that is, v plus E equals v-prime plus E-- then the seminorm of v equals the seminorm of v-prime, and therefore the function is well-defined. Now, how do we prove that? Again, it's essentially the triangle inequality. Suppose v plus E equals v-prime plus E, meaning I have two representatives of the same equivalence class. That means there exists a little e in capital E such that v is equal to v-prime plus e. Then the seminorm of v is equal to the seminorm of v-prime plus e, and by the triangle inequality this is less than or equal to the seminorm of v-prime plus the seminorm of e. Now, since e comes from the set of all vectors with seminorm equal to 0, the seminorm of e is 0, and therefore I just get the seminorm of v-prime. So I've shown that the seminorm of v is less than or equal to the seminorm of v-prime. Now, this argument is symmetric in v and v-prime: if v is equal to v-prime plus e for some element of capital E, then v-prime is equal to v minus e as well. So I can switch v and v-prime, meaning I've shown each seminorm is less than or equal to the other, and therefore the two agree. So this function is well-defined. And I'm going to leave it to you to check that this function, now well-defined on V-mod-E, does in fact satisfy all the properties of a norm. It's non-negative; from the homogeneity and triangle inequality of the seminorm, the homogeneity and triangle inequality of this function follow; and we've essentially identified all elements with seminorm 0 with 0, which is why it's also positive definite. So I'll leave that to you, OK?
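A minimal sketch of the theorem's content in coordinates, assuming my own example V = R-squared with the seminorm p of v equals the absolute value of v1 (nothing here is from the lecture): E is the second-coordinate axis, and p descends to a well-defined, positive definite norm on V-mod-E.

```python
# seminorm on R^2 that ignores the second coordinate
p = lambda v: abs(v[0])

# E = {v : p(v) = 0} = {(0, t)}; v and v' represent the same class in
# V/E exactly when their first coordinates agree
v = (3.0, 1.0)
e = (0.0, 7.5)                      # an element of E
v_alt = (v[0] + e[0], v[1] + e[1])  # another representative of v + E

assert p(v) == p(v_alt)   # the induced norm is well-defined
assert p(v) > 0           # positive definite away from the zero class 0 + E
assert p(e) == 0          # elements of E all land in the zero class
```

This mirrors the C-1 example: the "derivative sup" seminorm kills constants, and modding out by constants leaves a genuine norm on the quotient.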
OK, now, there's one other construction. In this process, we started with a seminorm on the space, identified the subspace of, if you like, the zero-seminorm elements, and obtained a new normed space. But you could instead start with a normed space and a closed subspace W of that normed space, and obtain a new normed space on V-mod-W in similar fashion, defining the norm of v plus W as the infimum over w in W of the norm of v plus w. The norm on that space won't be given by the same formula as this one, but that'll be in the exercises, OK? OK. So that concludes the elementary section of functional analysis, meaning the bare bones of Banach spaces and normed spaces. And now, with the rest of this lecture and then next lecture, we're going to get into the fundamental results of functional analysis related to Banach spaces. The theorems I'll now be stating and proving in the coming couple of lectures have names attached to them. So you should definitely know what they are, what they state, and what they don't state. But to prove them, I first need a result from metric spaces, which I did teach last semester, but we didn't cover this theorem. I can't remember if I covered it when I taught 18.100B; I don't think I did. So let me just state the theorem, and then I'll interpret it for you. This is Baire's theorem. It also goes by the name of the Baire category theorem, although it has nothing to do with category theory as the term is used today; Baire's category theorem is the only category theorem that I know. So what does it state? If M is a complete metric space-- again, this is a theorem about metric spaces, so not necessarily normed spaces-- and C_n is a collection of closed subsets of M such that M is their union.
Here the index n ranges over the natural numbers. The conclusion is that at least one C_n contains an open ball B(x, r)-- recall this is the set of all y in M such that the distance from x to y is less than r. "At least one C_n contains an open ball" could be stated more succinctly as: at least one C_n has an interior point. Sometimes in applications these C_n's are not necessarily closed; but if M is equal to the union of these sets, then it's also equal to the union of their closures. So what the theorem says is that if you write a complete metric space as a countable union of sets, then the closure of one of those sets has to contain an open ball. Now, there's specific terminology for when the closure of a set does not contain an open ball: such a set is called nowhere dense. So Baire's theorem says you cannot write a complete metric space as a countable union of nowhere dense subsets. This theorem is quite simple to state, and it's very useful and quite powerful in applications. In fact, you can use this theorem to give an alternative proof of something you hopefully saw in your analysis class: that there exists a continuous function which is nowhere differentiable. Maybe I'll put that in the exercises, or maybe I'll just point you to it somewhere; we'll see. All right, so the proof is by contradiction-- I believe the first contradiction proof we've done in this class. In 18.100A and B, everything starts off by contradiction. So let's suppose not, i.e. there is a collection of closed subsets C_n of M, none containing an open ball, such that M is equal to their union. And what we're going to do is find a point that's not in this union and therefore not in M.
And that will be our contradiction. We'll use the completeness of M to obtain this point, because we're going to obtain it as the limit of a certain sequence; to be able to say that this limit exists, we're going to show the sequence is Cauchy. All right, so we're going to build up this sequence inductively, and I will write this inductive proof carefully this first time. In the future, I'm just going to say: choose p1 this way, choose p2 this way, and continuing in this manner we obtain a sequence of points, blah, blah, blah. But at this stage, I'll write the induction carefully. OK, so suppose not. Since M certainly contains an open ball and C1 cannot contain an open ball, M does not equal the first closed set-- otherwise C1 would contain an open ball. Thus there exists an element p1 in M take away C1. Now, C1 is closed, so its complement is open, and therefore I can find a small epsilon 1 such that the ball centered at p1 of radius epsilon 1 intersected with C1 is empty. Now I deal with C2. The ball centered at p1 of radius epsilon 1 over 3 is not contained in C2 because, again, we're assuming none of these closed sets contains an open ball. Therefore there exists some point p2 in the open ball centered at p1 of radius epsilon 1 over 3 such that p2 is not in C2. And again, C2 being closed implies there exists an epsilon 2 less than epsilon 1 over 3-- so we can make it very small if we wish-- such that the ball centered at p2 of radius epsilon 2 intersected with C2 is empty. So now I've picked p1 and p2. At this point I would usually say, continuing in this manner, we obtain a sequence of points; but let me write out the argument carefully.
Suppose there exist k points p1, ..., pk and positive numbers epsilon 1, ..., epsilon k such that two things occur. First, epsilon k is less than epsilon k minus 1 over 3, which is less than epsilon k minus 2 over 3 squared, and so on, all the way down to epsilon 1 over 3 to the k minus 1. And second, for j from 2 up to k, p_j is in the ball centered at p_{j minus 1} of radius epsilon_{j minus 1} over 3, and the ball centered at p_j of radius epsilon_j intersected with C_j is empty. So let me star those two properties right there; we're going to obtain another point satisfying them. And I don't know why I'm being a stickler-- it doesn't really matter-- but the first of these containments starts at j equals 2, because it's not necessarily satisfied for p1. Now, I want to show I can obtain a (k plus 1)-st point satisfying those two properties. Again, this is just formally saying that once I've chosen p2, I can choose a p3 by doing this argument again, and it'll satisfy everything p2 satisfied with epsilon 1, except now with epsilon 2; I'm just making it more formal. So again, M is the union of all these closed sets, none of which contains an open ball. So since the ball centered at pk of radius epsilon k over 3 is not contained in C_{k plus 1}, there exists an element p_{k plus 1} in that ball such that p_{k plus 1} is not in C_{k plus 1}. Picture the ball centered at pk of radius epsilon k over 3 alongside C_{k plus 1}: maybe they're not disjoint, maybe there's some overlap, but I can find a point p_{k plus 1} in the ball that's not in C_{k plus 1}. Then, because C_{k plus 1} is closed and p_{k plus 1} is not in it, there exists an epsilon_{k plus 1}, which I can choose very small-- say smaller than epsilon k over 3-- such that the ball centered at p_{k plus 1} of radius epsilon_{k plus 1} intersected with C_{k plus 1} is empty. So given k points, I can then choose a (k plus 1)-st point satisfying these two bullet points.
So by induction, we have found a sequence of points pk in M and positive numbers epsilon k such that for all k, the two bullet points-- which I marked with a star-- hold. So when I say star, I mean those two statements. OK, now let's show that the sequence is Cauchy. I'm not going to write out the full epsilon argument, but I'll write down the crucial estimate that proves it. It follows from the fact that for all k and for all l-- because in the end, to show something is Cauchy, I need to estimate the distance between two points, one occurring earlier in the sequence and one occurring later-- if I look at the distance between pk and pk plus l, then by the triangle inequality, applied repeatedly, this is less than or equal to the distance from pk to pk plus 1, plus the distance from pk plus 1 to pk plus 2, and so on, until I get to the distance between pk plus l minus 1 and pk plus l. Now, by star-- what's in yellow-- pk plus 1 is in the ball centered at pk of radius epsilon k over 3, so the first term is less than epsilon k over 3; likewise the next is less than epsilon k plus 1 over 3, and the last is less than epsilon k plus l minus 1 over 3. And by the first bullet point, epsilon k plus j over 3 is less than epsilon 1 over 3 to the k plus j. So the whole sum is less than epsilon 1 over 3 to the k, plus epsilon 1 over 3 to the k plus 1, and so on, up to epsilon 1 over 3 to the k plus l minus 1. And now I can actually sum this up.
Summing the geometric series: this is less than epsilon 1 times the sum from m equals k to infinity of 3 to the minus m, which is equal to epsilon 1 times 3 to the minus k times 1 over 1 minus 1/3, which equals epsilon 1 over 2 times 3 to the minus k plus 1. So the distance between a point pk and pk plus l is less than a constant times 3 to the minus k plus 1. If k is very large, this is very small, independent of l. So this shows that the sequence is Cauchy. Since M is complete, there exists a p in M such that the pk converge to p. And now we're going to show this point p does not lie in any of the C_j's, and we'll do it by showing it's essentially in all of these balls centered at p_j of radius epsilon_j. It's a similar computation to the one we just did. For all natural numbers k, look at the distance between pk plus 1 and pk plus 1 plus l. By the same estimate as before, this is less than epsilon k plus 1 times the quantity 1 over 3 plus 1 over 3 squared plus and so on up to 1 over 3 to the l. And this is less than epsilon k plus 1 times the infinite sum, m equals 0 to infinity, of 3 to the minus m, which equals epsilon k plus 1 times 3/2. OK, so we've proven that the distance between pk plus 1 and pk plus 1 plus l is less than epsilon k plus 1 times 3 over 2. Let me take the limit as l goes to infinity: pk plus 1 plus l converges to the point p, so I get that the distance between pk plus 1 and the point p is less than or equal to 3/2 epsilon k plus 1. And remember, epsilon k plus 1 is less than a third of epsilon k, so this is less than 1/2 epsilon k. And since pk plus 1 is in the ball centered at pk of radius 1/3 epsilon k, I get that the distance between pk and p is less than or equal to the distance between pk and pk plus 1, plus the distance between pk plus 1 and p.
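The two geometric-series evaluations used in these estimates-- the tail, sum over m at least k of 3 to the minus m equals one half times 3 to the minus k plus 1, and the full sum, sum over m at least 0 of 3 to the minus m equals 3/2-- are easy to confirm numerically. A quick check of my own:

```python
# tail used in the Cauchy estimate:
# sum_{m >= k} 3^{-m} = 3^{-k} / (1 - 1/3) = (1/2) * 3^{-(k-1)}
k = 5
tail = sum(3.0 ** -m for m in range(k, 200))
assert abs(tail - 0.5 * 3.0 ** (-(k - 1))) < 1e-12

# full series used for the bound on d(p_{k+1}, p):
# sum_{m >= 0} 3^{-m} = 1 / (1 - 1/3) = 3/2
full = sum(3.0 ** -m for m in range(0, 200))
assert abs(full - 1.5) < 1e-12
```

Truncating at 200 terms loses only about 3 to the minus 200, far below the tolerance, so the finite sums stand in for the infinite series here.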
We get this is less than or equal to 1/3 epsilon_k plus 1/2 epsilon_k. And this is less than epsilon_k, which means that p is in the ball centered at p_k of radius epsilon_k, which, by the second property up in star — the fact that each of these balls is disjoint from C_k — means p is not in C_k. Now, k was arbitrary here. So we conclude that p is not in this union, which is equal to M. And that's a contradiction. So I mean, I said the strategy at the top of the proof; the technical argument maybe muddled what was going on. But again, the point is: if this conclusion does not hold, you can build a sequence which has this disjointness property from this collection of closed sets, such that the sequence is Cauchy. And because M is a complete metric space, you can then extract a limit p. This limit p will have the property that these p_k's had, namely that it's not in any of the C_k's. And therefore, this point p does not lie in the metric space, which is nonsense. So we arrived at our contradiction, and therefore Baire's category theorem is proven. OK, so that's Baire's category theorem. Let's now use this to prove some fundamental results of functional analysis. So the first result we'll prove is what's called the uniform boundedness theorem, which says that if you have a sequence of bounded linear operators on a Banach space, then pointwise boundedness implies uniform boundedness in the operator norm. So let B be a Banach space. And let Tn be a sequence of bounded linear operators — the space of which we denoted last time by this script B — from B to some other normed space V. So I didn't write that, but V is a normed space. If for all b in capital B, the sup over n of the norm of Tn b is less than infinity — so if I assume that this sequence is pointwise bounded — then I can conclude that they're uniformly bounded, namely the sup over n of the operator norms is finite.
So we're going to use the Baire category theorem. We're going to write the closed unit ball of the Banach space B as a union of closed subsets. What will these closed subsets be? Remember, we're trying to find a uniform bound on the Tn. So, just playing around, let's define subsets C_k of B: C_k is the set of all elements b in B such that the norm of b is less than or equal to 1 and the sup over n of the norm of Tn b is less than or equal to k. All right, so first off, these are closed. I should say, what is k? k here is a natural number. So, C_k is closed — why is this? Well, we need to show that a convergent sequence of elements from C_k converges to an element of C_k. So if b_n is a sequence of elements in C_k, and b_n converges to b, then the norm of b is equal to the limit as n goes to infinity of the norm of b_n. And each of these is less than or equal to 1, so the limit has to be less than or equal to 1. And — since n is already indexing the elements b_n, let me use m to index the operators — for all m a natural number, the norm of Tm applied to b is equal to the limit as n goes to infinity of the norm of Tm b_n, because these are bounded linear operators and therefore continuous. OK, so Tm of b is equal to the limit as n goes to infinity of Tm of b_n. And now, the norm of Tm of b_n is always less than or equal to k, because the b_n's are in C_k and the sup over m of these is less than or equal to k. So the limit is also less than or equal to k. Thus, b is in C_k, which implies C_k is closed. Now, the closed unit ball — the set of b in capital B such that the norm of b is less than or equal to 1; I define the closed ball directly, not as the closure of the open ball — is equal to the union over all k of the C_k's. Why? So first off, each of these is contained in this closed ball. And conversely, I'm assuming by star that no matter what b is in the Banach space, the sup over n of these norms is less than infinity.
So there's always some integer k: given a certain b in this set, I can always find a large enough integer such that the sup is less than or equal to that integer. So every element in here has to lie in one of these sets C sub k, and the closed ball is equal to the union. Now, this closed ball is a complete metric space, because it's a closed subset of the complete space B. So this is a complete metric space written as a union of closed subsets. So by Baire's theorem, there exists one of these sets containing an open ball of the form B(b0, delta0). So one of these C_k's contains an open ball. Now, we're going to use this k to derive a uniform bound. So let's write it this way: if b is in the open ball of radius delta0 centered at 0 — i.e., the norm of b is less than delta0 — then b0 plus b is in this open ball centered at b0, which is contained in C_k. And therefore, the sup over n of the norm of Tn of b0 plus b is less than or equal to k. But then I conclude that the sup over n of the norm of Tn b — here I'm going to add and subtract Tn b0, so let me write Tn b as Tn of b0 plus b, minus Tn b0 — is less than or equal to, by the triangle inequality and then carrying the sup through, the sup over n of the norm of Tn b0 plus the sup over n of the norm of Tn of b0 plus b. And b0 is certainly in this ball, which is contained in C_k, so the first sup is less than or equal to k. And b0 plus b is still in that ball of radius delta0 centered at b0, so it lies in C_k, and therefore the second sup is also less than or equal to k. So the total equals 2k. So I've shown that if I take any element b in the open ball of radius delta0 centered at 0, then the sup over n of the norm of Tn b is less than or equal to 2k, all right? And now, it's just a simple rescaling argument to show that the sup of the operator norms is bounded. So suppose the norm of b equals 1.
Then the norm of Tn applied to delta0 over 2 times b is — for all n — certainly less than or equal to the sup over n of all of these, which is less than or equal to 2k, because delta0 over 2 times b is an element with norm delta0 over 2, which is less than delta0. So this is less than or equal to 2k. And since Tn is linear, the norm of Tn applied to delta0 over 2 times b is delta0 over 2 times the norm of Tn applied to b, which implies that the norm of Tn applied to b is less than or equal to 4k over delta0. Now, this holds for all b with norm equal to 1. And therefore, the operator norm — which is the sup over all b with norm equal to 1 — is less than or equal to 4k over delta0. This holds for all n, and therefore the sup over n of the operator norms is less than or equal to 4k over delta0, therefore giving us the uniform bound. Do I have time for more? OK, I don't think I have time for the entire proof of what's to come. So I think we'll just stop there for now.
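Completeness of the domain is essential in this theorem. A classic counterexample (my sketch, not part of the lecture) lives on the incomplete normed space c_00 of finitely supported sequences with the sup norm: the functionals T_n(x) = n x_n are pointwise bounded but their operator norms are unbounded.

```python
# Sketch of the standard counterexample on the incomplete space c_00 of
# finitely supported sequences (sup norm).  T_n(x) = n * x_n is a bounded
# linear functional with operator norm n.  For each FIXED x, sup_n |T_n x| is
# finite (x_n = 0 past the support), yet sup_n ||T_n|| = infinity -- so the
# uniform boundedness theorem really does need a complete domain.

def T(n, x):
    """Apply T_n to a finitely supported sequence x, stored as a list."""
    return n * x[n - 1] if n - 1 < len(x) else 0.0

def op_norm(n):
    """||T_n|| = n, attained at the n-th standard unit vector."""
    return n

x = [5.0, -2.0, 7.0]  # a fixed element of c_00
pointwise_sup = max(abs(T(n, x)) for n in range(1, 1000))  # finite for this x
```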
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_11_The_Lebesgue_Integral_of_a_Nonnegative_Function_and_Convergence_Theorems.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: We defined the Lebesgue integral for simple functions, which have this canonical representation as a finite linear combination of indicator functions on sets which are pairwise disjoint and whose union gives me E. We defined their integral — the integral of phi was defined to be the sum, as j goes from 1 to n, of aj times the measure of capital Aj. And we proved a couple of properties of this. Namely, we proved that if I multiply a simple function by a non-negative scalar, then the scalar pulls out of the integral: the integral of the scalar multiple of phi is equal to the scalar multiple of the integral. If I have two non-negative simple functions and I add them, I get another non-negative simple function, and the integral of the sum is the sum of the integrals. And we also proved that if I have two simple functions, one less than or equal to the other, then the integrals respect this inequality: the integral of the smaller one is less than or equal to the integral of the bigger one. So now, we're going to define the Lebesgue integral for a general non-negative measurable function. In some sense, one should view the Lebesgue integral the way one views the Riemann integral — at least when you think about the Riemann integral as built up from approximations, where you cut up the domain, choose points in between, and form these boxes. Now, you have some freedom in how you choose these boxes that approximate the integral of f. But one way to choose the boxes — at least when I picture it, I always picture the boxes sitting below the graph of f. And as you dice up the domain smaller and smaller, these boxes are getting narrower, and your approximation is filling in the area underneath the curve from below.
And we've already seen that for non-negative measurable functions, there always exists a sequence of simple functions that increases to f. So for every x I stick into the sequence of functions — phi 1 of x is less than or equal to phi 2 of x is less than or equal to phi 3 of x and so on — these phis are increasing pointwise to f. So if you were trying to build this on your own, you would think: OK, let me define the integral of a non-negative measurable function as the limit of the integrals of a sequence of simple functions increasing to f, which I know exists, because we constructed one. And that's one way to do it, and some textbooks do that. But you come across this problem: the number you've defined as the limit of these integrals might depend on the sequence of simple functions that you took in the beginning. So we're not going to quite do that. In the end, we'll see that the number we define can be obtained as the limit of the integrals of simple functions. So for a general non-negative measurable function, we define the integral of f over E to be the sup of the integral over E of phi, where phi is a non-negative simple function and phi sits below f. So in some sense, this is like taking the integral of f to be — I don't want to say limit, because it's not exactly a limit — the thing that's getting filled up by the integrals of all the simple functions that sit below the graph of f. OK, so let me just prove a very simple theorem, which is useful. Suppose E is a subset of R with outer measure 0. Remember, all sets of outer measure 0 are measurable. So I don't have to separately say that E is measurable: when I say this, I'm kind of saying two things at once — the outer measure of E is 0, and therefore E is measurable.
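The dyadic simple functions constructed earlier in the course make this sup definition concrete. Here is a numerical sketch (my own example, with f(x) = x^2 on [0, 1), whose integral is 1/3): phi_n = min(floor(2^n f)/2^n, n) is simple, sits below f, and its integrals — approximated here by a fine grid — increase toward the sup, namely the integral of f.

```python
# Sketch (my example): the standard increasing simple approximations
#   phi_n(x) = min( floor(2^n f(x)) / 2^n , n )
# sit below f, and their integrals fill in the sup that defines the Lebesgue
# integral.  For f(x) = x^2 on [0, 1) the integral is 1/3.  The integral of
# each simple function is approximated by a left-endpoint sum on a fine grid.

def simple_integral(f, n, N=100000):
    dx = 1.0 / N
    total = 0.0
    for i in range(N):
        x = i * dx
        phi = min(int((2 ** n) * f(x)) / (2 ** n), n)  # floor to the dyadic grid
        total += phi * dx
    return total

f = lambda x: x * x
vals = [simple_integral(f, n) for n in range(1, 12)]  # increasing toward 1/3
```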
So then for all f that are non-negative and measurable on E, the integral over E of f is 0. OK, so it's only really interesting to take the integral over sets that have positive measure: no matter what function I take, the integral over a set of measure 0 is 0. This is kind of akin to, in Riemann integration, the integral over a point being 0. But now we have more interesting sets of measure 0 than just a point. So what's the proof of this? Well, we don't have much to go off of. We just have the definition, so let's use the definition. I should start the proof off with: let f be in L plus of E. Let phi be simple, with the canonical representation, and with phi less than or equal to f. What I'd like to show is that the integral of phi is 0 — and therefore the integral of f, which is the sup over all of these integrals, is 0. But this is clear, because since all of these Aj's are subsets of E, which is a set of measure 0, the measure of Aj equals 0 for all j. And therefore, the integral of phi over E, which is equal to the sum of aj times the measure of Aj, equals 0. And thus the integral of f, which is the sup over all of these, is just the sup of 0, which is 0. OK, so it's only interesting to take the integral over sets of positive measure. Now, we have a few facts: one of them carries over from what we did for simple functions, and the others just follow from the definition. It'll be an exercise in the assignment, and it's the following. One: if phi is a non-negative simple function, then the two definitions of the integral of phi agree — and I'll say exactly what I mean in just a second. Two: suppose f and g are in L plus of E, c is a non-negative real number, and f is less than or equal to g on E. Then a couple of things.
The integral of c times f is equal to c times the integral of f, and the integral over E of f is less than or equal to the integral of g over E. And one final property is the following. Three: if f is a measurable function on E and capital F is a measurable subset of E, then I can integrate f also over capital F. So this is in, I think, maybe assignment 5: if I have a function which is measurable on a set, and I take any measurable subset of that set, then f restricted to this subset is also measurable. So what I'm about to say makes sense. Then the integral of little f over capital F is equal to the integral over E of little f times the indicator function of capital F, which is less than or equal to the integral over E of little f. So I think the rest of the statements, statements 2 and 3, are completely unambiguous. Maybe you were wondering what exactly I meant by 1. So we had kind of two definitions of the integral of a simple function, right? We defined it first as the sum of the coefficients times the measures of these sets. And then we also have a second definition, because a measurable non-negative simple function is also in this set L plus. So I should have that what's here is equal to what's on the right. So statement 1 is the statement that what's underlined three times is equal to what's underlined four times. That's a lot of lines. All right, so this will be a fairly straightforward exercise, just using the definitions and also what we did for simple functions. One consequence of this theorem and the one before it is the following: I can relax this condition here in 2 to an almost-everywhere statement. So if f and g are non-negative measurable functions, and f is less than or equal to g almost everywhere on E, then the integral over E of f is less than or equal to the integral of g. So let's write the proof of this. Let capital F be the set of all x's in E such that f of x is less than or equal to g of x.
So it's not difficult to realize that this is a measurable set. Let's do it this way: write this as the inverse image under g minus f of the interval from 0 to infinity. And since g and f are measurable functions, their difference is measurable, and on this set the difference is non-negative. OK, so maybe there's a small issue with what happens at infinity, but you're dealing with that in the assignment. So you'll just have to accept that this is measurable, under the wisdom that I gave you that if you can write it down, typically it's measurable. And what is the measure of the complement of F? It's 0, because the inequality is supposed to hold almost everywhere. So then the integral of little f over E, which is equal to the integral of little f over F union F complement, is equal to the integral of little f over F plus the integral of little f over F complement. So strictly speaking, I didn't write down why this is true. But let's think about it just for a moment. These are two disjoint measurable subsets that make up E. Why is the integral over the union the sum of the two integrals? Well, it's true for simple functions — I mean, if I make this statement and assume f is simple, then it's not hard to convince yourself that it's true. And if it's true for simple functions, then, from how we've defined the integral as this sup, it will carry over to general non-negative measurable functions. So let's just accept this, and you can prove it on your own; it's not difficult. But since we have this, and F complement is a set of measure 0, the second piece is 0, so this is the integral of little f over F. Now, on capital F, little f is less than or equal to g. So this is less than or equal to the integral of g over capital F.
And just going backwards, this is less than or equal to the integral of g over F plus the integral of g over F complement, which is equal to the integral of g over E. So, modulo this equality about disjoint unions, which I leave to you to fill in, that proves the theorem. OK, so now we have the definition of the Lebesgue integral for non-negative measurable functions, and we have some properties of it. What's kind of missing from this list that I've given so far is linearity, right? The integral of the sum of two non-negative measurable functions is equal to the sum of the integrals. We had that for simple functions. How do we get that for general non-negative measurable functions? OK, so what I'm about to prove is not just a tool for proving that, but is one of the big three convergence theorems that you find in Lebesgue measure and integration: the monotone convergence theorem, which is the following. If fn is a sequence of non-negative measurable functions such that f1 is less than or equal to f2 is less than or equal to f3 and so on on E — so the sequence is pointwise increasing — and there exists a function f so that fn goes to f pointwise on E (let me just recall, this means for all x in E, the limit as n goes to infinity of fn of x equals f of x; in particular, f is going to be a non-negative measurable function, because remember, the pointwise limit of measurable functions is measurable, so what I'm about to say makes sense) — then the integrals converge to the integral of the limit. So the limit of the integrals is the integral of the limit. So this is a much stronger statement than anything you come across in Riemann integration. Riemann integration usually requires uniform convergence, while here, at least for monotone sequences, we just need pointwise convergence. So I think there is a version of this theorem that one could state for Riemann integration. But still, just on the face of it, you have a pointwise statement implying convergence of integrals.
So that should immediately suggest to you that what we've built up, this Lebesgue integration, is much more powerful than Riemann integration. So let's prove the theorem. Since f1 is less than or equal to f2 is less than or equal to f3 and so on, this implies the integral over E of f1 is less than or equal to the integral of f2, and so on — which implies that the limit as n goes to infinity of the integrals of fn exists in the interval from 0 to infinity. So this is an increasing sequence of non-negative extended real numbers: each of these integrals is a non-negative number, because each is a sup over non-negative numbers. So I have an increasing sequence of non-negative numbers, which either has a finite limit or must go to infinity. That's not difficult to prove from basic analysis. Moreover, since they're pointwise increasing and converging for all x, the fn's all sit below f. For each x, f1 of x, f2 of x, f3 of x and so on is an increasing sequence of real numbers converging to f of x, which is either a finite number or infinity. So these numbers are increasing to this limit, and therefore they must always sit below the limit. And since all of these functions sit below f, this implies that for all n, the integral of fn over E is less than or equal to the integral of f. And therefore, the limit, which we know exists as either a finite number or infinity, is less than or equal to the integral of f. So just based on the assumptions, we immediately get that one of these quantities that we want to show is equal to the other is less than or equal to the other quantity. So, standard trick of analysis: if you get for free that one quantity is less than or equal to the other quantity, and you want to show they're equal, let's try and go the reverse direction.
So now we show that the integral of f over E is less than or equal to the limit as n goes to infinity of the integral over E of fn, and therefore the two are equal. All right. Now, to show this, we're going to show that for every simple function less than or equal to f, the integral of that simple function sits below this limit. So here's the game plan. We know the fn's are increasing to f. So if I take a simple function that is strictly below f, then eventually fn is going to pass it up, right? Because the fn's are increasing to f and the simple function sits below f. And therefore, eventually, the integral of fn should be bigger than or equal to the integral of that simple function. Now, we're only requiring the simple function to be less than or equal to f, so we'll give ourselves a little bit of room, and then send that bit of room to 0. So let phi be a non-negative simple function, phi equals the sum from j equals 1 to m of aj times chi Aj, with phi less than or equal to f. And our goal is to show that the integral of phi is less than or equal to the limit as n goes to infinity of the integrals of fn. So here's that little bit of room I was referring to. Let epsilon be a small number between 0 and 1, and let En be the set of all x's in E such that fn of x is greater than or equal to 1 minus epsilon times phi of x. So note that for all x in E where phi of x is positive, 1 minus epsilon times phi of x is now strictly less than phi of x, and hence strictly less than f of x. We had phi of x less than or equal to f of x; but if I multiply this by a number that's slightly less than 1, then I have strict inequality. So in particular — since for each x in E, 1 minus epsilon times phi of x sits strictly below f of x wherever phi of x is positive, and trivially below fn of x wherever phi of x is 0 — every x must eventually lie in one of these E sub n's, right?
The fn's are approaching f of x, so fn of x must pass up the value 1 minus epsilon times phi of x at some point in its quest to get to f of x, or at least close to f of x. So simply from this fact, the union over n equals 1 to infinity of the En's gives me E. Let me highlight this. Now, since these functions are pointwise increasing — I should say, not that they are increasing functions, but they are pointwise increasing, so f1 is less than or equal to f2 is less than or equal to f3 and so on — this implies that E1 is contained in E2 is contained in E3 and so on. If I have some n so that x is in E sub n — so fn of x is bigger than or equal to 1 minus epsilon times phi of x — then fn plus 1 of x is bigger than or equal to fn of x, which is bigger than or equal to 1 minus epsilon times phi of x. And therefore, that x is in En plus 1. So this is not only a sequence of sets whose union gives me E; they're an increasing sequence of sets, increasing in the sense of inclusion, OK? Now, we're going to use these two highlighted things in just a minute, along with continuity of Lebesgue measure, to get what we want. So we have: the integral over E of fn is bigger than or equal to the integral of fn over the smaller set E sub n. Now, on E sub n — remember, the E sub n's are defined as where fn is bigger than or equal to 1 minus epsilon times phi — so this is bigger than or equal to the integral over E sub n of 1 minus epsilon times phi, which is equal to 1 minus epsilon times the integral over E sub n of phi. And this is, by definition, equal to 1 minus epsilon times the sum from j equals 1 to m of aj times the measure of Aj intersect E sub n. Here I'm writing m for the number of terms in the simple function, since n is already indexing the functions; we do not want another n right there. It's just a fixed finite number depending on the simple function. So I have this for all n.
And therefore, the limit as n goes to infinity of the integral over E of f sub n is bigger than or equal to the limit as n goes to infinity of 1 minus epsilon times the sum from j equals 1 to m of aj times the measure of Aj intersect E sub n. Now, the E sub n's are increasing to E. So in fact, let's pause on this real quick and come back to this thing: we're going to eventually take the limit as n goes to infinity of this quantity, so let's look at what the measure of Aj intersect E sub n does as n goes to infinity. By those two things that I highlighted — the En's are increasing subsets of E whose union gives me E — I get E1 intersect Aj is a subset of E2 intersect Aj is a subset of E3 intersect Aj and so on, and the union over n equals 1 to infinity of E sub n intersect Aj equals Aj — because this union is just E intersect Aj, which is just Aj. So by the continuity of Lebesgue measure, this implies that for all j, the limit as n goes to infinity of the measure of Aj intersect En is equal to the measure of E intersect Aj — which, remember, is equal to Aj, so this is the measure of Aj. So from the two yellow boxes we had before, we get this useful one: for all j, the limit as n goes to infinity of the measure of Aj intersect En is equal to the measure of Aj. So now, we'll stick this into that inequality after we take the limit. So I got ahead of myself a minute ago. Thus, the limit as n goes to infinity of the integral over E of fn is bigger than or equal to the limit as n goes to infinity of 1 minus epsilon times the sum from j equals 1 to m of aj times the measure of Aj intersect En. Now, each of these measures converges, as n goes to infinity, to the measure of A sub j. So the limit is then equal to 1 minus epsilon times the sum from j equals 1 to m of aj times the measure of Aj.
And this is just equal, by definition, to 1 minus epsilon times the integral of phi. So I've shown that for all epsilon between 0 and 1, 1 minus epsilon times the integral of phi is less than or equal to this number over here, which may be infinite, may be finite. And since this holds for all epsilon, I can send epsilon to 0. So I have this inequality between fixed things along with an epsilon here, so I can send epsilon to 0, and I get that the integral of phi is less than or equal to the limit as n goes to infinity of the integrals of fn. And since phi is an arbitrary simple function that's less than or equal to f, the sup over all of these — which is, by definition, the integral of f — is less than or equal to the limit of the integrals. All right. That's the end of the proof. So, the monotone convergence theorem: a very useful theorem, an important theorem throughout all of this. OK, so let's get a few applications from this. So this first one is a way of actually evaluating the integral. Remember, the integral, which I just erased, is defined as the sup over all integrals of simple functions below f. So in order to compute the integral of a non-negative measurable function from the definition, I would have to find the integral of every simple function less than or equal to it and take the sup over that whole set, which is kind of a useless, or impossible, way of computing the integral. It's similar to when you come across Riemann integration and the Riemann integral is defined as a limit of Riemann sums: you can compute maybe three integrals in your life directly from Riemann sums. So we need a more efficient way of computing the Lebesgue integral, and the monotone convergence theorem gives us that kind of for free.
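A numerical illustration of the theorem just proved (my example, not the lecturer's): f_n(x) = min(n, 1/sqrt(x)) on (0, 1] increases pointwise to the unbounded f(x) = 1/sqrt(x), so the convergence is far from uniform; still, a direct computation gives the integral of f_n as 2 - 1/n, which increases to 2, the integral of f, exactly as the monotone convergence theorem promises.

```python
# Sketch (my example): f_n(x) = min(n, x^(-1/2)) increases pointwise on (0, 1]
# to f(x) = x^(-1/2), whose integral is 2.  Exactly: integral of f_n equals
# n * (1/n^2) + [2 sqrt(x)] from 1/n^2 to 1 = 2 - 1/n.  A midpoint-rule
# quadrature of each (bounded) f_n confirms the exact values, increasing to 2.

def integral_fn(n, N=200000):
    dx = 1.0 / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * dx  # midpoint rule avoids the endpoint x = 0
        total += min(n, x ** -0.5) * dx
    return total

ns = (1, 2, 4, 8)
approx = [integral_fn(n) for n in ns]   # numerical integrals of f_n
exact = [2 - 1 / n for n in ns]         # closed-form integrals of f_n
```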
So we have the following: if f is a non-negative measurable function, and phi n is a sequence of simple functions which are all non-negative, pointwise increasing, and converging pointwise to f, then the integral over E of f is equal to the limit as n goes to infinity of the integrals of the simple functions. So back when we discussed measurable functions, we actually constructed such a sequence of simple functions that satisfies the hypotheses of this theorem, so this is not a vacuous theorem. And this theorem tells you that if you want to compute the integral of f, just take any sequence of simple functions increasing up to f and compute the limit of their integrals. That'll give you the integral of f. This follows immediately from the monotone convergence theorem, taking fn equal to the phi n's — so there's no proof to go with that. The next theorem is linearity of the integral: if f and g are two non-negative measurable functions, then the integral of f plus g is equal to the integral of f plus the integral of g. Now, note there's no ambiguity in how to define f plus g here. There was some touchy business about adding and subtracting two extended real valued measurable functions, but there's none of that here, since these are both non-negative measurable extended real valued functions: the only case that comes up is infinity plus infinity, which we define to be infinity. So just let me make that small note. So the integral is linear — what's the proof? Let phi n and psi n be two sequences of simple functions increasing to f and g, respectively. So 0 is less than or equal to phi 1 is less than or equal to phi 2 and so on, and phi n converges to f pointwise on E — I should say, everything's happening on this set E. And the same for the psi's, with psi n converging to g pointwise.
Then, if I take the sum of these two sequences of simple functions, I get an increasing sequence of simple functions, and phi n plus psi n converges to f plus g pointwise. And by this theorem that followed immediately from the monotone convergence theorem, I get that the integral over E of f plus g is equal to the limit as n goes to infinity of the integral over E of phi n plus psi n. And now, we've proved linearity of the integral for simple functions, so this is equal to the limit as n goes to infinity of the integral over E of phi n plus the integral over E of psi n. And again, by the theorem that I stated a minute ago — by the monotone convergence theorem — the first converges to the integral of f, and the second converges to the integral of g. So the limit of the sum is the sum of the limits, and I get the integral of f plus the integral of g. OK. Using the same kind of argument, if you like — except now not for two functions, but for one function — you can show that the integral of a function over a union of two disjoint sets is the sum of the integrals. This is something that I pointed to but didn't prove at the very beginning of this lecture. But using the fact that it is true for simple functions, and this argument using the monotone convergence theorem — which didn't require what I had proved earlier, so this is not a circular argument — you can prove that the integral of a non-negative measurable function over a union of two disjoint measurable sets is the sum of the integrals, one over the first set, one over the second set. All right, so that's cool: the integral of the sum of two measurable functions is the sum of the integrals. What's even better is that the integral of an infinite sum is equal to the infinite sum of the integrals. So let's just do this: if fn is a sequence of non-negative measurable functions, then the integral over E of the sum is equal to the infinite sum of the integrals.
Well, first off, this is a well-defined function, because pointwise, for each x, it's a sum of non-negative extended real numbers: that's either going to be a finite number, if the series converges, or it's going to be infinite, all right? Remember, we're allowing extended real valued non-negative measurable functions in our framework for now, so this is meaningful. And it's a measurable function, by stuff we proved in the section on measurable functions. OK, so the proof is pretty short. By an induction argument and the previous theorem for the sum of two functions, we have the statement that for every fixed natural number capital N, the integral over E of the sum from n equals 1 to capital N of fn is equal to the finite sum of the integrals. So if you can do something for two, usually you can do it for N by an induction argument. So I'll leave the details of this to you, or you can just believe it based on how many induction arguments you've done in your life. So we have this. And the partial sums are increasing: the sum from n equals 1 to 1 of fn is less than or equal to the sum from n equals 1 to 2 of fn, which is less than or equal to the sum from n equals 1 to 3 of fn, because these are all non-negative functions, and adding non-negative functions to something only increases it. And the partial sums converge pointwise to the infinite sum, simply because the infinite sum is defined, at each x, to be the limit as capital N goes to infinity of the partial sums. All right, since I have these two things, then by the monotone convergence theorem, I get that the integral over E of the sum from n equals 1 to infinity of fn is equal to the limit as capital N goes to infinity of the integral over E of the sum from n equals 1 to capital N — which, by what we have right here, is equal to the limit as capital N goes to infinity of the finite sum of the integrals. And this is, by definition, the infinite sum. So for non-negative measurable functions, the integral of the sum is equal to the sum of the integrals, even for an infinite sum.
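Here is a concrete check of the infinite-sum theorem (my example, not from the lecture): take f_n(x) = x^(n-1)(1 - x) on [0, 1). Each integral is 1/n - 1/(n+1) = 1/(n(n+1)), so the sum of the integrals telescopes to 1; and the pointwise sum is the geometric series (1 - x) times 1/(1 - x) = 1 on [0, 1), whose integral is also 1 — both sides of the theorem agree.

```python
# Sketch (my example): f_n(x) = x^(n-1) * (1 - x) on [0, 1).
# integral of f_n = 1/n - 1/(n+1), so the infinite sum of integrals
# telescopes to 1; pointwise, sum_n f_n(x) = (1 - x) / (1 - x) = 1 on [0, 1),
# and the integral of the constant 1 over [0, 1) is also 1.
from fractions import Fraction

def integral_fn(n):
    # exact value of the integral of x^(n-1) - x^n over [0, 1]
    return Fraction(1, n) - Fraction(1, n + 1)

partial_sum = sum(integral_fn(n) for n in range(1, 1001))  # telescopes to 1 - 1/1001

def pointwise_sum(x, N):
    # partial sums of the geometric series; tends to 1 for 0 <= x < 1
    return sum(x ** (n - 1) * (1 - x) for n in range(1, N + 1))
```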
So again, this is simply false if I replace everything by Riemann integration. Because, in fact, I can come up with a sequence of functions, fn, whose Riemann integral is always 0, but the sum is not Riemann integrable. Think of taking fn to be a function which is 0 except at a single rational number. So first, enumerate the rationals Q1, Q2, Q3, Q4, and so on. And take fn to be the function that is 0 when x is not equal to Qn and 1 when x is equal to Qn. Then the infinite sum is just going to be the indicator function of the rationals, say, in 0, 1. That's not Riemann integrable, but the sum of these integrals is just 0. So this is not true for Riemann integrals again. So we're doing something much more powerful here. OK. Let's do some more properties of the integral. Now, back to properties of the integral. So if I have a non-negative measurable function, then the integral of f equals 0 if and only if f equals 0 almost everywhere on E. So now-- this is a two-way street, so one direction. If f is equal to 0 almost everywhere, then it's less than or equal to 0 almost everywhere. And therefore, the integral of f is less than or equal to the integral of 0, and the integral of 0 is 0. So this direction follows from the fact that f is less than or equal to 0 almost everywhere, which implies that the integral of f over E is less than or equal to the integral of 0, which you can check is 0. And the integral of f is a non-negative quantity, so it equals 0. So now, how about the other direction, that the integral of f being 0 for a non-negative measurable function implies f is 0 almost everywhere. So let's let capital F sub n be the set of all x's in E such that f of x is bigger than 1 over n. And let's let capital F be the set of all x's in E such that f of x is bigger than 0. So these are subsets of E, I should say.
Now, if I have an x where f of x is bigger than 0, then at least for some large n, f of x will be bigger than 1 over n. So then the union from n equals 1 to infinity of the F sub n's equals capital F. I mean, each of these is a subset of capital F, so their union is contained in capital F, and the argument I gave a minute ago shows you that capital F is contained in the union. So this union equals capital F. And just by how they're defined, since 1 is bigger than 1/2, which is bigger than 1/3, F1 is contained in F2, which is contained in F3, and so on. If f of x is bigger than 1/2, it's certainly bigger than 1/3. Right, so now we'll use, again, continuity of Lebesgue measure. So then for all n, 0 is less than or equal to 1 over n times the measure of F sub n-- this is equal to the integral over E of 1 over n times the indicator function of F sub n. And on F sub n, 1 over n is less than or equal to f of x. So this is less than or equal to the integral over F sub n of f. And capital F sub n is a subset of capital E, so this is less than or equal to the integral over E of f. But by assumption, this is 0, right? And sandwiched in between 0 and 0 is 1 over n times the measure of F sub n. And therefore, for all n, the measure of F sub n equals 0, which tells me that the measure of capital F-- which is equal to that union, which is equal to this increasing union-- is, by the continuity of Lebesgue measure, equal to the limit as n goes to infinity of the measure of F sub n, which equals 0. And therefore, the set of all x's where f of x is positive has Lebesgue measure 0. And f equals 0 almost everywhere. Now, using what we've done here and the monotone convergence theorem, we can slightly relax the assumptions in the monotone convergence theorem. So we have the following theorem.
If fn is a sequence of non-negative measurable functions such that now for almost every x in E, we have f1 of x is less than or equal to f2 of x, which is less than or equal to f3 of x, and so on, and the limit as n goes to infinity of fn of x equals a function f of x. So remember, in the statement of the monotone convergence theorem, we assumed these two things for every x. Now, we're just assuming them for almost every x in E. Then we get the same conclusion. Then the integral over E of f is equal to the limit as n goes to infinity of the integral over E of f sub n. OK. So we call these two conditions star. Let capital F be the set of x in E such that the fn of x are increasing to f of x, so star holds. Then the measure of the complement is, by assumption, equal to 0. And I should say here, the complement in E. So I should say E take away F. So if I write complement, you should interpret that as the complement within E, so E take away F. Then f minus f times chi sub F-- so this is the indicator function of capital F-- equals 0 almost everywhere. And fn minus fn times chi sub F equals 0 almost everywhere for all n. These equal 0 when x is in capital F, and the complement of capital F has measure 0. Now, by the monotone convergence theorem applied to these products, if you like, and the previous theorem, we have that the integral over E of f-- since f equals f times chi sub F almost everywhere, the integrals are equal. And so this is equal to the integral over capital F of little f. And by the monotone convergence theorem applied here, this is equal to the limit as n goes to infinity of the integrals over capital F of the fn's, because they are pointwise increasing on capital F to little f. OK, so I really didn't need the previous theorem. I could have used what I had earlier, that if I have two functions which equal each other almost everywhere, then their integrals agree.
So this previous theorem should not be referring to what I just proved a minute ago, but really to the theorem at the beginning of lecture, that if I have two functions that equal each other almost everywhere, then their integrals equal each other. Although, maybe I didn't state that. I just stated the less than or equal to. But if they're equal almost everywhere, they're less than or equal to each other almost everywhere. Anyways, back to this. This is equal to the limit of the integrals of the fn. So the whole point is that sets of measure 0 don't affect statements that involve integrals. That should be the take-home: if your conclusions are in terms of integrals, conditions holding almost everywhere suffice, typically. So for example, the simplest one we had earlier was that if f is less than or equal to g, then the integral of f is less than or equal to the integral of g. We can relax that to: if f is less than or equal to g almost everywhere, then the conclusion, which is stated in terms of integrals, still holds. The integral of f is less than or equal to the integral of g. So now, we'll do the second big convergence theorem for integrals-- or this one's actually an inequality between integrals, but it's still extremely useful. In fact, it's equivalent to the monotone convergence theorem, so it is neither stronger nor weaker. So we have Fatou's lemma, stated as a theorem, of course, which states that if fn is a sequence in L plus of E, then the integral over E of the liminf as n goes to infinity of fn of x-- this is a function; for each x, I take the liminf as n goes to infinity of fn of x-- is less than or equal to the liminf as n goes to infinity of the integrals of the fn. So let me state it this way. The liminf of f sub n-- let me just recall, what is the liminf? This is equal to the sup over n bigger than or equal to 1 of the inf over k bigger than or equal to n of fk of x.
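In symbols, the definition just recalled and the statement of Fatou's lemma:

```latex
% For a sequence (a_n) of extended real numbers:
\liminf_{n\to\infty} a_n \;=\; \sup_{n\ge 1}\,\inf_{k\ge n} a_k .
% Fatou's lemma: for f_n \in L^+(E),
\int_E \liminf_{n\to\infty} f_n \;\le\; \liminf_{n\to\infty} \int_E f_n .
```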
So that is the definition of the liminf, if you like-- in fact, let me not be specific to fn of x; just for a sequence of real numbers, the liminf of a sub n is equal to this thing on the right-hand side. OK. So this follows pretty easily from the monotone convergence theorem. I said a minute ago that it's, in fact, equivalent to the monotone convergence theorem. So we're going to use the monotone convergence theorem to prove it. You can also assume Fatou's lemma holds and then prove the monotone convergence theorem from it. You can also prove it independently of the monotone convergence theorem-- I mean, using essentially a similar argument to how you prove the monotone convergence theorem. OK, so first off, we have the liminf of fn of x, which is, again, by what I've written up here, the sup over n bigger than or equal to 1 of the inf over k bigger than or equal to n of fk of x. Now, what happens to what's in the bracket as n is increasing? Well, this inf is being taken over a smaller set. And the inf over a smaller set is bigger than or equal to the inf over the larger set. So this inf here, this thing in brackets, is increasing in n. So this sup is, in fact, the limit as n goes to infinity of this increasing sequence of real numbers defined as the inf over k bigger than or equal to n of fk of x. And what I just told you is not specific to fk of x-- it holds for any sequence a sub k. OK, and basically, I'm going to write down what I said a minute ago. Since the inf over k bigger than or equal to 1 of fk of x is less than or equal to the inf over k bigger than or equal to 2 of fk of x, which is less than or equal to the inf over k bigger than or equal to 3, and so on, this implies by the monotone convergence theorem that the integral of the liminf of the fn is equal to the limit as n goes to infinity of the integral over E of the inf over k bigger than or equal to n of the fk. So I have this function here, which is defined in this way. So for each n, I get a function here.
All right. Now, for all j bigger than or equal to n, this function given by the inf over k bigger than or equal to n-- let me add one more quantifier in here. So for all j bigger than or equal to n, for all x in E, I have that the inf over k bigger than or equal to n of fk of x-- this is certainly less than or equal to fj of x. This is the inf over all fk of x for k bigger than or equal to n. And for any fixed j bigger than or equal to n, that's certainly less than or equal to fj of x, because this is a lower bound for all of these guys for all j bigger than or equal to n. And therefore, since this function here sits below this function, I have for all j bigger than or equal to n, the integral over E of the inf of the fk is less than or equal to the integral of fj. So this number here sits below this number here for all j. This is a fixed number depending on n; this is a fixed number depending on j. And this holds for all j bigger than or equal to n. So this thing has to be a lower bound for the set of all numbers of this form for j bigger than or equal to n. And therefore, the integral over E of the inf over k bigger than or equal to n of the fk is less than or equal to the inf over all j bigger than or equal to n of the integral of f sub j over E. Now, we're going to take this and stick it into this inequality here. So that's what we had before, which was that the integral over E of the liminf of the fn, which is equal to the limit as n goes to infinity of the integral of the inf over k bigger than or equal to n of the fk-- this is, by what we've just shown, less than or equal to the limit as n goes to infinity of the inf over j bigger than or equal to n of the integrals of the fj. But this is just, by definition, equal to the liminf of the integrals of the fn's, which is Fatou's lemma. OK, so one more theorem about the Lebesgue integral, which is a very useful one. Throughout all this, we have had functions that are extended real valued. So we're dealing with non-negative functions, which can equal infinity at points.
And maybe that makes you nervous, but I'm going to tell you that as long as the integral is finite, you don't have to be nervous too often. So if f is a non-negative measurable function over a measurable set E and the integral is finite, then the set of x's where f of x is infinite is a set of measure 0. So the measure of the set where it's infinite is 0. So what's the proof? It's kind of, in spirit, like the proof that if the integral is 0, then the function is 0 almost everywhere. So let capital F be the set of all x in E such that f of x equals infinity, and capital F sub n be the set of all x's in E such that f of x is bigger than n. Oh well, that's what I had in my mind, but what I wrote in my notes is a little bit different than the proof I had in my mind just now. So let's go with what's in my notes, which is a little more cautious. OK. Then for all n, a natural number, n times chi sub F is less than or equal to f times chi sub F, where this is the indicator function of capital F. Because f on capital F is just infinite, so this always holds, right? And therefore, for all n, n times the measure of capital F is less than or equal to the integral over E of f times chi sub F, which is less than or equal to the integral over E of f, which is finite. That's a fixed number. Then for all n, the measure of capital F is less than or equal to 1 over n times the integral of f over E. Again, the integral is a fixed finite number, so this bound goes to 0 as n goes to infinity. Thus, the measure of capital F equals 0. OK, so that seems like a good place to stop. Next time-- so we've defined the Lebesgue integral of a non-negative measurable function. We will then define the class of Lebesgue integrable functions and extend the definition of integral to those functions in a fairly straightforward way, prove some simple properties of the Lebesgue integral, and also the last big convergence theorem, which is the dominated convergence theorem.
And we may or may not finish by the end of next lecture the proof that Lp spaces, which are based on the Lebesgue integral-- so we built the Lebesgue integral to have a space of functions for which-- OK, so let me stop. That alarm kind of threw me off. So it's not too difficult to show-- or you can just accept for now, and we'll actually see why this is the case soon-- that the space of continuous functions with norm given by the integral, so the norm of f being, let's say, the integral of the absolute value of f, is a normed space, but it's not a Banach space. Or you could change the integral of the absolute value of f to what's called a big Lp norm, the integral of the absolute value of f raised to the p, all raised to the 1 over p-- so the analog of the little lp norms, which we encountered a few weeks ago. None of those are Banach spaces when restricted to continuous functions, or even Riemann integrable functions. So our goal-- at least it was a while back when we started this big section on Lebesgue integration-- was to build, or at least come upon, a space where the resulting integrable functions form a Banach space. And so we may or may not, by the end of next lecture, introduce those. But that's where we're headed. That's where we're almost at. And these spaces arose because one wants to apply functional analysis facts and tools to concrete questions, such as questions about convergence of Fourier series, which arose immediately after Fourier said that any periodic function can be expanded as a Fourier series. So a lot of people went to a lot of trouble to pin down precisely what "expanded as" means. Expanded pointwise-- does the Fourier series converge to the function at each point? That turned out to be quite hard. Or on average-- meaning with respect to some norm that involves integration? Which is why we're coming here. But we'll see that next time.
Or we'll see the applications of this integration theory, along with the functional analysis later in the course when we circle around to Fourier series. All right, so we'll stop there.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 15: Orthonormal Bases and Fourier Series
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK, so ortho-- today, we're going to be discussing orthonormal bases of a Hilbert space. So let me just recall what we did at the end of last time. We introduced maximal orthonormal sets, so a collection of vectors in a-- we could say a pre-Hilbert space, but let's say in a Hilbert space. So this is maximal if u in H and u, e lambda equals 0 for all lambda in the index implies that u equals 0. So a collection of-- so I should say that a collection of orthonormal-- orthonormal vectors is maximal if the only thing that's orthogonal to all of them is the zero vector. And last time, we proved that-- let H be a separable Hilbert space, meaning it has a countable dense subset . Most-- I mean, I think all of the Hilbert spaces you've come into contact with are separable. Cn, Rn, also little l2, big L2-- this is what you're doing in the assignment. That's also separable-- then H has a countable maximal orthonormal subset, e sub n. So we proved this at the end of last time via the Gram-Schmidt process. We took a countable collection of-- a countable collection of elements in the Hilbert space, which were dense. And then we formed this countable orthonormal subset. And I shouldn't put N in, capital N because this could be a finite collection or it could be countably infinite collection. But then we applied the Gram-Schmidt process to that collection of dense elements and came up with this maximal orthonormal subset. Now, we have a special name for countable maximal orthonormal subsets for a good reason as we'll see. So let H be a Hilbert space. An orthonormal basis of H is a countable maximal orthonormal subset-- I shouldn't say n and N again-- of H. So an orthonormal basis is just a special name that we give to those maximal orthonormal subsets which are countable, meaning they can be either finite or countably infinite. 
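In symbols, the two definitions just given:

```latex
% An orthonormal set \{e_\lambda\}_{\lambda \in \Lambda} in a Hilbert space H
% is maximal if the only vector orthogonal to all of them is zero:
u \in H \ \text{ and } \ \langle u, e_\lambda \rangle = 0 \ \ \forall\,\lambda \in \Lambda
\;\Longrightarrow\; u = 0 .
% An orthonormal basis of H is a countable maximal orthonormal subset \{e_n\}.
```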
So again, all of the Hilbert spaces we've encountered in practice-- for example, Cn, little l2, big L2, which are all separable, meaning they do have a countable maximal orthonormal subset, meaning they have an orthonormal basis-- so they have these objects. Now, why do we call a countable maximal orthonormal subset an orthonormal basis? A basis is supposed to be something where you can write every vector in the space as some sort of linear combination of the elements of the basis, right? Now, we dealt with Hamel bases, where you can write every element as a finite linear combination of the elements in the subset. Now, in this setting, in what sense is this a basis? Or I shouldn't say, is it a basis? But in what sense does this connect to what we've encountered, say, in finite dimensions? As stated in the next theorem, every element can be written now as an infinite linear combination of elements of an orthonormal basis, not necessarily finite. So the statement is the following-- if en is an orthonormal basis in the Hilbert space H, then for all u in H, if I look at the partial sum from n equals 1 to capital N of u inner product e sub n times e sub n, this converges as capital N goes to infinity to the element u in the space H. Written in shorter form, u is equal to the sum from n equals 1 to infinity of u inner product e sub n times e sub n. Sometimes, this is referred to as a Fourier-Bessel series. So just like in finite dimensions, if you have an orthonormal basis consisting of only finitely many vectors, then you can expand every vector in, say, Cn in this way. Now, the statement is, in a Hilbert space, if you have an orthonormal basis in this sense, meaning it's a countable maximal orthonormal subset, then you can expand every element as an infinite linear combination of these elements. OK, so to prove this, we will use Bessel's inequality and the completeness of H.
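Written out, the theorem just stated is:

```latex
% If \{e_n\} is an orthonormal basis of the Hilbert space H, then for every u \in H
\Big\| \, u \;-\; \sum_{n=1}^{N} \langle u, e_n \rangle\, e_n \, \Big\|_H
\;\xrightarrow[N\to\infty]{}\; 0 ,
\qquad \text{written } \quad
u \;=\; \sum_{n=1}^{\infty} \langle u, e_n \rangle\, e_n
\quad \text{(Fourier--Bessel series).}
```

The proof that follows shows the partial sums form a Cauchy sequence via Bessel's inequality, and then uses maximality to identify the limit with u.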
This is where we're using the fact that H is not just a pre-Hilbert space, but a Hilbert space, all right? What we're going to do is show that the right-hand side of this equality-- so we're going to be able to show that this series converges in H. That uses Bessel's inequality and the fact that H is a Hilbert space-- it converges to some element. And then we're going to use the fact that this collection is maximal to show that the inner product of whatever this defines with each element e sub n is the same as the inner product of u with each element e sub n, and then conclude that the thing on the right side has to equal u. All right, so we first prove that the sequence of partial sums is Cauchy. So let epsilon be positive. By Bessel's inequality, we have that the sum from n equals 1 to infinity of u inner product e sub n squared-- this thing converges. So the only thing we have to check is that, since this is a series involving non-negative terms, this thing is bounded above. And we have by Bessel's inequality that it is always bounded above by the norm of u squared. So this series converges to a finite number. In particular, the partial sums corresponding to this series form a Cauchy sequence of non-negative real numbers. Thus, there exists a natural number capital M such that the tail is small, meaning the sum from n equals capital M plus 1 to infinity of u inner product e sub n squared is less than epsilon squared. You can just look back into your 18.100 notes or whatever: if I have a series of non-negative terms which converges, then for every epsilon, I can find a capital M so that the tail, starting the sum at a large enough entry, will be small. Then for all m greater than l greater than or equal to this capital M, I compute the norm of the sum from n equals 1 to m of u inner product e sub n times e sub n, minus the sum from n equals 1 to l of u inner product e sub n times e sub n. And let's make this a square. I can do that.
This you can just compute. Expanding this out as the inner product of this whole thing with itself, this gives you the sum from n equals l plus 1 to m of u inner product e sub n squared. And this is certainly less than or equal to the infinite tail-- I can put infinity here-- the sum from n equals l plus 1 to infinity of u inner product e sub n squared. And that's less than-- since l is bigger than or equal to capital M, we can use this inequality-- and that's less than epsilon squared. So we've shown that for m and l bigger than or equal to capital M, the norm squared of the difference is less than epsilon squared. And therefore, the norm of the difference is less than epsilon, proving that the sequence of partial sums is Cauchy. OK, so we've proven that the sequence of partial sums is Cauchy. So since H is complete, there exists an element u-bar in H such that u-bar is equal to the limit as m goes to infinity of these partial sums, where this limit means that the norm of the difference in H converges to 0. And so just in shorthand, again, this means-- now, what I'd like to do is show that u-bar equals u. By the bar I don't mean complex conjugate; I just mean an element of H. And now, how we're going to do that is show that the inner product of u minus u-bar against every element of this maximal orthonormal subset is 0, and therefore conclude u equals u-bar. Now, we have this result, that the inner product is continuous in each entry. So by continuity of the inner product, we have-- so then for all l a natural number, the inner product of u minus u-bar with el, this is equal to-- u-bar is equal to the limit as m goes to infinity of this partial sum in H. So this inner product is equal to the limit as m goes to infinity of u minus the sum from n equals 1 to m of u en, en, inner product el. And this is also-- I mean, I'm sure you've seen this at some point. But this is also why the coefficients appearing in front of the en's are the way they are.
So this is equal to the limit as m goes to infinity of u inner product el minus the sum from n equals 1 to m of u en, en inner product el. And when n does not equal l-- so remember, these are orthonormal. When n does not equal l, I get 0. And when n equals l, I get 1. So all I pick up is u-- this just reduces to u inner product el, which cancels with that one. And therefore, I get 0. All right, so I've shown that for every l, u minus u-bar inner product with el is 0. And since this collection is maximal, anything that inner product with-- so the only element that's orthogonal to every element in this collection is 0. That implies that u minus u-bar equals 0. And therefore, u equals u-bar, which means u is equal to this series. OK, so we've shown that if a Hilbert space has an orthonormal basis, then every element can be expanded in what's typically referred to as a Fourier-Bessel series in this way, in terms of the elements of the orthonormal basis. Now, let me just tie one thing up. So we know that every separable Hilbert space does have an orthonormal basis, a countable maximal orthonormal subset. And so if H is separable, that implies H has an orthonormal basis. Now, what this theorem also proves is that if H has a orthonormal basis, then H is separable. So let me just state that as a simple theorem that follows from this. So we've shown that if H-- that was the first theorem my stated at the beginning, that if H is separable, then it has an orthonormal basis. If H has an orthonormal basis, H is a Hilbert space. So I should have said that at the beginning. But H is a Hilbert space. Then H is separable, meaning it has a countably-- a countable dense subset. So what is this countable dense subset? I'm just going to give you the subset and then talk through why this subset works. So suppose en is an orthonormal basis for H, again a Hilbert space. 
Then if I define S to be the union over all, lets say, m natural number of elements of the form sum from n equals 1 to m of qn en where q1-- these are just rational numbers. So first off, this is a countable subset of H. Why is that? Well, each of these is in one to one correspondence with the m-fold Cartesian product of the rational numbers. And now, the rational numbers are countable. And you proved back in 18.100 that any-- in fact, well, any Cartesian product, finite Cartesian product of a countable set is, again, countable. And then we're taking a countable union. Another thing you proved in 18.100 is that a countable union of countable subsets is countable. So that's why S is countable. And then so now, I'm just going to state by the previous theorem S is dense in H. And then I'm going to put a box here and explain why. So every element-- so if H has an orthonormal basis, by the theorem we proved, every element can be expanded in one of these Fourier-Bessel series as coefficients times-- as an infinite linear combination of orthonormal vectors, of these orthogonal vectors. Now, this means the partial sums are converging to a given element u. So what we have to show for this thing to be dense-- we have to show for every epsilon, there exists something from this set within epsilon distance of that given vector. Now, give yourself a vector. Its Fourier-Bessel series converges to it. So we can cut off a-- we can cut the series off at a certain point and still be within epsilon over 2 to that element we're trying to get close to. Then that finite-- that finite sum will be in one of these-- well, it won't be in one of these. But it will just be the sum from n equals 1 to m of some coefficients time the e sub n's. Now, the rational numbers-- I should say-- there, now that's correct. Now, any complex number here can be approximated by a rational number here plus i times the rational number here. 
This is still in one-to-one correspondence with Q squared, the Cartesian product of Q with itself. And therefore, this is still countable-- and so I should have said 2m here. I was thinking about real numbers-- so that you can get close to the actual Fourier-Bessel coefficients that appear in that sum by complex numbers with rational real and imaginary parts. So I hope that explanation was clear. Maybe sit down, think about it, and actually write down carefully the epsilon argument. But that's essentially why this is true. So what we've shown is that a Hilbert space is separable if and only if it has an orthonormal basis. So let me make a remark that what we've shown up to this point is that if H is a Hilbert space, then H is separable if and only if H has an orthonormal basis. All right, so we've proven that, in a Hilbert space, if the space has an orthonormal basis, then every element can be expanded in this kind of infinite series involving the orthonormal vectors. I say infinite-- that's only if the orthonormal basis is countably infinite. If it were finite, then that's actually not an infinite sum; it's a finite sum. Now, what follows from that is that-- so we have Bessel's inequality, which says that the sum of squares of the coefficients appearing in this Fourier-Bessel series is always less than or equal to the norm of u squared. That's no matter what you assume about the orthonormal subset-- that's for any orthonormal subset. But now, we have that if it's an orthonormal basis, then, in fact, we have equality. So if H is a Hilbert space and this is a countable orthonormal basis, then for all u in H, we have that the sum from n equals 1-- I'll just put sum over n because this may be a finite sum-- of u inner product e sub n squared, which by Bessel's inequality we always had less than or equal to the norm of u squared, in fact equals the norm of u squared. And this is sometimes referred to as Parseval's identity. All right, so what's the proof of that?
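For reference, the identity to be proved:

```latex
% Parseval's identity: \{e_n\} an orthonormal basis of the Hilbert space H, u \in H:
\sum_{n} \big| \langle u, e_n \rangle \big|^2 \;=\; \|u\|^2 ,
% upgrading Bessel's inequality
% \sum_n |\langle u, e_n \rangle|^2 \le \|u\|^2
% (valid for any orthonormal subset) to an equality.
```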
We have that u equals the sum of u inner product en times en. And therefore, immediately, by the continuity of the inner product, we can write the norm of u squared as-- so this is for the case that we have countably infinite; if it's a finite orthonormal basis, this follows immediately-- the limit as m goes to infinity of the inner product of the sum from n equals 1 to m of u, en en with the sum from, let's say, l equals 1 to m of u, el el. And this is equal to the limit as m goes to infinity of the sum over n, l equals 1 to m of u, en-- and now, this constant here comes out with a complex conjugate-- times the conjugate of u, el, times en inner product el. And now, n and l are going from 1 to m. For a fixed n, we only pick up the l where l equals n, because these vectors are orthonormal. And when n equals l, this gives me 1. So I only pick up, when l equals n, u, en times the complex conjugate of u inner product en, which is just the coefficient's norm squared when I multiply by that. And I'm taking the limit as m goes to infinity, so I pick up the whole sum, the infinite sum, if it's infinite. All right, so we've seen a couple of simple applications of the theorem. We proved that if I have an orthonormal basis, then the Hilbert space must be separable. And also, we have equality in Bessel's inequality in terms of the orthonormal basis. What's more is that the previous theorem actually gives us a way to identify every separable Hilbert space with the one you were introduced to in the first week, although I didn't call it a Hilbert space. So this is the following theorem-- if H is a finite dimensional Hilbert space, that means I can find finitely many orthonormal vectors that span the space, and it's quite easy to show that then it is isomorphic to Cn in an isometric way. So I'm just going to state the infinite dimensional version.
If H is an infinite dimensional separable Hilbert space, then H is isometrically isomorphic-- and I'll spell out what these words mean-- to little l2, meaning what? Meaning there exists a bijective linear map T going from H to little l2-- so it's one-to-one and onto; or I should even say, a bijective linear operator, so it's going to be a bounded linear map, and this follows immediately from what I'm about to write down-- such that for all u, v in H, if I take the norm of the image in little l2, this is equal to the norm of the vector in H. So that's the isometric part here-- the map is not changing lengths. And basically, if I have a map between two Hilbert spaces that preserves lengths, then it also preserves inner products by-- not the parallelogram law-- the polarization identity, which you see in the assignment. But anyways-- and also, we have Tu, Tv-- taking this inner product in little l2, because these are elements now of little l2-- this is the same as the inner product in H of u and v. And it really just follows kind of immediately from what we've done, once I just write down the map. So what's the proof? Since H is a separable Hilbert space, it has an orthonormal basis, en, which is countably infinite, since we're in the infinite dimensional setting. And by the previous theorem, we have for all u in H, u is equal to the sum from n equals 1 to infinity of u inner product en times en, with the norm of u equal to-- if you like, let me remove the squares and write this as-- the sum from n equals 1 to infinity of u inner product en squared, all raised to the 1/2. So now, this should just jump out at you. How do I define my map from H to little l2? I define T of u to simply be the sequence of coefficients appearing here. And this is an element of little l2 by this identity here, Parseval's identity. Then T does the job.
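In symbols, the map constructed in the proof:

```latex
% With \{e_n\} an orthonormal basis of H, define T : H \to \ell^2 by
T u \;=\; \big( \langle u, e_1 \rangle,\ \langle u, e_2 \rangle,\ \langle u, e_3 \rangle,\ \dots \big).
% Parseval's identity gives \|Tu\|_{\ell^2} = \|u\|_H, so T is an isometry;
% linearity, injectivity, and surjectivity are checked as described in the lecture.
```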
So I didn't go through and check all the properties that I needed to check. I mean, it's clear that it's going to be linear in u because these coefficients are linear in u. And that's clear. It's one to one. It follows from the fact that every u is expanded this way. And therefore, if the coefficients are the same for two different u's, then those two u's have to be the same. So that makes it injective. The fact that it's surjective-- again, it doesn't take too much to prove, although you just show that for every choice of sequence in little l2, that forming such a sum-- so let's label these Cn. So now, you have just the element of little l2. Now, if I put Cn here en, now, you can argue, as we did before, that this series is, in fact, Cauchy in H and therefore converges to some element in H. And then you can prove that T takes that element that you have to the sequence that you started with, proving that T is surjective. So I'm not going to go through all the details. But it should be kind of clear that this-- at least based on this identity here, what the map should be. All right, so we've seen some applications of this general theory of orthonormal bases of countable maximal subsets in the Hilbert space applications, meaning if it has a basis, orthonormal basis, then the space has to be separable. Every separable infinite dimensional Hilbert space is basically the same. They're all isometrically isomorphic to little l2. But how can we use this in a more concrete setting, whatever concrete means? I mean, concrete kind of is by taste. So I thought we'd pause here on the general theory for Hilbert spaces that we've been doing and do something a little more specific and look at Fourier series, which will connect this general stuff that we've been doing with Hilbert spaces to more of the concrete bit of producing Lebesgue integration in these big LP spaces, which we proved are complete spaces involving, in some sense, integrable functions. 
So let's take a pause from general theory and talk a little bit about Fourier series. OK, Fourier series, which was the reason why a lot of especially the integration theory was created in the first place, was to understand a certain question, which we'll get to in a minute. Let me just start off with a very simple theorem, that the subset of functions e to the inx over root 2 pi, n an integer, is an orthonormal-- orthonormal subset of L2 of minus pi to pi. And here, let me-- just for a quick refresher, if t is a real number, e to the it, this is simply defined to be the complex number cosine t plus i sine t. And it satisfies all the things that you know and love about the exponential. If I multiply e to the it times e to the i tau, that's equal to e to the i t plus tau just by referring to the definition and using angle sum formulas and so on. All right, so this is not too difficult to prove. What's the proof of this? If I take e to the inx and inner product it with e to the imx, now this inner product is in big L2. This is, by definition, equal to the integral from minus pi to pi of e to the inx times e to the imx complex conjugate. And again, from the definition, when I take a complex conjugate, that flips this i to a minus i. And then I can take that minus that sits out here and put it in, and make this a minus t here and a minus t there, since cosine t is even and sine t is odd. So then this becomes e to the inx e to the minus imx dx. And again, you can just go from the definition of what this is. You don't need any fancy complex analysis. This is e to the i n minus m times x dx. Now, this quantity here-- when n equals m, I just get 0 in the exponent. And therefore, the integrand is just 1. And when I integrate that, I get 2 pi. So this equals 2 pi. And now, when n does not equal m, this has an antiderivative that you expect it to be. And therefore, this is equal to e to the i n minus m x over n minus m times i evaluated at pi and minus pi.
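This computation concludes just below with the value 2 pi when n equals m and 0 otherwise; that conclusion is easy to check numerically. A minimal sketch, assuming numpy; a left Riemann sum on a uniform grid over a full period is essentially exact for these periodic integrands:

```python
import numpy as np

# Numerical check (not the lecture's proof) that
#   <e^{inx}, e^{imx}> = integral_{-pi}^{pi} e^{i(n-m)x} dx = 2*pi if n == m, else 0.
N_pts = 4096
x = np.linspace(-np.pi, np.pi, N_pts + 1)[:-1]   # drop the duplicated endpoint
h = 2 * np.pi / N_pts

for n in range(-3, 4):
    for m in range(-3, 4):
        val = np.sum(np.exp(1j * (n - m) * x)) * h   # left Riemann sum
        expected = 2 * np.pi if n == m else 0.0
        assert abs(val - expected) < 1e-9
```

For a nonzero frequency, the grid values are roots of unity summing to zero, so the quadrature is exact up to rounding; for n equal to m the sum is just the period 2 pi.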
And this is if n equals m, n does not equal m. Now, what is it about e to the it? This thing is 2 pi periodic. And it's the same if I have an n here. Then that's going to be 2 pi over n periodic. So if I stick in pi here, the value I get is going to be the same as when I stick in minus pi here because the difference between these two values is 2 pi. This is 2 pi periodic. And I'm sticking in two numbers separated by 2 pi. So this will be 0 when n does not equal m. And therefore, that proves the claim that this is an orthonormal subset. We divide by square root of 2 pi so that when I take the inner product of this with itself or the element with itself, I get 1. Here, I just did e to the inx, so I got 2 pi. So I divide by the square root of that to get the orthonormal thing. So make a definition. So let f be in l2 of minus pi to pi. The n-th Fourier coefficient-- this is the new bit of terminology-- of f is the complex number f-hat of n equals 1 over 2 pi inner product-- I mean-- inner product-- integral of minus pi to pi of f of t e to the minus int dt. And the N-th partial Fourier sum of f is denoted by capital S sub N of f of x. This is equal to the sum for n in absolute value less than or equal to capital N. So little n is an integer of f-hat of n times e to the inx, which I can-- is the same as f inner product with in-- let's say t over square root of 2 pi e to the i. All I did with my definition of the Fourier coefficient is I combined these two square root of 2 pi's in my definition of f-hat of n. But this partial sum is just equal to a partial sum in terms of this orthonormal subset. OK, and we also associate to function f a formal object. The Fourier series of f is the formal series, because we are not making any claims about its convergence at all, summed from n in z f-hat of n e to the inx. So the question that we're going to answer-- and like I said, what motivated all of this to begin with is the following. 
So back when Fourier was studying heat conduction, he made a claim that every function can be expanded in terms of-- essentially, he was doing cosines and sines. But in terms of this, he said every function can be expanded-- is equal to its Fourier series. Now, at the time, people said, no, that's not the case. Not every function is periodic. And these are 2 pi-- each of these is 2 pi periodic. So what are you talking about? And then he said, maybe if we restrict to continuous functions, maybe it's equal to its Fourier series. That's not true. There's continuous functions that have-- where the Fourier series actually diverges at a point and doesn't converge back to the function. But Fourier series, now if you're asking about pointwise convergence, that's a very tricky and delicate issue. But Fourier series has a very nice and beautiful answer when you look in terms of the space that you're supposed to. As I started off this discussion, I said these elements are orthonormal with respect to this inner product. And this inner product lives, as far as in a Hilbert space, in the space big L2. So the question that one should ask then is-- you have this or the normal subset of L2. So the question is, do we have for all f in L2 f of x equals f-hat of n e to the inx. So this is an infinite sum. So I should be talking about in what sense is this series converging. And I mean in the sense of in the space that we're asking this question, in L2, i.e. do I have that the partial sums converge to f in the L2 norm? So I'm not going to write the argument. OK, maybe I will. Does this equal-- does this-- OK, now, I'm butchering this. Instead of writing that limit, let's say, does this norm converge to 0 as capital N goes to infinity? So that's the question. Do we have convergence of this series, meaning the partial sums, do they converge to the function f in the L2 norm? So this is something weaker than-- or maybe not even comparable to pointwise convergence. 
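The question of convergence in the L2 norm can be illustrated numerically before it is answered. A sketch under assumptions of mine (numpy, a discrete grid standing in for the L2 norm, and the sign function as a test f with a jump): the L2 error of the partial sums should shrink as N grows, even though pointwise behavior at the jump is delicate.

```python
import numpy as np

# Discrete stand-in for ||S_N f - f||_{L2(-pi,pi)} with f = sign(t).
t = np.linspace(-np.pi, np.pi, 4001)[:-1]
h = 2 * np.pi / 4000
f = np.sign(t)

def fhat(n):
    return np.sum(f * np.exp(-1j * n * t)) * h / (2 * np.pi)

def partial_sum(N):
    return sum(fhat(n) * np.exp(1j * n * t) for n in range(-N, N + 1))

def l2_error(N):
    # grid approximation of the L2 norm of S_N f - f
    return np.sqrt(np.sum(np.abs(partial_sum(N) - f) ** 2) * h)

errs = [l2_error(N) for N in (1, 5, 25)]
# the L2 errors shrink as more modes are included
assert errs[2] < errs[1] < errs[0]
```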
So sometimes, this is referred to as convergence in the mean. And so what is the way to phrase this question based on what we've done? So this question is equivalent to the question, is this collection of orthonormal elements in big L2 a maximal subset of big L2? i.e. and so now, let's really put this question into equivalence with just a pure statement-- i.e. does the vanishing of all the Fourier coefficients imply that the function is 0? So we proved that this statement here is equivalent to e to the inx being an orthonormal basis. So we proved one direction. We proved that if I have a collection of orthonormal vectors, then that's an orthonormal basis implies that every function can be expanded in this infinite series. But the converse clearly holds, too. If I can expand every function in an infinite series with those coefficients, then that implies that the subset has to be ortho-- has to be maximal because if something's orthogonal to everything in the collection, then all those coefficients appearing in the series are 0 and therefore the function is 0. So this question here is equivalent to asking whether or not this collection is maximal, which means if I have an element in L2 that is orthogonal to everything in here, then it has to be 0, which is this statement here. So let me put a box around it. Based on what we've done, of the question of convergence of Fourier series in big L2-- and what I'm using here really as well is the fact that big L2 is complete. This all only works in a Hilbert space. At certain points, we relied on the fact that Cauchy sequences converge to something in this space. So the fact that I can cleanly reduce this question down to what's in yellow relies on the fact that big L2 is complete, which we did a lot of work to show and to construct. So this is the question that we're going to try to answer. And let me go ahead and answer-- I asked a question, so I should give the answer. The answer is yes, but it's going to take some work. 
This is a non-trivial matter. So let's see, how much time do I have? OK, so how we're going to proceed is via what may be referred to as Fejer's method. So again, our goal is to show this, to answer the question about convergence of Fourier series in L2. Why in L2? Because that's the complete-- that's a complete Hilbert space that we're working in, that we're going to apply our general framework to. So let me start off with the following simple calculation, that for all f in L2 of minus pi to pi and for all natural numbers N, including 0, if I want to look at the N-th partial sum of f, I can write this as the integral from minus pi to pi of a function evaluated at x minus t times f of t integrated dt. And again, this should be interpreted as a Lebesgue integral, even though I'm writing it using the notation that you use-- Lord, sorry, that e was awful. I just couldn't let it stay there. Even though I was using this notation at one point to denote the Riemann integral, I'm now using this notation for the Lebesgue integral if the function's-- if I'm talking about Lebesgue integration, which is not such an abuse of notation because we found out that the Lebesgue integral of continuous functions is the Riemann integral. So let me finish the statement. So we can write it as some function of x minus t times f of t integrated from minus pi to pi, where DN of x, this is equal to the function which is 2N plus 1 over 2 pi when x equals 0, and sine of N plus 1/2 times x over 2 pi sine of x over 2 when x does not equal 0. And this function here-- so first off, note that this is a continuous function. As x converges to 0, using L'Hopital's rule, if you like, this thing converges to 2N plus 1 over 2 pi. In fact, it's a smooth function. This function here is referred to as the Dirichlet kernel. As a first step, I said, we're going to look at-- we're going to rewrite the partial sums in this way. I'll tell you why in a minute.
But let's just take this as a warm-up calculation for some calculations to come. OK, so what's the proof? We have that the N-th partial Fourier sum of f, this is equal to the sum over n less than or equal to N in absolute value of-- let me just write out here what the Fourier coefficient is-- 1 over 2 pi, integral from minus pi to pi of f of t e to the minus int dt, times e to the inx. And this is just a finite sum. So I can bring this inside the integral and combine everything. And I get this is equal to the integral from minus pi to pi of f of t times DN of x minus t, where DN of-- so in fact, let me-- let me not jump ahead. I'll go ahead and write this out. This is equal to now the sum times e to the i n x minus t, dt. So this is DN, all right? So DN-- so call this thing DN of x minus t, x minus t appearing there. And now, let's compute DN of x. So let me just rewrite what it is, the sum over n less than or equal to N in absolute value of e to the inx. Where'd the t go? Again, again, this is DN of x minus t. So the argument's there. So DN of x is-- there's the argument. Maybe I should have put a y or something like that, but I didn't think that far ahead. So this is the D capital N. And now, let's just massage this a little bit. We can write it as e to the minus iNx times now the sum from n equals 0 to 2N of e to the inx. This is a sum from, if you'd like, n equals minus capital N to N. All right, so if I factor out an e to the minus iNx, I can write this sum as this sum. Now, this is a geometric sum. This e to the inx, again, this is something that you can just check from the definition. This is also equal to e to the ix raised to the n-th power. And I know how to sum things that involve something being raised to the n-th power-- finite sums, that is. This is equal to 1 over 2 pi times e to the minus iNx times 1 minus e to the i 2N plus 1 x over 1 minus e to the ix-- the thing that's appearing here, 2N plus 1, being the top index plus 1.
Now, multiplying through by this e to the inx, pulling out a 1/2 e to the ix over 2 and distributing that to the bottom, we get that the previous is equal to 1 over 2 pi. So first off, this is valid only when x-- what? When x is not equal to 0. When x equals 0, I just get 1 here and I get 1 here. And then I get the sum from 0 to 2 n, which is equal to 2n plus 1. That's why I get what I get when x equals 0. So this is valid for x not equal to 0. And so I get e to the in plus 1 x. I should say, after I've taken away that 1/2 x that appeared with this 1x and distributed it down on the bottom e to the ix over 2 minus e to the minus ix over 2. Now, if I have e to the i times something minus e to the minus i times that something and I subtract those two, I pick up 2i times sine of whatever's in the-- whatever this real number is. So I pick up 1 over 2 pi times 2i sine of n plus 1/2 x over 2i sine of x over 2. And these 2i's cancel. And that equals 1 over 2 pi sine of n plus 1/2 x over sine x over 2, which is what I wanted. So what's the idea of trying to prove this? In the end-- I mean, in the beginning, I should say, we were asking about the convergence of the partial Fourier sums to f. And we don't know that going in. That's what we're trying to prove. So maybe working with the partial sums is not the best thing. What am I getting at here? How about I introduce the next bit, and then I'll explain why we're interested in it. So if f is in l2 from minus pi to pi, we define the N-th Cesaro-Fourier mean of f-- so this is the new bit of terminology-- to be the average of the partial sums of f. This we denote by sigma n of f of x. This is equal to the average of the first n partial sums of f. In the end, what we'd like to do is we're trying to establish that this claim in the yellow box-- if the Fourier coefficients are all zero, then the function has to be zero. So you may think, well, let's prove that the partial sums converge to the function f. 
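Backing up for a moment, the closed form just obtained for the Dirichlet kernel can be checked against its defining sum numerically. A small sketch (numpy assumed; the grid avoids x equals 0, where the closed form needs the limiting value 2N plus 1 over 2 pi):

```python
import numpy as np

# D_N(x) = (1/2pi) * sum_{|n| <= N} e^{inx} = sin((N + 1/2) x) / (2pi sin(x/2))
x = np.linspace(0.01, np.pi, 500)      # stay away from x = 0
for N in (1, 3, 10):
    series = sum(np.exp(1j * n * x) for n in range(-N, N + 1)) / (2 * np.pi)
    closed = np.sin((N + 0.5) * x) / (2 * np.pi * np.sin(x / 2))
    assert np.allclose(series, closed)  # imaginary parts cancel to rounding error
```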
And that would give us immediately that the function is 0 because the partial sums would all be 0 because they involve the coefficients of f. But that's ridiculous because that's actually what we're trying to prove. That's equivalent to the question that we're trying to-- that's equivalent to what's in the yellow box. We're not making our life any easier by doing that. So now, what one can do instead is, instead of trying to prove what's in the yellow box, that if all the coefficients are zero the function is zero, what if we can prove that this object now converges to f? This object here, which is this mean of the partial sums, maybe has better properties than the partial sums originally that we were trying to study. Remember, if you look back to 18.100, if you have just, say, a sequence of real numbers which converges, you can define a Cesaro mean of that by averaging the first n terms, just like we did here. There's an n plus 1 here because we're starting at 0 and going up to n rather than going from 1 to n. But if you have a sequence of real numbers, you can look at its Cesaro sums or Cesaro means. And what's great about the Cesaro means is that you don't-- it kind of behaves a little bit better than the sequence you start with. But you don't lose any information. So if the original sequence converges, then the Cesaro mean also converges. So if you're expecting the partial sums to converge to f, then the Cesaro means should converge to f. But the Cesaro means have an even better quality. Let's go back to sequences of real numbers. You could have sequences of real numbers that don't converge whose Cesaro means do converge. So take the sequence 1, minus 1, 1, minus 1, 1, minus 1, and so on. That sequence doesn't converge. But the Cesaro means do converge. The Cesaro means are 1, 0, 1/3, 0, 1/5, 0, 1/7, 0, and so on. So the Cesaro means converge to 0.
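The 1, minus 1, 1, minus 1 example is easy to reproduce in a few lines (numpy assumed; this just averages the running partial sums of the sequence):

```python
import numpy as np

# The sequence 1, -1, 1, -1, ... diverges, but its Cesaro means converge to 0.
a = np.array([(-1) ** k for k in range(10000)])       # 1, -1, 1, -1, ...
cesaro = np.cumsum(a) / np.arange(1, len(a) + 1)      # average of the first n terms

# first few means: 1, 0, 1/3, 0, 1/5, 0, ... exactly as in the lecture
assert np.allclose(cesaro[:6], [1, 0, 1/3, 0, 1/5, 0])
assert abs(cesaro[-1]) < 1e-3                         # tending to 0
```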
So all of that is to say that this object here, which is the average of the partial sums, we expect to behave better than just how the partial sums converge. We expect it to have better convergence properties. And what does better convergence properties mean? Quite honestly, it means we should be able to show it converges to f in hopefully some straightforward way-- or maybe not straightforward way, but in a-- yeah, I don't know how to describe that. And that's what we'll do. That's the plan, is what we're going to show is that for every f in l2, the Cesaro means converge to f in l2, meaning the limit as capital N-- so let me-- so what's the goal that we'll show is we'll show that the Cesaro means-- Cesaro-Fourier means of f converge to f in L2. Now, if we can do this, then we will have answered and we will have shown what's in the yellow box because if you assume all of the Fourier coefficients are 0, then all of the partial sums are 0. And therefore, all the Cesaro means are 0. And by this thing that we are hoping to prove, this will prove that 0 converges to f. And therefore, f is 0, which is what we wanted to show. So one more time-- we want to prove that if all the Fourier coefficients are 0, then the function has to be 0. Now, if all the Fourier coefficients are 0, then all the partial sums are 0. And therefore, all the 0 means are 0. And if we're able to prove this holds for all f in l2, then this would tell us that 0, which is what all of these Cesaro-Fourier means are, converges to f. And therefore, f is 0, our desired conclusion. And from that, we conclude that the collection e to the inx over square root of 2 pi is a maximal-- is a orthonormal basis for l2. And therefore, the Fourier series of a given l2 function converges to the function in l2, meaning f minus the partial sum converges to 0 as n goes to infinity. All right, and so that'll be what we do next time, is prove this claim right here.
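The plan above, showing the Cesaro-Fourier means converge in L2, can be previewed numerically. A hedged sketch with choices of mine (numpy, a grid L2 norm, f equal to the sign function): it uses the standard identity that averaging the partial sums S 0 through S N weights the n-th coefficient by 1 minus absolute n over N plus 1, which follows by counting how many of the S k contain mode n.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 4001)[:-1]
h = 2 * np.pi / 4000
f = np.sign(t)

def fhat(n):
    return np.sum(f * np.exp(-1j * n * t)) * h / (2 * np.pi)

def sigma(N):
    # Cesaro-Fourier (Fejer) mean: (1/(N+1)) * sum_{k=0}^{N} S_k f, computed via
    # the equivalent weighted sum  sum_{|n|<=N} (1 - |n|/(N+1)) fhat(n) e^{inx}
    return sum((1 - abs(n) / (N + 1)) * fhat(n) * np.exp(1j * n * t)
               for n in range(-N, N + 1))

def l2(g):
    # grid approximation of the L2(-pi, pi) norm
    return np.sqrt(np.sum(np.abs(g) ** 2) * h)

errs = [l2(sigma(N) - f) for N in (2, 8, 32)]
# the Cesaro-Fourier means approach f in the (discretized) L2 norm
assert errs[2] < errs[1] < errs[0]
```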
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 19: Compact Subsets of a Hilbert Space and Finite-Rank Operators
CASEY RODRIGUEZ: OK, so the name of the game today is compactness. And so last time I recalled the definition: a subset K of a metric space X is compact if every sequence xn in K, so every sequence of elements in K, has a subsequence which converges to an element of K. And so for example, by the Heine-Borel theorem, closed and bounded subsets of Rn or Cn are compact. This is from 18.100. But general closed and bounded subsets of a Hilbert space are not necessarily compact. Last time we looked at the closed unit ball in a Hilbert space, or let's say in little l2. This is not compact because you can take the orthonormal basis vectors, which are the sequences that are zero except for a one in the n-th spot. These are all of unit length. And the sequence consisting of these basis vectors, which are themselves sequences-- so the sequence of sequences-- does not have a subsequence which converges in little l2. So we were looking for an additional condition that would ensure compactness. And as a warm-up to this, last time we proved the following-- so we're always working in a Hilbert space that may be separable or not, depending on whether I say it is-- that if we take a convergent sequence in H, then the conclusion is, 1, that the set K consisting of all the elements of this sequence and the limit is compact. And 2, the set K has what are called equi-small tails with respect to any orthonormal subset of H. If e sub k is a countable orthonormal subset of H, then the following holds-- this condition we called K having equi-small tails: for every epsilon positive, there exists a natural number N such that for all-- let's call it-- use a different letter, v tilde in K, so either an element of the sequence or v, we have that the sum over k bigger than N of v tilde, e k squared is less than epsilon squared.
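Returning to the unit-ball example at the top: the obstruction to compactness is that distinct orthonormal vectors sit at distance square root of 2 from one another, so the sequence e n has no Cauchy subsequence at all. A tiny numerical sketch (numpy assumed; little l2 truncated to finitely many coordinates):

```python
import numpy as np

# Truncate little l2 to 50 coordinates; row k of the identity matrix is the
# truncation of the basis sequence e_k (zeros except for a one in slot k).
dim = 50
E = np.eye(dim)

# ||e_n - e_m|| = sqrt(2) whenever n != m, so no subsequence can be Cauchy.
for n in range(5):
    for m in range(5):
        d = np.linalg.norm(E[n] - E[m])
        assert np.isclose(d, 0.0 if n == m else np.sqrt(2))
```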
So this is the condition we called K having equi-small tails with respect to this orthonormal subset. So of course, for any fixed v tilde-- because this entire sum, not just the part with k bigger than N, is bounded by the norm of this v tilde squared by Bessel's inequality-- we can always choose an N for any individual v tilde. Now equi-small tails means that I can choose this independent of the element in K. So if you like, uniformly small tails is another way you could think about it. But the terminology equi has been used instead. But another way to think of it is that we can uniformly make the tails with respect to an orthonormal sequence, or orthonormal subset of H, small. Now, what we're going to prove is that, in fact, this condition here of having equi-small tails with respect to an orthonormal basis, on top of the set being closed and bounded, suffices to prove that the subset is compact. And that's the following theorem. So let H be a separable Hilbert space. And let ek be an orthonormal basis of H. So I said it suffices, but it's not just sufficient. It's also necessary, as we'll show. Then K subset of H is compact if and only if K is closed, bounded, and has equi-small tails with respect to this orthonormal basis, so in particular for any orthonormal basis. So this is a complete description of compact subsets of a Hilbert space: it's those subsets which are closed, bounded, and have equi-small tails with respect to one orthonormal basis. And what this theorem also says is that if it has equi-small tails with respect to one orthonormal basis and is closed and bounded, then the same holds with respect to any orthonormal basis. OK, so let's prove-- so it's a two way street. Well, it doesn't make sense to say one street is shorter, one side of the street is shorter-- maybe wider, easier to use. But let's prove that if K is compact, then it implies all of this. So suppose K is compact. Then K is closed and bounded by general metric space theory.
So look this up in the 18.100 material, or intro to analysis material. So let's-- this also gives us an opportunity to understand this equi-small tail condition as well by having to negate it. So we're also going to prove that K has equi-small tails by contradiction. So suppose K does not have equi-small tails with respect to this basis, or this orthonormal basis. Then what's that mean? So this is a for-all-there-exists statement. So then there exists some bad epsilon 0 so that for all N, a natural number, there exists a uN in K-- so these N's are increasing-- such that we have the negation of the condition: the sum over k bigger than N of uN, ek squared is bigger than or equal to this epsilon 0 squared. So note that this is a sequence in this compact set K. And so by the defining property of a compact set, it must have a convergent subsequence, which implies there exists a subsequence, which you usually denote u sub capital N sub m, or some other letter. But I'll just call it, to make a little connection back to what we wrote a minute ago, v sub n. So this is a subsequence of u sub capital N, and v in K, such that v sub n converges to v. Now then for all n, a natural number, we have that the sum over k bigger than n of vn, ek squared is bigger than epsilon 0 squared because this is a subsequence of the original sequence which satisfied this condition. Now this n would actually be capital N sub n. But capital N sub n would be an increasing sequence of numbers. And it's always bigger than n. So by throwing in some more terms, I can replace that capital N sub n with n. All right, so this is fine. Then that says the subset consisting of the elements v sub n, n a natural number, union v does not have equi-small tails, which is a contradiction to the theorem we proved last time. This contradicts the previous theorem that I stated and we proved last time. So any convergent sequence along with its limit is a compact set and has equi-small tails.
So that's a contradiction. And what was the assumption that brought us to this road forking itself is the assumption that K is not compact. So therefore K-- I mean, not K's compact. K has equi-small tails with respect to this orthonormal basis. So that's one direction that if K is compact then it has to have equi-small tails with respect to the orthonormal basis. We didn't use anything about it being a basis, so this also says that if you have a compact set, then with respect to any orthonormal subset, it has equi-small tails with respect to that orthonormal subset. So now let's use this condition that K is closed and bounded and has equi-small tails to prove that K is compact. So suppose K is closed, bounded, and has equi-small tails with respect to this orthonormal basis. Now let uN be a sequence in K. So we want to show that it has a subsequence which converges to an element in K. Now, K is closed, which means that if I take any sequence which converges in K, or which any sequence of elements in K that converges, the limit has to be in K. That's the definition of a set being closed. So I just need to show that this sequence, in fact, has a convergence subsequence. That's all I have to show. So since K is closed, we just need to show uN as a convergent subsequence. So the idea is to use what we know about bounded subsets of complex numbers. So we know about bounded subsets of complex numbers. If I have a bounded sequence-- I shouldn't say subsets, but if I have a bounded sequence of complex numbers, it has a convergent subsequence. This is a Bolzano-Weierstrass. Usually it's stated in terms of subsets of R. But sequences of complex numbers converge if and only if the real and imaginary part converge, and are bounded if and only if the real and imaginary part, which are sequences of real numbers, are bounded. 
So Bolzano-Weierstrass immediately implies a similar statement for complex numbers, that every bounded sequence of complex numbers has a convergent subsequence. so. How would we use that? What we're going to do is we're going to expand uN's. And this is where we'll use that this as an orthonormal basis in terms of this orthonormal basis. So for each K, we'll have a sequence of complex numbers given by the entry, the K-th entry, of course the K-th entry of this sequence. And that will be a bounded sequence of complex numbers because we're assuming K is bounded, and therefore this sequence is bounded in K. So what we're going to end up having is entry by entry, a bounded sequence of complex numbers where these complex numbers represent the K-th entry of the sequence of vectors. And we're going to then take a subsequence that converges for the first entry. We're going to take a subsequence of that subsequence to get a subsequence which converges for the first and second entry, and then so on and so on, a diagonalization argument. And that subsequence we'll show converges. Now, it's not that simple because if you can do something 10 times it doesn't mean you can get some uniform control. We just have control for any finite number of entries. What saves us in the end is this equi-small tail condition, which allows us to basically throw away infinitely many entries. That's what this says is that you can always choose an N, so that the tail end of the entries don't matter. And so that, with having control over the first large block of entries, will be enough to conclude. So that was the roadmap. If you didn't follow the roadmap, fine. Hopefully you follow the proof. But it's a diagonal argument. And those are relatively easy to understand, somewhat difficult to write down, so I will do my best. So since K is bounded, that implies there exists a c non-negative such that for all n, norm of u sub n is less than or equal to C. The C is independent of n. 
It just depends on the set K. And therefore that sets for all K and n-- I should say for all K, for all n, if I look at the K-th-- if you let me call this the Fourier coefficient in this basis in this orthonormal basis of ek, if I look at the K-th Fourier coefficient, this is bounded by Cauchy-Schwartz, u n ek. And these are orthonormal so this has unit length. This is bounded by c. So for each fixed K, this is a sequence in N of complex numbers which is bounded. The sequence consisting of the K-th Fourier coefficient of the elements u sub n, so n is the thing that's changing, is the bounded sequence of complex numbers. So now this is where we're going to start choosing subsequences of subsequences so that we get convergence along the entries. So since u sub is bounded, this implies that there exists by Bolzano-Weierstrass, a subsequence, which I'll denote u sub n sub 1 of k-- let me not use k. Let's use-- what do I use here? j-- e1, so 1, 1-- in n, which converges in c. OK, so I now have a subsequence of the original sequence of the un's so that the first entry converges here. Or this sequence converges. Now I'm going to take-- OK, so since un1 of j e2, this is still a bounded sequence by what I showed here, bounded sequence. This implies there exists a subsequence. Now I'll call this subsequence un2 of j e2 j. So in 2 of j, this is a sequence of integers. That is a subset of these guys that's increasing --of un1 of j. So I should say-- which converges. So now note, this sequence converges. But the n2 of j, so this is a subsequence of integers of the n1 of j's. Since the n2 of j's a subsequence of the n1 of j's, we get that limit j goes to infinity of un1 of j-- or should say n2 of j-- e1. So this is a subsequence of this sequence. So the n2 of j is just a subsequence of n1 of j's. So since this sequence in j converges, this subsequence also converges. 
But now we've chosen the two so that we also have not just convergence along the first entry, but convergence along the second entry. But now you see what-- so this is how the argument goes, that now I will take a subsequence of the n2 of j so that the subsequence converges when I pair it with e3. And then I take a subsequence of that subsequence so that I get e4 and so on, and so on. And all of these subsequences are subsequences of-- or these subsequences of integers are subsequences of the previous set of integers, which are again a subsequence of the original integers n. So then what do we get? For all l there exists a subsequence of integers n sub l of j of the previous set of integers, let's say, n sub l minus 1 of j, such that for all k between 1 and l, the limit as j goes to infinity of u n sub l of j, ek exists. So I hope this is clear. So you just take a subsequence of the original sequence so that the first entry converges, take a subsequence of that so that the second entry converges, and so on and so on. So then you have these nested subsequences so that at each fixed l, you have convergence of the first l entries. And so what you now do is that you pick-- so let me write, or pick, vl to be the diagonal along this sequence. So u n sub l of l, where l equals 1, 2, 3, and so on. Then what we get is, since the l-th subsequence converges-- or the k-th entry for one-- OK, so let me back up and say this one more time. Since for fixed l here I have convergence of the first l entries, by picking along the diagonal I get a subsequence of un such that for all k I have convergence for the k-th entry as l goes to infinity. All right, so I can take subsequences of subsequences and obtain a subsequence in the end so that for this subsequence of the original sequence un's, I have convergence of the entries entry by entry, or I have convergence of the Fourier coefficients as l goes to infinity.
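The diagonal trick itself can be seen in a toy computation. This sketch uses made-up nested index sets (multiples of powers of 2), not the lecture's n sub l of j, just to show why the diagonal is eventually a subsequence of every level:

```python
# Toy illustration of the diagonal argument: level l keeps the multiples of
# 2^(l+1), so each level is a subsequence of the previous one. The diagonal
# takes the l-th element of level l; from position l onward it stays inside
# level l, so it is eventually a subsequence of every level at once.
levels = [[n for n in range(1, 10000) if n % (2 ** (l + 1)) == 0]
          for l in range(7)]
diagonal = [levels[l][l] for l in range(len(levels))]

for l, level in enumerate(levels):
    level_set = set(level)
    # every diagonal entry from position l onward belongs to level l
    assert all(d in level_set for d in diagonal[l:])

# strictly increasing, hence a genuine subsequence of the original indices
assert all(diagonal[i] < diagonal[i + 1] for i in range(len(diagonal) - 1))
```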
And now we're going to use this equi-small tails condition to prove that the sequence v sub l converges. We're actually going to show it's Cauchy; since H is a Hilbert space, we then conclude that it must converge. So, claim: vl is Cauchy. And that will conclude the proof, because then it must converge in H, and by what I said at the beginning, since this is a sequence of elements of K, which is a closed set, the limit has to be in K. All right, so we just need to show it's Cauchy, and then the proof is done. So let epsilon be positive. Since K has equi-small tails, there exists a natural number capital N so that for all l, the sum over k bigger than N of the tails, vl paired with ek, squared, is less than epsilon squared over 16. Why 16? Because I want everything to come out correct in the end. Now consider the N sequences given by the first N Fourier coefficients: vl paired with e1, up to vl paired with eN, as sequences in l. This is the point where equi-small tails lets us throw away the tail end of the entries, so that control over the first finitely many entries gives us the convergence we want. Since these converge, there exists a natural number capital M so that for all l, m bigger than capital M, the sum over k equals 1 to capital N of vl, ek minus vm, ek, squared, is less than epsilon squared over 4. Each of these is a convergent sequence, hence a Cauchy sequence. So for the first one I can find a capital M sub 1 so that that entry is very small for l and m bigger than capital M sub 1; then an M sub 2 so that for k equals 2 it's very small; and so on up to k equals N. And then I choose capital M to be the maximum over all those M sub k's.
So I can always choose this because, again, there are only finitely many conditions I have to verify. So now I claim this capital M works. For all l, m bigger than or equal to M, if we look at the norm of vl minus vm, this is equal to the sum over k equals 1 to N of vl minus vm, ek squared, plus the sum over k bigger than N of vl minus vm, ek squared, all to the 1/2. Here I'm using the fact that we have an orthonormal basis: in that case the norm equals the square root of the sum of the squares of the Fourier coefficients. Bessel's inequality would only say the sum is less than or equal to the norm; it wouldn't give us the control we want over the actual norm. But because we're working with an orthonormal basis, the norm equals the sum of squares, take the square root. And now, the square root of A plus B is always less than or equal to the square root of A plus the square root of B. So this is less than or equal to the sum over k equals 1 to N of vl minus vm, ek squared, to the 1/2, plus the sum over k bigger than N of vl, ek minus vm, ek squared, to the 1/2. Now, the first part, by how we've chosen capital M, this finite part is small: less than epsilon over 2. For the second part we use the triangle inequality in little l2: I can think of it as the little l2 norm of this tail sequence in k minus that tail sequence in k. So it's less than or equal to the sum over k bigger than N of vl, ek squared, to the 1/2, plus the sum over k bigger than N of vm, ek squared, to the 1/2. And we're almost done: each of these squared tails is less than epsilon squared over 16 by how we chose N-- that came from the equi-small tails part-- so taking square roots, each is less than epsilon over 4. Altogether: less than epsilon over 2 plus epsilon over 4 plus epsilon over 4, which equals epsilon. So we've now shown the claim that the subsequence v sub l is Cauchy, and therefore it converges, since H is complete.
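For reference, the chain of estimates in this Cauchy argument can be written out in one display (my transcription of the board computation, with the same choices of N and M as above):

```latex
\begin{aligned}
\|v_l - v_m\|
&= \Big( \sum_{k=1}^{N} |\langle v_l - v_m, e_k\rangle|^2
       + \sum_{k>N} |\langle v_l - v_m, e_k\rangle|^2 \Big)^{1/2} \\
&\le \Big( \sum_{k=1}^{N} |\langle v_l, e_k\rangle - \langle v_m, e_k\rangle|^2 \Big)^{1/2}
   + \Big( \sum_{k>N} |\langle v_l, e_k\rangle|^2 \Big)^{1/2}
   + \Big( \sum_{k>N} |\langle v_m, e_k\rangle|^2 \Big)^{1/2} \\
&< \frac{\varepsilon}{2} + \frac{\varepsilon}{4} + \frac{\varepsilon}{4}
 = \varepsilon ,
\end{aligned}
```

using that the norm equals the square root of the sum of squared Fourier coefficients, that the square root of A plus B is at most the square root of A plus the square root of B, and the triangle inequality in little l2 for the tail.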
Now, for example, one can readily verify the following: let K be the set of all sequences a in little l2 with the property that the modulus of ak is less than or equal to 2 to the minus k-- so this is not a subspace, it's a subset. Then K is compact. This subset is known as the Hilbert cube. All right, now maybe it's a bit unwieldy that our condition for being compact is phrased in terms of an orthonormal basis. Maybe that's not so simple to verify, and it's definitely not canonical-- I guess that's the word I'm looking for-- in the sense that we have to make a choice to verify compactness. But there is a different way of characterizing compact subsets of a Hilbert space. Now, I'm not going to give the proof of this; you can look it up in Melrose's notes. Maybe I'll say a word about why you should believe it's true. But, again, it involves a diagonal argument, which was tough enough to do here, or at least painful enough to write out, and I don't want to do it again. So we have the following-- and this theorem also holds in a nonseparable Hilbert space, basically by a trick of reducing it to a separable Hilbert space, but that's OK. A subset K of a Hilbert space H is compact if and only if K is closed, bounded, and the following condition holds, which you can think of as saying K can be approximated by finite dimensional subspaces: for all epsilon positive, there exists a finite dimensional subspace W of H such that for all u in K, the distance from u to this finite dimensional subspace is less than epsilon. I think I said at one point, maybe I didn't, that a finite dimensional subspace is always a closed subspace in H.
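The Hilbert cube is a good place to see the equi-small tails condition concretely. The following numerical sanity check is my own illustration, not from the lecture: every a in the cube satisfies |ak| ≤ 2**-k, so one geometric-series bound controls the tails of all elements at once.

```python
# Every a in the Hilbert cube has |a_k| <= 2**-k, so the tail satisfies
#   sum_{k > N} |a_k|^2 <= sum_{k > N} 4**-k = 4**-N / 3,
# one bound for ALL elements simultaneously: that is "equi-small tails".
def tail_bound(N):
    return 4.0 ** (-N) / 3.0

# The extreme element a_k = 2**-k essentially realizes the bound
# (truncated to finitely many terms for the computation).
def extreme_tail(N, terms=200):
    return sum(4.0 ** (-k) for k in range(N + 1, N + 1 + terms))

for N in range(1, 10):
    assert extreme_tail(N) <= tail_bound(N) + 1e-15
    assert tail_bound(N) < tail_bound(N - 1)   # tails shrink uniformly in N
```

Closedness and boundedness of the cube are straightforward, so by the theorem the cube is compact.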
So a subset of a Hilbert space is compact if and only if it's closed, bounded, and, if you like, can be approximated by finite dimensional subspaces-- another way to think about it. Now, why is this believable in the first place? Well, what the equi-small tails condition says is basically that all elements of K can be approximated-- given epsilon positive-- by the subspace consisting of the span of the first N orthonormal basis vectors. That's what this is: for all epsilon, the tails are small. Another way of saying that is that for all epsilon, you can approximate any element of the set K within the span of the first N basis vectors. So K has to be close to a finite dimensional subspace. But one can go in the opposite direction as well, showing that being able to be approximated by finite dimensional subspaces implies that K is compact. And it's another diagonal argument that I just don't want to write out; I'd rather start using these conditions to say things about when we can solve certain equations involving certain interesting operators that arise quite naturally. So let's start talking about certain classes of operators. And we should start with the simplest, which are finite rank operators. So the only linear operators you came into this class knowing, probably-- if you knew more and were more sophisticated, that's good-- were matrices. Now, a matrix is just a linear transformation defined in terms of a basis-- not necessarily orthonormal-- in both the domain and the target; here we take the target to be the domain. We fix a basis, and then we can express a linear transformation as an array of numbers. Finite rank operators are the generalization of what you think of as matrices to Hilbert spaces, and we'll see that in just a minute. In what follows, H is a Hilbert space.
And instead of writing the space of bounded linear operators as B(H, H), I'm just going to shorten this by dropping one of those H's: the space of bounded linear operators from H to itself will be denoted by B of H. So, finite rank operators. These, like I said, will be the analog of matrices, but let me give an invariant definition first. A bounded linear operator T is a finite rank operator if the range of T, which is a subspace of H, is finite dimensional. So, for example-- well, of course the zero operator is a simple one. If H is a finite dimensional Hilbert space, say Cn, then every bounded linear operator is a finite rank operator, because the range is always contained in Cn, which is finite dimensional. For another example, on little l2, if I define Ta to be the sequence given by a1 over 1, a2 over 2, up to aN over N, and then 0 afterwards-- here N is a fixed number and a is a sequence in little l2-- then T is a finite rank operator. Let's make this even more explicit: say N is 5. This is a finite rank operator because its range is contained in the subspace consisting of those sequences which are 0 after the fifth entry, and that's finite dimensional: a basis is given by the sequence with a 1 in the first spot and 0's elsewhere, the sequence with a 1 in the second spot and 0's elsewhere, and so on, up to the fifth. So this is a finite rank operator. As far as notation goes, we write T is in R of H-- R for rank, finite rank. So R of H is the set of all finite rank operators. Now, it's not just a set. It's easy to see that this is a subspace of the Banach space consisting of all bounded linear operators from H to itself. Why is that?
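The l2 example above is easy to play with directly. Here is a minimal sketch (my own code, acting on a finite truncation of a sequence) of the N = 5 operator: it scales the first five entries and kills the rest, so the range sits in a 5-dimensional subspace.

```python
# The finite rank operator from the example, with N = 5:
#   (Ta)_k = a_k / k for k <= 5, and 0 afterwards.
# We act on a finite truncation of a sequence in little l2.
def T(a, N=5):
    return [a[k] / (k + 1) if k < N else 0.0 for k in range(len(a))]

a = [1.0] * 7
Ta = T(a)
assert Ta[:5] == [1.0, 1 / 2, 1 / 3, 1 / 4, 1 / 5]
assert all(x == 0.0 for x in Ta[5:])   # range lies in a 5-dim subspace
```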
Well, if I take an operator that has finite dimensional range and multiply it by a scalar, then the range of the scalar multiple is equal to the range of the original operator, unless the scalar is 0, so it's finite dimensional. On the other hand, if I take two finite rank operators and consider their sum, the range of the sum is contained in the sum of the ranges, and the sum of two finite dimensional subspaces is again finite dimensional: if one has dimension 5 and the other dimension 6, their sum has dimension at most 11. So that is me talking you through the proof that this is a subspace; take a minute to write it down. But now let me prove that these finite rank operators really are essentially matrices. I have the following theorem characterizing finite rank operators. T is a finite rank operator if and only if there exists a finite orthonormal subset ek, k equals 1 to some integer L, and complex numbers cij, for i, j from 1 to L, such that T applied to u is the sum over i, j from 1 to L of these numbers cij, times the inner product of u with ej, times ei. In other words, T corresponds to a matrix whose coefficients are these c's, acting on the finite dimensional subspace spanned by the ek's. So let's give the proof of this. One direction is immediate: if T has such a representation, then it has finite rank, because the image is contained in the span of the ei's for i from 1 to L, and that's a finite dimensional subspace. So it's clear that this condition implies that T is a finite rank operator. Let's go the opposite direction. Since the range of T is finite dimensional, we can find an orthonormal basis of the range, call it e-bar-k, k equals 1 to N, so that T applied to u is always an element of the span of these vectors.
So I can write Tu as some numbers times the e-bar-k's. And what are those numbers? Since these are orthonormal, they have to be the inner products of Tu with the e-bar-k's: anything in the span of an orthonormal set is the sum of its inner products with those vectors times those vectors. But now let me make an observation: the inner product of Tu with e-bar-k we can write, using the adjoint, as the inner product of u with T star e-bar-k. The adjoint operator T star, which goes from H to H, remember, satisfies the property that for any two vectors u and v, the inner product of Tu with v equals the inner product of u with T star v. That's the defining property of the adjoint. So Tu equals the sum from k equals 1 to N of the inner product of u with vk, times e-bar-k, where vk is simply defined to be T star e-bar-k. Now, let e1 up to eL be the orthonormal subset of H obtained by applying the Gram-Schmidt process to the vectors e-bar-1 up to e-bar-N, v1 up to vN. Applying it to this collection, I keep the e-bar-k's, since they're already orthonormal, and then I might pick up some more orthonormal vectors, normal to these guys, when I hit the v1's up to vN. So I obtain an orthonormal subset whose span is the span of that collection of vectors. And so what does that mean? There exist numbers aki and bkj such that e-bar-k is in the span of these orthonormal vectors-- so I can write e-bar-k as the sum over i equals 1 to L of aki ei-- and the same for the v's: vk equals the sum over j equals 1 to L of bkj ej. These coefficients exist because the Gram-Schmidt output spans the collection, and therefore each of these vectors is a linear combination of the orthonormal vectors. So that's all I'm saying here. Now we just plug this into the computation.
And we have: Tu, which we computed, equals the sum from k equals 1 to N of the inner product of u with vk, times e-bar-k. If I stick in these sums, I get the sum over i, j equals 1 to L of-- the sum over k equals 1 to N of aki times bkj complex conjugate-- times the inner product of u with ej (this comes from sticking in the vk's), times ei (this comes from sticking in what I have for the e-bar-k's). And this number right here is my number cij. And that's the end. So finite rank operators are really just matrices: you can compute these numbers and they completely characterize the linear operator. In particular, the kernel, the null space of T, includes the subspace orthogonal to e1 up to eL. Now from this we can conclude some nice properties of finite rank operators. The first is that if T is a finite rank operator, then its adjoint is also a finite rank operator. And second, if T is a finite rank operator and A and B are just bounded linear operators on H, then A times T times B is also a finite rank operator. In fancy or short form, you would say that this is a star-closed two-sided ideal in the space of bounded linear operators: it's closed under taking adjoints and closed under two-sided multiplication by bounded linear operators. So 2 I'll leave as an exercise. What's the point? T has finite dimensional range; A then acts on that finite dimensional range, and therefore the range of A hitting T must be finite dimensional. And what you do with B really doesn't matter. So 2 is an exercise; let's prove 1, using this characterization of finite rank operators. We're assuming T is finite rank, so we can write Tu as the sum of constants cij times the inner product of u with ej, times ei, for u in H, where these are fixed constants not depending on u. Now let's compute how T star acts on vectors by using the defining property of the adjoint.
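The theorem says a finite rank operator is ordinary matrix multiplication once you work in the basis e1, ..., eL. A quick numerical sketch (my own, with the standard basis of C^L standing in for the ek's, so that the inner product of u with ej is just u_j):

```python
import numpy as np

# In the standard basis of C^L, Tu = sum_{i,j} c_ij <u, e_j> e_i is
# exactly matrix-vector multiplication by C = (c_ij).
rng = np.random.default_rng(0)
L = 4
C = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
u = rng.normal(size=L) + 1j * rng.normal(size=L)

# <u, e_j> = u_j for the standard basis, so the double sum is:
Tu = np.array([sum(C[i, j] * u[j] for j in range(L)) for i in range(L)])
assert np.allclose(Tu, C @ u)   # the formula is exactly matrix action
```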
Then u inner product T star v-- what we're going to do is come up with an expression for this of the form u inner product something, from which we can conclude that T star v has to equal that something for all v. Now, by the defining property of the adjoint, this equals T applied to u, inner product v. And this equals-- i and j will be understood to go from 1 to L, so this is a finite sum-- the sum of cij times the inner product of u with ej, times the inner product of ei with v; here I've used that the inner product is linear in the first entry. And now I'm just going to rearrange: this I can write as u, inner product with the sum over i, j of cij bar, times the complex conjugate of the inner product of ei with v, times ej. We can check this is correct: u gets paired with each of the ej's, and these constants, when they come out of the second slot of the inner product, get hit by a complex conjugate, which turns them back into cij and the inner product of ei with v. Now, the complex conjugate of an inner product is the same as flipping the entries, so that coefficient is cij bar times the inner product of v with ei. This holds for all u and v. So what have I shown? I've shown that u, inner product with T star v minus this finite sum-- the sum of cij bar times the inner product of v with ei, times ej-- equals 0 for all u, for all v. For a fixed v, this quantity is orthogonal to everything in H, so it has to be 0. And therefore T star v equals the sum from i, j equals 1 to L of the complex conjugates of the cij, times the inner product of v with ei, times ej, for all v in H. That proves that T star is finite rank, and it also gives you the expression for the new coefficients in terms of the old ones; if I want to write it with the index in front of the basis vector arranged as before, I just relabel the sum.
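The conclusion of this computation, that in the standard basis the adjoint's matrix is the conjugate transpose, can be checked numerically. This is my own illustration; note that NumPy's `vdot` conjugates its first argument, so I spell out the inner product with the math convention used in the lecture, linear in the first slot.

```python
import numpy as np

# Math convention: <x, y> = sum_k x_k * conj(y_k), linear in the first slot.
def inner(x, y):
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

Cstar = C.conj().T   # claimed matrix of the adjoint: conjugate transpose

# Defining property of the adjoint: <Cu, v> = <u, C* v> for all u, v.
assert np.allclose(inner(C @ u, v), inner(u, Cstar @ v))
```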
So we see that the matrix corresponding to the adjoint is what we computed last class, basically: the complex conjugate of the transpose of the original matrix of cij's. All right, so that proves adjoints are also finite rank. So taking the adjoint of a finite rank operator leaves you in the space of finite rank operators; composing finite rank operators on the left and on the right with bounded operators leaves you finite rank; and the finite rank operators form a subspace of the Banach space of bounded linear operators on a Hilbert space. Now, the space of bounded linear operators on a Hilbert space naturally comes with a norm, the operator norm. So the next obvious question is: is the subspace of finite rank operators closed? Closed not meaning closed under taking linear combinations-- of course that holds for every subspace-- but in the metric sense: if I take a sequence of finite rank operators converging in operator norm to something, is the limit also a finite rank operator? Is it a closed subset of the space of bounded linear operators? Now, the answer to this is no. Why? Let's, for example, take Tn, a sequence of operators from little l2 to little l2. For an element a in little l2, Tn a should give me another sequence in little l2, and it will be the sequence given by a1 over 1, a2 over 2, up to an over n, and then zeros after that. So T1 takes a sequence in little l2 and spits out the sequence with the first entry divided by 1, and 0's after that. T2 takes a sequence in little l2 and spits out the first entry divided by 1, the second entry divided by 2, and 0's after that. T3, and so on. Now, what follows is just a picture-- this is formal, not meant to mean anything precise.
If I were to express this as an infinite matrix times an infinite-length vector, where the vector is the sequence of ak's, it would be the diagonal matrix with entries 1, 1/2, 1/3, up to 1 over n, and then 0, 0, 0, 0 after that, times the vector a1, a2, a3, and so on. So I'm multiplying a1 by 1, a2 by 1/2, a3 by 1/3, up to multiplying an by 1 over n, and the rest of the entries get sent to 0. So these are finite rank operators. And you can check that Tn minus T in operator norm goes to 0-- where what is T? T is the operator that takes a, an element of little l2, and, as you can guess from letting n go to infinity, multiplies every entry by 1 over the place where it sits: a1 over 1, a2 over 2, a3 over 3, a4 over 4, and so on. As an exercise, you can show that the operator norm of T minus Tn is less than or equal to 1 over n plus 1. Now, this operator T does not have finite rank, because I can find infinitely many linearly independent vectors in the range. For example, T applied to e1, the first basis vector of little l2, gives e1 back; T applied to e2, the second basis vector, gives 1/2 times e2; and in general T applied to en is the sequence with 1 over n in the n-th spot and 0's elsewhere, which is 1 over n times en. So T takes each basis vector and multiplies it by 1 over the integer denoting its position. So the finite rank operators are not a closed subset of the space of bounded linear operators. So what's the closure? And finite rank operators we'd like to think we know how to solve equations for.
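The operator-norm bound in the exercise can be seen numerically. A small sketch of mine: T and Tn are diagonal in the standard basis, with entries 1/k and (1/k for k up to n, else 0), and for a diagonal operator the operator norm is the supremum of the absolute values of the diagonal entries, so the difference has norm exactly 1/(n+1).

```python
# T - T_n is diagonal with entries 0 for k <= n and 1/k for k > n, so
#   ||T - T_n|| = sup_{k > n} 1/k = 1/(n+1)  ->  0 as n -> infinity,
# which is the operator-norm convergence claimed in the lecture.
def diff_norm(n, terms=10_000):
    # approximate the sup over k > n by a maximum over finitely many k
    return max(1.0 / k for k in range(n + 1, n + 1 + terms))

for n in [1, 2, 5, 10]:
    assert diff_norm(n) == 1.0 / (n + 1)
```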
I mean, just based on this expression for a finite rank operator: if we want to solve Tu equals v, well, first we know v has to be in the span of the basis vectors that appear there. And once we've restricted to that, we can solve the equation by looking at the properties of the matrix. And the null space will be not only the null space that sits inside the span of these basis vectors, but will also include the orthogonal complement in H of these basis vectors. So solving equations involving finite rank operators ought to be fairly straightforward if we remember linear algebra. Unfortunately, the finite rank operators are not closed under taking such limits. But we can identify the closure. So now the question is: what is the closure of the finite rank operators in the space of bounded linear operators on H? Something in the closure has to be a limit of finite rank operators, and since we know how to solve equations involving finite rank operators, hopefully we can solve equations involving these limit operators as well. Is there a simple way-- or not simple, but at least a more useful way-- to characterize them than just being limits of finite rank operators? The answer to this question is what are called compact operators, the subspace of compact operators. So let me make a definition. A bounded linear operator K is a compact operator if the closure of the image under K of the closed unit ball in H is compact. I can also write this as: the closure of the set of Ku, for norm of u less than or equal to 1, is compact. So I won't have time to prove the relevant theorem in this class, but we'll do it next time.
And we'll actually show that, in fact, K is a compact operator according to this definition if and only if K is in the closure of the finite rank operators, meaning there exists a sequence of finite rank operators converging in operator norm to K. And, again, why are we interested in compact operators? Well, we know how to solve equations involving finite rank operators: we look at the finite dimensional subspace that contains the range, and then at the matrix that appears in that decomposition. Compact operators are something genuinely new when we move from finite dimensional linear algebra to functional analysis, because they are not necessarily finite rank. We just did this example a minute ago: the operator that takes each entry of a sequence in little l2 and divides it by the place where it occurs. If you believe the theorem, this is a compact operator-- not finite rank-- since it's the limit of that sequence of finite rank operators. Perhaps you want to try proving it's a compact operator directly from the definition; or you can wait till next time and just use the theorem. But many other interesting operators-- for example, inverses of differential operators-- in fact turn out to be compact operators. That's also why we're interested in them. So we got these by taking the closure of the finite rank operators, and now we have this interesting class of operators that we hope to be able to solve equations involving. That will be basically what takes up the remainder of our time in this course. Just staring at this definition, you can kind of see that if K is a finite rank operator, then it's a compact operator.
So we're not going to prove this theorem, like I said, but just to do a sanity check that finite rank operators ought to be compact operators: the image of the closed unit ball under a finite rank operator is a bounded subset of a finite dimensional subspace-- any bounded operator takes bounded sets to bounded sets, and if it's finite rank, it takes this bounded set into a finite dimensional subspace of H. Then I take the closure, and I get a closed and bounded subset of a finite dimensional subspace. And we know by Heine-Borel that closed and bounded subsets of finite dimensional subspaces are compact. So that shows why, at the very minimum, you should believe that finite rank operators are compact. OK, so we'll stop there.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 1: Basic Banach Space Theory
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK, so let's get started. This is the first lecture for the course-- Introduction to Functional Analysis. OK, so let me give you a brief preview or a few words about what functional analysis is or initially aimed to do. So by this point, you've taken-- or at least what the prerequisites are supposed to be, you've taken linear algebra. You've taken calculus. And what these subjects allow you to do is solve problems where you have finitely many independent variables, if you like. You're working in finite dimensions. So you always have-- for example, in calculus, you're trying to find the min or max of a function of 1, 2, 3, or 4 variables, right? Or in linear algebra, you're trying to solve a linear-- set of linear equations, but there's five equations and five unknowns. So there's always finitely many independent variables. Now, that allows you to solve a lot of fun problems of how fast water is leaking from a cone. I think that's one of the problems you solve or something of that sort. But then when you move on in life, you come across ODEs and PDEs and other types of minimization, maximization problems where now, if you'd like, the set of independent variables is no longer a finite dimensional, OK? So let's think of the number of-- or your independent variable as being a member of some vector space. So when we were talking about calculus, functions of 1, 2, or 3 variables, these are functions of R, R2, R3, so and so on. But it turns out that if you want to consider, for example, what's the shortest curve between two points, this is a natural functional, meaning-- well, this is terminology that means now the argument is a function, that the points, your independent variables, are curves, are functions. For a vector in R3, say, it takes three numbers to specify a vector, the three coordinates. How many numbers do you need to specify, say, a continuous curve on the interval, say, 0, 1? 
Well, you need infinitely many numbers. You need to know the graph of that curve. So functional analysis, in short, was built to be able to start solving problems where the vector spaces are not necessarily finite dimensional. And as we'll see in the problems that we work through and the situations that pop up, this arises quite naturally for very particular-- I mean, not particular, but for concrete problems. This is not just some sort of academic exercise. This whole subject grew out of trying to understand particular concrete problems involving PDEs, and minimization, and optimization of now functions of functions. That was the original terminology, functions of functions, where now the vector space is some space of functions, not just spaces of, say, three-dimensional vectors or two-dimensional vectors, and thus the name functional. OK, so that's a few words about how this subject grew, what's the point. So let's start getting into specifics. So again, I'm going to be using a lot of terminology that comes from linear algebra and the Real Analysis course, 18.100B. But at the start, I'll be reminding you of what some of these terms that I'm using mean. But as the course goes on, I will stop redefining terms that you should have seen in Real Analysis or Linear Algebra and just use them. OK, so the first topic that we're going to deal with-- so our norm spaces, these are the central objects and/or starting point in functional analysis, which are the analog of our R2, R3, and so on. So what's the setup? Let V be vector space over R or even C. So it could be a complex vector space. And either of these spaces we'll usually denote by a boldface K. So what does this mean? So again, this is one of those points where I'll just quickly remind you what a vector space is. Then V comes with two operations-- plus and scalar multiplication. 
So plus, going from V cross V into V, which we denote: if I have two vectors in V, they get mapped to a new vector, which I denote V1 plus V2. And then I have scalar multiplication from the set of scalars cross V into V, and this sends alpha, V to alpha times V. All right, so you have these two operations, and they satisfy certain conditions-- relations between them-- which are part of the axioms of a vector space, which you can read in the last section of the notes if you want to refresh your memory. So V as a vector space has these two operations satisfying a certain set of axioms. For example-- familiar old examples-- R2, Rn, and then Cn, the set of n-tuples of elements of R or C. But here's another simple example: C 0, 1, which-- I'll remind you what this notation means-- is the set of functions from 0, 1 into, let's say, the complex numbers, just to fix a field of scalars to work with, such that f is continuous, meaning continuous at every point. This is also a vector space, because the sum of two continuous functions is continuous, and a scalar multiple of a continuous function is also continuous. And these operations satisfy the axioms that you need for a vector space. But there really is a big-- pun intended-- difference between those spaces and this space here. And what is the difference? The size, OK? Now, in analysis, you were introduced to maybe one or two different notions of size, depending on how much analysis you've seen; one was cardinality. That's not what I mean by size here. What I mean is the dimension of these spaces. So let me recall the following definition: we say a vector space V is finite dimensional-- and these were most of the vector spaces you were first introduced to-- if every linearly independent set is, in fact, a finite set.
And let me again-- since I'm using some of these words-- help you recall what this means. A set E is linearly independent if, whenever I take finitely many elements of E, the assumption that some linear combination of them equals 0 implies that all of the scalars must be 0. That is the definition of linear independence. Then V is finite dimensional if every set E satisfying this condition-- every linearly independent set-- is finite in the sense of cardinality, in the sense that there are only finitely many elements in it. So that is finite dimensional. And we say V is infinite dimensional if V is not finite dimensional. Some of you, I saw, have taken a course from me before; I tend to use a lot of abbreviations when I write, but typically these abbreviations are pretty clear if you just sound them out. So, vector space V-- and these infinite dimensional guys are the ones we're going to be dealing with a lot in this course. The finite dimensional ones you dealt with in linear algebra; maybe you saw a few infinite dimensional examples if you were looking at examples of vector spaces. But these infinite dimensional vector spaces are the type of spaces we're now going to solve linear equations on and, in some sense, do calculus on. That's not quite true, but we're going to use calculus and some tools to be able to say things about linear equations on these infinite dimensional spaces. But they won't just be any type of infinite dimensional space, and I'll say what type we're looking at in a minute. OK, so finite dimensional, infinite dimensional-- the first set of examples-- R1, R2, Rn, Cn, and so on, those are finite dimensional spaces.
And the dimension is n, if I had to define what the dimension is. What is an example of infinite dimensional? Well, you can probably guess, since I led up to it by saying there's a big difference between this one and Rn and Cn. The space of continuous functions on the interval [0, 1]-- this is infinite dimensional. And why is that? This is because the set E given by the functions fn of x equals x to the n-- here, n is a natural number or 0-- is a linearly independent set. I'll let you think about why that is. And you see that it's not a finite set-- it contains infinitely many different functions. So this is a linearly independent set, OK? So like I said, what we're going to be dealing with is how to handle solving linear equations, or questions about analysis that we'll need to solve other problems, on these infinite dimensional spaces, unlike in the past where we did analysis on finite dimensional spaces. And the types of things you proved in the analysis class were something like the Heine-Borel theorem, that closed and bounded subsets of Rn are compact, meaning every bounded sequence has a convergent subsequence. This is, in fact, something you prove to show that every continuous function on a closed and bounded set has a min and a max in that set. But that statement I just made, the Heine-Borel theorem, is not true once we get to infinite dimensions. And so if we want to be able to solve problems, we'll have to develop some machinery to get around that. It is one of the main issues that arise when you move from doing analysis on finite dimensions to infinite dimensions. So OK, mathematically, I haven't said much, but I have been trying to gin up the subject to make sure you stay engaged. All right, so we have vector spaces. But to do analysis, we need some notion of how close things are. And to do that, we introduce the notion of a norm.
So a norm on a vector space V-- if I don't write vector space V, you should say to yourself, this capital V is a vector space. A norm on a vector space V is a function-- an object that generalizes length. It's a function from V to the set of non-negative numbers with three properties that we associate to length. Namely: 1, the norm of v is 0 if and only if v equals 0. 2, if I take v, multiply it by a scalar, and then take its length, I should get the absolute value of that scalar times the length of the vector. If I take a vector and multiply it by 2, I should get twice the length of the vector. So this is expressed by: the norm of lambda v equals the absolute value of lambda times the norm of v. And this is for all lambda in my field of scalars and for all v in V. This property here is referred to as homogeneity. The first property is referred to as definiteness. And then the third property is that it satisfies the triangle inequality: for all v1, v2 in V, the norm of v1 plus v2 is less than or equal to the norm of v1 plus the norm of v2. And any vector space that has a norm on it we call a normed space. And this third property, again, is referred to as the triangle inequality. All right, so a vector space with a function on it that satisfies these three properties we call a normed space. And then, like I said in my past classes, whenever you see a decent definition or something with substance, you should do examples. We'll do that in just a minute after I give a few more definitions. So let me just add to this one. This was the definition of a norm. A semi-norm wants to be a norm, but it's not quite. It is a function, which I'll also denote with these two parallel lines on each side.
It's a function satisfying homogeneity and the triangle inequality, but not necessarily definiteness-- or positive definiteness, as it also goes by-- satisfying 2 and 3, but maybe not necessarily 1. Again, semi-norms pop up in a natural way, and I'll give you an example in just a second. So first off, we have this notion of length in a vector space, which is a norm. And if we're given a norm on a vector space, we can associate a metric. So remember, from real analysis, I want you to recall that if X is a set, a function d from X cross X into [0, infinity) is a metric if three conditions are satisfied: a, the distance between two points is 0 if and only if x equals y; b, for all x, y in the set X, d(x, y) equals d(y, x)-- this is symmetry of the distance; and c, the triangle inequality for the distance-- for all x, y, and z in X, d(x, y) is less than or equal to d(x, z) plus d(z, y). So if I have a vector space and I have a norm on it, I can turn that space into a metric space. And this metric that I'm about to write down is usually referred to as the metric induced by the norm. So our first little mini-theorem: if I have a norm on a vector space, and I define a function d(v, w) to be simply the length of v minus w-- so for elements v and w in the vector space-- then this defines a metric on V. And this metric we refer to as the metric induced by the norm, all right? And this is not difficult to prove. Basically, 1, 2, and 3 imply, respectively, a, b, and c. 1, the first property of a norm, implies a. This is pretty clear. The distance between v and w is 0 if and only if v minus w equals 0, which is if and only if v equals w. That gives us part a.
b we get from 2, since by 2, the length of v minus w is equal to the length of minus 1 times w minus v, and we can pull the scalar out and take its absolute value. So since, by 2, the length of v minus w equals the length of w minus v, this implies b. And then c follows again immediately from the triangle inequality 3 by just adding and subtracting a third element. So let me just write: 3 implies c, essentially immediately, all right? OK, so when we have a norm, we get a metric, a notion of distance between two vectors in our vector space, by just taking the length of their difference. So again, this is supposed to be an analog of what we see in R and Rn in general. So for example, let me just recall this. So now, let's look at a few norms. On the n-tuples of Rn or Cn, we have the Euclidean norm: if I have an n-tuple given by x-- so let me put a 2 here to denote this norm-- then the Euclidean norm of this vector, which is in Rn or Cn, is given by the sum from i equals 1 to n of the absolute value of xi squared, all raised to the 1/2. And that gives you the standard notion of length and distance between points that you've dealt with in Euclidean space. But that's not the only norm you could have on these spaces. Another one-- put an infinity here. This norm is the max over i between 1 and n of the absolute value of xi. So to measure the length of a vector, I take the magnitude of the largest entry in that vector, OK? And in general, there's a whole family of norms I could put on Rn or Cn. Let me put a p here. This norm, the little lp norm, is the sum of the p-th powers of the absolute values of the xi's, raised to the 1 over p. Of course, you need this 1 over p for homogeneity to work out, and also the triangle inequality. And this is for 1 less than or equal to p less than infinity.
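To make these three norms concrete, here is a small numerical sketch in plain Python (the vector chosen is just an arbitrary example, not from the lecture):

```python
# Sketch: the Euclidean (p=2), max (p=infinity), and general little-lp norms
# on R^n, computed directly from their definitions.

def lp_norm(x, p):
    """The little-lp norm (sum |x_i|^p)^(1/p), for 1 <= p < infinity."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def linf_norm(x):
    """The max norm: magnitude of the largest entry."""
    return max(abs(xi) for xi in x)

x = [3.0, -4.0]

print(lp_norm(x, 2))    # Euclidean norm: sqrt(9 + 16) = 5.0
print(linf_norm(x))     # max norm: 4.0

# As p grows, the lp norm of a fixed vector approaches the max norm,
# which is the p -> infinity remark made in the lecture.
for p in [1, 2, 10, 100]:
    print(p, lp_norm(x, p))
```

Running the loop shows the values decreasing from 7.0 (p = 1) toward 4.0, the max-norm value.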
I didn't write infinity there because that doesn't make sense, although you can actually prove that if I take a fixed vector and let p go to infinity, then this quantity converges to the max norm. That's not hard to show. So let me quickly draw you a picture of what the unit balls look like in R2, say, with the different norms. Let me recall that if I have a metric on a metric space, then B(x, r) is the set of all y in X so that the distance from x to y is less than or equal to r. Now, you know what the ball looks like for the Euclidean norm. This is just a circle, filled in, of course-- I'm not going to fill this in. So what I'm looking at-- and I'll put a 2-- is the ball centered at 0 of radius 1, so everything inside. What about for the infinity norm? This, in fact, is a square. And let's do one more, the little l1 norm, which is just the sum of the absolute values of the entries. So all of these balls go through these points here on the axes. So first off, everything inside the blue is the l infinity ball. Everything inside the white is the little l2 ball. The little l1 ball is everything inside of this square, which is now tilted. And every other little lp ball is in between the yellow and the blue. And if you take p going to infinity, then that ball converges, in a certain sense, to this l infinity ball, which is in blue. So you see, changing the norm, even on these finite dimensional spaces, changes the geometry of the balls, if you like, all right? But not in too drastic a way, meaning if I take a large enough l1 ball-- maybe of size 3-- that will swallow up the l infinity unit ball. And so the balls are kind of the same: one can always be sandwiching the other.
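The sandwiching just described can be written out precisely. A standard set of comparison inequalities, stated here for $\mathbb{R}^n$ and easy to check from the definitions, is:

```latex
% Comparison of the little l^p norms on R^n (or C^n): each norm is bounded
% above and below by a constant multiple of any other, so the corresponding
% unit balls sandwich one another.
\|x\|_\infty \;\le\; \|x\|_2 \;\le\; \|x\|_1 \;\le\; n\,\|x\|_\infty,
\qquad
\|x\|_\infty \;\le\; \|x\|_p \;\le\; n^{1/p}\,\|x\|_\infty
\quad (1 \le p < \infty).
```

This is the quantitative version of "one ball can always swallow a scaled copy of another," and it is the key fact behind the equivalence of norms in finite dimensions mentioned next.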
I'll talk a little more about that when we talk about equivalent norms, at least in the problem sets. So OK, so that's an example of norms on a finite dimensional vector space. Let's look at another norm. Let's take a metric space X-- so this is just any old metric space. And now, I'm going to define a vector space, C sub infinity of X. This is defined to be the set of all continuous functions on this metric space-- remember, if we have a metric space, we have a notion of continuous functions on it-- such that, to clarify: f goes from X to, let's say, C, f is continuous, and f is bounded, meaning the image of X under f is a bounded subset of C, all right? Just so you can connect this with something I wrote down a minute ago: the set of all bounded continuous functions on [0, 1] is just the set of continuous functions on [0, 1], because we know continuous functions on a closed and bounded interval are bounded. So I don't have to say bounded whenever I write this. OK, so I have this space of bounded continuous functions on a metric space, and I'm going to define a norm on it. This is a vector space, because the sum of two continuous functions is continuous, a scalar multiple of a continuous function is continuous, and those two operations satisfy the axioms of a vector space. Then I claim that if I define an infinity norm again, which will look similar to what I wrote up there-- the sup over all x in capital X of the absolute value of u of x-- then this is a norm on this space of bounded continuous functions. So properties 1 and 2 are easy to see from the definition. For the triangle inequality, well, that follows essentially from the triangle inequality for C. These are C-valued, complex valued functions.
If you like, replace them with real valued functions if that makes you feel better in the first lecture, although we'll need complex numbers eventually. So let's check that this function satisfies the triangle inequality and therefore is a norm. If u and v are two bounded continuous functions on X, then for all x in X, if I take the absolute value of u(x) plus v(x), this is less than or equal to, by the triangle inequality for C, the absolute value of u(x) plus the absolute value of v(x). Again, u(x) is a complex number, and so is v(x). So this is by the triangle inequality. But now, each of these, for any old x, is always bounded by the supremum of that quantity over all values. So the absolute value of u(x), for any fixed value of x, is always bounded by the supremum over all x's in capital X, which is the norm of u, and similarly for v. So what I've shown is that for all x in capital X, the absolute value of u(x) plus v(x) is bounded by the sum of the norm of u and the norm of v. And therefore, this number is an upper bound for the set of absolute values of these guys, and therefore the supremum-- which is the least upper bound of all these quantities as x ranges over X-- is less than or equal to this number. And therefore, we have the triangle inequality for this function, and therefore this defines a norm, which I'll often refer to as the uniform norm or l infinity norm, just because there's an infinity. But the content should be clear on what I'm talking about. So what does convergence mean in this norm? We know what convergence means in the Euclidean or any of these norms: fixed points in the plane are starting to get closer and closer together, at least in R2 or C2. What about here? So let me just note: un converges to u in this space of bounded continuous functions-- what does this mean?
This means the distance between un and u goes to 0 as n goes to infinity. So here, I'm talking about what convergence of a sequence of elements of this space means. And the distance, remember, is defined in terms of the norm. So convergence in the space of bounded continuous functions just means this. But in terms of something covered in a past analysis class, what is that equivalent to? I'll write it out. This means: for all epsilon positive, there exists a natural number capital N such that for all n bigger than or equal to capital N, the sup over all x in X of the absolute value of un(x) minus u(x) is less than epsilon-- in other words, such that for all n bigger than or equal to capital N and for all x in X, the absolute value of un(x) minus u(x) is less than epsilon. But that's just the definition of un converging to u uniformly on X. So the point of this little note is that convergence in this norm, or in this metric-- I'm going to use norm and metric interchangeably, because this metric is induced by this norm-- is the same as saying that this sequence of functions converges to this function uniformly on X. So I hope that's clear. So maybe I should say this now instead of in the eighth lecture: really take the time to actually watch these lectures, for two reasons. One, I'm actually here recording them. So instead of just saying, read this and ask me if you have any questions, I'm giving you more insight than just what's in the notes. And two, it keeps you engaged, OK? So you should treat these as you actually being there-- have your notebook, take notes as I lecture. The great thing is that you can pause and rewind. So, this space of continuous functions-- we have that norm. OK, so some more examples of normed spaces. I was calling that the little lp norm. But the little is not necessarily little.
So the actual little lp space that I will typically refer to-- this is going to be a space of sequences. The vectors in this space are sequences; the points in this space are sequences. So call an element a. So a is a sequence, and little lp is the set of all sequences that have finite lp norm. And what is the lp norm in this case? Well, it's the natural generalization of that guy: the lp norm of a is equal to the sum from j equals 1 to infinity of the absolute value of a sub j to the p, raised to the 1 over p, for p between 1 and infinity. And the l infinity norm is just the sup over j of the absolute value of a sub j. So little lp stands for the space of sequences with finite lp norm-- or you could say their p-th power is summable. So let me just state a simple example. The sequence 1 over j, for j from 1 to infinity, is in little lp for all p bigger than 1, but not for p equals 1, because then we get the harmonic series. Now, why the triangle inequality holds, even in the finite dimensional case, and why this is a natural vector space, is non-trivial. So you shouldn't just be able to say, oh OK, that makes sense, it satisfies the triangle inequality-- it's not clear. It's non-trivial that this is, in fact, a norm. The p equals 1 case is easier to see. But it's not clear for p bigger than 1 that this is a norm: that the sum of two guys in little lp is in little lp, so that this is an actual vector space, and then that we have the triangle inequality, so that this is a norm. That will be in the exercises. But take me at my word that this is indeed a vector space-- if I take two sequences whose p-th powers are summable, then their sum, entry by entry, is also in little lp, and so on-- and that this function defines a norm on little lp. So just accept that.
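A quick numerical sanity check of the 1/j example: the partial sums of the series sum of (1/j)^p settle down for p = 2 but keep growing for p = 1. This is only an illustration of the convergence claim, of course, not a proof:

```python
# Partial sums of sum_{j=1}^{n} (1/j)^p: these stay bounded for p = 2
# (so the sequence (1/j) is in little l^2) but grow without bound for
# p = 1 (the harmonic series, so (1/j) is not in little l^1).

def partial_sum(p, n):
    """n-th partial sum of the series sum_j (1/j)^p."""
    return sum((1.0 / j) ** p for j in range(1, n + 1))

# p = 2: partial sums stay below the known limit pi^2/6 ~ 1.6449.
print(partial_sum(2, 10), partial_sum(2, 10000))

# p = 1: the harmonic series grows roughly like log(n), past any bound.
print(partial_sum(1, 10), partial_sum(1, 10000))
```

The p = 2 sums creep up toward 1.6449 and stop there; the p = 1 sums pass 9 by n = 10000 and keep going.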
And so now we've narrowed down the spaces we're interested in a bit more: we've gone from general infinite dimensional vector spaces to normed spaces. But we're not interested in just any normed vector space. The central objects in functional analysis are the analogs of Rn. Now, one property that Rn and Cn have is that the metric you have on these sets is complete: Cauchy sequences always converge. And this is our next narrowing down of the spaces we're interested in. And these have a special name-- the so-called Banach spaces, after Stefan Banach. So a normed space-- a vector space with a norm-- is a Banach space if it is complete with respect to the metric induced by the norm. So if we have a normed space, we have a metric associated to it, by defining the distance between two guys by that equation up there. And we say it's a Banach space if that metric is complete, meaning Cauchy sequences in that space, with respect to this metric, converge in the space. Now, in first year analysis, you learn that the rational numbers are not complete: there are Cauchy sequences of rationals which do not converge to a rational number. You can write every real number as a limit of rationals-- truncated decimal expansions, say. So for the square root of 2, you can form a Cauchy sequence of rational numbers converging to it, but it's not a rational number; so the rational numbers are not complete. But R is complete. And in general, we give a name to those normed spaces such that this metric is complete: we call those Banach spaces. So examples, which you saw at least for the Euclidean norm and should be able to prove on your own, assuming the triangle inequality for the little lp norms: Rn and Cn are complete with respect to any of the little lp norms. All right, now, let's do a non-trivial one.
Let's show that the space of bounded continuous functions on a metric space is complete. So let's make this a theorem: if X is a complete metric space, then the space of bounded continuous functions on X is a Banach space. BSP-- that's my abbreviation for Banach space. We've already shown that that function over there defines a norm on it, so I'm saying it's complete with respect to that norm: Cauchy sequences always converge in this space. So let me just write this out. We want to show that every Cauchy sequence un in the space of bounded continuous functions has a limit in the space of bounded continuous functions. All right, so let's take a Cauchy sequence. And the way this proof is going to work is the way essentially all proofs of showing something is a Banach space work: you take a Cauchy sequence; then you come up with a candidate for the limit; and then your job is to show two things-- that that candidate is in the space itself, and that the convergence happens. And sometimes, those two come together or can be done at the same time. You'll see what I mean. Let un be a Cauchy sequence in C sub infinity, the space of bounded continuous functions. First off, I claim that this forms a bounded sequence in this space. This is a fact from metric space theory, but I'll write it out again. There exists a natural number N0 such that for all n, m bigger than or equal to N0, the norm of un minus um in the infinity norm is less than 1. So what I'm first going to do is show that this sequence of functions is bounded in this space. Each of them is, of course, a bounded function; but I'm saying that they form a bounded sequence in the space. Then, for all n bigger than or equal to N0, the norm of un is less than or equal to the norm of un minus u sub N0 plus the norm of u sub N0.
And this is less than 1 plus the norm of u sub N0. So for all n bigger than or equal to this fixed number N0, the l infinity norm of un is bounded by 1 plus the norm of this fixed guy. Then, for all natural numbers n, the norm of un is less than or equal to the norm of u1, plus the norm of u2, and so on up through the norm of u sub N0, plus 1: if n is bigger than or equal to N0, then the norm of un is at most 1 plus the norm of u sub N0, which is certainly less than or equal to some non-negative things plus this quantity. Meanwhile, if n is less than N0, then the norm of un is one of the norms that appear in this sum, which is, again, less than or equal to this entire number I wrote down. And let me define that number to be B. So I've shown, for all natural numbers n, the norm of un is bounded by B. So this forms a bounded sequence in the space of bounded continuous functions. You have to keep track of where this boundedness is taking place: each of these functions is a bounded function; what I'm saying here is that they form a bounded sequence in this space. OK, so let me make a note of this. All right, now, what do we know? Let me write out again what it means for this sequence to be Cauchy in this space. This means: for all epsilon positive, there exists a natural number capital N such that for all n, m bigger than or equal to capital N, the norm of un minus um in this uniform norm is less than epsilon. Now, since for all x I have that the absolute value of un(x) minus um(x) is less than or equal to the sup over all x's in capital X-- which is just the uniform norm, the l infinity norm, on this space-- and I am assuming the sequence is Cauchy, so it satisfies this property, I get that for all x in X, the sequence of complex numbers un(x)-- so this is now just a sequence of complex numbers: I take x, I stick it into u sub n, and I have a sequence of complex numbers-- is a Cauchy sequence.
All of this may seem a little bit weird at first. But after I finish this proof, do it again yourself for little l infinity, and you'll start getting the hang of it and seeing how the arguments go. All right, so for every x in capital X, un(x) now forms a sequence of complex numbers, and this is a Cauchy sequence. And in fact, I didn't need X to be complete-- I don't know why I wrote that down; any metric space X works here. All right, so for each x in the metric space, un(x) forms a Cauchy sequence of complex numbers. But the space of complex numbers is a complete metric space, and therefore, for each x, this sequence has a limit. So by completeness of C, for all x in capital X, the sequence un(x) has a limit in C. And now, I define what will be my candidate function, u(x), to be this pointwise limit: the limit as n goes to infinity of u sub n of x. So in fact, in a few words, if you remember what these words mean, what we've shown is that every Cauchy sequence in the space of bounded continuous functions has a pointwise limit. What we're going to show now is that this guy is, in fact, in the space of bounded continuous functions, and that we have convergence of this sequence of functions to u in C sub infinity of X, the space of bounded continuous functions. We've only defined this guy as the pointwise limit of these guys, OK? Then, for all x in capital X, the absolute value of u(x) is equal to the absolute value of the limit as n goes to infinity of un(x). But since this converges, this is equal to the limit of the absolute values. And each of these guys is bounded by B-- they're bounded by the infinity norm, which is bounded by B. So this is less than or equal to B. And thus, u is a bounded function on this metric space. So now, we're going to achieve two things at once.
We're going to show that u is continuous and that we have convergence in this space, by showing that the l infinity norm of the difference u minus un goes to 0. So first-- think of this just now as a quantity, not necessarily a norm; OK, maybe I'm being too careful. We're going to show that this quantity goes to 0 as n goes to infinity. And how do we do this? Well, the old fashioned way: let epsilon be positive, and show that we can choose a capital N such that this is less than or equal to epsilon. I know in the definition you're supposed to have less than epsilon, but less than or equal to is good enough. OK, so let epsilon be positive. Since the sequence is Cauchy in this space, there exists a natural number capital N such that for all n, m bigger than or equal to capital N, the norm of un minus um in l infinity is less than-- all right, let's make it epsilon over 2. Now let x be in capital X. We want to show that for all little n bigger than or equal to capital N, in fact, the norm of un minus u is less than or equal to epsilon over 2. For all n, m bigger than or equal to capital N, I have that the absolute value of un(x) minus um(x) is less than or equal to the norm of un minus um, which is less than epsilon over 2. And therefore, if I take the limit as m goes to infinity-- remember, u(x) is the pointwise limit, and I fixed x in capital X-- I get that for all n bigger than or equal to capital N, the absolute value of un(x) minus u(x) is less than or equal to epsilon over 2. So what have I shown? I've shown, for all n bigger than or equal to capital N-- and this capital N came from the Cauchy condition, not anything having to do with x-- I have that the absolute value of un(x) minus u(x) is less than or equal to epsilon over 2. And therefore, the sup over all x's is less than or equal to epsilon over 2, which is less than epsilon. Thus, un converges to u as n goes to infinity.
Now, what's the last step? I have this candidate, u; the un's converge to u in this sense. I need to conclude that u is an element of the space of bounded continuous functions. I know it's bounded-- why is it continuous? Well, since the norm of un minus u goes to 0, this implies, by what we remarked a little bit earlier, that un converges to u uniformly on X. And since u is the uniform limit of a sequence of continuous functions, that implies that u is continuous. So let me just reiterate what we've done. In sequence, we've shown that u is bounded; we've shown convergence to u with respect to this norm-- so I should have put that in yellow; and we've shown u is, in fact, in the space. So therefore, the space is complete, i.e., a Banach space. So the first time you see that proof-- and again, this is kind of how all the proofs of something being a Banach space go-- it's a little weird. And this will be in the exercises, so that you can get a jumpstart by looking at maybe the simplest one. Well, I still have space over here. So little lp: this is a Banach space for all p between 1 and infinity. Another space-- OK, maybe try your hand at this one instead of little l infinity, because at least something's a little different. Little c0, which is the set of sequences a in little l infinity-- so each element of c0 is a bounded sequence-- such that the limit as j goes to infinity of a sub j equals 0. This is also a Banach space. So I encourage you to try this example of bounded sequences that converge to 0. First off, it's pretty clear it's a vector space. It's actually a subspace-- and I'll get to subspaces of Banach spaces next lecture. So it's a Banach space with the l infinity norm. So again, how would you prove that this is a Banach space? You would take a Cauchy sequence.
So you have to start thinking of what your points are. Again, these Banach spaces can be made up of complicated things. Little lp is a space of sequences, so each point in the space is a sequence of numbers, and a Cauchy sequence of points is a sequence of sequences-- just like, in the example we did, a sequence of points in the space of bounded continuous functions was a sequence of functions. Here, we have a sequence of sequences; and a sequence is just a function, so you shouldn't be too scared. But try your hand at showing that this is a Banach space just from what we've done so far, following this kind of blueprint. And again, it'll be kind of the same. You first show that a Cauchy sequence in here is, in fact, bounded. Then show that pointwise-- and here, pointwise should mean entry by entry-- each of the entries of your sequence of sequences forms a Cauchy sequence. That allows you to obtain a candidate sequence as the limit of your sequence of sequences, all right? And then you have to show, again through this argument, that you do have convergence with respect to this norm, that that candidate sequence is bounded, and, in fact, that it satisfies this condition in order to be in this space. OK, so I think I'll stop there.
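As a small numerical companion to the completeness proof above, here is a sketch of convergence in the uniform norm, using my own illustrative example rather than one from the lecture: the functions u_n(x) = sqrt(x^2 + 1/n^2) are continuous, their sup-distance to u(x) = |x| on [-1, 1] is exactly 1/n (attained at x = 0), so u_n converges to u in the l infinity norm, and the uniform limit |x| is again continuous:

```python
# u_n(x) = sqrt(x^2 + 1/n^2) is continuous for each n.  Its uniform distance
# to u(x) = |x| on [-1, 1] is sup_x (sqrt(x^2 + 1/n^2) - |x|) = 1/n, attained
# at x = 0 (the difference decreases as |x| grows).  So u_n -> u in the sup
# norm, and the uniform limit is continuous -- convergence in C_infinity.
import math

def sup_dist(n, num_points=2001):
    """Sampled sup norm of u_n - u on a uniform grid over [-1, 1]."""
    grid = [-1.0 + 2.0 * k / (num_points - 1) for k in range(num_points)]
    return max(math.sqrt(x * x + 1.0 / n ** 2) - abs(x) for x in grid)

for n in [1, 10, 100]:
    print(n, sup_dist(n))   # should come out as 1/n (the grid contains x = 0)
```

The printed distances shrink like 1/n, which is convergence in the norm of the space of bounded continuous functions, exactly the mode of convergence the completeness proof produces.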
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 16: Fejer's Theorem and Convergence of Fourier Series
CASEY RODRIGUEZ: OK, so let's continue with our discussion of Fourier series from last time. For a given function f in L2, we define the Fourier coefficient f hat of n to be 1 over 2 pi times the integral from minus pi to pi of f of t e to the minus int dt, which, up to a factor of 1 over the square root of 2 pi, is equal to the inner product of f with e to the int over the square root of 2 pi in the Hilbert space L2, OK? And we also had that the N-th partial sum for the Fourier series associated to f was given by the sum from n equals minus N to N of f hat of n e to the inx. And the question which we're trying to resolve is: do we have, for all f in L2, that the limit as N goes to infinity of the norm of f minus S_N of f in L2 equals 0, OK? In other words, is f equal to its Fourier series, at least when we interpret equals in this sense? Now, based on what we've done for Hilbert spaces, this question is equivalent to the following statement: if f is in L2 and the Fourier coefficients are all 0, does this imply f equals 0? So this question-- by what we've done for Hilbert spaces, and big L2 is a Hilbert space-- is equivalent to the statement that the collection of orthonormal vectors in big L2 consisting of the exponentials divided by the square root of 2 pi is a maximal orthonormal subset-- or, in the terminology we had from last time, that it forms an orthonormal basis. So this is the statement that we're going to prove this class. And we're going to proceed via Fejer's method, if you like to give it a name. What we did last time was define the Cesaro Fourier mean to be the average of the first N partial sums, with the hope that somehow this behaves a little better than the partial sums-- because that's the thing we're trying to study, and that's a hard question-- and typically, means of sequences might behave better than the sequences themselves.
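To make the definition of f hat concrete, here is a small numerical check, using my own example function: for f(t) = e^{imt}, the coefficient f hat of n is 1 when n = m and 0 otherwise, which is just the orthogonality of the exponentials that everything in this lecture rests on:

```python
# f_hat(n) = (1/(2*pi)) * integral_{-pi}^{pi} f(t) e^{-int} dt, approximated
# here by a Riemann sum on a uniform grid.  For f(t) = e^{imt}, orthogonality
# of the exponentials gives f_hat(n) = 1 if n == m and 0 otherwise.
import cmath
import math

def fourier_coeff(f, n, num_points=4096):
    """Riemann-sum approximation of the n-th Fourier coefficient of f."""
    h = 2 * math.pi / num_points
    total = 0.0 + 0.0j
    for k in range(num_points):
        t = -math.pi + k * h
        total += f(t) * cmath.exp(-1j * n * t) * h
    return total / (2 * math.pi)

m = 3
f = lambda t: cmath.exp(1j * m * t)
print(abs(fourier_coeff(f, 3)))  # ~ 1
print(abs(fourier_coeff(f, 2)))  # ~ 0
```

The grid size 4096 is an arbitrary choice; on a uniform grid over a full period, the Riemann sum of these exponentials is in fact exact up to rounding error.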
But if the original sequence converges, then the means converge. So we should expect this to converge to f, but hopefully faster, or to have better, more recognizable properties than just studying the partial sums directly. And we'll get to-- in the next statement, it's a little bit clearer why the Cesaro Fourier means converge to f. And so what our goal-- what we're going to show is that if f is in L2, then the Cesaro Fourier means converge to f as N goes to infinity. OK? And so once we've proven that, then that gives us what we want in the yellow box, right? That proves what's in the yellow box, because let's take f in L2 with Fourier coefficients all 0. Then, all of the partial sums will be 0. Then, all of the means will be 0. And since the means converge to f, that proves f is 0. And we get what's in the yellow box. And therefore, the partial Fourier sums converge to f as capital N goes to infinity in L2, all right? And then, once we prove that, I'll make a couple of comments about other types of questions one can ask and what you can do, or a brief comment. So this is our goal for this lecture. And we should be able to get through it. So let me first rewrite the Cesaro Fourier means slightly differently, how we did in the previous lecture for the partial Fourier sums-- we wrote them as what's called a convolution. I haven't defined convolution-- but an integral of a function depending on x and t times f of t dt. And we're going to do the same now for the Cesaro means. And we'll see here in what-- it's a little bit more clear, although I didn't talk so much about the Dirichlet kernel that appears for these guys-- why the Cesaro means converge to f. OK, all right, so the statement is, for all f in L2 minus pi to pi, we have that the N-th Cesaro mean of f, which is our Cesaro Fourier mean, I can write as the integral from minus pi to pi of a function kn of x minus t times f of t dt.
So remember, for the partial sums s sub n, we could write them as a similar integral against d sub n, where d denoted the Dirichlet kernel. Here, kn of x is equal to n plus 1 over 2 pi at x equals 0, and, for x not equal to 0, it is 1 over 2 pi times n plus 1, times sine squared of n plus 1 over 2 times x, over sine squared of x over 2. OK? And this thing, we call the Fejer kernel. So all right? And now, let me just list off a few properties that we'll get from this. Moreover, we have the following properties. 1, kn is non-negative. kn of x is equal to kn of minus x-- it's even. And kn is 2 pi periodic. The second is that the integral of k sub n of x from minus pi to pi-- dx, or let's make this t-- dt equals 1. And the third is, if delta is a positive number less than or equal to pi, then for all x with absolute value bigger than or equal to delta and less than or equal to pi, we have that kn of x, which is equal-- I don't need the absolute values, because it's non-negative-- is less than or equal to 1 over, 2 pi times n plus 1 times sine squared of delta over 2, OK? So OK. So let's prove this theorem. And then, I'm going to say a few comments about-- well, since I have these properties right here, let me go ahead and make a few comments before we prove it. What does kn look like? Let me draw 0, pi, minus pi. So kn is non-negative. It's even. And away from a small neighborhood of 0, it's quite small if capital N is very big. So what it's looking like is maybe the first one-- and it's large at the origin. OK, so maybe that's n equals, let's say, 1. And then, let's say this is delta, and then minus delta. If I were to now look at, let's say, n equals-- I don't know-- a billion, it looks more like something that's very concentrated at the origin, but in such a way that the area underneath the graph-- so the integral-- the area equals 1, OK? And the same with what I drew in white, because white was supposed to be n equals 1.
Yellow was supposed to be n equals-- I don't know-- 1,000. The area is always 1, OK? So this is telling you that if I look at sigma n of f-- so this is just some remarks. This is not to be taken completely literally. This is just the intuition on why we believe that the Cesaro means converge to f. And I'll say how this picture differs from if we looked at just SN. So this means that sigma n of f is, in fact-- so remember, we're going to get, in the end, that this is equal to the integral of kn of x minus t f of t dt. Now, kn is very concentrated near where t equals x, OK? So based on the picture, as n gets very large, this thing is getting more and more concentrated near where t equals x, OK? Now, and therefore, at least for, let's say, very nice f, if this thing is concentrated near where t equals x, then f of t will be approximately f of x. So f of x comes out of the integral, because this is an integral dt. And because the area underneath the curve is always 1, and kn is 2 pi periodic, this integral is equal to the same integral over any interval of length 2 pi, which means I could add an x to both the top and bottom limits, and therefore change variables to get that this is the integral of kn of t dt, which equals 1-- so I would get something like f of x, OK? So this is a heuristic reason on why one should expect the Cesaro means to converge to f. OK? If you look back at the kernel that we had for the partial sums, it had some of the same-ish properties. It was 2 pi periodic, and also even. The integral was 1. And it did decay away from 0. However, it's not non-negative. I'm talking about the Dirichlet kernel dn, which, if you look back in your notes, was sine of n plus 1/2 times x over sine of x over 2, with a constant out in front. And that little difference-- the fact that this kernel, the Fejer kernel, is non-negative, and the Dirichlet kernel is not-- makes a big difference.
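The three kernel properties, and the contrast with the Dirichlet kernel, can be checked numerically (a Python sketch, not from the lecture, using the kernel formulas as stated in the theorem):

```python
import numpy as np

def fejer_kernel(N, x):
    # K_N(0) = (N+1)/(2*pi); otherwise
    # K_N(x) = sin^2((N+1)x/2) / (2*pi*(N+1)*sin^2(x/2))
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, (N + 1) / (2 * np.pi))
    nz = np.abs(np.sin(x / 2)) > 1e-12
    out[nz] = (np.sin((N + 1) * x[nz] / 2) ** 2
               / (2 * np.pi * (N + 1) * np.sin(x[nz] / 2) ** 2))
    return out

x = np.linspace(-np.pi, np.pi, 40001)
K = fejer_kernel(10, x)
area = np.sum((K[1:] + K[:-1]) / 2 * np.diff(x))   # trapezoidal rule

print(K.min() >= 0)                                   # property 1: non-negative
print(abs(area - 1) < 1e-3)                           # property 2: integral is 1
print(fejer_kernel(1000, np.array([1.0]))[0] < 1e-2)  # property 3: small away from 0

# The Dirichlet kernel, by contrast, does take negative values.
t = x[x != 0]
D = np.sin(10.5 * t) / (2 * np.pi * np.sin(t / 2))
print(D.min() < 0)
```

This is only a sanity check on a grid, of course, not a proof of the properties.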
So although this heuristic argument-- maybe you don't see it there-- in the actual proof itself, that oscillation-- and what I mean by oscillation is the fact that dn actually does oscillate between negative and positive values-- this bit of oscillation is actually what you can use to build up a continuous function whose partial sums do not converge to that continuous function at a point, OK? But as we'll see for the Cesaro means, the Cesaro Fourier means-- basically, pick a space, and the Cesaro sums or Cesaro means converge to the function in whatever space you're talking about. And I'll say a little bit more about that in a minute. But OK. So let's prove the theorem that the Cesaro means are written in this way, and the kernel has these three properties. So let me recall that we have S sub k of f at x-- this is equal to, as we wrote last time, the integral from minus pi to pi of D sub k of x minus t f of t dt, where D sub k of, say, t was, from last time, equal to 2k plus 1 over 2 pi at t equals 0, and, for t not equal to 0, sine of k plus 1/2 times t over sine of t over 2, with 1 over 2 pi out in front. OK? So using this, we have that the Cesaro mean at x-- this is equal to 1 over n plus 1, sum from k equals 0 to n, the mean of the partial sums S0 through Sn. And this is equal to-- now, sk f of x is equal to this. So I can write this as the integral from minus pi to pi of 1 over n plus 1, sum from k equals 0 to n of D sub k of x minus t, times f of t dt. And so this here is kn of x minus t. All right? So now, I'm just going to verify that kn of x takes that form that we had before. And kn of x-- this is equal to 1 over n plus 1, sum from k equals 0 to n of D sub k of x. And let's go to the next half board. So I can write this as 1 over 2 pi times n plus 1, times-- so I will look at the case that x is non-0. At x equals 0, you'll get what you get. But let's look at x not equal to 0.
So then, I plug in this formula here and pull out a sine squared of x over 2 on the bottom. And then, I get the sum from k equals 0 to n of sine of x over 2 times sine of k plus 1/2 times x, OK? And because I feel like it, let me put a 2 here and a 2 here. Why do I feel like it? Well, it's because if I have 2 times sine a sine b, I can write that as cosine of a minus b, minus cosine of a plus b, using my angle sum formulas from trigonometry. You wondered why those would be useful. Well, here they are, appearing in the advanced MIT class. So I can write this as the sum from k equals 0 to n of cosine kx minus cosine of k plus 1 times x, all right? Now, this is a telescoping sum, right? I have a cosine kx. I have a cosine k plus 1 x. So this is equal to-- so let's just write this out. And let me just indicate why this is a telescoping sum. We get cosine 0x minus cosine 1x, plus cosine 1x minus cosine 2x, dot dot dot, plus the last one, which is cosine nx minus cosine n plus 1 x. And OK, so this telescopes. That cancels with this. That will cancel with the next, and so on. So all that we're left with is this one minus this one, divided by the 2 that I have right there. And I get 1 over 2 pi times n plus 1, times 1 over sine squared x over 2, times 1 minus cosine of n plus 1 times x, over 2. And again, using a trig formula-- 1 minus cosine 2a, divided by 2, is equal to sine squared a-- I get that this is equal to 1 over 2 pi times n plus 1, times sine squared of n plus 1 over 2 times x, divided by sine squared of x over 2, OK? So that verifies the formula for the Fejer kernel. What about the properties that we have there? These properties-- at least the first two-- follow directly from this formula and the definition. So 1 follows immediately. This is clearly non-negative. It's even-- taking x to minus x does not change this, because we have squares. And also because of the squares, it's 2 pi periodic rather than 4 pi periodic, OK?
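The telescoping computation just done, namely that the average of the Dirichlet kernels D0 through DN equals the Fejer kernel, can be spot-checked numerically (a Python sketch, not from the lecture, using the two kernel formulas as stated):

```python
import numpy as np

def dirichlet_kernel(k, t):
    # D_k(t) = sin((k + 1/2) t) / (2*pi*sin(t/2)) for t != 0
    return np.sin((k + 0.5) * t) / (2 * np.pi * np.sin(t / 2))

def fejer_kernel(N, t):
    # K_N(t) = sin^2((N+1)t/2) / (2*pi*(N+1)*sin^2(t/2)) for t != 0
    return (np.sin((N + 1) * t / 2) ** 2
            / (2 * np.pi * (N + 1) * np.sin(t / 2) ** 2))

t = np.linspace(0.1, np.pi, 500)   # stay away from t = 0
N = 7
avg = sum(dirichlet_kernel(k, t) for k in range(N + 1)) / (N + 1)
print(np.allclose(avg, fejer_kernel(N, t)))   # True
```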
OK, so that's 1. For 2, we note that if we take the integral for minus pi to pi of the Dirichlet kernel, this is-- OK, we had a formula for the Dirichlet kernel, but remember, this is nothing but-- this was defined to be the sum from n equals minus k to k of e to the int dt, OK? Now, e to the int when n is not equal to 0 is 2 pi periodic. And when I integrate it from minus pi to pi, the integral from minus pi to pi of this 2 pi periodic thing-- you can just check. It's the integral of sine, nt, and cosine nt over its period. That's going to give me 0. So all I pick up is when n equals 0, right? And so that's equal to just the n equals 0 term. So that gives me 1, OK? So since the integral of each kernel is 1, then the integral of the Fejer kernel-- which remember, this is equal to the average of the Dirichlet kernels. And each of these is 1 sum from k equals 0 to n 1. I get n plus 1 divided by n plus 1. I get 1, OK? So that gives me 2. And for the third property, we have-- what do we have? Then, the function sine squared x over 2-- what does it look like? This is increasing. Or I should say it's even and increasing on 0 to pi. So what it looks like is sine squared x over 2. So there's pi minus pi sine squared. Looks like it goes up to 1. So if I'm looking at all x outside of-- so in that shaded region-- then, if x is outside of this delta region, then I get that sine squared x over 2 is going to be bigger than or equal to whatever, so it sits above the value that I get here, which is sine squared delta over 2. And therefore, I get that kn of x, which is equal to its absolute value, is less than or equal to 1 over 2 pi n plus 1 sine squared n plus 1 over 2x over-- I had sine squared x over 2, but since sine squared x over 2 is bigger than or equal to sine squared delta over 2, taking 1 over reverses the inequalities. And I get sine squared delta over 2 here. Sine of anything is always bounded above by 1. 
So I get this is less than or equal to 1 over 2 pi n plus 1 sine squared delta over 2, OK? So for the moment, let me put this absolute value there. I'm not doing it because I think it looks better. It's because I'm going to make a comment in a minute. OK, let me just make a small comment. Well, let me prove the next theorem. And then, I'll make the comment. OK, so we have these properties of the Fejer kernel. And now, what we're going to do is on our way to proving that we have convergence of the Cesaro means to a function in L2, we're first going to do it for continuous function. So you proved in the assignments that in L2 minus pi to pi, the continuous functions vanishing at the two endpoints are dense in the space big L2, OK? Now, if a function's continuous and equals 0 at both of the endpoints, it's 2 pi periodic in the sense that it has the same value at both endpoints. And therefore, the subspace of continuous functions that are 2 pi periodic is dense in L2. So if we're going to be able to show that the Cesaro means converge to a function in L2 for arbitrary L2 function, maybe it makes sense to try and do it first for continuous functions. And it's there that this argument that I just-- this heuristic argument I gave here will be more math-like. OK, so we have a following theorem due to Fejer, which is the following. If f is continuous and 2 pi periodic, meaning f of pi is equal to f minus pi, then not only do we have the Cesaro means converging to f in L2, we actually have it in the best sense that you could for a continuous function. Then, sigma n of f converges to f uniformly in minus pi to pi, all right? So before, we were looking at Fourier series in L2. So convergence in L2 was the way one makes sense of infinite series or something converging to something else, all right? If we're looking at continuous functions, then we already a different norm there if we want to just consider a complete space containing continuous functions. 
We have the uniform norm, or the infinity norm. And so what this says is that even in this smaller space and in this stronger norm, we have convergence of the Cesaro means to the function f. But again, this doesn't imply that the Fourier series converges to f uniformly. Like I said, one can, in fact, use this oscillatory behavior of the Dirichlet kernel to prove there exist continuous functions whose Fourier series diverges at a point. And therefore, it doesn't converge uniformly to the function. But this is true for the Cesaro means because of these properties of the Fejer kernel-- because it has this shape where it's non-negative, it's peaking near the origin, and it has total mass 1, total integral 1. In some sense, you should think of k sub n, as n goes to infinity, as looking more and more like the Dirac delta function at 0, which maybe you encountered in physics. If that doesn't mean anything, don't worry about it-- just skip to the next part of the talk. The delta function is supposed to have this magical property that it's 0 away from 0-- which is what these kernels are starting to look like-- it has integral 1, and when you integrate it against a function, you get the function evaluated at the origin, which is like what we're saying here, OK? So again, that's some more heuristics. But linear operators depending on a parameter that appear like this, where it's a function of this form times f of t integrated dt, pop up all the time in harmonic analysis, OK? And kernels having these properties, in fact, pop up also in harmonic analysis, OK? Harmonic analysis being a fancy name for Fourier analysis and other stuff. So let's prove this. So the first thing that I want to do is-- so f is a continuous function on minus pi to pi that's 2 pi periodic. So I can extend f to all of R by periodicity. In other words, so we extend to all of R, meaning I have-- so there's pi, minus pi. Here's 3 pi. Here's minus 3 pi. So suppose I have this continuous function, which is 2 pi periodic.
Now, I take that continuous function and just extend it by how it is here, and so on, OK? I'm not saying I extend it by 0 outside. I'm saying I extend it periodically, OK? OK, now, I can write down a formula for exactly how you do that. But just trust me-- you can do that. And we also get the following simple properties. Then f, now referring to it as a function defined on all of R that's 2 pi periodic-- this is also continuous and 2 pi periodic, which implies that f is uniformly continuous and bounded. I.e., if I look at the infinity norm of f, first off, by periodicity, this is just equal to the sup over x in minus pi to pi of the absolute value of f of x, and because f is continuous, this thing is finite, OK? All right. Now, it's not difficult to believe that if I extend f by periodicity, it's going to be continuous. But using that and the fact that it's 2 pi periodic, you can then also conclude that it's uniformly continuous. Meaning-- let's just quickly review what uniformly continuous means. This means, for all epsilon positive, there exists a delta positive such that if y minus z is less than delta in absolute value, then f of y minus f of z is less than epsilon in absolute value, meaning I can choose a delta independent of y and of the point, right? Continuous at a point means I fix x; then, for all epsilon, there exists a delta. Uniformly continuous means the delta doesn't depend on x, the point that I'm looking at. All right, so we have the basic observation that we're going to make there. And maybe I'll just leave this up for now. So we want to prove the sigma n's converge to f uniformly on minus pi to pi. So that means we should be able to find, for every epsilon, a capital M such that for all n bigger than or equal to M, and for all x in minus pi to pi, sigma n of f at x minus f of x is less than epsilon in absolute value. All right, so let epsilon be positive.
Since f is uniformly continuous, as I stated-- recalling the definition-- that implies that there exists a delta positive such that if y minus z is less than delta, then f of y minus f of z is less than-- and let me get this right so it comes out pretty in the end-- is less than epsilon over 2, OK? So now, what we're going to go through is make that heuristic argument which I just erased actually precise, all right? So here, we're saying, if any two points are sufficiently close, f is going to be close in value. OK, now choose M, a natural number, so that for all n bigger than or equal to M, the quantity twice the L infinity norm of f, over n plus 1 times sine squared of delta over 2, is less than epsilon over 2, OK? So n plus 1-- that's the thing that's changing. So I have these fixed numbers here now. I've fixed delta. I have the L infinity norm of f. So I have this number here. And I'm just saying, choose a capital M so that for all n bigger than or equal to M, this number times-- and I'll even put it here-- times 1 over n plus 1 is small, is less than epsilon over 2. And I can do that because this, as capital N goes to infinity, converges to 0, right? OK. Now, since f and k sub n, the Fejer kernel, are 2 pi periodic, I can take the Cesaro mean, which is given by the integral from minus pi to pi of kn of x minus t f of t dt, and make a change of variables, setting tau equal to x minus t. And then, this will be equal to-- what is it going to be equal to? The integral from x minus pi to x plus pi of kn of tau f of x minus tau d tau, OK? All of this change of variable stuff is fine, because I'm dealing with continuous functions. I'm integrating continuous functions. So that's the Riemann integral. We have a change of variables for the Riemann integral. So that's completely fine. OK, now this is the product of 2 pi periodic functions. And if I take the integral of a 2 pi periodic function over an interval of length 2 pi, that is equal to the integral over any interval of length 2 pi, all right?
So we're integrating over an interval of length 2 pi, right? We're going from x minus pi to x plus pi. That is equal to the integral of the same quantity over any interval of length 2 pi. So it's also equal to the integral over minus pi to pi. OK? So all I'm saying is I can change variables and move the x minus t. And let me even go back to t instead of tau here. Because of periodicity, I can switch this x minus t over here to the argument of f. All right, now we're going to start seeing some magic happen. And this is where that heuristic argument that I gave earlier actually starts to make sense. So then, I have that for all n bigger than or equal to M-- so I have that condition that the quantity was less than epsilon over 2-- and for all x in minus pi to pi, I have that sigma n f of x minus f of x-- so this is equal to the integral from minus pi to pi. And again, I'm going to write this now as kn of t f of x minus t dt, minus-- now, here's the trick. The Fejer kernel has integral 1. So I can actually write f of x as the integral from minus pi to pi of kn of t times f of x dt. I'm integrating dt, right? Then this just pops out. I get f of x times the integral of the Fejer kernel, which is 1, OK? And this equals the integral from minus pi to pi of-- just combining things-- kn of t times, f of x minus t, minus f of x-- which is good, because we have a continuous function, and now we have something inside that looks like f of the argument minus something, minus f of the argument, OK? Now, I'm going to split this integral into two parts, and then use the triangle inequality and bring the absolute value inside. In fact, I'm going to go ahead and do that here. This is less than or equal to what I get if I combine terms like I did and then bring the absolute value inside the integral. OK? And now, I'm going to split this integral up into two parts. This is equal to the integral over absolute t less than delta-- and because kn is non-negative, this is just kn of t times f of x minus t minus f of x dt-- plus the other term. OK?
That other term is the integral over absolute t bigger than or equal to delta of kn of t times f of x minus t minus f of x, dt, all right? Now, what do we know? If the absolute value of t is less than delta, then x minus t, minus x, is equal to minus t, which in absolute value is less than delta. So note that x minus t minus x equals minus t, which is less than delta in absolute value here, right? And therefore, this quantity here is less than epsilon over 2, by how we chose delta. So this is less than epsilon over 2 times the integral over this region of kn of t. OK? But I can make this region larger-- so let me just leave it here as it is. Plus-- now, what do I do with this piece? I bound this by twice the L infinity norm of f. The absolute value of this is less than or equal to, by the triangle inequality, the sum of the absolute values, which is less than or equal to the sup of this plus the sup of that, in x, which is equal to twice the infinity norm. So I get 2 times the infinity norm of f popping out from this term, and kn of t-- oh, I'm away from t less than delta. And this is where I use that third property that I have from before. So let me leave this here-- times the integral over this region of 1 over, 2 pi times n plus 1 times sine squared of delta over 2, dt, OK? And now, the first integral, I can say, is less than or equal to the whole integral over minus pi to pi, which is equal to 1. Plus, again, making this an integral over the entire region, the dt integral gives me 2 pi, which cancels the 1 over 2 pi. So I get twice the infinity norm of f, over n plus 1 times sine squared of delta over 2. And we chose M so that this second quantity here is less than epsilon over 2. OK? And therefore, we have proven that for all capital N bigger than or equal to M, for all x in minus pi to pi, the difference of sigma n f and f is less than epsilon, proving uniform convergence, OK? So here's the remark I was going to make, which is that the same proof can be modified if, instead of kn of x being bigger than or equal to 0-- let me make sure I'm saying the right thing.
So if, instead of this property, which we had for the Fejer kernel, we have that the sup over n of the integral from minus pi to pi of the absolute value of kn of x dx is finite-- meaning, if I have a sequence of functions, kn's, and I have the corresponding operators that look like that-- maybe they're not associated to any questions about Fourier analysis, but I'm just saying-- and it satisfies the three properties I had before, with the exception of being non-negative, but instead of that, it satisfies this property, then I can redo the same proof and show that those things converge to f uniformly, OK? Why am I saying that? Because maybe you would like to then try your hand at replacing kn with dn, the Dirichlet kernel, OK? The Dirichlet kernel satisfies all of the other properties we had up there. The integral is 1. In absolute value, it decays away from the region where absolute x is less than delta. And it's even and 2 pi periodic, OK? But it doesn't satisfy this. And if I look at the integral from minus pi to pi of the absolute value of the Dirichlet kernel, what one can prove is that this is something like log n for large enough n, OK? All right? So that was just a tiny remark I wanted to say on why, if you thought about maybe redoing this proof using the Dirichlet kernel-- which satisfies almost all the same properties, with the exception of being non-negative-- you could, if the Dirichlet kernel had satisfied this bound. But it doesn't. It's like log n. And therefore, if I take the sup, I don't get something finite, OK? All right, so we've proven that the Cesaro means of a continuous function converge uniformly to that continuous function. So we're almost to the point where we can say that the Cesaro means of an L2 function converge to that L2 function, and conclude that the set of exponentials divided by square root of 2 pi forms a maximal orthonormal subset of L2, and therefore is an orthonormal basis, so that the partial Fourier sums converge back to the function in L2.
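Both halves of this discussion can be illustrated numerically (a hedged Python sketch, not from the lecture). For the first, the L1 norms of the Dirichlet kernels grow without bound, roughly like log n. For the second, Fejer's theorem is demonstrated on the continuous 2 pi periodic function f(x) = |x|; the coefficients used below, f hat of 0 equals pi over 2 and f hat of n equals ((-1)^n - 1)/(pi n^2), are a standard computation for this f, and one can check that sigma N acts on the n-th coefficient by the weight 1 - |n|/(N+1), since the n-th coefficient appears in S_k exactly when k is at least |n|.

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

# 1) L1 norms of the Dirichlet kernel grow, roughly like log n.
def dirichlet_L1(n, num=200001):
    t = np.linspace(-np.pi, np.pi, num)
    t = t[np.abs(t) > 1e-9]            # drop the removable singularity at 0
    D = np.sin((n + 0.5) * t) / (2 * np.pi * np.sin(t / 2))
    return trap(np.abs(D), t)

norms = [dirichlet_L1(n) for n in (10, 100, 1000)]
print(norms[0] < norms[1] < norms[2])   # True: no uniform bound in n

# 2) Cesaro means of f(x) = |x| converge uniformly (Fejer's theorem).
def fhat(n):
    return np.pi / 2 if n == 0 else ((-1) ** n - 1) / (np.pi * n ** 2)

def cesaro(N, x):
    return sum((1 - abs(n) / (N + 1)) * fhat(n) * np.exp(1j * n * x)
               for n in range(-N, N + 1)).real

x = np.linspace(-np.pi, np.pi, 2001)
sup_err = lambda N: np.max(np.abs(cesaro(N, x) - np.abs(x)))
print(sup_err(80) < sup_err(20) < sup_err(5))   # True: sup-norm error shrinks
```

The Fejer kernel's L1 norm, by contrast, is exactly 1 for every n, since the kernel is non-negative with integral 1.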
We just need one more bit of information. So we have the following theorem. For all f in L2 of minus pi to pi, if I look at sigma n of f-- so first off, this is just a finite linear combination of exponentials, right? So this is clearly in L2. It's a continuous function. But if I take the L2 norm of that, it could depend on n. In fact, it's less than or equal to the L2 norm of f. OK? And this bound is what allows us to go from the 2 pi periodic continuous functions to general L2 functions by a density argument, OK? So first, we'll do this for 2 pi periodic continuous functions, and then, by density, conclude it for L2 functions. So suppose first that f is continuous and 2 pi periodic. And then, of course, extend it to R by periodicity, like we did before. Then, as before, we had that the Cesaro mean of f is equal to the integral from minus pi to pi of f of x minus t kn of t dt. And so if I compute the integral of sigma n f of x squared dx, this is equal to-- so each one of these factors is equal to an integral over minus pi to pi. So I'm going to have three integrals, and f of x minus s, times the complex conjugate of f of x minus t, times kn of s and kn of t, and then ds dt dx, OK? Now, all these functions are continuous. So we have a Fubini's theorem, which says we can reverse the order of integration however we please. So I can write this as the integral from minus pi to pi, minus pi to pi, of kn of s kn of t, times, now, the integral first with respect to x, dx, and then ds dt, OK? Now I do Cauchy-Schwarz on this inner integral. And so this is less than or equal to the integral from minus pi to pi, minus pi to pi, of kn of s kn of t, times-- I'm using Cauchy-Schwarz in x now-- the L2 norm of the function f of dot minus s, times the L2 norm of f of dot minus t, ds dt, OK? What I mean by this is I'm taking the L2 norm of this function depending on s, but in the first variable-- in this x variable, OK? So just write it out to see what I mean.
Now, each of these is the integral of a 2 pi periodic function over an interval of length 2 pi. That's equal to the integral of that function over any interval of length 2 pi. So I can, in fact, remove this s and remove this t, and just pick up the L2 norm of f in both places. So this is, in fact, equal to-- and because these two things no longer depend on s and t, they come all the way out of the integral-- the L2 norm of f squared, times the integral from minus pi to pi of kn of s ds, times the integral from minus pi to pi of kn of t dt. Both of these integrals equal 1. So I get the norm of f squared. And I started off with the L2 norm squared of the Cesaro mean of f. So I get this for all 2 pi periodic continuous functions, OK? Now, how do we then get the bound for general f? We use a density argument. So let me start over real quick. Now, let's take a general element f in L2. OK, now we start. By the assignments, you know that there exists a sequence fn of 2 pi periodic continuous functions such that the fn's converge to f in L2. And one can verify, simply from the definition of each of the Cesaro means, that then the Cesaro means of the fn's also converge to the Cesaro mean of f as little n goes to infinity. So capital N here is fixed, OK? This is just using the definition of what the Cesaro mean is and Cauchy-Schwarz, basically, OK? And the fact that the fn's converge to f in L2. Thus, we get that the L2 norm of the Cesaro mean of f is equal to the limit, as little n goes to infinity, of the L2 norms of the Cesaro means of these continuous 2 pi periodic functions, which, as we've proven already, are all less than or equal to the L2 norm of fn, because the fn's are continuous and 2 pi periodic. And again, because the fn's converge to f, the norms converge.
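The norm bound in this theorem can be sanity-checked numerically on a trigonometric polynomial (a Python sketch, not from the lecture; it uses the fact, checkable from the definition, that sigma N multiplies the n-th Fourier coefficient by the weight 1 - |n|/(N+1)):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 12                                    # degree of a random trig polynomial
c = rng.normal(size=2 * M + 1) + 1j * rng.normal(size=2 * M + 1)

x = np.linspace(-np.pi, np.pi, 8001)
f = sum(c[n + M] * np.exp(1j * n * x) for n in range(-M, M + 1))

N = 6
sigma = sum((1 - abs(n) / (N + 1)) * c[n + M] * np.exp(1j * n * x)
            for n in range(-N, N + 1))

def l2_norm(g):
    # L2 norm on [-pi, pi] via the trapezoidal rule
    y = np.abs(g) ** 2
    return np.sqrt(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

print(l2_norm(sigma) <= l2_norm(f))   # True: ||sigma_N f||_2 <= ||f||_2
```

By Parseval, the bound for a trigonometric polynomial reduces to the weights being at most 1, so this check is trivially true; the content of the theorem is that the same bound survives the passage to general L2 functions.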
And I get the result I wanted for general L2 functions. OK? So now, we're almost there. What we have is this bound. And we have that the Cesaro means of a continuous function converge to the continuous function uniformly on the interval. We're going to use that, this bound, and the density, again, of the 2 pi periodic continuous functions in L2, to conclude the following theorem. For all f in L2, the Cesaro means converge to f as capital N goes to infinity. In particular, we get, as an immediate corollary, if all of the Fourier coefficients are 0, then f is 0, right? Because if I've proven this and all the Fourier coefficients are 0, then the Cesaro means are all 0. And therefore, since 0 converges to f, f must be 0. OK? And therefore, the set of exponentials-- normalized, of course-- forms a maximal orthonormal subset of L2, i.e., they're an orthonormal basis for big L2, which answers the question we had about Fourier series converging to a function in L2, OK? All right. So we'll do this just as a standard epsilon-N argument. So let f be in L2. Let epsilon be positive. We know that the continuous 2 pi periodic functions are dense in L2, because we showed in the assignment that, for any f in L2 over an interval, I can find a continuous function that vanishes at the endpoints-- and therefore is periodic-- which is close to f in L2. So there exists a g that's continuous and 2 pi periodic such that f minus g in L2 norm is less than epsilon over 3. Since sigma N g converges to g uniformly on minus pi to pi, there exists a natural number M such that for all N bigger than or equal to M, for all x in minus pi to pi, I have that sigma N g of x minus g of x is less than epsilon over 3 square root of 2 pi. OK? Now, we go to the part where we get from g back to f, OK?
Then, for all N bigger than or equal to M, if I look at the L2 norm of sigma N of f minus f, and I add and subtract terms and apply the triangle inequality, I get that this is less than or equal to sigma N of f minus g in L2, plus sigma N of g minus g in L2, plus g minus f in L2, OK? So sigma N of f minus sigma N of g-- just from the definition, you can check this is equal to sigma N of f minus g. So I used that there without explicitly stating it. Now, by the bound I just proved, the L2 norm of the Cesaro mean is less than or equal to the L2 norm of the function. So this first term is less than or equal to f minus g in L2. And then, I also have the L2 norm of g minus f there. Plus-- and I'll actually write out what this is-- the integral of sigma N g of x minus g of x, squared, dx, to the 1/2, OK? Now, f minus g is less than epsilon over 3 in L2 norm. So those two terms give less than twice epsilon over 3. And sigma N g of x minus g of x is less than epsilon over 3 square root of 2 pi for every x. So that bound pulls all the way out, and I'm left with epsilon over 3 square root of 2 pi, times the square root of the integral from minus pi to pi of dx-- that is, times square root of 2 pi-- which gives epsilon over 3. And I just get epsilon in the end, OK? OK. So that concludes what I wanted to do for Fourier series, at least for now, which applies what we've done for Lebesgue integration, these big LP spaces, and also some of this general machinery we've built up for Hilbert spaces, to actually answer a more concrete question, rather than just trying to prove general statements. General statements are very, very useful-- I'm not saying they're not. But I'm saying this so that you can see, through a concrete problem, why one would want to use functional analysis in the first place. Now, coming back to what we've done so far, let me just make a couple of remarks about what we haven't shown. There's a very deep theorem due to Carleson.
So what we've shown is that the partial sums-- so we showed the set of exponentials, normalized, are a maximal orthonormal set-- I mean that they're an orthonormal basis. So the partial sums converge to f in L2. So this is what we've shown. For all f in L2, the partial sums converge to f in L2, all right? But this does not translate into a point-wise statement. This does not say that the partial sums converge to f almost everywhere. OK? There is a general theorem, covered in more advanced measure theory classes, where one can say that there exists a subsequence converging to f almost everywhere. But that's not very good, or at least not very clean. Now, for a long time, it was not necessarily believed that the partial sums converged to f almost everywhere. But a theorem due to Carleson shows that for all f in L2, the partial sums do converge to the function almost everywhere, OK? This is, in fact-- maybe this is true. Maybe this is not. I heard this from my advisor. Carleson spent a few decades trying to prove the negation of the statement, trying to come up with an example of a function whose partial sums don't converge almost everywhere back to the function. And then, he came up with the bright idea that, well, maybe that's not true. Let me spend some time trying to put myself in the other shoes. And within a year or a couple of years, he was able to prove this theorem, OK? So this is Carleson's theorem, that we do have convergence almost everywhere. Now, you can also ask about other modes of convergence. This is convergence in L2 of the partial sums. We have other LP spaces, right? What about in those LP spaces? Can I replace this 2 with p? The Fourier coefficients and partial sums-- these all make sense for any big LP space. So what is known is-- and now, the name is escaping me, but I'll just state it-- for all p strictly between 1 and infinity, the partial sums converge to the function in LP. When p equals 1, this is false, OK?
And when p equals infinity, this is also false, because the partial sums-- these are finite linear combinations of exponentials, and therefore continuous functions, OK? So you can't have these converging in L infinity to an arbitrary function in L infinity-- which can be discontinuous, it just has to be bounded. Because then, the limit would have to be continuous, OK? The uniform limit of continuous functions-- which convergence in L infinity kind of is-- has to be continuous, OK? So that's why you wouldn't expect it for L infinity. And by what one would call duality, because L infinity is the dual of L1, you also don't get p equals 1. But in fact, things are worse there. You can come up with an L1 function so that the Fourier series-- I don't think I'm lying when I say this, but-- diverges almost everywhere, I want to say, OK? I don't think I'm lying. But for p equals 1, one can come up with an example where the partial sums diverge point-wise almost everywhere, OK? OK. But to prove this flavor of statements requires deeper harmonic analysis, harmonic analysis being the umbrella that Fourier analysis sits in, and requires a knowledge of, or at least working with, certain operators called singular integral operators, which were developed back in the middle of the last century at the University of Chicago by my mathematical grandfather and great grandfather, and which give you some beautiful results about, again, convergence of Fourier series, but also some applications to PDEs, which were why they were originally created in the first place, and so on. But perhaps you'll encounter that if you take a class in harmonic analysis or Fourier series. I haven't taught the Fourier series class, so I don't know what it's about. But that kind of material will not be covered in this class. And this will be as far as we go as far as these types of questions, all right?
So next time, we'll move on to minimizers over closed convex sets and consequences of that, one being that we can identify-- which is the most important application-- we can identify the dual of a Hilbert space with the Hilbert space itself in a canonical way. You can already prove that if you wish, using the fact that every separable Hilbert space is isometrically isomorphic to little l2. You know that the dual of little lp is little lq, where 1 over p plus 1 over q equals 1; for p equals 2, that gives q equals 2. So little l2 is the dual of itself. But we'll prove it for general Hilbert spaces, which has some very important and interesting consequences when it comes to studying and solving equations in Hilbert spaces, meaning you have linear operators-- when can you solve equations involving these linear operators, and so on? All right, so we'll stop there.
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_14_Basic_Hilbert_Space_Theory.txt
PROFESSOR: So last time we introduced pre-Hilbert spaces as vector spaces that come equipped with a Hermitian inner product. A Hermitian inner product is a pairing between two elements that gives you a complex number and that's linear in the first entry. And if you switch the entries, you get the complex conjugate of the pairing of the original two entries. And it's positive definite, meaning if I take the inner product of a vector with itself, it's non-negative, and 0 if and only if the element is zero. And we proved the Cauchy-Schwarz inequality. So let me actually just recall that. If H is a pre-Hilbert space, we defined-- using this norm notation, although I haven't proved it's a norm yet-- the norm of v to be the inner product of v with itself raised to the 1/2 power. And the inner product of v with itself is a non-negative real number, so taking the 1/2 power is meaningful. And we proved at the end of last time the Cauchy-Schwarz inequality: for all u, v in H, if I take the inner product of u and v and take its absolute value, this is less than or equal to the norm of u times the norm of v. OK, so now let's use this to prove that this thing that I have been denoting with this norm notation is, in fact, a norm on a pre-Hilbert space. So theorem: if H is a pre-Hilbert space, then this thing here, defined in this way, is a norm on H, OK. So remember, we have to prove three things for something to be a norm. We have to prove it's positive definite, and then we also have to prove homogeneity and the triangle inequality. So note that this quantity here equals 0 if and only if the inner product of v with itself is zero, which by the positive definite property of a Hermitian inner product implies that v equals 0. So that proves that this function here on H is in fact positive definite.
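Since everything here rests on the Cauchy-Schwarz inequality, a quick numerical sanity check may help (my own sketch, not part of the lecture): the Hermitian inner product on C^n, the induced norm, and the two inequalities just discussed, tested on random complex vectors.

```python
import numpy as np

# Sketch: the Hermitian inner product on C^n, <z, w> = sum z_j conj(w_j),
# the induced norm ||v|| = <v, v>^(1/2), Cauchy-Schwarz, and the
# triangle inequality, checked on random complex vectors.
rng = np.random.default_rng(0)

def inner(u, v):
    return np.sum(u * np.conj(v))

def norm(v):
    ip = inner(v, v)
    assert abs(ip.imag) < 1e-12 and ip.real >= 0  # <v, v> is real and >= 0
    return np.sqrt(ip.real)

for _ in range(200):
    u = rng.normal(size=5) + 1j * rng.normal(size=5)
    v = rng.normal(size=5) + 1j * rng.normal(size=5)
    assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12  # Cauchy-Schwarz
    assert norm(u + v) <= norm(u) + norm(v) + 1e-12       # triangle inequality
print("Cauchy-Schwarz and triangle inequality hold on all samples")
```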
Now if lambda is a complex number and v is in H-- so we kind of saw this in the proof of the Cauchy-Schwarz inequality-- if I take the inner product of lambda times v with lambda times v, this is equal to lambda times lambda bar times the inner product of v with itself. A scalar pulls out of the first entry unfazed; if a scalar is in the second entry, it comes out with a complex conjugate. And of course, a complex number times its complex conjugate gives me the absolute value of that complex number squared. So taking the square root of both sides, I get that the norm of lambda v is equal to the absolute value of lambda times the norm of v, which proves homogeneity of this function on H. OK, so all that remains, to prove that this is an actual norm-- so I don't have to keep being, or at least trying to be, careful with what words I'm using-- is the triangle inequality. So let u, v be in H. Then I compute the norm of u plus v squared. This is equal to the inner product of u plus v with u plus v, which we compute just how we computed it when we had a t here and a t here in the proof of the Cauchy-Schwarz inequality. And we'll use this identity quite often: the norm of u plus v squared is norm u squared plus norm v squared plus 2 times the real part of the inner product of u and v. OK, now this is less than or equal to norm u squared plus norm v squared plus 2 times the absolute value of the real part. Now the absolute value of the real part of a complex number is less than or equal to the absolute value of that complex number. So this is less than or equal to what I had before with 2 times the absolute value of the inner product of u and v. And by Cauchy-Schwarz, that is less than or equal to norm u squared plus norm v squared plus 2 times norm u times norm v. And I guess let me go to the next board. All the proof is essentially done. What I have is equal to the quantity norm of u plus norm of v, squared, i.e.
So I started off with the norm of u plus v squared, and I proved that's less than or equal to this thing here squared. So taking the square root of both sides, I get the norm of u plus v is less than or equal to the norm of u plus the norm of v. OK, so this thing that I defined before is, in fact, a norm. I'm not just using that notation to denote an impostor. OK, now using the Cauchy-Schwarz inequality, we can also prove that taking the inner product-- it's a function on H cross H-- is continuous, OK. So let me label this as continuity of the inner product. So let me state it as the following. If un converges to u and vn converges to v in a pre-Hilbert space H, with the norm defined as before-- so now we have a norm on a pre-Hilbert space, so we can define convergence, since it's a normed space-- then un inner product vn converges to u inner product v. OK. All right, so why is this? So in the previous proof, we really didn't use the full strength of the Cauchy-Schwarz inequality. We could have just gotten by having proven only that the real part of the inner product is, in absolute value, less than or equal to the product of the norms of u and v. Here we'll actually use the full Cauchy-Schwarz inequality. So the proof is quite simple. If un converges to u and vn converges to v-- i.e., let me just spell this out for you, the norm of un minus u converges to 0 and the norm of vn minus v converges to 0 as n goes to infinity-- then we'll use the squeeze theorem to show that un inner product vn converges to u inner product v. So I have to show that the absolute value of un inner product vn, minus u inner product v, goes to 0. We're not in 18100-- typically in 18100, when I first taught the squeeze theorem, I always included the lower bound.
But it should be clear that this is always bigger than or equal to zero, since it's the absolute value of a complex number. OK, so this is equal to the absolute value of un minus u inner product vn, plus u inner product vn minus v. By the triangle inequality now, just for the modulus or absolute value of complex numbers, that's less than or equal to the absolute value of un minus u inner product vn, plus the absolute value of u inner product vn minus v. And this is less than or equal to, by the Cauchy-Schwarz inequality, the norm of un minus u times the norm of vn, plus the norm of u times the norm of vn minus v. And now the vns are converging to v, and therefore the norms of the vns are converging to the norm of v. So recall the simple fact that if vn converges to v, then the norm of vn converges to the norm of v, OK. All right. And, again, let me add-- one can prove, as you do in real analysis, the following reverse triangle inequality: if I have two vectors in H-- and this works in any normed space, not necessarily a pre-Hilbert space-- the absolute value of the difference in norms is less than or equal to the norm of the difference, OK. So the norms of the vns form a convergent sequence of real numbers, so it must be bounded. So I can say that's less than or equal to the norm of un minus u times a constant-- let's use the letter K, the sup over n of the norms of the vns-- plus the norm of vn minus v times the norm of u. Now, this is something converging to 0 times a fixed number, and this is something converging to 0 times a fixed number. And therefore, this goes to 0 as n goes to infinity. And therefore, the thing which we started with-- so this quantity here, in absolute value-- is less than or equal to something converging to zero, and it's also non-negative. So I get the conclusion by the squeeze theorem, i.e. OK, so on a pre-Hilbert space, we can define a norm using the inner product, and this inner product is continuous with respect to this norm, OK. Because once we have a norm, we have convergence.
So we can talk about-- once we have a norm, we have a notion of distance, so we can talk about convergence of sequences and things like that. So now we have pre-Hilbert spaces. What is a Hilbert space? This is simply a pre-Hilbert space which is complete with respect to this norm. So a Hilbert space H is a pre-Hilbert space which is complete with respect to the norm, which again, remember, was defined as-- the norm of a vector is equal to the inner product of the vector with itself raised to the 1/2 power. All right, so this is the new terminology. And as we'll see, or at least we'll explicitly spell out in the form of a theorem-- but we will see probably by the end of this lecture-- a reasonable Hilbert space is really one of two things. So let me first give the basic examples of Hilbert spaces. For example Cn, the set of n-tuples of complex numbers, where the inner product of two vectors z and w is equal to the sum, j equals 1 to n, of zj times the complex conjugate of wj. OK. So this is an example of a Hilbert space, a finite dimensional one, meaning it's a vector space of finite dimension. The other example we have is the space little l2, which was the space of sequences ak such that each ak is a complex number and the sum of the absolute values of the ak squared is finite. This is a Hilbert space as well. What's the inner product of two elements of little l2? It's the sum from k equals 1 to infinity of ak times the complex conjugate of bk. OK. All right. So this is pretty clear. But note that the norm I get from this inner product is simply the little l2 norm, right? OK. So these are two basic examples of a Hilbert space. We will in fact show that every separable Hilbert space can be mapped, in an inner product preserving and therefore length preserving way-- that's usually called an isometric isomorphism-- to either Cn or little l2.
So these two kinds of spaces-- if one wants to go about categorizing all of the possible Hilbert spaces, as far as separable Hilbert spaces go, which are the only reasonable ones-- of course, one can come up with wild examples of Hilbert spaces which are not separable-- then these are the only two. I shouldn't say two, because this one is indexed by its dimension. But these are the only two types that you come up with: either a finite dimensional Cn, or it's isometric to little l2, OK. So-- but I will still write down another example, since we went to all that work. If E is a measurable subset of R, then big L2 of E-- remember, this was the space of measurable functions f from E to C such that the integral of f squared over E is finite-- this is a Hilbert space. What's the inner product? The inner product of two elements f and g in L2 of E is just the analog of little l2, where we replace the sum with an integral: the Lebesgue integral over E of f times the complex conjugate of g. OK, now, I wrote down capital L2, I wrote down little l2. But what about the other little lp and big LP spaces-- are any of those Hilbert spaces? So, of course, if I define the inner product in this way, as I did before, then that only induces the little l2 and the big L2 norm. But is there perhaps some other kind of magic inner product out there that I can put on little lp or big LP so that I would get out the little lp or big LP norm, when I define the norm according to how I've been doing it? So the question at hand is: are the other little lp or big LP spaces also Hilbert spaces? All right, so again, it's clear that if I were to define the inner product in the way I did in the two examples, then that is only going to give the l2 norm. I'm asking now: is there some magical inner product I can define on these spaces that spits out the little lp or big LP norm? So the answer is no.
And there's, in fact, a way to determine whether or not a space is a Hilbert space. Because if you think about it, the way we've come about introducing what a pre-Hilbert space and a Hilbert space are, we first had an inner product in our hands, and then we defined a norm. Well, if that's the way you're building your space, then you know automatically that it's a pre-Hilbert space, because you had an inner product first and then you defined the norm second. But suppose the data you're given is some norm on a space. When can you determine if that norm comes from an inner product? That's the question underlying what I wrote on the board. If I have a normed space-- so my initial data is the norm-- when can I tell that that norm comes from an inner product? And so you can prove, basically by direct calculation, the following parallelogram law. If H is a pre-Hilbert space, then for all u, v in H, if I take the norm of u plus v squared and I add the norm of u minus v squared, this is equal to twice the quantity norm of u squared plus norm of v squared. So this is a condition that is stated purely in terms of the norm. Moreover, if H is a normed space satisfying star, then H is a pre-Hilbert space. OK, in other words, although we defined pre-Hilbert spaces initially in terms of an inner product, in fact, you can say a normed space is a pre-Hilbert space if and only if it satisfies this parallelogram law. So if what you have in your hands is just the norm, then as long as that norm satisfies this identity, that norm can be derived from an inner product. So using this theorem, you can check that the answer is no except for p equal to 2-- in other words, you can come up with u and v so that this identity is not satisfied when p is not equal to 2. OK. OK, so now we have the notion of a Hilbert space, where the space is complete with respect to this norm given in terms of an inner product.
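The parallelogram law gives a concrete test. A minimal numerical sketch (my own, not from the lecture) checks it for the p equals 2 norm and exhibits its failure for the p equals 1 norm on two simple vectors, which is one way to see that only p equals 2 among the lp norms comes from an inner product.

```python
import numpy as np

# Sketch: the parallelogram law ||u+v||^2 + ||u-v||^2 = 2(||u||^2 + ||v||^2)
# holds for the p = 2 norm but fails for the p = 1 norm.
def pnorm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def parallelogram_gap(u, v, p):
    # left-hand side minus right-hand side; zero iff the law holds
    lhs = pnorm(u + v, p) ** 2 + pnorm(u - v, p) ** 2
    rhs = 2 * (pnorm(u, p) ** 2 + pnorm(v, p) ** 2)
    return lhs - rhs

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(parallelogram_gap(u, v, 2))  # ~0 (up to rounding): law holds for p = 2
print(parallelogram_gap(u, v, 1))  # 4.0: the l1 norm is not an inner product norm
```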
Now we have an inner product, so we can start talking about vectors being orthogonal to other vectors, or orthonormal sets-- when I first started lecturing about this stuff, I was already using the terminology of this thing being orthogonal to that thing. But of course, if you don't remember what those words mean from linear algebra, I'll quickly remind you. So suppose we're in a pre-Hilbert space. We say that two elements u and v are orthogonal if their inner product is zero. If I want to write this in symbols instead of words, I'll write u perp v, OK. So throughout, H is going to be a pre-Hilbert space if I don't actually write it. If H is a pre-Hilbert space, consider a subset, which I'll denote by e sub lambda, for lambda in capital lambda-- so just some subset indexed by some indexing set capital lambda. We say this set is orthonormal if, for all lambda, e sub lambda has unit length, and lambda not equal to lambda prime implies that e sub lambda and e sub lambda prime are orthogonal. OK. So maybe this notation scares you, because what's this indexing set? Typically we just use the natural numbers. So let me just make that remark-- although I'll make a few remarks in general about orthonormal sets that are not necessarily indexed by the natural numbers, we'll mainly be interested in a finite set or a countably infinite set. OK, so although an orthonormal subset of H could be a very crazy type of subset, mainly we're going to be interested only in finite or countably infinite orthonormal sets. So what are some examples? OK, the simplest example: the vectors 1, 0 and 0, 1 form an orthonormal subset of C2. And, using notation from before, the vectors 1, 0, 0 and 0, 0, 1 form an orthonormal subset of C3, let's say.
If I denote by e sub n the sequence consisting of zeros except for a 1 in the nth spot, which is an element of little l2, then this collection is an orthonormal subset of little l2, OK. One other example: let's look at the functions 1 over square root of 2 pi times e to the inx, and let's think of these as elements of L2 of minus pi to pi. Then this is an orthonormal subset of-- I might write O-N instead of writing out orthonormal, but this is an orthonormal subset of L2. If I take the inner product of two of these guys-- let's first consider whether m equals n or not-- I take the inner product of e to the imx with e to the inx, meaning e to the imx times the complex conjugate of e to the inx. That's the inner product on big L2, and this is equal to 1 over 2 pi times the integral from minus pi to pi. Now if m equals n, I just get the absolute value of e to the inx squared, and the absolute value of e to the inx is 1. And therefore, I would just get 1 here, dx over 2 pi, which gives me 1. But now suppose they're not equal. So I get e to the i m minus n x, dx. And now the integral of this creature-- so here, if I have e to the iy for y a real number, this is by definition cosine of y plus i sine y. Now you can check that the fundamental theorem of calculus still holds for e to the i m minus n x, and an antiderivative is e to the i m minus n x over i times m minus n. And e to the i m minus n x is 2 pi periodic. So when I evaluate it at minus pi, I get the same value as if I evaluated it at pi. So this equals 0. OK. So that's an orthonormal subset of big L2. So this collection-- and I'm calling them vectors, but this collection of elements in big L2-- is still countable, even though I'm indexing it by the integers rather than the natural numbers. OK, so we have the following.
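The orthonormality computation above can also be checked numerically. The following sketch (my own illustration; the grid resolution is an arbitrary choice) approximates the L2 inner products of the normalized exponentials by Riemann sums over one full period:

```python
import numpy as np

# Sketch: e_n(x) = e^{inx} / sqrt(2*pi) form an orthonormal family in
# L2(-pi, pi): <e_m, e_n> is ~1 when m = n and ~0 otherwise.
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

def e(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

def inner(f, g):
    # left Riemann sum approximating the integral of f * conj(g);
    # the last grid point is dropped so we cover exactly one period
    return np.sum(f[:-1] * np.conj(g[:-1])) * dx

print(abs(inner(e(3), e(3))))   # ~1
print(abs(inner(e(3), e(-2))))  # ~0
```

The approximation is essentially exact here: a uniform-grid sum of e^{ikx} over one full period vanishes by the discrete geometric-series identity, mirroring the antiderivative argument in the lecture.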
Now most of what I'm going to say with regard to countable orthonormal sets still carries through to possibly uncountable orthonormal sets, where now an infinite sum over an uncountable number of elements has to be defined in a precise way. But I will really just stick mostly to the countable case, and if you're interested, you can always look that stuff up. So what I haven't said is whether or not the LP spaces-- let's say over an interval a, b or over R; let's stick to either a closed and bounded interval or R-- are separable. Now, why are these spaces separable, meaning they have a countable dense subset, including big L2? The reason is the following. What you proved in the assignment is that the continuous functions are dense in big LP for p from 1 up to, but strictly less than, infinity. OK, they're not dense in L infinity. So as long as you stay away from that, they're dense, OK. So continuous functions are dense in LP. Now, what's one way to approximate a continuous function? Going back to your introductory analysis class, hopefully you covered what's called the Weierstrass approximation theorem, which says that every continuous function can be approximated uniformly on a closed and bounded interval by a polynomial, OK. So that shows that polynomials are dense in all the LP spaces-- of course, not L infinity. Now how do you go from the set of polynomials, which is dense in LP but uncountable, to a countable dense subset? Make everything rational. OK, the set of polynomials with rational coefficients is, in fact, countable. And you can approximate every polynomial with real coefficients on a closed and bounded interval by a polynomial with rational coefficients. It's not too difficult to believe.
And therefore, the polynomials with rational coefficients are dense in the LP spaces, as long as I'm not in L infinity. And therefore all the LP spaces are separable. All right, now the little lp spaces are also separable-- except for little l infinity, which is not separable. First off, a dense subset of, let's say, little l2-- let's make things definite-- is the subspace consisting of all sequences which terminate after some entry. In other words, they're 0 after that, OK. Convince yourself that this subspace of all finitely terminating sequences is dense in little lp for every p between 1 and infinity, not equal to infinity. So finitely terminating sequences are dense in little lp. Unfortunately, that's again a subspace, so it's going to be uncountable. So how do I get a countable thing again? I replace everything by rational numbers. So I can approximate every finitely terminating sequence of real numbers by a finitely terminating sequence of just rationals, by choosing the rationals very close to those real numbers. And I didn't say this explicitly a minute ago: we still have the density of the rationals in the reals. For every real, I can find a rational very close to it. So this is the thinking that goes on. But now the set of all finitely terminating sequences with rational coefficients-- this is a countable set. And that countable set is dense in little lp. And therefore little lp is separable, as long as p is between 1 and infinity, excluding infinity. OK, so I said at the beginning that we're going to be mainly interested in separable spaces, without actually saying why little l2 and big L2 are separable. But I just gave you the argument by word of mouth now instead of actually writing it down. OK, so we have the following Bessel's inequality for countable orthonormal subsets.
So if e sub n is a countable-- meaning it's either finite or countably infinite-- orthonormal subset of a pre-Hilbert space H, then for all u in H, if I look at the sum of the squares of the absolute values of u inner product e sub n, this is less than or equal to the norm of u squared, OK. So our discussion here of orthonormal subsets is taking place within a pre-Hilbert space. We don't need Hilbert spaces to talk about these concepts. But later, when we're in a Hilbert space and have a certain orthonormal subset, it will be important that we're in a Hilbert space. OK, so for the proof, let's do the finite case first. So suppose I have a finite orthonormal subset of H-- or I'll often say a finite collection of orthonormal vectors in H-- e1 up to e capital N, with O-N standing in for orthonormal. Then let me just record a few identities, which are pretty easy to verify. Take the sum from n equals 1 to capital N of u inner product en, times en, and compute the norm of this thing squared. This is the inner product of the sum with itself-- let's use a different index, m, for the second copy, with the understanding that both run from 1 to capital N. So I get a double sum over n and m of u inner product en, times the complex conjugate of u inner product em, times the inner product of en with em. These inner products with u are numbers: the one in the first entry just comes out, and the one in the second entry gets hit with a complex conjugate when it comes out. Now the inner product of en with em is 0 when n does not equal m, and it is equal to 1-- because it's equal to the norm of en squared-- when n equals m. So all I pick up from this double sum, which is just a finite double sum, is the terms where n equals m, with n going from 1 to capital N. And therefore, this is equal to the sum of u inner product en squared, OK. And that's one formula I want to have. Another one is what happens if I take the inner product of u with the sum from n equals 1 to capital N.
And maybe you recognize what this sum here actually is-- I'll say so in a minute. So the inner product of u with the sum from n equals 1 to capital N of u inner product en, times en, is equal to the sum from n equals 1 to capital N of u inner product e sub n times the complex conjugate of u inner product e sub n-- the number in the second entry comes out with a complex conjugate-- so this is again the sum of u inner product en squared. OK. And therefore, zero is less than or equal to the norm of u minus the sum from n equals 1 to capital N of u inner product en, times en, all squared. Now if you remember back from linear algebra or from calculus: if I have orthonormal vectors and I have a vector u, this sum is nothing but the projection of u onto the span of those orthonormal vectors. So what I'm looking at here is, if you like, the norm of the part of u that's orthogonal to these finitely many vectors. OK, so this thing is bigger than or equal to zero. We use that formula for how to compute the norm of something plus something. And this is equal to the norm of u squared, plus the norm of the sum from n equals 1 to capital N of u inner product en times en, squared, minus 2 times the real part of the inner product of u with that sum. OK, and now we know what all of these things are. This norm squared is equal to the sum of u inner product en squared, as is this inner product; and the real part equals the same thing, because it's a real number. Since this comes with a 2, I cancel one of those. So I get the norm of u squared minus the sum from n equals 1 to capital N of u inner product en squared, which is exactly what I wanted to prove for the finite case. The infinite case then follows from the finite case by letting capital N go to infinity. So for the infinite case, suppose en, n going from 1 to infinity, is an orthonormal subset of H. Then we know that for all capital N, the sum over n equals 1 to capital N of u inner product en squared is less than or equal to the norm of u squared. So I can just send capital N to infinity to get that the sum from n equals 1 to infinity of u inner product en squared is less than or equal to the norm of u squared. OK. OK.
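Bessel's inequality is easy to observe numerically. A small sketch of my own (the orthonormal set is built via a QR factorization rather than by hand, and all sizes are arbitrary choices):

```python
import numpy as np

# Sketch of Bessel's inequality: for an orthonormal set {e_n} and any u,
# sum |<u, e_n>|^2 <= ||u||^2, with equality only when u lies in the
# span of the e_n (the sum is the squared norm of the projection).
rng = np.random.default_rng(1)

# Build 4 orthonormal vectors in C^10 as the columns of Q from a QR
# factorization of a random complex matrix.
A = rng.normal(size=(10, 4)) + 1j * rng.normal(size=(10, 4))
Q, _ = np.linalg.qr(A)      # columns of Q are orthonormal
u = rng.normal(size=10) + 1j * rng.normal(size=10)

coeffs = Q.conj().T @ u     # the inner products <u, e_n>
bessel_sum = np.sum(np.abs(coeffs) ** 2)
norm_sq = np.sum(np.abs(u) ** 2)
print(bessel_sum <= norm_sq + 1e-12)  # True
```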
So orthonormal subsets we can define as a collection of vectors that have unit length and are mutually orthogonal to each other. Now just any old orthonormal subset is not really the most useful thing if we're trying to study the entire Hilbert or pre-Hilbert space H, because we may miss something if we leave out certain orthonormal vectors. A more useful type of orthonormal set is a maximal orthonormal set, which is defined as follows: an orthonormal subset e lambda, lambda in capital lambda, of a pre-Hilbert space H is maximal-- so again, if having a possibly uncountable collection of orthonormal vectors indexed by some indexing set makes you uncomfortable, replace this with e sub n, where n is going from 1 to capital N, or n is going from 1 to infinity, so a countable collection, if you like. But I'm stating this so that you know that something more general is true. So this is maximal if the following holds: if u is in H and u is orthogonal to everything in this orthonormal subset, this implies that u is 0. OK. So an example, of course-- you can check that the collection from before is a maximal orthonormal subset of C2. A non-example-- you can maybe see this coming-- is the one we had a minute ago in C3. This is not maximal, since there's a vector that has inner product zero with both of those vectors but is not zero: writing it loosely, the vector 0, 1, 0 is orthogonal to both of them, but it's not 0. Maximal means if you're orthogonal to everything in your collection, then you have to be zero. But this is non-zero, and it's orthogonal to both of these two vectors. Another example, again with the notation from before, where e sub n is the sequence that is 0 except for the nth spot, where it is 1: this is a maximal orthonormal subset of little l2. OK.
Now, what we're going to see very shortly is that if we have a countably infinite maximal orthonormal subset of a Hilbert space, then that set serves the same kind of purpose that an orthonormal basis does in linear algebra. I mean, if you look here, this orthonormal set in C2 is already maximal, and it also forms a basis for C2. The one in C3 was not maximal, and you see it doesn't form a basis for C3. So maximal is going to give us a condition which is equally useful as being a basis. But it won't be a Hamel basis-- these subsets won't be a Hamel basis in the sense that every vector can be written as a finite linear combination of the elements of a maximal orthonormal subset. But what is true is that we can write every vector as a possibly infinite sum involving the maximal orthonormal subset, which is, in most cases, just as good if you want to use it, all right. OK, so first off-- does every pre-Hilbert space have a maximal orthonormal subset? So let me state this as a theorem. In fact, I'll state two theorems. The first is: every non-trivial pre-Hilbert space-- I should say non-trivial because the space could be just the 0 vector-- has a maximal orthonormal subset, whether it's separable or not. And the way you prove this is using-- so I'm not going to give a proof of this; I'm going to give you a proof of something a little less strong but about as useful as we'll need. One proves this by using Zorn's Lemma, taking the set that you're going to put a partial order on to be the collection of orthonormal subsets, ordered by inclusion. And then one can do a Zorn's Lemma argument and apply Zorn's Lemma to obtain a maximal orthonormal subset. But that's kind of hands off. And maybe that scares you a little bit, because Zorn's Lemma is equivalent to the axiom of choice.
So if you don't like using the axiom of choice, maybe you have a problem with using it to construct a maximal orthonormal subset of a pre-Hilbert space. But we can actually do this by hand if the pre-Hilbert space is separable. So this is the theorem we'll actually prove: every non-trivial separable pre-Hilbert space-- separable meaning it has a countable dense subset-- has a countable maximal orthonormal subset. As I said, these are the main types of orthonormal subsets we'll be interested in, just because defining infinite sums is easier to do over a countable index than over an uncountable index. Anyways, we're going to prove this and actually construct the subset essentially by hand, using the process that's the name of this little section, the Gram-Schmidt process. If you remember from linear algebra, if you have a collection of vectors, you can always find an orthonormal collection of vectors that spans the same subspace as the one spanned by the original set of vectors. OK. And that's what we'll do. So since H is separable, let v1, v2, v3, and so on be a countable dense subset of H. And this is a non-trivial pre-Hilbert space, so we can always make sure that the first one is a non-zero vector. OK. You know what countable means; dense, remember, means that for any element of H and any epsilon, there exists an element from this collection that's within epsilon of that vector from H. So now I'm going to make the following claim-- this is essentially the Gram-Schmidt process-- which we'll prove by induction. For all n, a natural number, there exists another natural number m of n and an orthonormal subset e1 up to e m of n such that the following is true. First, the span of e1 up to e m of n equals the span of v1 up to vn. And you see n is changing; so maybe each time I change n, I get a different orthonormal subset, or a wildly different orthonormal subset, from the one the integer before it.
So the property of these subsets is that at each step I'm simply adding a vector or not: e1 up to e m of n-- so this collection-- is equal to the previous collection, union either the empty set, if vn is in the span of v1 up to vn minus 1, or some new vector e of m sub n. And I'll tell you what e of m sub n is otherwise. OK, so what I'm saying here is that I have this countably infinite list of v's, and for each n I can come up with a finite orthonormal subset that has the same span as v1 up to vn. And at each stage, all I do is add a vector or not, depending on whether the next v is in the span of the previous ones or not. OK, I hope that's clear. OK, so I prove this by induction. So, proof of claim: this is by induction. Let's do the base case n equals 1. We take e1 to be v1 over the length of v1. All right, so now we're started; now we've got our first vector in this list that we're building up. Inductive step: so let's call what I want to prove star. So suppose star-- so this whole claim here, not just a and b, but the whole claim-- holds for n equals k. And now I want to prove it holds for n equals k plus 1. So what kind of vector do I need to add to the previous collection of vectors to now span v1 up to vk plus 1? OK. If vk plus 1 is in the span of v1 up to vk, then the span of e1 up to e m of k equals the span of v1 up to vk, which-- because vk plus 1 is in this span-- is also equal to the span of v1 up to vk plus 1. OK, and therefore this case is handled, and we're in the spot where we don't add anything to the previous collection. This proves what we wanted for n equals k plus 1 by not adding anything. So that was the case that vk plus 1 is in the span of v1 up to vk. So now let's do the more interesting case: suppose vk plus 1 is not in the span of v1 up to vk.
I define wk plus 1 to be vk plus 1 minus its projection onto the previous list of orthonormal vectors: vk plus 1 minus the sum from j equals 1 to m of k of the inner product of vk plus 1 with ej, times ej. Then first note that this vector wk plus 1 cannot be zero. Otherwise, vk plus 1 would be equal to that sum, and therefore vk plus 1 would be in the span of the ej's for j between 1 and m sub k, which is equal to the span of the vj's for j from 1 up to k. But we're assuming vk plus 1 is not in that span. I should say: the span of v1 up to vk is equal to the span of e1 up to e m sub k-- that's the inductive hypothesis, right? So this vector wk plus 1 does not equal 0, and I define e of m sub k plus 1 to be the vector wk plus 1 over its length. Now, this is a unit vector. And I now claim this is orthogonal to e1 up to e of m sub k. If I take e of m sub k plus 1 inner product with e sub l, this is equal to, simply by definition, 1 over the length of wk plus 1, times the inner product with e sub l of vk plus 1 minus the sum from j equals 1 to m sub k of (vk plus 1, ej) ej. So remember, this sum is supposed to be the projection onto e1 up to e m sub k, and vk plus 1 minus that is supposed to be orthogonal to those guys; we'll see that this ends up being 0. Expanding, I get 1 over the length of wk plus 1, times the inner product of vk plus 1 with e sub l, minus the inner product of the sum with e sub l-- that is, minus the sum from j equals 1 to m sub k of (vk plus 1, ej) times the inner product of e sub j with e sub l. Now, the inner product of e sub j and e sub l is zero unless j equals l, and 1 if j equals l. So I only pick up the j equals l part of this sum when this inner product hits e sub l; in particular, I only pick up the coefficient in front, since these have unit length. So I get (vk plus 1, e sub l) minus (vk plus 1, e sub l), which equals 0. So this vector does the job. I would write a little bit more, but I'm running short on time. OK. So we proved the claim.
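The inductive step above is exactly the Gram-Schmidt loop: subtract the projection onto the e's built so far, and add a new unit vector only when what's left is non-zero. Here is a short numerical sketch of that process (my own illustration with made-up vectors, not the lecture's notation):

```python
import numpy as np

def gram_schmidt(vs, tol=1e-12):
    """Gram-Schmidt as in the claim: walk through v1, v2, ... and at each
    step either add one new orthonormal vector or nothing, so that the e's
    always span the same subspace as the v's seen so far."""
    es = []
    for v in map(np.asarray, vs):
        # w = v minus its projection onto the e's constructed so far
        # (np.vdot(e, v) conjugates e, giving the coefficient (v, e)).
        w = v - sum(np.vdot(e, v) * e for e in es)
        # w == 0 exactly when v was already in the span: add nothing.
        if np.linalg.norm(w) > tol:
            es.append(w / np.linalg.norm(w))
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([2.0, 2.0, 0.0]),   # already in the span of v1: skipped
      np.array([1.0, 0.0, 1.0])]
es = gram_schmidt(vs)
print(len(es))  # 2: the second v added nothing

# Orthonormality: the Gram matrix of the e's is the identity.
G = np.array([[np.vdot(a, b) for b in es] for a in es])
print(np.allclose(G, np.eye(len(es))))  # True
```

The `if` branch is the dichotomy in the claim: either the next v lies in the span of its predecessors (add the empty set) or a new orthonormal vector e of m sub n is produced.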
And now let's use this to conclude that the collection of all of these e's is maximal. So we let S be the set of all the e's we constructed. Then S is an orthonormal subset of H. Now, I may not be adding any more vectors after a certain point, and therefore just have a finite collection; that's also possible. But I may also have a countably infinite collection. So this is a countable orthonormal set, by the construction. And we now show S is maximal. All right. So far we haven't used anything about the nature of this subset of H; all we used is that it's countable, so we could do this step by step, constructing orthonormal vectors whose span equals the span of each finite collection of vectors. Now, to show it's maximal-- this is where we use the fact that this collection of v's is dense in H. So suppose u is in H and, for all l, the inner product of u with e sub l equals 0. Here either S is a finite set, so l is between 1 and some n-- that is, S is equal to e1 up to e m sub n for some n-- or it's a countable set, and l is going from 1 to infinity. OK. Since the vj's are a dense subset of H, we can find a sequence of elements from this collection, v of j sub k, k equals 1 to infinity, such that v of j sub k converges to u as k goes to infinity. OK, now by property a-- the fact that this span is equal to that span-- v of j sub k is in the span of e1 up to e m of j sub k. And therefore, by Bessel's inequality and the fact that u is orthogonal to all of these orthonormal vectors, I can control the norm of v of j sub k squared. Since v of j sub k is in this span, its norm squared is in fact equal to the sum of the squares of the coefficients I get from taking inner products with the e's: if I have a vector in the span of a finite collection of orthonormal vectors, then the norm squared is equal to the sum of the squares of the coefficients, just like in Rn.
And since u is orthogonal to each of these e's, I can write each coefficient (v of j sub k, e sub l) as (v of j sub k minus u, e sub l). And by Bessel's inequality, the sum of these things squared is less than or equal to the norm of v of j sub k minus u squared, and this goes to 0 as k goes to infinity, since the v of j sub k converge to u. But we started off with the norm of v of j sub k squared, so that converges to zero. And therefore u, which is the limit of these, must be zero, proving that this is a maximal orthonormal subset. OK. So next time we'll prove that these maximal countable orthonormal subsets in fact form a pretty good analog of the bases that you find in finite-dimensional linear algebra. And we'll stop there.
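The two facts the proof leans on, Bessel's inequality for a proper orthonormal subset and the "just like in Rn" equality for a vector in the span, can be checked numerically. A small sketch of my own (not from the lecture), using a random orthonormal set in C^5:

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns of Q form an orthonormal basis of C^5 (via QR of a random matrix).
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Q, _ = np.linalg.qr(A)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# |(u, e_j)|^2 for the first 3 vectors (a proper orthonormal subset) and all 5.
coeffs3 = [abs(np.vdot(Q[:, j], u)) ** 2 for j in range(3)]
coeffs5 = [abs(np.vdot(Q[:, j], u)) ** 2 for j in range(5)]
norm_sq = np.linalg.norm(u) ** 2

print(sum(coeffs3) <= norm_sq + 1e-9)      # True: Bessel's inequality
print(np.isclose(sum(coeffs5), norm_sq))   # True: equality in the full span
```

The second line is the finite-dimensional identity used in the proof: for a vector in the span of finitely many orthonormal vectors, the norm squared equals the sum of the squared coefficients.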
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_21_The_Spectrum_of_SelfAdjoint_Operators_and_the_Eigenspaces_of_Compact_SelfAdjoint.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK, so we're going to continue with our discussion of the spectrum of a bounded linear operator. So let me just recall from last time the definition: if A-- and throughout, H is a Hilbert space-- if A is a bounded linear operator, then the resolvent set of A is the set of all lambda in the complex numbers such that A minus lambda times the identity, which I write as A minus lambda, is bijective, meaning it's 1 to 1 and onto, which implies by the open mapping theorem that it has a bounded inverse-- or, equivalently, such that it has a bounded inverse. And the spectrum of A was the complement of the resolvent set within the set of complex numbers. And so the spectrum is supposed to be a generalization of what, in finite dimensions, were called the eigenvalues. And recall we called lambda, an element of the spectrum, an eigenvalue if there exists a u not equal to 0 such that A minus lambda applied to u is 0. In other words, A minus lambda is not injective. So the reason for such a lambda being in the spectrum-- for a number to be called an eigenvalue-- is that this operator, A minus lambda, has non-trivial null space. In other words, there is a non-zero u so that Au equals lambda u. And we call this u an eigenvector. And we saw last time examples of an operator that has infinitely many eigenvalues and eigenvectors, and also an example of a bounded linear operator which has no eigenvalues and eigenvectors, unlike the case in finite dimensions, where the spectrum is exactly the set of eigenvalues of an operator. Now, we also proved at the end of last time-- so what we could say about the spectrum is that it's a closed set. And it's contained within the ball of radius norm A in the complex numbers, which means it's a compact set.
And what we could say, by taking complements about the resolvent set, is that it's an open set that contains the exterior to a ball of radius norm of A in the complex numbers. And that's about all we can say about the spectrum in general for now. But we can say quite a lot about the spectrum of self-adjoint operators. And then we can give a pretty complete picture about the spectrum for compact operators-- self-adjoint compact operators. But let's first look at self-adjoint operators. So at the end of last time, we proved that if I have a self-adjoint-- and this is not related to the spectrum. If I have a self-adjoint bounded linear operator on a Hilbert space, then for all u, Au, u is a real number. And we could write the norm of A as this quantity sup u equals 1 of the absolute value of Au inner product u. All right, so now, we have the following theorem about the spectrum for self-adjoint bounded linear operators on a Hilbert space. So the first is that the spectrum is contained in the real number line. So the spectrum of A is contained in the real number line-- norm A-- or in this interval, minus norm A to norm A, which I'm viewing as a subset of the complex numbers, so just the line segment from minus norm A to norm A as a subset of the complex numbers. And the second is that one of these two endpoints has to be in the spectrum-- maybe both, but at least one. At least one of plus or minus norm A is in the spectrum of A. OK, so to establish one, we already know-- since the spectrum of A is contained in those complex numbers with modulus less than or equal to the norm of A, we just need to show that the spectrum is contained in the real number line. Then it must be contained in this interval, since it's contained here. OK, so we'll show that anything off the real number line lies in the resolvent. That's how we'll go about this. So we'll show that if lambda equals s plus it with t not equal to 0, then lambda is in the resolvent set of A. 
Now, suppose lambda has this form. Then A minus lambda is equal to A minus s minus it, which I can write as A-tilde minus it, with A-tilde a bounded linear operator given by A minus s, which is also equal to its own adjoint because s is a real number. So I should have said t is not equal to 0 and s, t are real numbers. So A minus lambda I can write as A-tilde minus it, where A-tilde is A minus s, again a self-adjoint operator, all right? So if I can do an argument for A-tilde and show that A-tilde minus it is bijective, then I can conclude A minus s minus it is bijective. Why am I doing this? Because then I can just focus on the case that s is 0: A-tilde minus it is bijective if and only if A minus lambda is bijective. But instead of writing A-tilde over and over again, I'll just switch back to A. So I only really need to consider the case s equals 0. So rather than do the argument for A-tilde minus it, I'm just going to set s equal to 0 and start doing the argument for A minus it. OK, so let me just set out what we're going to prove: if A is self-adjoint, then A minus it is bijective for all t not equal to 0. So once I've proven this claim, then I've proven the first part of the theorem. OK, now, since Au inner product u is real-- by the result from last time-- if I take the imaginary part of A minus it applied to u, inner product u, the Au, u term contributes 0, and I get minus t norm u squared. This implies that, since t is non-zero, A minus it applied to u equals 0 if and only if u equals 0: because if this inner product equals 0, then its imaginary part equals 0, and therefore the norm of u has to be 0 since t is non-zero. So the null space of A minus it is just the zero vector. And therefore it is injective. It is 1 to 1.
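The identity behind this injectivity argument, that the imaginary part of ((A minus it)u, u) is exactly minus t times norm u squared when A is self-adjoint, can be sanity-checked in finite dimensions. A small sketch of my own (not from the lecture), using a random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2            # self-adjoint: A equals its adjoint
t = 0.7
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# The lecture's inner product (x, y) is linear in x and conjugate-linear
# in y, which in numpy is np.vdot(y, x) (vdot conjugates its first arg).
val = np.vdot(u, (A - 1j * t * np.eye(4)) @ u)    # ((A - it)u, u)

# (Au, u) is real for self-adjoint A, so the whole imaginary part comes
# from the -it term:
print(np.isclose(val.imag, -t * np.linalg.norm(u) ** 2))  # True
```

Since this imaginary part vanishes only when u does, A minus it has trivial null space for every non-zero real t, which is the point of the step above.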
All right, now we just want to show it's surjective. OK, so similarly, I can prove that the adjoint of this operator, which is, in fact, A plus it, is injective. Now, the orthogonal complement of the range of A minus it-- and I want to show the range equals H to show it's surjective-- is equal to the null space of the adjoint, the null space of A plus it, which is the zero vector. So I conclude that the closure of the range of A minus it, which is equal to the orthogonal complement of the orthogonal complement of the range, is the orthogonal complement of the zero vector, which is H. That was part of an assignment: if I have a subspace of a Hilbert space and I take the orthogonal complement of the orthogonal complement, I don't get back the subspace; I get the closure of the subspace. So I'll be done showing that A minus it is surjective if I can show that the range is closed, because then this will just say the range of A minus it equals H. So we just need to show now that the range of A minus it is closed. To show it's closed, we have to show that if we take a sequence of elements in the range converging to something, then that limit is, in fact, in the range. So suppose I have a sequence of elements un such that A minus it applied to un converges to an element v. So my goal is to show that v is in the range of A minus it. So we want to show v is in the range, and then we've shown that the range is closed, and we're done with the first part. So using this argument here, we're going to show that the un's, which a priori we don't know converge-- all we know is that the images of the un's converge-- we're going to show that the un's actually converge. And then that will essentially finish the proof.
Then we have that the absolute value of t times the norm of un minus um squared-- this is, by the calculation we did over here, equal to the absolute value of the imaginary part of A minus it applied to un minus um, inner product un minus um. And now, this is less than or equal to-- the absolute value of the imaginary part of a complex number is less than or equal to the absolute value of that complex number, which by Cauchy-Schwarz is less than or equal to the norm of A minus it applied to un minus um, times the norm of un minus um. And I started off with the absolute value of t, which is non-zero, times the norm of un minus um squared. So I get that the norm of un minus um is less than or equal to 1 over the absolute value of t, times the norm of A minus it applied to un minus A minus it applied to um. Now, A minus it applied to un is a convergent sequence; in particular, it's a Cauchy sequence. So given epsilon, I can find capital N so that the norm on the right-hand side is less than epsilon times the magnitude of t. And therefore, for all little n and m bigger than or equal to that capital N, this norm will be less than epsilon. Since that sequence is Cauchy because it's convergent, the previous estimate implies that the sequence un is Cauchy. And since we're in a Hilbert space, which means it's complete, we can find a limit of the un's: there exists a u in H such that un converges to u. And then we're done now. Since A is a bounded linear operator, A minus it applied to u is equal to the limit as n goes to infinity of A minus it applied to un. But remember, we assumed that this converges to v. And therefore, v is equal to something in the range of A minus it. v is in the range. And thus the range of A minus it is closed. And by this here, we conclude that the range of A minus it equals H.
So A minus it is bijective. And that concludes the proof of the first property we wanted to do. OK, for the second thing, we wanted to show that at least one of plus or minus norm of A is in the spectrum of A. Now, the norm of A is equal to the sup over norm of u equals 1 of the absolute value of Au inner product u. A supremum is always characterized by being an upper bound and by there existing a sequence, in the set of things you're taking the supremum of, converging to that supremum. So there exists a sequence of unit vectors un so that A un inner product un, in absolute value, converges to the norm of A. So in particular, A un inner product un has to converge to the norm of A or to minus the norm of A as n goes to infinity. All right, then what does this imply? This implies that, for at least one of these choices, A plus or minus norm of A applied to un, inner product un, converges to 0 as n goes to infinity, where here the plus or minus is chosen depending on whether this sequence goes to plus or minus the norm of A-- so the sign here would be the opposite of whichever sign the sequence converges to. I now claim that this property implies that the operator appearing here cannot be invertible. And therefore, whichever sign appeared here-- the minus corresponds to the plus sign, the plus corresponds to the minus sign-- at least one of plus or minus the norm of A is in the spectrum. So I claim that this property here implies that this operator is not invertible, and therefore one of those is in the spectrum. So suppose instead that A minus or plus the norm of A-- whichever one satisfies this-- is invertible. Then the un's all have norm 1. So 1 is equal to the norm of un. And I can write this as the norm of A minus or plus norm A inverse, applied to A minus or plus norm A-- because that composition is just the identity-- applied to un.
And this is less than or equal to the norm of the inverse times the norm of this quantity. And the norm of the inverse is a fixed number, while this quantity is converging to 0. So the right-hand side converges to zero. But the left-hand side is 1, right? I get 1 is less than or equal to 0. So that's a contradiction. Thus, A minus or plus the norm of A-- again, the minus or plus corresponds to which sign of the norm of A we had that sequence converging to-- is not invertible, which implies that at least one of plus or minus the norm of A is in the spectrum of A. OK, now we can, in fact, do a little bit better-- based on this argument, we can do a little bit better in bounding the spectrum of a self-adjoint operator than just the bound coming from the general theory. So what do I mean by that? If A is a self-adjoint bounded operator and a-minus is equal to the infimum over norm u equals 1 of Au inner product u, and a-plus equals the sup of Au inner product u, then two things. First, both of these numbers are in the spectrum of A. And second, the spectrum is contained in the line segment from a-minus to a-plus. So this is something of a tighter bound, because a-minus is always bigger than or equal to minus the norm of A, just by Au inner product u always being bounded below by minus the norm of A; and a-plus is always bounded above by the norm of A, since Au inner product u is always bounded above by the norm of A, so the sup is bounded above by it as well. So this is a tighter estimate than just the regular estimate that says the spectrum is contained inside the interval from minus norm A to norm A. And in fact, you get more information: not just that one of the endpoints has to be in the spectrum, but that both of these endpoints are in the spectrum. So the proof of this is just kind of a trick of using what we've done already.
So first, note that-- again, since the absolute value of Au inner product u is always less than or equal to the norm of A for all unit vectors-- this quantity here is always bounded below by minus the norm of A and bounded above by the norm of A. And therefore the infimum a-minus is bounded below by minus the norm of A: that's a lower bound for these quantities, so the infimum, being the greatest lower bound, is bigger than or equal to it. And the least upper bound a-plus is always less than or equal to the norm of A. So these are actual finite numbers, for one. OK, now, by the definition of a-plus or minus, there exist sequences of unit vectors un-plus or minus such that A applied to un-plus or minus, inner product un-plus or minus, converges to a-plus or minus. Now, by the argument we just gave with plus or minus the norm of A-- but now we have this property, i.e. A minus a-plus or minus applied to un-plus or minus, inner product un-plus or minus, converges to 0. Since I have this property, by the previous argument this implies that both a-plus and a-minus are in the spectrum of A, since we have, for each choice of plus or minus, a sequence of unit vectors so that this quantity here goes to 0. A minute ago, we could just assert that there was a sequence of unit vectors so that, for at least one choice of plus or minus the norm of A, we had this thing going to 0. But for these two numbers, because one is the inf and the other is the sup, we can always find unit vectors so that this quantity is converging to the sup, which is a-plus, and unit vectors so that this quantity is converging to the inf, which is a-minus. So by the previous argument, we get that both a-plus and a-minus are in the spectrum of A. Again, I just want to emphasize: before, we could only say that at least one choice of plus or minus the norm of A is in the spectrum. Here, we're saying both of these numbers are in the spectrum.
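In finite dimensions, a-minus and a-plus are exactly the smallest and largest eigenvalues of a Hermitian matrix, and the Rayleigh quotients (Au, u) over unit vectors fill the interval between them. A quick numerical sketch of my own (not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = (M + M.conj().T) / 2            # self-adjoint matrix on C^6
evals = np.linalg.eigvalsh(A)       # sorted real eigenvalues = the spectrum

# Sample (Au, u) over random unit vectors; these approximate the set whose
# inf is a-minus and sup is a-plus.
quotients = []
for _ in range(2000):
    u = rng.standard_normal(6) + 1j * rng.standard_normal(6)
    u /= np.linalg.norm(u)
    quotients.append(np.vdot(u, A @ u).real)  # (Au, u) is real for A = A*

# Every sampled quotient lies between the extreme eigenvalues, i.e. the
# whole spectrum sits inside [a-minus, a-plus]:
print(evals[0] - 1e-8 <= min(quotients) and max(quotients) <= evals[-1] + 1e-8)
```

Taking u to be a unit eigenvector for the smallest or largest eigenvalue attains the inf or the sup, which is the finite-dimensional shadow of "both a-plus and a-minus are in the spectrum."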
So now, what remains is to show that the spectrum is, in fact, contained in this interval from a-minus to a-plus. All right, so let little b be their midpoint, a-plus plus a-minus over 2, and let B equal A minus b times I. Now, little b is a real number, because it's the average of two real numbers. So capital B is the difference between A and a real number times the identity. So B is self-adjoint and a bounded linear operator on H. So by the previous theorem, we get that the spectrum of B is contained in the interval minus norm of B, norm of B. And it shouldn't take much thought to realize that if the spectrum of B, which is a shift of A by little b, is contained in this interval, then the spectrum of A is contained in the interval from b minus norm of B to b plus norm of B. So now, what's left is to compute the norm of B, all right? But this is not too difficult. We have that the norm of B is equal to the sup over norm u equals 1 of the absolute value of Bu inner product u. And now, let me plug in what B is: this is the sup over all norm u equals 1 of the absolute value of Au, u minus a-plus plus a-minus over 2. Now, here's the picture. Here's a-minus; here's a-plus. a-plus is the sup over all of these expressions where u has unit length; a-minus is the inf over all these expressions where u has unit length. a-plus plus a-minus over 2 is the point right in the middle of them. So what's the supremum of the difference between these numbers and the midpoint? Well, it's the distance from a-plus to the midpoint, which is equal to the distance from a-minus to the midpoint, which is a-plus minus a-minus over 2. And since that's the norm of B, when we plug that into what we had a minute ago, we conclude that the spectrum is contained in a-minus, a-plus. So as a simple corollary of what we've done, we have this nice little statement about when exactly a self-adjoint bounded linear operator is non-negative. So let A be a self-adjoint bounded linear operator on a Hilbert space.
Then for all u, Au inner product u is bigger than or equal to 0 if and only if the spectrum of A is contained in the non-negative numbers. So I'm not even going to write out the proof. I'm just going to talk my way through it. So let's suppose that Au inner product u is non-negative. Then this number a-minus is non-negative. And therefore, the spectrum is contained in a-minus, a-plus, which is the subset of the non-negative real numbers. On the other hand, suppose that the spectrum of A is contained in here. Then a-minus, which is in the spectrum, has to be in the set of non-negative real numbers. And therefore, Au inner product u always has to be non-negative, since a-minus is the inf over all of these. So now, we're going to move on to the spectral theory for not just self-adjoint operators, but self-adjoint operators that are also compact. Again, a natural example is given by the inverse of taking the second derivative along with requiring 0 at the endpoints, this operator I gave last time. That is a bounded self-adjoint-- or a compact self-adjoint operator. So all the spectral theory we developed for that applies. And the spectrum for that operator ends up being 1 over the eigenvalues corresponding to u-double prime equals lambda-- equals, say, mu times u with 0 at the endpoint. And you'll see that in the assignment. Or maybe I'll do it as an example. So now, we're moving on to spectral theory for compact self-adjoint operators, which is one of the most, again, complete things-- or class of operators we can say the most about when it comes to the spectrum. And I'll go ahead and give you a preview of what we can say about the spectrum for these operators, that it essentially consists of nothing but eigenvalues with the possible exception of 0 being an accumulation point of the eigenvalues. So what we'll prove is that the spectrum of a compact self-adjoint operator consists of the eigenvalues of this operator along with possibly 0. 
And 0 may or may not be an eigenvalue. If it's not an eigenvalue, then it's the limit of the eigenvalues. And in fact, implicit in that statement is that the spectrum is, in fact, countable for a compact self-adjoint operator. So why should we expect that, or why should we expect such a complete picture? In the end, we'll also prove that you can find a basis for H consisting entirely of eigenvectors of the operator A, which is, again, a generalization to infinite dimensions of what hopefully you saw in finite dimensions. But if you didn't, our proof will still apply to finite dimensions. So why should that then apply to compact self-adjoint operators, if you believe it for finite dimensions? Well, it's because, again, compact operators are the norm limit of finite rank operators, all right? And finite rank operators-- these just correspond to, basically, matrices. We know how to compute the eigenvalues of matrices. Finite rank operators could have a very large null space, meaning the eigenvalue 0 could have a very large eigenspace. But that's the point of why you might expect things to carry over from what you know in finite dimensions to the setting of compact self-adjoint operators. OK, so this is not so much a definition as just notation. If A is a bounded linear operator, I will denote by E lambda the null space of A minus lambda-- in other words, the subspace of eigenvectors with eigenvalue lambda, which, again, is the set of u in H such that A minus lambda applied to u equals 0. So first off, before we get to classifying the spectrum of a compact self-adjoint operator as basically consisting of eigenvalues along with 0, we'll first give some general properties of eigenvalues for a compact self-adjoint operator. So we have the following theorem: suppose A, with A-star equals A, is a compact self-adjoint operator.
Then a few things. First, if lambda not equal to 0 is an eigenvalue of A, then the dimension of E lambda-- the eigenspace, the linear subspace of all eigenvectors of A with eigenvalue lambda-- is finite. So for a given non-zero eigenvalue, the dimension of the eigenspace is finite. The second is that if I take two different eigenvalues, the corresponding eigenspaces are perpendicular to each other: if lambda 1 does not equal lambda 2, for eigenvalues of A, then E lambda 1 and E lambda 2 are orthogonal, or perpendicular-- every element in E lambda 1 is orthogonal to every element in E lambda 2 and vice versa. And finally, the set of non-zero eigenvalues of A is either finite or countably infinite. If it is countably infinite, i.e. it's given by a sequence lambda n, then the eigenvalues converge to 0. In particular, this implies that if I have a compact self-adjoint operator with infinitely many eigenvalues, then 0 is in the spectrum of this operator: the spectrum is a closed set, so it's closed under taking limits, and since these eigenvalues are in the spectrum, their limit has to be in the spectrum. All right, so proof of 1. Suppose I have a non-zero eigenvalue lambda and, towards a contradiction, that E lambda is not finite dimensional. Then, by the Gram-Schmidt process, there exists a countable collection un of orthonormal elements in E lambda: every element in the sequence has unit length, and it's orthogonal to every other element in the sequence. Now, since A is a compact operator and all of these have unit length, the sequence A applied to un is contained in a compact set, right? So it has a convergent subsequence, A applied to u n sub j. Then A u n sub j is Cauchy. But let's actually look at the difference between two of these in norm-- let's make it squared.
This is equal to-- because these are eigenvectors-- the norm of lambda u n sub j minus lambda u n sub k squared, which equals lambda squared times the norm of u n sub j minus u n sub k squared, which equals 2 lambda squared, since these are orthonormal. And that's a fixed number that's positive, because lambda is not equal to 0. So what does this imply? This implies that if I take any two elements in this subsequence, their distance squared is a constant equal to 2 times lambda squared. And therefore this sequence is not Cauchy, which is a contradiction. Something I forgot to say-- a restatement of the theorem; forgive me, it's the end of a long day-- is that eigenvalues have to be real for self-adjoint compact operators, or really for self-adjoint operators. So I could have included it earlier: the eigenvalues of a self-adjoint operator have to be real. Why is that? If lambda is an eigenvalue, it comes with an eigenvector u with norm 1 so that Au equals lambda u. Of course, it just has to be a non-zero u, but I can normalize it by dividing by its length. And therefore I get that lambda, which is equal to lambda times the norm of u squared-- since the norm of u squared is 1-- equals lambda u inner product u, which is equal to Au inner product u. Now I move A over to the other side, where it becomes A-star; but A-star is equal to A, so I get u inner product Au, since A is self-adjoint. And this is equal to u inner product lambda u. And remember, inner products are conjugate linear in the second entry, so this lambda pops out, but now complex conjugated. So we've shown that the complex conjugate of lambda is equal to the original number. So lambda has to be a real number. All right, so that proves part 1: the eigenvalues of a self-adjoint operator have to be real, and the eigenspaces-- which is what I have just started calling the E lambdas-- have to be finite dimensional for a compact self-adjoint operator.
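The reality of the eigenvalues can be checked numerically without any help from a symmetric eigensolver. A small sketch of my own (not from the lecture): build a random Hermitian matrix and feed it to a generic complex eigensolver, so the reality of the output is not built in by the routine.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2        # A = A*: self-adjoint

# np.linalg.eigvals is the generic (non-Hermitian) solver and returns
# complex numbers; self-adjointness forces the imaginary parts to vanish.
evals = np.linalg.eigvals(A)
print(np.allclose(evals.imag, 0.0))   # True
```

This mirrors the lambda-equals-conjugate-lambda argument above: nothing in the computation assumes realness, yet the imaginary parts come out (numerically) zero.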
OK, so now, let's show that distinct eigenspaces have to be orthogonal to each other. Suppose lambda 1 does not equal lambda 2. u1 is in E lambda 1. u2 is in E lambda 2. So now, what I'd like to show is that the inner product of u1 with u2 is equal to 0. And it's going to be a trick, kind of like I just did here. Lambda 1 times u1, u2, this is equal to lambda 1 u1, u2. This is equal to A applied to u1, u2. And now, I move A over to here because A is self-adjoint. And A applied to u2-- so u2 is in the second eigenspace. So this is equal to u1 lambda 2 u2. And because lambda 1 and lambda 2 have to be real numbers-- what we've done from the first part-- this lambda 2 comes all the way out and remains itself, no complex conjugate because it's equal to its complex conjugate. And so I started off with lambda 1 times the inner product of u1, u2. And I've ended up with lambda 2 u1 inner product with u2. And therefore, lambda 1 minus lambda 2 times the inner product of u1 minus-- or the inner product of u1 with u2 equals 0. And lambda 1-- remember, we're assuming lambda 1 and lambda 2 are non-zero-- or not equal. So this quantity here is non-zero. So I get that u1, u2 equals 0. And that's the-- nope, that's not the end. That's the end of number 2, but not the end of the proof of this theorem. All right, so we're going to prove the last thing, that the set of non-zero eigenvalues is either finite or countable, and that if I arrange them in a sequence, then the sequence converges to 0. OK, so just to have some notation running around-- capital lambda, let me let this denote those non-zero eigenvalues. All right, so what I'd like to claim-- or what I'm going to show is that if lambda n is a sequence of distinct elements-- or distinct eigenvalues, non-zero eigenvalues of A, then these converge to 0. So this gives us-- of course, so the set of non-zero eigenvalues may be finite. Fine. Suppose it's not, OK? Now, we're just in the setting that A has infinitely many eigenvalues. 
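The two facts just proved-- eigenvalues of a self-adjoint operator are real, and eigenvectors for distinct eigenvalues are orthogonal-- can be sanity-checked in finite dimensions, where a Hermitian matrix plays the role of a compact self-adjoint operator. This is my own numerical sketch, not part of the lecture; the matrix and its size are arbitrary choices.

```python
import numpy as np

# Finite-dimensional sketch (illustration only): a random Hermitian
# matrix stands in for a compact self-adjoint operator.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2              # self-adjoint: A* = A

eigvals, eigvecs = np.linalg.eigh(A)  # eigh is for Hermitian matrices

# 1) <u, Au> = lambda for a unit eigenvector u, and it must be real
for k in range(5):
    u = eigvecs[:, k]
    lam = np.vdot(u, A @ u)           # the inner product <u, Au>
    assert abs(lam.imag) < 1e-12
    assert np.isclose(lam.real, eigvals[k])

# 2) eigenvectors for distinct eigenvalues are orthogonal:
#    the Gram matrix of the eigenvectors is the identity
G = eigvecs.conj().T @ eigvecs
assert np.allclose(G, np.eye(5), atol=1e-12)
```

Note that `eigh` already assumes the input is Hermitian and returns real eigenvalues, so the direct check via the inner product is the more honest test of the argument in the lecture.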
If I can prove this claim, then I have proven two things at once: both that the set of non-zero eigenvalues is countable, and, if it is countably infinite, that the eigenvalues converge to 0, which is the last thing I want. So all I really need to show is that capital lambda is countable, using this claim. Now, why does the claim show that capital lambda is countable? If I define lambda sub capital N to be the set of non-zero eigenvalues which are bigger than or equal to 1 over N, this has to be a finite set. So my claim is that lambda sub N is finite for all N, which implies that capital lambda, which is the union of the lambda sub N's, is countable. So assuming the claim, why is each of these sets finite? Well, if lambda sub N were infinite, then I could pick out a sequence of distinct elements in lambda sub N that converges-- I could just take any sequence and then take a convergent subsequence, because that sequence is bounded between 1 over N and the norm of A. But since the elements are all bigger than or equal to 1 over N, that subsequence has to converge to something that's non-zero. And that would contradict the claim-- again, assuming the claim is true. We haven't proved it yet, all right? So again, from this claim, we can conclude that each of these sets is finite for all N, and therefore the set of non-zero eigenvalues is countable. 
And if it's countably infinite, then, again from this claim, we conclude that the eigenvalues must converge to 0 when I line them up in a sequence. So the whole proof is reduced to just proving this claim. OK, so to prove the claim, let u n be associated eigenvectors. So these have unit length, and for all n, A u n equals lambda n u n. We have eigenvalues, so we can find eigenvectors with unit length. Then the absolute value of lambda n is equal to the norm of lambda n times u n, which is equal to the norm of A applied to u n. So what I'm going to show is that A applied to u n converges to 0. If you like, this is the final claim that I need to prove. So this is claim 1, and claim 1 will follow from claim 2 by this little computation right here, where claim 2 is that the norm of A applied to u n-- again, the u n's are eigenvectors with unit length corresponding to the lambda n's-- converges to 0. So the fact that A applied to these unit vectors converges to 0 is not specific to eigenvectors of distinct eigenvalues. It's just a property of the compactness of A and the fact that the u n's form an orthonormal sequence: they all have unit length, and any one element in the sequence is orthogonal to any different element in the sequence. So suppose not. Suppose claim 2 does not hold. Then, just negating the definition of convergence, there exists an epsilon 0 positive, and we can find a subsequence A u n j such that for all j, the length of A u n j is bigger than or equal to epsilon 0. If you look at the definition of convergence to 0 and then negate it, you can conclude that there is some bad epsilon 0 and a subsequence so that I have this. All right, since A is a compact operator, there exists a further subsequence. 
And let me call it e sub k, which is u n sub j sub k, such that-- so remember, u n sub j is a unit length vector, and therefore A applied to it is contained in a compact set, since A is a compact operator. So this must have a convergent subsequence, such that A applied to e k converges in H. And note that the norm of A e k, since this is just a subsequence of that sequence, is bigger than or equal to epsilon 0 for all k. Now, since the e k's are a subsequence of an orthonormal sequence, they still form an orthonormal sequence. So note, for all k not equal to l, the inner product of e k with e l, which is the inner product of u n k with u n l, equals 0. And what I'm using here-- so of course, these are all unit vectors. Why are they orthogonal? It's because they correspond to distinct non-zero eigenvalues, and we proved that eigenvectors for distinct eigenvalues are orthogonal to each other-- that was number 2. So assuming the negation of claim 2-- which would prove claim 1 and finish the proof of this whole theorem-- I conclude that there exists a sequence of orthonormal eigenvectors of A so that A e k is always bounded below in norm by epsilon 0, and this sequence A e k converges. So let f be the limit as k goes to infinity of A e k. Then the norm of f, by continuity of the norm, is equal to the limit of the norms of the A e k's, and all of these are bigger than or equal to epsilon 0. So f is non-zero, right? In fact, we can say a little bit more. The norm of f is bigger than or equal to epsilon 0, so the norm of f squared is bigger than or equal to epsilon 0 squared. So epsilon 0 squared is less than or equal to f inner product f, and by continuity of the inner product, that's equal to the limit of the inner product of A e k with f, since the A e k's converge to f. 
And using the fact that A is self-adjoint, this is equal to ek, Af. So I have that this limit here is non-negative. I mean, it's a real number. And it's bigger than or equal to epsilon 0 squared. Now, here's the problem. I have here a sequence of orthonormal vectors, right? And I know that the sum of squares of these Fourier coefficients, which are Fourier coefficients for A applied to f, are-- the sum of squares is finite. And therefore, this has to go to 0. And that's the contradiction to the epsilon 0 squared. So by Bessel, Bessel's inequality, we get that sum over k norm ek, Af squared, this is less than or equal to the norm of Af squared, which is finite. And since this is a convergent series, the individual terms have to converge to 0. And therefore, this equals 0. But this and this are a contradiction. OK, so that finishes the proof of this theorem about the eigenvalues and eigenspaces for a compact self-adjoint operator. All right, so I think we'll stop there.
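The Bessel step that closes the argument-- for an orthonormal family, the sum of squared coefficients is bounded by the norm squared, so the individual coefficients must tend to 0-- can be illustrated numerically. A small sketch of my own, with an arbitrary orthonormal family built by QR factorization:

```python
import numpy as np

# Bessel's inequality in finite dimensions (illustration only): for
# orthonormal vectors e_1, ..., e_m and any vector f,
#   sum_k |<e_k, f>|^2 <= ||f||^2,
# which in the infinite case forces the coefficients <e_k, f> to 0.
rng = np.random.default_rng(1)
n, m = 50, 20                          # ambient dimension 50, 20 vectors
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # columns orthonormal
f = rng.standard_normal(n)

coeffs = Q.T @ f                       # the coefficients <e_k, f>
bessel_sum = np.sum(coeffs ** 2)
assert bessel_sum <= np.dot(f, f) + 1e-12
```

The inequality is strict here because the 20 columns do not span the 50-dimensional space, mirroring an orthonormal sequence that is not a basis.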
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 23: The Dirichlet Problem on an Interval
[SQUEAKING][RUSTLING][CLICKING] CASEY RODRIGUEZ: So today we're going to apply functional analysis to a simple Dirichlet problem, i.e., an ordinary differential equation on an interval with conditions at the boundary. Typically, when you encounter ODEs for the first time, say, in your ordinary differential equations class, you always have an equation you want to solve, and then you specify maybe the function at a point and its first derivative, if it's a second-order differential equation, or just the function evaluated at that point. But now, for this Dirichlet problem, you're going to be specifying the function at two endpoints. So what's the problem we're going to look at-- let V be a continuous real-valued function. And we will consider what we'll call the Dirichlet problem-- minus u double prime of x, plus V of x times u of x, equals f of x, with boundary conditions u of 0 equals 0, u of 1 equals 0. So what do we want to do with this? What's the input? You may think of f as being a kind of force. So you're given a force, and you would like to compute the solution, or show there exists a solution on the interval 0, 1 satisfying the equation. And we'll say a classical solution, meaning it's twice continuously differentiable, so this equation makes sense at every x. So given f, a continuous function, does there exist a unique solution u of x in C2 of 0, 1-- the 2 in C2 meaning two continuous derivatives? So does there exist a unique solution to the Dirichlet problem, this ODE with these boundary conditions at 0 and 1? And the purpose of this section is to apply the functional analysis that we've done to say the answer is yes if V is non-negative. OK. If V is not necessarily non-negative, then the answer is: it depends on f. All right, but given any f, does there exist a unique solution to the Dirichlet problem? 
When V is non-negative, the answer is yes. And this is what we're going to prove. This is the main goal of this section. And I'll state this answer as two theorems. One will be that a solution to this problem is unique. And the second part, the more involved part, will be that there does exist at least one solution to this problem, assuming V is non-negative. So the first thing we'll prove is that any solution to the Dirichlet problem is unique. Suppose V is non-negative, f is a continuous function on 0, 1, and u1, u2 are twice continuously differentiable functions that satisfy the Dirichlet problem. I should say we are now working under the assumption that V is non-negative, so I put that in here. We won't always have to assume V is non-negative in some of the theorems that we'll prove about this problem or about certain operators, so it's good if I specify when I'm assuming V is non-negative. So suppose V is non-negative, f is a continuous function, and I have two solutions to the Dirichlet problem; then they must be the same. Now, the proof-- I think I've given this as a problem in an assignment in past classes, but when in doubt, integrate by parts. By subtracting, let u be u1 minus u2. Then u is a twice continuously differentiable function which satisfies minus u double prime of x plus V of x times u of x equals 0, and u of 0 equals u of 1 equals 0. Now, what we'd like to conclude is that u is 0. And how are we going to do this? We're going to multiply this equation by something and integrate by parts. So we have 0 is equal to the integral from 0 to 1 of minus u double prime of x plus V of x times u of x, all times the complex conjugate of u of x, dx. This is equal to the integral from 0 to 1 of minus u double prime of x times u bar of x, plus the integral from 0 to 1 of V of x times the absolute value of u of x squared. And now we integrate by parts. 
So integration by parts says that if I have a derivative on a function, I can move it to the other function, picking up boundary terms and a minus sign, which turns this minus into a plus. So this is equal to minus u prime of x times u bar of x, evaluated from 0 to 1, plus the integral from 0 to 1 of u prime of x times the derivative of the complex conjugate of u, dx, plus the integral from 0 to 1 of V of x times the absolute value of u of x squared, dx. Now we use the boundary conditions: at 0 and at 1, u is 0, so its complex conjugate is 0, and the boundary terms go away. And what I'm left with-- I'm not going to keep writing the x part-- is the integral from 0 to 1 of the absolute value of u prime squared, plus the integral from 0 to 1 of V times the magnitude of u squared. Now, since V is non-negative, V times the absolute value of u squared is non-negative. So this is certainly bigger than or equal to the integral from 0 to 1 of the absolute value of u prime squared. And remember, what I started off with was 0. So I've shown 0 is bigger than or equal to this non-negative quantity. Now, u is twice continuously differentiable, so the absolute value of u prime squared is a non-negative continuous function. Since its integral equals 0, I get that u prime is identically 0, which implies u is constant. And because u is constant and u of 0 equals 0, I get that u is identically 0, the zero function, which is what I wanted to show. u equals 0 means u1 equals u2. So that's uniqueness. That's half of what we wanted to show, which was: if V is non-negative, then there exists a unique solution to the Dirichlet problem given any f, any input on the right-hand side. Now we're going to turn to existence, which is the more interesting and harder part. So if you don't know how to do something, try to do something easier. So let's start off with the V equals 0 case. We have the following theorem that says we can uniquely solve the Dirichlet problem when V equals 0. 
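The same energy argument has a discrete analogue (my own sketch; the grid size and the potential are illustrative choices, not from the lecture): with the Dirichlet conditions built in, the finite-difference matrix for minus u double prime plus V u is symmetric positive definite whenever V is non-negative, so the discrete linear system has exactly one solution.

```python
import numpy as np

# Discrete analogue of the uniqueness proof: -u'' becomes the standard
# second-difference matrix with u(0) = u(1) = 0 built in; adding a
# non-negative multiplication operator V keeps it positive definite.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)          # interior grid points of (0, 1)
V = x ** 2                            # any non-negative potential

D2 = (np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h ** 2
L = D2 + np.diag(V)

eigmin = np.linalg.eigvalsh(L).min()
assert eigmin > 0                     # positive definite => unique solution
```

Positive definiteness is exactly the discrete counterpart of the integral of the absolute value of u prime squared being zero only for u identically zero.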
So that just means we want to solve minus u double prime equals f of x with these boundary conditions. And we can actually write down the solution explicitly in terms of f using a compact self-adjoint operator, which I've alluded to at a couple of points in the class. So let K of x, y be the function given by 1 minus x, times y, if 0 is less than or equal to y, less than or equal to x, less than or equal to 1; and 1 minus y, times x, if 0 is less than or equal to x, less than or equal to y, less than or equal to 1. So it's piecewise defined across the diagonal, and it's continuous across the diagonal: when y equals x, I get the same thing from both formulas. So this thing, in fact, is a continuous function on 0, 1 cross 0, 1. We have this function here. Define an operator A f of x to be the integral of K of x, y, times f of y, dy. Then A is a bounded linear operator on L2, and in fact, it's a compact self-adjoint operator. And basically, A inverts minus the second derivative: if f is in C of 0, 1, then u equals A f is the unique solution to the Dirichlet problem when V equals 0. I'm not writing out all the arguments of x, but it should be understood that u of 0 equals u of 1 equals 0. So when V equals 0, we can write down the explicit solution in terms of this, what's called an integral operator, because you take a function, multiply it by another function, and integrate. This K here is usually referred to as the Green's function for this differential operator. And what this theorem says is that the solution to your Dirichlet problem is given by an integral operator, which is a compact self-adjoint operator on L2. And it shouldn't come as too much of a surprise that the solution u can be written as an integral operator, right, because by the fundamental theorem of calculus, integration and differentiation are inverses of each other. So let C be the sup over 0, 1 cross 0, 1 of the absolute value of K. So I'm not writing x, y. 
So this is the supremum of K over the square 0, 1 cross 0, 1, which is finite, since K is continuous. And by the Cauchy-Schwarz inequality, we get that the absolute value of A f of x, which is the absolute value of the integral from 0 to 1 of K of x, y, times f of y, dy, is less than or equal to-- if I bring the absolute value inside and then bound K by C-- my name is Casey-- C times the integral from 0 to 1 of the absolute value of f of y, dy. And now I apply Cauchy-Schwarz to this; I can think of this as this quantity times 1. So this is less than or equal to C times the integral from 0 to 1 of 1 squared, raised to the 1/2 power, times the integral from 0 to 1 of the absolute value of f squared, raised to the 1/2 power-- at certain points I will stop writing of x dx, or of y dy-- and I get C times the L2 norm of f. So what have I done? I've bounded, for every x in 0, 1, the absolute value of A f of x by a constant times the L2 norm of f. And I also have, if I look at A f of x minus A f of z-- again by bringing the absolute value inside the integral-- that the difference here is going to be K of x, y minus K of z, y, times f of y, integrated. So what I get is that this is less than or equal to the sup over y in 0, 1 of the absolute value of K of x, y minus K of z, y, times the L2 norm of f. So I'm not going to give all the details, because I've actually assigned this as a problem before-- that if K is a continuous function, then this is a bounded compact operator on L2. So maybe you haven't done the exercises yet, but in any case, I'll just sketch that from these two estimates and the Arzela-Ascoli theorem-- which gives you sufficient conditions for a sequence of continuous functions to have a convergent subsequence in the space of continuous functions-- you can conclude that A is a compact operator on L2. So that shows A is a compact operator on L2. Why is it self-adjoint? Let f, g be continuous functions on 0, 1. Then if I look at A f paired with g-- this is in the L2 pairing-- this is equal to the integral from 0 to 1 of the integral from 0 to 1 of K of x, y, f of y, dy, times g bar of x, dx. 
Now I have a double integral involving nothing but continuous functions, so I can apply Fubini's theorem to interchange the integration. So this becomes the integral from 0 to 1 of the integral from 0 to 1-- I can write it this way-- of f of y, times K of x, y, times g bar of x, dy dx. I haven't done anything yet; I'm just bringing this g of x inside and stating that this integral is equal to this double integral. I'm using Fubini's theorem there, which says that if you're integrating continuous functions over a box, then it doesn't matter if you integrate dy first or dx first. So then I can just undo this-- and, oh, the pairing should have a complex conjugate over g. So if I integrate dx first, I can write this as the integral of f of y, times the complex conjugate of the integral from 0 to 1 of K of x, y bar, times g of x, dx, all dy. Again, you can check that this is equal to the previous thing by Fubini, because this iterated integral becomes an integral dx dy, which again by Fubini doesn't depend on the order. And the outer complex conjugate hits this complex conjugate, so I get back K, and it hits g, and I get that. So this says that A f paired with g is equal to f paired with B g, where B g of x-- so I got out a function of y by integrating in x, and if I switch the dummy variables, I can write it as B g of x equals the integral from 0 to 1 of the complex conjugate of K of y, x, times g of y, dy. But now the thing to note is: what is K? K is this function here. It's a real-valued function, so the complex conjugate is equal to the original function. And it's symmetric in x and y: if I switch x and y, I get K of x, y back. So K of y, x is equal to K of x, y. So this is equal to the integral from 0 to 1 of K of x, y, times g of y, dy. But this is just, by definition, equal to A times g. So we've proven that A f paired with g equals f paired with A g, for all f, g which are continuous-- which, remember, is a subset of L2 of 0, 1. 
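Discretizing A makes the symmetry computation concrete. In this sketch of my own (the grid, the test functions, and the Riemann-sum quadrature are illustrative choices), K is written as min(x, y) times 1 minus max(x, y), one compact way to express the piecewise kernel, and the symmetry K(y, x) = K(x, y) makes the quadrature matrix symmetric, so the discrete pairings of A f with g and of f with A g agree.

```python
import numpy as np

# Quadrature approximation of A f(x_i) ~ h * sum_j K(x_i, y_j) f(y_j),
# with K(x, y) = min(x, y) * (1 - max(x, y)): this equals (1 - x) y
# for y <= x and (1 - y) x for x <= y.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

X, Y = np.meshgrid(x, x, indexing="ij")
K = np.minimum(X, Y) * (1 - np.maximum(X, Y))
A = h * K

assert np.allclose(K, K.T)            # kernel symmetry K(y, x) = K(x, y)

f = np.sin(3 * np.pi * x)             # arbitrary real test functions
g = x * (1 - x)
lhs = h * np.dot(A @ f, g)            # discrete <Af, g>
rhs = h * np.dot(f, A @ g)            # discrete <f, Ag>
assert np.isclose(lhs, rhs)
```

For real test functions the complex conjugates drop out, which is exactly the real-valuedness of K used in the lecture.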
But not just a subset-- a dense subset. Since continuous functions on 0, 1 are dense in L2, a density argument implies that this relation has to hold for every f and g in L2. That is, A f paired with g equals f paired with A g, not just for all continuous functions, but for all functions in L2. That proves that A is self-adjoint. All right, and the last part-- verifying that defining u to be A f gives a twice continuously differentiable function which satisfies the Dirichlet problem when V equals 0-- follows from direct computation. If f is in C of 0, 1, and I define u of x to be A f of x and actually write out the integral over the regions where K is defined piecewise, I pick up 1 minus x, times the integral from 0 to x of y f of y, dy, plus x, times the integral from x to 1 of 1 minus y, times f of y, dy. And now I can just apply the fundamental theorem of calculus to show that indeed u is twice continuously differentiable, minus u double prime gives me f, and u vanishes at the endpoints. And u given by A f is the unique solution to that problem, because we've already proven that when V is non-negative, there exists at most one solution to the Dirichlet problem. So in the case V equals 0, we can write down the explicit solution in terms of this integral operator, which on L2 is a compact self-adjoint operator. All right, so now what's the plan for solving the Dirichlet problem when V is not equal to 0? The plan is this-- and this is just formal for now; we'll prove rigorous statements and prove that there exists a solution in the end. We already have uniqueness, like I said. How do we solve this differential equation, minus u double prime plus V times u equals f, with, again, the boundary conditions, which I'm not going to keep writing down? So let me just write it down here. This is the same as saying that minus u double prime equals f minus V u, so f plus minus V times u. Now think of this right-hand side as just a fixed function g. 
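The two-integral solution formula can be checked numerically. This is my own test case, not from the lecture, using the sign convention under which A solves minus u double prime equals f: for f of x equals pi squared sine pi x, the solution with zero boundary values is sine pi x, and the formula should reproduce it.

```python
import numpy as np

# u(x) = (1 - x) * int_0^x y f(y) dy + x * int_x^1 (1 - y) f(y) dy,
# evaluated on a grid with cumulative trapezoid integration.
grid = np.linspace(0.0, 1.0, 2001)
f = np.pi ** 2 * np.sin(np.pi * grid)

def cumtrap(y, x):
    # cumulative trapezoid rule, starting from 0 at x[0]
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) * np.diff(x) / 2.0)
    return out

I1 = cumtrap(grid * f, grid)          # int_0^x y f(y) dy
I2 = cumtrap((1 - grid) * f, grid)    # int_0^x (1 - y) f(y) dy
tail = I2[-1] - I2                    # int_x^1 (1 - y) f(y) dy
u = (1 - grid) * I1 + grid * tail

assert abs(u[0]) < 1e-12 and abs(u[-1]) < 1e-12   # boundary conditions hold
assert np.max(np.abs(u - np.sin(np.pi * grid))) < 1e-4
```

The boundary values vanish exactly by construction, which is the structural reason the Green's function has the factors 1 minus x and x in front of the two integrals.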
And therefore, by the existence and uniqueness result that we have for the V equals 0 case-- think of f minus V u as g, so now I have minus u double prime equals g-- this implies that u equals A applied to f minus V u. The unique solution to minus u double prime equals g is A applied to g; g is f minus V u, so I have this. And by this if and only if-- if I distribute through-- this is the same as saying that the identity plus the operator given by A composed with multiplication by V-- so when I write V here, I mean multiplication by V-- applied to u, equals A applied to f. And now this is good, because we've gotten rid of the differentiation, and we're talking about solving an equation that involves bounded operators. The identity is a bounded operator, multiplication by a continuous function is a bounded operator on L2, and A is a compact self-adjoint operator on L2. So now we're solving an equation involving bounded operators on a Hilbert space. What would make it even better is if this operator A V were self-adjoint-- we already know it's a compact operator, because A is compact-- but that doesn't necessarily hold, because the adjoint of A V will be V times A. So these don't exactly commute. But we can get around that and reduce ourselves to studying an equation that involves compact self-adjoint operators by a nice little trick. So write u as A to the 1/2 applied to little v, where A to the 1/2 means an operator whose square gives me A. The fact that such a thing exists is not clear right now, but we will show it, in fact, exists. Again, formal stuff. So write u as A to the 1/2 applied to v, where now little v is the thing we solve for. And if we stick this into this equation here, we can pull out an A to the 1/2, and we get A to the 1/2, applied to, I plus A to the 1/2 V A to the 1/2, applied to little v, equals A f. 
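The reformulated operator equation, the identity plus A V applied to u equals A f, can be solved directly in the discretization. This is my own sketch (the potential V, the right-hand side f, and the grid are arbitrary illustrative choices), checked against the original ODE via second differences:

```python
import numpy as np

# Build the quadrature matrix for A, solve (I + A V) u = A f for u, and
# verify the residual of -u'' + V u - f at interior grid points.
n = 400
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

X, Y = np.meshgrid(x, x, indexing="ij")
A = h * np.minimum(X, Y) * (1 - np.maximum(X, Y))

V = 1.0 + np.cos(np.pi * x) ** 2      # a non-negative potential
f = np.exp(x)                         # an arbitrary right-hand side

# (A composed with multiplication by V) has matrix A @ diag(V),
# i.e. A with its columns scaled by V
u = np.linalg.solve(np.eye(n) + A * V[None, :], A @ f)

upp = (u[:-2] - 2 * u[1:-1] + u[2:]) / h ** 2     # second differences
res = -upp + V[1:-1] * u[1:-1] - f[1:-1]
assert np.max(np.abs(res)) < 1e-6
```

The tiny residual reflects that this grid version of A is (up to rounding) the exact inverse of the second-difference matrix with Dirichlet conditions, so the reformulation and the original equation agree at the discrete level.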
And therefore, if I believe I can cancel the A to the 1/2 on both sides, I've reduced myself to studying the equation, I plus A to the 1/2 V A to the 1/2, applied to little v, equals A to the 1/2 f. Remember, f is given, so whatever this thing is on the right side, we know it ahead of time. So our problem is to solve this equation here. Now, what's the great thing? The great thing is that because A is compact, self-adjoint, and in fact a non-negative operator, A to the 1/2 exists, is also a compact operator, and is also self-adjoint. So then we have this self-adjoint operator on both sides of V, both of them compact, so the whole product A to the 1/2 V A to the 1/2 will be a compact self-adjoint operator. So if we want to be able to solve for little v-- i.e., invert the operator on the left side-- this is just the identity plus a compact self-adjoint operator, which you can think of as a compact self-adjoint operator minus, minus 1, times the identity. So now that's an equation we know how to solve. We have the Fredholm alternative, which says we can invert this operator if and only if the entire operator doesn't have a non-trivial null space. So that's the plan: we will prove that we can invert this operator here. First, we have to prove that we can find A to the 1/2, meaning an operator whose square gives me A, and prove the properties we need; then show that this operator, the identity plus A to the 1/2 V A to the 1/2, is invertible; define little v as the inverse of this applied to A to the 1/2 f; set u equal to A to the 1/2 applied to v; and conclude that u solves our problem. So that's where we're headed. So now, to get this plan off the ground, we need to show that we can come up with such a compact self-adjoint operator whose square gives me A. And as a first step in this direction, we are going to compute the spectrum of the operator A, which, again, is the inverse of this Dirichlet problem. 
So when f is continuous, A f is the unique twice continuously differentiable function whose negative second derivative gives me f and which vanishes at the endpoints; but in general, it's this integral operator. So the first thing I want to prove is that the null space of A is the zero vector, the zero function-- so A has no non-trivial null space-- and that the orthonormal eigenvectors for A are given by u k of x equals the square root of 2 times sine of k pi x, for k a natural number, with associated eigenvalues lambda k equals 1 over k squared pi squared. So let me just make a brief remark. We have, via the spectral theorem that we proved last time, that for a compact self-adjoint operator, the eigenvectors form an orthonormal basis for the orthogonal complement of the null space of A. And then to complete this to an orthonormal basis of all of the Hilbert space, in this case L2, we just need to take an orthonormal basis of the null space of A. But the null space of A is just the zero vector. So by the spectral theorem for compact self-adjoint operators, we get that the square root of 2 times sine k pi x, for k from 1 to infinity, is an orthonormal basis for L2 of 0, 1. So we use the spectral theorem to conclude that this is an orthonormal basis for L2. You can also prove it directly using what we know about e to the i n x being an orthonormal basis for minus pi to pi. Just by rescaling, e to the i n pi x gives an orthonormal basis for L2 of minus 1 to 1. Now, we have functions here defined on 0, 1, which we can extend by odd parity to minus 1 to 1, and the only parts of the expansion in e to the i n pi x of an odd function that survive are the sines-- these guys. So without knowing that these are the eigenfunctions of this operator A, you could also conclude that this is an orthonormal basis for L2 of 0, 1. 
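The claimed spectrum can be confirmed numerically. A sketch of my own (the grid size is an arbitrary choice, and the kernel is written as min(x, y) times 1 minus max(x, y), the convention under which A inverts minus the second derivative): the largest eigenvalues of the discretized integral operator should approach 1 over k squared pi squared, with eigenvectors close to the square root of 2 times sine k pi x.

```python
import numpy as np

# Eigen-decomposition of the quadrature matrix for the integral
# operator A with kernel K(x, y) = min(x, y) (1 - max(x, y)).
n = 500
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
A = h * np.minimum(X, Y) * (1 - np.maximum(X, Y))

eigvals, eigvecs = np.linalg.eigh(A)          # ascending order
top3 = eigvals[::-1][:3]                      # three largest eigenvalues
expected = 1.0 / (np.array([1.0, 2.0, 3.0]) ** 2 * np.pi ** 2)
assert np.allclose(top3, expected, rtol=1e-3)

# leading eigenvector vs sqrt(2) sin(pi x), rescaled so its discrete
# L2 norm (h * sum of squares) is 1; sign of an eigenvector is arbitrary
v = eigvecs[:, -1] / np.sqrt(h)
u1 = np.sqrt(2) * np.sin(np.pi * x)
assert min(np.max(np.abs(v - u1)), np.max(np.abs(v + u1))) < 1e-2
```

Note that all the computed eigenvalues come out positive, the numerical face of A being a non-negative operator, which is what the square root construction below relies on.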
All right, so to prove this theorem-- the way we're going to prove that the null space of A is trivial is we will show that the range of A is, in fact, dense in L2. So we'll show that the closure of the range of A equals all of L2. Remember, the closure of the range is equal to the orthogonal complement of the null space. So if the orthogonal complement of the null space is the entire space, then that means the null space of A is just the trivial vector. So I need to show that a dense subspace of L2 lies in the range of A. So let u be a polynomial on 0, 1 with u of 0 equals u of 1 equals 0, and let f equal minus u double prime. Then by the previous theorem, A applied to f is the unique solution to the Dirichlet problem with the function V being 0-- i.e., minus A f double prime gives me f, and A f of 0 equals A f of 1 equals 0. But how did we define f? f is minus u double prime, u is a polynomial on 0, 1 that's 0 at the endpoints, and therefore, by uniqueness, I conclude that A f equals u. I hope this is clear. Now, the set of polynomials on 0, 1 vanishing at x equals 0 and 1 is dense in L2. Now, why is this? This is because we know that continuous functions vanishing at the two endpoints are dense in L2, and by the Weierstrass approximation theorem, every continuous function on 0, 1 can be approximated uniformly by a polynomial. And it's not too difficult to convince yourself that, if that's the case, then I can also approximate every continuous function that's vanishing at the two endpoints by a polynomial that's vanishing at the two endpoints. 
So since I can approximate every continuous function on 0, 1 vanishing at the endpoints by a polynomial vanishing at the endpoints, those polynomials vanishing at the endpoints are dense in L2 of 0, 1. So we'll just say here that this follows from density of the continuous functions vanishing at the two endpoints-- I'll put a 0 in the notation, meaning 0 at the two endpoints-- and the Weierstrass approximation theorem. So we've been able to solve for every u in a dense subset of L2, right: every polynomial that's 0 at the endpoints is in the range of A. And therefore, the range of A contains a dense subset of L2 of 0, 1, and therefore the closure of the range has to be all of L2 of 0, 1. And then, since the orthogonal complement of the null space of A is equal to the closure of the range, and this equals L2 of 0, 1, I conclude that the null space is just the trivial vector. So A has no null space. So by the spectral theorem, an orthonormal basis for L2 of 0, 1 is given by the eigenvectors of A. And now we'll prove the eigenvectors are given by this form-- I'll just give you a brief sketch. So let's solve for the eigenvalues and eigenvectors. Suppose that lambda does not equal 0, u is an element of L2 that has unit norm, and A applied to u equals lambda times u. Then u equals 1 over lambda times A u, which is fine, because lambda is non-zero, so I can divide by it. Now, I didn't say this when I was discussing the estimates-- let's see if we still have them up there. We do, so we can talk about it. For any function f in L2, this estimate here also proves that A applied to f is a continuous function. I hope that's clear. Why is this so? I need to make the thing on the left-hand side small if x and z are close together. 
Now, that only depends on some number, the L2 norm of f, times this quantity here. K is a continuous function on 0, 1 cross 0, 1-- in fact, uniformly continuous on the compact square. So as long as I make x and z close, then x, y and z, y will be close, and since K is continuous, this quantity here will be small. And therefore, the thing that's smaller than that will be small. So that's why, if I take a function which is in L2 and hit it with A, I get a continuous function. So u equals 1 over lambda times this continuous function implies that u is continuous. But now we're going to feed that back in. Because if u is continuous, then A applied to u is twice continuously differentiable, which implies that u equals 1 over lambda A u is twice continuously differentiable. This is what's called a bootstrap argument, I guess. Is that the right word? Anyways, in any case, we conclude that an eigenfunction of A is twice continuously differentiable. So now u is a twice continuously differentiable function which is equal to 1 over lambda times A u-- another way to write this is A applied to u over lambda, because A is a linear operator. Now, A applied to something is the unique function whose second derivative times minus 1 gives me the thing inside. So I conclude that minus u double prime equals 1 over lambda times u, along with the boundary conditions, u of 0 equals u of 1 equals 0. But now I know how to solve this equation. The solution has to be a superposition of two functions: u of x must be equal to a times the sine of x over the square root of lambda, plus b times the cosine of x over the square root of lambda. And since u of 0 has to give me 0, that tells me that b equals 0, which tells me u of x equals a times the sine of x over the square root of lambda-- and since u has unit length, a is not equal to 0. And now u of 1 equals 0 implies that 1 over the square root of lambda has to be an integer multiple of pi, if u is to be non-zero. 
And therefore, u of x is equal to A times sine k pi x for some k. And the square root of 2 comes from the normalization condition, which implies that u of x equals square root of 2 sine k pi x for some natural number k. So square root of 2 times sine k pi x, as k varies over the natural numbers, gives me an orthonormal basis for L2 that consists of eigenvectors of A. The eigenvalues are 1 over k squared pi squared. So I can think of A as simply multiplying each eigenvector by 1 over k squared pi squared. And if I want to define A to the 1/2, which is why I'm doing all this, then we'll define an operator that takes each element of this orthonormal basis and simply multiplies it by 1 over k pi, which is the half power of what A would do. This is also how one can define what's called the functional calculus for self-adjoint compact operators, which you can then extend to self-adjoint operators as well. So let's make this a definition. If f is in L2 of 0, 1, with f given by this Fourier expansion with ck given by the integral from 0 to 1 of f of x, square root of 2 sine k pi x, dx, then we define a linear operator which we call A to the 1/2 -- although I'm not saying that its square is A just yet -- by A to the 1/2 f of x equals the sum from k equals 1 to infinity of 1 over k pi, ck, square root of 2 sine k pi x. So right now I just have this expression: given a function in L2 with this expansion in terms of this orthonormal basis, I take the coefficients and multiply them by 1 over k pi. And my claim is that A to the 1/2 is a compact self-adjoint operator on L2. And this notation is not just for show: if I take A to the 1/2 and compose it with itself, meaning I square it, I get the operator A, which remember was defined as this integral operator involving K, right. If you like, though, so let me just [INAUDIBLE].
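As a quick numerical sanity check -- my own illustration, not part of the lecture -- one can verify by quadrature that the functions square root of 2 times sine k pi x really are orthonormal in L2 of 0, 1. The midpoint rule and grid size here are just convenient choices, an assumption of this sketch.

```python
import math

# e_k(x) = sqrt(2) sin(k*pi*x), the eigenbasis derived in the lecture
def e(k, x):
    return math.sqrt(2) * math.sin(k * math.pi * x)

def inner(f, g, n=5000):
    # L^2(0,1) inner product via a midpoint rule (my quadrature choice)
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

for k in range(1, 4):
    for m in range(1, 4):
        ip = inner(lambda x: e(k, x), lambda x: e(m, x))
        expected = 1.0 if k == m else 0.0
        assert abs(ip - expected) < 1e-6
print("sqrt(2) sin(k pi x) are orthonormal in L2(0,1)")
```

The quadrature error here is far below the tolerance because the integrands vanish at both endpoints.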
Remember, A, when it hits each of these sine k pi x's, spits out 1 over k squared pi squared times it. So A applied to f would be this thing with the coefficients multiplied by 1 over k squared pi squared. And so it makes sense that A to the 1/2 should be something that, when I apply it twice, gives back A: since A multiplies the coefficients by 1 over k squared pi squared, and A to the 1/2 multiplies them by 1 over k pi, applying A to the 1/2 twice should give me what I want. And now it's just the process of confirming these facts that we need. But you may think of A as this integral operator. You can think of A as a solution operator for this Dirichlet problem. Or you may simply think of it as: I take a function f, expand it in terms of sine k pi x, an orthonormal basis for L2 of 0, 1, and I simply multiply the coefficients by 1 over k squared pi squared. How does this jibe with what I just said about A also being the solution operator? Well, if I apply A to sine k pi x, then I have sine k pi x over k squared pi squared. Now if I take two derivatives of that, I get k squared pi squared over k squared pi squared, which is 1. So, up to the minus sign, I get back out the function. So all of those things jibe. All right, so proof of this theorem. You write, say, two functions in L2 expanded in this basis. And the first thing I would like to show is that this operator is a bounded linear operator. So I want the L2 norm of A to the 1/2 f, squared. This is the L2 norm squared of the function with coefficients ck over k pi in front of square root of 2 sine k pi x. And now, the L2 norm squared of a function given by this Fourier expansion is just the sum of the squares of the coefficients appearing in front of the square root of 2 sine k pi x. So this is equal to, by Parseval's identity, the sum of ck squared over k squared pi squared.
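The lecture keeps the kernel K abstract. As a hedged numerical sketch, one can check that the integral operator with the standard Green's function kernel K(x, y) = min(x, y)(1 - max(x, y)) for this Dirichlet problem -- that explicit formula is an assumption of mine, a standard fact not stated in the lecture -- acts on sine k pi x exactly as the Fourier multiplier 1 over k squared pi squared:

```python
import math

# Assumed Green's function for -u'' = f, u(0) = u(1) = 0 (not from the lecture)
def K(x, y):
    return min(x, y) * (1.0 - max(x, y))

def A(f, x, n=20000):
    # (Af)(x) = integral_0^1 K(x,y) f(y) dy, midpoint rule
    h = 1.0 / n
    return sum(K(x, (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

# A should multiply sin(k pi x) by 1/(k^2 pi^2), matching the eigenvalues above
for k in (1, 2, 3):
    for x in (0.3, 0.7):
        got = A(lambda y: math.sin(k * math.pi * y), x)
        want = math.sin(k * math.pi * x) / (k * math.pi) ** 2
        assert abs(got - want) < 1e-6
print("A acts as the Fourier multiplier 1/(k^2 pi^2)")
```

So the integral-operator picture and the coefficient-multiplier picture of A agree numerically.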
And since 1 over k squared pi squared is bounded by 1 over pi squared, this is bounded by 1 over pi squared times the sum from k equals 1 to infinity of ck squared, which equals 1 over pi squared times the L2 norm of f squared -- again by Parseval's identity, or the completeness of this eigenbasis, which gives me back the L2 norm. So that proves boundedness. What about self-adjointness? If I take A to the 1/2 f and take its inner product with g, then again using how these things are defined, and the fact that the inner product between two functions is given by the little l2 pairing of the coefficients that appear in front of square root of 2 sine k pi x, this is the sum of ck over k pi, times dk complex conjugate, where the dk are the coefficients of g. And now we just move the 1 over k pi over to the dk. These are real numbers, so I can move them over onto the dk without taking their complex conjugate. And that's just equal to f inner product A to the 1/2 g. So that proves that A to the 1/2 is self-adjoint. And now finally, we show that A to the 1/2 squared gives me A. So let f be -- OK, so I wrote that part already. I don't need to write it again. If I look at A to the 1/2 squared applied to f, this is by definition equal to A to the 1/2 applied to the function given in this orthonormal basis by ck over k pi, square root of 2 sine k pi x. Again, how do I apply A to the 1/2? I take the coefficients in front of the basis functions and divide by k pi. So then I get the sum from k equals 1 to infinity of ck over k squared pi squared, square root of 2 sine k pi x. Let's leave it here. Now going from here to what we have next: since each of these is an eigenfunction of A with eigenvalue 1 over k squared pi squared, each term is also equal to ck times A applied to square root of 2 sine k pi x. And now it's perfectly legitimate to pull this A out of the infinite sum, because this is a basis for L2, or really by [INAUDIBLE]. In any case, I can pull the A out and I get A applied to the sum of ck square root of 2 sine k pi x. Why can I do this? Because if I put a finite n in here, then the sum from k equals 1 to n converges in L2 norm to this quantity here with an infinity.
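Parseval's identity, which both steps above lean on, can itself be checked numerically. This is a hedged sketch of my own: the test function f(x) = x(1 - x) and the midpoint quadrature are assumptions, not from the lecture.

```python
import math

def f(x):
    return x * (1.0 - x)

N = 10000
H = 1.0 / N
XS = [(i + 0.5) * H for i in range(N)]

def coeff(k):
    # c_k = integral of f(x) * sqrt(2) sin(k pi x) dx on (0,1)
    return sum(f(x) * math.sqrt(2) * math.sin(k * math.pi * x) for x in XS) * H

norm_sq = sum(f(x) ** 2 for x in XS) * H           # ||f||^2 in L2(0,1)
cs = [coeff(k) for k in range(1, 41)]
parseval = sum(c * c for c in cs)                  # sum of c_k^2
assert abs(norm_sq - parseval) < 1e-6              # Parseval's identity
half_sq = sum((c / ((k + 1) * math.pi)) ** 2 for k, c in enumerate(cs))
assert half_sq <= norm_sq / math.pi ** 2 + 1e-12   # the 1/pi^2 bound for A^{1/2}
print("||f||^2 =", norm_sq)
```

For this f the coefficients decay like 1 over k cubed, so 40 terms already capture the norm to well within the tolerance.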
So since A is a bounded linear operator, the limit as n goes to infinity of A applied to the finite sum is equal to A applied to the infinite sum. Now A applied to a finite sum is given by this, letting n go to infinity again; we can take the A in and out. But this was just the expansion of f. So this is equal to Af. So we have proven that A to the 1/2 squared -- meaning A to the 1/2 applied to A to the 1/2 of f -- gives me back Af. Now let me give a brief sketch of why this is a compact operator. We've shown it's a bounded self-adjoint operator whose square gives me A. To see it's a compact operator, we'll show that the image of the unit ball has equi-small tails. So it suffices to show that the set of A to the 1/2 applied to f, for f in the unit ball, has equi-small tails. The fact that this set has equi-small tails then implies that its closure also has equi-small tails. And therefore, by the theorem we proved characterizing compact subsets of Hilbert spaces, we conclude that the closure of this set is compact, and therefore A to the 1/2 is a compact operator. So let epsilon be positive, and choose N so that 1 over N squared is less than epsilon squared. We can do that because 1 over capital N squared goes to 0 as capital N goes to infinity. And now we claim that this N is the one that works for showing this set has equi-small tails. Let f be in L2 with norm less than or equal to 1. If I look at the tail, the sum for k bigger than N of the squares of the Fourier coefficients of A to the 1/2 f in this basis of square root of 2 sine k pi x, this is equal to, by definition of how A to the 1/2 operates, the sum for k bigger than N of ck squared over k squared pi squared. Since k is bigger than N, this is less than or equal to 1 over N squared pi squared times the sum of the ck squared -- and I can even drop the pi squared, since that only makes the bound bigger. And I can now extend the sum to all the ck's.
And this is equal to 1 over N squared times the L2 norm of f squared, which is less than or equal to 1 over N squared, since f has norm at most 1. And we chose N so that 1 over N squared is less than epsilon squared. So that's less than epsilon squared, therefore showing that the set has equi-small tails. And therefore the closure of this set is compact. So we have this A to the 1/2 operator, which we needed to carry out the plan we sketched at the beginning for solving the Dirichlet problem. So now we just need to check that the operator I had before, which is A to the 1/2 composed with multiplication by V composed with A to the 1/2, is a compact self-adjoint operator. First, let me just state a theorem about multiplying by a continuous function. Let V be a real-valued continuous function. Define m sub V -- m as in multiplication -- by m sub V f of x equals V of x times f of x, for f in L2. Then this is a bounded linear operator on L2, and it's self-adjoint. I'm actually going to leave this as an exercise. It's not too difficult to prove. Multiplying by this function, which is continuous and hence bounded, gives you boundedness automatically. And the fact that it's real-valued gives you the self-adjointness. So from this, you can conclude the following. Let V be a real-valued continuous function. Then the operator T equals A to the 1/2 composed with multiplication by V, composed with A to the 1/2, satisfies the following. One: T is a self-adjoint compact operator on L2. And two, an extra property: T is a bounded operator from L2 to continuous functions. So if I take an L2 function and stick it into this operator, I get out a continuous function, in a bounded way. Now, one follows immediately from what we've proven already and this exercise of a theorem that I gave, right. Multiplication by V is a bounded self-adjoint operator. A to the 1/2 is a self-adjoint compact operator.
So when I take the adjoint of this composition, I get the adjoint of A to the 1/2 in front, then the adjoint of m sub V, then the adjoint of A to the 1/2, and each is equal to itself. So I get out something self-adjoint. And this is the composition of a bounded operator with a compact operator, so I get a compact operator. So that proves it's a self-adjoint compact operator on L2. Why is T a bounded operator from L2 to continuous functions? Well, it suffices to show that A to the 1/2 is a bounded operator from L2 to continuous functions. Why is that? Because if I take an L2 function and feed it into this operator T, then A to the 1/2 of it will be a continuous function. Multiplying by V keeps it a continuous function, because V is continuous. And A to the 1/2 applied to a continuous function -- which in particular is in L2 -- again spits out a continuous function. So it suffices to show that A to the 1/2 is a bounded linear operator from L2 to continuous functions. So let f be given by its Fourier expansion in terms of the sine k pi x's. Then A to the 1/2 f of x is equal to the sum of ck over k pi, square root of 2 sine k pi x. And now, to show that A to the 1/2 applied to f is a continuous function, we'll apply the Weierstrass M-test, right. This is an infinite sum of continuous functions. So to apply the Weierstrass M-test, I have to bound each term by something which is summable. If I take the absolute value of a term, that's less than or equal to the absolute value of ck over k pi, times square root of 2, which is less than or equal to ck over k in absolute value. And I claim this is summable: if I sum from k equals 1 to infinity ck over k, then by Cauchy-Schwarz, this is less than or equal to the sum over k of 1 over k squared, raised to the 1/2, times the sum over k of ck squared, raised to the 1/2. And remember, this second factor is just the L2 norm of f, because the square root of 2 sine k pi x's are an orthonormal basis for L2. And the first factor is pi squared over 6, raised to the 1/2. So the sum is bounded by pi squared over 6, raised to the 1/2, times the L2 norm of f.
So each of the continuous functions in this infinite sum is bounded by a constant, and those constants are summable by this computation. Thus A to the 1/2 f is a continuous function, by the Weierstrass M-test. Not only that -- this computation shows that the sup of A to the 1/2 f is less than or equal to pi squared over 6, raised to the 1/2, times the L2 norm of f. So it's a bounded linear operator from L2 to the space of continuous functions. So where are we? We have all the pieces in place to solve our Dirichlet problem. All the ingredients are ready. We just need to cook them. So I was mentioning the Weierstrass M-test. That should have been covered in 18.100: if there's an infinite sum of continuous functions, each of those continuous functions is bounded by a constant, and those constants are summable, then you get a continuous function out in the end. So now the theorem that concludes the existence part of the Dirichlet problem. Let V, a continuous function on 0, 1, be real-valued and non-negative. And let f be a continuous function. Then there exists a unique twice continuously differentiable function u on 0, 1 solving the Dirichlet problem: minus u double prime plus V times u equals f on 0, 1, with boundary conditions u of 0 equals u of 1 equals 0. All right, so I'll just recall for you that the plan was to define u to be A to the 1/2, composed with the inverse of I plus A to the 1/2 m sub V A to the 1/2, applied to A to the 1/2 f. Now we just need to say why this thing exists -- why is the operator in the middle invertible? And then we'll get what we need. So, proof by the Fredholm alternative, right. By the previous theorem, this operator -- m sub V sandwiched between two copies of A to the 1/2 -- is a self-adjoint compact operator. And therefore, by the Fredholm alternative, the inverse of I plus that operator exists if and only if its null space is trivial.
Now suppose we have something in the null space -- call it g, an element of L2 -- and we'll show that it has to be 0. If I pair I plus A to the 1/2 m sub V A to the 1/2, applied to g, with g itself, I get 0. Now carrying this g through, the identity gets paired with g itself, so I get the norm of g squared, plus A to the 1/2 m sub V A to the 1/2 applied to g, inner product g. Now since A to the 1/2 is self-adjoint, I can move the outer one over to the second entry. And let me just write this out. This is equal to the L2 norm of g squared, plus the integral from 0 to 1 of V times A to the 1/2 g, times the complex conjugate of A to the 1/2 g -- this is all pointwise multiplication -- dx. That integrand is V times the absolute value of A to the 1/2 g squared. Now V is non-negative, so this quantity is non-negative. So the whole thing is bigger than or equal to the L2 norm of g squared. And we started out with 0. So I get that g is 0, and therefore the null space of I plus this self-adjoint compact operator is trivial. And therefore, I plus this compact self-adjoint operator is invertible -- the Fredholm alternative. So this inverse exists. And I define u to be what? To keep the potential V and the vectors apart, let me call the auxiliary element w: I define w to be the inverse of I plus A to the 1/2 m sub V A to the 1/2, applied to A to the 1/2 f, and u to be A to the 1/2 of w. So then what do I get? Then u plus A applied to V times u -- so I'll say A m sub V u -- is equal to, by definition, A to the 1/2 w plus A to the 1/2, A to the 1/2 m sub V A to the 1/2 w, simply because A is equal to A to the 1/2 squared. So I get A to the 1/2 applied to, I plus A to the 1/2 m sub V A to the 1/2, applied to w. And now w is given by that inverse applied to A to the 1/2 f. So when this operator hits w, I just get back A to the 1/2 f, and then the outer A to the 1/2 gives me Af.
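The lecture's construction is operator-theoretic. As a purely numerical cross-check -- my own sketch, not the lecture's method -- one can discretize minus u double prime plus Vu = f with central differences and a tridiagonal (Thomas) solve, and confirm the unique solution against a known exact case. The scheme, grid size, and test case are all assumptions of this sketch.

```python
import math

def solve_dirichlet(V, f, n=1999):
    # second-order central differences on a uniform interior grid,
    # solved by the Thomas algorithm for the tridiagonal system
    h = 1.0 / (n + 1)
    a = [-1.0 / h ** 2] * n                                  # sub-diagonal
    b = [2.0 / h ** 2 + V((i + 1) * h) for i in range(n)]    # diagonal
    c = [-1.0 / h ** 2] * n                                  # super-diagonal
    d = [f((i + 1) * h) for i in range(n)]
    for i in range(1, n):                                    # forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):                           # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# exact solution u = sin(pi x) when V = 1 and f = (pi^2 + 1) sin(pi x)
n = 1999
u = solve_dirichlet(lambda x: 1.0, lambda x: (math.pi ** 2 + 1.0) * math.sin(math.pi * x), n)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) / (n + 1))) for i in range(n))
assert err < 1e-5
print("max error vs exact sin(pi x):", err)
```

The existence and uniqueness guaranteed by the theorem are what make this discrete solve well-posed in the first place.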
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_18_The_Adjoint_of_a_Bounded_Linear_Operator_on_a_Hilbert_Space.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so let's continue with our discussion of Hilbert spaces. So last time, we finished with the Riesz representation theorem -- the only representation theory I remember, Riesz representation theorem -- which states that if you have a Hilbert space H, then for all f in the dual space -- which, remember, is the set of bounded linear maps from H to the complex numbers -- there exists a unique v in H such that f of u is equal to u inner product v, for all u in H. So every continuous linear functional on a Hilbert space can be realized as an inner product with a fixed vector. So using this, we can revisit a subject which you touched upon in the assignment: adjoints. Let me state this as a theorem. Let H be a Hilbert space and A going from H to H be a bounded linear operator. The conclusion is: there exists a unique bounded linear operator A star from H to H, which we call the adjoint of A, such that A star satisfies the following property -- for all u, v in H, A u inner product v is equal to u inner product A star v. And moreover, the operator norm of the adjoint equals the operator norm of A. Now, the same way we proved uniqueness for the Riesz representation theorem, you can prove that a linear operator satisfying this identity for all u and v has to be unique. So I'll just state here that uniqueness of such an operator A star follows from the identity it satisfies -- just like the vector v, which spits out f of u when you take the inner product of u with it, is unique regardless of how you created it based on that identity, the same thing holds for the adjoint. It's just the same argument, is what I mean. So how do we define this operator? Well, we basically define it by this relation. Let v be in H. Define a map f sub v from H to C via f sub v of u is equal to A u inner product with v. Now, this is clearly a linear map.
Maybe I shouldn't check this, maybe I should, but let's check it anyways. If I take f sub v of lambda 1 times u1 plus lambda 2 u2, this is equal to A times lambda 1 u1 plus lambda 2 u2, inner product v. And now, we use the fact that A is a linear operator, so that it acts on the linear combination by linearity. So then I get lambda 1 A u1 plus lambda 2 A applied to u2, inner product v. And because the inner product is linear in the first entry -- it's conjugate linear in the second entry, but it's linear in the first entry -- this is equal to lambda 1 times A u1 inner product v, plus lambda 2 times A u2 inner product v, which is lambda 1 f sub v of u1 plus lambda 2 f sub v of u2. So this thing is a linear map. I claim it's also continuous, i.e., it's an element of the dual. And we just check. If the norm of u equals 1, then if I take the absolute value of f sub v applied to u, this is, by definition, the absolute value of A u inner product v, which is less than or equal to, by the Cauchy-Schwarz inequality, the norm of A u times the norm of v. And the norm of A u is less than or equal to the norm of A, since A is a bounded linear operator and u has norm 1. So the whole thing is at most the norm of A times the norm of v. This holds for all u with norm u equals 1, and therefore the norm of this functional is less than or equal to the norm of A times the norm of v. So that proves that f sub v is an element of the dual. And now we use the Riesz representation theorem. By the Riesz representation theorem, there exists a unique element, which we denote A star v, in H, such that for all u in H, f sub v of u equals u inner product with A star v. I.e., for all u in H, A u inner product v -- because that's the definition of this guy on the left -- is equal to u inner product A star v. So this defines, for each v, an element A star v in H. So now let's prove that this map v to A star v is a bounded linear operator. That then defines the adjoint. First, we claim the map v maps to A star v -- the element that satisfies this identity -- is linear. So let v1, v2 be in H, and lambda 1, lambda 2 be complex numbers.
Then for all u in H, we have what? If I take u inner product with the element A star applied to lambda 1 v1 plus lambda 2 v2 -- remember, this is the element whose inner product against every u is defined by that identity -- this is equal to A u inner product with lambda 1 v1 plus lambda 2 v2. And this is equal to -- the inner product is conjugate linear in the second entry -- lambda 1 complex conjugate, A u inner product v1, plus lambda 2 complex conjugate, A u inner product v2. Now, by how A star is defined, this quantity here is equal to lambda 1 bar times u inner product A star v1, plus lambda 2 bar times u inner product A star v2. And pulling the complex numbers back in, this is equal to u inner product with lambda 1 A star v1 plus lambda 2 A star v2. Now, this holds for all u in H, i.e., the inner product of u with this element minus this element is 0 for all u in H. And the only thing orthogonal to everything in H is the zero vector. So we conclude that the element corresponding to this linear combination is the linear combination of these elements. Thus, the map v maps to A star v is linear. And so we denote this map now simply by A star; it acts on a vector v by sending it to A star v. So now, A star is a linear operator -- when I write A star here, I mean the map that takes v to A star v. Now I want to check that this linear operator is a bounded linear operator; I need to check that it's continuous. And in the process, we'll end up showing that the norm of A star equals the norm of A. So suppose norm of v equals 1. I want to bound A star applied to v. If A star v equals 0, then clearly the norm of A star v is less than or equal to the norm of A -- this, in the end, is what I will show, but I'm dealing first with the trivial case, which is when the thing on the left-hand side is 0. So suppose A star v is non-zero.
Then, if I take the norm of A star v squared, this is equal to A star v inner product A star v. Now, by the defining property of the adjoint, this is equal to A applied to A star v, inner product v -- taking this A star off the second entry and moving it over here as an A. And by Cauchy-Schwarz -- well, first off, all of these numbers are real, so it makes sense to write less than or equal to -- this is less than or equal to the norm of A applied to A star v, times the norm of v, which equals 1. And the norm of A applied to A star v is bounded by the norm of A times the norm of the input, which is A star v. So we started off with the norm of A star v squared less than or equal to the norm of A times the norm of A star v. This is non-zero, so I can divide by it and conclude that the norm of A star v is less than or equal to the norm of A. Now remember, the norm of a linear operator is the sup of this over all v which have length 1. And since this is bounded above by the norm of A, that implies that the norm of A star is less than or equal to the norm of A. So we've shown basically all of this theorem, with the exception of the equality of the norms: every bounded linear operator has an adjoint which is a bounded linear operator, and it's the unique one satisfying that identity up there for A star. Now, note that for all u, v in H, if I look at A star u inner product v, this is equal to the complex conjugate of v inner product A star u, which is equal to, by the definition of A star, the complex conjugate of A v inner product u. And again, applying the complex conjugate switches the order of the entries, so this is equal to u inner product A v. So what have we shown? We've shown that A star u inner product v is equal to u inner product A v. And recall, the adjoint of the adjoint is the operator satisfying A star u inner product v equals u inner product with A star star v. And we've shown that this quantity is equal to u inner product A v for all u and v.
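Since, for a matrix, the operator norm is the largest singular value, which a matrix shares with its adjoint, the equality of norms can be illustrated numerically. This is a hedged sketch: the power iteration on the Gram matrix and the 2-by-3 matrix below are my choices, not the lecture's.

```python
import math, random

random.seed(0)  # deterministic start vector

A = [[2.0, -1.0, 0.5],
     [0.0, 3.0, 1.0]]   # made-up example matrix

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def op_norm(M, iters=500):
    # sqrt of the top eigenvalue of M^T M, via power iteration
    Mt = transpose(M)
    x = [random.random() + 0.1 for _ in range(len(M[0]))]
    for _ in range(iters):
        y = matvec(Mt, matvec(M, x))
        s = math.sqrt(sum(t * t for t in y))
        x = [t / s for t in y]
    y = matvec(M, x)
    return math.sqrt(sum(t * t for t in y))

# ||A*|| = ||A||: for a real matrix the adjoint is the transpose
assert abs(op_norm(A) - op_norm(transpose(A))) < 1e-8
print("||A|| = ||A*|| =", op_norm(A))
```

Both computations converge to the same largest singular value, matching the abstract equality just proved.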
And therefore, the adjoint of the adjoint is equal to the operator A again. Thus, by the previous argument, applied now with A replaced by A star: the norm of A -- which is equal to the norm of the adjoint of the adjoint, because that operator is equal to A -- is, since the norm of an adjoint is always less than or equal to the norm of the thing you're taking the adjoint of, less than or equal to the norm of A star. So we have this inequality, and we have the earlier one, which together imply that the norm of the adjoint is equal to the norm of the operator. So, what is this creature in practice? Let's start with the simplest example, which is when you're on C n or R n. So suppose now we have a matrix: u is equal to u1 through u n, and this is in C n, and A applied to u, if I want the i-th coordinate of that, is given by the sum over j equals 1 to n of A i j u j, where the A i j are just some fixed complex numbers. So A is just a matrix -- a linear transformation on C n is always represented by a matrix. And here, I'm writing u in terms of the standard basis vectors 1, 0, and so on. Then, to determine what the adjoint is, we figure out which operator satisfies the identity the adjoint has to satisfy. So if we take A u and take its inner product with v, we want to be able to write this -- let me write this in a box -- as u inner product with some operator B applied to v, and then that B would be the adjoint. So if we write A u inner product v, this is equal to the sum over i equals 1 to n of the i-th entry of A u, times v i complex conjugate. And this is, by definition, equal to -- I'm just going to write it as the sum over i and j, both going from 1 to n -- of A i j, u j, v i complex conjugate.
And if I now switch this over and sum first with respect to i and then with respect to j, I can write this as the sum over j of u j, times the complex conjugate of the sum over i equals 1 to n of A i j complex conjugate, v i. And so this tells me that this is equal to the sum over j equals 1 to n of u j times the complex conjugate of the j-th entry of what I'll call the adjoint applied to v, where -- how does the adjoint operate on an element v? The i-th entry is equal to the sum from j equals 1 to n of A j i complex conjugate, v j. So for matrices -- again, this should be review -- if A is represented by a matrix, then the adjoint is also representable by a matrix: the matrix for the adjoint, A star i j, is equal to A j i complex conjugate. So we did this for C n. There's no reason we can't do it for, let's say, little l2 now, and make it a little more interesting. So suppose now I have infinitely many numbers A i j -- a doubly-indexed sequence of complex numbers -- such that the sum over all i and j of A i j squared, which is the limit as n goes to infinity of the sum for i from 1 to n and j from 1 to n, is finite. Now let's define A from little l2 to little l2 via: the i-th entry of A applied to u is the sum over j equals 1 to infinity of A i j u j, for u in little l2. Now, by the Cauchy-Schwarz inequality, you can check that if this condition is satisfied, then this is a bounded linear operator: the absolute value of the i-th entry is less than or equal to the square root of the sum over j of A i j squared, times the little l2 norm of u. And therefore I can sum the squares of the entries, using the fact that this double sum is finite. Don't worry about the fact that I'm summing this double sum in a certain way, by j going from 1 to n, i going from 1 to n.
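The finite-dimensional conclusion just derived -- the adjoint is the conjugate transpose, with the inner product linear in the first slot as in the lecture -- can be checked directly. The specific 2-by-2 matrix and vectors below are made up for illustration.

```python
# made-up complex 2x2 example
A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def adjoint(M):
    # (A*)_{ij} = conjugate of A_{ji}, as derived above
    n, m = len(M), len(M[0])
    return [[M[j][i].conjugate() for j in range(n)] for i in range(m)]

def inner(x, y):
    # linear in the first entry, conjugate-linear in the second
    return sum(xi * yi.conjugate() for xi, yi in zip(x, y))

u = [1 - 1j, 2 + 3j]
v = [4 + 0j, -1 + 2j]
lhs = inner(matvec(A, u), v)           # <Au, v>
rhs = inner(u, matvec(adjoint(A), v))  # <u, A* v>
assert abs(lhs - rhs) < 1e-12
print("adjoint identity holds:", lhs)
```

The two sides agree to machine precision, which is exactly the defining identity of the adjoint.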
In fact, if it's absolutely summable -- which it is, since it's a sum of non-negative numbers -- this sum doesn't depend on how I'm summing it. So then this is a bounded linear operator. And what's its adjoint? Its adjoint is going to be of the same flavor as what we got in the finite dimensional case. For all u, v in little l2, if I take the inner product of A applied to u with v -- the same proof applies, essentially -- this is the sum over i and j of A i j, u j, v i complex conjugate, which is equal to the sum over i of u i times the complex conjugate of the sum over j of A j i complex conjugate, v j, which is equal to u inner product with A star v, where the i-th entry of A star v is defined to be the sum from j equals 1 to infinity of A j i complex conjugate, v j. So just like in the finite dimensional case: I flip the indices of this A -- a doubly infinite matrix, if you want to think of it that way -- and take the complex conjugate, the same as in the finite dimensional case. Let's do one last example. I'm not going to go through the computations again, which are very similar. Here we've been working with sums -- and if you can do something with sums, you should consider whether or not you can do it for integrals. So let's suppose K is a continuous function on 0, 1 cross 0, 1. And we define a map A -- which we can show goes from L2 to L2 -- via A f of x is equal to the integral from 0 to 1 of K of x, y, f of y, dy. In fact, for each f in L2, A f is a continuous function -- it's more than just in L2, it's in fact a continuous function. But continuous functions on 0, 1 are elements of L2 on 0, 1.
Then you can check, just as we've done in these two other examples, that the adjoint -- let's apply it to a different function, say the adjoint applied to g -- is equal to the integral from 0 to 1 of K of y, x, complex conjugate, g of y, dy. So again, it's like flipping the arguments and taking the complex conjugate. I said at the end of last class that you can tell something about the solvability of equations based on properties of the adjoint. We'll see another way this is manifested later; this is the simplest way. So suppose H is a Hilbert space, and A from H to H is a bounded linear operator. Then -- I used Ran in the assignments, so I'll use it this way here -- the orthogonal complement of the range of A is equal to the null space of the adjoint. The range, you can show, is a subspace of H; in fact, I think that was in one of the earlier assignments. Here, let me recall that the range of an operator B is the set of all vectors B u where u is in H, and the null space of B -- I might have labeled it as the kernel of B, but both mean the same set -- is the set of all u in H such that B u equals 0. So in particular, if we know that we have an operator A such that the range is a closed subspace, then being able to solve the equation A u equals v for every v is equivalent to showing that the null space of the adjoint is simply 0. So let me make that a remark. Suppose that the range of A is closed. Then A from H to H is surjective if and only if the adjoint is injective -- meaning the only thing that gets sent to 0 is the zero vector. First off, it's easy to see that if the range of A is equal to H, then the orthogonal complement of H is the zero vector, so the null space of the adjoint is just the set containing the zero vector.
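The integral-operator version of the adjoint can also be verified numerically. This is my own hedged sketch: the kernel K and the test functions f, g below are arbitrary choices, and both inner products are approximated with the same midpoint rule.

```python
import math

# made-up real, continuous kernel on [0,1] x [0,1]
def K(x, y):
    return x * y + math.cos(x - y)

def apply_kernel(ker, f, x, n=400):
    # (Af)(x) = integral_0^1 ker(x, y) f(y) dy, midpoint rule
    h = 1.0 / n
    return sum(ker(x, (j + 0.5) * h) * f((j + 0.5) * h) for j in range(n)) * h

def inner(f, g, n=400):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

f = lambda y: y * (1 - y)
g = lambda y: math.sin(math.pi * y)
Kstar = lambda x, y: K(y, x)     # real kernel, so the conjugate is a no-op
lhs = inner(lambda x: apply_kernel(K, f, x), g)       # <Af, g>
rhs = inner(f, lambda x: apply_kernel(Kstar, g, x))   # <f, A*g>
assert abs(lhs - rhs) < 1e-9
print("<Af, g> =", lhs)
```

Both sides reduce to the same double sum with the roles of x and y swapped, which mirrors the flip-the-arguments rule for the adjoint kernel.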
Now, if the null space of the adjoint is just the zero vector, then taking the orthogonal complements of both sides, and using the fact that the range is closed -- so that the orthogonal complement of the orthogonal complement gives me back the set -- I conclude that the range of A equals the orthogonal complement of the zero vector, which is H, the entire space. So if we have an operator which has closed range, then being surjective is equivalent to the adjoint being injective. The proof of the theorem itself is pretty easy. v is in the null space of A star if and only if A star v equals 0, which holds if and only if u inner product A star v equals 0 for all u in H, since the only vector orthogonal to everything is 0. And this is equivalent to, by the property of the adjoint -- u inner product A star v is equal to A u inner product v -- A u inner product v equals 0 for all u in H. So v is a fixed thing, and this says that v is orthogonal to all the elements of H of the form A times u. But that's just the range of A. So this is equivalent to: v is in the orthogonal complement of the range of A. So we're soon going to get into the realm of more refined things we can say about solving certain equations involving operators, meaning when can you solve A u equals v, and so on. Now, the best theorem about that you should know, or maybe heard at some point from linear algebra, is the rank-nullity theorem, which says -- let's say you're going between spaces of the same finite dimension -- the dimension of the range plus the dimension of the null space equals the dimension of the whole space. So what this says is that, in essence, in order to be able to solve a given equation, your input has to satisfy finitely many linear relations, and the solution of that equation is unique up to a finite dimensional subspace.
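The relation just proved, that the orthogonal complement of the range equals the null space of the adjoint, is easy to see concretely in finite dimensions. A hedged illustration of mine: for a real matrix the adjoint is the transpose, and the rank-2 matrix below is made up so that its transpose has an obvious null vector.

```python
# rank-2 example: third row = first row + second row
A = [[1, 2, 3],
     [4, 5, 6],
     [5, 7, 9]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v = [1, 1, -1]                          # candidate element of null(A^T)
assert matvec(transpose(A), v) == [0, 0, 0]
for u in ([1, 2, 3], [-4, 0, 5], [2, -7, 1]):
    assert dot(matvec(A, u), v) == 0    # v is orthogonal to every Au
print("null(A^T) is orthogonal to range(A)")
```

Every vector killed by the transpose is orthogonal to everything in the range, exactly as the theorem says.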
Now, this is the best thing you can say: existence is predicated upon your data satisfying finitely many linear conditions, and you have uniqueness up to a finite dimensional subspace, meaning the null space. Now, this is from finite dimensional linear algebra. It would be great if we could do the same thing in infinite dimensions, not just because it would be fun, but because in the end we need to do these things-- or I should say need to, but not in the sense that an airplane wing is going to fall off if we don't. In order to learn more about certain problems that arise, it would be great to be able to do these things. And we'll be able to do them for certain operators-- these aren't the only operators, but for certain operators that are close to being matrices in a certain sense. We'll get to that in a minute. Now, these operators whose solvability properties we will eventually study have a very special property in how they act on bounded sequences. Something that you take for granted in R n is that if I have a linear operator A and a bounded sequence of vectors, then A maps that to a bounded sequence of vectors, because a matrix is a bounded linear operator: it takes bounded sets to bounded sets. And therefore, by the Heine-Borel theorem, there will be a subsequence of the images of these vectors which converges. So there's some compactness hidden in what you're doing on R n and C n. And why am I rambling right now? I'm getting to the final point, that we need to study a little bit about compactness in Hilbert spaces to eventually get to the place where we can say more about being able to solve equations involving bounded linear operators, now in infinite dimensions. So with that rambling build-up, hopefully I've convinced you that we should study a little bit about what it means for sets to be compact in a Hilbert space.
So let's look at compactness in Hilbert space. Let me recall for you what it means for a set to be compact in a metric space. If X is a metric space, we say a subset K of X is compact if every sequence of elements in K has a subsequence converging to an element in K. So a set is compact if every sequence of elements in the set has a subsequence which converges to an element in the set. The simplest types of compact sets are finite sets-- this just follows from the pigeonhole principle, basically. So the simplest examples are finite subsets of any metric space. Now, we have this very cool theorem from Intro Analysis, which goes by the name of Heine-Borel, that says-- you can replace R by R n or C n, but this follows from the one-dimensional case, essentially-- a subset K of R is compact if and only if K is a closed subset of R and bounded. So for example, the closed and bounded intervals [a, b]-- those are all compact sets. So is the set consisting of 1 over n, for n a natural number, union the element 0. That's also a compact set: it's closed and it's bounded. Now, this is a theorem about compact subsets of R. One can build off of the proof to get the result for R n and C n-- the metric properties of C n are the same as those of R 2n, so let's just say R n-- that a subset of R n is compact if and only if it's closed and bounded. Now, does that hold for arbitrary metric spaces? No. Does it hold for Banach spaces? No. You did an example on one of the assignments that proved that the closed unit ball in little l p is not compact. What about if we specialize to Hilbert spaces? The answer is still no. So let's make this a non-example, since we did examples a minute ago-- all of those, as subsets of R, are compact. So suppose H is an infinite-dimensional Hilbert space. Then the closed unit ball B equals the set of u in H such that the norm of u is less than or equal to 1.
This is a closed and bounded set-- you can check that-- but it is not compact. So suppose H is an infinite-dimensional Hilbert space; in fact, it doesn't have to be separable. Then the closed unit ball is not compact. So why is this? In any infinite-dimensional Hilbert space we can find a countably infinite orthonormal subset, e n for n from 1 to infinity. In the separable case, we could choose it to be an orthonormal basis, but that really doesn't matter. Why can we find one? Because we can find a countable set of linearly independent vectors in an infinite-dimensional vector space, and then we can make them orthonormal by applying the Gram-Schmidt process to that collection, coming up with a countably infinite orthonormal subset. Then, for all n not equal to k, we get that the norm of e n minus e k, squared-- I can just expand what this is-- is equal to the norm of e n squared plus the norm of e k squared minus 2 times the real part of the inner product of e n with e k. And since they're orthogonal, that inner product is 0. And since they're normalized to 1, this is equal to 2. So for any two distinct elements in the sequence, the distance between the two of them is the square root of 2. So there's no way the sequence of these orthonormal vectors can have a convergent subsequence, because a convergent subsequence would have to be Cauchy, so that the difference between vectors has to get small. But it's never small.
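The algebra in that computation can be sanity-checked with the standard basis of R^10 standing in for the orthonormal sequence (finite dimensions, so this only illustrates the identity, not non-compactness itself):

```python
import numpy as np

E = np.eye(10)   # rows are orthonormal vectors e_1, ..., e_10
for n in range(10):
    for k in range(10):
        if n != k:
            # ||e_n - e_k||^2 = ||e_n||^2 + ||e_k||^2 - 2 Re<e_n, e_k>
            #                 = 1 + 1 - 0 = 2
            assert np.isclose(np.linalg.norm(E[n] - E[k]) ** 2, 2.0)
            assert np.isclose(np.linalg.norm(E[n] - E[k]), np.sqrt(2))
```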
The distance squared between any two distinct terms is always equal to 2. So this is a closed and bounded subset which is not compact: this sequence has no convergent subsequence. So the question is, what's the additional condition I need to conclude that a subset of a metric space is compact? From Real Analysis 18.100, every compact set has to be closed and bounded: one implication is that if something is compact, it's closed and it's bounded. So being closed and bounded is a necessity. What additional condition guarantees that the set is compact? Maybe not an if and only if, like we have for Heine-Borel, but at least something simple to verify so that the subset of the Hilbert space is compact. Now, if you didn't see this in 18.100, that's fine. But this question could have already been asked in a different context in 18.100, and at least when I taught 18.100B, it was: you can ask the same question about subsets of continuous functions. The definition of a compact set is with respect to a metric space-- it doesn't have to be an inner product space. So the space of continuous functions on, say, [0, 1]-- that's a perfectly good metric space with the infinity norm. You could ask, what are the conditions on a set to ensure that the set is compact? And the three conditions are closed, bounded-- because these are necessary if the set is to be compact-- and what's called "equicontinuous." So for subsets of the space of continuous functions, if the subset is closed, bounded, and equicontinuous, then the subset is compact. This is the famous Arzela-Ascoli theorem. And that extra condition, equicontinuity, is in some sense what allows you to reduce your problem of showing your set is compact to the finite-dimensional case. It takes care of the infinite part, in a sense.
That's not very specific, but it's needed to control all but finitely many things, in some sense. Equicontinuity in Arzela-Ascoli is what helps you control all but finitely many things. Now, there is a similar theorem that you can prove in Hilbert spaces. Part of the theorem will be a definition, unfortunately. Well, we're not going to prove the theorem I want to prove just yet-- I got ahead of myself by one theorem-- so we'll first motivate a certain condition. Let me introduce the definition. So let H be a Hilbert space. The new bit of terminology is "equismall tails": a subset K of H has equismall tails with respect to a countable orthonormal subset e n if the following holds. By Bessel's inequality, the sum of the squares of the Fourier coefficients-- the inner products of a fixed vector with the e k-- is always bounded by the norm of the vector squared, and therefore that series always converges. Equismall tails means that the tail end of that series converges uniformly over the set K. In other words, for all epsilon positive, there exists a natural number N such that for all v in the set K, if I look at the tail-- the sum over k bigger than N of the inner product of v with e k, modulus squared-- this is less than epsilon squared. So given epsilon, I can always choose an N, independent of the element of K, so that the tail end of the Fourier series is small. I leave it to you to verify that if K is simply a finite set, then K has equismall tails with respect to any countable orthonormal subset. So what's the motivation for this definition? Consider the next simplest type of set which is compact. Like I said, any finite subset of a metric space is compact. Now, I'll give an example of another, more interesting compact set.
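Before the example, the tail-choosing step in the definition can be made concrete for a single fixed vector. Assume hypothetical Fourier coefficients c_k = 1/k (my choice; the full series of squares converges, so tails shrink); the code finds an N making the tail smaller than epsilon squared. The series is truncated at 10^5 terms, so the computed tail slightly underestimates the true one:

```python
import numpy as np

coeffs = 1.0 / np.arange(1, 100001)      # c_k = 1/k, k = 1, ..., 1e5

def tail(N):
    """sum_{k > N} |c_k|^2, truncated at 1e5 terms."""
    return float(np.sum(coeffs[N:] ** 2))

eps = 0.1
# smallest N (in this truncation) with tail(N) < eps^2; roughly N ~ 1/eps^2
N = next(n for n in range(1, 100000) if tail(n) < eps ** 2)
assert tail(N) < eps ** 2
assert tail(N - 1) >= eps ** 2           # N really was the threshold
```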
But that compact set also satisfies this property, that it has equismall tails with respect to any countable orthonormal subset. And so then, we should hope that if a set is closed, bounded, and has equismall tails with respect to an orthonormal basis, then that set is compact. And that's eventually what we'll prove, but I don't think we'll get to it today. One more bit of motivation on why this might be the right extra condition to add to closed and bounded to produce compact sets is provided by the following. So again, let H be a Hilbert space, let v n be a convergent sequence in H with limit v, and let e k be some arbitrary countable orthonormal subset. Then two things hold. One is that the set K, consisting of the elements v n union the limit v, is compact. The proof is given in Melrose's notes, and I'm going to leave it to you to look up or work your way through; it's not that difficult. And two, K has equismall tails with respect to e k. So this subset consisting of a convergent sequence along with its limit is compact, and it has equismall tails with respect to any countable orthonormal subset. Like I said, I'm going to leave one for you. Two we will prove. How much time do I have? Not much, but we should be able to finish this. We have to verify the definition. So let epsilon be positive. What's the thinking on why we expect this set to have equismall tails? I have to check that I can find a large enough integer so that all of these tail sums are small. Now, the point is that the v n's are very close to v, at least for n very large. And for the fixed vector v, I can always choose a capital N so that its tail is small. That N will then take care of all but finitely many elements of the sequence, and I just need to enlarge N to also handle the finitely many elements of the sequence which are not close to v.
That's essentially the argument. So since v n converges to v, there exists a natural number M such that for all n bigger than or equal to M, the norm of v n minus v is less than epsilon over 2. Now, choose a natural number N so large that the sum over k bigger than N of the inner product of v with e k, squared, is less than epsilon squared over 4-- here v is fixed, the whole sum converges by Bessel's inequality, and therefore we can always choose an N making the tail small. And we can do the same for the first M minus 1 elements of the sequence: for v 1, I can find an N 1 so that its tail is less than epsilon squared over 4, and likewise for v 2, v 3, all the way up to v of M minus 1, I can find a capital N sub n. I then take capital N to be the maximum of those finitely many capital N sub n's along with the N that I need for v. So that's why I can find such a capital N: I just have finitely many things I need to make small, which I individually can. Now, I claim this capital N works. I have to show that any element of K has a small tail. By how I've chosen N, the sum over k bigger than N of the inner product of v with e k, squared, is less than epsilon squared over 4, which is less than epsilon squared; and for all n between 1 and M minus 1, the same bound holds for v n. So I just need to check that the tail is small when n is bigger than or equal to M. If n is bigger than or equal to M, I want to show that the sum over k bigger than N of the inner product of v n with e k, squared, raised to the 1/2 power, is less than epsilon. This is equal to the sum over k bigger than N of the inner product of v n minus v plus v with e k, squared, to the 1/2 power. And this is the little l2 norm of a sum of two sequences indexed by k.
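The estimate is finished off with the triangle (Minkowski) inequality in little l2, applied to those two tail sequences. A quick numerical check on random real sequences (a sketch of the inequality, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(1000)      # stands in for the coefficients <v_n - v, e_k>
b = rng.standard_normal(1000)      # stands in for the coefficients <v, e_k>

# (sum |a_k + b_k|^2)^{1/2} <= (sum |a_k|^2)^{1/2} + (sum |b_k|^2)^{1/2}
lhs = np.sqrt(np.sum((a + b) ** 2))
rhs = np.sqrt(np.sum(a ** 2)) + np.sqrt(np.sum(b ** 2))
assert lhs <= rhs + 1e-12
```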
So by the triangle inequality for sequences in little l2, this is less than or equal to the sum over k bigger than N of the inner product of v n minus v with e k, squared, to the 1/2 power, plus the sum over k bigger than N of the inner product of v with e k, squared, to the 1/2 power. Now, the second quantity I already have control over: it's less than epsilon over 2, since I'm taking the 1/2 power of something less than epsilon squared over 4. And I can bound the first by Bessel's inequality: it is less than or equal to the norm of v n minus v. So altogether this is less than or equal to the norm of v n minus v, plus epsilon over 2. And now remember how capital M was chosen: to ensure that the norm of v n minus v is less than epsilon over 2, which we can do since the v n's are converging to v. So this is less than epsilon over 2 plus epsilon over 2 equals epsilon, proving that this set, consisting of the elements of a convergent sequence along with the limit, has equismall tails with respect to an arbitrary countable orthonormal subset. So next time, we will prove that if we have a subset of a separable Hilbert space which is closed, bounded, and has equismall tails with respect to an orthonormal basis-- which exists because it's a separable Hilbert space-- then the set is compact. And then, we'll rephrase that in a way that doesn't involve Hilbert spaces, and go from there, and start looking at some of these operators I mentioned, which are close to matrices. All right, we'll stop there.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 9: Lebesgue Measurable Functions
CASEY RODRIGUEZ: All right. So last lecture, we concluded our discussion about measurable sets. And remember, our original motivation was that we're trying to build an integral that somehow surpasses the Riemann integral, in that, hopefully, this larger class of functions that are integrable with respect to this integral forms a Banach space, as opposed to the Riemann integrable functions. And we started off by asking the question, how would we integrate the simplest types of functions, which are just one on a set and zero off of it? And that led us to how to define measure. And so then we defined Lebesgue measurable sets, proved that they form this special type of collection of sets called a sigma algebra, and that they contain a lot of interesting sets: open sets, closed sets, countable unions of closed sets, which are not necessarily closed, countable intersections of open sets, which are not necessarily open, and so on. Now, not every set is Lebesgue measurable. We will not go through the construction of a nonmeasurable set. That construction also provides a proof that we could not build a measure defined on all subsets of the real numbers having these three properties: it is translation invariant, the measure of an interval is the length of the interval, and the measure of a disjoint union is the sum of the measures. But this is a functional analysis class. Our goal is to build up a Banach space of integrable functions, so we needed to define a better notion of integral than Riemann's. I imagine in a measure theory class you would see such a construction, but we won't cover that here. So in some sense, having built up measure, we know, roughly speaking, how we would integrate the simplest type of function, which is one on a measurable set and zero off of it.
Now, what would be a method of trying to integrate more general functions? So now we're going to be talking about measurable functions. To motivate the definition of a measurable function, let me give a few minutes of why we introduce this definition, what's really behind it. So historically, when Lebesgue thought of his theory of integration-- suppose we have a function on a closed interval [a, b], and let me draw it; let's say it's just increasing. What Riemann does, of course, is partition up [a, b]. And then you form, essentially, boxes that have width given by how you've chopped up [a, b] and height given by the function f evaluated at some point in those subintervals. You form a Riemann sum, you take a limit, and that gives you the Riemann integral. And what Lebesgue thought of doing was, instead of chopping up the domain, chopping up the range. So let's imagine-- this function only reaches so high, and let's say we're just interested in integrating non-negative things-- chop up the range into values y 0, y 1, and so on up to some finite point. And now how do you form the boxes, if you like, whose size you're going to take? Well, you look at the piece of f that lies between a given pair of partition points. In this picture, we'll call this point c: the portion of the domain where f lies between y i minus 1 and y i is the set f inverse of the interval from y i minus 1 to y i, which here is the interval from a to c. And you could build up a box by taking its height to be the lower value y i minus 1 and its width given by the interval from a to c. And I can actually do this because this is an increasing function, so this will be an actual box. Now, what am I getting at? We would like to define the integral of f over [a, b]. So this is all motivation, informal discussion.
Don't take this to heart too much. We would like to define the integral of f over [a, b] to be somehow the limit-- and I'm not even going to write what goes to zero or infinity; you can think of the partitions getting smaller-- of a sum from i equals 1 to n of y i minus 1, the lower part, times the length of the interval given by f inverse of [y i minus 1, y i]. OK? So this would be an analogous procedure to what Riemann does on the domain, except now focusing on the range. Now, I wrote length here because this function I drew is increasing, so the inverse image of one of these intervals is going to be another interval, and the length is meaningful. But if f is more general, f inverse of [y i minus 1, y i] need not be an interval, and so taking its length would not be a meaningful thing. But remember, we now have this notion of measure, which should be the substitute for length for more general sets, measurable sets. So this procedure could still work if, instead of requiring that this inverse image is an interval, we require that it is a Lebesgue measurable set. And then perhaps one could define the integral in this way, although these things would no longer be boxes. So that should motivate why perhaps we should look at functions such that the inverse image of closed intervals is measurable, so that we can take its measure and maybe do this procedure of defining an integral. Now, all of that is a bit-- again, this is all informal discussion just meant to be motivation. In fact, we're not going to define the Lebesgue integral in this way, because this way of doing it suffers from the problem that it's not clear the result is independent of how I partitioned up the range. Maybe if I take a limit as the partitions get smaller along some sequence of partitions, I get a different number from another sequence, so that would have to be checked.
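To see the range-partition recipe produce an actual number, take the increasing function f(x) = x squared on [0, 1] (my example, not the lecturer's). Then the preimage of [y_{i-1}, y_i] is the interval [sqrt(y_{i-1}), sqrt(y_i)], its measure is just its length, and the sum should approach the Riemann value of the integral of x squared over [0, 1], namely 1/3:

```python
import numpy as np

n = 10_000
y = np.linspace(0.0, 1.0, n + 1)            # partition of the RANGE [0, 1]
# f(x) = x^2 is increasing, so the preimage of [y_{i-1}, y_i] is the
# interval [sqrt(y_{i-1}), sqrt(y_i)], whose measure is its length.
preimage_measure = np.sqrt(y[1:]) - np.sqrt(y[:-1])
# sum_i y_{i-1} * m( f^{-1}([y_{i-1}, y_i]) )
lebesgue_sum = float(np.sum(y[:-1] * preimage_measure))

assert abs(lebesgue_sum - 1 / 3) < 1e-3     # matches the Riemann value 1/3
```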
But as we'll see when we do define the Lebesgue integral, you can compute this number by essentially the procedure I gave here, where you chop up the range of f and take approximations to f that are more general than just step functions-- a step function being a function that takes a given value on an interval and is zero outside. And that'll be a way of seeing that this motivation connects to how we actually define the Lebesgue integral. But again, the whole point of what I'm saying-- and at least historically this is why one considers these things-- is that we should consider functions such that the inverse image of closed intervals is measurable. And that's the motivation for measurable functions. Now, as we saw when we were discussing which sets are measurable, we didn't exactly conclude directly that closed intervals were measurable. We started with something a little more basic, the half infinite intervals, proved those were measurable, and then concluded that closed intervals are measurable-- any interval is measurable. And so when we actually define measurable functions, we'll be using as input these half infinite intervals being measurable, rather than closed intervals. So let's get to it. Let's define measurable function. Now, this just comes with the territory: we're going to be considering extended real valued functions in what we do. So I shouldn't have written real numbers; I should have written extended real numbers. What does this mean? This just means the set of real numbers along with plus and minus infinity. So when I write the interval minus infinity, infinity, this is R union the two symbols plus and minus infinity. Now, we're going to have expressions where we allow a function to take on the value plus infinity or minus infinity, so we should set down what we mean when we add or multiply some of these things together.
So sums are defined as x plus or minus infinity equals plus or minus infinity for all x in R. So if I have two functions whose values could be in the extended real numbers, and one is finite and the other is infinite, then by convention their sum is defined to be that infinity. But I'm not allowed to take infinity minus infinity, or infinity plus minus infinity. And products are defined with the convention that zero times plus or minus infinity equals zero. We won't come up against why we would need such expressions until we discuss the integral. But you could have a function that's, let's say, identically equal to plus infinity; if I multiply that function by zero, then I should get zero. This has nothing to do with limiting processes, though. These are purely algebraic expressions we'll be dealing with. I'm not saying that everything you learned in 18.100 Real Analysis about being careful when you have infinity over infinity or zero times infinity should be thrown away. I'm just saying that when we have certain algebraic expressions, these are the conventions that we adopt. And x times plus or minus infinity equals plus or minus infinity for all x in R take away zero. OK. And let me just recall for you-- we'll be making some limiting statements about certain numbers approaching other numbers. So what would it mean for a sequence of numbers to approach plus infinity or minus infinity? I want you to recall that a sequence a n of real numbers converges to infinity if, for every R positive, there exists a natural number N such that for all n bigger than or equal to N, a sub n is bigger than R. And you can make a similar definition for minus infinity. OK. And I mean, I've been using something equals infinity already when we've been discussing measure, the outer measure of something being equal to infinity.
In what follows, we'll have expressions where we allow something to equal infinity, and we'll have algebraic expressions of this type, so I just need to set down the conventions: a real number plus or minus infinity is by definition equal to plus or minus infinity, zero times something equal to plus or minus infinity is by convention equal to zero, and so on. All right. So measurable functions, motivated by the discussion before, should be those functions such that the inverse image of closed intervals is measurable. That's almost how we'll define measurable functions. The equivalent way, which is a little bit easier to work with, is the following. So let E be a measurable set and f a function from E to the extended reals. We say f is Lebesgue measurable-- that's new terminology-- if, for all alpha in R, the inverse image of the half infinite interval (alpha, infinity] is measurable. We had some notation last time, the script M being the collection of measurable sets; i.e., it is Lebesgue measurable. As we'll see in a minute, this is really an equivalent definition to requiring that the inverse image of a closed and bounded interval is measurable-- or at least we'll see one direction of why that's equivalent, and then in your own time, you can figure out why it's actually equivalent. So now, why not include alpha? Why do I have to look at the inverse image of this half open interval? And why am I going from alpha to infinity and not, say, from minus infinity to alpha, or including alpha? Why this specific kind? What I want to first tell you is that looking at those other types of sets and seeing if their inverse images are measurable is equivalent to the definition I'm giving here. So let's take a function from a measurable set E, a subset of R, to the extended reals. Then the following are equivalent.
One is the property in the definition of being measurable: for all alpha in R, f inverse of (alpha, infinity] is measurable. Two: for all alpha in R, f inverse of [alpha, infinity], now including alpha, is measurable. Three: for all alpha in R, the inverse image of [minus infinity, alpha) is measurable. And the last condition: for all alpha in R, the inverse image of [minus infinity, alpha], now including alpha, is measurable. So to check that a function is Lebesgue measurable, you don't have to verify the property in the definition itself; it suffices to check it on any one of these other types of sets. If you can prove that for all alpha the inverse images of one of these types of intervals are measurable, then that's equivalent to saying that f is Lebesgue measurable. OK. Now, the proof is not hard, based on what we know about Lebesgue measurable sets, which, again, we proved last time form a sigma algebra: they're closed under taking countable unions, intersections, and complements, and so on. I'm saying they're all equivalent, which means they should all imply each other. So let's first prove that one implies two. Suppose one holds; then for all alpha in R, I want to check that two holds. I can write [alpha, infinity] as the intersection over natural numbers n of (alpha minus 1 over n, infinity]. The great thing about taking inverse images is that they respect all operations you can do on sets, so the inverse image of this intersection is the intersection of the inverse images. And if, by one, each of these preimages is measurable, then this is a countable intersection of Lebesgue measurable sets. Again, the Lebesgue measurable sets I'm talking about are the inverse images of these intervals, not the intervals themselves.
Each of those inverse images is a Lebesgue measurable set, so their intersection is Lebesgue measurable. And therefore, for every alpha in R, the inverse image of these closed intervals is measurable. And how do I show two implies one? By a similar game. Suppose two holds. Then for all alpha in R, I can write the half open interval (alpha, infinity] as the union over n of [alpha plus 1 over n, infinity]. And therefore the inverse image of (alpha, infinity] is equal to the union of the inverse images of these sets. And if I'm assuming two, then each of these, for each n, is a Lebesgue measurable set. We know countable unions of Lebesgue measurable sets are again Lebesgue measurable, so this is Lebesgue measurable. So that proves one and two are equivalent. And we get three and four as follows. Two is equivalent to three simply because, for all alpha in R, the interval [minus infinity, alpha) is equal to the complement of [alpha, infinity]. So when I take inverse images, knowing that the complement of a Lebesgue measurable set is Lebesgue measurable, I get that two is equivalent to three. And similarly, one is equivalent to four, since for all alpha in R, the last type of set, [minus infinity, alpha], is equal to the complement of (alpha, infinity]. If I take the inverse image of one of these sets, I get the complement of the inverse image of the other, and if that inverse image is measurable, then its complement is measurable. So it's a simple proof, just based on the fact that Lebesgue measurable sets are closed under taking countable unions, intersections, and complements.
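The set identities driving the four equivalences, collected in display form (this just transcribes what the lecture writes on the board):

```latex
[\alpha,\infty] = \bigcap_{n=1}^{\infty}\Bigl(\alpha - \tfrac{1}{n},\,\infty\Bigr],
\qquad
(\alpha,\infty] = \bigcup_{n=1}^{\infty}\Bigl[\alpha + \tfrac{1}{n},\,\infty\Bigr],
\qquad
[-\infty,\alpha) = [\alpha,\infty]^{c},
\qquad
[-\infty,\alpha] = (\alpha,\infty]^{c},
```
combined with the fact that preimages respect all set operations:
```latex
f^{-1}\Bigl(\bigcap_n A_n\Bigr) = \bigcap_n f^{-1}(A_n),
\qquad
f^{-1}\Bigl(\bigcup_n A_n\Bigr) = \bigcup_n f^{-1}(A_n),
\qquad
f^{-1}(A^{c}) = E \setminus f^{-1}(A).
```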
So then we get the following theorem: if E, a subset of R, is measurable, and f from E to R is a measurable function-- in the future, I will probably just say f is measurable, not a measurable function-- then for all F in the Borel sigma algebra, f inverse of F is measurable. The inverse image of F is measurable. OK. So what's the proof? f measurable implies that, for all a less than b, the open interval (a, b) is equal to (minus infinity, b) intersect (a, infinity), so f inverse of (a, b) is equal to the inverse image of this intersection, which is equal to f inverse of (minus infinity, b) intersect f inverse of (a, infinity). And if I'm assuming f is measurable, then each of these pre-images is measurable by the previous theorem I just proved, and therefore the intersection is measurable. So I've shown that the inverse images of open intervals are measurable. And similar to how we concluded that open sets are measurable, we use the fact, which you proved in assignment 3, that every open set can be written as a countable union of disjoint open intervals; thus f inverse of U is measurable for all open subsets U of R. OK. And consider the set-- let's call it A-- of all sets F such that f inverse of F is Lebesgue measurable. You're also proving in the assignment that this collection of sets is a sigma algebra. Since we've now proved that all open sets are in A, this implies that the Borel sigma algebra, which is, again, the smallest sigma algebra containing all open sets, is a subset of A, which is the statement of the theorem. All right. And let me just add on another thing here. So now we know that if we have a measurable function, the inverse images of Borel sets-- sets that belong to the Borel sigma algebra-- are measurable. This includes all open sets.
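The fact that preimages respect every set operation, which these proofs lean on repeatedly, is easy to check on a toy function (a hypothetical finite f of mine, given by a lookup table):

```python
# f : {1, 2, 3, 4} -> {'a', 'b', 'c'}, given by a dict
f = {1: 'a', 2: 'b', 3: 'a', 4: 'c'}

def preimage(S):
    """f^{-1}(S) = {x : f(x) in S}."""
    return {x for x, y in f.items() if y in S}

A, B = {'a', 'b'}, {'b', 'c'}
assert preimage(A & B) == preimage(A) & preimage(B)    # intersections
assert preimage(A | B) == preimage(A) | preimage(B)    # unions
codomain, domain = {'a', 'b', 'c'}, set(f)
assert preimage(codomain - A) == domain - preimage(A)  # complements
```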
What about throwing plus or minus infinity into the mix? So if f goes from a measurable set E to the extended real numbers is measurable, then the sets-- well, let me write it this way. The inverse images of where f is equal to plus infinity or equal to minus infinity, these are measurable as well. And how does one prove that? So proof. Again, we could just use the definition of how this works and the fact that we know Lebesgue measurable sets are closed under taking countable intersections, unions, and complements. We have that f inverse of infinity, this is equal to the intersection over all n of f inverse, the inverse image of n to infinity. And if I'm assuming f is measurable, then each of these is a measurable set. The inverse image of n to infinity is a measurable set. I will often use inverse image or pre-image. These are meant to be taken as saying the same thing. Each of these is a measurable set, so the countable intersection of measurable sets is measurable. So that's measurable. And similarly, if I look at the set where it's equal to minus infinity, this is equal to the intersection of the inverse images of minus infinity to minus n. The theorem we proved at the start-- that for a Lebesgue measurable function the pre-images of sets of this type are Lebesgue measurable-- implies that each of these is Lebesgue measurable, and therefore this is an intersection of Lebesgue measurable sets, over all n, so that's measurable. So if I have an extended real-valued function, then the inverse image of every Borel set-- you can even toss in the two infinities if you like-- the inverse image of those sets is Lebesgue measurable.
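In symbols, the two identities used here:

```latex
f^{-1}(\{+\infty\}) = \bigcap_{n=1}^{\infty} f^{-1}\big((n,\infty]\big),
\qquad
f^{-1}(\{-\infty\}) = \bigcap_{n=1}^{\infty} f^{-1}\big([-\infty,-n)\big).
```

Each set on the right is measurable when f is measurable, and countable intersections of measurable sets are measurable.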
So in particular, we conclude that the inverse images of closed and bounded intervals are Lebesgue measurable for measurable functions. OK. Now what are the simplest types? Again, you see a definition. You should ask for an example. What are the simplest types of measurable functions? So if f from R to R is continuous, then that implies that f is measurable. So in the end when we build our definition of the Lebesgue integral, we should encompass Riemann integration as well. In other words, we should be able to also integrate continuous functions. And later we'll see, and something that we want as well, is that the Lebesgue integral of a continuous function should reduce down to the Riemann integral of a continuous function. So at a minimum, when we're building these concepts, we should check as a kind of sanity check that we are including continuous functions in these possible functions which we will integrate. So if f is continuous, f is measurable. Why is this? This is because for all alpha in R, if I look at the inverse image of alpha to infinity-- so first off, this is an open set. And the inverse image of an open set under a continuous function is open. And therefore it's measurable. Right? OK. So how about a different example? Let's take a measurable subset E of R. Let F be another measurable subset. And define the indicator function chi of F of x to be 1 if x is in F, 0 if x is not in F. Then this function chi F-- now if I think of it as being a function from E to R, this is measurable. Now, why is that? Well, we can just compute. If alpha is in R, if I look at the inverse image of alpha to infinity, this is equal to one of three things. Since chi takes on only the values 1 and 0, if alpha is bigger than or equal to 1, this is the empty set, which is measurable, since this set would not include 1.
If alpha is between 0 and 1-- 0 less than or equal to alpha less than 1-- then the inverse image of this set is equal to E intersect F. E is measurable. F is measurable. So their intersection is measurable. And if alpha's less than 0, then the inverse image of this set-- in other words, what maps into the interval from one negative number to infinity-- well, both 1 and 0 map into there. And that's the entire set E. And so no matter what alpha is, the inverse image of that set is measurable. So these are basic properties of measurable functions. Let's continue with what other properties measurable functions-- again, which we would hope to integrate, or we will be integrating at least a class of these in the end-- should satisfy. Well, we would like them to be closed under taking linear combinations, and also products, because in the end, we'll have LP spaces, which involve integrals of products of integrable functions. So we have the following theorem. So let's suppose E is measurable. And I have two functions f and g going from E to R that are measurable. And I have a scalar c in R. Then c times f, f plus g, and f times g-- so these are now all functions from E to R-- are measurable, are measurable functions. So what's the proof? Let's start with scalar multiplication. If c equals 0, then c times f equals 0, which is a constant function. 0 is continuous, hence measurable. So let's suppose c is non-zero. So now we're out of the silly case. If c does not equal 0-- and let's take c positive, say; the negative case is similar-- and alpha is an element of R, then c times f of x greater than alpha-- this is equivalent to f of x is greater than alpha over c. So this implies that if I look at c times f and I look at the inverse image of alpha to infinity, this is equal to the inverse image by f of the set alpha over c to infinity. And now if I'm assuming f is measurable, then the inverse image of this set is measurable. So that's a measurable set.
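In symbols (the displayed step covers c positive; for c negative the inequality reverses, and the pre-image of the interval below alpha over c is measurable by the four-condition theorem from earlier):

```latex
(cf)^{-1}\big((\alpha,\infty)\big) =
\begin{cases}
f^{-1}\big((\alpha/c,\,\infty)\big), & c > 0,\\[4pt]
f^{-1}\big((-\infty,\,\alpha/c)\big), & c < 0.
\end{cases}
```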
And therefore, c times f is a measurable function. So now, let's look at the function f plus g. Suppose alpha's in R. Then let's do something similar. Then f of x plus g of x is greater than alpha. This is if and only if f of x is greater than alpha minus g of x. So I didn't say this. But when I'm looking at such a condition, I'm looking at those x's that will be in this inverse image. So I'm just trying to figure out an equivalent way of expressing this condition, which we saw was this condition. So what I'm really looking at is those x's which would lie in this pre-image here. And what I did a minute ago was show it's equal to this pre-image here. So this is why I'm considering f of x plus g of x greater than alpha. This is the condition that-- so maybe I'll just write this out. Then x is in this set if and only if f of x plus g of x is bigger than alpha, which is equivalent to f of x is greater than alpha minus g of x. Now, you learned in 18.100-- A, B, P, Q, whatever-- that if I have any two real numbers, one bigger than the other, then I can find a rational number in between them. So if this number is bigger than this number, there exists a rational number, r, such that f of x is greater than r is greater than alpha minus g of x. So this-- assuming this does imply this. And of course, this condition also implies this condition. If there exists a rational number so that f of x is bigger than r is bigger than alpha minus g of x, then, of course, f of x is bigger than alpha minus g of x. So these two conditions are equivalent. So this is equivalent to saying there exists an r in Q such that x is in the inverse image by f of r to infinity, and r is bigger than alpha minus g of x, which one can state as x is in the inverse image by g of alpha minus r to infinity. So this last expression here-- so let me come over here and start erasing.
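Compactly, the equivalence just derived (with the union taken over the countable set of rationals) reads:

```latex
(f+g)^{-1}\big((\alpha,\infty)\big)
  = \bigcup_{r\in\mathbb{Q}} \Big( f^{-1}\big((r,\infty)\big)
      \cap g^{-1}\big((\alpha-r,\infty)\big) \Big).
```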
So we've shown that x is in the inverse image by f plus g of this set if and only if there exists a rational number so that x is in the intersection of these two types of sets, which we know are measurable. So we can express this, or summarize what we just found-- that f plus g inverse image of alpha to infinity-- this is equal to the union over rational numbers r in Q of the inverse image by f of r to infinity intersect the inverse image by g of alpha minus r to infinity-- so that's just, again, expressing what we did over there. And now what do we know? If we're assuming f is measurable, then this whole set is measurable. And we're assuming g is measurable. So this whole set is measurable. And therefore, the intersection of these two sets is measurable. And now I have a countable union because, again, the rational numbers are countable. One of the first things you prove in analysis is that the set of rational numbers is countable. This is a countable union of measurable sets. So that's measurable. Now, what about f times g? Here, we'll pull a little trick. So we'll prove that if f is measurable, then its square is measurable. And then we'll use a simple identity to conclude that f times g is measurable. So now I claim that f squared is measurable. Let alpha be an element of R. Let's do the stupid case first. If alpha is less than 0, then f squared-- this is a non-negative function. So if I take the inverse image of alpha to infinity, this is just equal to the domain E. So this is measurable. Remember, this is the set of all x's that get mapped by f squared into alpha to infinity. And if alpha is negative, then no matter what x is in E, f squared of x is going to be in alpha to infinity, again, for alpha less than 0. So this inverse image equals E, which is measurable by assumption.
And then the other less trivial case is if alpha is bigger than or equal to 0. Then f squared of x is bigger than alpha if and only if either what? f of x is bigger than the square root of alpha or f of x is less than minus square root of alpha. And therefore-- so, again, this relation here is expressing that x is in the inverse image of alpha to infinity. So this says that the inverse image of alpha to infinity is equal to the set of all x's satisfying the condition on the right of this if and only if, which can be written as x is in the inverse image of-- f is bigger than square root of alpha, union, again coming from the or, f inverse of-- and again, if we have a measurable function, then not only are the pre-images of the first type of set measurable, but the pre-images of the second type of set are also measurable. That was the first thing we proved. And so since each of these pre-images is measurable, their union is measurable. So that's measurable. So we proved that f squared is measurable. And now we conclude that f times g is measurable by a simple identity, that f times g-- this is equal to 1/4 times f plus g squared minus f minus g squared. So f times g is equal to 1/4 times the difference of these two squares. If f and g are measurable, their sum is measurable. And therefore, by what we just proved, the square is measurable. And again, over here, this is going to be measurable. And by a scalar multiple of minus 1, that thing's measurable. A scalar multiple of 1/4 on the outside is fine as well. So we conclude that this thing is measurable because every operation in this expression preserves the function being measurable, as we proved before this expression. And that concludes the proof. So the sum of two measurable functions is measurable. Scalar products are measurable. Products are measurable-- a big deal because as far as Riemann integration goes, that still holds.
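The polarization-style identity at the end can be spot-checked numerically. This is just a sanity check of the algebra, not of measurability; the random values play the role of f(x) and g(x) at an arbitrary point:

```python
# Spot-check of f*g = (1/4)[(f+g)^2 - (f-g)^2], which reduces products to
# sums, scalar multiples, and squares -- each of which preserves measurability.
import random

random.seed(0)
for _ in range(1000):
    fx = random.uniform(-100, 100)   # value f(x) at some point x
    gx = random.uniform(-100, 100)   # value g(x) at the same point
    assert abs(fx * gx - 0.25 * ((fx + gx) ** 2 - (fx - gx) ** 2)) < 1e-8
```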
So what's something that sets apart a function being measurable and, eventually, Lebesgue integrable is that it has better-- it's closed under taking limits, as opposed to being Riemann integrable. So this is the following theorem. If I have a measurable set E and then I have a sequence of functions fn going from E to the extended real numbers, minus infinity to infinity-- oh, and so I should have said a minute ago-- I'm just catching myself. Oh, no. Everything's fine. So I just wanted to make sure I had gone to R, because f plus g is only defined going from E to R. But again, if I have one function that is finite everywhere and another function that's an extended real-valued function, you can check that f plus g is then going to be measurable as well. I can't make sense of the sum of two extended real-valued functions because I might be in a situation where I have plus infinity minus infinity, which is an undefined expression. But what I'm saying is, for all of this, for one of them to be extended real-valued is fine. And also, the product of extended real-valued functions is fine, although I left it out of the list of rules; plus infinity times minus infinity is defined to be minus infinity, with the usual sign rules. So back to the theorem-- if I have a sequence of measurable functions, then some other functions are measurable. The function g1 of x equal to the sup over n of fn of x-- so that's now a function defined on E-- this is also measurable. So I'll just list the functions and then say they're all measurable. g2 of x equals the inf over n of fn of x. g3 of x equals the limsup as n goes to infinity of fn of x, which, we'll recall the definition, you can write as the inf over all n of the sup over all k bigger than or equal to n of fk of x. And g4-- so I don't know why I'm double-labeling them. I have 1, 2, 3, and 4. But I also have 1, 2, 3, and 4 here.
And the liminf of-- the pointwise liminf of these functions, which, I will recall, is equal to the sup over n of the inf over k bigger than or equal to n of fk of x-- these are all measurable functions. So let's prove this. So the proof is not too difficult. So let's start with the first one. Actually, the third and the fourth follow from the first and the second. But so x is in the inverse image by g1 of alpha to infinity if and only if, of course, the sup over n of fn of x is greater than alpha. And this holds if and only if there exists some n so that fn of x is bigger than alpha. If all the fns stay below alpha when evaluated at x, then the sup is less than or equal to alpha, since the sup is the least upper bound. So if and only if there exists an n so that x is in fn inverse of alpha to infinity. And therefore, we've proven that the inverse image of the set by g1 is equal to the union over all n of f sub n inverse, the inverse image of alpha to infinity. And now we're assuming the fn's are all measurable. So each of these is measurable. It's a countable union of measurable sets. So it's, again, a measurable set. So we've proven that for all alpha, this is a measurable set. So g1 is measurable. Now, if we go on to g2, it's the same thing. I can prove that the inverse image of-- let's see. What do I do here? Now I'm going to include alpha here. The inverse image of the closed interval alpha to infinity-- this is equal to requiring that the inf over n of fn of x is bigger than or equal to alpha. And this is equivalent to x being in the intersection of the inverse images by fn of these closed intervals. Now, each of these is, again, measurable by assumption. And therefore, the countable intersection is measurable. So we've proven that taking sups and infs of sequences of functions gives measurable functions. So now we've proven that for any sequence of measurable functions, the sup and the inf are measurable functions.
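The two set identities carrying this part of the proof, written out:

```latex
g_1^{-1}\big((\alpha,\infty]\big) = \bigcup_{n=1}^{\infty} f_n^{-1}\big((\alpha,\infty]\big),
\qquad
g_2^{-1}\big([\alpha,\infty]\big) = \bigcap_{n=1}^{\infty} f_n^{-1}\big([\alpha,\infty]\big).
```

Note the closed interval for the inf: the infimum can equal alpha even when every f_n of x is strictly above it, which is why the two cases use different half-lines.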
But the limsup is the sup-- is this sup first, followed by an inf. Now, if all of the f's are measurable, this sup is measurable for all n. And therefore, this inf is measurable. So g3 being measurable follows immediately from proving that sups and infs of measurable functions are measurable, and the same thing with the liminf. If f sub k is measurable for all k, then we've already proven that the infs over the k's-- bigger than or equal to n doesn't matter-- is measurable, a measurable function. And therefore, the sup over the n's of all these measurable functions are, again, measurable. So the fact that g3 and g4 are measurable follow from the expressions for-- and the previous two cases, meaning that we proved infs and sups of measurable functions are measurable functions. Now, what do we get from this? So an immediate theorem is the following. If E is measurable, fn goes from-- and fn is measurable for all n. And they converge pointwise to some function f of x. Then f is measurable. I shouldn't write it as a theorem. I should write it as a corollary. Why? Because if these converge, then f is equal to the limsup. It's also equal to the liminf. But f is equal to the limsup. So f-- so the proof is one line. For all x in E, f of x is equal to limsup of fn of x. And it's also equal to the liminf if the fns are converging to f. You covered this in 18.100. You have a limit of a sequence if and only if the limsup equals the liminf. And this also holds if I include plus or minus infinity as being possible elements of the extended real numbers that are in the sequence or the possible limit. Since this is measurable by the previous theorem, f is measurable. So this says something that is an indicator that we're doing something that's or we're building something that's stronger than being Riemann integrable. As I've said, the functions which we will define a Lebesgue integral for will be a certain subset of measurable functions. 
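As a concrete (hypothetical) instance of the corollary: the functions f_n of x equal to x to the n on 0, 1 are continuous, hence measurable; their pointwise limit is 0 for x less than 1 and 1 at x equals 1, a discontinuous function which is nonetheless measurable. A crude numeric stand-in for the limit:

```python
# f_n(x) = x**n on [0, 1]: continuous, hence measurable, for each n.
def f(n, x):
    return x ** n

def pointwise_limit(x, n_large=10**6):
    # numeric stand-in for lim_{n -> infinity} f_n(x) on [0, 1];
    # for x < 1 the huge power underflows to exactly 0.0 in floating point
    return f(n_large, x)

assert pointwise_limit(0.5) == 0.0
assert pointwise_limit(0.999) == 0.0
assert pointwise_limit(1.0) == 1.0
```

The limit function is the indicator of the single point 1, which has measure zero, so the limit is measurable (it even equals 0 almost everywhere, in the terminology introduced below).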
And so taking them as our possible candidates, let me just make the following remark, which now separates measurable functions-- which, again, are candidates for being Lebesgue integrable-- from Riemann integrable functions: this fails if I replace measurable by Riemann integrable. If fn-- let's go from a, b to even R-- is Riemann integrable, and the fns converge to f pointwise, which just means this: for all x in a, b, the limit as n goes to infinity of fn of x equals f of x-- then f need not be Riemann integrable. In other words, when I look at Riemann integrable functions, they're not closed under taking pointwise limits. Yet for the Lebesgue integral, at least for our candidates of Lebesgue measurable functions, those are closed under taking pointwise limits, as we just proved. Which says that we're on track to building something that has better qualities than the Riemann integral. Now, as it turns out, the pointwise limit of Lebesgue integrable functions need not be Lebesgue integrable. But if you add another very minor condition, the answer is, yes, it is Lebesgue integrable. But at the very least, we would like for our candidates-- again, Lebesgue measurable functions-- to be closed under taking pointwise limits, or, at least, that should indicate to us that we're doing something that will have better properties than Riemann integrable functions. So why is this? What's a sequence of functions that are Riemann integrable, but their pointwise limit is not Riemann integrable? So for example-- you know what? I'm actually going to add this to the assignment. But I'll go ahead and say why this is the case. Let's write-- so the rational numbers Q intersect 0, 1-- this is a countable set, a subset of the rational numbers. So we can list them. This is equal to r1, r2, and so on.
If I define fn of x to be 1 if x is in r1 up to rn and 0 otherwise, now this is a function which is piecewise continuous on 0, 1. So it's Riemann integrable. But what happens as I take n to infinity? The pointwise limit of these functions converges to the indicator function of the rational numbers intersect 0, 1. And hopefully, this was something that you checked when you learned about Riemann integration. If you just learned about the Riemann integral of continuous functions, then that's fine, too. But anyways, this function is 1 on the rationals and 0 elsewhere, which is a function that is discontinuous everywhere. And you can convince yourself that that's too crazy of a function to be Riemann integrable if all you knew was that continuous functions are Riemann integrable. But if you learned more about the Riemann integral, then one of the things that you learned was that this function is not Riemann integrable. So the pointwise limit of Riemann integrable functions may not be Riemann integrable. However, what we've proven is that the candidates for being Lebesgue integrable-- namely, those measurable functions-- are closed under taking pointwise limits, which is, like I said, an indication that this theory of the integral that we're building up is going to be a stronger theory in the sense that we can prove more things than that of Riemann. And one of those things we'll prove is that the space of Lebesgue integrable functions, with the norm being the integral of the absolute value, is, in fact, a Banach space, as opposed to the Riemann integrable functions. So now let me just make a couple of final statements. And then we'll call it a day for this lecture.
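The construction above can be sketched concretely. The enumeration order below (by increasing denominator) is one hypothetical choice; any enumeration of Q intersect 0, 1 works:

```python
# Enumerate Q ∩ [0,1] as r_1, r_2, ...; f_n is the indicator of {r_1, ..., r_n}.
# Each f_n is 0 except at n points (Riemann integrable, integral 0), but the
# pointwise limit is the indicator of Q ∩ [0,1], which is not Riemann integrable.
from fractions import Fraction

def rationals_01(n):
    """First n rationals in [0, 1], enumerated by increasing denominator."""
    seen, out, q = set(), [], 1
    while len(out) < n:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(r)
                if len(out) == n:
                    break
        q += 1
    return out

def f(n, x):
    # the n-th function in the sequence, evaluated at x
    return 1 if x in set(rationals_01(n)) else 0

# at a fixed rational like 1/3, the values f_n(1/3) are eventually 1:
assert f(3, Fraction(1, 3)) == 0 and f(50, Fraction(1, 3)) == 1
# at a point not in the enumeration, the values stay 0:
assert f(50, 2 ** 0.5 / 2) == 0
```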
So this is just really some terminology: when I make a statement P of x, with x an element of a measurable set E, if I say this statement holds almost everywhere on E-- and I'll usually shorten that to just "ae" on E, or just "ae" if I'm deleting even more from what I write-- this holds almost everywhere if the set where it doesn't hold has measure 0. So you may think I'm saying two things here, that this set is measurable and its measure is 0. But remember, when we developed the theory of measure, the first thing we learned about measurable sets is that if a set has outer measure equal to 0, then it's measurable. And remember, the measure of a measurable set is just its outer measure. So perhaps I'll say, in parentheses: a statement holds almost everywhere if the set where it doesn't hold has Lebesgue measure 0, which is equivalent to saying it has outer measure equal to 0, because we proved that sets that have outer measure equal to 0 are Lebesgue measurable. And so the final theorem that we'll prove today is that if I have two functions-- one measurable, the other differing from that measurable function only on a set of measure 0-- then the second function is also measurable. So somehow, sets of measure 0 don't affect being a measurable function. So if f, g go from a measurable set E to the extended real numbers, f is measurable, and f equals g almost everywhere on E, meaning the set of x's where f does not equal g is a set of measure 0 in E, then g is measurable. So if you take a measurable function f and change it on a set of Lebesgue measure 0, you still get a measurable function. And again, this just follows from the fact that all sets of Lebesgue measure 0, or outer measure equal to 0, are Lebesgue measurable.
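In symbols, writing N for the set where f and g differ and N sub alpha for the subset of N where g exceeds alpha, the key decomposition is:

```latex
g^{-1}\big((\alpha,\infty]\big)
  = \Big( f^{-1}\big((\alpha,\infty]\big) \cap N^{c} \Big) \cup N_{\alpha},
\qquad
m^{*}(N_{\alpha}) \le m^{*}(N) = 0,
```

so N sub alpha is measurable, and the right-hand side is built from measurable sets by intersection, complement, and union.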
And then, again, we use the fact that we know measurable sets are closed under taking unions and complements. So let N be the set of x's in E such that f of x does not equal g of x. Then this is a set of outer measure 0. And therefore, it's Lebesgue measurable. So now if I take an alpha in R and define another set-- call it N sub alpha-- to be the set of all x's in N such that g of x is greater than alpha, this is a subset of N just by definition. So this is how I'm defining this set. So it has outer measure equal to 0, because N has outer measure 0. And therefore, it's measurable. Then if I take the inverse image by g of alpha to infinity, this is equal to the inverse image by f of alpha to infinity intersected with where they equal each other-- so we intersect with the complement of N-- union this set, N sub alpha. Now N has outer measure 0. So it's Lebesgue measurable. And therefore, its complement is measurable. This is measurable by assumption. And therefore, this intersection is measurable. And this set here is measurable. So the union of this measurable set with this measurable set is, again, measurable. And we conclude that this is measurable. So we have defined measurable extended real-valued functions and proven some properties of them, the most striking of which being that being measurable is closed under taking pointwise limits and that you can change a function on a set of measure 0 and still be measurable. Next time, we'll extend this notion in a trivial way to functions which take on complex values-- not extended, just finite complex numbers, not including the point at infinity. And then we will move on to defining-- so once we have these-- so here's the game plan. So we'll extend this notion of being measurable to complex-valued functions.
And then we'll show that there's a particular class of functions called simple functions which, again, are simplest in the sense that they just take on finitely many values, and show that those are the universal measurable functions. And from there, we will then define the Lebesgue integral of a non-negative measurable function and prove some properties of that. And then once we've done that-- so you can define the-- an integral of a non-negative-- an arbitrary non-negative measurable function. We will then restrict to those functions which we call Lebesgue integrable, which now can-- not necessarily non-negative, but will have finite Lebesgue integral, and then prove those are-- and then prove some properties of that, including the big convergence theorems, and then close out our section on or chapter on measure and integration by proving that the Lebesgue integrable functions form a Banach space with a natural norm put on it as opposed to Riemann integrable functions. We'll stop there.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 20: Compact Operators and the Spectrum of a Bounded Linear Operator on a Hilbert Space
CASEY RODRIGUEZ: OK, so the main characters today, in this lecture and the next, are compact operators, which I introduced at the end of last lecture. And I'll recall throughout, H is a Hilbert space. So we say an operator, let's call it K, a bounded linear operator, is a compact operator if the image of the unit ball, taking the closure of that-- so this is just the set of all elements K u with the norm of u less than or equal to 1, closure-- is compact in H. And how did this set of operators come up? At the end of last time, we dealt with finite rank operators, which were just basically matrices. You can actually write them so that they're completely described, with respect to a certain basis, as a matrix. And the question we posed at the end of the last lecture was, these finite rank operators, they form a subspace of the space of all bounded linear operators. Is that subspace closed? And the answer is no. What is the closure of the finite rank operators, which we know and love because they're just matrices? And what we'll prove today is that it's exactly the space of compact operators. But let me just-- last time, I gave you an example of one that is a compact operator: K a equal to a1 divided by 1, a2 divided by 2, a3 divided by 3, and so on, defined for a in little l2. Then my claim is that K is a compact operator. And in the assignment, you show basically that a variety of other operators that come up quite often are compact operators. For example, you also show, essentially, that if K is a continuous function on 0, 1, cross 0, 1, and I define a linear operator T f of x to be the integral from 0 to 1 of K x, y, f of y dy, now for f in big L2 of 0, 1-- then part of the assignment is to use, and remind yourself of, one of the very few compactness results one has, the Arzela-Ascoli theorem, to show that this is, in fact, a compact operator.
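A finite-dimensional sketch of the diagonal example K a = (a1/1, a2/2, a3/3, ...): restricted to the first N coordinates it is the diagonal matrix diag(1, 1/2, ..., 1/N), and the finite-rank truncations that keep only the first n diagonal entries approximate it in operator norm with error exactly 1/(n+1). This previews the theorem proved below, that compact operators are exactly the norm limits of finite rank operators:

```python
# For a diagonal operator, the operator norm is the largest |diagonal entry|,
# so ||K - T_n|| = 1/(n+1) -> 0: K is a norm limit of finite rank operators.
N = 50
K_diag = [1.0 / (j + 1) for j in range(N)]   # stand-in for K on the first N coords

def truncation_error(n):
    # entries of K - T_n: zero for j < n, 1/(j+1) for j >= n
    tail = [0.0] * n + K_diag[n:]
    return max(abs(t) for t in tail)         # operator norm of a diagonal matrix

for n in (1, 5, 10):
    assert abs(truncation_error(n) - 1.0 / (n + 1)) < 1e-15
```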
So from the assignment, you'll prove T is a compact operator on L2. Now, maybe you think I'm just coming up with something that looks like this to come up with examples of a compact operator, but the solution operator for differential equations takes this form. So for example-- I guess this would be an example, as part of this example-- if I take K x, y to be-- let's see if I can get it right-- x minus 1 times y, for 0 less than or equal to y less than or equal to x less than or equal to 1, and x times y minus 1 when the roles of x and y are reversed. Then u of x equals the integral from 0 to 1 of K of x, y times f of y dy-- you can actually compute-- solves the differential equation u double prime equals f, u of 0 equals u of 1 equals 0, on 0, 1. So maybe this has a minus in front. I can't exactly remember. So the solution operator, meaning if my input is f, then the solution to this differential equation-- I can actually write it as a compact operator acting on the data, which is f. So these don't just pop up out of nowhere. They pop up quite naturally. Now, these are a bunch of examples of compact operators. One non-example-- it's easy to actually come up with non-examples of compact operators-- but let's say the identity on little l2. This is not a compact operator. Why? Because the condition of being a compact operator is that the image of the unit ball should be a compact subset of little l2. But let's let e sub n be the sequence-- our favorite sequence-- which is 0, except in the n-th spot, where it's 1. Then each of these has norm 1. And so if I look at I e n minus I e m and compute the square of its norm, you can see quite easily this is equal to 2, since these are orthonormal, for all n not equal to m. And therefore, I e n does not have a convergent subsequence. If I, the identity, were a compact operator, then the image of the closed unit ball in little l2 should be a compact set, meaning what?
Meaning elements of this form, I evaluated on something that has norm less than or equal to 1, should form a compact set. So if I have a sequence of guys in the unit ball, the closed unit ball, then when I hits it, I should be able to find a convergent subsequence. But here, if you just compute, then the distance between any two of these guys, for n not equal to m, is the square root of 2. So this sequence can't ever be Cauchy, nor can any subsequence ever be Cauchy. And therefore, this has no convergent subsequence. So the identity on little l2 is not a compact operator. And by this argument, you can, in fact, prove that the identity is not a compact operator on any infinite-dimensional Hilbert space. So we have one way of showing that an operator is a compact operator, which is by verifying the definition. And then we now have this theorem, which I alluded to at the start, showing that the set of compact operators is the closure of the finite rank operators. So let H be a separable Hilbert space. Then a bounded linear operator T on H is a compact operator if and only if there exists a sequence T n of finite rank operators which converge to T in the operator norm. So this proves that the set of compact operators is the closure of the set of finite rank operators, which we denoted by R-- that's R of H. So the first direction is to show that if I have a compact operator, then it's equal to a norm limit of finite rank operators. So since H is separable, it has an orthonormal basis. Now, since T is a compact operator, remember, this means that the image of the closed unit ball-- the set of T u with norm of u less than or equal to 1-- is contained in a compact set, namely the closure of this set. In fact, let me just put that it's a compact set.
And so we have this characterization of compact subsets of a separable Hilbert space: being closed, bounded, and having equismall tails. So then, for every epsilon positive, there exists a natural number N such that if I look at the sum over k bigger than N of T u e k squared-- the squares of the Fourier coefficients of T u-- this is less than epsilon squared for all elements u with norm less than or equal to 1. Because the T u's are all in this compact set, we have this equismall tail condition: for every epsilon, we can find an N, which does not depend on the elements of the compact set, so that the tails of the Fourier coefficients are small. So what this says is that if we're taking any u with norm less than or equal to 1 and hitting it with T, then the tail of the Fourier coefficients doesn't really matter. And so we should think we can approximate T by just a finite rank operator that consists of coefficients like this times e k for k between 1 and N. And that's basically what we're going to do. So for n, define T n of u to be the sum from k equals 1 to n of T u e k e k. And this is for u in H. So one can check that this is a bounded linear operator and it's also finite rank. It's clearly a bounded linear operator: just take the norm squared of this. This is equal to the sum of squares of the Fourier coefficients, which is less than or equal to the norm of T u squared, which is less than or equal to the norm of T squared times the norm of u squared. And if I look at the range of T n, this is a subset of the span of e1 up to e n. And therefore, T n is a finite rank operator for all n. So I claim this is our sequence of finite rank operators that converges to T as n goes to infinity. So to prove this, we'll just do a standard epsilon N argument. Let epsilon be positive.
Then let N be the N that appears in this condition star up here, meaning capital N is the natural number guaranteeing this. So now, I claim that for all little n bigger than or equal to this capital N, the operator norm of T n minus T is less than or equal to epsilon, which is still good enough. Then if the norm of u equals 1, we compute the norm squared of T n of u minus T u. We haven't used anything about the e k yet, but this is where we use that they're an orthonormal basis. From the definition, this is the norm squared of the sum from k equals 1 to n of T u inner product e k, times e k, minus-- and now I expand T u in this basis-- the sum over all k of T u inner product e k, times e k. You see what cancels out are the first n of these. And this equals the norm squared of the sum over k bigger than n of T u inner product e k, times e k. And since the e k are orthonormal, this is equal to the sum over k bigger than n of T u inner product e k, squared. And now, little n is bigger than or equal to capital N, so this is less than or equal to the sum over k bigger than capital N of T u inner product e k, squared, and that's less than epsilon squared. So what we've shown is that for all u of norm 1, the norm of T n u minus T u, squared, is less than epsilon squared. And therefore, if I take the sup over all u that have norm 1, I get that the norm of T n minus T, which is that sup, is less than or equal to epsilon. So thus, the T n's converge to T in operator norm. So now, going in the opposite direction, we will use the second characterization that we have of compact sets, which we didn't prove, but I said you should look up if you want to. It's, again, a diagonalization argument, and it says that a closed, bounded set is compact if and only if it can be approximated by finite-dimensional subspaces. So now, we're proving the opposite direction. Suppose that T can be approximated in operator norm by a sequence of finite rank operators.
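For the diagonal example above, the operator-norm convergence in the epsilon-N argument can be seen exactly (this finite model and the particular sizes are my own illustration): T minus T_n is diagonal with entries 0, ..., 0, 1/(n+1), 1/(n+2), ..., so its operator norm is exactly 1/(n+1), which goes to 0.

```python
import numpy as np

# T is the diagonal map (T a)_k = a_k / k on an N-dimensional truncation.
N, n = 40, 12
T = np.diag(1.0 / np.arange(1, N + 1))
Tn = T.copy()
Tn[n:, :] = 0.0                       # keep only the first n output coordinates
op_norm = np.linalg.norm(T - Tn, 2)   # spectral norm = operator norm for matrices
print(op_norm)  # equals 1/(n+1) = 1/13
```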
And we want to show that T is a compact operator. So first, note that if I look at the closure of the set of T u for u with norm less than or equal to 1-- so this is the set that I'm trying to show is a compact set-- it's obviously closed, because it's the closure of a set, and the closure of a set is always the smallest closed set containing that set. And it's bounded: because T is a bounded operator, it's contained in the set of, let's call it v in H, where the norm of v is less than or equal to the norm of T. So this shows that the set here is closed and bounded. So to conclude that this set is compact, we just need to show that it can be approximated by finite-dimensional subspaces, meaning for every epsilon, there exists a finite-dimensional subspace so that the distance from every element in this set to the subspace W is less than epsilon. So we want to show now, or we make this a claim: for all epsilon positive, there exists a finite-dimensional subspace W such that for all u with norm less than or equal to 1, we have that the distance from the element T u to W is less than epsilon. That's what I mean by we can approximate this set by finite-dimensional subspaces. Once we can do that, then we've proven the claim, and we can use that characterization of compact sets to conclude that this set is compact. Now, what's the idea? This set here is just the closure of the image of the closed unit ball under T. And T can be approximated by a finite rank operator, whose range is finite dimensional. So this set should be approximated by the range of T n, which is finite dimensional, and that should give us our claim. And that's exactly what we do. So since the norm of T n minus T goes to 0, there exists a natural number N such that the norm of T sub N minus T is less than epsilon. Let W be the range of T sub N, which is a finite-dimensional subspace.
And now, I want to show that I have this condition here-- that for all u with norm less than or equal to 1, the distance from T u to capital W is less than or equal to epsilon. That still proves what we want; just go through and change epsilon to epsilon over 2. Then, for all elements u in H with norm less than or equal to 1, if I take T u and subtract off T sub N u, which is an element in capital W, this is less than or equal to the operator norm of T minus T N times the norm of u, which is less than or equal to the norm of T minus T N, which is less than epsilon. So this holds for one particular element of W, and therefore the infimum, which is the distance from T u to capital W and is less than or equal to this, must be less than epsilon. And therefore, this proves the claim. And the fact that this set is compact follows from that characterization of compact sets, which we proved-- or didn't prove, but at least stated a few lectures ago. So you can use the definition to verify that something is a compact operator, or you could use this alternative characterization, as a limit in operator norm of finite rank operators, to verify that something is a compact operator. It's just whatever's most convenient at the time. Now, when we talked about finite rank operators, we also went a little bit into their algebraic structure. We can do the same for compact operators. So again, let H be a separable Hilbert space, and let's give the set of compact operators on H a name, K of H. Then the first property of compact operators is that K of H is a closed subspace of the space of bounded linear operators on the Hilbert space. The fact that it's a linear subspace is not too difficult to prove. The fact that it's closed follows immediately from what we've done so far, when we showed that the compact operators are the closure of the set of finite rank operators.
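The distance from a vector to a finite-dimensional subspace, used throughout this argument, can be computed by orthogonal projection. A quick sketch (the random 4-dimensional subspace stands in for range(T_N) and is my own toy data): the projection onto W is the closest point, and the residual is orthogonal to W.

```python
import numpy as np

# A random 4-dimensional subspace W of a 30-dimensional space (toy stand-in
# for the range of the finite rank operator T_N).
rng = np.random.default_rng(1)
dim, k = 30, 4
Q, _ = np.linalg.qr(rng.standard_normal((dim, k)))  # orthonormal basis of W
v = rng.standard_normal(dim)
proj = Q @ (Q.T @ v)                                # orthogonal projection onto W
dist = np.linalg.norm(v - proj)                     # distance from v to W
residual_overlap = np.max(np.abs(Q.T @ (v - proj)))
print(residual_overlap < 1e-10)  # True: the residual is orthogonal to W
```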
Really, the fact that we've shown that proves both of these, because the closure of a subspace is, again, a subspace. If T is a compact operator, then its adjoint is also a compact operator. And for every pair of bounded linear operators and a compact operator, if I multiply on the left and on the right by these bounded linear operators, that remains a compact operator. So a fancy way of combining 2 and 3 is that the set of compact operators is a star-closed two-sided ideal in the algebra of bounded linear operators. But I'm just going to list conditions, and not give fancy words for them. So 1 is clear; I'm not going to write the proof for that. We'll just do 2 and 3. And although we could do it directly from the definition, let's use the theorem that we just worked hard to prove. So if T is a compact operator, then there exists a sequence of finite rank operators such that T n converges to T in operator norm. Now, last time we showed that if we have a finite rank operator, then its adjoint is also a finite rank operator. And if I take the norm of the adjoint of this finite rank operator minus the adjoint of the compact operator-- remember, we have this nice theorem that tells us the norm of the adjoint is equal to the norm of the original operator-- this is equal to the norm of T n minus T, which we're assuming goes to 0. And therefore, the adjoint is a norm limit of finite rank operators, and therefore it must be a compact operator. And to prove 3, we'll use this characterization again. So again, assume T is a compact operator, so that there exists a sequence T n of finite rank operators converging to T in the operator norm. Now, last time, we showed that the set of finite rank operators also satisfies these two conditions.
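The key fact used here, that the norm of the adjoint equals the norm of the original operator, is easy to check numerically for matrices (finite rank operators), where the adjoint is the conjugate transpose. The random test matrix is my own illustration.

```python
import numpy as np

# For a matrix A, the adjoint is the conjugate transpose A*, and the
# operator (spectral) norm of A* equals that of A, since A and A* have
# the same singular values.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
n1 = np.linalg.norm(A, 2)
n2 = np.linalg.norm(A.conj().T, 2)
print(np.isclose(n1, n2))  # True
```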
This one I just used here, but it also satisfies this one-- that if I take a finite rank operator and multiply it on either side by a bounded linear operator, I still have a finite rank operator. So for all n, A times T n times B is also a finite rank operator, by what we did last time. And if I take the norm of A T n B minus A T B, this is equal to the norm of A times, T n minus T, times B. And now, the norm respects this kind of algebraic structure of bounded linear operators-- I shouldn't say "preserves," but "respects" it-- meaning the norm of a product is less than or equal to the product of the norms. So this is less than or equal to the norm of A times the norm of T n minus T times the norm of B. The norm of A is fixed, the norm of B is fixed, and the middle factor is going to 0. So I have bounded above this non-negative quantity by something converging to 0, and therefore this thing converges to 0. So A times T times B is the norm limit of a sequence of finite rank operators, so it's also compact. Now, coming back to-- one second. Hydration is key. Now, one of the most important sets of numbers that one can associate to a matrix is the set of its eigenvalues. These eigenvalues typically represent, possibly, the modes of vibration of a string. Or if you're taking quantum mechanics-- in which case you're not necessarily dealing with matrices, depending on how it's being taught to you-- these eigenvalues are the energy levels for bound states. So perhaps, since compact operators are the inverses of differential operators, which most definitely appear in quantum mechanics but also in other applications, it would be good to develop some sort of theory of eigenvalues and eigenvectors for more general bounded linear operators, and then specialize to compact operators. But what am I going on about? So now, we're going to discuss the spectrum of a bounded linear operator, which is a generalization of the eigenvalues and eigenvectors that you encountered in linear algebra.
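The submultiplicativity estimate used in part 3, that the norm of a product is at most the product of the norms, can be illustrated with matrices (the random matrices and the small perturbation standing in for T_n minus T are my own test data).

```python
import numpy as np

# Submultiplicativity of the operator norm: ||A D B|| <= ||A|| ||D|| ||B||,
# where D plays the role of T_n - T (small in norm).
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))
D = 1e-3 * rng.standard_normal((6, 6))
lhs = np.linalg.norm(A @ D @ B, 2)
rhs = np.linalg.norm(A, 2) * np.linalg.norm(D, 2) * np.linalg.norm(B, 2)
print(lhs <= rhs)  # True
```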
In linear algebra-- if I take H to be just a finite-dimensional space, R n or C n-- the spectrum, as we're going to introduce it now, consists entirely of the eigenvalues of that matrix. Not so if we move on to bounded linear operators, which is what makes it pretty interesting: there's more stuff that's associated to the fact that you're working in infinite dimensions. And rather than harp on about it, let's go. Let's just dive in and start discussing the spectrum. First, I want to bring up a few facts that you proved in, I think, the first or second assignment of the course. So throughout, again, H is a Hilbert space. I may say at one point that it's separable, but throughout, H is always a Hilbert space. So we have the following theorem, which tells you how to invert certain operators via what's called a Neumann series. Let T be a bounded linear operator on a Hilbert space. If the norm of T is less than 1, then the operator I minus T is invertible. And if I want to compute the inverse, I can compute it just like I would if I minus T were 1 minus x and we were dividing by the function 1 minus x. You can write it as a geometric series: the sum from n equals 0 to infinity of T to the n. And this series over here is absolutely convergent. Why? Because the norm of every term-- the norm of T to the n-- is less than or equal to the norm of T raised to the n, and the norm of T is less than 1. So this thing is absolutely summable. And you used that when you actually proved this theorem in the assignment. So I'm not going to give the proof of it again-- I didn't give it once before, you did-- we'll just move on.
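The Neumann series can be checked directly in finite dimensions (the random matrix and the scaling to norm 1/2 are my own illustration): partial sums of I + T + T^2 + ... converge geometrically to the inverse of I minus T.

```python
import numpy as np

# Neumann series: if ||T|| < 1 then (I - T)^{-1} = sum_{n >= 0} T^n.
rng = np.random.default_rng(4)
T = rng.standard_normal((4, 4))
T *= 0.5 / np.linalg.norm(T, 2)   # rescale so that ||T|| = 1/2 < 1
S = np.zeros((4, 4))
P = np.eye(4)
for _ in range(60):               # partial sum of the geometric series
    S += P
    P = P @ T
exact = np.linalg.inv(np.eye(4) - T)
print(np.allclose(S, exact))  # True: remainder is ~ ||T||^60 / (1 - ||T||)
```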
And so another fact that you proved using this is that the set GL H, the set of bounded linear operators that are invertible, meaning bijective-- by the open mapping theorem, we know that if an operator is bijective, then it automatically has a bounded inverse, so I'll say bijective-- is an open subset of the space of bounded linear operators. So the proof is quite quick using this previous theorem. Let T0 be an element of GL H, the invertible operators on H. We need to find an epsilon so that the ball centered at T0 of radius epsilon is contained in GL H, meaning everything in that ball is invertible. So suppose that I have an operator T whose distance in operator norm to T0 is less than 1 over the operator norm of the inverse of T0. I claim that T then has to be invertible. If I look at T0 inverse times, T minus T0, in operator norm that's less than or equal to the norm of T0 inverse times the norm of T minus T0, and that's less than 1. So by the previous theorem-- which applies just as well with a plus sign, since an operator and its negative have the same norm-- I get that I plus T0 inverse applied to, T minus T0, is invertible, because this thing here has norm less than 1. This implies that T, which is equal to the product of two invertible operators, namely T0 and I plus T0 inverse times, T minus T0, is a bijective bounded linear operator, or a bounded linear operator with a bounded inverse. And so let me just summarize. I.e., the ball of radius 1 over the norm of the inverse of T0 is contained in the set of invertible bounded linear operators. And that proves that this is an open set.
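This openness statement is concrete for matrices, where 1 over the norm of the inverse is the smallest singular value (the matrices below are my own toy data): perturbing an invertible T0 by strictly less than that radius cannot destroy invertibility.

```python
import numpy as np

# If ||T - T0|| < 1 / ||T0^{-1}||, then T is still invertible.
# For matrices, 1 / ||T0^{-1}|| is the smallest singular value of T0.
rng = np.random.default_rng(5)
T0 = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
radius = 1.0 / np.linalg.norm(np.linalg.inv(T0), 2)
E = rng.standard_normal((4, 4))
E *= 0.5 * radius / np.linalg.norm(E, 2)   # perturbation of norm radius/2
T = T0 + E
smin = np.linalg.svd(T, compute_uv=False)[-1]
print(smin > 0.0)  # True: T remains invertible
```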
So another way to think about this: if you go back to what eigenvalues and eigenvectors were-- it's especially clean if, say, you're dealing with symmetric matrices with real entries-- a theorem of linear algebra is that you can diagonalize that matrix, meaning you can find an orthonormal basis of R n so that in that basis, the matrix is simply diagonal, and the numbers appearing on the diagonal are the eigenvalues of that operator. Now, another way to think about that is: for which lambda, say, can I invert the matrix A minus lambda? In fact, that's exactly how the eigenvalues are defined. If you want to find an eigenvector, then that should be in the null space of A minus lambda. So you compute the determinant of that, and you get a polynomial that you solve for the lambdas. And then you solve for the eigenvectors, which is a system of linear equations once you know lambda. But thinking back, those eigenvalues are impediments to being able to solve the equation A minus lambda u equals f. And so with that in mind, that's how we'll define the spectrum of a bounded linear operator. So first, I'm going to define the complement of the spectrum. Let A be a bounded linear operator. The resolvent set of A-- so this is the new bit-- is the set Res A, which consists of all complex numbers lambda with the property that A minus lambda I is an invertible operator. Now, I'm a bit lazy, and so throughout, instead of writing A minus lambda I, I will most often write A minus lambda, with the understanding that there should be an identity here. So the resolvent set is the set of all complex numbers lambda so that A minus lambda is invertible. In other words, you can always solve the equation A minus lambda u equals f for arbitrary f. And you can solve it uniquely-- uniquely being the key.
The spectrum of A-- so this is a new bit of terminology-- is simply the complement. So usually, we write Spec A equals C take away the resolvent set of A. So here's the example I was describing a minute ago. Let's say I have just a matrix, a linear transformation from C2 to C2, and A is given by the diagonal matrix with entries lambda 1, 0, 0, lambda 2, for two numbers lambda 1 and lambda 2. So I'm giving you the simplest example possible. Then A minus lambda is the diagonal matrix with entries lambda 1 minus lambda, 0, 0, lambda 2 minus lambda. And when is this matrix invertible? Precisely when lambda does not equal lambda 1 or lambda 2. So A minus lambda is invertible if and only if lambda does not equal lambda 1 or lambda 2. And when lambda equals lambda 1 or lambda 2, this matrix here has a non-trivial kernel. And that's really the only obstruction to being invertible. But in infinite dimensions, when we have operators on infinite-dimensional spaces, there's an additional wrinkle that could happen. So let me finish with this example. Therefore, the resolvent set of A equals C take away the set consisting of lambda 1 and lambda 2, and the spectrum of A equals the set consisting of lambda 1 and lambda 2. So we have a special name for when something pops up in the spectrum this way, in kind of the same way we have for regular matrices. Definition: suppose A is a bounded linear operator and A minus lambda-- lambda a complex number-- is not injective. (We need two conditions for lambda to be in the resolvent set: we need A minus lambda to be both injective and surjective, one to one and onto.) Then A minus lambda has a non-trivial kernel, and there exists a u in H, take away 0, so that A u equals lambda u. We then call this lambda, which is in the spectrum of A-- it's in the spectrum because A minus lambda is not invertible-- an eigenvalue of A, and u an eigenvector. So the best way to learn about spectrum is to do a lot of examples.
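The 2x2 example is easy to verify by computing determinants (the particular values lambda 1 = 2 and lambda 2 = -3 are my own choice): A minus lambda times the identity is singular exactly at lambda 1 and lambda 2.

```python
import numpy as np

# A = diag(l1, l2): det(A - lambda*I) vanishes exactly when lambda is l1 or l2,
# so spec(A) = {l1, l2}.
l1, l2 = 2.0, -3.0
A = np.diag([l1, l2])

def det_at(lam):
    return np.linalg.det(A - lam * np.eye(2))

print(np.isclose(det_at(l1), 0.0), np.isclose(det_at(l2), 0.0))  # True True
print(np.isclose(det_at(5.0), 0.0))                              # False
```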
And I only have a finite amount of time in class to teach you new stuff, so I can't do too many examples. But let's take, for example, the operator from earlier, T a equals a1 over 1, a2 over 2, a3 over 3, and so on, for a a sequence in little l2. This was, in fact, a compact operator. What would be some eigenvalues of this? Well, note that if I apply T to one of the elements of the orthonormal basis-- so let's call it e n, where e n is the sequence consisting of 1 in the n-th spot and 0 otherwise-- what do I get? I get 0, 0, except in the n-th spot, 1 over n, 0, and so on, which is 1 over n times e n. So that proves that T minus 1 over n has non-trivial kernel, and therefore 1 over n is an eigenvalue, and therefore in the spectrum. And this was for all n: these are eigenvalues of T. So the spectrum contains at least this sequence of numbers. So unlike in the finite-dimensional case, you can have infinitely many eigenvalues. It's also possible that you have no eigenvalues. Let me just bring up a small point here. Let's look at 0. Is 0 in the spectrum or in the resolvent set? In other words, T minus 0, that's just T, so I'm asking: is T itself injective and surjective? Is it bijective? First off, it should be clear that this operator here is injective. If T a equals 0, then every one of these numbers is 0, and therefore a has to be 0. So T is clearly injective. But try to convince yourself that T cannot be surjective, because if it were surjective, the inverse would have to be a bounded linear operator. What would the inverse have to be? It would have to be: I take the elements of my sequence in l2, and a1 gets multiplied by 1, a2 gets multiplied by 2, a3 gets multiplied by 3, and so on. And that would not be a bounded linear operator on little l2. So that argument tells you that it can't be surjective. But what one can show is that the range is dense in little l2.
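A finite truncation of this operator makes both observations concrete (the truncation size N is my own choice): each basis vector e_n is an eigenvector with eigenvalue 1/n, and the would-be inverse, multiplying the k-th entry by k, has norm N on the N-truncation, which is unbounded as N grows.

```python
import numpy as np

# N-truncation of (T a)_k = a_k / k.
N = 20
T = np.diag(1.0 / np.arange(1, N + 1))
e5 = np.zeros(N)
e5[4] = 1.0                                  # e_5 (0-indexed slot 4)
print(np.allclose(T @ e5, 0.2 * e5))         # True: T e_5 = (1/5) e_5
# The candidate inverse multiplies the k-th entry by k; its norm grows like N.
print(round(np.linalg.norm(np.linalg.inv(T), 2)))  # 20
```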
So something can be in the spectrum in a way that's completely different from what happens in finite dimensions. You can have the range of T, or T minus lambda, say, be dense in H but not closed, and therefore the operator is not surjective. This is one subtle difference between infinite dimensions and finite dimensions. It doesn't happen in finite dimensions, because there the range is a finite-dimensional subspace of H, and so it's always closed. In infinite dimensions, the range does not have to be closed. So this little difference can result in something being in the spectrum. It is not the case that something is in the spectrum if and only if it's an eigenvalue. For example-- and I'll probably put this on the assignment-- what you can verify is: define an operator T from L2 of (0, 1) to L2 of (0, 1) by T f of x equals x times f of x. Then T has no eigenvalues whatsoever. So I'll probably put this example either on the homework or on the exam, but it has no eigenvalues. And you can compute that the spectrum-- the set of all lambda so that T minus lambda is not invertible-- equals the interval from 0 to 1. Why the interval from 0 to 1? It's not because the domain is (0, 1), but because that's essentially the range of the function x as x ranges from 0 to 1. So when you move on to infinite dimensions, the spectrum doesn't necessarily need to be just the eigenvalues of your operator. In fact, there may be no eigenvalues. It can be something much more subtle, which occurs when the range is not a closed subset-- the range can be dense in H but not closed, and therefore not equal to all of H, so the operator is not surjective. So let's prove some general properties of spectra and resolvent sets. Let A be a bounded linear operator. Then the spectrum of A is a closed subset of C.
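A rough way to see why the spectrum of the multiplication operator is the interval from 0 to 1 (this discretization is my own heuristic, not a proof): discretizing T f(x) = x f(x) on a grid gives a diagonal matrix whose eigenvalues are the grid points, and those fill out the interval ever more densely as the grid is refined.

```python
import numpy as np

# Discretize T f(x) = x f(x) on (0,1): on a midpoint grid it becomes
# diag(x_1, ..., x_N), whose eigenvalues are the grid points. They become
# dense in [0,1] as N grows, consistent with spec(T) = [0,1], even though
# T itself has no eigenvalues on L^2(0,1).
N = 1000
x = (np.arange(N) + 0.5) / N
eig = np.linalg.eigvalsh(np.diag(x))
gap = np.max(np.diff(np.sort(eig)))   # largest gap between adjacent eigenvalues
print(eig.min() > 0.0, eig.max() < 1.0, gap <= 1.0 / N + 1e-12)
```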
And in fact, it's contained in the set of lambdas in C with modulus less than or equal to the norm of A. So in particular, this implies that the spectrum is a compact subset of the complex numbers. In particular, since 1 over n is in the spectrum of this operator we have over here, and 0 is the limit as n goes to infinity of 1 over n, this result-- the spectrum being closed-- automatically implies 0 is in the spectrum of this operator. Now of course, since the spectrum of A is the complement of the resolvent set of A, you could have phrased this theorem in terms of the resolvent: the resolvent set of A is open, and the resolvent set of A contains the complement of this disk. That's, in fact, how we'll prove this theorem. We'll show that the resolvent set is open, and that the set of lambdas with modulus bigger than the norm of A is contained in the resolvent set of A. So why is the resolvent set open? Well, it follows from the fact that GL H, the space of bounded linear operators that are invertible, is an open set. Take something in the resolvent set. We have to show that there's a small ball around this complex number contained in the resolvent set, meaning A minus lambda is invertible for every lambda in the ball. Since GL H is open, there exists some small epsilon such that if T is an operator which is close to A minus lambda 0 in operator norm, then T is also invertible. Then, if lambda minus lambda 0 is less than epsilon, we get that the norm of, A minus lambda, minus, A minus lambda 0-- the A's cancel, and I just get the operator norm of lambda 0 minus lambda times the identity, where here I should have kept around the identities-- which just equals the modulus of lambda minus lambda 0, which we're assuming is less than epsilon.
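The bound "spectrum lies in the disk of radius norm of A" can be spot-checked for matrices (the random matrix is my own test data): every eigenvalue has modulus at most the operator norm.

```python
import numpy as np

# spec(A) is contained in { |lambda| <= ||A|| }: for a matrix, the spectral
# radius is at most the operator (spectral) norm.
rng = np.random.default_rng(6)
A = rng.standard_normal((8, 8))
spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
op_norm = np.linalg.norm(A, 2)
print(spectral_radius <= op_norm + 1e-12)  # True
```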
So this operator here is within epsilon of this invertible operator here, which guarantees, by how we chose epsilon, that A minus lambda is invertible, and therefore lambda is in the resolvent set of A. So we started out with something within epsilon distance of lambda 0, where epsilon was coming from this condition here, and showed that lambda is in the resolvent set-- i.e., the ball of lambdas with lambda minus lambda 0 less than epsilon is contained in the resolvent set of A. So that proves that the resolvent set is open. So now, let's show the other condition-- that if the absolute value of lambda is bigger than the norm of A, then A minus lambda is invertible. So suppose the modulus of lambda is strictly bigger than the norm of A. In particular, lambda is nonzero. Then the norm of 1 over lambda times A is less than 1, and this implies that I minus 1 over lambda A is invertible, because we can always invert I minus something that has norm less than 1. This tells me that A minus lambda is the product of two invertible operators, namely multiplication by minus lambda and I minus 1 over lambda A. Let's see. Let's make sure everything cancels. OK. So multiplication by minus lambda, where lambda is a nonzero complex number, is an invertible operator. This is an invertible operator. The product of two invertible operators is invertible, so that thing is in GL H. So it's invertible, which implies lambda, simply by definition, is in the resolvent set of A. So we supposed that we had something here and showed it's in here, which is what we wanted to do. So the spectrum is a closed subset, in fact a compact subset, of the set of complex numbers. At least for this class, that's about as much as we can say for general bounded linear operators about their spectrum. Let me just make a small comment, also, that if you know a little complex analysis, the previous proof actually allows you to say more-- so I'm just going to state this as a fact.
One could ask: could the spectrum ever be empty? The empty set is certainly a compact subset of the complex numbers. And the answer is no. But this class does not assume complex analysis, so I can't really write a proof of that and expect you to completely follow it. But here's the thinking-- if you don't know complex analysis, just skip ahead a few seconds. Suppose that the spectrum is empty, take u and v in the Hilbert space, and consider the inner product of A minus lambda, inverse, applied to u, with v. This is well defined because A minus lambda inverse always exists, the spectrum being empty. And this gives you a continuous function in lambda on the complex plane. And now what happens? One of the estimates I gave before says that as the magnitude of lambda gets large, the operator norm of the inverse of A minus lambda is bounded by a constant divided by the magnitude of lambda. So the operator norm of the inverse of A minus lambda goes to 0 as the magnitude of lambda goes to infinity. But you can show a bit more about this function I defined a minute ago involving two vectors: basically, by this Neumann series calculation, it's in fact a complex differentiable function in lambda. And thus, you have this complex differentiable function defined on the entire plane that goes to 0 as lambda goes to infinity. And by Liouville's theorem-- not Louisville, Louisville's a type of bat, a bat that you hit stuff with, not the bat that gives you COVID-- A minus lambda inverse applied to u, inner product v, has to be identically 0, because that's a complex differentiable function in lambda which goes to 0 as lambda goes to infinity. And since this applies for all u and v, this implies that the operator A minus lambda inverse is identically 0. That's a contradiction.
So the spectrum cannot be empty. Now, it will be very useful to have a different characterization. We're going to specialize from the spectrum of arbitrary bounded operators and zero in on self-adjoint operators. First, we need a little theorem: if I have a self-adjoint operator, meaning A is equal to A star, then two things are true. One, for all u in H, the inner product of A u with u is a real number. And two, I can give an alternative characterization of the norm of A: the norm of A is, in fact, equal to the sup over norm of u equals 1 of the absolute value of A u inner product u. So this actually has nothing to do with the spectrum. This is just a fact I'll need when we start studying the spectrum of self-adjoint operators, and that will be in the next lecture. So with the remainder of this lecture, we'll prove this theorem. Proof: so 1 is quite easy. Something is real if and only if its complex conjugate is equal to the number again. So if u is in H, I want to show that the complex conjugate of A u inner product u is equal to A u inner product u. By the properties of the inner product, this complex conjugate is equal to u inner product A u. Now, A is equal to its own adjoint, so this is equal to u inner product A star u. And by the definition of the adjoint, this moves over here, and we get A u inner product u again. And that's the end. Now, I keep bringing up quantum mechanics-- not because I am a researcher in quantum mechanics, I'm not. But in quantum mechanics, the observables-- meaning the things that one would actually measure, like position, momentum, center of mass, these types of things-- are modeled by self-adjoint operators. These are not bounded linear operators. They're unbounded, which we will not cover in this class, but they're self-adjoint.
And therefore, by this first property, this quantity here, which is the expectation of that measurement-- so what number 1 tells you is that the expectation of this measurement is always a real number, always something that you could measure in nature. As far as I know, the things one measures in nature, as opposed to the things used to model nature, are not complex numbers. Now for number 2: let's verify that I can, for a self-adjoint operator, write the norm in this way. So let little a be this sup. The first thing to note is that this is a finite number, and it's bounded above by the norm of A. Note that for all unit length vectors, the absolute value of the inner product of A u with u is, via Cauchy-Schwarz, less than or equal to the product of the norms of A u and u. u has norm 1, so this is less than or equal to the norm of A u. And by the definition of the operator norm of A, that's less than or equal to the operator norm of A. So this proves two things: the thing on the right-hand side is a finite number, and it's bounded above by the norm of A. So little a, the sup, is less than or equal to the norm of A. And what we'll do is prove the opposite inequality. And to do that, we'll use the first property, that for all u, A u inner product u is real. So I want to estimate the operator norm of A, which means I need to estimate the norm of A u, where the norm of u equals 1. Now what we want to show is that the norm of A is less than or equal to this number, little a. The operator norm of A is computed by taking the sup of norms of A u where u has unit length, and I just need to estimate those A u's that are nonzero. So let's take u in H with norm 1 such that when I hit it with A, I don't get 0. Let v be the vector A u over the norm of A u. Then this thing has unit length.
And so now, if I compute the norm of A u, which is the thing I want to estimate from above by little a, this is equal to A u inner product v-- because then I get A u inner product A u, which gives me the norm of A u squared, divided by the norm of A u, so I just get the norm of A u. Now, this is a real number because it's equal to this norm, so I can throw in a real part and not get into any trouble. And now I have a-- let's see, what would I call that-- polarization identity. That's the name of that, not the parallelogram law. I have a type of polarization identity for this expression, which you can verify in the quiet of your own dwelling: A u inner product v can be written as 1/4 times the quantity, A of u plus v inner product u plus v, minus A of u minus v inner product u minus v, plus i times A of u plus iv inner product u plus iv, minus i times A of u minus iv inner product u minus iv. So A u inner product v is equal to this expression, and the real part is still there; now I'm taking the real part of this number. Now, each of these four inner products here is real by number 1, and therefore the terms with the factor of i in front are purely imaginary. So when I take the real part, they go away, and this is equal to 1/4 times the quantity, A of u plus v inner product u plus v, minus A of u minus v inner product u minus v. Now, each of these is A applied to a vector, inner product with that same vector, and little a is the supremum of the absolute values of such expressions over unit vectors-- so by scaling, the absolute value of A w inner product w is at most a times the norm of w squared for any w. So this is less than or equal to a over 4 times the quantity, norm of u plus v squared plus norm of u minus v squared. And now, I use the parallelogram law.
This is equal to 2 norm u squared plus 2 norm v squared, times a over 4. u and v have unit length. So this is 1, this is 1. So I get 2 plus 2 is 4, the 4's cancel, and I get a. So thus, for all norm 1 vectors, I've proven that the norm of A u is less than or equal to a. I've done it just for those u such that A u is nonzero, but I still have this inequality if A u equals 0. So I get that the norm of A is less than or equal to a, which is what I wanted to show. So next time, we will get into some properties of the spectrum of self-adjoint operators a little more, in particular, the type of subset of the complex numbers it has to be. In fact, we'll find out that the spectrum of a self-adjoint operator has to be contained in the real line-- so you can't have anything with a nonzero imaginary part-- and that it's contained within certain bounds related to this type of expression, A u applied to u. And this also gives us an interesting way of checking whether the spectrum is contained within two real numbers. In particular, it'll tell us something about when a self-adjoint operator is non-negative, meaning A u inner product u is always non-negative. And from there, we'll start talking about the spectral theory for compact self-adjoint operators. So this is kind of the most complete thing we can say about the spectrum of a certain class of operators. And what we can say about self-adjoint compact operators is that it's pretty close to the finite-dimensional case-- namely, that these operators can essentially be diagonalized. You can find an orthonormal basis consisting entirely of eigenvectors for the operator. And we'll stop there.
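As a numerical sanity check of the identity just proved (a sketch of mine, not part of the lecture): for the self-adjoint matrix A = diag(2, -3) acting on R^2, both the operator norm and the supremum of |&lt;A u, u&gt;| over unit vectors u = (cos t, sin t) should come out to 3.

```python
import math

# Self-adjoint (real symmetric) operator on R^2: A = diag(2, -3).
# Claim being checked: the operator norm ||A|| equals sup over unit u of |<Au, u>|.
a11, a22 = 2.0, -3.0

op_norm = 0.0    # running max of ||A u|| over sampled unit vectors
quad_sup = 0.0   # running max of |<A u, u>| over sampled unit vectors
for k in range(2001):
    t = math.pi * k / 2000            # u = (cos t, sin t) is a unit vector
    c, s = math.cos(t), math.sin(t)
    op_norm = max(op_norm, math.hypot(a11 * c, a22 * s))       # ||A u||
    quad_sup = max(quad_sup, abs(a11 * c * c + a22 * s * s))   # |<A u, u>|
```

Both maxima land (to rounding) on 3, attained at u = (0, 1). Self-adjointness is essential: for the 90-degree rotation matrix, &lt;A u, u&gt; vanishes for every u while the operator norm is 1.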
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_7_Sigma_Algebras.txt
CASEY RODRIGUEZ: So let's get started. Last time, at the end of the last lecture, we introduced outer measure. So we first discussed what we wanted to measure, the properties we wanted a notion of measure of subsets of R to satisfy. We wanted it to, first off, be defined for all subsets. We then wanted the measure of an interval to be the length of the interval. We also wanted the measure of a union of disjoint subsets to be the sum of the measures, and we said we wanted it to be translation invariant, meaning if I take a set and I shift it by a fixed amount, then the measure of the shifted set is the same as the measure of the original set. But as I said last time, that's impossible. You can't. There does not exist such a thing defined on every subset of real numbers. So what we're going to do is define-- or what we're doing right now is defining outer measure, which will satisfy almost all of the properties we wanted, and is defined for every subset of R. And then we'll restrict this outer measure to a certain class of subsets of R that are well behaved with respect to measure. And then we'll get a function, now defined on a collection of subsets of R, what we'll call measurable sets. And this set function, which we call measure, Lebesgue measure, will satisfy the three properties, the main three properties we wanted-- that the measure of an interval is the length of the interval, the measure of a countable disjoint union of sets is the sum of the measures, and it's translation invariant. Now at the end of last time-- so we defined outer measure and we also proved this theorem that if we have a countable collection of subsets of R, then the measure of the union is less than or equal to the sum of the measures. Now, what we would like in the end, like I said, is to be able to have equality whenever the subsets are disjoint.
Now, this doesn't hold for outer measure, but as we'll see, once we restrict outer measure to a certain class of subsets, we will get the property that we want. Now outer measure almost satisfies one of those properties we want to have of a measure. What we're going to verify now is that outer measure does satisfy one of the properties we wanted of a measure, namely that the measure of an interval ought to be its length. So that's the following-- if I is an interval, then the measure, the outer measure of I, is equal to the length of I. So here, the length of I-- remember, if it's an interval of the form a, b, with the endpoints included or not, the length of the interval is b minus a. If it's an infinite interval, then the length is infinite. So the most involved part, or at least the crux of proving this, is doing the simplest kind of interval, which is a closed and bounded interval, a, b. So let's do that case first. Suppose I is equal to a, b. And also, something that I didn't write down here is that we have, immediately from the definition of outer measure-- so let me just pause here real quick-- that if A is a subset of B, then the outer measure of A is less than or equal to the outer measure of B, simply because if I take any collection of open intervals covering B, then that will be a collection of open intervals covering A. And since infimums of bigger sets-- let's see, which direction is that-- since infimums of bigger sets have to decrease, that gives me the inequality that I want. So going back to the proof of this theorem, let's suppose I equals a, b. What we're going to show is that the outer measure is less than or equal to the length of I, and then the reverse inequality-- that the length of I is less than or equal to the outer measure of I. So the simpler one is showing the outer measure of I is less than or equal to the length of I.
So then I is contained in the open interval, the single open interval, a minus epsilon, b plus epsilon for all epsilon positive. And therefore, since the outer measure of I is the infimum of the sum of lengths of intervals covering I, this implies that the outer measure of I is less than or equal to the length of this interval, which is equal to b minus a plus 2 epsilon for all epsilon positive. And therefore, if I have this is less than or equal to b minus a plus 2 epsilon for all epsilon positive, I can then send epsilon to 0 to conclude that the outer measure of the interval is less than or equal to b minus a, the length of the interval. So that's the simple part. So now, we're going to show that b minus a is less than or equal to the outer measure of the interval. So also, what this shows is that the outer measure of this interval is finite. It's a finite number. So to show this, what we have to show is that if I take any collection of open intervals covering I, then the sum of the lengths of those intervals is bounded below by b minus a. And if I don't write n equals 1 to infinity, it should be understood that n is always going from 1 to infinity. Let I n be a collection of open intervals such that this interval I, a, b, is contained in the union. We now want to show that b minus a is less than or equal to the sum of the lengths of the I n's. Now, the closed and bounded interval a, b-- this is a compact set (look back to 18.100B), and it is covered by a collection of open intervals-- in other words, open sets. So by the definition of compactness-- since this closed and bounded interval is compact, which is due to a special theorem by Heine and Borel-- we can find a finite covering of this interval by finitely many of these intervals. So I can choose finitely many of these intervals to cover a, b.
There exists a finite collection, which I will now denote, let's say, J k, k equals 1 to n, so contained inside this collection of open intervals-- so these are just finitely many open intervals from this collection-- such that a, b is contained in k equals 1 to n J k. So now, I have these open intervals. And now what I'm going to argue is that I can do this-- is that here's the plan. Here's a. There's b. What I'm going to argue now to you is that rearranging how I'm indexing these open intervals, that I can cover a, b in the following way-- so this is the first. So I'm going to argue that we can cover this interval like this, so that I can choose the first interval to cover some of a, b, and then maybe it covers all, in which case I would stop my construction. If it doesn't cover all of a, b, then I can cover it with J2. So now, I have a2, b2. And if that doesn't cover all of a, b, I will still have to choose some more intervals to cover it. And eventually, I'll get to-- at least in this picture, since I can't draw n intervals-- I will be able to cover a, b in such a way that these are kind of linked together, so that these intervals are linked. And then I cover all of a, b. Now, why is that great? Well, then, because the sum of the lengths of these intervals is going to be bigger than or equal to-- the sum of the lengths of these intervals is going to be what? b3 minus a1, which is bigger than or equal to b minus a. And that would give me the lower bound that I desire. So now, let me argue that we can do this-- that this picture is correct. Since a is in this-- it's in a, b, so it has to be in this finite union of open intervals-- there exists k1 such that a is in J of k1. So one of these intervals, a has to be in it. And by rearranging the intervals, I can assume that k1 equals 1, i.e., a is in J1, which is a1, b1. Now, it's possible that this whole interval covers a, b, in which case I would stop the construction. Otherwise, I continue. 
If b1 is less than or equal to b, then b1 is certainly in this interval, which implies there exists a J sub k sub 2-- let me write it this way-- there exists k sub 2 such that b1 is in J sub k sub 2. And again, by rearranging, we can assume that k sub 2 equals 2. So by re-arranging the remaining intervals, I can assume k2 is 2. So if in the case, like I've drawn in the picture, b1 is not bigger than b, then I can find another interval J2 that contains b1. So b1 is in J2, which is a2, b2. And I will just continue this until the endpoint of one of these intervals passes b. The first instance when one of these intervals passes b, I stop this process. And it has to stop because all of these intervals do cover b, so it has to occur at some point. And there's only finitely many intervals. And I'm going to write if b2 is less than or equal to b, but I'm going to put dot, dot, dot there. So what have we done by this argument? Thus, we conclude that-- maybe that's a little bit bad notation because I have n there. So let's make this a capital N. Sorry about that. Make that a capital N, since that little n appears also to index the I sub n's. So thus, we conclude that there exists a capital K with 1 less than or equal to K less than or equal to N such that two things hold. One, for all k equals 1 to K minus 1, we have that b k is less than or equal to b, and a k plus 1 is less than b k, which is less than b k plus 1. So think of this as a condition where we have not yet covered the entire interval a, b. For k equals 1, this was basically the argument I just gave here. And two, that this has to stop at some point, so that b is less than b capital K. So what I drew here-- so the picture is for K equals 3. So we have this K, and I drew the picture over there for b3. Now I'm going to show that the sum of the lengths of the intervals-- and this is just a finite collection from the bigger collection of intervals-- is bounded below by b minus a, which is kind of clear from the picture that I drew.
But we still have to write stuff down. This is not Topology. Then the sum of the lengths of the I sub n's-- this is certainly bigger than or equal to the sum over this finite subcollection of intervals of the lengths of the J k's. And this, in turn, is bigger than or equal to the sum from k equals 1 to capital K-- now here, this capital K is coming from over there-- of the lengths of J k. And now what is this? This is equal to b K minus a K plus b K minus 1 minus a K minus 1 plus, and then all the way down to b1 minus a1. But here's the trick-- remember, each of the b sub k minus 1's, they lie ahead of a sub k. So the index here is shifted just by a little bit. But this says that the previous b is in front of the next a. So I can collect terms and write this as b K plus b K minus 1 minus a K plus b K minus 2 minus a K minus 1. So I just borrowed a b coming after this term, so on, and so on, until I get b1 minus a2. And then I'm stuck with a minus a1. And by the first condition over here, everything in parentheses is non-negative. So this is bigger than or equal to b K minus a1. And what do we know? We know b K is bigger than b, and we know a1 is less than a because a is in a1, b1. So this length is bigger than b minus a. And since we've shown that the sum of lengths of intervals covering I is bigger than or equal to b minus a no matter what the collection of intervals is, then the infimum has to be bigger than or equal to b minus a. And we conclude that the outer measure of this interval a, b, is bigger than or equal to b minus a. And therefore, I have both sides of the inequality I want, and therefore, I have equality. Now, this was for a closed and bounded interval. But this essentially gives us the result for any interval. So if I take any finite interval of the form a, b-- open, or half-open, not including a or not including b-- then for all epsilon positive, at least sufficiently small, what do I have? I have that the closed interval a plus epsilon, b minus epsilon is contained in I, which is contained in the closed interval a minus epsilon, b plus epsilon.
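The chain-selection step of this proof is effectively an algorithm, and it can be sketched in code (a hypothetical helper of mine, not from the lecture): given finitely many open intervals covering [a, b], greedily pick an interval containing the current uncovered point and jump to its right endpoint; the telescoping bound then shows the selected lengths already exceed b minus a.

```python
def select_chain(a, b, intervals):
    """Given open intervals (l, r) whose union contains [a, b], select a
    'linked' subchain J_1, ..., J_K as in the proof: each J_{k+1} contains
    the right endpoint b_k of J_k, and the last one passes b."""
    chain = []
    point = a                          # leftmost point the chain must cover next
    while True:
        # the cover property guarantees some interval contains the point
        J = next((l, r) for (l, r) in intervals if l < point < r)
        chain.append(J)
        if J[1] > b:                   # b_K has passed b: chain complete
            return chain
        point = J[1]                   # b_k <= b, so keep going from b_k

# a (made-up) finite cover of [0, 5] by open intervals
cover = [(-0.5, 1.2), (2.5, 4.1), (1.0, 3.0), (3.9, 5.2)]
chain = select_chain(0.0, 5.0, cover)
# telescoping bound from the proof: total selected length exceeds b - a = 5
total = sum(r - l for (l, r) in chain)
```

The loop terminates because the current point strictly increases each step, so no interval is ever reused. With this data the chain comes out in linked order with total length 6.6, comfortably above 5.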
So if I just make it a little fatter and take that closed interval containing it, I get this side. And then if I just kind of shrink the interval a little bit and take the closed interval, this guy will be contained inside of it. And therefore, the outer measure of the smaller closed interval has to be less than or equal to the outer measure of this interval, which is less than or equal to the outer measure of the bigger closed interval. And using what we just proved for closed and bounded intervals, I get that b minus a minus 2 epsilon is less than or equal to the outer measure of the interval, which is less than or equal to b minus a plus 2 epsilon. And this holds for all epsilon positive. Recall that I was starting off for all epsilon positive, at least sufficiently small depending on b and a, smaller than the difference of b and a. And therefore, if I send epsilon to 0, I get that b minus a is less than or equal to the outer measure of this finite interval, which is less than or equal to b minus a. And therefore, the outer measure of any one of these finite intervals is the length of the interval. And then I'm going to leave it to you as an exercise, which is not difficult. If I is an infinite interval-- R, negative infinity to a, or a to infinity, with the endpoint included or not-- then the outer measure of this interval is infinity. In other words, I cannot ever cover these intervals by a countable collection of intervals whose sum of lengths is a finite number. Again, this is not hard.
So we get kind of a nice, little fact from the theorem we proved last time, which was that if I have a collection of subsets of R, then the measure of the union is less than or equal to the sum of the measures, and this theorem, which says that the measure of an interval is the length of the interval, which is the following-- for every subset A of R and epsilon positive, there exists an open set O such that A is contained in O and the outer measure of A, which is less than or equal to the outer measure of O simply because A is contained in O, is less than or equal to the outer measure of A plus epsilon. So somehow, the outer measure of subsets can be approximately measured by the outer measure of open sets. So another way to say this is, at least with respect to outer measure, every set can be approximated in outer measure by an open set. And what's the proof of this? So this is clear if the outer measure is infinite. Then I just take O to be the entire real number line. So suppose it is finite. Let I n be a collection of open intervals such that they cover A. And remember, the outer measure is the infimum. So I can get arbitrarily close to that infimum by summing lengths of certain collections of open intervals. And the sum of the lengths is less than or equal to, if you like, the outer measure of A plus epsilon. So I should have said at the beginning, let A be a subset of R and epsilon be positive, but you understand. And so what I do is I just take O to be this union of open intervals. So O is a union of open intervals. Each of these open intervals is an open set. And you should know that the union of open sets, any collection of open sets, is, again, an open set. So this is open. A is contained in O because A is contained in the union.
And the outer measure of O, which is equal to the outer measure of this union of open intervals is, by the theorem we proved last time, which I stated with the definition, or when I recalled the definition, less than or equal to the sum of the outer measures of the intervals, which we just proved is the length of these intervals. And how we chose these intervals, remember, is so that we have this condition. And this is less than or equal to the outer measure of A plus epsilon-- so with respect to outer measure, every set can be approximated by a suitable open set. So we've defined outer measure. We've proven some properties of it. Now, we're in a position to at least define measurable sets. And I should say these are Lebesgue measurable sets. So a subset E of real numbers is Lebesgue measurable-- so this is a new piece of terminology-- if for all subsets A of R, if I look at the outer measure of A, this is equal to the outer measure of A intersect E plus the outer measure of A intersect E complement. This is the definition of being a Lebesgue measurable set. So in some sense, E is a well-behaved set if it cuts every set A into reasonable pieces, additively. What's the best way to say that? I guess that's OK. So a set is measurable if and only if for all A, we have this equality here. So let me make a few remarks. First off, this is the left-hand side. No matter what A and E are, the left-hand side is always less than or equal to the right-hand side by the theorem we have up there, that the outer measure of the union is less than or equal to the sum of the measures. So since for all A and E, A is contained in, or in fact, equal to A intersect E union A intersect E complement, we get that the outer measure of A is always less than or equal to the outer measure of A intersect E plus the outer measure of A intersect the complement of E. This is regardless of A and E. I mean, if E is measurable or not, this always holds.
So since this always holds, we could state being measurable just by, instead of equality, satisfying one of the inequalities. Thus, E is measurable if for all subsets A of R, we have that the outer measure of A intersect E plus the outer measure of A intersect E complement is less than or equal to the outer measure of A. Because like I said, we know no matter what A and E are, the other inequality holds. So if I want equality, then I have to have this as less than or equal to that. So E is measurable if and only if this is less than or equal to the outer measure of A. So now we have the silliest-- well, let's not state it as an example, but let's state it as a theorem. It will be a silly theorem, one which I will not write the actual proof of. But the empty set-- so again, I've told you before that when I'm writing on the board, writing in my notes, I typically shorten words. So "measurable" you will see written as "mble" throughout the notes, and also when I write on the board. I forget the word for what it means to shorten words. So the empty set is measurable. R is measurable. And we have the fact that a subset of R is measurable if and only if its complement is measurable because either one of these characterizations-- remember, by what we've proven about outer measure, this equality is equivalent to requiring this inequality-- is symmetric in E and E complement. So E is measurable if and only if its complement is measurable. All right, so these are kind of the stupidest measurable sets you could have. Again, the empty set-- why? Because this is then empty and the outer measure of the empty set is 0. And then over here, I would just get the measure of A intersect the empty set complement, which is R. So I get the outer measure of A is equal to the outer measure of A. And since the empty set is measurable, its complement R is also measurable. So let's do some non-stupid examples of measurable sets, still kind of trivial because they don't make up very much. But if a set has outer measure 0, then it's measurable. So how do we prove this?
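To see the Carathéodory condition doing real work, here is a toy sketch (entirely my own construction, not from the lecture): on a three-point set, take the "outer measure" that assigns 0 to the empty set and 1 to every nonempty set. This is monotone and subadditive but badly non-additive, and brute force shows that only the empty set and the whole space split every test set additively.

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def subsets(S):
    """All subsets of a finite set, as frozensets."""
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def mu_star(A):
    """A toy 'outer measure': 0 on the empty set, 1 on any nonempty set.
    Monotone and subadditive, but far from additive."""
    return 0 if not A else 1

def caratheodory_measurable(E):
    """E is measurable iff it splits every test set A additively."""
    return all(mu_star(A) == mu_star(A & E) + mu_star(A - E)
               for A in subsets(X))

measurable = [E for E in subsets(X) if caratheodory_measurable(E)]
```

Only the empty set and X pass; any other E fails on a test set A meeting both E and its complement. This is exactly the kind of failure the Carathéodory criterion screens out.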
Again, we just need to prove this inequality here for every subset of R. So let A be a subset of R. Then A intersect E, this is contained in E. And therefore, since we know outer measure is-- so the fancy word for when A is a subset of B, the outer measure of A is less than or equal to the outer measure of B-- the fancy word for that is "monotonicity." But since the outer measure is monotonic, that tells me that the outer measure of A intersect E is less than or equal to the outer measure of E. And since that is 0, that implies that the outer measure of A intersect E is 0. Because the outer measure is always a non-negative number. Thus, the outer measure of A intersect E plus the outer measure of A intersect E complement-- this is 0, so this is equal to the outer measure of A intersect E complement. A intersect E complement is contained in A. So the outer measure of A intersect E complement is going to be less than or equal to the outer measure of A, again, because this set is contained within A. And thus, E is measurable. So right now, after two theorems, we've shown that the uninteresting sets are measurable, if by "uninteresting" we mean having very little measure, even though I haven't defined measure yet. I've just defined outer measure. I'm using these terms interchangeably, unfortunately, but you'll see why either by the end of this class or the end of next class. But so far, we don't have very many interesting examples of measurable sets. What we will show is that there's a lot of interesting measurable sets. In fact, every open set is measurable, and since if an open set is measurable its complement is measurable, and the complements of open sets are closed sets, then we have that every closed set is measurable, as well. But in fact, it's much richer than that, as we'll see. You can take a countable collection of open sets and take their intersection.
That's not necessarily an open set, so it's not clear if it's measurable by what I just said. But it turns out that this intersection of open sets will be measurable. And again, by taking complements, you get that unions of closed sets, which are not necessarily closed, are also measurable. So when I learned measure theory, my instructor told me that if you can write down the set, chances are it's measurable. If you can sit down and write down a union of intersections of complements of so on and so on of some basic sets, then that's measurable. And we'll see why that's true shortly. Now, before we get to showing everything I just told you, we need some general facts about measurable sets, about the structure of the collection of measurable sets. Right now, we just know that the collection of measurable sets includes the empty set, R, and the sets that have outer measure equal to 0, which I see I didn't write. This is also the danger of lecturing in an empty classroom, that if I make a mistake on the board, there's no one to correct me. So next, we'll prove the following theorem about measurable sets-- that if I have two measurable sets, their union is measurable. If E1, E2 are measurable, then their union, E1 union E2, is measurable. So again, we have to verify that inequality to show that it's measurable. So let A be a subset of R. So since E2 is measurable, we get that the outer measure of A intersect the complement of E1 is equal to the outer measure of A intersect E1 complement intersect E2 plus the outer measure of A intersect E1 complement intersect E2 complement. You may be asking, where the hell did this come from? Why do I care? Well, it's because I'm getting something. Remember, this is equal to, by De Morgan's law, the complement of the union of E1 and E2. So I want to somehow have some relation involving the complement of E1 union E2 intersect A.
Because again, I'm trying to show that the outer measure of A intersect E1 union E2 plus the outer measure of A intersect the complement of E1 union E2, which is exactly this term, equals the outer measure of A. So it looks like this relation was just grabbed out of left field. But that's the thinking behind why you would care. So now, A, if I take A and intersect it with E1 union E2, this is equal to A intersect E1 union A intersect E2. Now, everything from here that also has something in common with E1 is contained in this set. So this union is actually equal to A intersect E1 union A intersect E2 intersect E1 complement. Now, this appeared here, so you can kind of see maybe some magic is going to start to happen in a minute when we start taking the measure. So what do we get? We get that the outer measure of A intersect E1 union E2, which is this side-- since this is equal to this union, the outer measure of this is less than or equal to the sum of the outer measure of this and the outer measure of this. Now, we use the fact that E1 is measurable. So since E1 is measurable, the outer measure of A intersect E1 is equal to the outer measure of A minus the outer measure of A intersect E1 complement. Or I should say that's backwards. And then plus, still, this outer measure here. But now, we're in good shape because what do we have? We have this term here appearing here. I also have this term here appearing here. So subtracting this over here and subtracting that over there, I get that the right-hand side is equal to the outer measure of A minus the outer measure of A intersect this intersection of complements. And I'll just rewrite this intersection of complements as the complement of the union by De Morgan's law.
And remember, I started off with the outer measure of A intersect E1 union E2, and I showed it's less than or equal to the outer measure of A minus the outer measure of A intersect the complement, and therefore, the outer measure of A intersect E1 union E2 plus the outer measure of A intersect E1 union E2 complement is less than or equal to the outer measure of A. And that's what we wanted to prove, and therefore, E1 union E2 is measurable. Now, if you can do something for two things, you can do something for n things, typically, by an induction argument. So the previous theorem implies the following-- that if E1 up to E n are measurable, then this finite union k equals 1 to n E k is measurable. And how do you prove this? You prove it by induction. So proof by induction-- when n equals 1-- the base case-- this is clear. Suppose-- so call this claim star-- suppose this claim star holds for n equals m. Now we want to show it holds, that this implies that the claim of the theorem holds with n equals m plus 1. So let E1 up to E m plus 1 be measurable. Then the union k equals 1 to m plus 1 of E k is equal to the union k equals 1 to m of E k, unioned with E sub m plus 1. Now, by the induction hypothesis, a collection of m measurable sets, their union is measurable, so this is measurable by the induction hypothesis. And we're assuming this is measurable. And then by the previous theorem, the union of two measurable sets is measurable. And that proves the theorem. So up to this point, we've shown two basic things about measurable sets-- really three. First off, it's nonempty. It's a nonempty collection of measurable sets. We've shown that a set is measurable if and only if its complement is measurable. And we've also shown that finite unions of measurable sets are again measurable. And therefore, if I look at the collection of all measurable sets, this has a very special structure or a very general type of structure, which I'm now going to elaborate on.
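A finite toy illustration of these closure theorems (my own construction, not from the lecture): on a four-point set, take the outer measure induced by covering with blocks of a partition. Brute-forcing the Carathéodory condition, the measurable sets come out to be exactly the unions of blocks, and one can check directly that they are closed under complements and finite unions.

```python
from itertools import combinations

X = frozenset(range(4))
blocks = [frozenset({0, 1}), frozenset({2}), frozenset({3})]   # a partition of X

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def mu_star(A):
    """Outer measure induced by covering A with whole partition blocks:
    the total size of the blocks that A touches."""
    return sum(len(B) for B in blocks if A & B)

def measurable(E):
    """The Caratheodory condition against every test set A."""
    return all(mu_star(A) == mu_star(A & E) + mu_star(A - E)
               for A in subsets(X))

M = [E for E in subsets(X) if measurable(E)]
```

The 8 measurable sets are exactly the unions of blocks; {0} fails because the test set A = {0, 1} gets cut into two pieces that each still touch the block {0, 1}. And as the theorems assert, M is closed under complements and finite unions.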
So let's take a pause here about measurable sets because now, we're going to say a few general things about certain classes of sets or certain classes of collections of sets. So let me make a definition here. A nonempty collection of sets-- so this is a collection of subsets of R, so it's a subset of the power set of R-- is an algebra. So some of you have taken Algebra or are taking Algebra. So there is a notion of what an algebra is in Algebra. Here, we say a collection of sets is an algebra if in some sense it's closed under taking complements and finite unions. If two conditions are satisfied-- if E is in the algebra, then its complement is in the algebra, and two, if I take finitely many elements in the algebra and finitely many subsets from this algebra of subsets, then this union is also in this algebra. Now, this is for finite unions. You can ask, what about infinite unions? That's not equivalent to finite unions, meaning that's a stronger condition to impose. So we have a special name for collections of subsets of that type. We say an algebra A is a sigma algebra-- sigma for, if you like, sum, because we really have a summation in mind-- if also the following, stronger condition is satisfied-- if I have a countable collection of subsets in A, so a countable collection of elements of the algebra, then their union is in the algebra. So an algebra is closed under taking complements and finite unions. A sigma algebra is closed under complements, and also the stronger condition of taking countable unions. So of course, this condition implies this condition-- actually, that's not yet clear. We'll see. Although it's not in the definition, we'll see in just a second that, in fact, this implies that the empty set has to be a member of the algebra. And therefore, this condition implies this condition if you just take finitely many of these and then take them to be empty after some finite n. So that's the definition of an algebra. 
That's the definition of a sigma algebra. Let me give a few examples real quick or a few remarks about this. First off, by De Morgan's laws, which tell you the complement of a union is the intersection of the complements, we get that E1 through E n in the algebra implies that their intersection-- so this is just for an algebra now-- which is equal to the complement of the union of their complements, is in the algebra. Since each of these is in the algebra, the complement is in the algebra. And therefore, the finite union of the complements is in the algebra. And therefore, the complement of that is in the algebra. So not only is an algebra closed under taking complements and finite unions, it's also closed under taking intersections, finite intersections. So thus, if I take some element from my nonempty algebra, the empty set, which is equal to E intersect E complement, that's in the algebra. So let me pause for a minute and write this a little carefully. If I take an element E of the algebra, which is supposed to be nonempty, then the empty set, which is equal to E intersect its complement, that's in the algebra. E complement is in the algebra, and the intersection of two things in the algebra is in the algebra. So that's also in the algebra. And also, this implies that R, which is equal to the complement of the empty set, is also in the algebra. So for algebras of sets, they always contain the empty set for nonempty ones, which is the only kind we're ever going to care about. They always contain the empty set and R. And not only are they closed under taking complements, but also finite unions. And just like we proved that for algebras, finite intersections are also in the algebra, you can also prove that for a sigma algebra, countable intersections are also in the sigma algebra. So let me make a point of that.
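The two algebra axioms can be checked mechanically on a finite universe, where closure under pairwise union already gives all finite unions, and an algebra and a sigma algebra are the same thing since there are only finitely many distinct sets. A sketch (the helper `is_algebra` and the example collections are my own, not the lecture's notation):

```python
from itertools import combinations

def is_algebra(X, A):
    """Check the algebra axioms for a collection A of subsets of the finite
    set X: nonempty, closed under complement, closed under pairwise union
    (hence under all finite unions, by the induction argument above)."""
    A = {frozenset(E) for E in A}
    return (bool(A)
            and all(X - E in A for E in A)
            and all(E | F in A for E in A for F in A))

X = frozenset({0, 1, 2})
trivial = [set(), X]                                        # smallest algebra
power_set = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]
not_closed = [set(), {0}, X]          # missing {1, 2}, the complement of {0}
```

The trivial collection and the power set pass; `not_closed` fails the complement axiom. Note the checker never demands the empty set explicitly: as in the remark above, it is forced by nonemptiness plus the two closure properties.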
If A is a sigma algebra, then E n, a countable collection of elements of the sigma algebra, implies that this countable intersection is also in the algebra, again by De Morgan's law. So why am I making all these general definitions now? So what we're going to show soon, at least in the next lecture, we'll soon show that if I define script M as the set of all measurable subsets of R, so this is a collection of subsets of R, that this is a sigma algebra-- one of the things we'll show. Now, the way we're doing this is we had these ideas of what a measure should be, and we're building it up from this way, and in the end we're going to come up with this collection of measurable sets, which will have this special structure of being a sigma algebra. And then our measure will be defined on this sigma algebra of sets. When you go on to general measure theory, that is the input for you. A measure space will then be a collection, a sigma algebra of subsets of a set, with a measure on it. That's your input. Here, you're building, if you like, one of the first nontrivial measures. That's what we're doing, one of the most important measures, really. So we have that definition. If you've taken my class before, you know that if there's a definition, then we should see an example or two. I said that the set of measurable sets will be a sigma algebra. That's going to take a little work to get to, but we can come up with a few examples already. So some simple examples-- we have the stupidest example. Always start off with the stupidest examples. That's the way to go. So the simplest sigma algebra is given by this collection of subsets consisting only of the empty set and R. The next stupidest one is on the other end of the spectrum, where every subset is in this sigma algebra. So these are sigma algebras. What's a non-stupid one? Let's say we take A to be the set, the collection of all subsets E, such that either E is countable or E complement is countable. So why is this?
I claim this is also a sigma algebra. Why? First off, it's clear that if E is in A, then its complement is in A, because the complement of the complement of E is E again-- the defining condition is symmetric in E and E complement. So E in A implies that E complement is in A. Why is it closed under taking countable unions? Suppose I take a countable collection of elements E n from A. I want to show the union is in A. If for all n, E n is countable, then the union over n of E sub n is a countable union of countable sets. And therefore, this is countable, which implies this guy is in the collection A. That settles the case where all of the E n are countable. Otherwise, there exists an integer n0 such that E sub n0 complement is countable. Then, remember, what I have to verify is that the union of the E n's is in the set A, which means either it's countable or its complement is countable. So if I look at the complement of the union, this is equal, by De Morgan's law, to the intersection of the complements. And this is contained in one of these guys, namely E sub n0 complement. And therefore, this is a subset of a countable set, which implies that this is countable, which is one of the conditions for membership in A. And therefore, the union is in A. So this collection of subsets of R, such that E is countable or the complement of E is countable, is a sigma algebra of sets. It's usually referred to as the co-countable sigma algebra. And one can define a measure on this sigma algebra. But we're only interested in Lebesgue measure, which is going to be defined on, as we'll see, the sigma algebra of Lebesgue measurable subsets of R. So let's do one more example. This will just take a minute, and it's not too technical. I know it's the end of the lecture. So maybe-- well, I mean, you're at home watching this. You can pause it at some point, get a snack, change pajamas, whatever.
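The two-case argument can be mimicked in a few lines of Python. This is only a bookkeeping sketch, and the tags are an assumption of the example (they stand in for actual subsets of R, which of course cannot be enumerated): a set is tracked just by whether it is countable or co-countable.

```python
# Toy model of the case analysis for the co-countable sigma algebra.
# Each "set" is represented only by a tag: "ctble" = countable,
# "co-ctble" = complement countable.  (Hypothetical encoding.)
def complement_tag(tag):
    # The membership condition is symmetric in E and its complement.
    return "co-ctble" if tag == "ctble" else "ctble"

def union_tag(tags):
    # Case 1: all countable -> a countable union of countable sets is
    # countable.  Case 2: some E_{n0} is co-countable -> the union's
    # complement sits inside E_{n0}'s countable complement.
    return "co-ctble" if "co-ctble" in tags else "ctble"

assert complement_tag("ctble") == "co-ctble"
assert complement_tag("co-ctble") == "ctble"
assert union_tag(["ctble"] * 5) == "ctble"          # case 1
assert union_tag(["ctble", "co-ctble", "ctble"]) == "co-ctble"  # case 2
```

Both branches of `union_tag` land on one of the two tags, which is exactly the statement that the union is again a member of A.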
But anyway, here's one last example of a sigma algebra-- in fact, a very important one-- which is the following. Let capital Sigma be the collection of all sigma algebras containing all open sets. So let's unwind this definition for a minute. You are in this collection of sigma algebras if you are a sigma algebra and you contain all open subsets of R. For example, the power set of R, which is a sigma algebra-- a trivial one-- contains every subset of R, so it certainly contains every open subset of R. So the power set is in this collection of sigma algebras; in particular, capital Sigma is nonempty. And let me define script B to be the intersection over all sigma algebras A in this collection capital Sigma. So this is the intersection of every sigma algebra containing all open sets. Note that this is an intersection of collections of subsets of R, and since Sigma is nonempty, the intersection makes sense. Then B is-- and I'm going to say a lot in these next few words-- the smallest sigma algebra containing all open subsets of R. And we call it the Borel sigma algebra. So how do you think of this? You take every sigma algebra in the universe that contains every open set. You take the intersection of all these sigma algebras. My claim is that you get a sigma algebra, and this is the smallest sigma algebra containing all open subsets of R. Another way to say this last sentence is: B is in Sigma, and for all A in Sigma, B is contained in A. So it's the smallest sigma algebra containing all open subsets of R. The proof is not hard. You just need to make sure you understand exactly what I'm saying here. And if you understand what I'm saying here, the proof is quite simple. And I'm only going to do one part. So first off, once I show that B is a sigma algebra, then the rest follows, because if I take any open subset of R, it's contained in every one of these. And therefore, it's contained in the intersection.
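On a finite space, "smallest sigma algebra containing some generators" can be computed directly by closing under complements and unions until nothing new appears, which may help build intuition for the intersection definition. The four-point space and generators below are made up for illustration; for R and the open sets no such finite procedure exists, which is exactly why the intersection-of-all-sigma-algebras definition is used.

```python
# Finite-space sketch: the smallest algebra containing `generators`
# (on a finite space, sigma algebra = algebra, so closing under
# complements and pairwise unions suffices).
def generated_algebra(points, generators):
    X = frozenset(points)
    A = {frozenset(), X} | {frozenset(g) for g in generators}
    while True:
        new = {X - E for E in A} | {E1 | E2 for E1 in A for E2 in A}
        if new <= A:          # fixed point: closed under both operations
            return A
        A |= new

B = generated_algebra(range(4), [{0}, {0, 1}])

# The atoms of the generated algebra are {0}, {1}, {2,3}, so it has
# 2**3 = 8 members -- strictly smaller than the 16-element power set.
assert len(B) == 8
assert frozenset({1}) in B        # {1} = ({0} ∪ {0,1}ᶜ)ᶜ
assert frozenset({2, 3}) in B     # complement of {0,1}
```

The result being strictly smaller than the power set illustrates "smallest": every sigma algebra containing the generators must contain all 8 of these sets, and this one contains nothing else.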
So every open subset is contained in B or every open subset is an element of B. I shouldn't say contained, but every open subset is an element of B. And so I just need to show that B is a sigma algebra. And since it's equal to the intersection of all sigma algebras containing every open subset, it has to be the smallest one. Any other one-- this intersection for any fixed one, this intersection is contained in any fixed one. And therefore, B is contained in any fixed sigma algebra which contains all the open subsets. So I just need to verify that B is a sigma algebra. And since I'm running out of time, I'm going to do one part, or verify just one part of the definition. Because the other part's essentially the same except you have to use more chalk. So I just need to verify that B is a sigma algebra. Now, suppose E is an element of B. So this means it's a subset of R. It's one of these subsets of R that's an element of B. Then for all sigma algebras in this collection, since B is the intersection over all of these sigma algebras, E is in A. And since each of these is a sigma algebra, I get that the complement is in A. And this statement says for every sigma algebra in this collection, the complement is in that sigma algebra, and therefore, E complement is in the intersection, which is, remember, how I've defined the Borel sigma algebra. And again, the proof of being closed under countable unions is sort of the same. Take a countable collection of elements of B. Then this countable collection-- they all have to be elements of A for every A in sigma. Therefore, their union has to be an element of A for every A in sigma because A is a sigma algebra. And therefore, the union is in the intersection over all A's which is equal to B, the Borel sigma algebra. So what we're working towards and what we're going to do next lecture is we're going to show that M, the collection of Lebesgue measurable subsets of R is a sigma algebra and it contains the Borel sigma algebra. 
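The complement step of that verification can be compressed into one line (the countable-union step is the identical pattern, with a countable union in place of the complement):

```latex
E \in \mathcal{B} = \bigcap_{\mathcal{A} \in \Sigma} \mathcal{A}
\;\Longrightarrow\;
E \in \mathcal{A} \ \ \forall\, \mathcal{A} \in \Sigma
\;\Longrightarrow\;
E^{c} \in \mathcal{A} \ \ \forall\, \mathcal{A} \in \Sigma
\;\Longrightarrow\;
E^{c} \in \bigcap_{\mathcal{A} \in \Sigma} \mathcal{A} = \mathcal{B} .
```

The middle implication is just the fact that each $\mathcal{A} \in \Sigma$ is itself a sigma algebra.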
So it contains this very big collection of subsets of R. I mean, this is truly a very big subset of R, like I was saying. It contains all open subsets. And since it's a sigma algebra, it contains the complements of all open subsets, i.e. closed subsets. but then it also contains all intersections of open subsets because sigma algebras are closed under taking countable intersections, as well. And then, you could take countable unions of those countable intersections of open subsets, and then, and so on, and so on. So you get a very rich class of subsets of R that's contained in the sigma algebra. And what we're going to show is that, like I said, the collection of Lebesgue measurable subsets of R, that's a sigma algebra and it contains the Borel sigma algebra. So it's a very rich class of subsets, even though up to this point, all we've shown is that the empty set R and the sets with outer measure equal to 0 are Lebesgue measurable. So we'll do that next time. And I'll stop there.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 5: Zorn's Lemma and the Hahn-Banach Theorem
CASEY RODRIGUEZ: All right, so today we are going to prove the theorem that I mentioned last time, the Hahn-Banach theorem, which is a theorem about being able to extend a bounded linear functional on some subspace of a normed space to all of the normed space. And this will, therefore, answer the question that we posed at the beginning of this topic: whether or not the dual, which is the space of all bounded linear functionals on a normed space, is nontrivial for every normed space. Now, one of the tools we're going to need is this axiom, or lemma, from set theory, which is due to Zorn, and which is as follows-- and you're going to have to recall from the last lecture what some of these words mean. Zorn's Lemma states that if every chain in a nonempty, partially ordered set E has an upper bound, then E has a maximal element. So a partially ordered set E-- that means the set E with a relation that's basically like a less than or equal to, satisfying three properties, so that it's like an extended less than or equal to. And a chain is a subset of E such that any two elements in that subset can be compared: one is always bigger than or equal to the other. And so this theorem says that if every chain of a partially ordered set has an upper bound-- that has a pretty clear meaning-- then E has a maximal element. A maximal element of E-- that means an element that is not less than or equal to anything other than itself. So anything bigger than or equal to this maximal element has to be that element. A maximal element is not necessarily an upper bound. A maximal element just means nothing can get over its head. And so as a warmup, we're going to use Zorn's Lemma to prove a fact about vector spaces.
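The distinction between "maximal element" and "upper bound" is easy to see in a small finite example. A minimal Python sketch (the particular poset, divisibility on a handful of integers, is chosen just for illustration):

```python
# Partial order: a <= b iff a divides b, on a small set of integers.
E = {2, 3, 4, 9, 12}
leq = lambda a, b: b % a == 0

# Maximal elements: nothing in E sits strictly above them.
maximal = {a for a in E if all(not (leq(a, b) and a != b) for b in E)}
assert maximal == {9, 12}

# 9 is maximal, yet it is NOT an upper bound of E (2 does not divide
# 9) -- exactly the caveat made in the lecture.
assert not leq(2, 9)
```

Every chain in this poset has an upper bound inside E (for instance, the chain 2, 4, 12 is bounded by 12), and E indeed has maximal elements, consistent with what Zorn's Lemma predicts; but a maximal element such as 9 need not sit above everything.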
So a Hamel basis-- and I think I went over this at the end of class last time-- of a vector space V is a linearly independent set H such that every element of V is a finite linear combination of elements from H. So for example, the set consisting of the vector (1, 0) and the vector (0, 1) is a Hamel basis for R2. Every element in R2 can be written as a finite linear combination of these guys. And the fact that finite-dimensional spaces have bases is something you discuss in linear algebra, but now we're in functional analysis, which is linear algebra in infinite dimensions. It's not so clear that every vector space has a Hamel basis. And so what we're going to do is apply Zorn's Lemma to prove that every vector space does have a Hamel basis. So this is the following theorem, and this argument will be a kind of warmup for how we'll apply Zorn to prove the Hahn-Banach theorem. So: if V is a vector space, then V has a Hamel basis. For the proof, I am going to apply Zorn. So I need to have some ordered set. And my ordered set is going to be: let E be the set of all linearly independent subsets of V. And we're going to define an order on E. So E is the set of all subsets of V that are linearly independent. And we define a partial order on E by inclusion. So the elements of E are subsets of V, and we'll say one subset is less than or equal to another subset if one is included in the other. So for e and e prime in E, we'll say e is less than or equal to e prime if and only if the subset e of V is a subset of e prime. And again-- I think I said this in a previous lecture-- I'm used to using this notation for a subset, not necessarily a strict subset, just from teaching 18.100A last semester. So this does not mean a strict subset, so maybe put that in there. So this is my partially ordered set, to which I hope to apply Zorn's Lemma.
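In the finite-dimensional setting of the R2 example, the poset in this proof-- linearly independent subsets ordered by inclusion-- can be made concrete. A small sketch using exact rational arithmetic (the `rank` helper is ad hoc, written just for this illustration):

```python
from fractions import Fraction

def rank(vectors):
    # Gaussian elimination over exact rationals (ad hoc helper).
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

# The lecture's Hamel basis of R^2 ...
H = [(1, 0), (0, 1)]
assert independent(H)
# ... is a maximal element of the inclusion poset: adjoining any
# further vector destroys linear independence, mirroring how the
# maximal element produced by Zorn's Lemma must span V.
assert not independent(H + [(3, 5)])
assert independent([(1, 0)])   # smaller independent sets sit below H
```

The last two assertions are the two halves of the theorem's punchline: H sits above smaller independent sets, and nothing independent sits above H.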
And then what I will show is that the maximal element of this set, once I show it exists, in fact has to be a Hamel basis for V. So now we'll apply Zorn towards applying Zorn. Let C be a chain in E. That means any two elements of C can be compared. Define little c to be equal to union over all e in capital C e. So each of these little e's is a subset of V consisting of linearly independent elements. And now what I'm taking little c to be is the union of all these subsets of linearly independent elements. And I claim that this is a linearly independent subset which, since this subset of V contains every element of c, means c is bigger than or equal to all of E. Thus, c is an upper bound for C. So we just need to show that little c is now linearly independent subset, and therefore, all of these e's in this capital C are bounded above by c. And therefore, c is an upper bound for capital C. Now, to show it's a linearly independent subset, we're going to use the fact that capital C is a chain-- that you can always compare two things. So let's show that little c is a linearly independent subset. That's something very specific. So let v 1 up to v N be in little c. Then there exists e1, e2, e n in capital C. So little c is the union of all the e's. So these have to come from somewhere such that for all j, v j is in e j. Now, it's not difficult to show by induction that since I can compare any two elements in C, I can compare any n elements of C, meaning I can actually order any finitely many elements in C. So to say this is a chain means I can always order two elements, but by induction, it's not difficult to show I can always order n elements of C. So I'm just going to skip to that. And I'll leave it to you, meaning that I can always find a biggest element out of any finite collection of these guys. 
So since C is a chain, there exists capital J such that for all j equals 1 up to N, e j is less than or equal to e capital J, which again, remember, this means that e j is a subset of e capital J, just by how we've defined this partial order. And therefore, since all of these e j's for j from 1 to N are contained in this e sub capital J, that means that v1 up to v n are in E capital J. So I had this finitely many from C. I can always compare any two of them. And therefore, I pick the biggest linearly independent subset out of this finitely many. And all of these vectors have to then come from that linearly independent subset. And therefore, since this is a linearly independent subset, that means these are linearly independent, since e J is a linearly independent subset. And so we've shown every finite collection of vectors in C is linearly independent. So we've concluded that little c is a collection of linearly independent vectors. So we've now shown that the hypotheses of Zorn are verified-- that every chain has an upper bound, and therefore, this set E has a maximal element, which I'll call H. So I claim that H now spans the vector space V, meaning every element of V can be written as a finite linear combination of elements of H. So I claim that H spans V. So when I say H spans V, that's just a short way of saying that every element of V can be written as a finite linear combination of elements of H. So suppose not. Then there exists an element v in capital v such that v cannot be written as a finite linear combination of elements of H. Now, it's something from linear algebra. I'm sure they went over this before, but if I have a linearly independent subset and an element that can't be written as a finite linear combination of elements from that subset, then just by adding that element, I now obtain a new linearly independent subset. And therefore, I conclude that the set H union v is linearly independent, a linearly independent subset of v. 
But then H is strictly below H union v-- less than or equal to, but not equal to it-- which implies H is not maximal, which is a contradiction. Remember, H was supposed to be a maximal element of E. Nothing sits above H. And if we assume that H does not span V, then we can tack onto H something making a bigger linearly independent subset of V. And that results in a contradiction. So that must contradict our initial assumption that H did not span V. And therefore, by definition, it's a Hamel basis. So we've seen this first exercise of using this very powerful weapon, Zorn's Lemma, to prove the fact that every vector space has a Hamel basis. And now we're going to use it to prove the Hahn-Banach theorem. So let me state the Hahn-Banach theorem for you, and then we're going to discuss the strategy. Actually, what I'm going to do is state the Hahn-Banach theorem, state a lemma, and then give you the plan for proving the Hahn-Banach theorem. So the Hahn-Banach theorem is: if V is a normed space, M is a subspace of V, and u going from M to C is linear-- so it's a bounded linear functional-- such that for all t in M, u of t, which is now a complex number, has absolute value less than or equal to a constant C times the norm of t, then there exists a continuous extension of u to the entire space, with the same constant here. So remember, think of this constant as being essentially the norm of little u; we can extend u to a bounded linear functional with essentially the same norm. That is, there exists a capital U, which is a bounded linear functional from V to C-- so it's an element of the dual space-- such that capital U, when I restrict it to M, gives me little u, and for all t in V, the absolute value of capital U of t is less than or equal to the same constant C times the norm of t.
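Collected in one place (the restriction condition was stated slightly out of order in the spoken version), the theorem reads:

```latex
\textbf{Theorem (Hahn--Banach).}
Let $V$ be a normed space, $M \subseteq V$ a subspace, and
$u : M \to \mathbb{C}$ linear with
\[
  |u(t)| \le C\,\|t\| \qquad \text{for all } t \in M .
\]
Then there exists $U \in V'$ such that
\[
  U\big|_{M} = u
  \qquad \text{and} \qquad
  |U(t)| \le C\,\|t\| \quad \text{for all } t \in V .
\]
```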
So I should have put this all together at the start, but this is the Hahn-Banach theorem. And this is a very, very useful theorem to have. In fact, in the exercises, you can use this theorem to prove that the dual of little l infinity is not little l1. So remember, from the second assignment and from what I've said in lectures, the dual of little l p is little l q, where 1 over p plus 1 over q equals 1, as long as p is bigger than or equal to 1 and less than infinity. And it doesn't work for p equals infinity. Using the Hahn-Banach theorem, you can show why it doesn't work for little l infinity. That'll be in the assignment. Now, so I don't have to keep writing all of this over and over again, I'm going to refer to this as: U is a continuous extension of little u. That's not quite precise, because we are not only extending little u to a bounded linear functional on V; we're also extending it so that capital U satisfies the same bound as little u. So this is a little imprecise, but I think you get my meaning. And this assumption-- namely, that we have a subspace and a bounded linear functional on the subspace satisfying this bound-- let me denote by star. Now, I'm not going to prove the Hahn-Banach theorem just yet. I'm going to state a lemma and then tell you how we're going to use it to prove the Hahn-Banach theorem. Actually, we'll state it slightly differently; I won't use the star notation. So: if V is a normed space, M, a subset of V, is a subspace, and u from M to C is linear such that the absolute value of little u of t is less than or equal to a constant times the norm of t for all t in M-- I think the difference between capital U and little u is clear enough-- and, one last assumption, I take an x in V that's not in the linear subspace M, then I can extend u at least in one direction, namely to the subspace of V consisting of M together with the direction x.
So then, there exists u prime in the dual-- so I shouldn't use u prime. Let's say v-- OK, well, let's use u prime-- u prime from M prime to C, which is linear. And here, M prime is defined to be the subset or the subspace M plus-- so this does not mean quotient space. We were using pluses for quotient spaces. But this is the subspace of V consisting of M plus elements of the form a constant times x. So this is elements of the form t plus ax, where t is in M, a is in C, such that u prime when restricted to M gives me u for all t prime in M prime, u prime of t prime, absolute value, is less than or equal to the same constant times the norm of t prime. So maybe I made a mess of that, speaking while I was writing a lemma. But anyways, let's say you have a bounded linear functional on a subspace of V, and you take an element that's not in M. So the way I'll draw the picture is there's M, and here's something x. Then, I can extend this bounded linear functional which lives on M to now the subspace consisting of all elements of M plus elements plus scalar multiples of x. And I can do this in a continuous way, meaning I get a bounded linear functional on M prime, which satisfies the same bound as u did. So what's the strategy for using this to prove the Hahn-Banach theorem? Just so that we're clear on why we would be interested in such a thing, so we apply Zorn's Lemma to all continuous extensions of little u. So now, I'm talking about how we prove Hahn-Banach theorem. So we define as our partially ordered set would be the set of all continuous extensions of u. And then we would put a partial order on that, where one extension is bigger than or equal to another extension if that extension extends the smaller extension. And using the argument we did for proving the vector space has a Hamel basis, we can then show that Zorn's Lemma applies. And then, we'll have a maximal element of this set of continuous extensions of u. 
Now, what we would like to conclude is that this maximal element, this maximal continuous extension, is defined on all of V. And so how would we prove that? Well, we would do it just kind of like we did for the Hamel basis case. We would suppose not, and then we would show that if it was not defined on the entire normed space, then we can extend that maximal element, again using this lemma, and therefore, contradicting the fact that that extension was a maximal element. So in short, we apply Zorn's Lemma to all continuous extensions of u to get a maximal element, capital U. Two, we use the lemma to show U is this extension, is defined on all of capital V. So these extensions will come with two pieces of information. One is the subspace, which is bigger than M, and then also, the functional itself. And so we'll use the lemma to show that the subspace that this capital U is defined on is all of V by showing that if it's not, then we can extend it to a slightly bigger subspace using the lemma, which would contradict the fact that this is a maximal element. So that's the plan. So in fact, since we have this lemma already here, and since I've said it so many times, let's go ahead and just prove the Hahn-Banach theorem assuming this lemma holds. We don't need the proof of this lemma to actually prove the Hahn-Banach theorem. We just need this statement. So that's what we're going to do first, and then we'll go back and prove this lemma. So this is the proof of the Hahn-Banach theorem. We'll go back and prove the lemma in a second. So let E-- like I said, this will be the set of all continuous extensions of little u. So v, comma, let's say N such that N is a subspace of V, M is contained in N-- not strictly, but it's a subset of N-- and v is a continuous extension of u to capital N, meaning it satisfies-- its a bounded linear functional on capital N. When you restrict it to capital M, it equals U. 
And is satisfies the same bound that little u does on the bigger subspace N. And note, this is nonempty because u and M are in this. I'm not saying M has to be a strict subspace of capital N. And so we'll define a partial order on E by the following definition-- we will say v1, N1 is less than or equal to v2, N2 if N1 is contained in N2 and v2, when restricted to N1, gives me v1. So v2 is, if you like, a continuous extension of v1. Remember, all of these functionals are assumed to be satisfying the same bound as u did, so with the same constant. So it's not difficult to check, just like I didn't check it for inclusion. But it's not hard to check that this is, in fact, a partial order. So now, we want to apply Zorn's Lemma to get a maximal element of E, which we want to show is, in fact, this u that we say exists. So we have to show every chain in E has an upper bound. So let C, which I'll denote as v i N i for i in some index, be a chain in E. So this is a set of extensions, and we can always compare any two extensions in there. So let me just repeat or write down again what this means to be a chain. This means, then, for all i1, i2 in this index that I'm using just to index these elements of the chain, either v i1 in i1 is less than or equal to v i2 in i2 or vice versa. I can't remember if a C goes there or an S, so I'm going to put an S. One is either bigger than-- whenever I have two elements, I can always compare them. Let N be the union of all these subspaces coming from this collection. Now, again, it's not difficult to show that N is, in fact, a subspace. So I claim N is a subspace, and again, we're going to use the fact that C is a chain to be able to verify this. Let v1, v2 be in N. And let's take two scalars, a1, a2 in C. I want to show that a1 times v1 plus a2 times v2 remains in N. Then there exists i1 i2 indices such that v1 is in N i1 and v2 to is in N i2. 
Now, I can always compare any two subspaces appearing in this set of ordered pairs forming the chain. So one of these subspaces is bigger than the other. Then, just by flipping 1 and 2 if I need to, and since C is a chain, N i1 is contained in N i2 without loss of generality. It will either be that N i1 is contained in N i2 or the other way around, and if it's the other way around, just flip the numbers 1 and 2. So I'm going to assume N i1 is contained in N i2. Then both of these elements are contained in N i2. v1 and v2 are both in the bigger one, and since this is a subspace, this means that a1 v1 plus a2 v2 is in N i2, which, remember, is a subset of N. And therefore, N is a subspace. So I now have a subspace which contains all of the subspaces coming from the chain. Now I need to define a linear functional on this subspace N that extends all the v i's in a continuous way, and this would give me an upper bound for this chain. But it's not difficult to guess what that linear functional will be. One remark on notation: I used v1 and v2 for vectors a moment ago-- a poor choice-- so from here on, v with a subscript denotes the functionals in the chain; I don't want to mix up elements of the vector space with these functionals. So now, we define a function v from N to C by the following: if t is an element of this union, it has to be an element of one of these N sub i's, and I define v of t to be simply the value of v sub i-- which is defined on that linear subspace-- evaluated at t. Now, one question is: is this well defined? Because an element t in N sub i could also have been an element of a different N sub i. So we have to check.
If it's in two of these, does this imply, question mark, v i1 of t equals v i2 of t? And again, we're going to use the fact that this is a chain to verify this. So suppose t is N sub i1 intersect N sub i2. And again, for any two elements of this chain, we can compare them. So let's assume i2 corresponds to the bigger index. So v i1 N i1 is less than or equal to v i2, N i2. Now, this order not only is defined in terms of the subspaces, remember, but in terms of the functionals defined on these subspaces. And it's defined by the fact that this functional is an extension of this functional. And therefore, since v i2 is the bigger one, it extends the smaller one. This implies v i2 of t equals v i1 of t. And therefore, v is well defined. And not only that, you can also-- so I wrote this out carefully showing it's well defined, but by a similar argument, you can then show that v is, in fact linear. And let's see, do I do that or do I stop there? So it's well defined on N. It's also an extension of every single functional defined on each N sub i. So the last thing to check is that it's linear and it's a continuous extension of all these v i's, meaning it satisfies the same bound. But all of the v i's satisfy that bound. V is defined in terms of the v i's, so we can just read off from here that v will satisfy that same bound. So I will leave it to you that you can check that v is an element of the dual space of this subspace N, so it's a bounded linear functional on N and a continuous extension of all v i's. So for all i, for little i and capital I, we conclude v i, N i is less than or equal to V, N, and therefore, v, N is an upper bound of C. So we've verified the hypotheses of Zorn's Lemma which I just erased. So that means that set E has a maximal element. So by Zorn, the set E has the maximal element capital U, N. So I claim N equals V. And therefore, capital U does the job. 
Because remember, since capital U, capital N is an element of E, that means capital U is a continuous extension of little u. And now we just want to conclude that for this maximal element, the subspace on which it's defined is the entire space V. So suppose not. Let x be an element not in N. By the lemma, there exists a continuous extension of capital U to the subspace N plus the span of x. And this is a continuous extension of capital U, which is a continuous extension of little u. And therefore, it's a continuous extension of little u. so continuous extension, let's call this something. Let's call it little v-- v N plus. So if the subspace that capital U is defined on is not all of v, then by the lemma, we can extend capital U continuously to N plus the span of x. And therefore, this element will be a continuous extension of little u. And therefore, it's an element of E. But then, U, N is smaller than v N plus the span of x. This is a bigger subspace than this because x is not in N, which implies u, N is not a maximal element. And that's a contradiction. And what did we contradict? Or what was the assumption that led us astray is the fact that we assumed that this maximal element is not defined on all of the entire normed space. Thus, U is defined on the entire normed space and it's a continuous of little u. And that's the proof of Hahn-Banach. So I hope that the proof was clear. If you followed the Hamel basis argument, this should be reasonable to expect, too. This argument is almost the same as the Hamel basis argument, except now, instead of the Hamel basis where the elements of our partially ordered set are just subsets, we also have two pieces of data for our partially ordered set here, one being the subspace and the second being the functional that's defined on that subspace, that extends the original continuous linear functional that we wanted to extend. So let's prove the lemma, and that will conclude the proof of the Hahn-Banach theorem. 
So now, we're going to prove the lemma-- that lemma up there: if V is a normed space and you take something that's not in the subspace, then you can extend u continuously to this bigger subspace. So first off, even though I keep saying that this is a subspace, that is something that needs to be checked-- it's not difficult, but still. But also, one thing we need is that every element in M plus a constant times x can be written uniquely. So we first note: if t prime is in M prime, which remember is M plus constant multiples of x, then there exist unique t in M and a in the complex numbers such that t prime equals t plus a times x. So why is that? Why can an element of this space not have two different representations-- not be written as two different elements of M plus two different scalars times x? Well, if t plus ax equals t tilde plus a tilde x, then this implies that a minus a tilde times x is equal to t tilde minus t, which is in M. M is a subspace, so the difference of two elements of M is in M. And therefore, if a does not equal a tilde, then x is equal to a constant multiple of something in M, and therefore x is in M-- contradicting the fact that x is not in M. So we conclude that a must equal a tilde, which then implies that t equals t tilde. So every element in this larger subspace can be written uniquely as an element of M plus a scalar multiple of x. Now, why do we need this fact? We need it to be able to say that the linear functional we're going to define, which we hope extends u continuously, is in fact well defined. Thus, upon choosing a number lambda in the complex numbers, the map u prime of t plus ax, given by u of t plus a lambda, is well defined on M prime, because we've shown every element of M prime can be written uniquely in this way. It's well defined on M prime, and u prime going from M prime to C is linear.
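The uniqueness argument above, in one display:

```latex
t + a x \;=\; \tilde t + \tilde a x
\quad\Longrightarrow\quad
(a - \tilde a)\, x \;=\; \tilde t - t \;\in\; M .
```

If $a \neq \tilde a$, dividing by $a - \tilde a$ would put $x$ in $M$, contradicting $x \notin M$; hence $a = \tilde a$, and then $t = \tilde t$.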
So if the original functional little u-- if that constant C is equal to 0, then little u is just identically 0, and we know how to extend the zero functional. So let's suppose capital C is nonzero. And if I divide little u by capital C, I can then extend that functional with the constant equal to 1 and obtain an extension satisfying the bound I want. So-- and I'll leave you a second to think about it-- it's not difficult to understand why, without loss of generality, we can assume C equals 1. If we do the C equals 1 case, then for the case C not equal to 1, we extend u over capital C using the C equals 1 case, and the result follows. So we'll just do the case capital C equals 1. So now, our one free parameter is lambda. And this is already an extension of little u: if I take a equals 0, so I'm just taking elements in M, then u prime of t is equal to u of t. So it's already an extension of little u to this bigger subspace. And now we want to be able to choose lambda so that it extends it in a continuous way, with the constant being 1, as in that inequality up there. I keep pointing up there-- I don't think the camera's looking up there, so you have to hopefully interpret my meaning correctly. So we want to choose lambda, a complex number, such that the following holds: for all t in M and a in C, u prime of t plus ax, which is just u of t plus a lambda, in absolute value, is less than or equal to the norm of t plus a x. If we're able to do that, then u prime is the continuous extension we're looking for. So all that we need to do now is find a lambda so that this holds. And once we've done that, we've finished the proof.
And now what I'm going to do is basically remove the fact that a can change. So the estimate u of t plus a lambda less than or equal to the norm of t plus ax holds when a equals 0 regardless of how we've chosen lambda. So we just need to be able to choose lambda so that this holds for a nonzero. Consider trying to choose lambda so that this holds for all a nonzero. Now, let me take this inequality here and divide it by the absolute value of a. Then that inequality, for a not equal to 0, is equivalent to-- if I divide by the absolute value of a and bring this inside the norm-- u of t over minus a, minus lambda, is less than or equal to-- and I should say this left side is an absolute value and that right side is a norm-- the norm of t over minus a, minus x, for all t in M. Now, if t is in M, t over minus a is also in M. So since M is a subspace, proving this bound, or choosing lambda so that this bound holds, is equivalent to choosing lambda so that the bound u of t minus lambda, in absolute value, less than or equal to the norm of t minus x, holds for all t in M. So in sum, or in summary-- I should say in sum; I'm thinking in terms of notation-- the thing that we wanted to show originally, that we can choose lambda so that for all t in M and a a complex number that inequality holds, is equivalent to showing that we can choose lambda so that this inequality holds. That's the point. And now, what we're going to do is choose the real and imaginary parts of lambda. And we'll choose the real part of lambda first. So we first prove that there exists an alpha in R such that w of t minus alpha, in absolute value, is less than or equal to the norm of t minus x for all t in M. And what is w of t? This is equal to the real part of u of t. Which, let me remind you, is just equal to u of t plus u of t complex conjugate, over 2.
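In symbols, the reduction just performed is the following chain of equivalences (a restatement of the lecture's computation; s is just a renamed element of M):

```latex
\[
  |u(t) + a\lambda| \le \|t + a x\|
  \quad \text{for all } t \in M,\ a \neq 0
\]
\[
  \Longleftrightarrow\quad
  \Big| u\!\Big(\tfrac{t}{-a}\Big) - \lambda \Big|
  \;\le\; \Big\| \tfrac{t}{-a} - x \Big\|
  \qquad \text{(divide by } |a|\text{)}
\]
\[
  \Longleftrightarrow\quad
  |u(s) - \lambda| \le \|s - x\|
  \quad \text{for all } s \in M,
\]
% since s = t/(-a) ranges over all of M as t does (M is a subspace).
```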
Now, the real part of any complex number is always less than or equal to the absolute value of that complex number. So we haven't chosen what alpha is yet; I'll show you how to choose alpha in just a minute. Note that for all t in M, w of t, in absolute value-- which remember is defined to be the real part of u of t-- is less than or equal to the absolute value of u of t. I should say the modulus of u of t. The modulus of a complex number is the square root of the sum of the real part squared and the imaginary part squared, so that's always less than or equal to that. And by assumption, this is less than or equal to the norm of t. Now, we're going to use the fact that w is real-valued. So let me say, for all t1, t2 in M, if I look at w of t1 minus w of t2, then-- since u is linear and taking the real part is linear-- this is equal to w of t1 minus t2. This is less than or equal to its absolute value; this is where I'm using the fact that w is real-valued. And as we've just shown, that absolute value is less than or equal to the norm of t1 minus t2. And now, I'm going to do one more thing: add and subtract x. In the end, we want to somehow connect this to the norm of t minus x. So what I'm going to do is add and subtract x and use the triangle inequality, and this is less than or equal to the norm of t1 minus x plus the norm of t2 minus x. And therefore, w of t1 minus the norm of t1 minus x is less than or equal to w of t2 plus the norm of t2 minus x, for all t1 and t2 in M. So this holds for all t1 and t2. So I could fix t2 and take the sup over all t1's, which implies that the sup over all t in M-- I don't need to call it t1, I can just say t-- of w of t minus the norm of t minus x is less than or equal to w of t2 plus the norm of t2 minus x.
And this holds for all t2 in M. So the fact that this holds for all t1 tells you this thing on the right is an upper bound of this quantity for all t in M, and therefore its supremum is less than or equal to this thing for all t2 in M. And therefore, this quantity on the left is a lower bound for this thing on the right for all t2 in M. And therefore, we can conclude that the sup over t in M of w of t minus the norm of t minus x is less than or equal to the inf over all t in M of w of t plus the norm of t minus x. How do I choose alpha? So I have these two numbers here, which are related in this way. I choose alpha between these two numbers-- between this number and this number. There's a less-than-or-equal-to sign, so I can pick some number in between them. Maybe these two things are equal, and therefore alpha is equal to both of them. Or I could just choose alpha to be this one. It doesn't matter. And the proof is just about completed. Now, I'm going to show that this alpha works. So then, for all t in capital M, I have w of t minus the norm of t minus x is less than or equal to alpha, which is less than or equal to w of t plus the norm of t minus x. And therefore, minus the norm of t minus x is less than or equal to alpha minus w of t, which is less than or equal to the norm of t minus x. In other words, alpha minus w of t, in absolute value, is less than or equal to the norm of t minus x. All right, so we were able to choose an alpha so that, essentially, this inequality holds for the real part. We can do that also for the imaginary part.
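The choice of α just described can be summarized in one display (restating the sup–inf inequality from the board):

```latex
\[
  \sup_{t \in M}\big( w(t) - \|t - x\| \big)
  \;\le\; \alpha \;\le\;
  \inf_{t \in M}\big( w(t) + \|t - x\| \big),
\]
% which is possible by the inequality just proved; then for every t \in M,
\[
  -\|t - x\| \;\le\; \alpha - w(t) \;\le\; \|t - x\| ,
  \qquad\text{i.e.}\qquad
  |w(t) - \alpha| \;\le\; \|t - x\| .
\]
```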
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_12_Lebesgue_Integrable_Functions_the_Lebesgue_Integral_and_the_Dominated_Convergence.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: All right, so last time we defined the integral of a non-negative measurable function, the Lebesgue integral. Now we are going to define the Lebesgue integral for a more general class of functions: Lebesgue integrable functions. So what does this mean? Let E be a measurable subset of R, and f a measurable function from E to R-- so it's real-valued. Such a measurable function is Lebesgue integrable over E if the integral of the absolute value of f is finite. So, a remark. Recall that we have the positive and negative parts of f. We can write f equals f plus minus f minus, where these are the positive and negative parts of f, both non-negative functions, so that the absolute value of f is equal to f plus plus f minus. And therefore, if I want to compute the integral of the absolute value, this is equal to the integral of the positive part plus the negative part. Remember, these are both non-negative functions. So let me just recall for you: f plus is equal to the max of f and 0, and f minus is equal to the max of minus f and 0. So the integral of the absolute value is equal to the integral of f plus plus the integral of f minus. These are both defined because these are both non-negative measurable functions-- both non-negative because each is a max of two things, one of them being 0, so each is always bigger than or equal to 0. So these are two non-negative measurable functions, and these integrals exist. So this is finite if and only if both of these two things are finite. Thus f is-- instead of saying Lebesgue integrable, I'm just going to say is integrable-- and this is equivalent to the functions f plus and f minus, the positive and negative parts of f, being integrable. All right, so with that remark there: suppose f, a measurable function from a measurable set E to R, is integrable.
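As a quick numerical sketch of the decomposition just recalled (the sample function f(x) = x − 1/2 is my own choice, not from the lecture), the identities f = f⁺ − f⁻ and |f| = f⁺ + f⁻ can be checked pointwise:

```python
# Hedged illustration of positive/negative parts: f = f+ - f- and |f| = f+ + f-.
# The function f below is a hypothetical example, chosen only for the demo.

def f(x):
    return x - 0.5  # example function on [0, 1]

def f_plus(x):
    # positive part: max(f(x), 0)
    return max(f(x), 0.0)

def f_minus(x):
    # negative part: max(-f(x), 0)
    return max(-f(x), 0.0)

# check the two identities from the lecture at sample points
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(f(x) - (f_plus(x) - f_minus(x))) < 1e-12      # f = f+ - f-
    assert abs(abs(f(x)) - (f_plus(x) + f_minus(x))) < 1e-12  # |f| = f+ + f-
```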
So "intbl." My short form is usually-- you can sound it out. If f is Lesbesgue integrable, then the Lesbesgue integral of f over E is defined to be the integral of f plus over E minus the integral of f minus. Now again, so this is a bit of new terminology. This is meaningful because I'm only defining the integral for integrable functions, meaning this is a finite number, this is a finite number. So I can always subtract two finite numbers. So this is the definition of being Lesbesgue integrable, the Lesbesgue integral. So what are some immediate properties of integrable functions and the integral? Suppose f and g from E to R are integrable. Then for all c in R, the integral of c times f-- no, I should say c times f is integrable. And the integral of c times f is equal to c times the integral of f. That's one simple property that just follows from the definition along with the linearity that we have for the integral of non-negative functions over non-negative scalars. f plus g is integrable. And the integral of f plus g over E is equal to the integral over E of f plus the integral of g. And again, just an analog of what we had for the integral of non-negative functions. If A and B are disjoint measurable sets, then integral of f over A union B is equal to the integral over A of f plus the integral over B of f. One I'm just going to leave to you. It's pretty clear. I just write c as-- if it's 0, then this follows immediately. If c is positive, then it doesn't change the positive and negative parts. The positive part of c times f is just c times the positive part of f. And the negative part is just c times the negative part of f is the negative part of c times f. But it flips them if c is negative. So you can just check that equality for those two cases. So let's move on to something more interesting, so for example, the fact that the integral is linear in f and g. 
So note that by the triangle inequality, the absolute value of f plus g is less than or equal to the absolute value of f plus the absolute value of g. And therefore, by what we know for the integral of non-negative measurable functions, the integral of the absolute value of f plus g is less than or equal to the integral of the absolute value of f plus the absolute value of g, which, by linearity for non-negative measurable functions, is equal to the sum of the integrals. Each of these is finite, so therefore their sum is finite. So if I have two integrable functions, their sum is integrable. Now why is the integral of the sum equal to the sum of the integrals? Well, splitting f and g into their positive and negative parts and writing f as f plus minus f minus, I can write f plus g as f plus plus g plus, minus, f minus plus g minus. Now this does not say that the positive part of f plus g is equal to this, or that the negative part of f plus g is this. But it does say that the positive part of f plus g plus the sum of the negative parts is equal to the negative part of f plus g plus the sum of the positive parts-- just by splitting the left side into positive and negative parts, I get this identity. Now, everything appearing here is a non-negative measurable function. And therefore the integral, which is linear for non-negative measurable functions, tells me that the integral of the positive part of f plus g, plus the integral of f minus plus g minus, equals the integral of f plus plus g plus over E, plus the integral of the negative part of f plus g. Now what I do is rearrange: I bring this over to this side, this over to this side. And I can still use linearity because this is an integral of non-negative functions. I get that the integral over E of the positive part of f plus g, minus the integral over E of the negative part of f plus g, is equal to-- so let's just do this slowly-- this integral and this integral.
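Written out, the identity being used on the board is the following (a restatement of the computation above):

```latex
\[
  (f+g)^{+} - (f+g)^{-} \;=\; f + g \;=\; (f^{+} - f^{-}) + (g^{+} - g^{-}),
\]
% so, moving all subtracted terms to the other side,
\[
  (f+g)^{+} + f^{-} + g^{-} \;=\; (f+g)^{-} + f^{+} + g^{+} .
\]
% Both sides are sums of non-negative measurable functions, so the integral
% of each side splits by linearity; rearranging the (finite) integrals gives
\[
  \int_E (f+g) \;=\; \int_E f + \int_E g .
\]
```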
So even though we don't know that the positive part of the sum is the sum of the positive parts-- and the same thing for the negative parts-- we still get the integrals equal. And this thing right here is just equal to, by definition, the integral of f plus g. And this thing over here-- again, using linearity for the integrals of non-negative functions-- is equal to the integral over E of f plus, plus the integral over E of g plus, minus the integral over E of f minus, minus the integral over E of g minus-- again, just expanding this and then carrying through the minus. So you see, all we're using here is linearity for a sum of two non-negative measurable functions. And this is just, again by the definition of the integral, the sum of the integrals. And so that was 2. What's the proof of 3? 3 follows from 2 and the fact that if I take f times the indicator function of A union B, this is equal to f times the indicator function of A plus f times the indicator function of B when A and B are two disjoint sets. So using what we know about the integral of non-negative measurable functions, it follows that the integral of f over A union B is equal to the integral of this quantity here over E, which is equal to the integral of this sum here, which is equal to the sum of the integrals. And then going back, that's equal to the integral of f over A plus the integral of f over B. So again, it follows from linearity, this fact, and-- although I didn't write it down-- the fact that, even now for integrable functions, the integral over a subset is equal to the integral of the function times the indicator. The reason that's true is because it's true for non-negative measurable functions, and it follows simply from the definition of the Lebesgue integral. All right, so some more properties of the integral. Suppose f and g from a measurable subset E to R are integrable. Then the following hold.
1. The absolute value of the integral is less than or equal to the integral of the absolute value. So let me restate all of this cleanly. Suppose f and g are two measurable functions from E to R. 1. If f is integrable, then the absolute value of the integral of f is less than or equal to the integral of the absolute value. 2. If g is integrable and f equals g almost everywhere, then f is integrable, and the integral of f over E equals the integral of g-- again, back to that philosophy from last lecture where I said if your conclusions are in terms of integrals, then usually the hypotheses can be stated only in terms of almost-everywhere information. 3. If f and g are integrable and f is less than or equal to g almost everywhere, then the integral of f over E is less than or equal to the integral of g. So 1 follows simply from the definition and the relationship between the absolute value of f, f, and the positive and negative parts. We have that the absolute value of the integral over E of f is equal to, by definition, the absolute value of the integral of the positive part of f minus the integral of the negative part of f. Now these are two non-negative numbers. So by the triangle inequality, the absolute value of their difference is less than or equal to the sum of the absolute values, which are equal to themselves because they're non-negative numbers. And now the sum of the integrals is equal to the integral of the sum, and the positive part of f plus the negative part of f is equal to the absolute value of f. So-- I shouldn't really say triangle inequality, but if you think of the integral as a sum, then this is a version of the triangle inequality. The triangle inequality for Lebesgue integrals holds.
So 2. First off, the absolute value of f is equal to the absolute value of g almost everywhere, which implies, by what we know from the integral of non-negative measurable functions-- if I have two non-negative measurable functions which equal each other almost everywhere, then their integrals agree-- that the integral of the absolute value of f is finite. Thus, f is integrable. Now why does the integral of f over E equal the integral of g over E? Since f equals g almost everywhere, f minus g in absolute value is equal to 0 almost everywhere. And therefore, if I look at the absolute value of the difference of these two integrals, this is, by linearity, equal to the absolute value of the integral of the difference, which by the triangle inequality for Lebesgue integrals is less than or equal to the integral of the absolute value. And by what we know for the integral of non-negative measurable functions, if I have a function which is 0 almost everywhere, then its integral is 0. And therefore the integral of f over E must equal the integral of g over E. So that proves 2. And for 3-- again, we're just using the stuff that we know from the integral of non-negative measurable functions. Define a function h of x to be the max of g of x minus f of x and 0. So this is g of x minus f of x when g of x is bigger than or equal to f of x, and 0 otherwise. And h is a non-negative measurable function. Since g is bigger than or equal to f almost everywhere, h is equal to g minus f almost everywhere, because this condition is satisfied almost everywhere. And therefore, I get that 0 is less than or equal to the integral over E of the positive part of h-- because this is just a non-negative function, we know what its Lebesgue integral is defined to be. It's just given by the previous lecture.
So it has to be a non-negative number, and because h is non-negative, the positive part of h is equal to h. But I should say why h is also an integrable function. Well, h is equal almost everywhere to g minus f, which is the difference of two integrable functions and hence integrable; so by what we did for part 2, that tells me h is integrable. So the integral of h, by 2, must be equal to the integral of g minus f, which by the linearity that we proved in the theorem before is equal to the integral of g minus the integral of f. And remember, we started all the way at the beginning of this with 0 is less than or equal to. So the integral of g will be bigger than or equal to the integral of f. So now, the final, most useful convergence theorem one encounters in Lebesgue integration, or in integration theory, is Lebesgue's dominated convergence theorem-- or I'll just call it the dominated convergence theorem. But let me pause and make a small comment right here. What functions are Lebesgue integrable, first off? What are some examples of functions that are Lebesgue integrable? So what sets have finite measure? We know that the measure of an interval is the length of the interval, right? So any measurable subsets that are contained in a large interval have finite measure. So compact sets, which we know are measurable because they're Borel sets, have finite measure. So why am I saying that?
If I have simple functions which are nonzero only on sets of finite measure, then those will be integrable functions. Because go back to the definition of how one integrates a simple function, right? It's the sum of the coefficients times the measures of the sets where it takes those coefficients. So if I'm only nonzero on a set of finite measure-- with the convention that 0 times infinity equals 0, so that the coefficient 0 times the measure of the set where the function is 0 contributes 0-- then those simple functions that are nonzero only on sets of finite measure will be integrable functions. Now what about continuous functions? In fact, I'm not going to prove this carefully, because we're going to prove something much stronger in a minute. So what about continuous functions on a closed and bounded interval, say [a, b]? Let's just talk our way through why those functions are integrable on those sets. If a function is continuous on [a, b], then its absolute value is continuous on [a, b], and therefore the absolute value must be bounded by some constant-- a continuous function attains a minimum and maximum on a closed and bounded interval. So the absolute value of a continuous function on a closed and bounded interval is bounded by some constant. And therefore, by monotonicity of Lebesgue integrals, the integral of the absolute value of f over the interval [a, b] will be less than or equal to the integral of that constant over [a, b]. And the integral of a constant is just equal to the constant times the measure of the set.
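As a rough numerical sanity check of this bound (the function f = sin on [0, π], the grid, and the midpoint-rule approximation of the integral are my own illustrative choices, not from the lecture):

```python
import math

# Hedged sketch: the (approximate) integral of |f| over [a, b] is at most
# B * (b - a), where B bounds |f| on [a, b], as in the monotonicity argument.
a, b = 0.0, math.pi
f = math.sin

n = 10000
h = (b - a) / n
# approximate sup of |f| on a fine grid (an assumption: fine enough grid)
B = max(abs(f(a + (b - a) * k / n)) for k in range(n + 1))
# midpoint-rule approximation of the integral of |f| over [a, b]
approx_int = sum(abs(f(a + (k + 0.5) * h)) for k in range(n)) * h

# the bound from the lecture: integral of |f| <= B * (b - a)
assert approx_int <= B * (b - a)
```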
Just by how we define the integral of simple functions-- a constant is just the simplest of simple functions-- this is the constant B bounding the absolute value from above, times the measure of the closed and bounded interval [a, b], which is B times b minus a. So what was all that for? To say that a continuous function on a closed and bounded interval is Lebesgue integrable. Now, we're about to show something-- right after I state and prove the dominated convergence theorem-- which is much stronger than that: in fact, the Lebesgue integral of a continuous function on a closed and bounded interval equals its Riemann integral. So in fact, for continuous functions, you already know how to compute Lebesgue integrals. They're equal to the Riemann integrals. So back to the dominated convergence theorem. Let g be a non-negative integrable function over E, and fn a sequence of measurable functions such that two things hold. One, for all n, the absolute value of fn is less than or equal to g almost everywhere. Two, there exists an f from E to R such that fn converges to f pointwise almost everywhere, meaning fn of x converges to f of x for almost every x in E. So the sequence of measurable functions converges pointwise to this function almost everywhere, and they're all dominated by an integrable function. Then the conclusion is that the limit of the integrals equals the integral of the limit. So this is a very useful and powerful theorem in integration. It's way stronger than anything one can really say in Riemann integration, as far as convergence theorems go. Riemann integration always requires, in some form, some form of uniform convergence here. All we have is pointwise convergence almost everywhere, right?
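A hedged numerical illustration of the theorem (the sequence is my own standard example, not one from the lecture): take fₙ(x) = xⁿ on [0, 1]. Then fₙ → 0 almost everywhere (everywhere on [0, 1)), |fₙ| ≤ g ≡ 1 with g integrable on [0, 1], and ∫ fₙ = 1/(n+1) → 0 = ∫ lim fₙ, exactly as the dominated convergence theorem predicts:

```python
# f_n(x) = x**n on [0, 1]: dominated by g(x) = 1, converges a.e. to 0.

def integral_fn(n):
    # exact (Riemann = Lebesgue) integral of x**n over [0, 1]
    return 1.0 / (n + 1)

vals = [integral_fn(n) for n in (1, 10, 100, 1000, 10**6)]
# the integrals decrease toward 0, the integral of the a.e. limit
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
assert vals[-1] < 1e-5
```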
And the second requirement-- so remember, the monotone convergence theorem also required the functions to be increasing. Here, now, we're dealing not just with non-negative functions but with arbitrary measurable functions-- I should say real-valued measurable functions, because our measurable functions could also be extended-real-valued, but I want these to be real-valued. So all we need on top of pointwise convergence almost everywhere is just for these to be dominated in absolute value by some fixed integrable function. Then we conclude that the limit of the integrals is equal to the integral of the limit, which is an extremely powerful and useful theorem in analysis. So we'll prove it using Fatou's lemma. But first-- again, the conclusions are about integrals, and I'm making almost-everywhere statements in the assumptions, which I said you can always basically do. Let me briefly reduce ourselves to the case that these two things hold everywhere. Since for all n, the absolute value of fn is less than or equal to g almost everywhere, this implies that for all n, fn is integrable. Moreover, fn converges to f almost everywhere implies a couple of things. Remember, the pointwise-- even almost everywhere-- limit of measurable functions is a measurable function. So f is measurable, and the absolute value of f is less than or equal to g almost everywhere, which implies that f is integrable. Now, since changing f and fn, for all n, on a set of measure 0 does not affect the integrals-- in the end, our conclusion is that the limit of the integrals of the sequence equals the integral of f, and if I change fn and f on a set of measure 0, that doesn't change those integrals, right?-- we can assume that those two assumptions hold everywhere on E.
So now let's actually get to the proof. All right, maybe you need to take a second to think about what I said there. But the point is that because the conclusions are in terms of integrals, although I stated the hypotheses almost everywhere, I can fiddle with fn and f, for each n, on a fixed set of measure 0 without affecting the integrals. On that set of measure 0, I make it so that all of the fns and f are equal to 0, and then I have the domination and the convergence everywhere. So if you need a second, just imagine that I erased that "almost everywhere," so that the statement of the theorem has these things holding everywhere, and then think a little bit about why I don't need them to hold everywhere, just almost everywhere. So we assume: for all n, the absolute value of fn is less than or equal to g, and there exists an f so that fn converges to f pointwise. So note, for all n, the absolute value of the integral of f sub n is less than or equal to the integral of the absolute value, which is less than or equal to the integral of g. This implies that the sequence formed by the integrals of these guys-- a sequence of real numbers-- is a bounded sequence of real numbers. So it has a limsup and a liminf, right? Now remember, from your first analysis course, whatever it was, that the limit of a bounded sequence exists and equals L if and only if the liminf and limsup equal each other and they equal L. So what we're going to do is show that the liminf and the limsup of this sequence of numbers are equal, and that they equal the integral of f. And to do that, we're going to use Fatou's lemma. Since the absolute value of f sub n is always less than or equal to g, g plus or minus f sub n is bigger than or equal to 0.
I can now apply Fatou's lemma, which tells me that for a sequence of non-negative measurable functions, the integral of the liminf is less than or equal to the liminf of the integrals. So the fns are converging to f: fix x in E; fn of x converges to f of x. Therefore, the liminf of g of x minus fn of x equals g of x minus f of x. So-- let's do the minus case first-- the integral of g minus f, by Fatou's lemma, is less than or equal to the liminf as n goes to infinity of the integral over E of g minus f sub n. Now, using linearity, the integral of g minus f sub n is equal to the integral of g minus the integral of f sub n. When I take the liminf of that and carry it through, the liminf, when it hits a minus sign, turns into a limsup; so I get the integral of g minus the limsup of the integrals of the f sub n. Here I'm using that the liminf of minus A sub n equals minus the limsup of A sub n, for a bounded sequence of numbers A sub n. And similarly, applying Fatou's lemma again-- but now I don't have to switch any minuses-- I get that the integral of g plus f is less than or equal to the integral of g plus the liminf as n goes to infinity of the integral over E of f sub n. So I have two inequalities: I have that one, and I also have this inequality here. Now all of these quantities that I've written down are finite numbers. This is one reason why: this is the limsup of a bounded sequence-- that's a number; this is a number; that's a number. So I can subtract and move them to either side of this inequality. There's no funny business going on with subtracting infinities. This is all on the level.
So moving the limsup over and subtracting the integral of g minus f, I get that the limsup as n goes to infinity of the integrals of f sub n is less than or equal to the integral of g minus the integral of g minus f, which by linearity is equal to the integral of f. And by the second yellow box here, the integral of f-- which by linearity is the integral of g plus f minus the integral of g-- is less than or equal to the liminf of the integrals of the f sub n's. So what do I have? I have that the limsup is less than or equal to the integral of f, which is less than or equal to the liminf. And the liminf always sits below the limsup. So those three numbers have to equal each other: the limsup of the integrals equals the integral of f equals the liminf as n goes to infinity of the integrals of the f sub n's. And therefore the limit exists and equals the integral of f. And I bet in analysis you thought limsups and liminfs would never be useful-- but they are. So that is the proof of the dominated convergence theorem. Now let's use some of this muscle we've been building up. So suppose a is less than b and f is a continuous function on [a, b]. Then the Lebesgue integral of f over [a, b] is equal to the Riemann integral of f. So in the course of this proof, we'll also see why f is, in fact, integrable. So, proof. We first show f is Lebesgue integrable. f continuous implies that the absolute value of f is also a continuous function, and every continuous function on a closed and bounded interval is bounded. So there exists a constant B such that the absolute value of f is less than or equal to B on this closed and bounded interval. Then the Lebesgue integral of the absolute value of f over [a, b] is less than or equal to the integral over [a, b] of the constant function B. And this is the simplest of simple functions.
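The sandwich argument just completed can be summarized in one line (this only restates the two Fatou inequalities from the yellow boxes):

```latex
\[
  \limsup_{n \to \infty} \int_E f_n
  \;\le\; \int_E f
  \;\le\; \liminf_{n \to \infty} \int_E f_n
  \;\le\; \limsup_{n \to \infty} \int_E f_n ,
\]
% so all three quantities coincide, and
\[
  \lim_{n \to \infty} \int_E f_n \;=\; \int_E f .
\]
```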
The Lebesgue integral of this is just B times the measure of [a, b], which equals B times b minus a-- and that's finite. So continuous functions are Lebesgue integrable on a closed and bounded interval. Thus, f is Lebesgue integrable. Now, the positive part of f is a continuous function, and the negative part of f is a continuous function. In fact, you can write these down a little bit differently than I wrote them down before: f plus is equal to the absolute value of f plus f, over 2, and f minus is equal to the absolute value of f minus f, over 2. So these are the positive and negative parts written slightly differently, and if f is continuous, both of these functions are continuous non-negative functions. And the Riemann integral of f, which is f plus minus f minus, is equal to the Riemann integral of f plus minus the Riemann integral of f minus-- which is exactly how the Lebesgue integral is also defined: the Lebesgue integral of f is equal to the Lebesgue integral of f plus minus the Lebesgue integral of f minus. So by considering these two cases separately-- showing the integral over [a, b] of f plus or minus equals the corresponding Riemann integral-- and using linearity, I may assume that f is non-negative. So what's the point here? I'm trying to show this for general continuous functions. But by splitting f into its positive and negative parts, it suffices to prove this equals that for the positive and negative parts, both of which are continuous and non-negative. So I only need to prove what I want for the case that f is non-negative. That's the point. All right, so we now have a non-negative continuous function on [a, b], and we want to show the Lebesgue integral is equal to the Riemann integral of that continuous function. So let x0 equals a, up through x sub mn equals b, be a sequence of partitions of [a, b] such that the norm of the partition-- so this is just notation from back in real analysis; you shouldn't take this as an actual norm--
Well, I guess it is a norm in a certain sense. But this is just a finite subset of [a, b] that partitions [a, b] such that this quantity here, which I denote using the norm notation-- but don't confuse it with the norms that we discussed before-- which is defined to be the max over j of x_j^n minus x_{j-1}^n-- and m could change with n-- goes to 0. So why am I taking a sequence of partitions of [a, b]? Because this is how you compute the Riemann integral in terms of Riemann sums. And I'm going to show that the sequence of Riemann sums converging to the Riemann integral actually converges to the Lebesgue integral as well, but along a certain sequence of Riemann sums. So let xi_{j,n} be a point in the subinterval [x_{j-1}^n, x_j^n] where f attains its minimum. f is a non-negative continuous function on this closed interval, so it has a minimum that it achieves at some point, and that minimum equals f of xi_{j,n}. So on the subinterval, f of x is always bigger than or equal to f of xi_{j,n}. So this is how I'm defining the xi_{j,n}. Then by the theory of Riemann integration, if I look at the limit as n goes to infinity of the associated Riemann sums, this limit exists. And you get the Riemann integral of f for a continuous function. This should have been covered in your introductory analysis class-- if you like, this is the lower Riemann integral or lower Riemann sum. And as long as you're going along partitions so that this quantity here is going to 0, the associated Riemann sums converge to the Riemann integral. All right, now each of these partitions is a finite set. Let N be the union of these sets. Then this is a countable union of finite sets. So it's countable. A countable union of countable sets is countable. In particular, this means the measure of this set is 0. So if I take the set of all partition points as I range over all of the partitions-- so I just took any sequence of partitions with this quantity here going to 0-- if I take the union of all these partitions, I get a countable set. That set has measure 0 because it's countable.
Why am I making this point that it has measure 0? Well, because off of this set, magic happens. And what we've learned is that magic happening off of a set of measure 0 means magic happens for integrals. So I'd like one more important piece of information that we have from the theory of Riemann integration, that the Riemann sums converge to the Riemann integral. Let fn be the following simple function: fn is the sum from j equals 1 to m_n of f of xi_{j,n} times the indicator function of [x_{j-1}^n, x_j^n). And then I could put plus 0 times the indicator function of the remaining endpoint. I mean, this part doesn't really matter. I'm just saying. So this is a simple function for each n. I should say a non-negative simple function. So what's happening now? And why did I choose the xi_{j,n}? So let me draw the picture that goes with this. I have my function f on [a, b]. And what I'm doing is I'm cutting up the domain to get the approximate Riemann integral. And I'm choosing the heights to be the minimum of f on each of these intervals. So this is f of xi_1, if you like, f of xi_2. And at least for this picture, this is xi_1. And what I know is as I'm making this partition finer and finer, these approximate areas here are converging to the full Riemann integral of f. Another way to think about that is that these are just the Lebesgue integrals of certain simple functions, where the simple function is supported on one subinterval and has height f of xi_{j,n}, or xi_1 in the picture. This quantity here, the integral-- the area underneath this-- is equal to the Lebesgue integral of f of xi_2 times the indicator function of this interval, and so on. So I can view these pieces here that are entering in the Riemann sum as Lebesgue integrals of certain simple functions. Or I can view this entire quantity as the Lebesgue integral of a simple function for each n. Now the goal here is what we'll do is we'll show that-- I mean, in fact, we can just do this now.
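The picture of the simple functions f_n-- heights equal to the minimum of f on each subinterval-- can be sketched in code. This is illustrative only: the choice f(x) = x^2 on [0, 1] and the sampling-based minimum are mine, not from the lecture:

```python
def lower_riemann_sum(f, a, b, n):
    # Height on each subinterval is (approximately) the minimum of f there,
    # i.e. the height of the simple function f_n from the proof; the sum of
    # height * length is then the Lebesgue integral of f_n.
    dx = (b - a) / n
    total = 0.0
    for j in range(n):
        left = a + j * dx
        # crude minimum by sampling; exact at the left endpoint for increasing f
        height = min(f(left + i * dx / 10) for i in range(11))
        total += height * dx
    return total

# For f(x) = x^2 on [0, 1], the lower sums increase up toward the integral 1/3.
approx = lower_riemann_sum(lambda x: x * x, 0.0, 1.0, 2000)
```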
Note, for all n, if I look at the Lebesgue integral of fn over [a, b], this is equal to the sum from j equals 1 to m sub n of f of xi_{j,n}-- these are all non-negative numbers because f is non-negative-- times the measure of [x_{j-1}^n, x_j^n). The leftover endpoint is a set of measure 0, so it doesn't contribute. Now remember, we built up Lebesgue integration so that the measure of an interval is the length of the interval. So the measure of this interval is equal to x_j^n minus x_{j-1}^n. So the Lebesgue integral of each of these simple functions equals the Riemann sum appearing in this limit, right? And now the goal is to show that the fn's converge to f almost everywhere and are bounded above by an integrable function at least almost everywhere. Then I can apply the dominated convergence theorem to conclude that the limit as n goes to infinity of this thing equals the Lebesgue integral of f. But the limit as n goes to infinity of this thing is equal to the limit as n goes to infinity of the Riemann sums, which is equal to the Riemann integral. And that's the game plan. And I mean, you can already guess from here. What's the function that sits above all of the fn's? It's going to be-- at least away from possibly the endpoints-- it's going to be f. Then for all x in [a, b] take away N, a couple of things I have. Well, first thing: fn of x is less than or equal to f of x. Now I'm going to show that on [a, b] take away N-- N being a set of measure 0-- fn of x converges to f of x. I.e., fn converges to f almost everywhere, and it's bounded above by a Lebesgue integrable function almost everywhere. So then we can apply the dominated convergence theorem to get what we want. So let's prove this. So let x be in [a, b] take away N, the set of all partition points. So we want to show this. Let's just go back to a basic epsilon-delta argument. Let epsilon be positive.
Since f is continuous at x, there exists a delta positive so that if |x minus y| is less than delta, then |f of x minus f of y| is less than epsilon. Now we know that the partitions are getting finer and finer, right? So x is in [a, b] take away N, right? Since the norms of these partitions-- which, remember, is the max over j of x_j^n minus x_{j-1}^n; I forgot that n when I was writing it here-- since this goes to 0, there exists capital M so that for all n bigger than or equal to capital M, this quantity here, the longest length of the subintervals, is less than delta. So now let me draw you a picture. So x is in [a, b] take away all the possible partition points, all the x_j^n's, right? I claim now that |fn of x minus f of x| is less than epsilon. Now how do I evaluate fn of x for x in [a, b] take away N? fn of x is equal to the sum of f of xi_{j,n} times the indicator function of [x_{j-1}^n, x_j^n) evaluated at x. And I don't have to worry about the endpoint, because remember, x is in [a, b] take away all of the partition points, and the last point x_{m_n}^n is always b, so I'm always taking away b and a. So this x must lie in one of these intervals, but not be one of the partition points. So this must equal f of xi_{k,n} for the unique k such that x is in (x_{k-1}^n, x_k^n). Then since xi_{k,n} is in this interval, and the max over the lengths of the small intervals, which is the differences here, is less than delta-- so in particular for this one. So here's a picture. Here's x_{k-1}^n, x_k^n. xi_{k,n} is somewhere in there. x is somewhere in there. And since the length of this interval is less than delta, this implies that |x minus xi_{k,n}| must be less than delta. And therefore |f of x minus fn of x|, which is equal to |f of x minus f of xi_{k,n}|-- now these points are within delta distance of each other, and therefore f of x and f of that number must be within epsilon of each other, simply by how delta was chosen.
So we've shown that for all n bigger than or equal to capital M, |f of x minus fn of x| is less than epsilon. Thus, the limit as n goes to infinity of fn of x equals f of x for all x in [a, b] take away the partition points. So we have a couple of things. I said this, but now I'm just going to write down what we have. We have these two things: almost everywhere, the fn's are converging to f, and almost everywhere, fn is less than or equal to f. And f is a fixed continuous function that's integrable. It's non-negative. So by the dominated convergence theorem, the Lebesgue integral of this continuous function f is equal to the limit as n goes to infinity of the Lebesgue integrals of these special simple functions, which, as we computed right here, is equal to the limit as n goes to infinity of the sum from j equals 1 to m sub n of f of xi_{j,n} times x_j^n minus x_{j-1}^n. And by what we know about Riemann integrals, this converges to the Riemann integral. And thus the Lebesgue integral equals the Riemann integral for a continuous function. So we have discussed the Lebesgue integral of real-valued measurable functions, or real-valued integrable functions. Now quite often, we want to have complex-valued functions defined on measurable subsets of real numbers. What do we do then? Well, everything that we've done basically applies as long as it makes sense. So let me state that here. All of the previous theorems that we've proven, as long as they make sense, carry over: you can use what we've done for real-valued integrable functions to imply the corresponding statements for what we call complex-valued integrable functions. And what are these? So f from a measurable subset E to now the complex numbers.
So now let me just define what I mean by complex integrable functions: f from a measurable subset E to C is Lebesgue integrable if the same condition holds, that the integral of the absolute value of f-- and now this is the modulus of a complex number for each x-- is finite. This is a non-negative measurable function now which is defined on E. So this quantity here makes sense. So we say a complex-valued function is Lebesgue integrable if this quantity is finite. And now the Lebesgue integral of a complex-valued integrable function is defined to be simply the integral of the real part of f, which is, again, a real-valued integrable function, plus i times the integral of the imaginary part of f. So again, to define complex-valued integrable functions, we just take the same definition. And then we define the integral to be the integral of the real part plus i times the integral of the imaginary part. And the fact that this is finite implies that these two real-valued measurable functions are, in fact, integrable. And all of the theorems that we stated before, as long as they make sense, carry over. For example, linearity of the integral with respect to now multiplication by complex numbers carries over. The integral of the sum is the sum of the integrals. That still carries over just by using this definition, along with the fact that we know that the sum of two real-valued integrable functions is integrable. And the Lebesgue dominated convergence theorem can be generalized to complex-valued integrable functions. Maybe what's not so clear is the-- or let me just at least give you the flavor of how you can use what you know about real-valued integrable functions to get corresponding statements for complex-valued integrable functions. So let's do the triangle inequality for integrals, right?
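The definition-- integrate the real part, then add i times the integral of the imaginary part-- can be illustrated with a plain midpoint Riemann sum. The example f(x) = e^{ix} on [0, pi] is my choice, not the lecture's; its exact integral is 2i:

```python
import cmath

def complex_integral(f, a, b, n=2000):
    # Integral of a complex-valued f defined, as in the lecture, as the
    # integral of Re f plus i times the integral of Im f (midpoint sums here).
    dx = (b - a) / n
    re = sum(f(a + (j + 0.5) * dx).real for j in range(n)) * dx
    im = sum(f(a + (j + 0.5) * dx).imag for j in range(n)) * dx
    return complex(re, im)

# integral of e^{ix} over [0, pi]; exact value is (e^{i*pi} - 1)/i = 2i
val = complex_integral(lambda x: cmath.exp(1j * x), 0.0, cmath.pi)
```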
So if I have a complex-valued integrable function, then the integral over E of f is now a complex number, and taking the modulus of that is less than or equal to the integral over E of the modulus of f. So what's the proof? If the integral over E of f equals 0, that's just clear automatically. So let's assume it's not equal to 0. And let alpha be the following complex number: the integral of f over E is a complex number; I take its complex conjugate and divide by the modulus of that. Then alpha has modulus equal to 1. And the absolute value of the integral of f is equal to alpha times the integral of f over E, because the complex conjugate times the integral gives me the modulus squared, and divided by the modulus I get back the modulus, right? And linearity of the-- not Riemann-- of the Lebesgue integral for complex-valued integrable functions still holds. So I can pull that alpha inside. And now this thing is equal to a real number. So it's equal to its real part. And the real part of the integral is equal to the integral of the real part just by the definition. So this is equal to the integral over E of the real part of alpha times f. Now this is a real-valued integrable function. And we know from what we proved for real-valued integrable functions that that's less than or equal to the integral over E of the absolute value of the real part of alpha times f. Now the absolute value of the real part of a complex number is less than or equal to the modulus of that complex number, and alpha has modulus equal to 1. So this is less than or equal to the integral of the absolute value of f. So at this point right here, we used what we knew about real-valued integrable functions. And using that, we then got the corresponding statement for complex-valued integrable functions. And using the theorems that we proved for real-valued integrable functions, we get corresponding statements for complex-valued integrable functions as long as the statements make sense.
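As a numerical sanity check of the inequality |integral of f| <= integral of |f|, here is an illustrative computation; the function and the quadrature are my choices, not from the lecture:

```python
import cmath

def midpoint_sum(g, a, b, n=4000):
    # plain midpoint Riemann sum standing in for the integral over [a, b]
    dx = (b - a) / n
    return sum(g(a + (j + 0.5) * dx) for j in range(n)) * dx

f = lambda x: cmath.exp(1j * x)          # illustrative complex-valued f
lhs = abs(midpoint_sum(f, 0.0, cmath.pi))               # |integral of f|, about 2
rhs = midpoint_sum(lambda x: abs(f(x)), 0.0, cmath.pi)  # integral of |f| = pi
```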
We don't say of two complex-valued functions that one is less than or equal to the other, because we don't have an order on complex numbers. So that statement doesn't make sense. But statements about, for example, complex-valued continuous functions, the Lebesgue integral equaling the Riemann integral, that follows just immediately, because the Riemann integral of a complex-valued continuous function is just defined to be the same thing. And since we know that the Lebesgue integral of the real part of f equals the Riemann integral of the real part of f, and the Lebesgue integral of the imaginary part of f equals the Riemann integral of the imaginary part of f, which together are by definition equal to the Riemann integral of f, then we immediately get the previous theorem generalized to complex-valued functions. All right, so that's the theory of Riemann integration. I mean-- Riemann integration? Lebesgue integration. Next time, we will finish our discussion of measure and integration by introducing the big Lp spaces, which are spaces of measurable functions which, raised to a certain power, have a finite Lebesgue integral, and show that those are Banach spaces that contain the continuous functions, of course, and that in certain cases the space of continuous functions is dense in these spaces.
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_17_Minimizers_Orthogonal_Complements_and_the_Riesz_Representation_Theorem.txt
PROFESSOR: OK, so last time, we discussed orthonormal bases. And then we considered the concrete question of the complex exponentials being an orthonormal basis for L2 of minus pi to pi, that Fourier series actually converge to an L2 function in the L2 norm. So now, we're going to go back to a general discussion of Hilbert space. This is kind of the way the rest of the course will be: some general theory followed by some concrete applications scattered in, concrete in the sense that it just won't be about general Hilbert spaces, but about maybe specific operators, specific problems we're trying to solve. Now, we're going to discuss length minimizers. And what do I mean by that? I mean the following: if I have a closed subspace W of a Hilbert space H, then-- so way back when, we could then define a norm on H modulo this closed subspace, and the norm on that is the infimum of the norm of v minus w, where w, little w, is in the linear subspace capital W. Now, a natural question is, is that infimum, that minimal distance, actually achieved by some element in the subspace? Now, the answer is, in fact, yes. And it's, in fact, true for a larger class of subsets of Hilbert spaces than closed subspaces. So we have the following-- suppose C is a subset of a Hilbert space H such that three conditions hold. The silliest-- C is non-empty. C is closed. And the final is that C is what's called convex. This means the following, that if v1 and v2 are in the subset C and t is in [0, 1], then t v1 plus (1 minus t) v2 is in C. So another way of stating this last condition is that for any two elements in C, the line segment from v1 to v2 is contained in C. So here, we have what would look like a convex set C. And for every v1, v2, it's convex, meaning the line segment joining them is in there.
So you wouldn't have-- let's say you wouldn't have-- so this is convex. And so for example, something like this picture is not convex, because I could take two elements-- v1, v2-- here, and the line segment does not stay in the set C. All right, so if we assume that we have a non-empty, closed, and convex subset C of a Hilbert space H, then there exists a unique element v in C such that the norm of v is equal to the minimum over u in C of the norm of u. Now, we'll see, when we apply this theorem, how such C's pop up. One of them is, like I said a minute ago-- OK, I didn't exactly say it this way, but C could be some fixed vector plus a subspace, meaning the set of all vectors of the form v plus w, where w is in some subspace. But that's not necessarily the only kind we'll come across. OK, so if we have a closed, convex, non-empty subset of a Hilbert space, then there exists a unique minimizer in this subset. So of course, I encourage you to think about whether or not this theorem remains true if I drop either of these conditions. So it's clear you cannot drop C being closed. For example, let's say you have an open ball in R2 now. And you take just any point outside the ball. Then the minimum-- now, let's see. I've got this backwards. Let's take C to be everything strictly outside the closed unit ball. So it's neither closed nor convex. Then of course, this subset will not have a vector that has minimum length, because the minimum length will want to be attained on this circle of radius 1, which is not included in the set C. So that one's taken away both conditions-- that C is closed and convex. Of course, you can do something where it's just convex, but not closed-- for example, take a square now in R2. Then let's say this corner is at minus 1, 1, and this corner is at 1, 1.
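In finite dimensions you can see the theorem concretely. For a closed box in R^2-- a closed, convex, non-empty set; my example, not the lecture's-- the unique point closest to the origin is obtained by clamping 0 into each coordinate interval:

```python
def min_norm_in_box(lo, hi):
    # Unique minimal-norm element of the closed box [lo1, hi1] x ... x [lon, hin]:
    # clamp each coordinate of the origin into the corresponding interval.
    return tuple(max(l, min(0.0, h)) for l, h in zip(lo, hi))

# box [1, 2] x [-1, 1]: the closest point to the origin is (1, 0)
v = min_norm_in_box((1.0, -1.0), (2.0, 1.0))
```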
And there's the origin. Then the vector of minimum length will want to be right here at the point 0, 1. But that's not in C. So you can also show that for C given by this rectangle missing a spot, it will not have a minimizer. And then you can play with this and find a closed subset that is not convex that doesn't have a unique minimizer. So these conditions are necessary for this theorem to hold. It's not just me giving you a theorem with assumptions that are much stronger than you need. OK, so what's the proof? So this infimum-- let me recall just real quick that something, let's say a, is equal to the infimum of a set S, where S is a subset of the real numbers, if and only if two things hold-- a is a lower bound for S, and there exists a sequence sn in S such that sn converges to a. This should have been covered in 18.100. So let's call this number d. So let d be the infimum of the norms of u over u in C, which we know exists because norms are bounded below by 0. So this set of norms, where u ranges over C, is bounded below by 0, so the infimum exists. OK, so let d be that. Then there exists a sequence un of elements in C such that the norms of these un's converge to d. So what we're trying to do is come up with some element in C that achieves this d, so that the norm of v is equal to this d. So I now claim that this sequence is, in fact, Cauchy. And you'll see how we use the inner product structure that comes with the Hilbert space. So we'll just do this the old-fashioned way. Let epsilon be positive. We now need to find a capital N so that the difference between un and um is less than epsilon in norm for all n and m bigger than or equal to capital N. Since the norm of un is converging to d, there exists a natural number N such that for all n bigger than or equal to N, I have that 2 times the norm of un squared is less than 2 d squared plus epsilon squared over 2.
So this thing here is converging to 2 times d squared as n goes to infinity. So if I perturb 2 d squared by a little bit, then since this is converging to 2 d squared, that will be less than this for n sufficiently large. Now, I claim this capital N works. For all n, m bigger than or equal to capital N, if I take the norm of un minus um squared-- now, I'm going to use the parallelogram law that I have for norms, which crucially relies on the Hilbert space property. So this I can write as 2 times the norm of un squared plus 2 times the norm of um squared minus 4 times the norm of un plus um over 2 squared. Now, what's the point? un plus um divided by 2, that's, if you like, t equals 1/2 in condition C. Now, since un and um are in C, un plus um over 2 is in C as well, right? So this thing is in C. And therefore, its norm squared is bigger than or equal to d squared. So when it gets hit with a negative, that switches the inequality: minus 4 times this norm squared is less than or equal to minus 4 d squared. So this is less than or equal to 2 times the norm of un squared plus 2 times the norm of um squared minus 4 d squared-- again, because d is a lower bound for the norms of all elements in C, and this element is in C. And this minus sign flips, all right? And now, based on how we've chosen capital N, we've chosen it so that we have this inequality here. I get this is less than 2 d squared plus epsilon squared over 2 from the first one, plus 2 d squared plus epsilon squared over 2 from the second one, and then minus 4 d squared from this last one. And that equals epsilon squared. So for all n, m bigger than or equal to capital N, the norm squared of un minus um is less than epsilon squared. And therefore, that proves the claim that this sequence is Cauchy. Now, since this sequence is Cauchy and we're in a Hilbert space, a complete inner product space, there's a limit v in H such that un converges to v. Now, C is closed. So v is, in fact, in C. Since C is closed, it contains all limits, all subsequential limits.
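The parallelogram identity used in the Cauchy estimate can be checked directly on vectors in R^3; this is an illustrative computation with vectors of my choosing:

```python
def norm_sq(x):
    # squared Euclidean norm, playing the role of ||.||^2 in the lecture
    return sum(t * t for t in x)

u = [1.0, -2.0, 3.0]
v = [0.5, 4.0, -1.0]

# ||u - v||^2 versus 2||u||^2 + 2||v||^2 - 4||(u + v)/2||^2
lhs = norm_sq([a - b for a, b in zip(u, v)])
rhs = (2 * norm_sq(u) + 2 * norm_sq(v)
       - 4 * norm_sq([(a + b) / 2 for a, b in zip(u, v)]))
```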
So C is closed is equivalent to: for every sequence converging to something, that something has to be in the set. So v is in C. Finally, since the un's converge to v, we have the norm of v is equal to the limit as n goes to infinity of the norms of the un's. And remember, where did we come up with these un's? Their norms are supposed to be converging to d. So v is an element in C whose norm gives me d. Now, there's one last statement in the theorem, that v is unique. There can't be two of them. And it follows from a similar argument to the one we made here. So what we've shown so far is that there exists a v in C whose norm gives me d. Now, we claim that there can't be more than one. Suppose v and v-bar are in C and both of their norms give me d, this infimum of norms over C. Now, I use the parallelogram law again. I get that the norm of v minus v-bar squared is equal to 2 times the norm of v squared plus 2 times the norm of v-bar squared minus 4 times the norm of v plus v-bar over 2 squared. Now, this equals d squared. This equals d squared. So this combines and gives me 4 d squared minus 4 times the norm of the midpoint squared. And again, C is convex and both v and v-bar are in C-- v-bar does not mean complex conjugate; it just means something other than v. So since both of these elements are in C, their midpoint, which is, again, t equals 1/2, if you like, in condition C for this theorem, is also in C. And since d is the smallest of all the norms, the norm of the midpoint squared has to be bigger than or equal to d squared. And when I hit it with a minus sign, that flips the inequality. So that's less than or equal to 4 d squared minus 4 d squared equals 0. And therefore, this norm squared has to be 0, i.e. v equals v-bar.
So the first application of this theorem is going to be to-- a way to always decompose a Hilbert space if we're given a closed linear subspace, which you're kind of used to if, you're working in Rn or Cn, but we haven't touched on yet for Hilbert spaces. And there's a reason, because we didn't have the technology yet. But now, let's discuss orthocomplements. So let's see. OK, I actually spelled it correctly. When I was at the University of Chicago, I sent my thesis to my advisor. And he had some questions about math points of the thesis that I needed to make a little bit clearer. These weren't big deals math-wise. But of course, every time I got an email from my advisor with a question about my thesis, it struck the fear of God in me. But everything was fine. Everything was fixable. And then the last comment he made was there was like 50 times throughout the paper that I needed to-- or the thesis that I needed to change complement because I kept spelling it compliment with an I, so complement meaning what's not in there. Compliment is something nice you say. And so mistakes get made sometimes. So for the complements we're interested in, we have the following. If H is a Hilbert space and W is a linear-- is a subspace, then the following set, W-perp which is equal to the set of all u in H which are orthogonal to everything in W-- so u inner product w equals 0 for all w in W-- this is a closed linear subspace of H. So that's the first part. If the subspace we started with, W, is closed then, in fact, we can write H as the direct product of-- or the direct sum, I'm sorry, of W and the orthogonal complement. What does this mean? Again, I'll recall from linear algebra what this means, i.e.-- let me make sure I don't leave anything out. i.e. for all u in H, there exists a unique w in W, w-perp in W-perp, little w, such that u is equal to w plus w-perp. So it's simple to show that-- so to see that W-perp is a subspace of H. 
What I have to check is that linear combinations of elements of W-perp remain in W-perp. But this is kind of clear. If u inner product w1 is equal to 0 for all u in capital W-- no, no, no, backwards. If u1 inner product w is equal to 0 for all w in capital W, and u2 inner product w is equal to 0 for all w in capital W, then any linear combination of u1 and u2 will be orthogonal to w for all little w in capital W. So that's pretty easy to see why it's a subspace. And the only thing these two subspaces have in common is the zero vector, because if I have something in both W and W-perp, then it must be orthogonal to itself. And one of the conditions that we have for an inner product is that it's positive definite. If I have something orthogonal to itself, it has to be the zero vector. So that's why we get these fairly quickly. Why is W-perp closed? This follows from the continuity of the inner product. So to show that W-perp is closed, let un be a sequence in W-perp and u in H such that un converges to u. So W-perp is closed if we show that u is, in fact, in W-perp. That's the condition of a subset of a metric space being closed, right? I mean, there are several different ways to phrase this. But the most useful one is often that a subset is closed if and only if it's closed under taking subsequential limits. Every limit of a sequence is contained in the set. So we need to verify that if I have a sequence of elements in W-perp converging to something, then that something has to be in W-perp. Now, let little w be in capital W. Then the inner product of this limit, u with w, by continuity of the inner product, since the un's are converging to u, this is equal to the limit of the inner products of un with w. And all of these are 0 for all n. So this equals 0. And therefore, this inner product is 0 for all w in capital W. But this is the condition that u is in W-perp. Thus, W-perp is closed. Now, let's do the second part.
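Before the proof of part 2, here is the finite-dimensional picture of the promised decomposition H = W plus W-perp, sketched for W a line in R^3. The vectors are illustrative choices of mine, not from the lecture:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

w1 = [1.0, 1.0, 0.0]   # spans the subspace W (finite-dimensional, hence closed)
u = [3.0, 1.0, 2.0]

# orthogonal projection of u onto W, and the leftover perpendicular part
c = dot(u, w1) / dot(w1, w1)
w = [c * t for t in w1]                     # the component in W
w_perp = [a - b for a, b in zip(u, w)]      # the component in W-perp
```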
So maybe I should have numbered these-- 1, 2. So this is a proof of 1. So now, we're onto the proof of statement 2: if W is closed, then H is equal to the direct sum of W and W-perp. So now, suppose W is closed. If W is just the entire Hilbert space, then clearly the only thing orthogonal to everything is the zero vector. And we have the decomposition trivially. So let's assume W is not the entire space. So now, we actually have something to check. Let u be in H take away W. I can't remember if the backslash goes that way or that way. I don't mean H modulo W. I mean H take away the set W. And let's define the set C to be-- OK, so maybe it looks like I'm just lying to you about not having to deal with H mod W. But in any case, the actual set u plus capital W, meaning the set of u plus little w, little w in capital W. All right? So just this set. Now, first off, C is clearly non-empty. It contains u. I claim it's also closed. Well, first, let's do the easier bit, that it's convex. It's convex since if u plus w1 is in C and u plus w2 is in C-- so these are two elements now in C; they're of the form u plus an element of capital W; w1, w2 are in capital W-- and t is between 0 and 1, then t times u plus w1, plus 1 minus t times u plus w2-- this t times u plus 1 minus t times u just gives me back u-- equals u plus t w1 plus 1 minus t w2. And now, you note that w1 and w2 are elements in a subspace capital W. And therefore, this linear combination of them is also in W. So therefore, u plus this element is in C, which is all elements of the form u plus w for w in capital W. So C is convex. Now, let's just show C is closed. Why is C closed? So suppose u plus wn-- so this is a sequence of elements in C, each of these is in C-- converges to an element, let's call it v, in H. We want to show that v is in C.
Now, u plus wn converging to v implies that wn converges to v minus u. And since the wn are coming from a closed subspace-- remember, this is actually where we're using the fact that W is closed-- this implies, since W is closed, that v minus u is in W. And since v minus u is in W, this implies that v is equal to u plus some element w with w in capital W, i.e. v is in C. Again, capital C is the set of all elements of the form little u plus something from capital W. So we've shown that if we have a sequence in C converging to an element in H, then that limit must be in the set. So C is closed. All right, so let me draw a picture now of what's going on. So imagine H is R2 and W is just the x-axis. And u is this vector here. u plus W is now the horizontal line that goes through this point. This is u plus W. Let's just call it C. Now, we have the set C. We have W. We would like to break up u into an element which is parallel to W and something that's perpendicular to W, right? Now, based on this picture-- let's call this element v. This is not the same v from before, so new v. We would like for this v to be perpendicular to W. And based on this picture, what would it satisfy? It would be the element of C of minimal length. So that's how we'll define v. Or if you like, that's the element that's in W-perp. It'll end up being the perpendicular part. So since C is closed and convex, there exists a unique element v in C such that the norm of v equals the infimum of the norms of the elements in C. But I'll write it a slightly different way: the infimum over w in capital W of the norm of u plus w, because all of the elements in C come in this way. So this is the way this infimum can be written. So I've identified a candidate for the part that will be orthogonal. And then simply, u minus v will be the part that's in W, hopefully. And let's check this. So claim.
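The picture-- W the x-axis, C the horizontal line u + W, and v the foot of the perpendicular-- can be reproduced by a brute-force scan. This is illustrative; u = (3, 2) and the grid are my choices:

```python
# W is the x-axis in R^2 and C = u + W is a horizontal line; the minimal-norm
# element v of C should be the foot of the perpendicular, here (0, 2).
u = (3.0, 2.0)

# scan t in [-10, 10] in steps of 0.01 to minimize ||u + (t, 0)||^2
best_t = min((k / 100.0 for k in range(-1000, 1001)),
             key=lambda t: (u[0] + t) ** 2 + u[1] ** 2)
v = (u[0] + best_t, u[1])
```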
So first off, let's note a simple thing: note that v in C implies that u minus v is in W. v is of the form u plus little w. So u minus that has to be an element of W. And we have that u is equal to u minus v plus v. So u minus v would be the element of W. And we hope to show that v is in W-perp. OK, so now, we claim v is in capital W-perp. Now, how we do this is what's called, I guess, a variational argument or Euler-Lagrange equations argument. But in any case, something is the infimum of this if and only if-- or not if and only if, but v being the infimum of this implies that v satisfies certain equations. If you've taken classical mechanics, those equations end up being the Euler-Lagrange equations. But anyways, I claim v is in W-perp. So let w be in capital W. We want to show v inner product w equals 0. Let f of t be the norm of-- sorry, of v plus tw, squared, which is just a polynomial in t. This is equal to norm v squared plus t squared norm w squared plus 2t times the real part of the inner product of v with w. So it's just a polynomial. Now, what do we know? So then f of t has a minimum at t equals 0, because for each t, v plus tw is an element of capital C. And the norm of everything in capital C is minimized exactly at v, which is t equals 0. So this has a minimum at t equals 0, which implies f-prime evaluated at t equals 0 is 0. And therefore, if I take the derivative of f of t and set t equal to 0, then I just pick up twice the real part of the inner product of v with w equaling 0. And therefore, I get that the real part of the inner product of v with w equals 0. So I got that the real part of the inner product is 0. Now, what we can do is then repeat the previous argument with iw, i times w, in place of w to get that the real part of v inner product iw, which is equal to, in fact, the imaginary part of the inner product of v with w, equals 0. And therefore, the inner product of v with w is 0, since its real part and imaginary part equal 0.
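This variational step is easy to sandbox numerically. Here is a small Python sketch (the vectors v and w are made-up examples in C^2, not from the lecture): for f(t) = norm(v + t w)^2 with t real, the derivative at 0 is 2 Re(v, w), and the minimizing t is t* = -Re(v, w) / norm(w)^2, so the minimum sits at t = 0 exactly when Re(v, w) = 0.

```python
# Numerical check of the variational step: for f(t) = ||v + t*w||^2
# (t real), f'(0) = 2*Re<v, w>, so the minimizer is t* = -Re<v, w>/||w||^2.
# The vectors below are made-up examples in C^2.

def inner(a, b):
    # inner product on C^n, conjugate-linear in the second slot
    return sum(x * y.conjugate() for x, y in zip(a, b))

def norm_sq(a):
    return inner(a, a).real

def f(v, w, t):
    return norm_sq([vi + t * wi for vi, wi in zip(v, w)])

v = [1 + 2j, 3 - 1j]
w = [2 + 0j, 1 + 1j]

t_star = -inner(v, w).real / norm_sq(w)

# central difference approximates f'(0) = 2*Re<v, w>
eps = 1e-6
deriv_at_0 = (f(v, w, eps) - f(v, w, -eps)) / (2 * eps)
```

Since Re(v, w) = 4 here, the minimum is away from t = 0, matching the fact that this particular v is not orthogonal to w.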
And therefore, the inner product of v with w equals 0 and v is in the orthogonal complement. And so v is in the orthogonal complement, and u can be written as something in W plus something in W-perp. Now, why is this decomposition unique? So let's take just two seconds to say why. It's unique because the only thing the two subspaces have in common is the zero vector. OK, so if I have u equal to two different decompositions-- w1 plus w1-perp equals w2 plus w2-perp, where each of these is in capital W, each of these is in the orthogonal complement of capital W-- I didn't say-- actually, that's the terminology I'm using, but W-perp I'll call the orthogonal complement-- then this implies that w2 minus w1 is equal to w1-perp minus w2-perp. And this is in W. This is in W-perp. And therefore, since the only thing in both W and W-perp is 0, that implies that the left and right sides have to be 0. And therefore, w1 equals w2, and w1-perp equals w2-perp. And that gives us the uniqueness of the decomposition. So it's in the assignment-- I should say the optional assignment. But this subspace, then-- so I have a subspace W. I can take its orthogonal complement. If it's closed, then H is equal to W plus W-perp. But if I just have an arbitrary subspace and I take its orthogonal complement, I can then take the orthogonal complement of that. What do I get? Well, the orthogonal complement of a set is always closed. So I may not get back the actual subspace again. But I will get back its closure. So the closure of W, which you can check is, again, a subspace, is equal to the orthogonal complement of the orthogonal complement. So in particular, if W is closed, then the orthogonal complement of the orthogonal complement is the set again. So this is in the optional assignment from last week.
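As a sanity check on the decomposition of H into W plus W-perp, here is a toy Python sketch with H = R^3 and W the xy-plane, spanned by the orthonormal vectors e1 and e2 (the vector u is a made-up example). The W-part of u is the sum of the inner products of u with the e_i times e_i, and the W-perp part is the remainder; the two parts are orthogonal and recombine to u.

```python
# Sketch of the decomposition H = W + W-perp in the toy case H = R^3,
# with W spanned by an orthonormal set {e1, e2}. The W-part of u is
# sum_i <u, e_i> e_i and the W-perp part is the remainder. The vectors
# here are made-up examples.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(c, a):
    return [c * x for x in a]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]   # W = span{e1, e2} = the xy-plane

u = [3.0, -2.0, 5.0]

w_part = add(scale(dot(u, e1), e1), scale(dot(u, e2), e2))
w_perp_part = sub(u, w_part)
```

The W-perp part is orthogonal to both spanning vectors, and adding the two parts recovers u, mirroring the uniqueness argument above.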
All right, now, given a closed linear subspace, I can define an operator-- maybe it's-- so let's just call it a map for now-- that takes a vector u and spits out-- let's say the part that's in capital W, its W-part. Now, what kind of operator is that? Or I could have said, it takes an element u and spits out the part that's in the orthogonal complement of capital W. What kind of map is that? So there's a very special name for that. So of course, in R2, if I have an element u-- so let's say W is the x-axis. W-perp would then be the y-axis. And then this would be the part that's in W. This vector would be the part that's in W-perp. Now, what exactly is this usually called, at least going back to your calculus days? You usually refer to it as the projection of u onto the, let's say, x-axis. But that word has a very specific meaning. And then we'll show that what I was just discussing-- taking u to its W-part or its W-perp part-- is, in fact, a projection. So a bounded linear operator P going from H to H is a projection if P squared equals P. So this is a new bit of terminology. So for example, it doesn't have to exactly look like this or come from this way I was describing of obtaining a map from H to W or W-perp. That's not the definition of a projection. The definition of a projection is this. So for example, taking everything to 0 is certainly a projection, which I guess you could think of as projecting onto the subspace consisting only of 0. But what I want to say is that, in fact, the map that I just outlined is, in fact, a projection as defined here. So let H be a Hilbert space, as usual; W, a closed subspace. So then by the previous theorem, we have H is equal to the direct sum of W and its orthogonal complement.
The map pi sub W going from H to H defined by the following-- if v is equal to w plus w-perp-- so I take an element in the Hilbert space, decompose it as a part that's in W and a part that's in W-perp-- then this map evaluated on v is just w. This map is a projection. So we need to show that it is a bounded linear operator and its square gives you back the original map. So first, let's show this map is linear. So the first claim is that pi is linear. So if I have v1 equal to w1 plus w1-perp, v2 equal to w2 plus w2-perp, and I have two scalars-- lambda 1, lambda 2, complex numbers-- then lambda 1 times v1 plus lambda 2 times v2, this is equal to-- multiplying out and combining, this is equal to lambda 1 w1 plus lambda 2 w2, plus lambda 1 w1-perp plus lambda 2 w2-perp. Now, since w1 and w2 are in W, this linear combination of them is also in W. And this part also-- since that's in the orthogonal complement of W and so is that, their linear combination is also in the orthogonal complement. So the decomposition of this linear combination of v1 and v2 is a linear combination of the decompositions. And therefore, by how we defined this map as the part that's in W, pi of lambda 1 v1 plus lambda 2 v2 is equal to lambda 1 w1 plus lambda 2 w2. And w1, that's just pi sub W-- I'm calling it a projection, although I haven't proved that yet-- applied to v1, and w2 is this thing applied to v2, by definition. So this is equal to lambda 1 times pi of v1 plus lambda 2 times pi of v2. And therefore, this map is linear. OK, so it's linear. Why is it bounded? So now, to see that pi is bounded, suppose v is equal to w plus w-perp. w is, again, equal to pi of v. Then because these two things are orthogonal, I get that the norm of v squared is equal to the norm of w plus w-perp, squared. And now, what do I pick up? I pick up the norm of w squared plus the norm of w-perp squared plus 2 times the real part of the inner product of w with w-perp.
But that's 0, so I just get the sum of the norms squared. And since it's a sum of two non-negative things, that's bigger than or equal to one of them, which is norm w squared. Or rephrasing, since w is equal to pi applied to v, I've shown that the norm of pi of v is less than or equal to the norm of v, so that pi is a bounded linear operator. In fact, what we've shown is that its norm is less than or equal to 1. OK, and finally, the last piece that we need to check is that pi squared equals pi. But that's pretty easy to check. We simply note that if v is equal to w plus w-perp, then I need to check that I get pi of v again. So pi squared of v is equal to pi of the part of v that's in W, which is pi of little w. And again, pi picks out the part of the element in here that's in capital W. But this is in capital W. So this is just equal to w. And this is, by definition again, equal to pi of v. So we've shown that pi squared of v equals pi of v. OK, so one last application we'll do of minimizers, which is probably the most important application, which one could prove for separable Hilbert spaces based on what we know and have done so far. But this proof works also for non-separable Hilbert spaces. And OK, so what is this theorem I'm referring to? It's probably one of the most important theorems in all of this business. It's the Riesz representation theorem. OK, so the only category theory I know is the Baire category theorem. The only representation theory I remember is the Riesz representation theorem, which tells us we can identify the dual of a Hilbert space with the Hilbert space itself. So if H is a Hilbert space, then for all f in the dual, there exists a unique element v in H such that f of u-- so this is an element in the dual, meaning it takes u to a complex number and is linear in u-- you can write it as u inner product with this element v. All right, so every element of the dual can be realized as the inner product with a vector.
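The projection pi sub W can be prototyped in a few lines. This is a minimal Python sketch with H = R^2 and W the line spanned by the unit vector e = (1/sqrt(2), 1/sqrt(2)) (a made-up choice), checking the projection identity pi(pi(u)) = pi(u) and the bound norm(pi(u)) <= norm(u) from the proof.

```python
# The map pi_W from the lecture, in the toy case H = R^2 with W the line
# spanned by the unit vector e. We check pi(pi(u)) = pi(u) and
# ||pi(u)|| <= ||u||. The vector u is a made-up example.
import math

e = (1 / math.sqrt(2), 1 / math.sqrt(2))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def pi_W(u):
    # W-part of u: <u, e> e, since {e} is an orthonormal basis of W
    c = dot(u, e)
    return tuple(c * x for x in e)

u = (3.0, -1.0)
pu = pi_W(u)
```

Here pi_W(u) works out to (1, 1), and projecting again leaves it fixed, which is exactly the P squared equals P condition.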
Now, we saw this in a certain form already in, I think, maybe the first or second assignment, when you proved that the dual space of little lp is little lq, where 1 over p plus 1 over q equals 1. When p equals 2, q is 2. So you saw that the dual space could be identified with the space itself when we're looking at little l2, which is the only Hilbert space out of all the little lp's. And remember, how we proved that little lq was dual to little lp was via a pairing between the two, which specifically was the sum of the sequences multiplied entry by entry. And now, this theorem says that wasn't a fluke. This is, in fact, true for every Hilbert space, that the dual can be identified with the space itself in this canonical way, where every element of the dual can be realized as taking the inner product with a vector. So for the proof-- first off, we note that v is unique. If such a v exists, it's unique, since if f of u is equal to u inner product v and is equal to u inner product v-tilde for all u, that implies that u inner product v minus v-tilde equals 0 for all u in H, which, by setting u equal to v minus v-tilde, tells me v equals v-tilde. All right, so all we need to do is, given an element of the dual, come up with a vector such that whenever I stick a u into the dual vector, it's equal to this inner product. OK, so the easiest case to deal with is, of course, f equals 0. In other words, it maps u to 0 no matter what u is. We just choose this vector v to be 0. Then we'll always have f of u equal to u inner product with the 0 vector. So suppose now f does not equal 0. Then there exists, let's say, a u1 in H such that f of u1 does not equal 0. Then if I take u0 to be u1 over f of u1, this implies that f of u0 equals 1. OK, so now, let C be the set of all elements u in H which give me 1 when I stick them into f.
Now, this set is non-empty, because I just gave you an element that when I stick it into f gives me 1. So this is non-empty. And what is this set? Actually, this is the inverse image of a singleton in the set of complex numbers. Now, a singleton is a closed set. f is a continuous function, right? An element of the dual is a bounded linear map from the Hilbert space to the complex numbers, and therefore it's continuous. So the inverse image by a continuous function of a closed set is closed. So C is a non-empty closed subset of H. You can see where this is going. Now, let's check that C is convex. That writing is a bit skewed. Now, if u1, u2 are in C, t is in 0, 1, then f of t times u1 plus 1 minus t times u2, this is equal to-- now, f is linear. So this is equal to t times f of u1 plus 1 minus t times f of u2. Now, this is equal to 1. That's equal to 1. So I get 1. So C is also convex. Now, you can see how the proof is going. But you might ask, if you're just looking at the proof, why am I doing this? Well, in fact, the minimum vector, the vector with the smallest length, will give us the vector that satisfies that identity. So since C is a closed, convex, non-empty set, there exists a v0 in C such that the norm of v0 is equal to the inf over u in C of the norm of u. So let v equal v0 over the norm squared of v0. So first off, note that v0 cannot be 0, right? If v0 is 0, then f of v0 is equal to 0. But v0 is in C, so f of v0 has to be 1. So v0 is non-zero. And I claim this v does the job. For all-- OK, I was using u's to denote elements in C. But claim-- I'll just write, does the job-- i.e. for all u in capital H, f of u is equal to u inner product with v. All right, to see that, let N be the null space of f, the set of all vectors that get sent to the number 0. So this is the set of all w in H such that f of w equals 0. This is a closed linear subspace of N-- I mean, of H.
Then it's not too difficult to convince yourself that I can write C as, in fact, v0 plus w, where w is in this subspace N. And what is the norm of v0? Again, it's the infimum of the norms of all of these guys. So it's equal to, if I'm writing C in that way, the infimum over w in N of the norm of v0 plus w. All right, why am I doing this? Because of the argument we gave a minute ago, I can conclude that v0 is orthogonal to everything in N. So by the previous argument-- I mean, look back at-- is it up there or-- no, it was on this board and I erased it-- where we define this function f of t equal to, in this case, the norm of v0 plus tw, squared. Since that has a minimum at t equals 0, we conclude that v0 is orthogonal to N. So by the previous argument-- the one from the proof where we showed that H can be written as the direct sum of a closed linear subspace and its orthogonal complement-- we can show that v0 is an element of the orthogonal complement of N, the set of all vectors that get sent to 0. Now, we're almost done. So let u be in H. I want to show that f of u is equal to u inner product v. I just needed that little bit of fact there. So remember, if I look at f of u minus f of u times v0-- that is, f applied to u minus f of u times v0-- f is linear, remember? So scalars pop out. f of u is just a complex number. So this is equal to f of u minus f of u times f of v0. This equals 0, right? And therefore, u, which is equal to u minus f of u v0, plus f of u v0-- now, this first thing here, because f applied to it gives me 0, is in N. And this is in, again remember, N-perp, although I'm not going to use that specifically. I'm going to use the fact that v0 inner product with everything from N is 0. And therefore, if I take the inner product of u with v-- remember, v was v0 over norm v0 squared-- this is equal to 1 over norm v0 squared times the inner product of u with v0. Now, again, this first element, u minus f of u times v0, is in N.
So when I take the inner product of this line with v0, when v0 hits this part, I get 0, because it's in N. So if you like, call this thing w, which is in N. So this is equal to 1 over norm v0 squared times, w inner product v0 plus f of u times v0 inner product v0. And again, this thing w is in N. v0 is in the orthogonal complement of N. So this inner product is 0, and f of u times v0 inner product with itself is f of u times the norm of v0 squared. Dividing by norm v0 squared, I get f of u. So I have found a vector, namely this certain minimizer over its length squared, so that f of u is equal to u inner product with this vector for all u in H. And that completes the proof of the Riesz representation theorem. Now, next time, we'll talk about adjoints, which you had on an assignment at one point-- I think I defined the adjoint for a general Banach space. So if you have a map, let's say, from a Banach space to itself, then we define the adjoint, in a certain way, to be a linear map that goes from the dual space to the dual space. Now, in the case of a Hilbert space, the dual space is equal to the space itself. So the adjoint for a Hilbert space will be a map, again, from the space to itself. And it will satisfy a certain identity. And we will see the connection between adjoints and when a bounded linear operator is onto. So adjoints pop up now when we're trying to solve equations on Hilbert spaces. The properties of the adjoint can tell us when we can always solve our equation. And not only that, they're the analog of the transpose that hopefully you saw-- maybe it was also referred to as the adjoint-- in linear algebra in finite dimensions. And so this is looking way ahead.
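The Riesz representation theorem is easy to see concretely in finite dimensions. A Python sketch on H = C^2 (the coefficients a and the sample vectors are made-up examples): a linear functional f(u) = a1 u1 + a2 u2 is represented as f(u) = inner(u, v) with v the entrywise complex conjugate of a, for the inner product that is conjugate-linear in the second slot.

```python
# Finite-dimensional illustration of the Riesz representation theorem on
# H = C^2: the functional f(u) = a[0]*u[0] + a[1]*u[1] is represented as
# f(u) = <u, v> with v = (conj(a[0]), conj(a[1])), where
# <u, v> = sum_i u_i * conj(v_i). Coefficients and samples are made up.

def inner(u, v):
    return sum(x * y.conjugate() for x, y in zip(u, v))

a = [2 - 1j, 3 + 4j]               # defines the functional f

def f(u):
    return sum(ai * ui for ai, ui in zip(a, u))

v = [ai.conjugate() for ai in a]   # the Riesz representative of f

samples = [[1 + 0j, 0j], [0j, 1 + 0j], [1 - 2j, 3 + 1j]]
errors = [abs(f(u) - inner(u, v)) for u in samples]
```

In infinite dimensions one cannot just read off coordinates like this, which is why the lecture's minimization argument over the level set C is needed; but the identity being verified is the same.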
But hopefully, what you proved in linear algebra in finite dimensions was the spectral theorem, which says that if you have a matrix that is equal to its adjoint-- or more generally, that it's normal, that it commutes with its adjoint-- then-- and in finite dimensions, the adjoint of a matrix was: switch the entries i and j and take the complex conjugate. If that's equal to the matrix again, then what you can conclude-- the statement of the spectral theorem-- is that you can find an orthonormal basis of Cn, say, or Rn, that diagonalizes the matrix. And we'll see something like that also holds in the Hilbert space setting, but it's not so simple, as diagonalizing does not just mean finitely many eigenvalues. And I'll go more into that when we get to it. All right, we'll stop there for now.
MIT 18.102 Introduction to Functional Analysis, Spring 2021
Lecture 13: Lp Space Theory
PROFESSOR: OK, so we're going to complete our discussion of Lebesgue measure and integration by discussing the big Lp spaces, which was kind of the whole point of this endeavor: to find, in some sense, the complete space of integrable functions, whatever integrable means-- I mean, we had to define an integral-- containing the space of continuous functions, with norm given by, let's say, the p-th root of the integral of the p-th power of the continuous function. OK, so at last lecture, we introduced the general class of Lebesgue-integrable functions and the Lebesgue integral. And we proved the dominated convergence theorem, and one consequence of that was the fact that if I have a continuous function on a closed and bounded interval a, b, then the Lebesgue integral of that continuous function equals the Riemann integral of that continuous function. So you know how to compute the Lebesgue integral for every function that you know how to compute the Riemann integral for, which are mostly continuous functions. Now this can be strengthened. One can show that, in fact-- so we're not going to cover it in this class. You'll see it in another class that is devoted maybe solely to measure theory for a longer bit of time. But one can show, using the dominated convergence theorem, that, in fact, every Riemann-integrable function-- not just continuous, but every Riemann-integrable function on a closed and bounded interval-- is Lebesgue-integrable, and that the Riemann integral equals the Lebesgue integral, even for more general Riemann-integrable functions. And you can also, now using the machinery that we built up-- I don't know, maybe I'll put this in the assignment, maybe not-- completely characterize those functions which are Riemann-integrable.
And the statement is that a bounded measurable function, say, is Riemann-integrable if and only if it is continuous almost everywhere. I'm not saying that it's equal to a continuous function almost everywhere. I'm saying that it's continuous at almost every point in the interval. OK. Now let's move on to analogs of the little lp spaces that we saw earlier in the lectures and on the assignments. These are usually referred to as the big Lp spaces. And so to define these, let me first define what will be, in the end, a norm. So if f from a measurable subset E of R to the complex numbers is measurable, and 1 is less than or equal to p is less than infinity, then we define the following extended non-negative real number, the Lp norm over E. This is defined to be the integral over E of the absolute value of f raised to the p, all raised to the 1 over p. Now this is meaningful because no matter how f behaves, the absolute value of f raised to the p is a non-negative measurable function. So we can always define what the Lebesgue integral is. So this may be either infinite or finite, but it's a non-negative extended real number. And so maybe you ask, what's going on? Why did I leave out p equals infinity? We have a different definition for p equals infinity, just like we had a different definition for little l infinity. We define this quantity here, which I'm going to go ahead and start referring to as the Lp and L infinity norms even though I haven't proved they're a norm yet, or on what space. This is defined to be the infimum over M positive such that the measure of the set of x in E such that the absolute value of f of x is bigger than M equals 0. So what does it mean for M to be in this set? This means that the absolute value of f of x is less than or equal to M almost everywhere. And then I take the infimum of all such almost-everywhere upper bounds. And this is what is called the essential supremum of f.
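The sup versus essential-sup distinction can be illustrated with a throwaway Python sketch (the function and the grid are made-up, and a grid of sample points is only a crude stand-in for Lebesgue measure): take f(x) = x on [0, 1] but redefine f(1/2) = 100. The supremum jumps to 100, but since the singleton {1/2} has measure zero, the essential supremum stays 1.

```python
# Illustration of sup vs essential sup: f(x) = x on [0, 1], except
# f(1/2) = 100. Changing f on one point (a measure-zero set) changes the
# supremum but not the essential supremum. This is only a discrete
# stand-in: we sample on a grid and drop the null set by hand.

def f(x):
    return 100.0 if x == 0.5 else x

grid = [i / 1000 for i in range(1001)]

sup_f = max(f(x) for x in grid)                   # sees the bad point
ess_sup_f = max(f(x) for x in grid if x != 0.5)   # ignores the null set
```

This is exactly why the L infinity norm of this modified f is still 1: the bound f(x) <= 1 holds almost everywhere, even though it fails at one point.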
So just a little mini theorem about this L infinity norm here. What you'll see-- well, I guess you'll be seeing these lectures after the first exam. So you saw this guy actually on the exam, and you proved one of these facts. The other I will put on a future assignment. If f from E to C is measurable, then the absolute value of f of x is less than or equal to the L infinity norm of f almost everywhere on E. And another fact: if E is equal to a, b and f is continuous on a, b, then this essential supremum is equal to, in fact, just the usual what we call the L infinity norm, which, remember, was the sup over x in a, b of the absolute value of f of x. So why do I state this? Because the L infinity norm bounds f from above for almost every x, in the same way that the little l infinity norm for sequences bounded every entry in the sequence. But now for the essential supremum, we have just an almost everywhere statement. But this norm is the same as the sup norm for continuous functions. So it shouldn't be something that's too crazy. OK. So now I'm just going to state a couple of theorems, because you already gave the proofs when you did-- I think it was probably the first assignment-- the corresponding statements for little lp spaces, except now you replace a sum with an integral. I mean, you should always think of an integral as a sum. So we have the following two theorems, two inequalities. We have Holder's inequality. If p is between 1 and infinity and q is the dual exponent to p, meaning 1 over p plus 1 over q equals 1, and you have f in Lp of E and g in Lq of E, then f times g integrated over E-- well, I haven't even said-- sorry. Getting way ahead of myself.
So-- if f and g are two measurable functions, then the integral over E of the absolute value of f times g is less than or equal to the Lp norm of f times the Lq norm of g. Now of course, this inequality is only interesting if the right-hand side is finite. If this is infinite, then this is all vacuously true. So this is the analog of Holder's inequality, which you proved for sequences, where we had a sum here instead of the integral and a sum here instead of the integral. And it's proven essentially the same way. You just replace a sigma with a long s. And from Holder's inequality, you obtain Minkowski's inequality. If p is between 1 and infinity, and f, g are two measurable functions, then the Lp norm of f plus g is less than or equal to the Lp norm of f plus the Lp norm of g. And again, you prove this exactly the same way as you did for the little lp spaces, using Holder's inequality. Of course, that requires a slightly different argument for p equals infinity, you know, for this essential supremum, but in fact, that's what you did in the exam you took a few days ago. So I've been calling these things a norm even though I haven't proved they're a norm yet, or on what space they are a norm, so now I'm going to do that. So when it's clear-- also let me make a small remark. I'll denote this thing just by shorthand with just a p. And it should be clear from the context what set I'm taking this norm over, or what set I'm taking the integral that defines this norm over. OK, so now let me define the actual space that this will be a norm on. And it involves a slight abuse of terminology and notation in the end, which is just tradition, not just in this subject but-- I mean, abuse of notation is tradition in all of math. So for p between 1 and infinity, we define the space Lp of E-- so E here is, you know, if I don't say it, always a measurable subset of R.
This is the set of all functions from E to C, which are measurable, that have finite Lp norm. Now let me make a second caveat to this space. So as I've written it down now, it's a space of functions. And I'm going to keep referring to it as a space of functions. I'm going to keep referring to elements of it as functions. But the actual space itself is not a space of functions, it's a space of equivalence classes in order for this quantity, which I keep calling a norm, to actually be a norm on this space. So let me add here-- if we consider two elements of this set, the-- I shouldn't say equal, but to be the same element-- and let me give two elements, say f and g in Lp, to be the same element if f equals g almost everywhere. So as I've written it down, Lp of E consists of all measurable functions with finite Lp norm, and now I'm saying that I will consider two elements in this space to be the same-- to be the same element if they equal each other almost everywhere. So strictly speaking, let me just make this as a remark. This means an element of Lp of E is an equivalence class of the form-- so little brackets f to indicate the equivalence class. I haven't told you what the equivalence relation is, so I'll explain that as I describe the equivalence class. So the equivalence class of f is equal to the set of all function g, which are measurable. So Lp of E is really a set of all equivalents-- is a set of equivalence classes of measurable functions with finite Lp norm where two equivalence classes are equal if and only if the representative from the first equivalent class equals the representative of the second equivalent class almost everywhere. So this is what Lp of E is. Now why am I making this-- why am I adding this caveat that you have to consider two elements to be equal almost everywhere? Because this is what allows me to put-- or say this Lp norm is an actual norm. 
Otherwise it is just a semi-norm if I really consider Lp of E to be this space of actual functions. So again, this is a small point, that Lp of E is, in fact, a set of equivalence classes, where two equivalence classes are equal if and only if the representative functions are equal almost everywhere. But now that I've explained all of that and carefully told you what Lp of E is, it is customary to never refer to elements of Lp of E as equivalence classes. We usually still just refer to them as functions. Let me make that point clear. So rather than speaking of elements of Lp of E as equivalence classes, I will just refer to them as functions, with the understanding that two functions in Lp of E are considered to be the same function, the same element, if they equal each other almost everywhere. Now maybe you think this is a little weird, but you've been doing this your whole life already, because the rational numbers themselves are defined, if you look back into algebra or if you're taking algebra now, when you actually sit down and construct the rational numbers, as equivalence classes of pairs of integers. But you don't think of them as that, you think of them as 3 over 2. Not the equivalence class of 3, 2. So again, we think of Lp of E as a set of functions with finite Lp norm, with the caveat that two elements of Lp of E are the same element if the two functions agree almost everywhere. OK, now with that minor detail out of the way, let's move on to-- let me state the theorem that-- so Lp with the natural scalar multiplication, where the scalar multiple of an element is just multiplying the function by a scalar, and where the sum of two elements is just the pointwise sum of the two functions. So with the obvious definitions of scalar multiplication and addition.
With these operations, Lp of E is a vector space. Moreover, this function which I've been referring to as a norm is actually a norm on Lp. Now I'm only going to prove part of this, because to verify something is a vector space, you have to verify the operations satisfy certain properties. And when I give the proof of what I'm going to prove out of this theorem, this will be the last time I actually refer to the fact that these are equivalence classes and not functions, but just to make a point. Again, this space, strictly speaking, is a set of equivalence classes. So one would have to check that addition and scalar multiplication are well-defined on this set, meaning if I have two representatives of the same equivalence class and I multiply one by a scalar and the other by a scalar, then I get the same equivalence class in the end, which is easy to check, and the same for addition. So again, in the strictly speaking sense that Lp is a set of equivalence classes, these operations-- scalar multiplication and addition-- are well-defined. So let's check that this quantity here is a norm. So first off, note that taking the Lp norm of an element of Lp is actually well-defined. Remember, this is going to be the only theorem and proof where I actually refer to the fact that Lp is actually a set of equivalence classes, and after that we just won't do that anymore and we'll just think of Lp as a set of functions where two functions are the same element if they equal each other almost everywhere. But the first thing to note is that this is actually well-defined. So note, if I have two representatives of an equivalence class, then by what we developed for the Lebesgue integral-- so f, absolute value raised to the p, is also going to be equal to g, absolute value raised to the p, almost everywhere, and therefore these two integrals are going to equal each other.
These integrals are real numbers. They're not equivalence classes of real numbers; this is a non-negative real number. And therefore, in the notation I had before, if I have an equivalence class and I have two different representatives of it and I define the Lp norm of this equivalence class to be the integral of the representative, I get the same number regardless of the representative for that equivalence class. I.e.-- OK? So this function is well-defined: if I take the Lp norm of an equivalence class, given by just the integral of a representative, and I take two different representatives, I get the same number out. Now this quantity here equals 0 if and only if, by what we've developed in our theory of integration, the absolute value of f raised to the p equals 0 almost everywhere-- i.e., f equals 0 almost everywhere. But possibly not everywhere. Still, the equivalence class of this function is equal to the 0 element. So this was the whole reason why I went through this trouble of actually explaining to you why, strictly speaking, the most rigorous way of defining Lp is as a space of equivalence classes: even though we won't think about them that way or talk about them that way in the future, that's, in fact, what they are. Because then, this quantity here, this Lp norm, is, in fact, a norm. If the Lp norm is 0, then that element is the 0 element. OK. So that's about the last time I'm ever going to refer to the elements of Lp as equivalence classes. I will now be referring to them as functions. Homogeneity and the triangle inequality for the Lp norm then follow simply from the definition: for homogeneity, the fact that the Lp norm of a scalar multiple of f is equal to the absolute value of the scalar times the Lp norm of f; and the triangle inequality follows from the definition and Minkowski's inequality. OK.
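The two inequalities invoked for these norm properties can be spot-checked numerically. Here is a Python sketch approximating the Lebesgue integrals on E = [0, 1] by midpoint Riemann sums, with made-up choices f(x) = x, g(x) = 1 - x, and p = 3 (so the dual exponent is q = 3/2):

```python
# Numerical check of Holder's and Minkowski's inequalities on E = [0, 1],
# approximating the integrals by midpoint Riemann sums. The functions
# f(x) = x, g(x) = 1 - x and the exponent p = 3 are made-up examples.
import math

N = 10_000
xs = [(i + 0.5) / N for i in range(N)]   # midpoints of N equal subintervals

def integral(h):
    return sum(h(x) for x in xs) / N

def lp_norm(h, p):
    return integral(lambda x: abs(h(x)) ** p) ** (1 / p)

def f(x):
    return x

def g(x):
    return 1 - x

p, q = 3.0, 1.5                          # dual exponents: 1/p + 1/q = 1

holder_lhs = integral(lambda x: abs(f(x) * g(x)))
holder_rhs = lp_norm(f, p) * lp_norm(g, q)

mink_lhs = lp_norm(lambda x: f(x) + g(x), p)
mink_rhs = lp_norm(f, p) + lp_norm(g, p)
```

Here the Holder left-hand side is about 1/6 while the right-hand side is about 0.34, and since f + g is identically 1, the Minkowski left-hand side is 1 against a right-hand side of about 1.26.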
So Lp is the space of measurable functions with finite Lp norm. See? I'm already not going to refer to it ever again as the space of equivalence classes. You should think of it as a space of functions, just with two elements equal if the functions equal each other almost everywhere pointwise on E. Now we come to the big question. We now have this normed space Lp of E corresponding to all these functions that have finite Lp norm. First off, is it non-empty? Let me maybe give you the simplest example. In fact, let me prove the following simple theorem. Let E be measurable. Then f is in Lp of E if and only if the limit as R goes to infinity of the integral over minus R to R intersect E of the absolute value of f raised to the p is finite. Notice, as R goes to plus infinity-- so maybe let's not make it R, let's make it n, with n a natural number-- this is an increasing sequence of numbers. So let's give the proof real quick. The sequence of integrals over minus n to n intersect E of the absolute value of f raised to the p is an increasing sequence, because at each entry I'm taking the integral of this non-negative quantity over a bigger set. So, in fact, the limit as n goes to infinity of the integral over minus n to n intersect E always exists as a possibly extended real number, so it was meaningful to refer to this limit. Now-- in fact, I'm being a little inefficient here. Let's just erase one bit here. OK, yeah, let's do it this way; we'll do it much faster. Note that the integral over minus n to n intersect E of the absolute value of f raised to the p is equal to the integral over E of chi of minus n to n times the absolute value of f raised to the p. So I can think of what's in the integrand as a function for each n.
Since this is a sequence that is pointwise increasing, and for all x in E the limit as n goes to infinity of chi of minus n to n of x times the absolute value of f of x raised to the p equals the absolute value of f of x raised to the p, the monotone convergence theorem says that the integral of the limit, which is just the absolute value of f raised to the p, is equal to the limit as n goes to infinity of the integral of this quantity here over E, which is, again, the integral over minus n to n intersect E of the absolute value of f raised to the p. And therefore, this quantity here is finite if and only if this quantity here is finite. And therefore, f is in Lp if and only if this quantity here is finite, and in fact, they equal each other. So using that theorem, if you like, you can prove-- and I'll leave it to you-- that if f from, let's say, R to C is measurable and there exist a non-negative constant C and a q bigger than 1 such that for almost every x in R, the absolute value of f of x is less than or equal to C times 1 plus the absolute value of x raised to the minus q, then f is in Lp of R for all p bigger than or equal to 1. So how do you do that? OK, maybe I'll just indicate why: we use the previous theorem and look at the integral over minus n to n. By this estimate, that is less than or equal to the integral from minus n to n of C to the p times 1 plus the absolute value of x raised to the minus p times q. Now this is a continuous function over a compact-- or closed and bounded-- interval, so this is equal to its Riemann integral. I will often, on exams and so on, write the Lebesgue integral also in this form; I don't want to say that you're only going to use this kind of notation for the Riemann integral. But anyways, I leave it to you to show that, since q is bigger than 1 and p is at least 1, so that p times q is bigger than 1, this integral here is less than or equal to some constant depending on p, uniformly in n. OK. So there are many functions in Lp; it's not exactly a trivial space.
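The decay estimate just sketched can be written out; the constant on the right is simply what the convergent integral evaluates to:

```latex
% Assume |f(x)| \le C (1+|x|)^{-q} a.e. with q > 1 and p \ge 1, so pq > 1. Then
\int_{[-n,n]} |f|^p
\;\le\; \int_{-n}^{n} C^p (1+|x|)^{-pq}\,dx
\;\le\; C^p \int_{-\infty}^{\infty} (1+|x|)^{-pq}\,dx
\;=\; \frac{2\,C^p}{pq-1} \;<\; \infty,
% a bound independent of n; by the truncation theorem above, f \in L^p(\mathbb{R}).
```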
But what kind of space is it? Now let me state the following, which is, in fact, what you proved in the assignment I assigned right before the exam. Let a be less than b, and p between 1 and infinity. In fact, you did it for L1, but the same proof carries over for Lp. Let f be in Lp of a, b, and let epsilon be positive. Then there exists a continuous function on a, b-- which I can also impose vanishes at the endpoints-- that is within epsilon of f in Lp norm. So what this states is that the space of continuous functions is dense in Lp of a, b. And it's a proper subset of Lp of a, b: I can find elements in Lp that are not continuous, or even not equal to a continuous function almost everywhere. So it is dense and also proper. And now the final theorem we'll prove about integration in Lp spaces is that Lp is complete. This is due to Riesz and Fischer-- let's see, does that have a "c" in it? Yeah, it does. Theorem: Lp of E is a Banach space, for p between 1 and infinity, including 1 and infinity. Now, I'll give the proof for p strictly less than infinity; the case p equals infinity will appear in an assignment. So we'll do the case of p between 1 and infinity. How are we going to do this? We are going to, in fact, use that criterion from several weeks ago for when a normed space is a Banach space. Remember, a Banach space is a normed space that's complete with respect to the norm, so you would have to check that all Cauchy sequences in the space converge to something in the space. But we came up with this other criterion-- I shouldn't say we did; somebody did, and then I showed it to you-- that an equivalent way to prove completeness is to prove that all absolutely summable series in the normed space are summable. Remember, absolutely summable means that the sum of the norms is finite.
So that's what we're going to do: we will show that every absolutely summable series is summable. Suppose I have a sequence fk in Lp that forms an absolutely summable series, i.e., such that the sum over k of the Lp norms of the fk is finite. This is a series of non-negative numbers, and we're assuming it converges; let me call its sum M, which is a finite number. All right, so we have this absolutely summable series, and we want to prove that the series, the sum of the fk's, converges to something in Lp. And what do we want to show? So that that's clear ahead of time: we want to show there exists a function f in Lp such that the partial sums, the sum from k equals 1 to n of fk, converge to f as n goes to infinity in Lp of E. An equivalent way of writing that is that the limit as n goes to infinity of the Lp norm of the sum from k equals 1 to n of fk, minus f, equals 0. That will show that every absolutely summable series is summable. So we have to identify a candidate f and then show that this norm here goes to 0 as n goes to infinity. OK, so define gn from E to the non-negative reals by gn of x equals the sum from k equals 1 to n of the absolute value of fk of x. This is, again, a measurable function, because it's a finite sum of measurable functions. So what do we know? By the triangle inequality for the Lp norm, if I take the Lp norm of gn, which is this finite sum, it is less than or equal to the sum from k equals 1 to n of the Lp norms of the fk's. And this is a partial sum of the series of the norms of the fk's, which sums to M. So this is always less than or equal to M, which, again, we're assuming is a finite number.
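The setup of the completeness proof, as laid out above, in symbols:

```latex
% Given f_k \in L^p(E) with \sum_k \|f_k\|_{L^p} = M < \infty, set
g_n(x) := \sum_{k=1}^{n} |f_k(x)|,
\qquad
\|g_n\|_{L^p} \le \sum_{k=1}^{n} \|f_k\|_{L^p} \le M.
% Goal: produce f \in L^p(E) with
\lim_{n\to\infty} \Big\| \sum_{k=1}^{n} f_k - f \Big\|_{L^p} = 0 .
```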
And therefore, by Fatou's lemma-- so for each x, as n goes to infinity, gn of x converges to something, either finite or infinite, so the limit always exists, and it's equal to the infinite sum of the absolute values of the fk of x. Raising this to the p: the integral over E of the liminf of gn raised to the p-- which, since the limit exists, is just the integral of the infinite series raised to the p-- is, by Fatou's lemma, less than or equal to the liminf of the integrals over E of gn raised to the p. And now I use that bound which I have right here: remember, the Lp norm of gn was less than or equal to M, so raising that to the p-th power, this is less than or equal to M to the p. So I started off with the integral of this non-negative measurable function, which is the series of the absolute values of the fk, raised to the p, and showed that that integral is finite. Thus, by another theorem that we proved in integration-- if a non-negative measurable function has finite integral, then that function has to be finite almost everywhere-- this quantity is finite for almost every x in E. So what I get is that for almost every x in E, the series of the fk of x is absolutely convergent. And therefore, it converges for almost every x in E. And I'll define my function that way, right? Because in the end, remember, we're trying to find a function f so that these partial sums converge to f in Lp. So if we can define f as the almost everywhere pointwise limit of the partial sums, maybe we can use the dominated convergence theorem in some way, and that'll be what we do.
So define f of x to be exactly what I get from this series when it converges absolutely, and 0 otherwise. And I'm going to define g of x similarly: the limit of the gn of x when that's finite, and 0 otherwise. So now I have these two measurable, real-valued functions, and this f will end up being what the partial sums converge to. Then we have a couple of things. First: the limit as n goes to infinity of the absolute value of the sum from k equals 1 to n of fk of x, minus f of x, equals 0 almost everywhere on E. Why is that? This is just because f of x is defined to be the infinite sum whenever the sum is absolutely convergent, which is almost everywhere by what we've done. What else? The absolute value of the sum from k equals 1 to n of fk of x, minus f of x, raised to the p, is less than or equal to g of x raised to the p, almost everywhere on E. Why does this hold? It holds, for example, wherever the series is absolutely convergent, because there f of x is equal to the infinite sum of the fk of x, and so by the triangle inequality the absolute value of that difference is less than or equal to the sum of the absolute values of the tail. And that sits below g of x, since g is the limit of the gn's and the gn's are increasing. Now, since we proved that the integral of the series of absolute values raised to the p is less than or equal to M to the p, and this series is equal to g almost everywhere, this implies that the Lp norm of g is less than or equal to M, and therefore the integral of g raised to the p over E is finite. Moreover, what else do we deduce?
We also deduce from this that f-- which is bounded above almost everywhere by this quantity, the series of absolute values, which is equal to g almost everywhere-- satisfies: the Lp norm of f is less than or equal to the Lp norm of g, which, as we said, is less than or equal to M. I.e., f is in Lp of E. So again, almost everywhere we have that the absolute value of f, which equals the sum without the absolute values, is always less than or equal to the sum over k with the absolute values inside. And we know that the Lp norm of this is finite-- we proved that earlier-- which tells me that the Lp norm of g is finite, because g is equal to this quantity almost everywhere; it's 0 on a set of measure 0, which we just threw in there to make it finite everywhere. So we have f is in Lp, and g is in Lp. So at least f is a possible candidate. And we have these two facts. Now we apply the dominated convergence theorem: we have the sum from k equals 1 to n of the fk's minus f converging to 0 almost everywhere on E, and since that's true, the same holds raised to the p. So this quantity is converging to 0 almost everywhere, and it is also bounded above by g raised to the p, which is integrable-- the integral of g to the p is finite. So by the dominated convergence theorem, I can conclude that the limit as n goes to infinity of the integral over E of the absolute value of the sum from k equals 1 to n of the fk's, minus f, raised to the p, equals the integral of the limit, which, remember, is 0. I.e., we've shown exactly what we wanted to prove. Now, of course, you need a different argument for p equals infinity, and it's a little bit simpler. So we've proved that Lp is a Banach space. And notice, we used a lot of different tools and things that we had developed over the course of this course so far. I keep using this word completion even though we haven't really talked about it.
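The final dominated convergence step above, collected in symbols:

```latex
% With S_n := \sum_{k=1}^{n} f_k, we have almost everywhere on E:
|S_n - f|^p \to 0, \qquad |S_n - f|^p \le g^p, \qquad \int_E g^p \le M^p < \infty,
% so the dominated convergence theorem gives
\lim_{n\to\infty} \|S_n - f\|_{L^p}^p
= \lim_{n\to\infty} \int_E |S_n - f|^p
= \int_E \lim_{n\to\infty} |S_n - f|^p = 0 .
```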
I kind of left it out of the first chapter because I wasn't going to use the second chapter from the lecture notes that are usually used for this course. But you should think of the completion as the smallest Banach space containing a given normed space. What the density statement, along with the theorem due to Riesz and Fischer, says is the following. The Lp norm is, in fact, a norm on the continuous functions on a, b, because a continuous function that equals 0 almost everywhere has to be 0 everywhere. And the completion of the continuous functions on a, b with this norm is equal to the space Lp. So now we're going to move back to some more general theory of functional analysis. This was specific to measure and integration on the real numbers. Now we're going to go to more general topics, and probably more intuitive topics, because a lot of it has analogs-- a lot of functional analysis is supposed to be in some way analogous to stuff you've seen in linear algebra. Some of it definitely is not. For example, in one of the assignments, you proved that the unit ball in little lp over the natural numbers-- the set of sequences with finite little lp norm-- is not compact. While from calculus, you know that in Rn the closed unit ball is compact: that's the Heine-Borel theorem, or Bolzano-Weierstrass, depending on whether you take as your starting definition of compactness the one in terms of open sets or the one in terms of sequences having a convergent subsequence, which are equivalent at least in a metric space. OK. Now we're going to move on to the topic of Hilbert spaces. These are special in the sense that the norm comes from an inner product-- maybe you saw an inner product in linear algebra. Or you should have, at least.
And therefore, you have notions of being orthogonal and notions of projections. We saw things that have that flavor when we were talking about Banach spaces in general and modding out by a subspace-- you can think of the equivalence class corresponding to an element as the projection onto the complement of that subspace-- but it wasn't an exact analogy. A lot of exact analogies will now occur for Hilbert spaces, and certain operators on Hilbert spaces will be very analogous to the self-adjoint or symmetric matrices which you saw in linear algebra. And of course, from an applied standpoint, Hilbert spaces are where the action is: this is the setting of quantum mechanics. Quantum mechanics takes place in a Hilbert space; the states are square-integrable functions-- now that we've dealt with that-- over R3 if we're in three dimensions, or over the line R, with L2 norm equal to 1, along with the Schrodinger equation. So Hilbert spaces are very important. They arise naturally in many problems. And because of this additional structure-- the norm coming from an inner product-- you can say a lot more about them. So before we get to Hilbert spaces, let me add a "pre" before that: pre-Hilbert spaces. I said that Hilbert spaces are going to be Banach spaces whose norm comes from an inner product. Pre-Hilbert spaces are just normed spaces whose norm comes from an inner product. So we make the following definition. A pre-Hilbert space H is a vector space-- typically over C, but you can also take it over R, that's fine as well; I'll just for definiteness say over C-- with a Hermitian inner product. So this is new terminology, and maybe "inner product" is new terminology, too, so let me write out what a Hermitian inner product is. This is, i.e., a map, usually denoted using brackets, from H cross H into the complex numbers.
Satisfying certain properties. One: it is linear in the first variable. So for all scalars lambda 1, lambda 2 and all v1, v2, w, the inner product of lambda 1 times v1 plus lambda 2 times v2 with w is equal to lambda 1 times the inner product of v1 with w, plus lambda 2 times the inner product of v2 with w. Two: for all v and w in H-- I wrote capital V a minute ago; I should have written H-- the inner product of v with w is equal to the complex conjugate of the inner product of w with v. And three, which is positive definiteness of the inner product, if you want to call it that: for all v in H, the inner product of v with itself is bigger than or equal to 0-- so it's a real number, and it's non-negative-- and it equals 0 if and only if v is the zero element. OK. So let me make a couple of remarks. Remark 1: the third property implies that the only thing orthogonal to everything in the space is the zero element. So if v is in H and v is orthogonal to everything-- I keep using the word orthogonal; I should just say its inner product with every element is 0-- this implies that v equals 0. And of course, the converse holds, too: if v is equal to 0, then the inner product of 0 with every element is 0, just by linearity. Remark 2 is that if I have two elements v and w and a scalar lambda, then the inner product of v with lambda times w is equal to the complex conjugate of the inner product of lambda w with v. By the first property, the lambda pops out; the complex conjugate of a product is the product of the complex conjugates; and then if I undo the outer conjugate, I get the complex conjugate of lambda times the inner product of v with w. So it's linear in the first entry, meaning scalars just come out. But if I have a scalar multiple in the second entry, then that comes out as well, but with a complex conjugate over it. That's all I wanted to say.
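The properties just listed, written out, with the standard inner product on C^n as the model example:

```latex
% Hermitian inner product on a complex vector space H:
\langle \lambda_1 v_1 + \lambda_2 v_2,\, w\rangle
  = \lambda_1 \langle v_1, w\rangle + \lambda_2 \langle v_2, w\rangle,
\qquad
\langle v, w\rangle = \overline{\langle w, v\rangle},
\qquad
\langle v, v\rangle \ge 0 \ \text{with equality iff } v = 0 .
% Conjugate linearity in the second slot (Remark 2):
\langle v, \lambda w\rangle
  = \overline{\langle \lambda w, v\rangle}
  = \overline{\lambda}\,\overline{\langle w, v\rangle}
  = \overline{\lambda}\,\langle v, w\rangle .
% Model example on \mathbb{C}^n:
\langle v, w\rangle := \sum_{j=1}^{n} v_j \overline{w_j}.
```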
So I said that pre-Hilbert spaces-- you naturally think of them as normed spaces where the norm comes from the inner product. Here we have an inner product; where's the norm? Definition: if H is a pre-Hilbert space with inner product denoted as before, we define-- so I'm not calling it a norm yet, I'm just saying we define this function on H-- the norm-looking quantity of v to be the inner product of v with itself, raised to the 1/2 power. In a minute we'll show that this is, in fact, a norm, but for now I'm just going to call it this function on H, or a possible norm on H. All right, so we have the following theorem, valid in any pre-Hilbert space. For all u, v in H, a pre-Hilbert space: the absolute value of the inner product of u and v is less than or equal to this norm-looking thing of u times the norm-looking thing of v. This shouldn't come as a complete surprise: if we took H to be Rn-- so now a vector space over R, with the inner product just the dot product-- this is stating the Cauchy-Schwarz inequality that you know and love from before. So what's the proof? For real t, let f of t be the norm of u plus t times v, squared. So I said norm, but I haven't proved it's a norm yet, so you'll have to forgive me if I keep calling it a norm-- in the end it is. We note f of t is a non-negative number, because it's the inner product of u plus tv with u plus tv, and the inner product of a vector with itself is non-negative. Now if we compute this out using how linearity works for inner products, I get: the inner product of u with u, plus t squared times the inner product of v with v, plus t times the inner product of u with v, plus t times the inner product of v with u. And this last term is the complex conjugate of the previous one. So let me just rewrite this: the norm of u squared, plus t squared times the norm of v squared, plus-- again, this number is the complex conjugate of that number.
And when I add a complex number to its complex conjugate, I get twice the real part: plus 2t times the real part of the inner product of u and v. Now, this is just a polynomial in t with a non-negative coefficient in front of t squared, so-- in the non-trivial case where v is non-zero-- it has a minimum somewhere, and this minimum has to be non-negative, since this function is greater than or equal to 0. Now, where does this minimum occur? It occurs where the derivative is equal to 0. f prime of t equals 0 if and only if t equals t min, which is minus the real part of the inner product of u and v, divided by the norm of v squared. I will leave this calculus to you; this is just a polynomial-- take the derivative with respect to t and solve for when it's 0. And therefore, since f evaluated at this point is non-negative, 0 is less than or equal to f of t min, which, when you stick t min into the formula, gives the norm of u squared, minus the real part of the inner product of u and v, squared, over the norm of v squared. And therefore-- moving this over to the other side, multiplying by the norm of v squared, and taking square roots-- the absolute value of the real part of the inner product of u and v is less than or equal to the norm of u times the norm of v. So this is almost what I want. I want the absolute value of the inner product of u and v, but all I have is the real part. Of course, if the inner product of u and v is 0, the inequality I want to prove is automatic. Also, if u or v is 0, the inequality is automatic. So that's why I'm only dealing with the non-trivial case: if this inner product is 0, then we're already done, because the right-hand side is non-negative.
So suppose this is non-zero, and let lambda be the complex number given by the complex conjugate of the inner product of u and v, divided by its absolute value. Then the absolute value of lambda is equal to 1, because the absolute value of the conjugate equals the absolute value of the original complex number. And the absolute value of the inner product of u and v is equal to lambda times the inner product of u and v, which, by linearity in the first entry, equals the inner product of lambda u with v. Now this is a non-negative real number, and therefore it's equal to its real part. We pulled a similar trick when we were talking about the triangle inequality for integrable functions taking values in the complex numbers. So this is equal to the real part of the inner product of lambda u with v, which is less than or equal to-- by what we've done, we've already proven this inequality holds for every pair of vectors, so it also holds for lambda u and v-- the norm of lambda u times the norm of v. And a simple computation off to the side: if I take the inner product of lambda u with lambda u, lambda comes out of the first slot and the complex conjugate of lambda comes out of the second, and lambda times its conjugate is the absolute value of lambda squared, which equals 1. So, raising to the 1/2 power, the norm of lambda u equals the norm of u, and we've proven what we wanted. OK. So this was the Cauchy-Schwarz inequality in a general pre-Hilbert space, with this quantity that I've referred to as a norm but haven't proved is a norm yet. Next time we will use the Cauchy-Schwarz inequality to prove that, in fact, this thing that I'm masquerading around in norm notation is, in fact, a norm on a pre-Hilbert space. And from there, we'll introduce Hilbert spaces, which are those pre-Hilbert spaces that are actually complete with respect to this norm.
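The two-step Cauchy-Schwarz argument above, collected in one place:

```latex
% Step 1: for real t and v \ne 0,
0 \le f(t) := \|u + t v\|^2
  = \|u\|^2 + 2t\,\mathrm{Re}\langle u,v\rangle + t^2 \|v\|^2,
\qquad
t_{\min} = -\frac{\mathrm{Re}\langle u,v\rangle}{\|v\|^2},
\qquad
0 \le f(t_{\min}) = \|u\|^2 - \frac{(\mathrm{Re}\langle u,v\rangle)^2}{\|v\|^2},
% hence |\mathrm{Re}\langle u,v\rangle| \le \|u\|\,\|v\|.
% Step 2: with \lambda := \overline{\langle u,v\rangle}/|\langle u,v\rangle|,
% so |\lambda| = 1 and \|\lambda u\| = \|u\|,
|\langle u,v\rangle| = \langle \lambda u, v\rangle
  = \mathrm{Re}\,\langle \lambda u, v\rangle
  \le \|\lambda u\|\,\|v\| = \|u\|\,\|v\| .
```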
And really-- and we'll prove this at some point-- there are only two types of reasonable Hilbert spaces. And I mean this in a very strong sense, not a loose sense. The first type is just finite-dimensional: think of C raised to the n, n-tuples of complex numbers, where the inner product is defined in the natural way. The other is little l2, the set of sequences with finite little l2 norm. This is basically the only other type of Hilbert space there is, and I'll say what I mean by that. How it'll work is we'll show that every separable Hilbert space-- which is what I mean by reasonable; the most reasonable spaces we work with are separable, meaning they have a countable dense subset-- has a countable orthonormal basis. Now, an orthonormal basis I haven't defined. It's not a Hamel basis, but it serves the same purpose, meaning you can't write every element as a finite linear combination of orthonormal basis elements, but you can write it as an infinite expansion in the orthonormal basis. Think Fourier series. And this is what provides the identification of a separable Hilbert space with either C to the n, if the orthonormal basis is finite, or little l2, if the orthonormal basis is countably infinite. And we'll get to that, possibly by the end of next lecture.
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_4_The_Open_Mapping_Theorem_and_the_Closed_Graph_Theorem.txt
So let's continue with the big-name theorems, or the theorems with big names, or the big theorems with names. The last one was the uniform boundedness theorem, which followed from Baire's category theorem, which let me again recall for you. If M is a complete metric space, Ck is a collection of closed sets, and M is equal to the union of these closed sets, then at least one of these closed sets has an interior point. Or, another way of saying it: at least one of these Ck's contains an open ball. Now, what we're going to prove as a consequence of the Baire category theorem is the so-called "open mapping theorem," which says that surjective bounded linear operators send open sets to open sets. It's kind of the backwards version of continuity. So: if B1, B2 are Banach spaces and T is a bounded linear operator from B1 to B2 which is surjective, meaning onto, then T is what's called an "open map"-- or you would just say "T is open"-- which means for every open subset u of B1, T of u is open in B2. So T takes open sets to open sets. That's the statement of the theorem: if you have a bounded linear operator between two Banach spaces which is surjective, then T is an open map. We're first going to specialize to one type of open set and prove a specialized version of what we want. Then we're going to show, using the linearity of T along with scaling and translation, that this implies that open sets get mapped to open sets. So first, what we're going to prove: recall that the open unit ball B of 0, 1 in B1 is the set of all b in B1 such that the norm of b is less than 1. We'll show that the image of this open ball contains an open ball-- now the image is in B2, so an open ball in B2 centered at 0. T of the open ball could be some weird set, and we know that linear maps take 0 to 0; what we're saying is that there exists an open ball contained in the image.
Now, like I said, once we prove this, we're going to use the linearity of T to shift and scale these balls around, so that we prove the result for every open u. But this is the heart of it. Since T is surjective, everything in B2 gets mapped onto. So B2 is equal to the union, over n a natural number, of the closure of T of the ball of radius n centered at 0 in B1. Everything in B2 gets mapped to by something in B1, and everything in B1 lies in some ball centered at 0-- just take n bigger than the norm of a fixed element, and then its image will be contained in the corresponding set. And we're taking closures just so that we can write B2 as a union of closed sets, so that we can now apply Baire's theorem. This implies, by Baire, that there exists a natural number n0 such that the closure of the image of this ball contains an interior point, i.e., contains an open ball. Now, here's the thing: T is linear, so the image by T of the ball of radius n0 centered at 0 is the same as-- I'm using a little bit of handy notation which I didn't introduce yet: if I have a subset of a vector space and a scalar out in front of it, what I mean is the set obtained by multiplying everything in the set by that scalar, which is meaningful in a vector space. So, by scaling, you can check that the set under the closure is the same as n0 times T of the open unit ball, and the closure also respects scaling. So n0 times the closure of the image of B of 0, 1 contains an open ball. Let me just draw a picture to convince you. If I take this set and multiply it by 1 over n0, I just get the rescaled set, which is, to use a fancy word, homeomorphic to the original. So this is just a picture to back up what I'm about to say next-- which you can also verify not just by pictures but by going through definitions-- namely that the closure of T of B of 0, 1 contains an open ball.
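The surjectivity and scaling steps above, in symbols:

```latex
% Surjectivity of T gives the decomposition of B_2 into closed sets:
B_2 = \bigcup_{n \in \mathbb{N}} \overline{T\big(B(0,n)\big)},
% so Baire's theorem yields n_0 with \overline{T(B(0,n_0))} containing an open ball.
% Linearity of T and homogeneity of the norm give the scaling identity
\overline{T\big(B(0,n_0)\big)}
  = \overline{n_0\, T\big(B(0,1)\big)}
  = n_0\, \overline{T\big(B(0,1)\big)},
% hence \overline{T(B(0,1))} itself contains an open ball.
```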
So what does this mean? This means there exists some point v0 in B2 and a number r positive such that the ball of radius 4r centered at v0-- the 4 just because it's going to come in handy later with the arithmetic; we can arrange it by choosing r small-- is contained in the closure of the image of the open ball of radius 1. So v0 is in the closure, and therefore I can find points from the image of this ball close to it. That is, there exists a point v1 which is equal to T of u1, with u1 in the open unit ball, such that v1 is close to v0. And how close, so I don't mess this up? Within 2r. So why did I do that arithmetic? Simply because if I look at the ball of radius 2r centered at v1: v1 is within distance 2r of v0, so everything in here is going to be within distance 4r of v0. And since the ball of radius 4r centered at v0 is contained in the closure of the image of the unit ball, I obtain that the ball of radius 2r centered at v1 is contained in the closure of the image of the ball of radius 1. So now, what I'm going to show is that the closure contains an open ball centered at 0. That's not quite what I want to show, because, remember, I want to show that the image of the ball of radius 1 contains an open ball; what I'm about to show is that the closure of that contains an open ball. Now, if the norm of v is less than r, then look at the element 1/2 times the quantity 2v plus v1. 2v has norm less than 2r, so the element 2v plus v1 is in the ball centered at v1 of radius 2r, which is contained in the closure. So, multiplying by 1/2, this element is in 1/2 times the closure, which, using the linearity of T and the homogeneity of the norm, tells me that this element is in the closure of the image of the ball of radius 1/2. So this is not crazy.
This is an element of this, so 1/2 times this ought to be an element of 1/2 times this set. And then the 1/2 comes all the way through because T is linear and the norm is homogeneous, so the 1/2 can come through. But as you're seeing this, think it through slowly. So that implies that v, which I can write as minus T of u1 over 2-- u1 here was just defined as an element of the ball of radius 1, which gave me v1, which was close to v0-- plus 1/2 times 2v plus v1. Now, this is an element of-- don't confuse what I'm about to write down with the notation we used when we were talking about quotients. That's not what I mean here. I mean take this set and add this fixed element to it. So this set here is the set of all elements of the form something in here plus this fixed element. And now again, by the linearity of T, this set here is exactly the same as T of minus u1 over 2 plus the ball of radius 1/2 centered at 0, with the closure over all of that. And the closure respects everything that we're doing here. Now, u1 has norm less than 1. And therefore, u1 over 2 has norm less than 1/2. Everything in here has norm less than 1/2. So something with norm less than 1/2 plus something with norm less than 1/2 is something with norm less than 1. So this is contained in the ball of radius 1 centered at 0-- take the image by T, and then the closure. So here, I'm using that this is contained in the ball of radius 1. So that's almost what we wanted to show. What we have shown is that the ball of radius r is contained in the closure of the image of the ball of radius 1. Now, again, by scaling this-- so let me just write down what I'm going to write down. Then I'll explain it.
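Putting the last two steps together, here is the argument in one display (my notation; B(x, r) is the open ball of radius r about x):

```latex
% There are v_0 \in B_2 and r > 0 with
B(v_0, 4r) \subset \overline{T\big(B(0,1)\big)},
% and v_1 = T u_1 with \|u_1\| < 1, \ \|v_1 - v_0\| < 2r, hence
B(v_1, 2r) \subset B(v_0, 4r) \subset \overline{T\big(B(0,1)\big)} .
% For any \|v\| < r:
v \;=\; -\tfrac{1}{2}\,T u_1 \;+\; \tfrac{1}{2}\big(2v + v_1\big)
  \;\in\; T\big(-\tfrac{u_1}{2}\big) + \overline{T\big(B(0,\tfrac{1}{2})\big)}
  \;\subset\; \overline{T\big(B(0,1)\big)},
% since \|u_1/2\| < 1/2. Conclusion of this step:
B(0, r) \subset \overline{T\big(B(0,1)\big)} .
```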
And therefore, the ball of radius 2 to the minus n times r, which is equal to the set of elements of the form 2 to the minus n times elements in the ball of radius r, is contained in 2 to the minus n times the closure of the image of the ball of radius 1. So all I did right here was say, again, by homogeneity of the norm, the ball of radius 2 to the minus n times r is the same as 2 to the minus n times all elements of the ball of radius r. And that's contained in this by what we've already proven. So this is contained in the set of all elements of the form 2 to the minus n times things from this set. And again, by homogeneity, the 2 to the minus n comes through here and then through here, so this is contained in the closure of the image of the ball of radius 2 to the minus n, for all n in the natural numbers. So let me put-- what I've proven is this and that. So that's almost what I wanted to prove right here, just I want to be able to drop the closure. And now what I'm going to prove is that I can drop the closure if I take the ball of radius r over 2. So now, we'll prove that the ball of radius r over 2 is, in fact, contained in the image of the unit ball by T. Now, we don't have that immediately, because what we have is that the ball of radius r over 2 is contained in the closure of the image by T of the ball of radius 1/2, and nothing says a priori that everything in that closure lies in the image itself-- taking the closure could add points. So let's show this. Let v have norm less than r over 2, so it's in this ball. Then, as we've proven up here, v is in the closure of the image by T of the ball of radius 1/2. And therefore, there exists a b1 in the ball centered at 0 of radius 1/2 such that the norm of v minus T of b1 is less than, let's say, r over 4. And now we're going to iterate this. So now, think of this element as lying in the ball of radius r over 4. And therefore, now I take n equals 2 in here.
Thus, v minus T b1 is in the closure of the image of the ball of radius 1/4, which implies there exists a b2 in the ball of radius 1/4 such that the norm of v minus T b1 minus T b2 is less than half of what we had before, so r over 8. But now you see the game that we're playing. This is now in the ball of radius r over 8, which implies this is contained in the closure of the image of the ball centered at 0 of radius 1 over 8. And therefore, I can find a b3 in the ball of radius 1 over 8 so that the norm of v minus T b1 minus T b2 minus T b3 is less than r over 16. And then, we just continue inductively. So I said I would do the proper induction argument once and I did it last time. So I will never do it again. Continuing, we obtain a sequence b k of elements in B1 such that the norm of b k is less than 2 to the minus k. And so there's that. And if I look at the norm of v minus the sum from k equals 1 to n of T of b k, this is less than 2 to the minus n minus 1 times r. Let me make sure my-- yeah. Now, we've used the fact that B2 is complete. We haven't used the fact that B1 is complete yet. So we ought to use it at some point. We're going to use it now by showing that v has to be, in fact, inside the image of the ball of radius 1. So these b k's form a Cauchy sequence-- or I shouldn't say they form a Cauchy sequence, but their partial sums form a Cauchy sequence. Let me write it this way. The series of the b k is absolutely summable because the norm is bounded by 2 to the minus k, which is summable. And since B1 is a Banach space, the series must be summable, which implies there exists a b in B1 such that b is equal to the sum from k equals 1 to infinity of b k. Moreover, the norm of b, which is equal to the limit as n goes to infinity of the norm of the sum from k equals 1 to n of b k, is, by the triangle inequality, less than or equal to the limit as n goes to infinity of the sum from k equals 1 to n of the norms of b k, which is equal to the sum from k equals 1 to infinity of the norm of b k.
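The iteration just carried out, collected in one place (my summary of the inductive step):

```latex
% Given \|v\| < r/2, choose inductively
b_k \in B_1, \qquad \|b_k\| < 2^{-k},
% so that after n steps the remainder satisfies
\Big\| \, v - \sum_{k=1}^{n} T b_k \, \Big\| \;<\; 2^{-(n+1)}\, r ,
% using at stage n the scaled inclusion proved earlier:
B\big(0,\, 2^{-n} r\big) \;\subset\; \overline{T\big(B(0,\, 2^{-n})\big)} .
```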
And each of these is less than 2 to the minus k. So this is less than the sum from k equals 1 to infinity of 2 to the minus k. And that sum is just 1. So the norm of b is less than 1, using the first property of this sequence. The second property of this sequence then shows that v is equal to the image of b by T. Indeed, since T is continuous, T b, which is equal to the limit as n goes to infinity of T applied to the sum from k equals 1 to n of b k, equals, by linearity, the limit as n goes to infinity of the sum from k equals 1 to n of T of b k. By the second property, this limit is equal to v. And therefore, v is in the image of the ball of radius 1 by T. And thus, the ball of radius r over 2 is contained in the image of the ball of radius 1. So that's the special-ish case that I wanted to prove in terms of open sets. So what I've shown is that, if you like, the interior point 0 remains an interior point. Now let's show that implies the full claim of what I want to prove for the open mapping theorem, that every open set gets mapped to an open set in B2. We'll just use translation and scaling again. Suppose u, a subset of B1, is open and b2 is the image of some b1 in u, so b2 is in T of u. Then there exists an epsilon positive such that b1 plus all elements of the ball centered at 0 of radius epsilon-- which is equal to the ball centered at b1 of radius epsilon-- is contained in u, since u is open. Now, since there exists a delta positive such that the ball of radius delta is contained in the image T of the ball of radius 1-- that's the special case we just proved, with delta equal to r over 2-- this implies, again by homogeneity, that the ball of radius epsilon times delta is contained in the image of the ball of radius epsilon, again, because T is linear and the norm is homogeneous. So let's go back to how we had it written down. So b2 plus the ball of radius epsilon times delta-- this is equal to b2 plus epsilon times the ball of radius delta.
So this is actually equal to b2 plus the ball centered at 0 of radius epsilon times delta. The epsilon I can pull out. Now, this is contained in b2 plus epsilon times the image T of the ball of radius 1, since the ball of radius delta is contained in that image. And this is equal to T of b1 plus epsilon T of B(0, 1). And again, by linearity and homogeneity, this is equal to the image of b1 plus epsilon times the ball of radius 1, which I can say is T of the ball centered at b1 of radius epsilon. Now, this epsilon, remember, was chosen so that that ball is contained within u. And therefore, the image is contained in the image of u. So every point b2 of T of u has a ball around it contained in T of u. So this is the point-- that we can just take the special case and shift things around to get the general statement for open sets getting mapped to open sets. So from the open mapping theorem-- I don't know, it seems almost topological-- but we get what's called the closed graph theorem, which gives you sufficient conditions to be able to check if something is continuous. It's a little bit more convenient. And I'll explain why in just a second. So this is the closed graph theorem. But first, I need to state just a simple theorem before I actually state the closed graph theorem. If B1 and B2 are Banach spaces, then their Cartesian product, which I can give a natural vector space structure coming from B1 and B2-- the sum of an ordered pair of elements is just the entry-by-entry sum-- can also be given a norm coming from these two: for an ordered pair (b1, b2), the norm of this is just defined to be the sum of the norms in the respective spaces. So this is a normed space. But moreover, if they're both Banach spaces, this is a Banach space. It's not difficult to prove just based on the definition. I'm not going to write the proof. I will leave the proof to you.
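Before moving on, here is the translation argument just completed for the open mapping theorem, written out as one chain of inclusions (my summary):

```latex
% u \subset B_1 open, b_1 \in u, B(b_1, \epsilon) \subset u, and B(0, \delta) \subset T(B(0,1)).
% Then around b_2 = T b_1:
T b_1 + B(0, \epsilon\delta)
  \;\subset\; T b_1 + \epsilon\, T\big(B(0,1)\big)
  \;=\; T\big(B(b_1, \epsilon)\big)
  \;\subset\; T(u),
% so every point of T(u) is interior, i.e. T(u) is open.
```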
Again, it's not difficult. Simply from how we've defined the norm a Cauchy sequence in B1 cross B2, the first entry will form a Cauchy sequence because of this definition of norm, and the second entry of the sequence will form a Cauchy sequence in B2. Both of those have limits. So then you can prove the sequence consisting of ordered pairs has a limit. It's the same way you can prove that R2 is complete assuming R1 is complete. All right, so now, I can state the closed graph theorem, which is the following-- if B1, B2 are Banach spaces and you have a linear operator from B1 to B2-- so all you know is it's linear, don't know that it's bounded-- then there's an equivalent condition you can check to see if it is, in fact, a bounded linear operator. Then T is a bounded linear operator from B1 to B2 if and only if the graph of T which is defined to be the set-- let's see, what notation do I use-- u, v such that v equals T u-- let me write it this way-- which is a subset of B1 cross B2, is closed. So a linear operator is a bounded linear operator if and only if the graph of this linear operator given by u, T u is closed. Now, why is this a little bit easier or I say convenient than just checking if something's a bounded linear operator? Well, maybe it's difficult to prove the boundedness property that we have that's equivalent to continuity. So a bounded linear operator is a linear operator which is continuous. So maybe it's difficult to prove the bound. So you have to go back and try and prove-- just for a God-given or instructor-given operator-- then you try and go back and prove continuity. And continuity says, well, take a sequence u n's converging to u. You then have to prove that T of u n converges to T of u. But there's kind of two statements in there. You have to prove T of u N converges and that limit is equal to T u. What the closed graph theorem does it eliminates one of those steps. 
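The comparison just made between the two proof obligations can be laid out side by side (my phrasing):

```latex
% Proving continuity of T directly: given u_n \to u in B_1, show BOTH
\lim_{n\to\infty} T u_n \ \text{exists} \qquad\text{and}\qquad \lim_{n\to\infty} T u_n = T u .
% Proving the graph \Gamma(T) = \{(u, T u) : u \in B_1\} is closed: given
u_n \to u \qquad\text{and}\qquad T u_n \to v \quad (\text{convergence is now a hypothesis}),
% show only the single identity
v = T u .
```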
Because to prove that the graph is closed in B1 cross B2-- let's think about that. What does that mean? That means you have to check that it's closed under taking limits of sequences. So you have to show that given a sequence u n tending to u and T of u n converging to v, that v is equal to T u. So you get to assume already that T of u n converges. You just now have to check that the thing it converges to is actually equal to the image of the limit of the u n's. I hope that makes sense and explains why I said this is actually a little more convenient than continuity, or at least useful. So this is a two-way street. There's always, typically, one side of the street that's plowed. So let's do this direction, assuming that T is a bounded linear operator, and show that the graph is closed. So suppose T is a bounded linear operator. Let (u n, T of u n) be a sequence in the graph of T such that u n converges to u and T of u n converges to v. To show the graph is closed, we have to show that the pair (u, v) is in the graph, that v is equal to T of u. But this follows from continuity. Then v is equal to the limit as n goes to infinity of T of u n. And since T is continuous, this is equal to T of the limit of the u n, which equals T u. Thus, the ordered pair (u, v) is in the graph. So let's now prove the opposite direction. It's still not that difficult. So what I'm first going to do is I'm going to draw a diagram. This may be the only diagram I ever draw in this class-- a commuting diagram. We'll see. So T takes B1 to B2. I have the graph of T. And now, I'm going to define two maps going from the graph to B1 and from the graph to B2. The graph sits as a subset of B1 cross B2. So the first map going from the graph is just going to be the projection onto the first entry-- that's pi 1-- and then pi 2 will be the projection onto the second entry.
Now, to finish this diagram, I need to have an arrow going from B1 up to gamma of T. So I just first want to note-- actually, before I note it, I'm going to go ahead and draw the arrow. And then I'll tell you what S is. So pi 1 with respect to the graph of T-- this is a surjective map. So let me actually define these things. Pi 1 goes from the graph of T to B1 via pi 1 of an element of the form (u, T u) equals u, and then pi 2 from the graph of T to B2 just takes the second entry: pi 2 of (u, T u) equals T u. So my point here-- so first note-- gamma of T is a Banach space. Why? Gamma of T is a subspace of B1 cross B2 because T is linear, and it's closed. So a closed subspace of a Banach space is, again, a Banach space, since it's a closed subspace of the Banach space B1 cross B2 with this norm that I defined in the earlier theorem. Now, pi 1 and pi 2-- these are both continuous, viewed now as maps from the Banach space of the graph to B1 and B2. Pi 1 is a bounded linear operator from the graph of T to B1. And pi 2 is a bounded linear operator to B2. Why is this? I mean, this is pretty clear. This is because if I take the norm of pi 2 of (u, v)-- let me write it (u, v) now, where v is standing for T u-- this is equal to the norm of v, which is less than or equal to the norm of u plus the norm of v, which is the norm of (u, v) in the product space. And the same estimate works for pi 1. So pi 1 and pi 2 going from the graph to B1 and B2-- these are bounded linear operators. And pi 1, when restricted to the graph, is, in fact, bijective. It's one to one and onto. So I've used "moreover" again. So let me just write moreover. Pi 1, going from the graph to B1, is one to one and onto, bijective. Everything in B1 gets mapped to by pi 1 from the graph. If you have some u1 here, then the element (u1, T of u1) maps to it. And that's the unique element of the graph above u1, since T is a function. There's only one element in the graph corresponding to a given u.
And so let's pause the proof here because I forgot to write a corollary after the open mapping theorem. In fact, let's state it over here. So this was the end of the proof of the open mapping theorem. So there's space for the corollary, which I wanted to write here. It's the following-- if B1 B2 are Banach spaces, T is a bounded linear operator from B1 to B2, which is bijective, meaning one to one and onto, so it has an inverse, then T inverse is a bounded linear operator from B2 to B1. So if I have a bounded linear operator from one Banach space to another and it's bijective, its inverse is automatically continuous. And the proof of that just follows from the open mapping theorem. So I'm going to write it in one line because that's all the space I'm going to give myself. T inverse is continuous if and only if, for all u that's open, for all open sets u and v1 which is in the image, the inverse image of T inverse-- or I should say the inverse image of u by T inverse, which you can just check is equal to T of u-- is open. And that's true by the open mapping theorem since a bijective map is surjective. So every bijective bounded linear operator automatically has a bounded inverse. It's also linear. I mean, I didn't say that, but if I have a linear operator which is bijective, then its inverse is also linear. So now coming back to this proof of the closed graph theorem. So we have the graph, which is a Banach space in its own right as a subset of B1 cross B2, as a closed subset of B1 cross B2. I have pi 1 and pi 2, which are bounded linear operators between B1 and B2. I didn't say that they're linear, but that should be clear. And pi 1, when restricted to the graph, is one to one and onto. Pi 1 here just takes the first element of what's in the graph and spits out-- let me, instead of having T u there, let me have v like that. We just know that v is equal to T u. And so this is one to one and onto. It's bijective. 
Thus, it has an inverse which is a bounded linear operator, by the corollary that I stated over there. So let S, going from B1 up to the graph, be defined to be that inverse of pi 1-- it is a bounded linear operator. And therefore T, which is equal to pi 2 composed with S going from B1 to B2, is now the composition of two bounded linear operators: S, which is the inverse of pi 1, is a bounded linear operator, and pi 2 is a bounded linear operator. And therefore, their composition is also a bounded linear operator, which implies that T is a bounded linear operator. So I made kind of a mess of that having to go back and forth, but the proof is simple enough. I'm sure Professor Melrose would have just drawn the picture, but I decided to make a mess of it. So those are some pretty important theorems that follow from the Baire category theorem. So we got uniform boundedness from Baire category. We got open mapping from Baire category. We got closed graph from open mapping. If you're a lover of logic, think about it a little bit-- open mapping implies closed graph, but you can also show that closed graph implies open mapping. So as logical statements, they're equivalent. Now, we're going to move on to the Hahn-Banach theorem. So I haven't done many examples here going into this kind of general theory, but don't worry. There will be plenty of examples in the assignments of using these theorems, and so on. So the Hahn-Banach theorem-- these theorems before were all kind of answering a question. Maybe I didn't state the questions as clearly. Closed graph doesn't so much answer a question as give us an alternative to proving continuity. Open mapping you can think of as trying to answer this question-- if I have a bijective bounded linear operator, is its inverse a bounded linear operator?
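The corollary and the way it was just used in the closed graph proof can be summarized as follows (my notation for the spaces of bounded operators):

```latex
% Bounded inverse theorem (the corollary of open mapping used above):
% B_1, B_2 Banach, \ T \in \mathcal{B}(B_1, B_2) bijective \implies T^{-1} \in \mathcal{B}(B_2, B_1),
% since for every open u \subset B_1,
(T^{-1})^{-1}(u) \;=\; T(u) \ \text{is open by the open mapping theorem.}
% In the closed graph proof it was applied to \pi_1 : \Gamma(T) \to B_1, giving
T \;=\; \pi_2 \circ \pi_1^{-1} : B_1 \longrightarrow B_2 ,
% a composition of two bounded linear operators, hence bounded.
```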
And uniform boundedness is the answer to the question: for at least a sequence of bounded linear operators, does pointwise boundedness imply uniform boundedness? Now, the question that the Hahn-Banach theorem tries to answer is the following-- given a general non-trivial normed space V, is the dual space given by simply the zero functional? So at the end of last class, I defined the dual space. Recall, this is equal to the bounded linear operators from the normed space V to the field of scalars, which is a Banach space because it's a space of bounded linear operators from a normed space to a Banach space. And we usually refer to elements of the dual-- I'm not sure if I said this last time-- not as bounded linear operators from the vector space to the field of scalars, but as functionals. Because the classical Banach spaces were spaces of functions, the things that ate them and spat out a number were called functionals, the name evolving from functions of functions and functions of lines. So the question is, if I have just a normed space, is the dual space nontrivial in general? So last time, I hinted that for certain spaces, you can actually write down the dual space explicitly. You can at least identify the dual space in an explicit way. I hinted last time that the dual of little l p is, in fact, equal to l p prime, where p prime is defined as the dual exponent. So 1 over p plus 1 over p prime equals 1. And this is for p bigger than or equal to 1 and less than infinity, but not for p equals infinity. And if you remember last time, c 0, which is the set of sequences which converge to 0-- I can't remember if I wrote this down at the end of last class, but you can also identify its dual space with little l1.
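The duality examples recalled above, in symbols:

```latex
% Explicitly identified dual spaces:
(\ell^p)' \;\cong\; \ell^{p'}, \qquad \frac{1}{p} + \frac{1}{p'} = 1, \qquad 1 \le p < \infty
% (the identification fails for p = \infty), together with
(c_0)' \;\cong\; \ell^1 ,
% where c_0 is the space of sequences converging to 0.
```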
So just to give you some examples of spaces that do have nontrivial dual space-- examples of normed spaces which have nontrivial dual space are given by the little l p spaces. But now the question is, in general for a normed space, is the space of functionals, the dual space, nontrivial? And the statement of the Hahn-Banach theorem is that there are, in fact, a lot of elements in the dual space. Now, we're not going to get to the statement or proof of that in this lecture. Because we first need to go over, or state, a result-- an axiom-- from set theory that we'll need. So first, like I said, we need an axiom, or recall an axiom, a certain lemma, from set theory. So first, let me set down the appropriate terminology. A partial order on a set E is a relation, meaning just a subset of E cross E, denoted by this-- but we usually don't identify it as a subset of E cross E-- such that three things occur. So you think of it kind of as a less than or equal to. For all e in E, e is related to e. I will say less than or equal to, even though this may have nothing to do with less than or equal to. For all e and f in capital E, e less than or equal to f and f less than or equal to e-- these two assumptions imply that e is the same element as f. And transitivity-- so the first one is reflexivity, and this second one is called antisymmetry. For all e, f, g in E, the two assumptions e less than or equal to f and f less than or equal to g imply e less than or equal to g. So this is the definition of a partial order. So to go with this, we say an upper bound of a set D contained in E is an element e in E such that for all d in capital D, d is less than or equal to e. And a maximal element of the set E is an element e in E which nothing lies strictly above, essentially-- nothing majorizes it-- such that if f is in E and e is less than or equal to f, then e is equal to f. And a similar definition for a minimal element.
So this was the definition of a maximal element. And there is a similar definition for a minimal element. Now, note a maximal element may not sit above everything in E necessarily. It may just be kind of off to the side of everything in E. Because this doesn't assert that you can always compare two elements to see if one is bigger than the other. That's something a little more restrictive, which is the following-- if E with less than or equal to is a partially ordered set, meaning a set with a partial order, a chain in E is a set C such that for all e, f in C, either e is less than or equal to f or f is less than or equal to e. So a chain is something such that, for any two elements in the set, you can compare whether one is bigger than the other. That's what a chain is. But for a general partial order, it doesn't necessarily need to be the case that you can always check to see if one is bigger than the other. For example, your partial order could be on the power set of some set, and the partial order is inclusion-- whether one subset is contained in another. Then it satisfies these three properties, but there are sets that cannot be compared to each other. So let me write this as a lemma due to Zorn. We're not going to prove this. Even though I'm writing "lemma," just take it as an axiom-- an axiom that goes with Zermelo-Fraenkel set theory-- which is the following: if every chain in a nonempty-- of course, we're considering nontrivial stuff, so in a nonempty-- partially ordered set E has an upper bound, then E has a maximal element. So if you can check that every chain has an upper bound, then you get to conclude that the partially ordered set has a maximal element. Now, we'll give a simple application of this at the start of the next lecture. But first, let me put this into your brains to marinate on. So we're going to use Zorn's Lemma to prove the Hahn-Banach theorem.
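Zorn's lemma as just stated, written symbolically (my formalization of the hypotheses and conclusion):

```latex
% Let (E, \le) be a nonempty partially ordered set.
% Hypothesis: every chain C \subset E has an upper bound, i.e.
\forall\, C \subset E \ \text{chain}, \ \ \exists\, e \in E \ \ \forall\, c \in C : \ c \le e .
% Conclusion: E has a maximal element, i.e.
\exists\, m \in E \ \ \forall\, f \in E : \ \big( m \le f \implies m = f \big).
```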
And I'll go into why that is next time. But you can use Zorn's Lemma to prove other things, and it is used to prove other things. First off, from Zorn's Lemma you can, in fact, prove the axiom of choice, which says that given any collection of sets, you can essentially choose an element from each set-- stated in a very precise way. Another application, which we're going to use at the beginning of next time, is to prove the following-- but first let me make a definition. A Hamel basis H, which is a subset-- not a subspace, but a subset-- of V, a vector space, is a linearly independent set such that every element of V is a finite linear combination of elements of H. So from linear algebra, for finite dimensions, this is in some sense how one can define the dimension: you find a basis, and the cardinality of that basis is always the same. So a Hamel basis for R n is just the vectors with 1 in one of the entries and 0 otherwise. So for R 2, it's just (1, 0) and (0, 1). For little l1, the analogous vectors-- 1 in the first entry followed by 0's, 1 in the second and 0 elsewhere, 1 in the third and 0 elsewhere, and so on-- form a Hamel basis not of all of little l1, but only of the subspace of finitely supported sequences, since finite linear combinations of them have only finitely many nonzero entries. Now, the question is, does every vector space have a Hamel basis? And using Zorn's Lemma next time, we'll show that indeed, every vector space has a Hamel basis. And in fact, that Hamel basis can be quite big. But that'll be a simple application we do next time-- via Zorn, you can show that every vector space has a Hamel basis. We'll stop there.
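To pin down the definition left to marinate on, here is the Hamel basis condition in symbols (my formalization; K denotes the scalar field):

```latex
% H \subset V is a Hamel basis if H is linearly independent and
\forall\, v \in V \ \ \exists\, n \in \mathbb{N}, \ h_1, \dots, h_n \in H, \ \lambda_1, \dots, \lambda_n \in \mathbb{K} :
\quad v \;=\; \sum_{k=1}^{n} \lambda_k\, h_k .
% Note: finite sums only -- no limits or infinite series appear in this definition.
```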
MIT_18102_Introduction_to_Functional_Analysis_Spring_2021
Lecture_2_Bounded_Linear_Operators.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: All right, so let's continue our discussion of Banach spaces. So let V be a normed space, meaning a vector space with a norm on it. And last time, a Banach space was defined to be a normed space such that the metric induced by this norm is complete-- all Cauchy sequences converge. So if you want to check that a normed space is a Banach space, you have to take a Cauchy sequence and show that it converges in the space, which we did last time for the space of bounded continuous functions on a metric space. Now, there's an alternative, useful way of checking to see if a space is a Banach space, which we'll use in a minute. But to state it, I need to introduce a definition real quick. So let v n be a sequence in V. And I'm going to abuse notation and write subset of V, even though that's a sequence. I'm using this notation, though-- it means v n is a sequence in V. The series-- which is right now just an expression, just chalk on a board, just a symbol-- we say this series is summable if the sequence of partial sums, which are elements in the normed space, converges. We say that the series v n is absolutely summable if the series of norms, involving now these non-negative numbers, converges. OK, so this is just like the definition of the convergence of a series of real numbers, which you dealt with in earlier analysis, and absolute convergence, only here I'm using the terminology absolutely summable because this is the terminology Richard Melrose used, so I want to stick to what he's using. He's Australian, so I think maybe that has something to do with it. So that's one of the unfortunate things about this, is I can't tell if you're laughing, but I'm going to assume you're laughing. OK, so absolutely summable means the sum of the norms converges.
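The two definitions just given, written out in symbols (my notation):

```latex
% For a sequence (v_n) in a normed space V:
\sum_n v_n \ \text{is \emph{summable}}
\iff
\Big( S_N := \sum_{n=1}^{N} v_n \Big)_{N \in \mathbb{N}} \ \text{converges in } V ,
% and
\sum_n v_n \ \text{is \emph{absolutely summable}}
\iff
\sum_{n=1}^{\infty} \|v_n\| < \infty .
```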
And you have this theorem, just like from real analysis, which you saw at one point, that if vn-- so if this series is summable-- is-- I'm missing an adjective there. If I have an absolutely summable series, then the sequence of partial sums-- this is a Cauchy sequence in the space V. So again, we're working through in a normal space V. All right, and the proof is the same as in the real numbers case. So proof I'll leave to you. This is just a simple exercise. And it's the same as for V equals R. Now, notice I said something which is strictly weaker than what you encounter in either case V equals R. In the case V equals R, you have the theorem that if I have an absolutely summable series, then it's summable. Every absolutely convergent series is convergent. But I didn't say that here. I just said that the sequence of partial sums is Cauchy, not necessarily convergent. So when is the sequence of partial series convergent? When can I say that an absolutely summable series in this norm space is summable? And I can say that precisely when it's a Banach space. So the theorem that we're going to prove is that V is a Banach space if and only if every absolutely summable series is summable. OK, so this characterizes Banach spaces as those spaces for which this theorem you have from real analysis, that every absolutely convergent series converges, is precisely that. Every absolutely summable series is summable. And sometimes, that is an easier property to verify than going through the whole Cauchy business. And sometimes, it's exactly the same amount of work. We'll use this later when we deal with integration and measure theory to prove that the big Lp spaces are Banach spaces. All right, so we have two directions to prove-- that if V is a Banach space, then we get every absolutely summable series is summable. And this is pretty straightforward. So one direction-- if V-- suppose V is a Banach space. 
Then if v n-- if this series is absolutely summable-- then, by the previous theorem, which I didn't prove but is very easy to prove, I get that the sequence of partial sums is a Cauchy sequence in V. And because V is a Banach space, every Cauchy sequence converges. And therefore, it converges in V. And therefore, the series is summable. So that direction is simple enough. Let's go the opposite direction and show that the condition that every absolutely summable series is summable implies that V is a Banach space. So every absolutely summable series is summable. Now, we want to show that every Cauchy sequence converges in V. So let v n be a Cauchy sequence in V. What we're going to do is, in fact, show that there's a subsequence of the sequence that converges-- that this sequence has a convergent subsequence. And once we've done that, we're done, because remember back to your real analysis days. If a Cauchy sequence has a convergent subsequence, then the entire sequence converges. And v n converges by metric space theory, all right? So real analysis stuff. OK, so let's find this subsequence. And basically, we're going to build this subsequence up by speeding up the convergence of v n or, if you like, speeding up the Cauchiness of v n. So the fact that the sequence is Cauchy implies that for all natural numbers k, there exists a natural number N sub k such that for all n, m bigger than or equal to N sub k, we have that the norm of v sub n minus v sub m is less than 2 to the minus k. All right, why did I choose 2 to the minus k? Because that's summable, all right? And you'll see. And so what we're going to do is build up essentially a telescoping sum from well-chosen guys. So define n sub k. What is this going to be? This is going to be equal to N sub 1 plus dot dot dot plus N sub k. So n sub 1 is less than n sub 2 is less than n sub 3, because at each stage, I'm adding a natural number.
So n sub 1 is equal to capital N sub 1. n sub 2 is equal to capital N sub 1 plus capital N sub 2. These are natural numbers. So I'm always getting bigger at each stage. So this is an increasing sequence of integers. And for all k, little n sub k is greater than or equal to capital N sub k, because little n sub k is equal to a sum of natural numbers plus capital N sub k. And so the v sub n sub k's are going to be essentially the guys which converge. Thus, for all k natural numbers, I get that v sub n sub k plus 1 minus v sub n sub k-- if I take the norm of that-- so n sub k is bigger than or equal to capital N sub k. n sub k plus 1 is bigger than or equal to n sub k, which is bigger than or equal to capital N sub k. And therefore, by this condition-- how the N sub k's are chosen-- so this, this, this, and what's in blue tells me that this is going to be less than 2 to the minus k. And therefore, thus, the sum of v sub n sub k plus 1 minus v sub n sub k-- this is absolutely summable, right? Because the norm of this is less than 2 to the minus k, which you can sum. And by our assumption that every absolutely summable series is summable, this absolutely summable series is summable-- i.e., the sequence of partial sums, k equals 1 to m of v sub n sub k plus 1 minus v sub n sub k-- this sequence of partial sums converges in V. OK, so to recap again, we started off with a Cauchy sequence. We go out in the Cauchy sequence far enough and pick certain guys so that they're pretty close to each other. And how close? So close that the sum of their norms is finite, so that it's absolutely summable. And therefore, the series is summable by our assumption. But this is essentially a telescoping sum.
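The index construction above can be sketched in code. This is my own illustration, not the lecture's: for a concrete Cauchy sequence of reals whose tail spread is easy to bound, we pick indices n_1 < n_2 < ... so that consecutive chosen terms differ by less than 2 to the minus k.

```python
# My own illustration of the subsequence construction: for the Cauchy
# sequence seq[n] = 1/(n+1), the spread of the tail past index n is at
# most seq[n], so we can pick indices n_1 < n_2 < ... with
# |seq[n_{k+1}] - seq[n_k]| < 2**-k, just as in the proof
# (the sum N_1 + ... + N_k makes the indices strictly increase).
seq = [1.0 / (n + 1) for n in range(600)]

indices = []
n = 0
for k in range(1, 9):
    # find a point past which all tail gaps are below 2**-k: for this
    # decreasing positive sequence, sup over i, j >= n of
    # |seq[i] - seq[j]| is at most seq[n]
    while seq[n] >= 2 ** -k:
        n += 1
    indices.append(n)
    n += 1  # keep the chosen indices strictly increasing

assert all(indices[i] < indices[i + 1] for i in range(len(indices) - 1))
assert all(abs(seq[indices[i + 1]] - seq[indices[i]]) < 2 ** -(i + 1)
           for i in range(len(indices) - 1))
```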
Thus, the sequence v sub n sub m, where v sub n sub m is the sum from k equals 1 to m minus 1 of v sub n sub k plus 1 minus v sub n sub k, plus v sub n sub 1, converges in V as m goes to infinity. And I'm done. So v sub n sub m is equal to this telescoping sum plus v sub n sub 1-- so when I add this up, terms cancel. And I just pick up the last one, which is v sub n sub m, minus the first one, which is v sub n sub 1. So if I add on v sub n sub 1, I just pick up v sub n sub m. Now, as m goes to infinity, this converges to something because this sequence of partial sums converges. And this is just fixed, independent of m. So this sum converges. And therefore, v sub n sub m converges. And thus, this subsequence of our original Cauchy sequence converges, proving that the Cauchy sequence converges in V. And we're done. OK, so Banach spaces, these are a nice generalization of the spaces that you worked with in real analysis and linear algebra-- Rn, Cn, and so on. So what are the analogs of matrices, which you had to use in calculus and linear algebra? This is going to lead to our next topic, which is operators and functionals-- operators being the analog of matrices that take one vector into another vector; functionals are the analog of taking a vector and taking its dot product with a fixed vector, spitting out a real number. So functionals will eat vectors and spit out real or complex numbers, depending on the field that you're working in. So let me write down just an example to keep in mind, as far as operators go. So I want you to keep this example in mind, which was the whole reason for building a lot of this machinery. I mean, this example came first, and then the machinery was built later to be able to say all we can about these kinds of operators-- or these kinds of transformations.
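The telescoping identity driving the step above-- v sub n sub m equals v sub n sub 1 plus the sum of the consecutive differences-- can be checked mechanically on any finite sequence. A quick sketch of my own:

```python
# Quick check of the telescoping identity used in the proof:
# w[m] = w[0] + sum over k < m of (w[k+1] - w[k]), for any sequence.
w = [1.0 / (n + 1) for n in range(10)]  # any finite sequence works
for m in range(len(w)):
    telescoped = w[0] + sum(w[k + 1] - w[k] for k in range(m))
    assert abs(telescoped - w[m]) < 1e-12
```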
And depending on if you're in my class or not, maybe you saw a question about such a creature on an assignment or maybe on an exam. So let K be a function on 0, 1 cross 0, 1, let's say, into the complex numbers. And let's assume it's continuous. For f, a continuous function on 0, 1, we can define a new function from f, Tf of x, to be the integral from 0 to 1 of K of x, y times f of y dy. Now, these are things that-- I'm about to write a few things that you can just check by hand. You can then check that Tf is also a continuous function. And it's linear in the argument f: for all lambda 1, lambda 2 in C and f1, f2 continuous on 0, 1, T of lambda 1 f1 plus lambda 2 f2 equals lambda 1 Tf1 plus lambda 2 Tf2. And it has another property, which I'm going to say in a minute, namely that it's continuous on the space of continuous functions. So we've already proven that this space of continuous functions on 0, 1 is a Banach space, right? This was a special example of the space of bounded continuous functions on a metric space we considered before, because on the closed and bounded interval 0, 1, every continuous function is bounded. So this is just equal to C subscript infinity of 0, 1. So we know this is a Banach space. And so this is an example of what's called a linear operator. So definition-- let V and W be vector spaces. So you should have seen a linear transformation. I'm going to call it a linear operator. I'm just recalling what it means to be linear. Let V and W be vector spaces. We say a map T from V to W is linear if for all lambda 1, lambda 2 in your field of scalars-- either R or C-- and for all v1, v2 in V, T of lambda 1 v1 plus lambda 2 v2 equals lambda 1 T v1 plus lambda 2 T v2. So this is just a note off to the side: given two norm spaces, a linear map between them I will most often refer to as a linear operator.
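As a concrete illustration of this integral operator and its linearity, here is a discretized sketch in Python-- my own construction, with an arbitrarily chosen kernel, approximating the integral by a left Riemann sum:

```python
import math

# Discretized sketch of the integral operator Tf(x) = ∫_0^1 K(x,y) f(y) dy.
# The kernel below is an arbitrary choice for illustration, not from the
# lecture; the integral is approximated by a left Riemann sum on a grid.
N = 200
xs = [i / N for i in range(N + 1)]

def K(x, y):
    # example continuous kernel (an assumption for illustration)
    return math.exp(-abs(x - y))

def T(f):
    """Return Tf sampled on the grid, via a left Riemann sum."""
    return [sum(K(x, y) * f(y) for y in xs[:-1]) / N for x in xs]

# linearity in the argument f: T(a f1 + b f2) = a T f1 + b T f2
f1, f2 = math.sin, math.cos
a, b = 2.0, -3.0
lhs = T(lambda y: a * f1(y) + b * f2(y))
rhs = [a * u + b * v for u, v in zip(T(f1), T(f2))]
assert max(abs(u - v) for u, v in zip(lhs, rhs)) < 1e-9
```

The linearity check passes up to floating-point round-off, mirroring the property stated in the lecture.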
Rather than linear transformation, which is what you probably heard in linear algebra, I'll refer to these as operators. Something I meant to say as well was why we care about a guy like this, other than that it looks nice. You care about guys like these because operators of this form are essentially the inverse operators of differential operators. I mean, you know that from the fundamental theorem of calculus. The inverse operation of taking a derivative is integrating. So it shouldn't be a surprise that the inverse operator-- meaning if I take f as my data to some ODE-- can be written as this kind of linear operator. That shouldn't come as too much of a surprise. So that's why we care about them: operators of this form arise as the inverses of differential operators. OK, so now, in this class at least, we're not just interested in any old linear operator. We're going to be interested in a certain class of linear operators-- those which are continuous. So let me recall for you an equivalent way of saying that a map is continuous. So this is just any map, any function, not necessarily a linear operator. T is continuous on V if-- and there's two ways to say this-- for all v in V and for all sequences v sub n converging to v, we obtain that T of v sub n converges to T of v. And an equivalent way of stating this in terms of what one would call topological notions is that the inverse images of open sets are open. So for all open u in W, the inverse image of u-- which I will recall for you is the set of v in capital V such that T v is in u; I'm not saying that T is invertible, that's the inverse image-- this set is open in V. Remember, the notion of an open set is that for every point in that set, there's a small ball centered at that point that's contained entirely in the set.
OK, now, for linear maps, there's a very simple equivalent way of seeing when a map is continuous on a norm space. Now, on finite dimensional spaces, every linear transformation is continuous. I should say that. So if you take any linear operator from Rn to Rm, Cn to Cm, or Rn to Cm-- anything between two finite dimensional spaces-- it will always be continuous if it's linear. That is not always the case between two Banach spaces. Now, again, is there a more efficient way of checking when something is continuous? Or what's an equivalent way of saying that? A more useful way is the following characterization that we have. A linear operator T between two norm spaces is continuous if and only if there exists a C positive such that for all v in capital V, if I take T of v and take its norm in W, this is less than or equal to a constant times the norm of v in capital V. OK, now, in this case, instead of saying that T is a continuous linear operator, we say T is a bounded linear operator. So we don't really say continuous linear operator. We say bounded linear operator. Now, this doesn't mean the image of V is a bounded set in W. That's not what this means. The only linear operator that takes a vector space into a bounded set is the zero operator. So we're not saying that it's taking all of V to a bounded set. But what this inequality does say is that it takes bounded subsets of V to bounded subsets of W. So let's prove this-- let me put a star by this condition, so I don't have to write it out so much. OK, so let's go in this direction. So let's assume star and prove that this linear operator T is continuous. And this is not too difficult to do. We'll use the first characterization of continuity. Let v be in V. And suppose v sub n is a sequence in V converging to v.
Then by star, if I look at the norm of T of v sub n minus T of v in W-- well, first, let me add one little step. I'm using that it's a linear operator, meaning I can write this as the norm of T of v sub n minus v. And therefore, this is less than or equal to a constant times the norm of v sub n minus v in capital V. And so the norm of T of v sub n minus T of v is less than or equal to some fixed constant, depending only on T, times the norm of v sub n minus v. This goes to 0 as n goes to infinity. And the left side is always trivially bigger than or equal to 0. Therefore, by the squeeze theorem, this thing on the left must go to 0. And therefore, T of v sub n converges to T of v. So that takes care of one direction. OK, so we've shown that this boundedness property of T implies that T is continuous-- that the linear operator T is continuous. Now, let's show that continuity implies this boundedness property. And for this direction-- so I'm assuming T is continuous-- I'm going to use the second characterization of continuity here. So then the inverse image of every open set in W is an open set in V. So consider the inverse image of the ball centered at the zero vector in W of radius 1-- let me just recall, this is the set of v in V such that T v is in the ball. The ball of radius 1-- the set of elements in W such that their distance to 0 is less than 1-- is an open set in W. And therefore, its inverse image must be an open set in V. So we have here 0, 1, everything inside. Don't include the boundary. This must be an open set in V. And T takes one to the other. Now, what do I know? 0 is in here. And every linear transformation takes 0 to 0. So 0 has to be in the inverse image. And therefore-- since this set is open-- so this is 0.
Since the set is open, I can find a small ball of radius r centered at 0 which remains inside this set in V. And therefore, it gets sent to some subset of the ball in W. So this is the picture. Let me write down the math that goes with it. Since T of 0 equals 0, that implies 0 is in the inverse image, which implies-- since this is open-- there exists an r positive such that the ball in V centered at 0 of radius r is contained in the inverse image of the ball of radius 1 centered at 0 in W. Just look at the picture here. Let v be in V. And let's not look at 0, because v equals 0 will satisfy this inequality no matter what C you have. So we just need to look at v not equal to 0. I claim that I can take the constant to be 2 over r, all right? OK, then if I take v and I rescale it-- well, let me not write it as dividing v. Let me write it this way: r over 2 times v over the norm of v. So this is a vector in capital V. What is its length? Its length is r/2. So if I take its length in V, this is equal to r/2, which is less than r, which implies that r over 2 times v over the norm of v is in the ball of radius r in V centered at 0. And therefore, it has to be in this blue disk. And therefore, it gets mapped to something in this blue guy here, which is contained inside-- remember, this big yellow thing was the ball of radius 1 centered at 0 in W. And therefore, the norm of T of r over 2 times v over the norm of v, in W, must be less than 1. And now, scalars pull out of a linear transformation. So this comes out. And then it comes out of the norm by the homogeneity of the norm. And so I can multiply through by 2 times the norm of v over r. And I get that the norm of T of v in W is less than 2 over r times the norm of v. And therefore, star holds with C given by 2 over r. OK, so continuous linear operators between norm spaces-- we call them bounded linear operators because they satisfy this boundedness property, namely that they take bounded sets to bounded sets.
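A finite-dimensional sanity check of the star condition-- my own sketch, not from the lecture: any matrix A on R^2 is a bounded operator, and approximating C as the sup of the norm of Av over a grid of unit vectors witnesses the inequality for all v.

```python
import math
import random

# Sanity check: any matrix A on R^2 is a bounded operator.  We
# approximate C = sup over unit v of |Av| on a grid of unit vectors and
# verify the star condition |Av| <= C |v|.  The matrix is an arbitrary
# example choice.
A = [[3.0, 1.0], [0.0, 2.0]]

def apply_op(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def norm(v):
    return math.hypot(v[0], v[1])

# approximate the operator norm by sampling the unit circle
C = max(norm(apply_op(A, [math.cos(t), math.sin(t)]))
        for t in (2 * math.pi * i / 10000 for i in range(10000)))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-5, 5), random.uniform(-5, 5)]
    # small slack: C is only a grid approximation of the true sup
    assert norm(apply_op(A, v)) <= (C + 1e-3) * norm(v) + 1e-12
```

Here C approximates the largest singular value of A, which is the true operator norm with respect to the Euclidean norms.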
Now, it's going to become quite tedious for me to keep writing norm sub W, norm sub V, and so on. So I'm going to stop using the subscripts. But it should be pretty clear from the context where the norm is. If I have a bounded linear operator T from V to W and you see me write the norm of T v, you should interpret this as the norm of T v in W. Or if you see the norm of v, with v an element of capital V, then you should interpret this as the norm of v in capital V. So I'm dropping subscripts just to save having to write too much, since it would soon get tedious. OK, so before the next definition, let's take a look at this linear operator I wrote up there a minute ago. Can we see that that's a bounded linear operator on the space of continuous functions? So T given by that integral-- where k is a continuous function-- is a bounded linear operator. It's pretty clear to see that it's linear in f, right? Scalars pull out and so on. Let's check that it's bounded. So let f be a continuous function. Recall that the norm on C 0, 1 is the infinity norm, given by the sup over x in 0, 1 of the absolute value of f of x, which is, in fact, a maximum. It's attained at a certain value. But I'll just write sup anyways. And so now, we want to estimate the norm of T of f in terms of the norm of f. That's what this boundedness property is. Then for all x in 0, 1, the absolute value of Tf of x is equal to the absolute value of the integral of k of x, y times f of y dy. Now, the absolute value of the integral is less than or equal to the integral of the absolute value-- so this is less than or equal to the integral of the absolute value of k of x, y times the absolute value of f of y dy. Now, the absolute value of f of y, for all y in 0, 1, is less than or equal to the infinity norm of f. So replacing it only makes the integral bigger. And the same thing for k, right? k is a continuous function on 0, 1 cross 0, 1. And therefore, it's bounded-- it attains a max on this set, being a continuous function.
So I can also replace it by its infinity norm. And here, you should interpret this as being the infinity norm on continuous functions on this set-- so this infinity norm is the sup of the absolute value of k of x, y for x, y in 0, 1 cross 0, 1. And therefore, this equals the infinity norm of k times the infinity norm of f-- these are just two numbers. And this holds for all x. And therefore, the supremum over x is bounded by this number as well. So the infinity norm of Tf is bounded by the infinity norm of k times the infinity norm of f-- so this is a bounded linear operator, with constant given by the infinity norm of k. This k is usually referred to as a kernel. So if I've said that before and haven't explained where that comes from or why I use that word, this is usually referred to as the kernel of this linear operator. So there's an example for you. I've already used up both boards? Yeah, I missed the big room that had three of these. Now, given two norm spaces, V and W, we can consider the set of all bounded linear operators from V to W, which we denote by B V, W-- scripty looking B-- the set of T such that T is a bounded linear operator from V to W. So it's not difficult to see that this is a vector space. So let me put this in a remark: it's clear that this is a vector space. The sum of two linear operators is a linear operator. The scalar multiple of a linear operator is a linear operator. And those two operations preserve continuity. So this is clearly a vector space. Now, we can put a norm on this space. We define the operator norm of an element in here to be the supremum, over all unit length vectors v, of the norm of T of v. So I want to recall that this norm is being taken in W, because T of v is an element in W. This norm here is being taken in V, because little v is an element in V. So maybe it's not clear at first. But let's go ahead and prove that the operator norm is, in fact, an actual norm, so that this becomes a norm space. It's not just a norm because I said it is. So let's prove this.
It's not too difficult to see, again. So let's prove definiteness, namely that the norm of T is 0 if and only if T is 0. If T is the zero operator, then clearly its norm is 0. Conversely, this sup is 0 if and only if T of v is 0 for all unit length v. So suppose T is an operator such that T of v equals 0 for all unit length v. Then for all v in V take away 0, 0 equals T of v over the norm of v-- all I'm saying is you rescale now, OK?-- which implies that T of v equals 0 for all v. So T is the zero operator if its norm is 0. 2: homogeneity follows from the homogeneity of the norm on W. So the norm of lambda times T, this is equal to the sup over norm of v equals 1 of the norm of lambda T v. And this is equal to the sup over norm of v equals 1 of the absolute value of lambda times the norm of Tv. And if I take a set and multiply it by a non-negative number, then that non-negative number comes out of the sup. Hopefully, that was one of the first exercises you were ever given on sups. And therefore, this equals the absolute value of lambda times the operator norm of T. So that proves homogeneity of the norm. And now, the triangle inequality follows from the triangle inequality for the norm on W again. So I take two bounded linear operators, S and T. And I take an element v of V with norm equal to 1. Then the norm of S plus T applied to v, this is equal to the norm of Sv plus Tv. And by the triangle inequality for the norm on W, this is less than or equal to the norm of Sv plus the norm of Tv. And now, the norm of Sv-- again, v is a unit length vector-- is less than or equal to the sup over all those norms, which is just the operator norm of S; and likewise for the operator norm of T. So I've proven that for all unit length v, the norm of S plus T applied to v is less than or equal to this number here. And therefore, the supremum over all such numbers, as v ranges over all unit length vectors-- which is the least upper bound of that set-- must sit below this number here, which implies the triangle inequality. OK?
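A compact written recap-- my own rendering of the argument just given-- of the three operator-norm axioms:

```latex
% Operator norm on B(V,W):  \|T\|_{op} = \sup_{\|v\|_V = 1} \|Tv\|_W
% 1. Definiteness:
\|T\|_{op} = 0 \iff \|Tv\|_W = 0 \text{ for all } \|v\|_V = 1 \iff T = 0.
% 2. Homogeneity:
\|\lambda T\|_{op} = \sup_{\|v\|=1} \|\lambda Tv\|
  = |\lambda| \sup_{\|v\|=1} \|Tv\| = |\lambda|\,\|T\|_{op}.
% 3. Triangle inequality: for \|v\| = 1,
\|(S+T)v\| \le \|Sv\| + \|Tv\| \le \|S\|_{op} + \|T\|_{op}
  \;\Longrightarrow\; \|S+T\|_{op} \le \|S\|_{op} + \|T\|_{op}.
```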
And therefore, the operator norm is a norm both in name and in actuality. So if you like, what we did a minute ago here-- coming back to over here, for this bounded linear operator from continuous functions to continuous functions-- what this tells you, first off, is that if f has unit length, then I've shown that Tf in L infinity norm is less than or equal to the L infinity norm of K. And therefore, I've shown that the operator norm of T, where T is defined over there, is less than or equal to the infinity norm of K. We were actually a little wasteful there. This is not an equality, actually. But I'll let you think about that when you have free time. OK, so we've talked about bounded linear operators from one norm space to another. What more can we say about this new space we formed from two norm spaces? When is this space complete, for example? What are sufficient conditions? And so, theorem-- again, V and W are norm spaces. If I assume W is a Banach space, then the space of bounded linear operators from V to W is a Banach space, whether or not V is a Banach space. OK, so what we're going to do is use that characterization we had earlier of when a norm space is a Banach space in terms of absolute summability. So suppose Tn is a sequence of bounded linear operators such that this constant C, which is the sum of the operator norms, exists as a real number. So let's suppose that we have an absolutely summable series of linear operators. And to show that this is a Banach space, we want to show that this series is summable. Then by that theorem we had earlier-- that if you have a norm space such that every absolutely summable series is summable, then it's a Banach space-- we conclude the proof of the theorem.
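The estimate just recalled-- the infinity norm of Tf is at most the infinity norm of K times the infinity norm of f-- can be checked numerically on a discretized version of the operator. This is my own sketch, with arbitrarily chosen kernel and test functions:

```python
import math
import random

# Numerical check of ||Tf||_inf <= ||K||_inf * ||f||_inf for the
# discretized operator Tf(x) = (1/N) * sum_y K(x, y) f(y).  The kernel
# and test functions are arbitrary choices for illustration.
N = 200
grid = [i / N for i in range(N + 1)]

def K(x, y):
    return math.sin(3 * x) * math.cos(2 * y)

K_inf = max(abs(K(x, y)) for x in grid for y in grid)

def T(f):
    return [sum(K(x, y) * f(y) for y in grid[:-1]) / N for x in grid]

random.seed(1)
for _ in range(5):
    coeffs = [random.uniform(-1, 1) for _ in range(4)]
    f = lambda y, c=coeffs: sum(ck * math.cos(k * y) for k, ck in enumerate(c))
    f_inf = max(abs(f(y)) for y in grid)
    Tf_inf = max(abs(value) for value in T(f))
    assert Tf_inf <= K_inf * f_inf + 1e-9   # tiny slack for round-off
```

The discretized bound holds exactly for the same reason as the continuous one: each Riemann-sum term is bounded by the sup of K times the sup of f, and the weights 1/N sum to 1 over the unit interval.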
And how we're going to show this is summable is, again, kind of the same strategy we used to prove that the space of bounded continuous functions on a metric space is a Banach space. We're going to come up with a candidate for the limit, show that it's actually a bounded linear operator, and then show that the convergence is in the operator norm. OK, so let me just make a note of something real quick, which I meant to write. You know what? Let's write it up there, because that's where it belongs. But I didn't write it down. So let me just make a remark here. The operator norm is defined by a sup over all unit length v. But it automatically gives us a bound for all v. And I'm moving a little quickly-- maybe quicker than I should have. First off, one thing I should have said is that this sup is an actual finite number if I have a bounded linear operator, because, remember, T being a bounded linear operator implies there exists a constant such that for all v in V, the norm of Tv is less than or equal to a constant times the norm of v. So when v has unit length, any constant that satisfies this inequality bounds these numbers for all v with unit length. And in fact, this supremum is the smallest such C that I can put here. So that's the one comment I wanted to say-- that this is an actual, well-defined thing. OK, so I wanted to say that. And then also, from rescaling, we get a bound for all v in terms of the operator norm: if I take v over its length-- so this is a unit length vector in V-- and apply T to it and take its norm, that's always less than or equal to the sup over all norms of T applied to something of unit length, which is the operator norm. But T is a linear operator, so this scalar comes out here, and then comes out of the norm again by homogeneity. And therefore, I get that the norm of Tv is at most the operator norm of T times the norm of v.
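The rescaling computation just described, written out compactly (my own rendering):

```latex
% Rescaling v \neq 0 to the unit vector v / \|v\|_V:
\|Tv\|_W
  = \left\| \, \|v\|_V \; T\!\left( \frac{v}{\|v\|_V} \right) \right\|_W
  = \|v\|_V \left\| T\!\left( \frac{v}{\|v\|_V} \right) \right\|_W
  \le \|T\|_{op} \, \|v\|_V .
```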
So in short, the point of this remark was to say that-- and this is not really maybe the best way to say it-- if I think of T acting on v as the product of T and v, then this says the norm of the product of T and v is less than or equal to the product of the norms of T and v. Sorry if I went a little quick there. Maybe you stopped and were wondering, why is that even a real thing? Why is that an actual number? What am I going on about? Maybe I'm just a little overexcited about teaching functional analysis, because this is kind of your first adult analysis class, I will say. OK, so back to the proof at hand. We have this series of absolutely summable bounded linear operators. So the sum of the norms is convergent, meaning this series sum is finite. And we want to show that the series of the T sub n's is summable, so that it has a limit in the space of bounded linear operators. So we're going to come up with a candidate. Let v be in V. If I look at the sum of the norms of T sub n of v-- so let me be a little more careful. Let m be a natural number. Then the sum from n equals 1 to m of the norm of T sub n of v, this is less than or equal to the sum from n equals 1 to m of the operator norm of T sub n times the norm of v. And the norm of v, that's just a number. It comes outside the sum. And the sum from n equals 1 to m is bounded by the sum from n equals 1 to infinity of these non-negative numbers. So this is less than or equal to the norm of v times the sum of the operator norms of the T sub n's, which equals C times the norm of v. So for all m, I've shown that this thing is bounded by C times the norm of v. And therefore, the partial sums corresponding to this series of non-negative real numbers are bounded, which implies that the series converges-- which I'll write: the sum of the norms of T sub n of v converges.
Now, think of T sub n of v-- so T sub n of v, this is an element in W for each n. So I've shown that the series of T sub n of v is absolutely summable in W. These are elements of W, and the sum of their norms is a convergent series of real numbers. Now, since W is a Banach space, every absolutely summable series is summable-- therefore, this series is summable. We therefore define a map from V to W via T of v, defined to be the limit as m goes to infinity of the sum from n equals 1 to m of T sub n applied to v, which we've shown for every v is a convergent series in W. That was the point of everything that came before. For each v, this is a summable series. And we define its limit, which depends on v, as this map T going from V to W. So this is our candidate, which we'll show is a bounded linear operator. So let's show it's linear. T is linear-- why? For all lambda 1, lambda 2 in the space of scalars, R or C, and v1, v2 in V, we have that T of lambda 1 v1 plus lambda 2 v2-- this is, by definition, equal to the limit as m goes to infinity of the sum of T sub n of lambda 1 v1 plus lambda 2 v2. And now, each T sub n is a linear operator. So I can write this as the limit as m goes to infinity of lambda 1 times the sum of T sub n v1, plus lambda 2 times the sum of T sub n v2. And now, this first thing here converges to T of v1 as m goes to infinity. This second thing here converges to T of v2 as m goes to infinity. And the limit of the sum is the sum of the limits. So technically, I did not prove that in a norm space, if I have two sequences converging to v1 and v2, then the sum of the sequences converges to v1 plus v2. But it's the exact same proof as in R, all right? Just replace the absolute values with norms. So this should be believable. And therefore, T is a linear operator. Now, let's prove that it's a bounded linear operator. So now, we'll show that T is bounded. Let v be in V with norm equal to 1. Then the norm of Tv, this is equal to the norm of the limit as m goes to infinity of the sum from n equals 1 to m of T sub n of v.
And the norm of a limit equals the limit of the norms, just like in the case of R. So this is equal to the limit as m goes to infinity of the norm of the partial sum. Now, this is less than or equal to-- by the triangle inequality; the triangle inequality for two things implies a triangle inequality for m things by induction-- the limit of the sum of the norms of T sub n of v, which, since v has unit length, is less than or equal to the sum of the operator norms of T sub n. And this is precisely equal to the sum of the norms, which I called C, this constant C, which we assumed is finite, right? We assumed it was an absolutely summable series of bounded linear operators. So therefore, I've got that the norm of Tv is less than or equal to C for all unit length v-- which implies, by scaling arguments, that the norm of Tv is less than or equal to C times the norm of v for all v. I guess I didn't have to start with norm of v equals 1; that would have brought a norm of v here, and then that would just be C times the norm of v instead of doing this separate part. So change that in your notes. But I'm up against the clock here, so I'm not going to redo this on the board. OK, so T is an actual bounded linear operator. Now, let's show that the sum of these operators converges to T in the operator norm. So now, we claim that the sum from n equals 1 to m of T sub n converges to T as m goes to infinity in the space of bounded linear operators, meaning in the operator norm. So I think, for this, this is the reason why I maybe accidentally wrote the norm of v equals 1 there. So let v be in V with norm of v equals 1. Then consider the norm of T of v minus the sum from n equals 1 to m of T sub n of v. Now, T of v is equal to the whole sum, n equals 1 to infinity. So I'll write this as the limit as m-prime goes to infinity of the sum from n equals 1 to m-prime of T sub n of v, minus the sum from n equals 1 to m of T sub n of v. And this equals the limit as m-prime goes to infinity of the norm of the sum from n equals m plus 1 to m-prime of T sub n of v.
And now-- so this is, in fact, equal to the limit of the norm. And then I use the triangle inequality to bring the norm inside: this is less than or equal to the limit as m-prime goes to infinity of the sum from n equals m plus 1 to m-prime of the norm of T sub n of v. And now, this is less than or equal to the limit as m-prime goes to infinity of the sum from n equals m plus 1 to m-prime of the operator norm of T sub n, because v has unit length. So T sub n applied to v, in norm, is bounded by the operator norm of T sub n. And this equals the sum from n equals m plus 1 to infinity of the operator norm of T sub n. Now, what we know-- so this is just a series involving real numbers. And we know if the series converges, then the tails have to go to 0. So this goes to 0 as m goes to infinity, right? Or I shouldn't do that step just yet. Sorry, I'm making mistakes. I'm up against the clock that I see in the back. So I started off with this quantity here and ended up bounding it by this thing uniformly in v. So that implies that the operator norm of T minus the sum from n equals 1 to m of T sub n is less than or equal to the sum from n equals m plus 1 to infinity of the operator norm of T sub n. And this last thing goes to 0 as m goes to infinity, because it's the tail of a convergent series of non-negative terms. And therefore, the operator norm of T minus this partial sum goes to 0 as m goes to infinity, and therefore the partial sums converge to T. So you see, it had the same basic format that the last argument did. You found a candidate. You showed it's in the space. And then you showed convergence in the norm of the space. So let me finish with a definition. If V is a norm space, then we denote by V-prime the space of bounded linear operators from V to the space of scalars. This is referred to as the dual space of V.
And since the space of scalars is always R or C, both of which are complete-- I mean, they're the simplest examples of Banach spaces-- by the theorem we just proved, the dual space is always a Banach space. And let me just write here a simple example, which will be in the exercises: in a sense, for all p greater than or equal to 1 and strictly less than infinity, I can identify the dual space of little lp as little lp-prime, where p-prime-- this is now just a number bigger than 1-- and p satisfy the relation 1 over p plus 1 over p-prime equals 1. So in particular, the dual of l1 is l infinity. The dual of l2 is l2. This is very special about l2. But if I take the dual of l infinity-- p equals infinity, so I would get p-prime equals 1-- this, in fact, does not equal little l1. This is something of a headache that manifests itself for the big Lp spaces as well. And life would be a lot easier if this were the case, that the dual of l infinity was l1. But unfortunately, it's not. And that causes a headache. And little l2, this is the only lp that has this property, that its dual is given by little l2. All right, and in the exercises, I'll discuss precisely in what way you can identify the dual with this little lp-prime in this way. All right, we'll stop there.
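The identification of the dual of little lp with little lp-prime rests on Hölder's inequality, which makes the pairing x maps to the sum of x sub n times y sub n a bounded functional. A numerical sketch on finite sequences-- my own, not from the lecture:

```python
import random

# The pairing behind the duality (l^p)' ~ l^{p'} sends y to the
# functional x -> sum_n x_n y_n, and Hoelder's inequality
# |sum x_n y_n| <= ||x||_p * ||y||_{p'} makes that functional bounded.
# Checked here on random finite sequences.
def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p = 3.0
p_dual = p / (p - 1)           # from 1/p + 1/p' = 1

random.seed(2)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(20)]
    y = [random.uniform(-1, 1) for _ in range(20)]
    pairing = sum(a * b for a, b in zip(x, y))
    assert abs(pairing) <= p_norm(x, p) * p_norm(y, p_dual) + 1e-12
```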
Deep_Learning_for_Computer_Vision
Lecture_11_Training_Neural_Networks_II.txt
All right, welcome back to lecture 11. Today we're going to continue our discussion of all the nitty-gritty tips and tricks you need to train neural networks. I apologize that I couldn't come up with a better title for this lecture or the last one; they end up being a bit of a potpourri of a lot of little things that I think you need to know about training neural networks, but that I had a hard time putting under a better theme or title. To recap: last time we focused on the one-time setup choices about the architecture and so on that you make before you start training. We talked about activation functions, and had a lot to say about the different options, but at the end we decided to just stick with ReLU. We talked about data preprocessing, and finally explained the mystery behind those mean-subtraction lines that have been appearing in your homework assignments so far. We talked about weight initialization, and saw how we can use the Xavier or Kaiming initialization rules to force our activations to have good distributions over many layers of a deep network-- a trade-off between weights that are too small, where activations collapse to zero, and too large, where they explode. We also talked about data augmentation, the technique by which we artificially multiply the size of our training set by applying random transformations to training examples before feeding them into the network. Data augmentation is a way to inject your own prior knowledge about the structure of your data into the training procedure, and you can imagine inventing different types of augmentation for different types of tasks. Then we saw the very general concept of regularization. So far in your homework assignments you've seen something like L2 regularization, where we add an explicit term to the loss function that, for example, penalizes the norm of the weight matrix. But last time we saw a much more general class of regularizers that are commonly used in neural networks, where in the forward pass we somehow inject some kind of noise to perturb the processing of the network, and then at test time we somehow marginalize out or average over that noise. As examples of this paradigm of regularization we talked about dropout, fractional pooling, DropConnect, stochastic depth, and crazier ones like cutout and mixup that are actually used in practice quite a bit. I realize we went a little fast last time, especially around regularization toward the end, so if anyone has lingering questions about any of those topics, this would be your chance to ask. OK, very good.
So that was the material from last time, and today we move on to some other interesting topics about the bits and pieces of training neural networks in practice. In particular, we'll talk about things you need to worry about during the process of training your model-- setting learning rate schedules and choosing hyperparameters, which I know has been a very frustrating procedure for some of you-- and then some points you might want to think about after you've successfully trained your model: questions about model ensembling and how to scale your model up to train on whole-data-center levels of compute.
The first of these topics is learning rate schedules. At this point we've seen many different optimization algorithms-- vanilla SGD, SGD plus momentum, AdaGrad, RMSProp, Adam-- and all of them have a hyperparameter called the learning rate. This learning rate is probably the most important hyperparameter you need to set for most deep learning models, and by now you've had the chance to use SGD on a variety of models and hopefully started to build some intuition about what happens when you set different learning rates with different optimizers. Here on the left is a bit of a cartoon picture of what you can expect as you vary the learning rate. In yellow: if you set the learning rate too high, things often explode immediately-- the loss escapes to infinity or you get NaNs, and it goes very wrong very fast. In blue: with a very low learning rate, nothing explodes and you do make progress, but learning proceeds very, very slowly, and it might take a long while for the loss to drop to low values. In green: a learning rate that is high, but not so high that you explode, might converge faster than the blue one, but to a higher final loss. And what we'd really like is something like the red curve here: some sort of ideal learning rate.
The red curve makes quick progress toward regions of low loss without exploding to infinity, and it trains reasonably quickly. Obviously, if we can, we'd prefer to choose that red learning rate. But if we can't find that one perfect learning rate, how are we supposed to trade off between these seemingly suboptimal choices? That turns out to be a bit of a trick question, because we don't actually have to make one choice; in fact, it's very common to choose all of them. The basic idea is to start with a relatively high learning rate-- something like the green curve-- so that optimization makes very quick progress toward areas of low loss in the first iterations of training, and then, once that curve starts to plateau, reduce the learning rate and continue training at lower rates, like the blue curve. This hopefully gets us the best of both worlds: quick progress at the beginning and convergence to very low loss values at the end of training. So far that's vague-- start high, end low-- but what does it concretely look like? These mechanisms for changing the learning rate over the course of training are called learning rate schedules, and several different forms are commonly used when training deep neural networks.
Perhaps the most commonly used is the so-called step schedule. Here we begin training at some relatively high learning rate, and at certain chosen points during optimization we suddenly jump to a brand-new, lower rate. Residual networks are famous for using this kind of step schedule: start with learning rate 0.1, train for 30 epochs, then suddenly drop the learning rate to 0.01 and continue for another 30 epochs, then drop it again at 60 epochs and again at 90 epochs-- basically, after every 30 epochs of training, the learning rate drops by a factor of 10. If you look at a training curve for one of these step decay schedules, shown here on the left, you get a very characteristic shape of the loss as a function of time. During the first phase-- the first 30 epochs at the relatively high learning rate-- we make very quick progress: the loss starts from its high initial value and drops in a quick, roughly exponential way toward lower values. But after about 30 epochs that quick progress levels off, and at the moment we decay the learning rate by a factor of 10, a new exponential pattern begins: the loss decays quickly again and then plateaus, and the same thing repeats when we decay again at 60 and at 90 epochs. That's the very characteristic shape of learning curves you'll see for models trained with a step schedule.
One problem with the step schedule is that it introduces a lot of new hyperparameters into the training of our model. Now, in addition to choosing the regularization and the initial learning rate as in all previous models, we also need to choose at which iterations to decay the learning rate and what the new learning rates should be at those iterations, and properly tuning one of these step decay schedules can take a fair amount of trial and error. What people usually do in practice is look at the learning curves: let the model train at the high learning rate for quite a long time, and get a sense of where it tends to plateau. When papers say they used a heuristic of training until the loss or validation accuracy plateaus and then decaying the learning rate, that usually means some kind of heuristically chosen step decay schedule. But as you can imagine, if you're starting on a brand-new problem and don't have a lot of time to experiment with decay schedules, the step schedule can be tricky precisely because it introduces so many new things into the model that you need to tune.
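The ResNet-style step schedule described above is simple enough to write down directly. Here is a minimal sketch as a pure function of the epoch number; the specific values (0.1, a factor of 10, every 30 epochs) are the ones from the lecture, and in PyTorch the equivalent built-in is `torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)`.

```python
def step_lr(epoch, base_lr=0.1, decay_factor=0.1, step_every=30):
    """Step schedule: cut the learning rate by decay_factor every
    step_every epochs, e.g. 0.1 -> 0.01 -> 0.001 -> 1e-4."""
    return base_lr * (decay_factor ** (epoch // step_every))
```

At the top of each epoch you would write this value back into the optimizer, for example by setting `lr` on each of the optimizer's param groups.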
To overcome some of those shortcomings of the step decay schedule, there's another learning rate schedule that has become trendy in the past couple of years: the so-called cosine learning rate decay schedule. Here, rather than choosing particular iterations at which to decay the learning rate, we write down a formula ahead of time that tells us what the learning rate will be at every epoch as a function of the epoch or iteration number; all we choose is a functional form-- the shape of the curve along which the learning rate decays. One shape that's become very popular is this half-wave cosine schedule: you can see from the plot of learning rate versus time on the right that it starts at some high initial value, and the decay follows one half period of a cosine wave, so that toward the end of training the learning rate has decayed all the way to zero. The cosine schedule is very appealing because it has many fewer hyperparameters than the step decay schedule-- in fact only two: the initial learning rate, the alpha-zero in the equation, and the number of epochs we're going to train for, the capital T. What's particularly appealing is that it doesn't introduce any new hyperparameters at all, because whenever we train a neural network we already need to choose some initial learning rate and some number of training iterations; the cosine schedule just gives additional meaning to hyperparameters we were having to choose anyway. That tends to make cosine schedules a lot easier to tune than step decay schedules, and the general rule of thumb with them is that training longer tends to work better. So in practice the only thing you really need to tune is the initial learning rate, and then come to grips with how long you're willing to wait for your model to train. Those are some reasons why the cosine decay schedule has become reasonably popular in the last couple of years, and I've put citations on the slide for some reasonably high-profile papers from the last year or two that have used it.
But the cosine shape is just one of many you might imagine for decaying the learning rate over time. Another decay schedule people sometimes use is a simple linear decay: again we start with some initial learning rate and decay it to zero over the course of training, but rather than following the cosine curve, we simply decay the learning rate linearly over time, and that seems to work well for many problems. I should point out that I don't think there have been very good studies comparing these schedules head-to-head, so I can't tell you concretely when cosine will work better or linear will work better. What most people do in practice is build on some prior work and adopt whatever type of schedule happened to be used in the work they're building upon, which means that different areas of deep learning tend to end up using different types of learning rate schedules.
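Both the half-wave cosine schedule and the linear schedule above are closed-form functions of the epoch number t, with just the initial rate alpha0 and the training length T as hyperparameters. A sketch (the default values of alpha0 and T here are arbitrary, not recommendations):

```python
import math

def cosine_lr(t, alpha0=0.1, T=100):
    """Half-wave cosine decay: alpha_t = 0.5 * alpha0 * (1 + cos(pi * t / T)).
    Starts at alpha0 at t = 0 and decays smoothly to 0 at t = T."""
    return 0.5 * alpha0 * (1 + math.cos(math.pi * t / T))

def linear_lr(t, alpha0=0.1, T=100):
    """Linear decay from alpha0 at t = 0 down to 0 at t = T."""
    return alpha0 * (1 - t / T)
```

PyTorch ships a built-in cosine variant as `torch.optim.lr_scheduler.CosineAnnealingLR`.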
But it's not really clear to me that that's because one schedule is intrinsically better for one area; I think it's often just that people want a fair comparison with whatever work came beforehand. With that in mind, if you look at the citations and the type of problem, you'll see that a lot of computer vision projects use the cosine decay schedule, whereas the linear decay schedule is often used instead for large-scale natural language processing models that are also trained with deep neural networks. And again, I think that's not something fundamental about vision versus natural language; it's more a function of how researchers in the different areas have proceeded along their respective paths. Another learning rate schedule you'll sometimes see is this inverse square root schedule, which decays the learning rate along a different functional form but again has the interpretation of starting high and ending low. I'm only including it here because it was used by one very high-profile paper in 2017; I've actually seen it used less than the linear and cosine decay schedules, and I think its potential pitfall is that the model spends very little time at the initial high learning rate-- the rate drops off very quickly from its high initial value and then spends most of training at low values, whereas with the linear or cosine schedules, which are a bit more popular, models tend to spend more time at those initial higher rates.
With all this talk about different learning rate schedules, I need to point out that probably the most common schedule of all is just the constant schedule, and this one surprisingly works quite well for a lot of problems: we simply set some initial learning rate and keep that same rate through the entire course of training. This is actually what I recommend people do in practice until they have some reason to do otherwise. Where I see people mess up a lot when starting new deep learning projects is fiddling with learning rate schedules too early in the process; changing schedules should typically come rather far along in developing your model and getting it to work. You can usually get things to work reasonably well with a constant learning rate, and the difference between a constant schedule and one of these more complicated schedules is usually not the difference between your model working and not working-- moving to a more complicated schedule might make things a couple percent better, which is important if you're pushing for the state of the art on some problem, but if your goal is just to get something working as quickly and with as little mess as possible, constant learning rates are actually a pretty good choice. One caveat: there is a bit of interplay between the learning rate schedule and the optimizer you choose. When using stochastic gradient descent with momentum, some kind of learning rate decay schedule is fairly important, but with one of the more complicated optimizers like RMSProp or Adam, you can get pretty far using just a constant learning rate.
Any questions about these learning rate schedules before we move on to some other topic? Yeah-- so the question is: sometimes you'll train for a long time, the loss will be going down, you'll be very happy because you think you've trained a good model, and then all of a sudden the loss goes up, things explode, and you become sad. I think it's really hard to say anything general about that case; there are a lot of reasons something like that can go wrong. One case where I've run into a similar problem is forgetting the optimizer zero_grad call that I warned you about multiple times a couple of lectures ago-- maybe that wasn't applicable to your case, but if you're accidentally accumulating gradients over many iterations, things will tend to work for a while, and then at some point the gradients explode and things go wrong. Depending on the type of problem you're working on, you might also see bad training dynamics: for some types of generative models, and especially for different types of reinforcement learning problems, you'll often see very troubling behavior in these learning curves. But for a standard, well-behaved classification problem, a loss blowing up after long periods of training usually makes me suspect some kind of bug: bad hyperparameters, a bug in the way data is being loaded, or training on corrupted data. I've actually seen that as a problem: users can upload images to Flickr or other photo-sharing sites and later decide to remove them, and many datasets are constructed not by distributing the actual JPEG files but by distributing links to Flickr images. So if you go and naively try to download all the links in the dataset, some of the images will have been removed by users, and you'll end up with some kind of default corrupted JPEG file-- maybe all zeros, with a non-trivial label attached-- and then training explodes when you train on a mini-batch that includes one of those corrupted image files. So sometimes data corruption in your training set can cause things to explode all of a sudden, but really there's no general answer; you need to dig into the specifics of your problem.
OK, another question-- yeah, so the question is about adaptive learning rates. AdaGrad, RMSProp, and Adam are examples of adaptive learning rate mechanisms, and I think things can still blow up and still go wrong with them; they tend to make training a bit more robust, but they definitely don't solve all your optimization problems. And another question: maybe I want to write some heuristic that looks at the loss curve and decides for itself when to drop the learning rate? I've seen people do this, but I recommend against it, because I think you can get yourself into trouble too easily. It's tempting to code up some clever solution that smartly chooses when to drop the learning rate, but there are a lot of corner cases that are difficult to account for. For example, if you look at these curves, they're actually very noisy: here I'm plotting a scatter plot where each dot is the loss at a particular iteration, and it's so noisy that to get any signal you need to take some kind of moving average over the training iterations. So you just end up with a bunch of meta-hyperparameters that go into those decay heuristics.
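The moving average just mentioned is nothing fancy. A sketch of trailing-window smoothing for a per-iteration loss curve (the window of 100 is just the ballpark from the discussion):

```python
def moving_average(losses, window=100):
    """Smooth a noisy per-iteration loss curve: each output point is the
    mean of the current loss and up to window - 1 of the preceding ones."""
    smoothed = []
    running = 0.0
    for i, x in enumerate(losses):
        running += x
        if i >= window:
            running -= losses[i - window]  # drop the value that left the window
        smoothed.append(running / min(i + 1, window))
    return smoothed
```

Plotting both the raw per-iteration scatter and this smoothed curve shows you the variance and the long-term trend at the same time.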
Again, I think you're just setting yourself up for trouble there, and you're better off looking at the loss curves yourself and making some expert determination. Yes, that's correct-- the dark band is actually a bunch of circles representing the loss at each iteration, but they're extremely noisy, so the individual training losses have a pretty huge variance between iterations. Whenever I plot these things, I like to plot both the loss at every iteration, to get a general sense of the variance, and a moving average-- usually over a window of something like a hundred or a thousand iterations-- which shows the longer-term trend. That gives you both the overall variance of training and the trend, and I think it's a very useful thing to do when plotting losses to help you debug. And you're absolutely right to point out that this is a fairly characteristic image you get when training with a cosine schedule: it has a very funny shape, and if you're used to seeing plots from step decay, the first time you train with a cosine schedule you get very surprised. This is something I've started doing recently, and it's always a little concerning when I see these weird-looking plots.
Another thing you should always be doing, which really helps you choose how long to train and also gives you a way to cope with that exploding-loss situation, is early stopping. The idea is that whenever you train neural networks, you want to look at really three curves: the training loss as a function of iteration, here on the left, which if things are healthy should decay in some roughly exponential way; and the accuracy on both the training set and the validation set, which you might check every epoch or so. Looking at these curves gives you another sense of the health of your network throughout the training process. What you typically do is set some maximum number of iterations or epochs, let training run for that long, and every epoch-- or every five or ten epochs-- check the training and validation accuracies and save the model parameters at that point to disk. After the model finishes training, you plot these curves and select the checkpoint where the model performed best on the validation set, and that's the checkpoint you should actually use when running the model in practice. If you follow a process like this, then even if the model happened to blow up late in training it's not such a big deal: you can just look at the curve and pull a checkpoint from before the model blew up. This is a really useful heuristic for training networks and selecting which checkpoint to use at the end of the day, and it's something I encourage people to do pretty much every time they train a neural network.
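The checkpoint-selection recipe above can be sketched as a small loop. Here `train_one_epoch` and `evaluate` are stand-ins for your actual training and validation code, not a real API; the point is the bookkeeping: snapshot periodically, then pick the snapshot with the best validation accuracy after training ends.

```python
def train_with_checkpointing(train_one_epoch, evaluate, max_epochs=100, every=5):
    """Train for max_epochs, saving a checkpoint every `every` epochs,
    then return (epoch, val_acc, snapshot) for the best validation accuracy."""
    checkpoints = []
    for epoch in range(max_epochs):
        snapshot = train_one_epoch(epoch)  # e.g. a copy of the model's state_dict
        if epoch % every == 0 or epoch == max_epochs - 1:
            val_acc = evaluate(snapshot)
            checkpoints.append((epoch, val_acc, snapshot))
    # "Early stopping" as checkpoint selection: even if the loss blew up
    # late in training, we just take the best checkpoint from before that.
    return max(checkpoints, key=lambda c: c[1])
```

With a real model, `snapshot` would be something like `copy.deepcopy(model.state_dict())` written to disk rather than kept in memory.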
So that leads into a larger discussion of how we're supposed to go about choosing hyperparameters for our neural networks. One thing you'll commonly see people do is grid search. Here we select some set of hyperparameters we care to tune, and for each of them we select a set of values to evaluate-- and for many hyperparameters you should be searching in a log-linear space rather than a linear one. For example, we might evaluate four learning rates spaced log-linearly and four regularization strengths, again spaced log-linearly. Four values of weight decay times four values of learning rate gives 16 combinations, and if you have enough GPUs, you just try them all and see which works best. That's actually a fairly reasonable strategy that people sometimes do in practice, but the problem is that it requires a number of runs that is exponential in the number of hyperparameters you want to tune, so it becomes infeasible very quickly.
So another strategy people sometimes employ instead is random search rather than grid search. The procedure is much the same: we select some set of hyperparameters to tune, but now, rather than selecting specific values to try for each one, we select ranges over which to search, and each time we train the model we draw a random value for each hyperparameter from within its range. Again, for a learning rate or a weight decay you'll often want to search in log-linear space, whereas for other types of hyperparameters-- say, the network width or a dropout probability-- you'll often see linear rather than log-linear spacing; it kind of depends on what the hyperparameter is.
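One trial of random search then looks like this. Sampling the exponent uniformly gives the log-linear spacing for learning rate and weight decay, while dropout is drawn on a linear scale; the ranges here are illustrative assumptions, not recommendations.

```python
import random

def sample_hyperparams():
    """Draw one random configuration: log-uniform for learning rate and
    weight decay (uniform in the exponent), linear for dropout."""
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),  # 1e-4 .. 1e-1
        "weight_decay": 10 ** random.uniform(-5, -2),   # 1e-5 .. 1e-2
        "dropout": random.uniform(0.0, 0.5),            # linear scale
    }

def random_search(train_and_eval, num_trials=16):
    """Train num_trials models with random hyperparameters and return
    the configuration that achieved the best validation accuracy."""
    trials = []
    for _ in range(num_trials):
        hp = sample_hyperparams()
        trials.append((train_and_eval(hp), hp))
    return max(trials, key=lambda t: t[0])[1]
```

Here `train_and_eval` is a placeholder for a function that trains a model with the given hyperparameters and returns its validation accuracy.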
Now the idea is that with grid search you always evaluate exactly the same grid of values, shown on the left. In this cartoon picture the hyperparameter on the horizontal axis ends up being very important for getting good performance: the distribution drawn along the top of the grid is the marginal distribution of model performance as a function of that hyperparameter's value, and you can see there's a small sweet spot in the middle that gives very high performance, while going far to the left gives low performance. In contrast, the vertical hyperparameter is maybe not so important; the orange marginal distribution drawn on the left-hand side of the plot shows that no matter which value of it we choose, things perform about the same.

The problem is that with grid search we don't gain as much information as we could from each trial we train, because we try the same values of the important hyperparameter many times, repeated for each value of the unimportant one. That means we only get three distinct samples of the important hyperparameter's distribution in this cartoon example, which may not be enough information to properly tune it. In contrast, with random search on the right, every trial has random values for both hyperparameters, so when we plot the marginal distributions of model performance as a function of each hyperparameter, we end up with more distinct samples for each hyperparameter individually, because the points don't align perfectly vertically or horizontally. So in a situation where one hyperparameter is important and the other is unimportant, the repeated samples of the unimportant hyperparameter effectively become extra samples of the important one: in the right-hand plot we get many samples along that important curve, which lets us sample more points and hopefully find a better value overall. So if you're in a setting where you need to search over hyperparameters like this, using random search is usually much better than using grid search.

That slide showed a cartoon of an idealized situation; here's an example of an actual random search that I ran for a project at Facebook, where we could use a lot of GPUs. In these plots we were evaluating the learning rate and the regularization strength for three different categories of models: a feed-forward model, a residual model, and a different sort of architecture called DARTS (the details of which aren't important here). Each point on these plots is a different model that we trained, and the plot is quite dense because we trained a lot of models; the color of each point gives the overall performance of that model at the end of training. By looking at plots like this you can get some sense of the interactions between hyperparameters. Here the x-axis is the learning rate and the y-axis is the regularization strength, both on a log scale, and you can see there is some non-trivial interaction between the two: there's a kind of sweet spot, a sweet river through the middle, of good learning rates for each regularization
strength, and vice versa. Yeah, was there a question? The question is: can you use gradient descent to learn the hyperparameters? Yes, you can, and I think that's a really cool area of research that I find really creative and interesting. There are many approaches to it that are slightly beyond the scope of this lecture, but to give you a flavor: I think it's a beautiful situation where the software systems we build to solve our problems end up giving rise to new mathematical solutions as well. What I mean is that with something like PyTorch it's very easy to backpropagate through arbitrary Python code, and your optimization loop is itself just another bit of Python code. So in principle you can write code with an inner loop and an outer loop: in the inner loop you run optimization over your model parameters, then you backpropagate through that entire inner loop to compute gradients of the final model performance with respect to the initial values of the hyperparameters, and in the outer loop you use gradient descent to learn the hyperparameters. There are a bunch of really cool papers in this direction that I'd love to find some way to sneak into one of these lectures. One paper I love uses a similar idea to learn not only the learning rates but also the training data: if you can backpropagate through the learning process, you can actually learn the optimal training set that causes the trained model to work well on the validation set. So this is very crazy and very fun to read. But it's not commonly used in practice: these methods are super computationally expensive, and at this point in time people only employ them on relatively small toy problems to show off that they can. For very large-scale problems, these automatic methods of learning hyperparameters via gradient descent are really not practical to scale up.

Yeah? Yes, the color scale is error rate, but it looks like the color bar on the right doesn't quite match up with the actual colors in the plot, so I apologize for that. The red dot marks the values we ended up choosing for the paper, so that should be the best one: dark purple means the model is working well, and moving toward yellow means things are not working well. There's probably some transparency issue between the color bar and the plot; maybe I need to talk to my co-author about that.

So that's a good strategy for choosing hyperparameters if you happen to work at Facebook or Google or another tech company with access to a lot of GPU resources, but if that's not the case, you need to be a little smarter about how you choose hyperparameters. You shouldn't despair, though: in my experience it's usually possible to choose pretty good hyperparameters for your problem without a massive hyperparameter search. Here is the procedure I usually follow when I don't have access to a very large GPU cluster.

Step one: implement your model; you need to write some code first. Once you're done with that, you should check your initial loss. As we've discussed multiple times, from the structure of the loss function you can usually compute analytically what initial loss to expect at random initialization; for something like the cross-entropy loss it should be minus log of one over the number of classes, that is, log of the number of classes. So your first step after implementing the model is to turn off weight decay and just check the loss at initialization. This takes only one iteration, so it's very cheap and fast to do, and if that loss is wrong, you know you have a bug and should go back and fix it.

Step two is to try to overfit a very small sample of your training data. The idea is to take something like one to maybe five or ten minibatches of data, a very tiny sample of your training set, and try to overfit it to one hundred percent accuracy. When you're doing this you always want to turn off regularization, since your goal is just to overfit the training data. Here you fiddle with the precise architecture of your model, maybe the number of layers and the size of each layer, play with the learning rate, and play with the method of weight initialization. When you play around with these settings, you should usually be able to get whatever model you're working on to one hundred percent accuracy on this tiny sample in a very small amount of time; your goal is to be able to overfit within something like five minutes of training. Because the sample is so small, training times are very short, which lets you interactively play around with different settings to find ones that overfit quickly. The point of this step is to make sure you don't have any bugs in your optimization loop: if you can't overfit ten batches of data, you have no hope of actually fitting the training set for real. It's surprising how often you catch bugs in your optimization setup or in your model architecture choices just at this stage.
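The step-one check is cheap enough to write down in full. Here `num_classes` is the only input, and the 0.1 tolerance in the commented usage is a hypothetical choice; `loss_fn`, `model`, and `first_batch` are placeholders for whatever your code defines.

```python
import math

def expected_initial_loss(num_classes):
    # With random init the model should output roughly uniform probabilities,
    # so the cross-entropy loss should start near -log(1/C) = log(C).
    return math.log(num_classes)

# e.g. a 10-way classifier should start with a loss near log(10) ~ 2.303
target = expected_initial_loss(10)

# Hypothetical usage: run one forward pass with weight decay turned off,
# then compare the observed loss against the analytic value:
#   observed = loss_fn(model(first_batch), labels).item()
#   assert abs(observed - target) < 0.1, "initial loss is off -- suspect a bug"
```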
Again, these training loops run very fast, so you can do this interactively on a single GPU in most cases. Note that in step two we don't care about generalization to the validation set at all; we're just trying to debug the optimization process on a small training set.

Once we've succeeded at step two and can overfit a small amount of data, step three is to take the architecture from the previous step, use all of the training data, and find a learning rate that makes the loss start to go down quickly on the whole training set. From step two you know your code is correct, your optimization loop works, and you have a model architecture you believe is sufficient for modeling your data; in step three you copy all of those architectural settings over and fiddle only with the learning rate on the entire training set; the learning rate is the only parameter you change. In changing it, your goal is to make the loss drop significantly within, say, the first hundred iterations of training. For most problems that are set up properly, you'll see a very high initial loss at the beginning and then some kind of exponential decrease within the first hundred to thousand iterations; that's empirically true across a wide variety of problems and neural network architectures. At this stage, because you only care about the first hundred to thousand iterations, you can typically work interactively: choose a learning rate, look at the learning curve, and then, based on what the plot looks like, go back and choose new learning rates.
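The step-two overfitting check has the same shape regardless of framework. Here is a framework-free sketch where a tiny logistic-regression "model" stands in for your real network and ten hand-made points stand in for a few minibatches; all names and numbers are illustrative, not a real training setup.

```python
import math

# Ten tiny, clearly separable "training samples" standing in for 1-10 minibatches.
data = [([0.1, 0.1], 0), ([0.2, 0.3], 0), ([0.3, 0.2], 0), ([0.1, 0.4], 0),
        ([0.4, 0.1], 0), ([0.9, 0.9], 1), ([0.8, 0.7], 1), ([0.7, 0.8], 1),
        ([0.6, 0.9], 1), ([0.9, 0.6], 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0          # regularization is OFF for this check
for _ in range(500):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
        g = p - y                        # d(cross-entropy)/dz
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

correct = sum((w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1) for x, y in data)
train_acc = correct / len(data)
# If even a model this simple can't hit 100% on ten points, the loop is buggy.
```

The same idea applies to a real network: a few minibatches, regularization off, and a hard requirement of 100% training accuracy before moving on.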
Work interactively like that until you find a learning-rate setting that causes the loss to start converging within the first hundred or so iterations.

At this point we're in relatively good shape: we have an architecture that we know has the capacity to model our data, because it can overfit a few training samples, and we know our optimization is in a pretty good state, because the loss starts going down at the beginning of training. So step four is to set up a very coarse hyperparameter grid: a very small number of models, maybe two values of the learning rate and two values of the regularization strength, a tiny grid somewhere in the neighborhood of all the choices you have made up to this point. Hopefully, because of the previous steps, all of the hyperparameter choices within this small grid will be somewhat reasonable, with no catastrophically bad models. With this coarse grid, you train on the full training set for something like one to five or one to ten epochs, which should be enough to give you some sense of the generalization performance of your model beyond the training set and let you see how it performs on the validation set. This is something you probably cannot do interactively, but at this point you have enough familiarity with your model to trust that all of these choices should work, so you set up this tiny grid, depending on how many models you can afford to train in parallel, step back, and come back after a coffee break, a night of sleep, or a week's vacation, as the case may be, depending on how long your model takes to train.
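The step-three learning-rate probe can be scripted as a crude search. In this sketch the real training run is replaced by gradient descent on a toy quadratic f(x) = x², just to show the selection logic; the candidate list and the "did the loss drop, and stay finite" criterion are the hypothetical parts.

```python
import math

def loss_after(lr, iters, start=5.0):
    # Stand-in for "train for `iters` iterations at this learning rate":
    # gradient descent on f(x) = x^2, which stops converging when lr is too big.
    x = start
    for _ in range(iters):
        x -= lr * 2.0 * x
    return x * x

initial_loss = 5.0 ** 2
candidates = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
probe = {lr: loss_after(lr, iters=100) for lr in candidates}

# Keep learning rates whose loss actually dropped and stayed finite,
# then take the largest of those as the working learning rate.
good = [lr for lr in candidates
        if math.isfinite(probe[lr]) and probe[lr] < initial_loss]
best_lr = max(good)
```

In practice you would replace `loss_after` with a real 100-iteration training run and read the curves by eye, but the decision rule is the same.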
Then come back and see how well those models did. After step four you enter an iterative loop: look at the results of your previous tiny hyperparameter grid, adjust the grid, and go back and train for longer. At this point you're in an interactive process where each iteration might take hours to days, depending on exactly how long your model takes to train. All along the way you're looking at the learning curves and using them to make determinations about what changes to make to your hyperparameter grid going forward.

When I say look at learning curves, I mean the plots I mentioned earlier: you should always be plotting the training loss on the left, and as I said, I like to plot both the raw per-iteration losses as a scatter plot and a moving average of the losses as a line plot; on the right I always like to plot the training and validation accuracies, checked every epoch. By looking at these learning curves you can usually gain some sense of what could be going right or wrong with your model, so here are a couple of cartoon pictures of learning-curve shapes you should become familiar with.

One situation is when your loss curve is very flat at the beginning and then makes a sharp initial drop. If you see a shape like this, it probably means your initialization was bad, because the loss was not making progress at the beginning of training; you should adjust your initialization and try again. Another situation is when the loss makes good progress and then plateaus after a while. When you see a curve like this, you should consider some kind of learning-rate decay: it's possible your learning rate was too high, and around the time the loss plateaus is maybe the point where you should introduce learning-rate decay and lower the learning rate. Conversely, if you have not followed my advice and have introduced learning-rate decay too early in the model-development process, you might see a curve where the model was making good progress, then at the point where the step decay on the learning rate kicks in, the loss makes a small drop, and after that it is completely plateaued. Usually this means you decayed too early: if you look at the shape of the curve leading up to the decay, it looks like the loss would have continued going down at the initial learning rate had you not applied it. So this is a shape to watch out for.

Those were shapes of the moving average of the training loss, but you can also gain intuition by looking at the plots of training and validation accuracy over time. One characteristic shape is that both curves make some exponential increase at the beginning and then slowly keep increasing over time, with some non-trivial but healthy gap between train and val, and both continuing to go up. When you see a curve like this, things are going well and you just need to train for longer: the curves are still going up, so keep doing whatever you're doing and your model will continue getting better. This is the curve you like to see in these train/val plots. A plot where performance on the training set keeps increasing over time while performance on the validation set plateaus or even decreases, on the other hand, means something very bad is going on: this is the characteristic plot of overfitting. It's very common and healthy to have some gap between train and val, but when you see a very large and increasing gap, that is a sign of overfitting, and it means you need to increase your regularization strength, collect more training data, or, in some rare cases, decrease the size or capacity of your model. In contrast, if you see a plot where training and validation performance are almost exactly the same, you might think this is a good thing because there's no overfitting, but it's usually a bad sign: exactly the same performance on the training and validation sets usually means you are underfitting your data, and you'd have been better off increasing the capacity of the model or decreasing the regularization; that tends to give better overall performance on the validation set even if it results in a larger gap. This one is slightly counterintuitive: it's actually an unhealthy learning curve, and it usually means you are underfitting the training data.

Any questions about these learning curves? Yeah, the question is whether you can also tell by looking at the absolute accuracies, if they're particularly low. Yes, that's definitely also a good sign for telling whether you're underfitting the data.
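These rules of thumb can be boiled down into a crude helper over the accuracy histories. The thresholds here (a 0.15 gap, a 0.02 gap) are hypothetical numbers chosen for illustration, not anything principled.

```python
def diagnose(train_acc, val_acc):
    """Crude read of the last few epochs of train/val accuracy histories."""
    gap = train_acc[-1] - val_acc[-1]
    val_trend = val_acc[-1] - val_acc[-3]        # is validation still improving?
    if gap > 0.15 and val_trend <= 0:
        return "overfitting: regularize more, get more data, or shrink the model"
    if abs(gap) < 0.02:
        return "underfitting: grow the model or regularize less"
    if val_trend > 0:
        return "healthy: just train longer"
    return "plateaued: consider learning-rate decay"

# Train keeps climbing while val decays -> the classic overfitting picture.
msg = diagnose([0.70, 0.80, 0.90], [0.62, 0.61, 0.60])
```

Real diagnosis is done by eye on the plots, of course; this just encodes the cartoon shapes above.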
But that requires you to have some prior knowledge of what a reasonable accuracy on the dataset is, which you might not have coming into a new problem. As for why a gap is healthy: this is kind of empirical. What we want is the model that does best on unseen data, and it happens, though I don't think there's a strong theoretical reason for it, that when you find the model achieving the best accuracy on unseen data, there typically is some non-trivial gap between performance on the training set and performance on the validation set. That's an empirical fact; there isn't really great theory I can point you to for that observation.

So this brings us to the final step in this hyperparameter-tuning procedure: you look at these loss curves, and your intuitions about how things are going give you some sense of how to adjust your grids; then you go to step five and loop until you run into your paper-submission deadline and have no more time to train models. The way I like to think about tuning these things is that you're a kind of DJ, turning all these little knobs for your learning rate, your regularization strength, your dropout, and your model architecture, and hopefully, if you tune all these knobs in just the right way, you end up making beautiful music in the form of a model that works really well on unseen data. In order to do that, it's often very helpful to set up some kind of cross-validation command center, where you train large numbers of models in parallel, look at their learning curves side by side, and use that to get some idea of which sets of hyperparameters tend to be working well and which tend not to be. Back in the day, before things like TensorBoard, this was a pain in the butt: we had to write custom web code to visualize these learning curves, learn JavaScript plotting frameworks, or set up custom Jupyter notebooks, and you could end up spending a lot of time just on the infrastructure for looking at the results of your experiments. Now, with things like TensorBoard, a lot of that work has been done for you, so it's usually much more seamless to set up these cross-validation command centers, as it were.

There are some other heuristics you can look at that can help you diagnose things going wrong in training. For example, one thing people sometimes do is look at the ratio between the magnitudes of the weights of the network and the magnitudes of the updates being applied to those same weights; the gradient times the learning rate gives the overall delta applied to each weight value on each iteration. Generally speaking, you want the ratio between the absolute value of the update and the absolute value of the weight to be not too large: if you're making updates that are of the same order of magnitude as the weight values themselves, or larger, that's usually a sign that something bad is happening. So looking at these ratios between update magnitudes and weight magnitudes is a heuristic people sometimes use in practice, and looking at other statistics of the gradient magnitudes can also sometimes help you debug problems during training. So that covers monitoring training dynamics.
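The update-to-weight ratio heuristic is easy to compute from a flattened view of the parameters. In PyTorch you would pull these numbers out of each parameter's data and gradient tensors, but the arithmetic is just this; the example weights, gradients, and the "around 1e-3 is comfortable" figure are illustrative.

```python
def update_to_weight_ratio(weights, grads, lr):
    # Norm of the step (lr * grad) relative to the norm of the weights.
    w_norm = sum(w * w for w in weights) ** 0.5
    step_norm = sum((lr * g) ** 2 for g in grads) ** 0.5
    return step_norm / w_norm

# Hypothetical flattened weights/gradients for one layer.
ratio = update_to_weight_ratio([0.5, -0.3, 0.8], [0.02, -0.01, 0.03], lr=1e-2)
# Ratios approaching 1 (updates as large as the weights themselves) usually
# mean the learning rate is too high; something near 1e-3 is often quoted
# as a comfortable order of magnitude.
```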
Hopefully, if you follow this simple seven-step procedure I've outlined, you'll be able to train really good models even if you don't have access to a giant GPU cluster. The question then is: after you've successfully trained some models, what can you do next? Now things get interesting. One thing you often want to do is get a little better performance on your final test set, and it turns out there's a very simple heuristic that applies almost across the board, on basically whatever problem you're considering: you train some number of independent models, however many you can afford, and then rather than using one of them, you use all of them at test time. That means for each sample in your test set, you run it through each of your trained models to get each model's predictions, and then you average the predictions across all of the models. The exact mechanism of averaging depends on the problem you're trying to solve, but for something like image classification you can take an average of the probability distributions output by each model, because the average of probability distributions is still a probability distribution. Typically, when you ensemble a bunch of different models, you end up about one or two percent better on your final test set. This is pretty standard: more models is usually better, but no matter the model architecture, the task, the dataset, or the underlying CNN architecture, you typically get about one to two percent from ensembling some set of models together. So if you're really trying to squeeze out that last bit of juice, this is a very common trick that you'll see people use.
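For classification, the averaging step is just an element-wise mean of the per-model probability vectors. The three "model outputs" below are made-up softmax vectors standing in for real trained models.

```python
def ensemble_probs(per_model_probs):
    # Element-wise mean of probability vectors; the mean of probability
    # distributions is still a probability distribution.
    n = len(per_model_probs)
    k = len(per_model_probs[0])
    return [sum(p[i] for p in per_model_probs) / n for i in range(k)]

m1 = [0.7, 0.2, 0.1]          # hypothetical softmax outputs of 3 trained models
m2 = [0.5, 0.4, 0.1]
m3 = [0.6, 0.1, 0.3]
avg = ensemble_probs([m1, m2, m3])
pred = max(range(len(avg)), key=lambda i: avg[i])   # ensemble's predicted class
```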
One kind of cute idea is that rather than training multiple independent models, you can sometimes get away with saving multiple checkpoints of a single model during training and then averaging the predictions of those different checkpoints, and that can also give you some improved performance. One trick there is to train with a very bizarre learning-rate schedule that is actually periodic; this is not super mainstream, but it's crazy enough that I wanted to point it out. The learning rate starts high, goes low, goes high again, goes low again, and so on, and the checkpoints you collect during training to form your ensemble are the model weights at the very low point of each cycle of the schedule. That's a cute idea you might see people use sometimes. Another idea is to keep a running average of the model weights you see during training; this is called Polyak averaging, and it's actually used pretty commonly in some large-scale generative models. Remember that in batch normalization we keep a running exponential average of the means and variances of our features, and during testing we use those running means and standard deviations for normalization; it turns out you can do the same thing with the model weights. Rather than using the weights that result from any one iteration of gradient descent, you keep an exponentially weighted running average of the model weights seen during training and actually use that average at test time instead. This can have the effect of smoothing out some of the iteration-to-iteration variation in the model that happens during training.
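Polyak averaging is just a couple of lines run alongside the optimizer step. In this sketch the "SGD update" is faked with a constant nudge, and the decay of 0.9 is a made-up value for a three-step toy (values like 0.999 are more typical over real training runs).

```python
def ema_update(avg, weights, decay=0.999):
    # Exponential moving average of the weights, updated after each SGD step.
    return [decay * a + (1.0 - decay) * w for a, w in zip(avg, weights)]

weights = [1.0, -2.0]
avg = list(weights)                  # initialize the average at the raw weights
for _ in range(3):
    weights = [w + 0.1 for w in weights]      # stand-in for a noisy SGD update
    avg = ema_update(avg, weights, decay=0.9)

# `avg` trails the raw weights, smoothing iteration-to-iteration noise;
# at test time you would evaluate with `avg` instead of `weights`.
```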
If you go back and look at those loss plots, remember there was a lot of variation in the loss between individual iterations; applying this kind of averaging to the model weights themselves can help average out some of that noise between individual iterations of SGD.

So those are ways to squeeze out a little extra juice on whatever the original task was that you were trying to solve. But sometimes we want to use one trained model to help us solve a totally different task, and that is an extremely powerful tool that has become super mainstream in computer vision over the past several years: the problem of transfer learning. There's a kind of myth that goes around about training CNNs: you'll often see people say that you need very large training sets if you want to successfully use deep learning for your problem. I think this is actually false, and I'd like to bust this myth: if you utilize transfer learning, you can get away with using deep learning for a lot of problems even in cases where you do not have access to a very large training set. For this reason, transfer learning has become a critical part of pretty much all mainstream computer vision.

The basic recipe: step one, train a convolutional neural network on ImageNet or some other very large-scale image classification dataset, and make it work as well as you possibly can using all the tricks we've outlined. Step two is to realize that we don't actually care about ImageNet classification performance; instead we might care about classification performance on some other, smaller dataset, or on some other task entirely. So we take our trained network from ImageNet and remove the last fully connected layer. Recall that the last fully connected layer in something like AlexNet takes us from 4096-dimensional features to a 1000-dimensional vector of class scores, so that last layer ends up being tied to the identities of the categories the model was trained on. But we can simply throw that last layer away, delete it from the network, and use the 4096-dimensional vectors at the second-to-last layer as a general feature representation of our images: we freeze all the weights of the network and just use those extracted features as the feature vector that represents each image.

What people found out, starting around 2013 to 2014, was that this seemingly simple idea gives very good performance on many, many computer vision problems. For example, there was another dataset called Caltech 101, which unsurprisingly had 101 object categories but was overall a lot smaller in size than ImageNet. In the plot I'm showing, the x-axis is the number of training samples per category used on Caltech 101, and the y-axis is classification performance on that dataset. The red curve is a prior, pre-deep-learning method that was the state of the art on Caltech 101, with a feature-extraction pipeline designed particularly for that dataset. The blue and green curves show this very simple procedure of taking an AlexNet pretrained on ImageNet, using its second-to-last layer as a predefined feature vector, and then simply training a logistic regression or a support vector machine, simple linear models, on top of that fixed 4096-dimensional feature vector extracted from the pretrained model.
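Schematically, the recipe is "freeze the backbone, train only a linear layer on its features." Here is a framework-free sketch: `backbone` is a stand-in for a pretrained network with its last layer removed (in real code it would be, say, a pretrained AlexNet with the final fully connected layer deleted and all weights frozen), and the classifier is a perceptron-style linear model; every name and number is illustrative.

```python
def backbone(image):
    # Stand-in for a frozen pretrained network minus its last layer:
    # maps an input to a fixed feature vector and is never updated below.
    return [image[0] + image[1], image[0] - image[1], image[1], 1.0]

def train_linear_head(samples, epochs=50, lr=0.1):
    # Train ONLY the linear classifier on top of the frozen features.
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for image, label in samples:
            feats = backbone(image)                # extract; no backprop inside
            score = sum(wi * fi for wi, fi in zip(w, feats))
            err = (1 if score > 0 else 0) - label  # perceptron-style update
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# A tiny stand-in dataset: a handful of labeled "images" per class.
samples = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([0.9, 0.1], 1), ([0.1, 0.9], 0)]
w = train_linear_head(samples)
preds = [int(sum(wi * fi for wi, fi in zip(w, backbone(x))) > 0)
         for x, _ in samples]
```

Because only the small linear head is trained, a handful of labeled samples per class can be enough, which is exactly the Caltech 101 observation discussed next.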
models that work on top of this fixed 4096-dimensional feature vector extracted from our pre-trained model. What they found is that with this very simple procedure of training a linear model on top of these pre-extracted feature vectors, they were able to significantly outperform the state of the art on this dataset, and in particular they got non-trivial performance even using something like five to ten samples per class on the new dataset. This is actually very common: if you use an ImageNet pre-trained model to extract features, you tend to get reasonably good performance on many downstream tasks even when you don't have a very large training set. And it's definitely not particular to Caltech 101 — we saw something similar on a bird classification dataset at the time. There, DPD and POOF were existing methods tuned very specifically for the task of recognizing birds, and by simply training a logistic regression on top of pre-extracted AlexNet features, they were able to outperform those previous methods; and by incorporating the AlexNet features into the previous method, they got an even larger boost — simply swapping the AlexNet features in for whatever that previous method had been using in its learning process. This applies not just to Caltech 101 and birds but across a very large set of image classification problems: another paper benchmarked this idea of extracting features and training linear models on top of them on a whole suite of tasks — objects, scenes, birds, flowers, human attributes, and object attributes — and in all cases they outperformed the previous state of the art on those datasets. What's astounding is that each of those blue bars, the previous state of the art on one of those datasets, was typically a completely independent method tuned for that one particular dataset, and here they outperformed all of them using one simple procedure: extract features from a model pre-trained on ImageNet, then train linear models on top of those features for the downstream task. And this applies not only to image classification; features from a pre-trained network turn out to be useful for a wide variety of computer vision problems. In that same paper — I guess beating five or six state-of-the-art methods wasn't enough for them — they also benchmarked a set of image retrieval tasks. The details are not super important, but basically the setup is that you get, say, an image of some building at Oxford, and your task is, from that one picture, to retrieve other images of the same building from a database, maybe from a different viewpoint or time of day — a kind of search-by-image, like uploading an image to Google and searching for similar images. Here the idea is again to extract features from models pre-trained on ImageNet and then use a simple nearest-neighbor procedure to perform the retrieval, and again, simple nearest neighbors on top of these pre-trained feature vectors outperformed a large number of previous methods on a large set of image retrieval datasets. So this is probably the simplest example of transfer learning: simply extract feature vectors from the pre-trained network and plug them, out of the box, into either a linear model or a retrieval baseline.
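A minimal sketch of this feature-extraction recipe, with a frozen random network standing in for a real ImageNet-pretrained backbone (an assumption purely for illustration — in practice you would use e.g. a CNN's 4096-dim activations), and a logistic-regression head trained on the frozen features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a stand-in for a real pretrained network whose
# activations (e.g. a 4096-dim fc7 vector) we would normally use here.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # one forward pass through the frozen backbone; its weights never change
    return np.maximum(x @ W_frozen, 0.0)

# tiny synthetic downstream dataset; labels are made linearly separable in
# feature space so the example is self-contained
X = rng.normal(size=(40, 64))
feats = extract_features(X)            # extracted once, then reused
w_true = rng.normal(size=16)
y = (feats @ w_true > np.median(feats @ w_true)).astype(float)

# train ONLY a logistic-regression head on top of the frozen features
w, b = np.zeros(16), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid
    g = p - y                                    # gradient of log loss wrt logits
    w -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
```

The backbone is never updated; only the small linear head is trained, which is why this works even with very few samples per class.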
Yeah, question? Yeah, thanks for pointing that out: for this paper in particular they had another trick in their bag, which was applying data augmentation to the raw images before extracting the feature vectors, and they found this was actually pretty important for beating those prior methods. But the augmentation is fairly simple and straightforward — mostly the random scales, flips, and crops we talked about in a previous lecture — and at test time they take an ensemble over many different random augmentations of each data point, so it's still a simple procedure that's the same across all the datasets. That's the simplest setup, where we just use the pre-trained network out of the box to extract feature vectors and plug them into some other algorithm. But if your dataset is a little larger, you can often do better with a procedure called fine-tuning. Here the idea is to take our model that has been pre-trained on something like ImageNet, throw away the last layer, reinitialize that last layer for the classification categories of our new dataset, and then continue training the entire model on the new task. Rather than using the network as a fixed feature extractor, we actually backpropagate into the model and keep updating its weights to improve performance on the downstream task. There are a couple of tips and tricks here: one is that you often need to reduce the learning rate a lot when fine-tuning, and another is that you might want to first train a linear model to convergence on top of extracted features, and only then go back and fine-tune the whole model.
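A sketch of this fine-tuning setup in PyTorch — with a small randomly initialized network standing in for a real ImageNet-pretrained backbone (an assumption for illustration), showing the reinitialized head and the much smaller learning rate for the pretrained weights:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone; in practice you would load e.g. an
# ImageNet-trained ResNet here instead of this random network.
backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
new_head = nn.Linear(16, 10)  # reinitialized for the new dataset's 10 classes

# Common fine-tuning trick: a much smaller learning rate for the pretrained
# weights than for the freshly initialized head.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": new_head.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)

x = torch.randn(8, 64)
loss = new_head(backbone(x)).pow(2).mean()
loss.backward()   # unlike feature extraction, gradients flow into the backbone
optimizer.step()  # both backbone and head get updated
```

The contrast with the previous recipe is the `loss.backward()` call reaching the backbone parameters: here the whole model keeps training, just more gently.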
It turns out this fine-tuning procedure can give pretty substantial gains on a lot of tasks. Here, for the task of object detection — which we'll cover in detail in a few lectures, so you don't need to know what the number on the vertical axis means, just that higher is better and 100 is perfect — the bars show transfer learning for detection on two different datasets. The blue bars freeze the entire network and use it as a fixed feature extractor, while the orange bars continue training the whole network on the new dataset, and fine-tuning works a lot better, giving a huge boost over the frozen feature extractor. Another point is that the architecture you use matters. I told you step one was to train a model on ImageNet; well, the model you train matters a lot, and in general, models that work better on ImageNet tend to also work better on many other computer vision problems. This is why basically everyone in computer vision knows the exact relative ordering of all the models on ImageNet — not because they're obsessed with the ImageNet challenge, but because for many years, models that worked better on ImageNet tended to work better on basically every other problem you tried. There was a period when you would just take the latest and greatest model that worked best on ImageNet, apply it to whatever problem you had at hand, and things would get better.
An example of that comes again from object detection. It's very difficult to find papers that make controlled comparisons between different ImageNet models, so this comparison is not perfect, but it's the closest I could find. Again the y-axis is performance on the detection task: zero is terrible, 100 is perfect. Around 2011, the pre-deep-learning state of the art was getting something like 5 on this task; an object detection method built on AlexNet got 15; then the exact same detection method, but with AlexNet swapped out for VGG and everything else the same, jumped from 15 to 19 — just the result of a bigger, more powerful network that works better on ImageNet also giving improvements on this new task. And if you compare the 29 and the 36, those are again the exact same detection method, one using VGG and the other a 50-layer ResNet, and again the jump from VGG to ResNet gave huge gains not just on ImageNet but on a ton of downstream tasks. This trend basically continued: models that worked better on ImageNet tended to give you gains nearly for free, with very little effort, on a wide range of downstream tasks in computer vision. So the quick guide to approaching transfer learning with CNNs is to think about a little two-by-two matrix: is your dataset very similar to ImageNet, and how much data do you have in the new dataset?
If your dataset is similar to ImageNet — it tends to contain objects that look kind of like ImageNet objects — then even with very little data, maybe tens to hundreds of samples per category, a linear classifier on top of pre-trained features tends to work quite well; and with a fairly large amount of data, maybe hundreds to thousands of samples per category, fine-tuning an ImageNet model on your new dataset tends to work quite well. If your dataset is fairly different from ImageNet — and "similar" and "different" are not well defined in this context — but you have a lot of data, you can often still initialize from an ImageNet model, fine-tune, and get good performance. The danger zone is where you have a very small dataset whose nature is very different from the kinds of images in ImageNet. If that's your situation, you're in trouble. I still think a linear classifier on pre-trained features, or some fine-tuning approach, will often give reasonable results, but this is really the quadrant to watch out for.
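The two-by-two guide above can be written down as a tiny decision helper. This is just the lecture's heuristic made executable; the cutoff of 100 samples per class is a hypothetical number of my choosing, not from the lecture:

```python
def transfer_learning_advice(similar_to_imagenet: bool, samples_per_class: int) -> str:
    """Rough 2x2 transfer-learning guide; boundaries are heuristic."""
    small = samples_per_class < 100  # hypothetical cutoff for "very little data"
    if similar_to_imagenet and small:
        return "linear classifier on frozen pretrained features"
    if similar_to_imagenet:
        return "fine-tune a pretrained model on the new dataset"
    if not small:
        return "initialize from ImageNet weights, then fine-tune"
    return "danger zone: try a linear classifier or careful fine-tuning, expect trouble"
```

For example, `transfer_learning_advice(True, 20)` lands in the frozen-features quadrant, while `transfer_learning_advice(False, 10)` lands in the danger zone.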
I'd also like to point out that this idea of transfer learning has really become the norm in computer vision over the past several years — the norm, not the exception — and the mainstream way we operate on many tasks: a very wide set of computer vision papers use some kind of transfer learning these days. We've seen this already in object detection, where part of the model was pre-trained on ImageNet, and we'll see it again in things like image captioning, where again part of the model is pre-trained on ImageNet. People get even wilder and pre-train different parts of the model on different datasets and then plug them together: for an image captioning model that we'll talk about in more detail, maybe part of the model was pre-trained on ImageNet and part on some other dataset, and then you put them together and continue training for the eventual downstream task. One important shift in computer vision over the past several years has been designing these pipelines of pre-training and fine-tuning, possibly with multiple rounds, to make use of different types of data for your eventual task. A great recent example comes from our very own GSI, Luowei, who has a very recent paper where, step one, they train a CNN on ImageNet; step two, they fine-tune that CNN for object detection on another dataset called Visual Genome; step three, they train a language model called BERT on yet another type of data, the details of which are not important right now; step four, they combine the results of steps two and three and fine-tune them for a joint vision-and-language task; and step five, they fine-tune this eventual construction on the downstream task, be it image captioning or visual question answering or something else. I don't intend for you to understand the full details of this model — it's just to give you a sense that this procedure of pre-training, fine-tuning, and transfer learning has become a critical part of mainstream computer vision research, and we'll have a guest lecture from Luowei later in the semester where I think he'll talk about this model in more detail. Even though transfer learning has become very pervasive, there have been some very recent results that somewhat call it into question: there's a very interesting result from just this year showing that, for the task of object detection, you can actually get away without pre-training.
That's something everyone thought was critical for many downstream tasks in computer vision. The catch is that to do as well on object detection when training from scratch, you need to train for about three times as long: you can do just as well without pre-training, but it requires many, many more iterations. The other takeaway from that paper is that in situations where you have very little training data, pre-training still gives you a big advantage. I think this meshes with the earlier intuitions: if you have a very small dataset — something like tens of samples per class — then pre-training plus fine-tuning is very effective, and if your dataset is larger, you can consider not only fine-tuning the whole model but perhaps also training a brand-new model from scratch. The caveat is that even with a large dataset, pre-training and fine-tuning is still an extremely effective recipe in computer vision because it makes things train much faster: even when you have plenty of data for the task you care about, initializing from a model pre-trained on ImageNet tends to make training converge much more quickly, so it's very useful in practice. This next part is less critical, so if there are any questions about transfer learning, I'd rather take those here. All right — if you can bear with me for two more minutes, we can blast through this last bit, since you don't have enough GPUs to do it anyway. A couple of lectures ago we talked about moving from single-device training to rack-scale or data-center-scale machine learning, and the question is how you actually do that.
One idea is to split the model across GPUs: partition it so some layers run on one GPU and other layers on other GPUs. This turns out to be a really bad idea, because your GPUs spend a lot of time waiting — in this scheme only one GPU executes at a time, so it's a very inefficient use of your resources. By the way, this is called model parallelism, because the idea is to split up the model and run different parts on different devices. Another flavor of model parallelism you'll sometimes see is to split the model into multiple parallel branches and run the different branches on different GPUs. This is the mechanism actually used by the original AlexNet paper — recall the poor guy only had GPUs with three gigabytes of memory, so this was really critical for training AlexNet at the time — but it also ends up being a fairly inefficient way to parallelize, because it requires a lot of synchronization and communication between GPUs: in particular, communicating the activations, and the gradients of the loss with respect to those activations, within the forward and backward passes, which tends to be expensive. So instead, the typical way people parallelize across multiple GPUs is an idea called data parallelism. Here we take our batch of N images and, if we're training on two GPUs, replicate the model on each device and run a smaller minibatch of N/2 images on each GPU.
After splitting across the batch dimension, the GPUs can perform most of their processing completely independently, with much less need for communication: the two GPUs run the forward pass, compute the loss, and compute gradients with respect to the parameters, all without talking to each other. The only point where they need to communicate is at the end of each forward and backward pass, where the GPUs exchange the gradient of the loss with respect to the parameters and sum those gradients across all devices in order to take a gradient step. This is how people typically make use of multiple GPUs these days, and the approach scales very well to large numbers of devices: you can imagine splitting not just across two GPUs in your desktop, but across eight GPUs in a big server, or hundreds of GPUs in a data center. The idea is the same: split the batch of data evenly across the GPUs, do independent forward and backward passes on each device, and the only communication is summing the gradients across the devices. Another variant you'll sometimes see combines model parallelism within each server with data parallelism across the servers of a data center — this requires a fairly large number of GPUs to play with. Splitting across the batch dimension works really well, but it requires you to be able to train with very large minibatches.
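The property that makes data parallelism work — summing the per-device gradients recovers exactly the single-device gradient of the batch loss — can be checked directly. Here is a NumPy sketch with a linear least-squares model and four simulated "devices":

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 1))      # model weights, replicated on every "GPU"
X = rng.normal(size=(32, 10))     # full minibatch of N = 32 examples
y = rng.normal(size=(32, 1))

def grad(W, Xb, yb):
    # gradient of the squared error summed over the (sub-)batch
    return 2.0 * Xb.T @ (Xb @ W - yb)

# single-device step: gradient over the whole batch at once
g_full = grad(W, X, y)

# data-parallel step on K = 4 "devices": each gets N/K examples, computes its
# local gradient independently, then the gradients are summed (the all-reduce)
K = 4
shards = np.split(np.arange(32), K)
g_sum = sum(grad(W, X[idx], y[idx]) for idx in shards)
```

The only cross-device communication in the parallel version is the final sum, and the result matches the single-device gradient, so the optimization trajectory is unchanged.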
So the goal is: suppose you've got a model that trains really well on a single GPU, and now you want to scale up and train that same model on a very large number of GPUs, usually to reduce the overall training time. Rather than training for a long time on one GPU, we'd like to train for a short time on the larger set of GPUs: the total number of epochs — the number of times we see each element of the training set — stays the same, but we form larger minibatches and take fewer overall gradient descent steps. The one really important trick for this large-batch training turns out to be fairly simple: scale the learning rate linearly. Suppose a model trains well on one GPU with learning rate alpha and batch size N; then you can usually train on K GPUs with batch size N per GPU (KN in total) and learning rate K times alpha. You scale the number of devices by K, the total batch size by K, and the learning rate by K — that's the most important trick for getting things to work with very large batches. The other trick is that if the batches are very large and we're training on, say, a thousand GPUs, then the learning rate will be a thousand times whatever the old learning rate was, and such a large learning rate can make training explode in the very first iterations. So people often use a learning rate warm-up schedule: start the learning rate at zero, gradually increase it over the first maybe one to five thousand iterations of training, and from that point continue with whatever learning rate schedule you would have used originally. There are a couple of other tricks needed to actually make this work, and I'd recommend checking out the paper if you're interested in them.
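The linear-scaling rule plus warm-up can be sketched as a small schedule function. The specific numbers here (base learning rate 0.1, 8 devices, 1000 warm-up steps) are illustrative assumptions, not values from any particular paper:

```python
def scaled_lr_with_warmup(step, base_lr=0.1, k_devices=8, warmup_steps=1000):
    """Linear-scaling rule: K devices -> peak learning rate K * base_lr,
    reached via a linear warm-up from 0 over the first warmup_steps
    iterations; afterwards, continue with the usual schedule (e.g. decay)."""
    peak = base_lr * k_devices
    if step < warmup_steps:
        return peak * step / warmup_steps   # ramp up linearly from 0
    return peak

# learning rate at a few points along training
lrs = [scaled_lr_with_warmup(s) for s in (0, 500, 1000, 5000)]
```

With these settings the schedule ramps from 0 up to the scaled peak of 0.8 over the first thousand steps, which is what prevents the huge scaled learning rate from blowing up the first few updates.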
Once you get all these tricks to work, you can train models on ImageNet very quickly using very large numbers of GPUs. A relatively well-known paper trained models on ImageNet in just one hour — the secret being a batch size of 8,192 distributed across 256 GPUs — under the somewhat clickbait-y title "ImageNet in one hour." The problem is that once you write a paper called "ImageNet in one hour," you are just begging for people to come and beat you, so after it came out, the big industrial labs all seemed to be tripping over themselves to see how much hardware they could throw at the problem and how fast they could train on ImageNet. Yeah, question? The question is whether you asymptotically reach a limit as the hardware increases — in theory yes, but we haven't hit the asymptote yet, so let's keep going. After the one-hour, 256-GPU result, another paper trained with a batch size of 12,000 — a group from Intel pushing Intel's new Knights Landing devices, which were kind of like GPUs — and got training down to 39 minutes, a huge improvement. Then another group at Intel trained with a batch size of 16,000 on 1,600 Xeon CPUs and got down to 31 minutes. Then another group got training down to 15 minutes with a batch size of 32,000 on a thousand GPUs — and compared with the original "ImageNet in one hour," this is roughly linear: about four times as fast with a four-times-larger batch on four times as many devices. In practice, if you read papers from big industrial labs, they'll often use these tricks.
Deep Learning for Computer Vision
Lecture 2: Image Classification
So welcome back — this is EECS 498-007 / 598-005, and this is now lecture 2. Remember, last time we gave a historical overview of the fields of computer vision, deep learning, and machine learning; starting this lecture we'll talk about image classification, dive into the technical material of the course, and see our first learning algorithm. Image classification is a really important core task in computer vision, and in machine learning more broadly, and it's quite a simple task to state: our algorithm takes as input an image, and the output is a category label assigned to that input image. When we talk about image classification, we typically have some fixed set of category labels in mind that the algorithm is aware of — in this example, maybe the five labels cat, bird, deer, dog, and truck — and as it performs classification, the algorithm simply assigns one of those five labels to the image it sees; in this case, cat. For all of us this is a really trivial task: you can do it almost without thinking about it, and you just immediately know this is a cat when you look at the image. But for the computer it's not so easy, and the main challenge when we try to do image classification on machines is the problem we call the semantic gap. When we look at this image, we immediately recognize it as a cat: photons hit our retina and go through a lot of complex processing in our brain, but we're not consciously aware of any of that — we just intuitively know what we see. The computer doesn't have that kind of intuition.
When the computer looks at such an image, all it gets is a giant grid of numbers: for an image like this, a grid of 800 by 600 by 3 numbers, where each pixel's color is represented by three values between 0 and 255. The problem is that if you look at that grid of numbers, it's not at all obvious that it should represent a cat, and there's no obvious way to convert this grid of raw pixel values into the semantically meaningful category label "cat." What's even worse is that the entire grid of numbers can change drastically under relatively unassuming changes to the image. For example, imagine changing the viewpoint — taking a photograph of this exact same cat from a slightly different angle. We would definitely still recognize it as a cat, and probably as this exact same cat, because we could recognize the markings on its face. But because of this semantic gap — the difference between what we understand when we look at images and what is represented in the raw grid of pixel values — even a simple change like photographing from a slightly different angle changes all of the pixel values in a very unintuitive way, and we somehow need to design algorithms that are robust to these massive changes in raw pixel values arising from relatively simple changes to the images themselves.
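To make the "giant grid of numbers" concrete, here is what an 800-by-600 RGB image literally is to the machine (note arrays are conventionally stored height × width × channels):

```python
import numpy as np

# An 800x600 RGB image: 800 * 600 * 3 integers, each between 0 and 255.
image = np.zeros((600, 800, 3), dtype=np.uint8)   # height x width x channels
image[300, 400] = [200, 120, 40]                  # set one brownish pixel

print(image.shape)   # (600, 800, 3)
print(image.size)    # 1440000 numbers in total — and nothing says "cat"
```

Any change of viewpoint, lighting, or pose rewrites essentially all 1.44 million of these numbers while the label stays the same.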
There's a lot more to deal with beyond viewpoint variation. We also need to handle intra-class variation: different cats all look very different, and each of these different adorable cats produces a very different grid of pixel values on the raw sensor of the camera, so we need to build systems that are robust to the massive variation that can occur within a category. Another problem is that sometimes we want to recognize fine-grained categories: so far we've talked about cats versus dogs versus trucks, but depending on the task at hand, we might want to distinguish categories that appear very visually similar — for example, different breeds of cats. This is again a huge practical problem, and it's not at all clear how to write algorithms robust to such subtle differences in image pixels. Our algorithms also need to be robust to background clutter: sometimes the objects we want to recognize blend into the background, maybe due to natural camouflage or other things going on in the scene. They need to be robust to illumination changes: as the lighting conditions in the scene change — lights on or off, pictures taken in the dark or in daylight — the underlying semantics of the objects in the image do not change, so our algorithms should be robust to these massive changes in lighting. They need to deal with deformation: cats are a particularly deformable object category, and the objects we want to recognize may appear in very different poses and positions in the image. And we need to deal with occlusion: sometimes the object we want to recognize might hardly be visible at all.
I think this example on the right is really interesting: it's basically a couch, and we see a tail sticking out from underneath the couch cushion. You probably intuitively thought that was a cat — because you've seen a lot of images of cats, because you know cats usually live in houses, because you know cats like to burrow under things sometimes. But if you think about the raw image evidence, we don't even know this is a cat; it could be a raccoon or some other animal with a tail. So even this relatively simple problem of giving category labels to images can involve a lot of common-sense reasoning about the world — your knowledge that cats live in houses and that raccoons are unlikely to live under couch cushions. Even this unassuming problem of classifying images becomes very challenging very quickly if we want to recognize the full breadth of categories that exist in the world, with all the variation in position, appearance, and pose with which those objects appear in images. If we were somehow able to overcome all of these problems and write algorithms that could perform robust image classification, recognizing lots of categories in lots of different situations, it would be really, really useful. We already saw in the last lecture how applications of computer vision can unlock many scientific questions: we could use image classification for medical diagnosis, taking pictures of skin lesions and classifying them as malignant or benign tumors, or taking X-rays and classifying what types of problems might be present in the medical images. Robust image classification could also be useful for astronomers who go out and collect visual data from telescopes and other types of sensors.
They could then classify what types of phenomena are out there in the sky. It could also be useful for many other scientific applications, like recognizing whales or categorizing many different types of animals appearing in sensor data. So image classification on its own is a really useful problem, and if we could solve it, it would unlock a lot of really powerful applications. But what I think is possibly even more interesting, and maybe less intuitive, is that image classification is also a fundamental building block for other algorithms we might want to build in computer vision. As an example, there's a related task in computer vision called object detection, where we want to draw boxes around the objects in an image and say not just what objects are present but where they are located. It turns out that image classification can be used as a sub-part in building up to more complex applications like detection: one way to perform object detection is via image classification of different sliding windows in the image — we just classify different sub-regions. We could look at one sub-region over here and classify it as background, horse, person, car, or truck; in this case it's classified as background because there are no objects there, while this other box would be classified as person, and so on. So if we can build really powerful image classifiers, we can use them to build other applications like object detectors.
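The sliding-window idea above can be sketched in a few lines. The `classify` function here is a deliberately silly stub — any real image classifier could be dropped in its place — that "detects" a bright region as a person:

```python
import numpy as np

def classify(patch):
    # stub classifier: any real classifier (e.g. a CNN) would go here
    return "person" if patch.mean() > 200 else "background"

def sliding_window_detect(image, window=32, stride=16):
    """Run the classifier on every window; return non-background hits."""
    detections = []
    h, w = image.shape[:2]
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            label = classify(image[top:top + window, left:left + window])
            if label != "background":
                detections.append((top, left, label))
    return detections

img = np.zeros((64, 64), dtype=np.uint8)
img[0:32, 0:32] = 255                  # bright square in the top-left corner
hits = sliding_window_detect(img)      # [(0, 0, 'person')]
```

Detection reduces to running the classifier many times, once per window — which is exactly why a strong classifier is such a useful building block.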
sequence of classification problems, just as in the object detection case. Here we've got some fixed vocabulary of English words that the algorithm is aware of, and the question at each step is: what word should I say next? This is again a classification problem. First we would classify and select the word "man" using an image classifier, then maybe select the word "riding", select the word "horse", and then select a stop token to mark the end of the sentence. Maybe an even slightly more outlandish application is playing computer games like Go. People have built AI systems that can learn to play Go and even outperform many of the best human experts in the world, and this is basically a classification problem too: the input is now an image where each pixel describes the state of the game board at some position, and the output is a classification problem over which position on the board I should place my next stone. So you can see that throughout these different applications, this relatively unassuming problem of image classification is a really powerful building block that we can use to build up to many more interesting problems in computer vision. Given all of that, we really want to be able to write algorithms that perform image classification well, but it's really not obvious at all how we should do this. If you were to just sit down at your computer and start typing code, you would need to write this magical function that inputs a giant grid of pixel values, performs some magical computation, and somehow spits out "cat", or "place the Go piece at position five-nine", or "this region of the image is or is not background". It's really not obvious at all what code you should type here, because unlike for something like sorting a list of integers, there's really no well-defined algorithm
for how to convert grids of numbers into cats. So we need to do something, and it's really not clear at all how we should approach this problem. One thing you might do is try to use your own human knowledge about what cats and other objects look like in order to hand-code classifiers that try to pick out different object categories. We talked in the last lecture about how edges in images are really important, so maybe you could first take the image and extract edges using some kind of edge-detection algorithm, and then try to find corners or other types of interpretable patterns in those edges. You know that cats have triangular pointy ears, and you'd hope those ears come out in the edges, so maybe you look for corners and write down rules about what angles a cat's ears are allowed to have. Cats are supposed to have whiskers in certain positions; maybe the whiskers would come out in the edges too. So you could imagine really going in there and hard-coding all of your own human knowledge about what cats look like, and trying to write down some explicit algorithm for detecting them. But this is not a very good approach. It's going to be brittle: there are cats without whiskers, and cats without pointy ears, and sometimes the edge detector will fail and won't detect the edges you wanted it to. Maybe you spend a lot of time trying to figure out all those corner cases for cats, but tomorrow we want to classify galaxies instead, and probably all the hard work you put into your algorithm for recognizing cats will be completely thrown away. So we really want an approach that is more robust, an approach that is more scalable, and an approach that doesn't require us to write down all of our own
human knowledge about what different types of objects look like. So here's where we come to machine learning. The idea is that rather than trying to explicitly encode our own human knowledge about what different types of objects look like, we're going to take a data-driven approach and have algorithms that can learn from data how to recognize different types of objects in images. The basic pipeline for the machine learning systems we're going to build is that first we collect a large dataset of images and label them with the types of labels we want our algorithm to predict. If we want to build a cat-versus-dog detector, we need to go out and collect a lot of images of cats and dogs, and then collect human labels for which images are cats and which are dogs. Once we've collected this large dataset, we deploy some kind of machine learning algorithm that will try to learn statistical dependencies between the input images in the dataset and the output labels that we wrote down during the data-collection process. And once we've used our machine learning algorithm to extract these statistical dependencies, we can then evaluate the classifier on new images. So instead of writing a single monolithic function called classify_image, we have a two-piece API. One piece is a function called train, which inputs a collection of images and their associated labels, performs some magical machine learning, and returns some statistical model. The second piece of the API is a predict function, which inputs the model that we produced during the training phase, as well as new images on which to evaluate that model; this will run the model on the new images and then
spit out the labels as learned from the training set. What's really interesting about this approach is that it's a different way to program computers. When you write an algorithm to sort numbers in a list, or perform other kinds of classical algorithms, you're basically using your own human knowledge to tell the computer exactly what steps it needs to perform in order to produce the output you want. But when we take a data-driven machine learning approach, what we're basically doing is programming the computer via the data that we feed it. If we want to program the computer to recognize cats, we feed it images of cats; if tomorrow we want to recognize galaxies instead, all we need to do is collect a new dataset of galaxies. Hopefully we don't need to recode our machine learning algorithm; instead we can just feed it new data and thereby change the behavior of the program. This is a really powerful paradigm for a lot of problems where we don't know how to write down explicit programs to solve them, and it has become the dominant approach for basically all visual recognition problems, image classification included. Now that we've settled on this data-driven machine learning approach to recognizing images, we need to talk about sources of data, so here are a couple of common image classification datasets that you'll tend to come across. One of the most common is the MNIST dataset. MNIST has ten classes, the digits 0 to 9; the images are 28-by-28-pixel grayscale images, so they're very tiny; and it gives us 50,000 training images and 10,000 test images. If you recall, in the last lecture we talked about how convolutional neural networks were developed in the 80s and 90s and deployed in commercial products to read handwritten digits
on checks. Well, this MNIST dataset was really used for that industrial application of recognizing handwritten digits on checks, and was deployed out there in the world. So even though it seems like kind of a toy dataset, it really has a lot of rich history behind it and has been very useful in the development of many machine learning algorithms. That said, the MNIST dataset has sometimes been called the Drosophila of computer vision. Biologists often perform a lot of initial experiments on fruit flies and then work up to more interesting animals as they make their discoveries, and that's really similar to the way a lot of practitioners work with MNIST. Because MNIST is a relatively small and simple dataset, it's very quick to try out new ideas on it, but you have to be really careful when reading papers that only show results on MNIST, because basically everything works on MNIST: just about any reasonable machine learning algorithm will get very high performance on it. So MNIST is treated as a proof of concept, and just getting something to work there isn't really enough to impress people anymore. Instead, another dataset you'll see very commonly used is CIFAR-10. CIFAR-10 again has very small images, 32 by 32, but they're color rather than grayscale, and rather than handwritten digits the categories are much more interesting: airplanes, automobiles, birds, cats, deer, and so on, as you can read on the slide. It's a fairly decently sized dataset, with 50,000 training images and 10,000 test images, and even though it's relatively small compared to other large-scale datasets, it's reasonably challenging, since these categories are reasonably difficult to recognize. As a result, we'll be using the CIFAR-10 dataset for most of the homework assignments throughout the
semester. CIFAR-10 has a cousin called CIFAR-100, with basically similar statistics except that there are a hundred categories rather than ten. I think people use CIFAR-100 a little bit less than CIFAR-10, but you'll sometimes see people working on it, and it's nice to be aware of. We also talked last lecture about the ImageNet dataset, and this has become something of the gold standard for image classification datasets. Basically, if you try to submit a research paper that proposes some new tweak to an image classification algorithm and you don't show results on ImageNet, reviewers will probably complain and your paper will probably be rejected. ImageNet is really considered a super-important dataset for benchmarking image classification algorithms these days. ImageNet is very interesting because it contains a thousand different categories, much more than the ten categories in CIFAR-10 or MNIST, and it's very, very large: about 1.3 million training images, with about 1,300 training images per category, along with standard validation and test sets. Yes, question? The question was how big the images in ImageNet are. The issue is that ImageNet images were downloaded from the web, so their resolutions actually vary quite a lot, but for most practical applications people resize them to either 256 by 256 or sometimes 224 by 224 when training on them. One interesting bit about ImageNet is the accuracy metric that people report. Because there are a thousand different categories, it's very difficult, and possibly unreasonable, to expect algorithms to pick out the exact one correct category, especially because some of the ImageNet labels are a little bit noisy anyway. So what people do in practice here is have the algorithm predict 5 category labels, and then
we count the algorithm as having made a correct prediction if the correct category is any one of those five predictions. So that's just a little bit of nuance in the way ImageNet is typically evaluated. Those are the most standard image classification datasets you'll see out there. Another interesting one is MIT Places. ImageNet images tend to focus on objects like cats and dogs and fish and trucks, and Places is a related dataset that focuses instead on scene categories like classrooms and fields and buildings, so it's nice to be aware of. One thing that's really interesting is to compare these classification datasets in terms of their size. Here we're plotting the number of pixels in the training set for these different datasets, assuming 256 by 256 for ImageNet and Places, and what you'll note is that the y-axis is on a log scale. You'll see that CIFAR is roughly an order of magnitude bigger than MNIST, ImageNet is roughly two orders of magnitude bigger than CIFAR, and Places is yet another order of magnitude beyond ImageNet. This drives home the point about why ImageNet is somehow a qualitatively different dataset from the other ones you'll see people work on: results on ImageNet are much more convincing, but unfortunately it's very computationally expensive to work with. As a result, we're sticking with CIFAR as a sweet middle ground in this course, splitting the difference between the complexity of the visual recognition tasks that show up in ImageNet and the computational affordability of smaller datasets like MNIST. What's also interesting to see from this chart is the increasing trend of datasets getting bigger and bigger over time. That's definitely one interesting direction for research: how
can we use ever bigger and bigger datasets to enhance the ability of our algorithms to perform robust classification? But people have also started thinking in the other direction as well, and one interesting dataset to be aware of there is the Omniglot dataset. Omniglot pushes things to the other extreme and benchmarks the ability of algorithms to learn with relatively little data. In Omniglot we've got more than 1,600 different categories, where each category is a letter in some alphabet from some language somewhere on Earth; it has letters from more than 50 different alphabets of written languages. The really interesting thing about Omniglot is that rather than giving you tons and tons of examples of each category, it only gives you 20 examples for each of these letters, and the challenge is to build algorithms that can learn very robustly from relatively few examples of each image category. This so-called low-shot classification problem is a really huge and emerging area of research that a lot of researchers are starting to think about these days. Now that we've talked about some of the common datasets you'll run into for image classification, it's time to think about our first classification algorithm, because data only gets you so far: you need some algorithm to actually make use of that data. The first learning algorithm we're going to talk about is nearest neighbor, and this one is so simple it might not even deserve the name of a learning algorithm. Remember I told you that when we implement a machine learning system, we need to implement two functions, one called train and one called predict. Well, for nearest neighbor the train function is trivial: we're simply going to memorize all the training data. We're not going to process
it, we're not going to do anything with it; we're just going to memorize all of our training data. Then on the predict side, what we're going to do is take the new image that we want to predict a label for, compare it to each one of the images in the training set using some kind of comparison or similarity function, keep track of the most similar training image to our test image, and then simply return the label of that most similar training image. Like I said, this is a very simple, straightforward learning algorithm, and it only learns in the sense that it memorizes the training data. But in order to implement this algorithm, we need to actually write down some function that computes the similarity between two input images. Basically, we need some kind of distance metric that inputs a pair of images and spits out a number representing how similar those two images are, in order to perform this nearest-neighbor classification. One very common choice of distance metric is a very simple one: just use the L1, or Manhattan, distance between the pixels of the images. Here we take our test image, imagining a very simple 4-by-4 test image whose pixel values we've written down explicitly, and to compare it to a training image we simply take the absolute value of the difference between each pair of corresponding pixels in the two images, and then sum up all of those absolute differences. That gives us a single number representing the distance between those two images. One thing to point out here is that this satisfies all the normal rules for a metric from mathematics: if we've got two images that are
exactly the same, we'll have a distance of zero; things like the triangle inequality are satisfied; this is a reasonably well-defined metric mathematically. With these couple of bits of information, that's enough to implement your first learning algorithm, and indeed the nearest-neighbor classifier is such a simple and straightforward algorithm that we can fit a full implementation on a slide, even with some comments, and I think you might even be able to read it. In our nearest-neighbor classifier, I told you we need to implement two things. One is the train step, which is trivial: we just memorize the training data, assigning the images X and their labels y to member variables of our class. The predict step is again very simple: we take some new test images X, iterate over all the images in the training set, compute the L1 distance to each, and return the label of the most similar training image. That's it; that's nearest neighbor; you can now implement your own machine learning systems. So a couple of questions. With this nearest-neighbor classifier, suppose we have a training set of N examples: how fast is training? Well, I guess it kind of depends on your copy semantics, but I would say this is constant-time training if we just store pointers to all the training data, which can be done in constant time; if you were to make a deep copy, that would be linear time, but let's not do that. Then the question is, again with N examples, how fast is testing going to be? This one is going to be linear time, because, counting the size of the image and the computation of the norm as a constant, for every test example we need to compare it to each of the N training examples, which means that at test time we're going to pay a
performance penalty that's linear in the size of the training set. This is actually really bad; this is the opposite of what we want from a machine learning system. If you think about how we want to deploy machine learning systems, we want to collect as much data as we possibly can about the task at hand and use that large amount of data to train a big, powerful model, and it's OK if training that model takes a long time; but when we finally deploy that model and actually use it at test time, we'd like it to be very fast. We'd like to be able to run these models on your mobile phone in real time; we'd like to run them for millions or billions of users on the web, on all the photos getting passed around on the internet. So the nearest-neighbor classifier has exactly the opposite characteristics of what we'd usually like in a machine learning system, and we'll see that as we move to neural-network-based approaches, they invert this hierarchy: the neural network systems we'll end up using take relatively long to train but are relatively fast at inference time. Of course, I should also point out, for completeness, that there are many interesting algorithms for computing approximate nearest neighbors, and approximate nearest-neighbor computation can be done much faster than this full brute-force approach. These are beyond the scope of this class, but it's nice to be aware of them in case you find yourself in a situation where you really need to perform some large-scale nearest-neighbor search for some reason. Now that we've got this idea of nearest-neighbor classification, we can think about how it actually performs on images. Here we're showing the results of nearest-neighbor classification on the CIFAR-10 dataset.
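Since the slide with the implementation isn't reproduced in this transcript, here is a minimal NumPy sketch of the nearest-neighbor classifier described above, with trivial memorization in train and a brute-force L1 search in predict; the class and variable names are illustrative assumptions, not the lecturer's exact slide code:

```python
import numpy as np

class NearestNeighbor:
    """Sketch of a 1-nearest-neighbor classifier using L1 (Manhattan) distance."""

    def train(self, X, y):
        # "Training" just memorizes the data: X is (N, D) flattened pixels, y is (N,) labels.
        # Storing references is O(1); a deep copy would be O(N).
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # For each test image, find the training image with the smallest L1
        # distance and return its label: O(N) comparisons per test example.
        y_pred = np.empty(X.shape[0], dtype=self.y_train.dtype)
        for i, x in enumerate(X):
            dists = np.abs(self.X_train - x).sum(axis=1)  # L1 distance to every training image
            y_pred[i] = self.y_train[np.argmin(dists)]
        return y_pred
```

Note how the train/predict split mirrors the two-piece API from earlier, and how all the real work, unfortunately, happens at prediction time.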
Here, in the left column, we're seeing a bunch of examples from the CIFAR-10 test set, and along each row we're seeing the nearest neighbors from the training set to each of those test examples. As you might expect, because we're computing the distance between images by literally comparing the values of their pixels, the nearest neighbors tend to be images that look very visually similar. If you look at maybe the third row, we've got this orange blob in the middle as our test image, and across the row, the nearest neighbors we retrieve are other images that have orange or reddish blobs in the middle and a green or brownish background. This L1 distance that we're using to compute nearest neighbors is really not very smart, and it doesn't know much about what it's looking at. We can get a sense of how poorly this might perform by looking at which of these one-nearest-neighbors are correct or incorrect. It's kind of tough to tell what these images actually are sometimes, just by looking at them, because they're relatively low resolution, but what I've tried to do is draw red boxes around the one-nearest-neighbors that are incorrect and green boxes around the ones that are correct, and this gives you a sense that even though images can look very visually similar as measured by this L1 distance, they can actually have very different semantic meanings. This is clear in maybe the fourth row, where you see this kind of brown blob surrounded by a white background; I think the test image is actually a frog, but its nearest neighbor is a cat. The cat is also a brown blob on a white background, so it looks very visually similar by this L1 metric, but the
label is different, so we would make an incorrect classification on this example. So that's one way to get an intuitive understanding of what a nearest-neighbor classifier is doing. Another way to think about nearest-neighbor classifiers is through this notion of decision boundaries, which we can see in this plot, though it needs a bit of unpacking. What we're showing here is nearest-neighbor classification over images with just two pixels: the x-axis is the intensity value of one of our pixels and the y-axis is the intensity value of the other. Each of the colored dots we're seeing is a training example, where the color of the dot represents the category of that training image; maybe red dots are cats and blue dots are dogs, and so on and so forth. And the color of the background region represents the category label that would be assigned to that point in space if we were to run nearest-neighbor classification on a test image landing there. For example, for this red X here, the nearest neighbor in the training set is maybe this red dot, which means that if we were to perform nearest-neighbor classification for the red X we would predict the red category. What's interesting to look at here is the decision boundaries between different categories: we've drawn out the curve that carves the plane up into regions that would be classified as green and regions that would be classified as purple. When we look at the nearest-neighbor classifier this way, we can recognize a couple of interesting things. One is that these decision regions can be very, very noisy and are subject to outliers. For example, we see that in our training set
we've got this one yellow point sitting out in the middle of a whole bunch of green points. Maybe that's noise; maybe it actually should have been labeled green instead of yellow; it's kind of hard to say. But when we use this nearest-neighbor classifier, the presence of a single yellow point in this cloud of green is going to cause a bunch of test examples around that yellow point to be classified as yellow; maybe that's good, maybe that's bad. We can also see, over here on the left side of the screen, this kind of jagged decision boundary between the red class and the blue class, and again this is because relying on only the single nearest neighbor to perform the classification can be a bit noisy. So the question is: what might we do to smooth out these decision boundaries and maybe make the classification more robust? One idea is to simply use more neighbors. So far we've talked about the nearest-neighbor classifier as simply parroting back the label attached to the nearest training example for each test example, but what we can do instead is use more than just that one nearest neighbor: we can consider some set of K nearest neighbors, and then imagine some way of combining the category labels of each of our K nearest retrieved results. There are many different ways you might imagine doing this, but one simple idea is to take a majority vote among the category labels of the K nearest neighbors. So then this picture of the nearest-neighbor classifier using decision boundaries lets us see the difference between K = 1 and K = 3. One difference is that our decision boundaries get a lot smoother: you can see that with K = 1 our decision boundaries were very noisy as a result of only using one neighbor, and now if we use
3 neighbors instead to perform classification on the same dataset, we can see that we've smoothed out the decision boundary between these two categories quite a lot. We can also see that this has helped reduce the effect of outliers on our classification performance: now, even though we still have that one yellow point hanging out in a cloud of green, it no longer results in a yellow classification region, and similarly the region between red and blue has gotten smoothed out a bit by using more than one nearest neighbor. But there's another problem, which is that when K is greater than 1 there might be ties between classes. In this visualization, the white regions all have three nearest neighbors of three different categories, so somehow you need some mechanism for breaking ties. You could imagine having some heuristic based on the distances, or maybe you back off and use the one-nearest-neighbor result; there are different heuristics you might imagine in this situation. Another thing we might want to change or play around with in the nearest-neighbor classifier is the distance metric that we use to compute similarity between images. So far we've talked about using this L1 distance between images, which, recall, was the sum of the absolute differences between all the corresponding pixels of the two images. Another common choice is the L2, or Euclidean, distance between the pixels of the images. Basically, what we're doing here is taking the pixels of the image, stretching them out into a long vector, and computing the Euclidean distance between those two vectors as points in a high-dimensional space. And what's interesting is that if we flip back to this picture of nearest neighbors using decision boundaries, you can see that as we use different distance
metrics, we get qualitatively different properties in the decision boundaries that arise. I'll kind of leave this as an exercise to the reader, but with L1 classification we can see that the decision boundaries between categories are all composed of axis-aligned chunks: they're either vertical line segments, horizontal line segments, or 45-degree line segments. But when we use the L2, or Euclidean, distance to compute nearest neighbors instead, our decision boundaries are still piecewise linear, but those lines can appear at any orientation in the input space. So using different distance metrics is one way that you, as the human expert, can imbue some of your own knowledge into the structure that you want the algorithm to take account of. It's a little bit unclear whether L1 versus L2 will make a big difference here; it's sort of not intuitively clear what semantic difference an L1 versus an L2 distance metric is going to result in for the case of image classification. But what's really interesting about the K-nearest-neighbor algorithm is that if we choose different distance metrics, we can imagine applying K nearest neighbors to just about any type of data imaginable. So far we've talked about using traditional vector norms to compute distances between points, but you can imagine working with very strange or interesting types of data and writing down very sophisticated distance functions between them in order to perform nearest-neighbor classification on many different types of datasets. One example here is comparing research papers. There's this cool site called Arxiv Sanity that lets you do some interesting exploration around research papers that are coming out
each day, and one interesting feature of this website is that it lets you show papers that are similar to another paper. Here I looked up on Arxiv Sanity a paper that I wrote last year called Mesh R-CNN, and if we click "show similar", what this does is basically nearest-neighbor retrieval on these PDF files, and the way that it does that is by using an interesting distance metric. The distance metric here is called tf-idf similarity; that's term frequency–inverse document frequency, which is very commonly used in a lot of NLP applications. I won't tell you exactly how it works, but it's a distance metric that works on pieces of text and encodes knowledge about the frequencies with which words appear in different documents. What's interesting is that doing nearest-neighbor retrieval using this tf-idf metric on research papers actually gives really good results. If we look at the four nearest neighbors to my own most recent paper, we see these four papers; they're meaningless to you, but three of them were things that we directly compared against and cited, and really tried hard to make sure we beat in order to get our paper accepted. Interestingly, the nearest neighbor here is something that we didn't cite, so maybe I should go back and read that one. But the point here is that the nearest-neighbor algorithm, even though it seems relatively simple, can be fairly powerful, and can be applied to fairly different types of data as you change the way you compute distances between elements. This is also a bit of fun: a couple of years ago I wrote an interactive web demo in JavaScript that lets you produce these visualizations for nearest neighbors. You can go to this link and play around with it; you can interactively drag points around and see the decision boundaries move, you can change the number of categories, you can change
the number of training points you can change the value of K for the nearest neighbors that we use and you can flip back and forth between L1 and L2 metrics to try to get a sense of qualitatively what all these choices do and how they change the decision boundaries of your KNN classifier so putting this thing together took like two days of my life so I really hope someone looks at it some time I think this can be a useful tool to help you gain a little bit of intuition into what this KNN classifier is doing so by now we've seen a couple different choices that we have to make when doing KNN classification we've seen that apart from the training data we need to choose a value of K that is how many neighbors we're going to consider when running this algorithm we've also seen we need to choose the distance metric should we use L1 or L2 or should we try to cook something up that incorporates our own domain knowledge and it's not really clear how we should set these for different problems so these choices of K and of the distance metric are examples of what we call hyperparameters so a hyperparameter is a choice that we need to make in our learning algorithm that we cannot necessarily learn directly from the training data because they somehow interact with the way the algorithm works in a deep fundamental way so these hyperparameters we can't really set them directly through learning so we need some other mechanism to choose which values of the hyperparameters are going to work best on our data and unfortunately there's not a lot of great ways in practice to choose hyperparameters the simplest observation is that they're very problem dependent so we basically need to try out different values and see whatever is going to work best for our data and our task but there's some nuance here in what exactly we mean by try out different values and what exactly I mean by decide which one works best so here's a couple ideas for how
we might try to go about setting hyperparameters so idea number one would be maybe we should select the values of hyperparameters that give our learning algorithm the highest accuracy on the training set this seems reasonable right we want our algorithm to do well we have a training set and the training set is meant for training so maybe we should just set the hyperparameters to give us the best performance on the training set so even though this seems reasonable it's actually a terrible idea like simply never do this the reason is this can lead you very very far astray as a concrete example for K nearest neighbor classification if you were to try to set hyperparameters by maximizing accuracy on the training set you would always choose K equals 1 right because imagine what happens if you actually query with a training point in the KNN classifier if you use K equals 1 it will find the nearest training point which is itself and then it will always return the correct label so a KNN classifier with K equals 1 always gets 100% on the training set but as we've seen in these qualitative examples that probably intuitively is not correct because we've seen how setting higher values of K can cause decision boundaries to be smoothed out and that actually might be the correct thing to do for some problems but we'll never get to know that from looking at the training set accuracy only so instead a better idea is idea number two maybe what we need to do is split our data set into two components we'll reserve something like 90% of our data set and call it the training set and then reserve maybe 10% of our data and call it the test set because again really the point of a machine learning algorithm is for the learning algorithm to learn from the training set and then see what the accuracy is on the test set
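As a quick sanity check of the claim above that a 1-nearest-neighbor classifier always scores 100% on its own training set, here is a minimal sketch (the data here is synthetic and purely illustrative, not the CIFAR setup from the assignment):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    # L2 distances from the query point to every training point.
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest training labels.
    return np.bincount(y_train[nearest]).argmax()

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 2))
y_train = rng.integers(0, 3, size=50)  # labels are pure noise

# With k=1 every training point retrieves itself (distance 0),
# so training accuracy is trivially 100% even on random labels.
train_acc = np.mean([knn_predict(X_train, y_train, X_train[i], k=1) == y_train[i]
                     for i in range(len(X_train))])
print(train_acc)  # 1.0
```

This is exactly why training-set accuracy tells you nothing about which K to pick: the noise labels here are unlearnable, yet K=1 "wins" anyway.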
and then as we vary the values of the hyperparameters we'll choose the values of the hyperparameters that work the best on the test set now this is more reasonable because the point of using machine learning algorithms overall is to generalize to unseen data we don't care about performance on the training set because we already have those labels in our data set we care about the performance on unseen data and somehow this approach gives us some estimate of our algorithm's performance on data that it had not seen during training question so basically you're absolutely correct and even though I told you this seems very reasonable it seems very logical this is wrong and you should not do this this is actually again equally as bad as tuning on the training set if you do this you will draw incorrect conclusions about the performance of your learning algorithm because basically what we've done in this approach is a different way of learning on the test set right because once you look at the test set your algorithm is polluted with knowledge of that test set and if you are using the test set in any way to make decisions about your learning algorithm then you're cheating because that pollutes your idea of how well that algorithm is going to perform on unseen data because once you use the test set to set values of your hyperparameters the test set is no longer unseen data and you no longer have any estimate any idea about how your algorithm is actually going to perform when you deploy it out there in the wild and run it on new images that did not appear in your data set at all so even though idea two seems logical and seems plausible this is a fundamental cardinal sin in machine learning and if you do this you're making a fundamental error in the way that you're preparing your model so a much better approach is idea number three so here what we're going to do
is split our data now into three sets we're gonna have a training set that we use to train our algorithm we're gonna have a validation set that we use to set the values of our hyperparameters and then we'll reserve a test set to use only once at the very very end so then basically what we do is kind of similar to what we did before right we train our algorithm on the training set we try different values of hyperparameters we evaluate the performance of different hyperparameter values by checking the accuracy on the validation set and now we select the values of hyperparameters that have the highest performance on the validation set and once you've chosen all the values for all of your hyperparameters once you've fixed everything then only once at the very very end of your pipeline do you ever touch the test set and then you touch it only once you run your algorithm exactly once on the test set and that gives you a single number that now gives you a very proper estimate of your algorithm's performance on truly unseen data so even though this is the correct thing to do it's actually completely terrifying in practice right so when you're writing a research paper you've been working on this project for months you've been working on this project for years and that entire time you've been tweaking your algorithm you've been tuning it you've been lovingly trying to improve it and throughout the entire process of developing your algorithm you as a good machine learning practitioner have never touched the test set you've only evaluated on the validation set and then it's the week of the deadline all of your hard work has finally come to fruition it's finally time to see how well your algorithm actually does and a week before the deadline is the only time you should run it on the test set even though this is terrifying right it defies intuition you
think what if my number is bad what if my entire life's work has been wasted on this algorithm well actually even this is the right way to solve that problem right so if it turns out that you get a bad performance on the test set that means your algorithm was bad and maybe it shouldn't be published so this is very terrifying to work with in practice but this is the correct way to do data hygiene in machine learning and projects have been sunk by getting this wrong if you get this wrong you might get your papers accepted but you'll be fundamentally dishonest about how well your algorithm performs out there in the wild and that's actually the point of building machine learning models at all so even though this is terrifying this is the right thing to do and you should always do it when you're working on machine learning problems so we said that idea 3 was better and in idea 3 the basic trick was to split our dataset into three chunks but we can do even better why stop there we can split our dataset into ever more chunks and get ever better estimates of our generalization performance so that's idea number four called cross-validation which is maybe even the best idea that we really should all be doing so here the idea is we'll split our dataset into many different chunks called folds and now what we're going to do is iterate through them and maybe in this example we have five folds so we'll try out five different versions of our algorithm one that uses fold 5 as a validation set and trains on folds 1 through 4 one that uses fold 4 as a validation set and trains on folds 1 2 3 5 etcetera etcetera and then what you can do is you get a slightly more robust idea of how hyperparameters are going to perform on unseen data because now you get maybe one sample per fold for each setting of the hyperparameters and then maybe you select your best hyperparameters using the
highest accuracy across all folds something like that so this is probably the most robust way to choose hyperparameters but it's fairly expensive because it requires actually training your algorithm on many different folds of the data so even though this is definitely the most correct thing you can do in practice this doesn't typically get done in most machine learning projects just because the training can be very expensive for many of those models but if you're using smaller models or smaller data sets or if you can computationally afford to do it then some kind of cross-validation is really the correct way to set hyperparameters for your machine learning models so when you run cross-validation you end up getting a plot like this so here on the x-axis are different values for one of our hyperparameters K and on the y-axis each dot is one of the validation set performances for each of those different trials of the algorithm that we've run so here's an example of five-fold cross-validation on K and the line here gives the mean across all the folds for each setting of the hyperparameter so then here we can see that it maybe peaks around K equals seven-ish so based on this example of cross-validation K equals seven is the correct value to set for this hyperparameter so then once we set that value of K equals seven we should run our model exactly once on the test set and that's the number we report for our algorithm so another interesting feature of the K nearest neighbor algorithm is this property of universal approximation so what's really interesting is that K nearest neighbor actually makes very few assumptions about the types of functions that it can represent so in fact as we take the number of training samples to infinity then K nearest neighbor can actually represent any function of course
any is here with a mathematical asterisk because anytime you make statements like this people who've taken a real analysis course will start pointing out all the corner cases where it might fail so I've tried to cover myself a little bit here but basically for all practical functions you might encounter in nature you can expect this to work quite well so as a kind of intuitive example of how this universal approximation property can work for K nearest neighbors here's an example of doing a continuous valued prediction using a nearest neighbor approach so here we maybe have a one pixel image so just a single floating-point number is our input X and now we want to predict a single floating-point number Y so then the blue curve here shows some underlying true function that we want our machine learning model to learn but we only have access to a finite number of data samples so here the black points represent this finite number of samples from this underlying true function and now the green curve represents the value of a one nearest neighbor regressor I guess in this case if we were to use this finite number of training samples to approximate this underlying true function and because it's a one nearest neighbor it sort of has a flat constant region around each of the training samples and discontinuities wherever it's exactly between two of the training samples now this example uses only five points for training so the quality of our function approximation here is quite bad but as we increase the number of training samples doubling to 10 again doubling to 20 and now going up again to 100 we can see that this one nearest neighbor regressor basically is doing a very very good job at approximating this underlying function and you can imagine we're not going to go
through a formal proof here but kind of intuitively speaking if you're able to cover the entire training space with enough data points then your nearest neighbor classifier will actually learn some arbitrarily correct approximation of the true underlying function so that seems to be really good news right maybe this nearest neighbor is the only learning algorithm we need right it can represent any function all we need to do is collect a lot of data but there's a catch here and that catch is called the curse of dimensionality so the problem is that in order to get a kind of uniform coverage of the full space of a training set we need a number of training samples which is exponential in the dimension of the underlying space so in the example from the previous slide our input space was only one dimensional so we actually don't need that many training samples to get a relatively dense coverage of a one dimensional space suppose we had a two dimensional space and we wanted to achieve a similar density in our training samples over that two dimensional space now instead of needing four examples as in the one dimensional case we would need something like 4 squared training samples and as we move to three dimensions we would now need 4 to the power of 3 training samples to again achieve a similar density in covering our space but you might think ok maybe this is ok we need a lot of data sure but the Internet's really big right maybe there's enough images out there to cover the space of all visual things we might care about and this would be very wrong to assume right let's kind of put some numbers on this if we're imagining relatively low resolution images something like CIFAR-10 images that are only 32 by 32 pixels then the number of binary images that are
32 by 32 is 2 to the power of 32 times 32 which is about 10 to the 308 now that's a pretty big number and to get a sense of just how big that number is realize that the number of elementary particles in the visible universe is about 10 to the 97 what that means is that if we put a copy of our entire visible universe into every elementary particle in our universe and if our universe was then yet another elementary particle in some larger universe then the number of elementary particles in this entire massive collection of universes would still be less than the number of 32 by 32 binary images so this is not going to work right we can never collect enough data to densely cover the entire space of images because forget about 32 by 32 we want our algorithms to work on things that are much much higher dimension and we don't care about just binary images we care about real-valued color images so this is not gonna work and this is in fact one reason why even though nearest neighbors is this very nice algorithm to think about in practice it's very rarely used on raw pixels for a couple of reasons one as we've seen it's very slow at test time and that's kind of the opposite of what we want from machine learning systems another is that it's very hungry for data and it's very difficult to get enough data to cover the space of all possible images a third reason is that these distance metrics on raw pixel values are just not very semantically meaningful so as a kind of trivial example of this if we look at this original image on the left and compute the L2 distance from the original image to each of these three perturbed images we'll find that the L2 distance is the same across all of these three pairs and this is not very intuitive right the shifted image for example to us
appears very very similar to this original image on the left so you might intuitively hope that any reasonable metric of comparing image similarity should say that the original image and the shifted image are very very similar while this boxed image or this tinted image should be much larger in distance but unfortunately these kind of raw pixel-wise L2 or L1 metrics between raw pixels of images are just not very sensible and are not able to capture these kind of perceptual or semantically meaningful distances between images so even though nearest neighbor classifiers on raw pixel values do not work very well it turns out somewhat surprisingly that one thing that does work quite well is nearest neighbors using feature vectors computed from deep convolutional neural networks so of course over the next couple of weeks we'll talk about exactly how these might be computed but as just a hint here's an example of doing nearest neighbor retrieval using not the raw pixel values but instead using feature vectors computed for these images with a deep network and what you can see is that now the nearest neighbors that we retrieve are quite semantically meaningful so here given a picture of a train we're able to retrieve trains even though they are from different viewpoints and different angles and even have trains in different positions of the image or if we look at this image of a baby we can see that we're able to retrieve other images of babies even though the raw pixel values are completely different what I think is particularly interesting here is the example on the far right of the baby row where you can see that we've retrieved a baby which is actually rotated 90 degrees so here the pixels of those two images are completely different yet somehow the features computed by this deep network were able to bridge this semantic gap to some extent and what's
interesting here is that even though using nearest neighbor on raw pixel values is not used that often in fact using nearest neighbor retrieval with convolutional neural network features is actually a very strong baseline for a lot of problems so there was this very nice paper from a few years ago where they actually performed image captioning using a nearest neighbor approach so here we've got a large data set of images and captions we retrieve nearest neighbors using features computed from deep networks and we just return the caption of the nearest neighbor from the training set and even though this is a relatively simple nearest neighbor algorithm it actually could give some pretty good captions so it can say things like a bedroom with a bed and a couch on the upper left or a cat sitting in a bathroom sink on the upper right which maybe says more about the distribution of images of cats that people upload on the Internet this sort of suggests that there's a lot of examples of cats sitting in sinks in the training set but the point here is that even though nearest neighbor is maybe not the best thing to do on raw pixels you should actually definitely consider giving it a try for more complex problems using better features so then to kind of summarize what we talked about today we talked about this overall problem of image classification we saw how it can be a building block for many other problems in computer vision and then we saw the K nearest neighbor algorithm as our kind of first example of a learning algorithm one that was simple enough for us to walk through the full flow of a learning pipeline in just this one lecture we talked a bit about hyperparameters we talked a bit about data hygiene and how to properly deal with your training and validation sets so now you have enough knowledge to go and do the first homework assignment which will be due over the weekend and then we can come back on Wednesday and start talking about our next learning algorithm linear classifiers
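To tie the lecture's data-hygiene discussion together, here is a minimal end-to-end sketch of the KNN pipeline described above: a train/validation/test split, tuning K on the validation set only, and touching the test set exactly once at the end. The data is synthetic and illustrative, not the CIFAR-10 setup from the assignment:

```python
import numpy as np

def accuracy(X_tr, y_tr, X_eval, y_eval, k):
    # Fraction of evaluation points whose k-nearest-neighbor vote is correct.
    correct = 0
    for x, t in zip(X_eval, y_eval):
        d = ((X_tr - x) ** 2).sum(axis=1)
        votes = y_tr[np.argsort(d)[:k]]
        correct += int(np.bincount(votes).argmax() == t)
    return correct / len(y_eval)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple separable toy problem

# One possible split: 80% train, 10% validation, 10% test.
X_tr, y_tr = X[:160], y[:160]
X_val, y_val = X[160:180], y[160:180]
X_te, y_te = X[180:], y[180:]

# Tune the hyperparameter K using the validation set only ...
candidate_ks = [1, 3, 5, 7, 9]
best_k = max(candidate_ks, key=lambda k: accuracy(X_tr, y_tr, X_val, y_val, k))

# ... then touch the test set exactly once with the chosen K.
test_acc = accuracy(X_tr, y_tr, X_te, y_te, best_k)
print(best_k, test_acc)
```

The same loop extends to cross-validation by repeating the validation evaluation over each fold and averaging, as in idea number four from the lecture.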
Deep Learning for Computer Vision
Lecture 14: Visualizing and Understanding
all right welcome back to lecture 14 today we're going to talk about techniques for visualizing and understanding what's going on inside convolutional neural networks so this title is actually a little bit of a misnomer we're really going to talk about two major topics today one is techniques for kind of peering inside of neural networks and understanding what it is that they've learned about the data that we train them on and the second is it turns out that a lot of the techniques that you use for visualizing and understanding neural networks can also be used for some fun applications like deep dream and style transfer so we're actually going to cover both of those topics in today's lecture so the last lecture that you heard from me we talked about attention so we talked about how attention could be this mechanism that we can add into our recurrent neural networks to let them focus on different parts of the input on different time steps and then we generalized this notion of attention to build this new fundamental component that we could insert into our neural network models called self-attention and remember that we saw that we could use self-attention to build this new neural network model called the transformer that relied entirely on attention to process its inputs and last week I hope you had a good time with our guest lectures so on Monday you heard from Luowei on vision and language and on Wednesday you heard from Professor Prakash on adversarial machine learning so hopefully that was some really exciting interesting stuff for you to hear about and especially the adversarial machine learning I think uses some of the same techniques that we're going to talk about in today's lecture so before we move on is there any sort of questions about logistics that we need to talk about before we move on to the content okay very good so at this point
in the class we've sort of learned how we can build neural network models to process lots of different types of data we know how we can use convolutional neural network models to process images we can use recurrent neural network models to process sequences and we can use transformer or self-attention models to process sequences or even sets of data but there's been this lingering question that I think a lot of people have asked me after class and even during lectures as well which is how do we tell what it is that neural networks have learned once we've trained them for some visual recognition task so once we write down this big convolutional neural network model and we train it up on our big data set what are all these intermediate features looking for inside of the neural network and hopefully if we were able to peer inside the neural network and get a sense of what kind of things different features and different layers are looking for then it might give us some more intuition about maybe how it fails or maybe why it works or why it doesn't work so today we're going to talk about various techniques that people have used to kind of peer inside the depths of these neural networks and understand what it is that's actually going on inside of them so I should preface this that it's all sort of empirical we don't really have super strong theory about exactly what's going on in there but we have a set of empirical techniques that can help us gain some intuition about the types of features that these layers are responding to so we've actually seen one technique already that we can use to get some basic idea of what's going on inside neural networks and that's visualizing the filters at the first layer of the network so if you remember way way back to the linear classifier remember we had this idea that the linear classifier was learning a set of templates one template for each class and that the class scores that were computed by
our linear classifier were simply the inner product of the template that we learned for the class and the input image and then when we generalized and moved on to neural networks and convolutional neural networks then this same notion of template matching carries forward as well so recall that for a convolutional neural network at the very first layer of the network we learn this set of filters where these filters are going to slide around the input image and at each point in the input image we take an inner product between each of the learned filters and the RGB pixels of the input image so fitting with this idea of template matching we can actually visualize the first layer of the convolutional network by simply visualizing those filters as RGB images themselves and recall that the idea here is that when we take an inner product between the filter and the image then an image which matches the filter is going to give a very strong response to that filter so by visualizing these filters it gives us some sense of what these very first features in the neural network are looking for so here on the slide we're doing exactly this and we're visualizing the convolutional filters at the very first layer for four different CNN models that were all pre-trained on the ImageNet data set for image classification and what you can see is that even though these different network architectures like AlexNet ResNets of different depths or a DenseNet are quite different as we've seen the filters that they tend to learn at the first layer are often very similar and we can see that in these filters we often learn oriented filters that look for edges of different orientations we see different filters
that look for different types of colors different types of opposing colors is a very common pattern to see in these filters and if you recall all the way back to the Hubel and Wiesel experiments on the mammalian visual system you remember that the mammalian visual system also has these cells that look for oriented edges in the visual field of what we're looking at which is somewhat similar to these filters that tend to be learned at the first layer of convolutional networks so this gives us some sense that the very first layer of convnets is looking for different types of colors and different types of oriented edges and you might wonder can we apply this exact same technique to higher layers in the convolutional network well we can but it's not very informative right so in this slide what we've done is simply visualize the weights of a three-layer convolutional network trained on CIFAR classification and for the very first layer we can visualize the weights just as we've done in the previous slide and visualize them as little chunks of RGB image but now consider the second convolutional layer right because in this little toy network the first convolutional layer has 16 filters each of spatial size 7x7 and now the second convolutional layer recall its input is the ReLU'd output of the first convolutional layer so the second layer receives 16 input channels and then has 20 convolutional filters each of those 20 convolutional filters would stretch over all of the 16 input channels so there's no really good way for us to visualize that 16 channel input image it just doesn't really make sense as an RGB image so here what we've done instead to get some really coarse sense of what's going on in those filters is for each of the 20 output filters for each of the 16 slices of that filter we can visualize its 7x7 spatial extent as a little grayscale
image and it's maybe tough to see on the slide here but you can see there is a little bit of spatial structure going on even in these higher convolutional filters that they're maybe still looking for kind of blobby patterns or edges but now this is no longer looking for edges or blobs in RGB space this is now looking for blobs or edges in the feature space that was produced by the previous convolutional layer so with this kind of visualization you can tell that maybe something is going on with these higher convolutional filters but it doesn't really give you a strong intuition for what exactly they're looking for so we're going to need to use some other techniques to try to understand what's going on at other layers of the convolutional network so one way that we often try to understand what a neural network is doing is actually skipping these intermediate convolutional layers and instead trying to understand what's happening at the very last fully connected layer so if you recall something like the AlexNet architecture terminates with this fc7 layer that has 4096 features that are then transformed with a final linear transform to give us our class scores for the 1000 classes in the ImageNet data set so one thing that we can try to do is understand what is being represented by this 4096-dimensional vector that is being computed for each image right because one way to think about what a trained AlexNet is doing is that it takes our input image and converts it into a 4096-dimensional vector and then applies a linear classifier on top of that 4096-dimensional vector so we can try to visualize what's going on by understanding what's going on inside that 4096-dimensional vector so then what we can do is take our trained AlexNet model and run it on a whole bunch of images from the test set and then record the 4096-dimensional vector that the CNN computes for each of those images in the test set
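Once those per-image feature vectors are recorded, retrieval in feature space is just nearest neighbors again. Here is a small sketch of that step using cosine similarity; the 4096-d features below are random stand-ins for the fc7 vectors a trained network would actually produce, and the near-duplicate query is fabricated for illustration:

```python
import numpy as np

def retrieve(query_feat, db_feats, k=5):
    # Cosine similarity between the query vector and every database vector.
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:k]  # indices of the k most similar images

# Stand-in for 4096-d fc7 features; in reality these would come from
# running the trained network over the test set and saving its activations.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 4096))
query = feats[42] + 0.01 * rng.normal(size=4096)  # near-duplicate of image 42
print(retrieve(query, feats)[0])  # 42
```

Swapping raw pixels for these learned features is the only change relative to the pixel-space nearest neighbors from earlier lectures; the retrieval machinery itself is identical.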
Once we've collected this sort of dataset of images and their feature vectors, we can try to visualize them using various techniques. One really interesting thing we can do is simply apply nearest neighbors on top of these 4096-dimensional vectors. If you recall, way back in assignment two and some of the early lectures we talked about nearest-neighbor retrieval on top of raw pixel values; those are the examples here on the left. We know that when you apply nearest-neighbor retrieval on raw pixel values, you tend to retrieve images that contain very similar pixels but maybe not the same class or semantic category. Here in the middle, instead, we've applied nearest-neighbor retrieval using not the raw pixels of the image but these 4096-dimensional vectors computed by AlexNet, and this gives us some sense of which images are close to each other in the feature space learned by the classifier. And we see some really interesting stuff going on. If you look at the elephant in the second row, all of the nearest neighbors to this elephant image are also images of elephants, but the pixels of the retrieved elephants can be very, very different. The query image has the elephant on the left side of the image, so it has this kind of gray blob on the left; but if you look at the third column, we retrieve an image that actually has the elephant on the right side, where the elephant is a totally different color and the background is a totally different color. That's really amazing: it means this 4096-dimensional vector computed by AlexNet is somehow ignoring a lot of the low-level pixel content of the image and maybe encoding something like elephant-ness inside the vector, so that when we query with an image of an elephant, we actually get back other images of elephants even though the raw pixel values are quite different. And, fitting with the Halloween theme if you were celebrating last week, in the second-to-bottom row you can see that this 4096-dimensional feature space computed by AlexNet also kind of learns jack-o'-lantern-ness, and it can retrieve other images of jack-o'-lanterns simply by looking for nearest neighbors in this feature space. Question: how exactly do we compare the query image with the other images, and what features do we consider? Maybe I should have made this more explicit. For the query image, we run it through the trained AlexNet classifier and extract the 4096-dimensional vector at the last layer. We also run our whole test set through the classifier and record their 4096-dimensional vectors. Then we do the nearest-neighbor retrieval using Euclidean L2 distance on those 4096-dimensional feature vectors computed by the trained classifier. Note that we're not using nearest neighbors for classification like we did previously; here we're just using it as a way to get a sense of which types of images are considered nearby in this learned feature space.
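As a minimal sketch of that retrieval procedure, here random vectors stand in for the fc7 features a trained network would actually produce; the nearest-neighbor step itself is just an L2 distance and a sort:

```python
# Sketch of nearest-neighbor retrieval in a learned feature space.
# The 4096-d "fc7" features here are random stand-ins; in practice
# they would come from running images through a trained network.
import numpy as np

def nearest_neighbors(query_feat, db_feats, k=5):
    """Return indices of the k database vectors closest to query in L2."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 4096))              # features for 100 "test images"
query = db[42] + 0.01 * rng.normal(size=4096)  # near-duplicate of image 42
idx = nearest_neighbors(query, db, k=3)
# the closest match should be image 42 itself
```

With real features, `db` would be the recorded fc7 vectors for the whole test set and `query` the vector for the query image.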
Another thing we can do to get a sense of what's going on in this feature space is to use some kind of dimensionality-reduction algorithm. A 4096-dimensional space is pretty high dimensional, and we're cursed to live in this low, three-dimensional space, so it's very difficult for our human minds to grasp a 4096-dimensional feature space. But we can apply a dimensionality-reduction algorithm to take those vectors from 4096 dimensions down to two or three, which is something our minds can actually wrap around. One simple algorithm you may have seen in other machine learning classes is principal component analysis, or PCA, which is a linear dimensionality-reduction algorithm: the idea is to preserve as much of the structure of the high-dimensional feature space as possible while projecting it linearly down to two dimensions. So you can apply such an algorithm to this 4096-dimensional space, given a sample of vectors from the training or test set, and then visualize the resulting vectors in two dimensions. It turns out there's another dimensionality-reduction algorithm that's very popular in deep learning papers, called t-SNE, for t-distributed stochastic neighbor embedding. I don't really want to go into the details here, but it's basically a nonlinear dimensionality-reduction algorithm that inputs a whole set of vectors in some high-dimensional space and computes a two-dimensional projection of those vectors that tries to maintain as much of the structure of the original high-dimensional space as possible. What this visualization is showing is the following: we've trained a CNN classifier for digit recognition that recognizes the ten MNIST digits, zero through nine, and that network has some fully connected layer right before the classifier. We run every image in the test set through the classifier to get a feature vector (I don't think it's 4096-dimensional in this example, but some high-dimensional vector), then apply the t-SNE dimensionality-reduction algorithm to convert all of those test-set vectors from the high-dimensional embedding space down to two dimensions. That projects every point in the test set to some position in 2D space, and then what we do is take the image itself and plop it down at the position computed by this dimensionality-reduction algorithm. I don't think you can see it here, but if you download the slides and zoom in really big on this slide, you can see that each of these points is actually a little digit, where the position of the digit in 2D space is a visualization of the overall structure of the feature space learned by this CNN classifier. What's amazing is that we were doing digit classification with ten digits, and you can see that this feature space tends to cluster into ten regions that very roughly correspond to the ten digits of the classification problem. So again, this gives us some idea that the feature space learned by this network is somehow encoding the identity of the class, and not the raw pixel values.
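Of the two reduction methods mentioned, t-SNE needs an external library, but the PCA option can be sketched in a few lines via the SVD; this is a stand-in for the 2-D projection step, with random vectors in place of real network features:

```python
# Minimal PCA to 2-D via SVD, as a simple stand-in for the t-SNE step:
# project high-dimensional feature vectors down to two coordinates so
# each test-set image can be plotted at its projected position.
import numpy as np

def pca_2d(feats):
    """Project (N, D) feature vectors to (N, 2) along the top-2 principal axes."""
    centered = feats - feats.mean(axis=0)
    # rows of Vt are principal directions, sorted by singular value
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:2].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))   # stand-in feature vectors
proj = pca_2d(feats)                 # each row is now an (x, y) plot position
```

Swapping in t-SNE (e.g. scikit-learn's implementation) would give the nonlinear version used for the figures in the lecture.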
We can run a similar algorithm not just for this digit-recognition problem but for ImageNet classification as well. Here we've done something very similar: we've trained an AlexNet on ImageNet, extracted these 4096-dimensional vectors for each image in the test set, and applied a dimensionality-reduction algorithm to go from 4096 dimensions down to two while preserving as much structure as possible. That gives us every point in the test set projected onto 2D space, and then we visualize the images at those points. There's a really high-resolution version of this image online that you can check out, but what's really cool is that if you zoom in, you can see that different regions of the space correspond to different semantic categories. I think if you zoom way in at the lower-left corner of this space you see different flowers, and they gradually transition over into different dogs; over in the upper-right corner things are kind of blue and white, and I think there are different boats or sky images in there. It's really fun to zoom into this visualization of the space the network has learned, kind of walk around in there, and see which types of images are considered close to each other in this learned feature space. So this idea of extracting vectors from the last layer of the network and applying different types of operations to those vectors gives us some way to understand the feature space learned at the last layer of this neural network. Another technique we can use to understand what these things are looking for is to visualize not the weights of the network but the intermediate convolutional activations. Remember that as we go through a convolutional network, at each layer we compute some activation volume. For example, in an AlexNet, after the fifth convolutional layer we compute an activation volume of spatial size 13 by 13 with 128 channels, which means there were 128 filters in that fifth convolutional layer, and that gives us a 13-by-13 grid over which we've computed the value of each of those filters. One thing we can do is visualize each 13-by-13 slice of that activation volume as a grayscale image. These activations have gone through a ReLU, so many of them will be exactly zero, but where the feature maps are nonzero we can align them with the original input image to get some sense of which features in the input these different convolutional filters might be responding to. In this visualization there are 128 slices of the activation volume, each corresponding to one convolutional filter; we've selected the output of one of these filters, in green, and visualized it over on the left underneath the original input image. You can see that the regions in the 13-by-13 grid that were turned on, or activated, by this filter actually line up with the human face in the input image. That gives us some sense that maybe this one filter inside this one layer of the network has learned to respond to human faces, or maybe human skin tones; it's hard to say exactly. But you can visualize these slices of activation at different layers for different images, and that can give us some intuition about what these different filters might be responding to. Question: why are the majority of these visualizations black? I think that's due to the ReLU nonlinearity. Remember that with ReLU, anything negative gets set to zero and anything positive is left alone, so there are actually a lot of zeros in here, which means the nonzero stuff is pretty important and pretty interesting. It could also be an artifact of the visualization, because the network outputs arbitrary real numbers between zero and plus infinity, and to visualize them we need to squash them down into 0 to 255 in some way; the way you choose to squash those values might also affect the overall brightness of the images.
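That slice-to-grayscale step can be sketched directly; here a random post-ReLU volume stands in for real conv5 activations, and the rescaling choice discussed above is the `a / peak` line:

```python
# Turn one channel of a (C, H, W) activation volume into a grayscale
# image: clamp at zero (post-ReLU) and rescale to 0..255 for display.
import numpy as np

def channel_to_grayscale(acts, c):
    """acts: (C, H, W) post-ReLU activations; returns a uint8 (H, W) image."""
    a = np.maximum(acts[c], 0.0)
    peak = a.max()
    if peak > 0:
        a = a / peak   # this scaling choice affects apparent brightness
    return (a * 255).astype(np.uint8)

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(128, 13, 13)), 0)  # fake conv5 volume
img = channel_to_grayscale(acts, 0)
```

Because ReLU zeros out roughly half the values, most pixels of `img` end up black, which matches what the visualizations in the slides look like.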
Fitting along with this idea, another thing we can do is not just look at random images and random filters, but try to find the patches that maximally activate different filters inside the network. On the previous slide we had the intuition that one filter inside one layer of the network might be responding to faces. If we want to test that hypothesis, we can take our whole test set, run all of those images through the network, and record for each image the values of that one chosen filter in that one chosen layer. Because this is a convolutional network, each element in the grid of activations corresponds to some finite-sized patch of the input image: a three-by-three convolution depends on a three-by-three chunk of the input, stacking two three-by-three convolutions depends on a five-by-five chunk, and so on. So for our chosen filter in our chosen layer, we run all of the test-set images through the network, find the patches of those images that give the highest responses for that filter, and record those patches. That's what we've done in these visualizations on the right. In the top grid, each row corresponds to a different filter from one chosen layer of a network that I think was trained on ImageNet, and each element in the row is a patch of an image that gave a very, very high response to that chosen filter. These are the maximally activating patches, and by visualizing them we can get some sense of what the chosen neuron is looking for. For example, in the very top row, all of these things look like the snouts of dogs, so maybe there's some dog-snout-detecting neuron somewhere deep in that network. Or if you look at the fourth row of the top grid, all of the maximally activating patches are chunks of English text, with different foreground and background colors and different orientations; somehow it looks like this one filter deep inside the network is looking for text in different colors and orientations.
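A sketch of that patch-finding procedure, under the assumption that we already have the chosen filter's activation maps for a batch of images and know the layer's receptive-field geometry (the `rf` and `stride` values here are illustrative, not taken from any particular network):

```python
# Find maximally activating patches for one chosen filter.
# acts: (N, H, W) values of that filter over N images; each grid cell
# is assumed to see an rf x rf input patch with the given stride.
import numpy as np

def top_patches(images, acts, rf=11, stride=4, k=3):
    flat = acts.reshape(acts.shape[0], -1)
    best_pos = flat.argmax(axis=1)              # strongest cell per image
    order = np.argsort(flat.max(axis=1))[::-1][:k]  # best images overall
    patches = []
    for n in order:
        y, x = divmod(best_pos[n], acts.shape[2])
        y0, x0 = y * stride, x * stride         # map grid cell -> input patch
        patches.append(images[n, y0:y0 + rf, x0:x0 + rf])
    return patches

rng = np.random.default_rng(0)
imgs = rng.normal(size=(5, 55, 55))
acts = rng.random(size=(5, 12, 12))
acts[2, 0, 0] = 10.0                # pretend image 2 responds strongest
patches = top_patches(imgs, acts)
```

A real implementation would compute `rf` and `stride` from the actual layer configuration rather than hard-coding them.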
In the bottom grid here, we've run the exact same experiment for a deeper layer of the network. Because it's a deeper layer, each neuron value depends on a larger patch of the input image, so the maximally activating patches for the deeper layer are bigger. If you look at the second row of the bottom grid, all of the maximally activating patches are human faces, which is pretty interesting: these faces have different skin colors and different positions in the patch, but somehow it looks like this one filter deep inside the network has learned to identify human faces as a result of its training. So maximally activating patches give us another technique for understanding what is being recognized by the intermediate layers of the network. Another thing we can try, to understand what these networks are computing, is to ask which pixels of the input image actually matter for the classification decision. Suppose, here on the left, we've got this image of an elephant, and suppose it's correctly classified as an elephant by the trained network. What we want to know is which pixels of this input image were actually important for the network's decision to classify it as an elephant. What we can do is take our elephant image, mask out some of the pixels by replacing them with a gray square, or a square of uniform color equal to the ImageNet mean, something like that, and pass this masked image through the network again; that computes some updated score for what the network wants to classify this masked image as. Then we repeat this procedure, shifting the square mask to a different position and running each of these masked images through the network. If we repeat this for every position of the mask and compute the probability of elephant each time, we can map out a saliency map that tells us which pixels matter most for classification. In the image on the right, for every position we've imagined putting the mask there, run the masked image through, and colored the heat map by the resulting probability of elephant. You can see that if we position the mask over the elephant itself, the predicted probability of elephant drops a lot, which is intuitive: it means the network is actually looking at the right part of the image when making its classification decision, which is encouraging. And we've repeated this experiment for two other images on the slide. The top one is a schooner; does anyone know what a schooner is? I guess it's a type of boat, I don't really know, but apparently this is that type of boat. If we mask out the pixels corresponding to the boat, the network's confidence in the schooner class goes down a lot, but if we mask out the sky pixels, the network doesn't really care and is still able to confidently classify this as a schooner. So this gives us another way to understand what neural networks are doing: rather than looking at what activates the intermediate features, we mutate the input image and see which parts of it the network is actually using.
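The mask-sliding loop just described can be written down concisely; here `score_fn` is a toy stand-in for the trained classifier (a real one would return, say, the elephant probability for the masked image):

```python
# Occlusion-style saliency sketch: slide a uniform square over the
# image, re-score each masked copy, and record the class score at
# each mask position. A score drop means those pixels mattered.
import numpy as np

def occlusion_map(img, score_fn, mask_size=3, fill=0.5):
    H, W = img.shape
    heat = np.zeros((H - mask_size + 1, W - mask_size + 1))
    for y in range(heat.shape[0]):
        for x in range(heat.shape[1]):
            masked = img.copy()
            masked[y:y + mask_size, x:x + mask_size] = fill
            heat[y, x] = score_fn(masked)
    return heat

# toy "classifier" whose score is just the top-left pixel's value,
# so only masks covering that pixel should lower the score
score_fn = lambda im: im[0, 0]
img = np.ones((8, 8))
heat = occlusion_map(img, score_fn)
```

This also makes the cost concrete: one full forward pass per mask position, which is exactly why the gradient-based alternative below is attractive.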
And I think this is kind of amazing; it didn't have to come out this way. Neural networks don't actually know which part of the image is the object and which is the background, so it could have been the case that, say, all schooners happen to occur in water of a particular color, and the network classifies schooner just by looking at that color of water rather than at the boat itself. If your dataset has ways that networks can cheat, looking at the wrong part of the image but still getting the right answer, they will tend to learn to cheat quite easily. So this type of visualization lets us see whether or not they're actually cheating, whether they're looking at the part of the image we think they should be looking at when making their decisions. Was there a question over here? The question is whether this kind of thinking was part of the thought process that led to adversarial examples. I think actually yes, it was. To understand why: this technique of computing saliency via masking is pretty computationally expensive, because we have to run a forward pass for each possible position of the mask. But there's another way to compute saliency maps, via backpropagation. We can take our input image, this adorable dog, run it through the network, compute the probability of dog, and then during backpropagation compute the gradient of the dog score with respect to the pixels of the input image. That tells us, for every pixel in the input image, how much changing that pixel a little bit would affect the dog classification score at the end of the network. And this is exactly the kind of image gradient you could use to generate adversarial examples, so I think the thought processes are quite connected. In this example (I don't know if you can see it with the contrast in the room, but there's a kind of ghostly outline at the bottom) we've visualized this gradient image, which shows for each pixel the gradient of the dog score with respect to that pixel. You can see it maps out a ghostly outline of the dog, which again gives us some sense that the pixels that would change the classification score the most are the pixels inside the dog, and that changing pixels outside the dog would maybe not change the score so much. So this again gives us some idea that the network is looking at the right part of the image. We can repeat this not just for this dog image but for different images, and get these saliency maps showing which pixels matter when the network classifies each of them. I should point out that I think these examples are from the paper that introduced this technique, and most real examples don't come out this nice. You may have seen that if you've done the homework so far, since we asked you to implement this, and you may have been surprised that your results were not as beautiful as these results from the paper. That probably does not mean you have a bug; it probably means the authors of the paper were somewhat judicious in the examples they selected to put in the paper. I think that's something to be aware of, but it's still a pretty cool technique for getting some intuition about what neural networks are learning.
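To make the gradient-saliency idea concrete without a real network, this sketch uses finite differences in place of backprop (backprop would give the same per-pixel derivative in one backward pass instead of one evaluation per pixel); `score_fn` is again a toy stand-in for the classifier:

```python
# Gradient saliency sketch: d(score)/d(pixel) for every pixel, here
# approximated by finite differences on a stand-in score function.
import numpy as np

def grad_saliency(img, score_fn, eps=1e-4):
    base = score_fn(img)
    grad = np.zeros_like(img)
    for idx in np.ndindex(img.shape):
        bumped = img.copy()
        bumped[idx] += eps               # nudge one pixel
        grad[idx] = (score_fn(bumped) - base) / eps
    return np.abs(grad)                  # saliency = gradient magnitude

# toy "dog score": the sum of pixels in the top-left 4x4 region,
# so only those pixels should light up in the saliency map
score_fn = lambda im: im[:4, :4].sum()
img = np.zeros((8, 8))
sal = grad_saliency(img, score_fn)
```

The resulting map is nonzero exactly on the pixels the score depends on, which is the "ghostly outline" effect seen for the dog image.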
Actually, here's kind of an aside. Supposing this idea of saliency maps actually works, one cute thing we could do is segment out the object in the image without any segmentation supervision. Say we feed this image of a grasshopper to the network and it classifies it as grasshopper; now what we want to do is carve out the exact outline, the pixels of the grasshopper, from the input image. You can try to do that using these saliency maps together with some image-processing techniques on top of them, and that lets us take a network that was trained for image classification and use it in a very different way, to actually segment out the regions of the input image that correspond to the object category. Although, again, these are really nicely chosen examples, and this technique maybe doesn't work as well in general as you might want it to. OK, so this gave us the idea of using gradient information to compute, for each pixel in the image, how much changing that pixel affects the final output score, and this turns out to be a really powerful idea for understanding what neural networks are doing. Now we can take this to another level and ask not just which pixels of the image affect the final class score; we want to return to the question of what the intermediate features inside the network are looking for, and we can use gradient information to help answer this question as well. What we can do is pick some layer of the network and some filter inside that layer, take a training or test image, run it through the network, and then backpropagate and ask which pixels of the input image affect not the final class score but this intermediate neuron value instead: which pixels, if we were to change them, would cause this intermediate neuron value to go up or down a lot? Question: are these saliency maps computed before or after training? Definitely after training. I think if you try to compute these saliency maps before training, you'll get some pretty bad garbage, although surprisingly I think they will not be totally random even before training, just because convolution has a pretty strong regularizing effect on the functions that get computed. But I don't think you'll get anything so beautiful if you do this visualization before training; it'll probably be quite random. So again we can apply the same idea of using gradient information to understand what these intermediate features are looking for: we take our input image of the elephant, run it through to some intermediate layer, and then backprop not from the final score but from one of these intermediate neuron values, to say which pixels of the input image affect that intermediate neuron the most. It turns out that for these intermediate features, normal backpropagation tends to give kind of ugly results, so instead people do this kind of horrible ugly hack called guided backpropagation, which I don't fully understand why it works, but it tends to make the images look prettier, so we do it. The idea is this: normally, when you forward-propagate through a ReLU, any elements below zero are set to zero; and when you backpropagate through a ReLU, any upstream gradients
at positions where the input was negative are also set to zero. You take the same mask you used on the forward pass and apply it to your upstream gradients; that's normal backpropagation through a ReLU. Now, in guided backpropagation, we additionally mask out negative upstream gradients when we backpropagate through a ReLU. So when we receive our upstream gradients, we zero out the ones corresponding to negative inputs of the forward activation, and we also zero out all upstream gradients that are themselves negative: we just eliminate all negative upstream gradients by adding this extra masking in the backward pass. Don't ask me why, but images tend to come out nicer when you use this version of backpropagation for visualization. Now when we apply guided backpropagation to images, it lets us pick out the pixels of the image that affect a chosen neuron's value. Here on the left we're showing the same visualization we saw before, the maximally activating test-set patches for different neurons inside the network, and now on the right we've applied guided backpropagation to show which pixels of these patches actually affect the value of the neuron. Remember, for this top row our human intuition from looking at the patches was that these neurons were looking for dog noses or eyes; with guided backpropagation we see that indeed the pixels corresponding to the interiors of those eyes are the ones that matter for the neuron value. Similarly, for the text neuron at the bottom, it is indeed the pixels of the text that are driving that neuron's value.
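The two backward rules just described differ by a single extra mask, which a small sketch makes explicit (with hand-picked inputs rather than real network activations):

```python
# Normal vs. guided backpropagation through a ReLU.
# fwd_input: the values that entered the ReLU on the forward pass.
# upstream:  the gradients arriving from the layer above.
import numpy as np

def relu_backward(upstream, fwd_input):
    # normal rule: kill gradients where the forward input was negative
    return upstream * (fwd_input > 0)

def guided_relu_backward(upstream, fwd_input):
    # guided rule: additionally kill negative upstream gradients
    return upstream * (fwd_input > 0) * (upstream > 0)

x  = np.array([-1.0,  2.0, 3.0,  4.0])
dy = np.array([ 5.0, -6.0, 7.0, -8.0])
normal = relu_backward(dy, x)         # [0., -6., 7., -8.]
guided = guided_relu_backward(dy, x)  # [0.,  0., 7.,  0.]
```

In a framework like PyTorch this would be implemented as a custom autograd function swapped in for the standard ReLU during the visualization pass.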
That gives us some additional intuition about what these neurons are looking for, and we can apply this to the other, deeper layers and get similar intuitions; our human-face neuron from before is indeed looking at the pixels of the human faces, which is encouraging to see. OK, so far what we've done is use gradient information to pick out pixels of test-set images that affect neuron values or class scores, but basically this restricts us to patches and images that actually appear in the test set. Now we can take this idea a step further. In guided backpropagation we were taking a test-set image and asking which of its pixels were responsible for a neuron's value; instead of restricting ourselves to test-set images, we can ask, among all possible images, which image would maximally activate one of our neurons. What we want to do is generate a new synthetic image that causes a chosen neuron value to be maximized, and we can do this using gradient ascent on the pixels of the image itself. Here I* is the image that will maximally activate some chosen neuron inside the network: f(I) is the value of the chosen neuron when we run the network on image I, and R(I) is some regularizer that forces the image we're generating to be somehow natural. This looks very similar to the equation we saw when training the weights of a neural network, but now, rather than training the weights, we want to train the image that will cause a
trained network to maximally activate one of its intermediate features, and we can find this image using our favorite gradient-ascent algorithm, where we just change the image a little bit at a time using gradient information. What this looks like: to generate an image via gradient ascent, we initialize our image to zeros, or maybe random noise or something like that; then we run the image through the network, extract the value of our chosen neuron, backpropagate to find how the pixels of the image would change that neuron's value, make a small gradient step on the image itself, and repeat this over and over again until we've generated a synthetic image that causes the chosen neuron value to be high. Now, one thing turns out to be really important: you saw in the last lecture that this looks very similar to the procedure we used for generating adversarial examples, and it turns out that if you run this procedure on its own, you end up generating not interesting images but adversarial examples. To get good results out of this, it's important to choose some regularizer that constrains the image we're generating in some way and forces it to look natural. A really stupid way to force the image to look natural is to just constrain the L2 norm of the overall image: we look for images that maximize the class score for one of the ImageNet category labels while also having low L2 norm for the entire generated image. If we do that, we end up generating images that look something like this. Question: is all of this done on fully trained networks? Yes, definitely.
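The ascent loop just described can be sketched with a toy stand-in for the network: a linear score function whose gradient we can write by hand, where a real implementation would backprop through the frozen network instead:

```python
# Gradient-ascent image synthesis sketch: repeatedly nudge the image
# to raise a chosen score while an L2 regularizer keeps it "natural".
# The trained network is stood in for by a linear score f(I) = w . I.
import numpy as np

def synthesize(w, steps=500, lr=0.1, reg=0.05):
    img = np.zeros_like(w)            # initialize to zeros (or noise)
    for _ in range(steps):
        grad = w - 2 * reg * img      # d/dI [ w.I - reg * ||I||^2 ]
        img += lr * grad              # ascent step on the image itself
    return img

rng = np.random.default_rng(0)
w = rng.normal(size=16)
img = synthesize(w)
# optimum of w.I - reg*||I||^2 is I = w / (2*reg)
```

Without the `reg` term the image would grow without bound, which is the toy analogue of the procedure collapsing into adversarial noise.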
fix the weights of that trained network, and we want to learn an image that causes the network to respond in a chosen way. We fix the weights of the network, but we can still backpropagate through it to compute gradients on the image. If we run this procedure, we can generate images, invented from scratch, that the network recognizes as our chosen class. If you look at the example on the upper left, we've generated a novel image that should be recognized confidently as "dumbbell," and you can see there actually are some dumbbell shapes going on in it, which is pretty exciting. Or look at the upper-right image: we're trying to generate "dalmatian," and you can see these black-and-white spotted patterns popping out emergently, generating from scratch the dalmatian pattern, which is pretty exciting. This gives us some idea that when the network is looking for a dalmatian, one of the key features it uses is the black-and-white pattern that dalmatians have on their coats. So this gives us a way to visualize or understand which features in the image the neural network is using to classify it. Here are some more examples; I like the ostrich example here. You can see it's inventing ostriches all over the image, giving this coarse overall ostrich shape. But at this point we've used a pretty stupid regularizer, and these images don't look very natural: we get some coarse shapes coming out of the image, but the results don't look super realistic. So what we can do instead is play around with better regularizers. There's a whole set of
papers where people try to invent better natural-image regularizers that force the generated images to look more natural, and these regularizers can get kind of hacky. One flavor of regularizer: rather than just penalizing the L2 norm, maybe we want to make sure the image is smooth, so we apply some blurring and clipping of the pixel values inside the optimization loop. The details are not super important; the idea is that if you hack around with the image regularizer, you can generate images that look more realistic. If we apply this more sophisticated image regularizer to the same idea of gradient ascent, we generate images that look maybe even more realistic. Now we generate these flamingos: we can see pink, flamingo-like shapes emerging in a generated image that the network is highly confident contains flamingos, and we can look at these billiard tables and see pool tables emerging out of nowhere. And we can apply this idea of gradient ascent plus image regularizers not just to the class scores at the end of the network; we can do the same thing to generate synthetic images that maximally activate interior neurons inside the network as well. Here you can see that maybe in layer 5 there are some neurons that look for spider-like patterns or eye-like patterns, and maybe down in layer 3 there are some neurons that look for green blobs or red blobs in the input image. So this gives us another technique we can use to understand what these intermediate features are looking for. Like I said, there's a whole cottage industry of
papers that try to invent better and better natural-image regularizers to make these types of visualizations look better. There's another kind of regularizer I don't want to go into here that makes the images look better; one of my favorites is this idea of multi-faceted visualization, which generates images that look even more realistic using a more sophisticated natural-image regularizer. People go really crazy on this. Here we're using a really, really fancy natural-image regularizer that's actually based on a generative adversarial network, which we'll talk about in a few lectures, and now these are images invented from scratch that should be highly recognized as the target category and should also look natural or realistic. There's this toaster image emerging from scratch, a synthetic image generated via gradient ascent that the network recognizes as a toaster. Now, when I read papers like this, I always feel like there's a weird tension in them, because the original point of this research direction was to understand what neural networks were actually looking for, and I feel like the more intense the image regularizer, the more we lead ourselves astray: the smarter a regularizer we put on the thing, the further we get from what the network is actually looking for. When I look at images like this, it's really hard for me to say how much of this is what the network is actually looking for versus how much we've just used a really smart natural-image regularizer to force ourselves to generate natural images. So for me personally, even though these images maybe look more beautiful, I kind of like the results that use very simple image regularizers; I think that gives us a more pure sense of
what the raw features are that these networks are looking for in images. So maybe don't get too led astray by the examples that look super beautiful. Okay, I think you already talked about adversarial examples. What happens if you have no regularizer whatsoever? Then, of course, you end up with adversarial examples. You heard about this in depth from our guest lecture last week: we can start from an image of one class, like an African elephant, run gradient ascent on the class score of another class, and imperceptibly change the pixels of the image to cause it to be confidently classified as that other class. Here we've taken this image of an African elephant, changed it just a little bit, and now it's classified very confidently as a koala; or we've taken this schooner image (even though we still don't know what a schooner is) and caused it to be classified very confidently as an iPod, which maybe we do know what it is, and it's not this. And these were very tiny, imperceptible differences that were used to generate these adversarial examples. Okay, so another thing we can do using image gradients to understand what's going on in networks is this idea of feature inversion. Here, given a test image, we extract its feature representation at some layer of the network and set that feature representation aside. Now we want to use gradient descent to generate a new image that has the same feature representation as the one we set aside. So we're trying to invert the feature representation that was computed by the neural network. There's kind of a lot of math on the slide,
but it's very simple; it's the same idea of gradient descent we've been looking at before. Our loss function is now whether the set-aside features of the original image and the features of our generated image are the same in an L2 sense, plus some image regularizer. This gives us some sense of what kind of information is being preserved or thrown away at different layers of the neural network. Here is a visualization of this idea, using feature inversion, or feature reconstruction, from different layers of a trained VGG network. On the left we show the original input image y. In this first column, relu2_2, what we've done is take our image y, extract the relu2_2 features from the VGG network, and then run feature inversion to synthesize an image from scratch that has the same relu2_2 features as y. When we do this procedure, the images we generate by inverting relu2_2 look pretty much identical to the original image y, which means that basically all of the image information is still captured by these low-level relu2_2 features. But as we go up in the network, maybe to relu4_3, and try to invert those features, we see that some information has been lost: the overall shape or structure of the images is still preserved, but the low-level texture and color information has been lost once we reach relu4_3. And if we go all the way up to relu5_3, we can see that basically all of the local color and texture information has been lost, but the global structure of the elephant and the fruit is still visible. That gives us some sense that the low layers of the neural network preserve most of the information about the image, and
the further we go up the network, the more information is thrown away about the raw input image. Okay, now that we've used all these gradient techniques to peer into trained networks, it turns out we can use a lot of the same techniques to have some fun. One idea is this project from Google from a couple of years ago called DeepDream. The idea in DeepDream is that we want to take an existing image and amplify whatever features were present in it. So we take our original input image, run it through the CNN, extract features at some layer of the network, set the gradient at that layer equal to the activation values themselves, and then backpropagate and update the image. What this does is: whatever features were recognized by the neural network, we change the image to cause those features to be activated even more strongly. This is also equivalent to maximizing the L2 norm of the features at that layer. The code for this is relatively simple, but we also need to use a couple of natural-image regularizers to get good, more beautiful outputs. The way this works: if you start with this image of a beautiful sky and run DeepDream, whatever features are recognized in this image get amplified, and the image is modified to strengthen those recognized features. After we run DeepDream on some layer of the network, we get an output like this. Here we've run DeepDream on a relatively low layer of the network, and that layer was maybe recognizing different types of
edge information in the clouds, and for each of those edges a nice, swirly, artistic edge effect gets applied to the input image. That's what happens if we apply DeepDream at a relatively low layer of the neural network, and we already know that low layers of neural networks are looking for edge information, while higher layers look for more abstract concepts in images. If we apply DeepDream to a higher layer in the network, we get an output like this: now whole structures emerge from the input image, as if the network is looking up at the clouds and inventing things that it sees in them. If you look at the patterns, there are actually some common ones that show up a lot in these DeepDream images: there's this kind of admiral-dog over on the left, this pig-snail, this camel-bird, and this one on the right is a dog-fish that has the head of a dog and the tail of a fish. If you look at a lot of DeepDream images, you'll see these patterns show up over and over again, lots of mutant dogs and mutant birds, all kinds of crazy psychedelic animals. And if you run DeepDream for even longer, you start to deviate further and further from the original input image. If you run it for a very long time, do it multi-scale, and add some other tricks, you can generate these very crazy images from scratch, the network kind of dreaming for itself images that cause its intermediate features to be highly activated. I think this is an example of a DeepDream image that was
generated using a network that was trained on ImageNet, and we know that ImageNet is a dataset of objects. It turns out that if we train an image classifier not on ImageNet but on a different dataset, with different types of scenes, we get DeepDream images that look very different. These are images generated from scratch with the DeepDream algorithm using a CNN classifier trained on images of different types of places, different scene categories, and you can see these fantastical structures just emerging from scratch. This was really amazing when it first came out; when some folks at Google published this a couple of years ago, everyone said, holy cow, this is amazing, we can use neural networks to generate these crazy pieces of artwork, and that was very exciting. It turns out there are even more interesting ways we can use neural networks to generate cool images, but to show you another example, we need to take a detour into some non-neural-network ideas. There's a very classical task in computer graphics called texture synthesis: we input a small image patch containing some regular texture, and we want to generate an output image which is maybe much, much larger but still matches the local texture statistics of the input. There are classical algorithms for doing this; it turns out we can use nearest-neighbor-type algorithms to do texture synthesis with pretty decent results. I don't want to go into the details of those algorithms here, but these are papers from 1999 and 2000, so there are no neural networks here; this is just nearest-neighbor retrieval on raw pixel values.
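To make this classical nearest-neighbor idea concrete, here is a toy sketch in Python. To be clear, this is a drastically simplified, hypothetical version of the Efros-Leung style of algorithm (the real method samples randomly among several good matches and uses Gaussian-weighted windows); the function name and parameters here are my own, and it is only practical for tiny inputs.

```python
import numpy as np

def synthesize(source, out_h, out_w, k=2, seed=0):
    """Toy patch-based texture synthesis (hypothetical simplification).

    Grows the output in raster order; each new pixel copies the center of the
    source neighborhood whose already-filled (causal) pixels match best in an
    L2 sense. No neural networks, just raw pixel comparisons.
    """
    rng = np.random.default_rng(seed)
    h, w = source.shape
    out = np.zeros((out_h, out_w), dtype=source.dtype)
    # seed the top-left corner with a random k-by-k source patch
    si0, sj0 = rng.integers(0, h - k), rng.integers(0, w - k)
    out[:k, :k] = source[si0:si0 + k, sj0:sj0 + k]
    filled = np.zeros((out_h, out_w), dtype=bool)
    filled[:k, :k] = True
    for i in range(out_h):
        for j in range(out_w):
            if filled[i, j]:
                continue
            best, best_cost = None, np.inf
            # compare the causal window around (i, j) against every
            # interior source position, keeping the best match
            for si in range(k, h - k):
                for sj in range(k, w - k):
                    cost = 0.0
                    for di in range(-k, k + 1):
                        for dj in range(-k, k + 1):
                            ii, jj = i + di, j + dj
                            if 0 <= ii < out_h and 0 <= jj < out_w and filled[ii, jj]:
                                d = float(out[ii, jj]) - float(source[si + di, sj + dj])
                                cost += d * d
                    if cost < best_cost:
                        best_cost, best = cost, source[si, sj]
            out[i, j] = best
            filled[i, j] = True
    return out
```

On a simple striped or brick-like source this reproduces the local statistics reasonably well, which matches the lecture's point that simple textures do not need neural networks at all.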
But it turns out we can do pretty good texture synthesis using no neural networks, just pixel values, as long as our textures are simple. Using these traditional graphics algorithms, we can do texture synthesis on these green scales, or on these bricks, or here on printed text, and generate new images that match the spatial texture of the input but are much bigger and more interesting. But of course, because this is a class about neural networks, we need to ask: how can we solve this texture synthesis problem using neural networks? There was a really nice paper a few years ago that showed how we can use the same idea of gradient ascent on pixels through a trained network to solve texture synthesis as well. The idea is similar to feature inversion: remember that in feature inversion we had some input image and wanted to generate a novel image that matched its features. Now we want to use a similar idea to generate textures. But what is a texture? It's something where we don't want to match the exact pixels of the input image; we want to match the local texture features of the input while not caring about the spatial structure. So we use a construction called the Gram matrix, which is a way to capture local texture information from CNNs while throwing away all the spatial information. How does this work? We choose some layer of the neural network, run our target texture image through the network, and extract the feature volume, height by width by number of channels, the three-dimensional feature volume computed by some convolutional layer of the network. And
now what we do is take two of these C-dimensional feature vectors from two different points in space and compute their outer product, which is a C-by-C matrix giving all the element-wise products between these two feature vectors taken from two places in the input image. Then we repeat the same construction across all possible pairs of positions: for each pair of points in the feature volume we compute this outer product, and we average over all pairs. This gives us a C-by-C matrix, actually the unnormalized covariance between the feature vectors computed by the network. This C-by-C matrix is called the Gram matrix. It has basically thrown away all of the spatial information in the feature representation, but what it keeps is which pairs of the C filters at that layer tend to activate together and which do not; it tells us which features are correlated with each other in the input image. (By the way, we can compute the Gram matrix efficiently with some reshaping and a matrix multiplication.) So the Gram matrix is a descriptor telling us which features tend to co-occur at different positions in the image, while throwing away all of the spatial structure of the input. And the idea is that we can perform texture synthesis with neural networks by matching the Gram matrices of a target image via gradient ascent. The way this works is that we can do neural texture synthesis using
this very simple gradient ascent algorithm to match Gram matrices. Step one: we pretrain a CNN on ImageNet or some other dataset. Step two: we take some target texture image that we'd like to synthesize a new image from, this image of rocks in the example, run it through the network, and compute the Gram matrices at each layer. Then we initialize our generated image from scratch, run it through the same network, and again compute the Gram matrices at each layer for the generated image. Our loss function here is the weighted sum of the Euclidean distances between the Gram matrices of the original texture image and those of the image we're synthesizing. Basically, we're comparing all the Gram matrices from these two images and using that to compute a scalar loss; once we have that scalar loss, we can backpropagate into the pixels of the image we're generating, get the gradient of the loss with respect to that image, make a gradient step on it, and repeat over and over. Hopefully we end up with a generated image that has the same Gram matrices as the original input image we started from. And if we do this, it actually works pretty well: this idea of neural texture synthesis by matching Gram matrices lets us synthesize novel images that match the texture of an input image but look very different in spatial structure. Here in the top row we're showing the different images we're starting from, and underneath we're showing generated
images by matching Gram matrices at different layers of our pretrained network. You can see that if we generate images matching the Gram matrices at a very low layer, we tend to capture very low-level patches of color; and as we additionally match Gram matrices from higher and higher layers of the network, we generate novel images that match the overall texture of the input but have very different spatial structures. This was a pretty exciting result that got a lot of people excited: we can generate novel images that look quite realistic and synthesize textures from a pretty broad variety of inputs. And then some really brilliant person had the idea: what if we do texture synthesis but set the texture image to be a piece of artwork? And we'll actually do two things jointly; we want to combine this idea of feature reconstruction with this idea of texture synthesis. This is a beautiful match, because now we want to synthesize an image that matches the features of one image and also matches the Gram matrices of another. We saw from the feature reconstruction example that when you invert features from relatively high layers of a network, you keep the overall coarse spatial structure of the input image but ignore the texture and color information; and we saw that Gram matrix matching throws away all the spatial structure but preserves a lot of the texture and color information. So when you combine these two ideas, you get something very magical. Here's what we want to do; this algorithm is called neural style transfer. We're going to generate an image that matches the features
of one image, called the content image, and matches the Gram matrices of another image, called the style image. We can use gradient ascent to match both of these jointly and generate a novel image that matches the features of one and the Gram matrices of the other, and this lets us generate novel images in the artistic style of an input style image. The way it works is gradient ascent through this pretrained network, and what's interesting is that, because this is a gradient ascent procedure, we start the image from some random initialization and then perform gradient steps over time that gradually change the image bit by bit, generating this novel image that matches the features of one input and the Gram matrices of the other. This visualization shows the progress of gradient ascent over many iterations as we converge to the final output. This idea of neural style transfer is something I've actually worked on quite a lot; I have an implementation of this algorithm from a couple of years ago that actually predates PyTorch, so I did it in Lua. That's what we did back in the day. These are some example outputs of this neural style transfer algorithm, where we take this content image of a street scene and re-render it in the artistic style of these different artistic images. There are a lot of knobs you can tune inside this style transfer algorithm. Because we're doing a joint reconstruction, trying to match the features of one image and the Gram matrices of the other, you can trade off how much you want to reconstruct features versus Gram matrices using a scalar weighting term. On the left, when you put a lot of weight
on the feature reconstruction term, you tend to reconstruct the content image very well; and on the right, if you set the Gram matrix reconstruction term very high, it throws away all the spatial structure and gives us just garbage. (By the way, this is Brad Pitt rendered in the style of Picasso; on the right it's super-Picasso, and on the left it's super-Brad-Pitt.) Another knob we can tune: we can change the scale of the features we capture from the style image by resizing the style image before sending it to the network to compute the Gram matrices. Here we're rendering the Golden Gate Bridge in the style of van Gogh's Starry Night. If we make the style image very large, we tend to capture the big, aggressive brush strokes that van Gogh uses in his paintings; if we make the style image very small, we instead transfer larger-scale features from the style image, in this case the stars from Starry Night. Another fun thing we can do is use multiple style images: we jointly match Gram matrices from two different pieces of artwork, so we can generate a novel image rendered jointly in the style of Starry Night and The Scream, or any other arbitrary artistic combination you can think of. The way we do this is just to make our target Gram matrix a weighted combination of the Gram matrices coming from the two different style images. And another fun thing: if you're careful with your implementation and you have a lot of GPUs with a lot of GPU memory, you can run this on very high resolution images.
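As an aside, the Gram-matrix construction and the weighted multi-style blending described above are simple to write down. This is a minimal NumPy sketch under my own naming (real implementations use an autograd framework such as PyTorch, and normalization conventions for the Gram matrix vary):

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (C, H, W) feature volume, normalized by C*H*W.

    Reshape to (C, H*W) and multiply by its transpose: this averages the
    outer products of feature vectors over all spatial positions, throwing
    away where features occurred but keeping which ones co-occur.
    """
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def blended_style_loss(gen_feats, style_feats_list, weights):
    """Squared distance between the generated image's Gram matrix and a
    weighted combination of several style images' Gram matrices."""
    g_gen = gram_matrix(gen_feats)
    target = sum(w * gram_matrix(f) for w, f in zip(weights, style_feats_list))
    return float(np.sum((g_gen - target) ** 2))
```

With a single style and weight 1.0 this reduces to the ordinary style loss at one layer; a full style transfer would sum such terms over several layers and add a content (feature reconstruction) term.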
Here's a nearly 4K image of Stanford; I think I need to fix this to be a Michigan image, but we'll fix that for next time. We can take this very high resolution image of Stanford and render it in the style of Starry Night, capturing all these high-resolution, interesting artistic features from the style image; or we can run it not in the style of van Gogh but of Kandinsky and get this very beautiful output. And a very fun thing we can do is run style transfer and DeepDream at the same time: we get Stanford, with Starry Night, with all the psychedelic mutant-dog things coming out. This is kind of a nightmare, but you can do it. So this idea of neural style transfer got a lot of people really excited; maybe not this last one, that one's terrifying, but people saw images like this and thought, holy cow, this is really cool, this is something we might actually want to deploy out there in the world. The problem is that this style transfer algorithm is super slow: because we're doing an iterative process of gradient ascent to generate each image, you need to take many, many gradient steps, doing many forward and backward passes through a VGG network. To put this in perspective, I generated one of those high-resolution images on, I think, four GPUs, and it took about half an hour. That's not going to scale if you want to push this into production. Question: maybe not for general production, but what if somebody wants their portrait in the style of, say, a Renaissance painter? Ah, so actually Renaissance painters don't work too well with this algorithm. This algorithm tends to capture color and texture patterns, so it tends to work really well with
impressionist painters. But Renaissance painters are really about realism, and that tends not to be captured super well by this algorithm. Okay, so this is a problem: style transfer is really cool, but it's super slow and takes a lot of GPUs. As a deep learning practitioner, whenever you're presented with a problem, your instinct should be to train another neural network to solve it for you. There's a great paper from Johnson et al. that shows how you can train a neural network to perform this style transfer algorithm for you. The idea is that we have a single feedforward network that inputs the content image and outputs the stylized result. It may take a while to train, but once it's trained, we can just keep the feedforward network and it will stylize images for us in a single forward pass. This fast neural style transfer algorithm is fast enough to run in real time, and it actually got pushed to production by pretty much all the big companies: Snapchat had this as a filter at one point, Google had it, Facebook Messenger had it. Once style transfer was fast enough to run in real time, it got deployed by everyone, which was super cool, and there are a bunch of apps on the app store that can do these style transfer effects on your smartphone; you can download them and play around with them. To bring this back a little bit: remember those different normalization methods we talked about many lectures ago, batch normalization, layer normalization, instance normalization, group normalization? It turns out that one of these, instance normalization, was actually originally developed for the task of
real-time style transfer. It turns out that using instance normalization inside these style transfer networks is really important for getting high quality results from the fast neural style transfer algorithm; that's where instance normalization comes from. There's a downside here, though: so far, this fast neural style transfer algorithm trains one network for each different artistic style, and if you're Facebook and want to ship lots of different styles in your app, you'd have to deploy lots and lots of different networks, which would be expensive. So some other folks figured out how to train one neural network that can apply lots of different styles, and the way they do it is actually kind of cool. The idea is a new neural network layer called conditional instance normalization. Remember that in something like batch normalization or instance normalization, we learn scale and shift parameters that are applied after the normalization step. With conditional instance normalization, we learn a separate scale and shift parameter for each different artistic style we want to apply: all the convolutional layers in the network share the same weights across all the styles, and we just learn different scale and shift parameters for each style. It turns out that simply learning different scale and shift parameters inside these instance normalization layers gives the network enough flexibility to apply different style transfer effects using a single feedforward network. And once we do this, it turns out that a single network can not only apply lots of different styles, it can also blend different artistic styles at test time by using weighted combinations
of these learned scale and shift parameters so that gives us so that gives us kind of a brief overview of different ways that we can both understand what's going on inside neural networks we talked about mechanisms like using nearest neighbors dimensionality reduction maximal patches and saliency maps to understand what's going on inside neural networks and we saw how a lot of these same ideas of gradient ascent could be used not only to aid our understanding of neural networks but also to generate some some fun images so hopefully you enjoy looking at those psychedelic dogs so come back next time and we'll talk about object detection
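The conditional instance normalization idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation from any particular paper: the function names, the parameter layout (one scale/shift vector per style), and the style-blending helper are all hypothetical choices for exposition.

```python
import numpy as np

def conditional_instance_norm(x, gammas, betas, style_idx, eps=1e-5):
    """Instance normalization with per-style scale/shift (illustrative sketch).

    x:      feature map of shape (N, C, H, W)
    gammas: array of shape (num_styles, C), one scale vector per style
    betas:  array of shape (num_styles, C), one shift vector per style
    """
    # Normalize each (sample, channel) spatial map independently,
    # exactly as in plain instance normalization.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # The only style-specific parameters are the scale and shift;
    # all conv weights would be shared across styles.
    g = gammas[style_idx].reshape(1, -1, 1, 1)
    b = betas[style_idx].reshape(1, -1, 1, 1)
    return g * x_hat + b

def blended_style_norm(x, gammas, betas, weights, eps=1e-5):
    """Blend styles at test time via a weighted combination of (gamma, beta)."""
    g = np.tensordot(weights, gammas, axes=1)  # shape (C,)
    b = np.tensordot(weights, betas, axes=1)
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return g.reshape(1, -1, 1, 1) * x_hat + b.reshape(1, -1, 1, 1)
```

Note that blending at test time touches only the normalization parameters; the shared convolutional weights are untouched, which is why one network can serve many styles.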
Deep Learning for Computer Vision
Lecture 15: Object Detection
All right, welcome back to Lecture 15. Today we're going to talk about object detection. As a reminder, last time we were talking about the task of understanding and visualizing what's going on inside our convolutional neural networks, and we covered a lot of different techniques for doing exactly that: looking at nearest neighbors in feature space, looking at maximally activating patches, or using guided backpropagation and other techniques to compute saliency maps on our features. We also talked about methods for generating images, like synthesizing images via gradient ascent, or the task of feature inversion. And of course we saw that a lot of the same techniques that can be used for understanding CNNs can also be used for making fun artwork with convolutional neural networks: we saw DeepDream and the artistic style transfer algorithms as mechanisms for generating artwork with neural networks. Today we're going to talk about something maybe a little bit more practical, which is a new core computer vision task: object detection.

Basically, the main computer vision task we've talked about so far in this class is image classification: a single image comes in, we process it with our convolutional model, and it outputs some category label for the overall image, maybe classifying it as a cat or a dog or a car. In image classification we're just attaching a single category label to the entire image as a whole. This has been a really useful task with a ton of applications in a lot of different settings, and it's also been a really useful task for stepping through the whole deep learning pipeline and understanding how to build and train convolutional neural network models. But it turns out that image classification is only one of many types of tasks that people in computer vision work on. There's a whole hierarchy of different computer vision tasks that try to identify objects in images in different sorts of ways, and in particular, in today's lecture and the next lecture on Monday, we'll talk about tasks that involve identifying the spatial extents of objects in images. The classic image classification task simply assigns a category label to the overall image, and doesn't say at all which pixels of the image correspond to that label. Other computer vision tasks want to go further and not just give a single overall category label, but actually label different parts of the image with the different categories that appear in it. Between today's lecture and the next one we'll talk about all of these tasks, but the one I want to focus on today is object detection, which is a super core task in computer vision with a ton of useful applications.

So what is object detection? Object detection is a task where we input a single RGB image, and the output is a set of detected objects: we want our model to identify all of the interesting objects in the scene. For each object that we detect, we're going to output several things. One is a category label giving the category of the detected object, and the other is a bounding box giving the spatial extent of that object in the image. For the category labels, just like with image classification, we're going to pre-specify ahead of time some set of categories that the model will be aware of. Remember, in something like CIFAR-10 classification our model is aware of 10 different categories, and in ImageNet classification it's aware of a thousand different categories; for object detection we'll also fix some set of categories ahead of time, and for each detected object we'll output some category label, just as we did with image classification. The interesting part is the bounding box: the model needs to output a box telling us the location of each detected object in the image. We usually parametrize these bounding boxes with four numbers: x and y giving the center of the box in pixels, and w and h giving the width and height of the box, again measured in pixels. You could imagine models that produce arbitrary boxes with arbitrary rotations, but for the standard object detection task we usually don't do that, and instead only output boxes that are aligned to the axes of the input image. That means that whenever we're working with bounding boxes, we can always define a bounding box using just four real numbers.

Now, this seems like a relatively small change compared to image classification, right? How hard could it be? We just need to output these boxes in addition to the category labels. Well, it turns out that adds a lot of complication to the problem. One of the biggest problems is this idea of multiple outputs. With image classification, our model always produced a single output for every image, a single category label, but now with object detection we need to output a whole set of detected objects, and each image might have a different number of objects in it that we need to detect. So somehow we need to build a model that can output a variably sized set of detections, which turns out to be quite challenging. Another problem is that for each object in this set we need to produce two types of outputs: one is the category label that we're very familiar with, and the other is this bounding box, so we need some new way to deal with processing bounding boxes inside our network. And then another, computational, problem with object detection is that it typically requires us to work on relatively high-resolution images. For something like image classification, it turns out that relatively low-resolution images of around 224 by 224 pixels tend to have enough spatial resolution for most of the classification tasks we want to perform. But for object detection, because we want to identify a whole lot of different objects inside the image, we need enough spatial resolution on each of the objects we want to detect, so the overall resolution of the image needs to be much higher. As a concrete example, object detection models more commonly work on images with resolution on the order of 800 by 600 pixels. That's a much larger image resolution than in image classification, which means we can fit fewer images per batch, we need to train longer, and we need to use distributed multi-GPU training; that's a computational issue that makes object detection much more challenging.

But I think object detection is a really useful problem: there are a lot of cases where we want to build systems that can recognize stuff in images but actually need to say where it is in the image. One example is a self-driving car: if you're building a vision system for a self-driving car, it needs to know where all the other cars around it are in space, so it becomes really critical that it can not only assign single category labels to images, but actually detect whole sets of objects and say where they are in space. Basically, after image classification, I think object detection is maybe the number two most core problem in computer vision these days.

Okay, so with all that in mind, let's consider a simpler problem. Let's forget for a second about this problem of producing a set of outputs, and think about how we might approach things if we just wanted to detect a single object in the image. It turns out that detecting a single object can be done with a relatively straightforward architecture. We take in our image and, assuming there's only one object in it, pass it through our favorite convolutional neural network architecture, like an AlexNet or VGG or some kind of ResNet, which eventually produces some vector representation of the image. From that vector representation we can have one branch that does image classification, saying what is in the image; this looks very much like all of the image classification models we've seen so far, outputting a score per category and trained with a softmax loss on the ground-truth category. The new part is that we attach a second branch that also inputs this vector representation of the image and has a fully connected layer going from, say, 4096 dimensions in that final vector representation down to four real numbers giving the coordinates of the bounding box of that one object. These box coordinates we can imagine training with some kind of regression loss, like the L2 difference between the four numbers giving the box we actually output and the four numbers giving the coordinates of the box we were supposed to detect. So we train this second branch with some kind of L2 loss or another regression loss on real numbers.

Now the problem is that we've got two loss functions, because we're asking our model to predict two different sorts of things: one is the category label and one is the bounding box location, and each of these has an associated loss function. But in order to do gradient descent, we need to end up with a single scalar loss; we don't know how to deal with sets of losses. The way we overcome this is to just add up the different losses in the network, potentially as a weighted sum, to give our final loss. It's a weighted sum because we might need to tune the relative importance of the softmax loss and the regression loss to make sure they don't overpower each other. This idea, where we've got one network that we want to train to do multiple different things, so we attach one loss function to each of the things we want the network to predict and then combine them with a weighted sum, turns out to be a pretty general construction that applies whenever you want to train a neural network to output multiple sorts of things. This construction is called a multi-task loss: we want to train our network to do multiple tasks at once, but we need to boil everything down to a single loss function for training at the end.

To see how you might use this in practice, the backbone CNN would often be pre-trained for ImageNet classification, and then you would fine-tune the whole network on this multi-task localization problem. This might seem like kind of a silly approach to detecting objects in images, since we're just attaching an extra fully connected layer at the end of the network to predict box coordinates, but this relatively simple approach actually works pretty well if you know you only need to detect one object in the image. In fact, this particular approach to localizing objects in images was used way back in the AlexNet paper, where they had a task that required classifying the image and also giving a bounding box for the one classification decision they make. So this is a reasonable approach if you know you only need to detect one object per image.

But of course, real images might have multiple objects that we need to detect, so this relatively simple setup is not going to work for us in general. To imagine what this looks like: different images might have different numbers of objects that we need to detect. For this cat image, maybe there's only one object, the cat, so we need to predict only four numbers coming out of our network, the four bounding box coordinates of the one cat box. But for this middle image, maybe there are three objects we want to detect, two dogs and one cat, so now we need to predict 16 numbers; actually, I can't add, that's only 12 numbers, 3 times 4 is only 12.
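The weighted-sum multi-task loss described above can be sketched as follows. This is a minimal illustration under simplifying assumptions: the function names are hypothetical, and the plain sum-of-squares regression term stands in for whatever regression loss a real system would use.

```python
import math

def softmax_cross_entropy(scores, label):
    """Numerically stable softmax classification loss for one example."""
    m = max(scores)
    log_sum = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_sum - scores[label]

def l2_box_loss(box_pred, box_true):
    """Squared L2 distance between predicted and true (x, y, w, h)."""
    return sum((p - t) ** 2 for p, t in zip(box_pred, box_true))

def multitask_loss(class_scores, label, box_pred, box_true, box_weight=1.0):
    """Weighted sum of the two losses; box_weight balances the two terms
    so that neither overpowers the other."""
    return (softmax_cross_entropy(class_scores, label)
            + box_weight * l2_box_loss(box_pred, box_true))
```

When the predicted box matches the target exactly, the regression term vanishes and the total loss reduces to the classification loss alone; the single scalar it returns is what gets backpropagated.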
So that's a bug on the slide; that's what happens when you make slides very fast. And for this image of all these adorable ducklings floating around in the water with their mom, there are a lot of ducks in here, I can't even count them. Basically, this means we need our network to output a whole lot of different duck detections: duck, duck, duck, and maybe a goose, but there's actually no goose here. So we may need to output lots of different numbers from our neural network model, which means we need some mechanism that allows our model to output variable numbers of objects for each different image that we might see.

There's a relatively simple way to do this, called the sliding window approach to object detection. Here the idea is that we have a CNN and we train it to do classification, but now this classification CNN is going to categorize sub-windows of our input image. For each sub-window that we apply it to, if we want to detect C different categories, it outputs a decision over C + 1 outputs, where we add a new output for a special background category. Basically, we're reducing this detection problem down to an image classification problem: all we need to do is apply this classification CNN to many, many different regions of the input image, and for each region the classification CNN will tell us either that this is a dog or a cat, or that this is a background region with no object that we care about. So you could imagine applying this sliding window CNN detector to the blue region in the input image; there's no object precisely localized by this blue bounding box, so for this image region our detector should say background, meaning there is no well-localized object in this region. Then we slide our window over and run it on a different region of the image; for this one it should say there's a dog here, because this region is well localized around a dog. This one would also be a dog, this one should be a cat, and so on: we take this classifier, slide it over different regions of the input image, run the CNN on each region, and it tells us whether that region contains an object or is background.

This seems like a relatively simple approach to object detection, but there's a problem. Let's think about how many possible bounding boxes there are in an image of size H x W; hint, it's going to be a lot. If we have an input image of size H x W, consider a box of size h x w. The number of x positions where we can put this box is W - w + 1, and similarly the number of y positions is H - h + 1, so the number of possible placements of this box is (W - w + 1)(H - h + 1). But in fact it's even worse, because we need to consider not just boxes of one fixed size, but all possible boxes of all possible sizes and aspect ratios. If we sum this expression over all w and all h up to the full image size, the total number of boxes comes out to W(W + 1)/2 x H(H + 1)/2, which grows like the square of the number of pixels in the image, on the order of H^2 W^2 / 4. That's really, really bad. How bad? If we have something like an 800 x 600 image, it comes out to roughly 58 billion different bounding boxes that we could imagine evaluating. If we wanted to do a dense sliding window approach and actually apply our CNN classifier to every one of these possible image regions, it would be completely infeasible; there's no computational way we could run our CNN forward pass tens of billions of times for a single image. We'd be waiting forever for any detections to come out.

Yeah, is there a question? The question is that even if you could do this and had infinite compute, you'd probably end up identifying the same object over and over again with slightly offset windows. Yeah, that's exactly correct; that'll actually turn out to be a problem not just for this impossible-to-implement version of object detection, but for other real types of architectures as well. We'll talk about this idea of non-max suppression in a little bit, which can overcome that problem.

Okay, so basically this dense approach is not going to work, and we need some other approach to object detection. That brings us to the idea of a region proposal. Here the idea is: if there's no way we can possibly evaluate the detector on every possible region of the image, maybe we can have some external algorithm generate a set of candidate regions for us, such that we get a relatively small set of regions per image which have a high probability of covering all the objects in the image. A few years ago there was a whole bunch of different papers proposing different mechanisms for generating these candidate regions, called region proposals. I don't really want to go into the details of exactly how they work, because, spoiler alert, eventually they'll be replaced by neural networks too. For now, you can think of these early approaches to region proposals as performing some kind of image processing on the input image: maybe looking for blob-like regions, looking for edges, or using other low-level image processing cues to find image regions with a high probability of containing objects. One very famous method for region proposals was called selective search. Selective search is an algorithm you can run on a CPU, and in a couple of seconds of processing it gives you about 2000 region proposals per image, and those 2000 region proposals have a very high probability of covering all of the interesting objects we care about in the image.

Once we have this idea of region proposals, it gives us a very straightforward way to train a practical object detector with deep neural networks. That brings us to the very famous method called R-CNN, where the R stands for region-based: this is a region-based convolutional neural network system for object detection. This is one of the most influential papers in deep learning; it came out back at CVPR 2014 and has been very, very impactful. The way it works is actually pretty straightforward. We start with our input image, and then we run our region proposal method, like selective search, which gives us something like 2000 candidate region proposals in the image that we need to evaluate; here we're only showing three, because I can't fit 2000 on the slide. These region proposals can all be different sizes and different aspect ratios, so for each of them we warp that region to a fixed size, something like 224 x 224, and then run each warped image region independently through a convolutional neural network, which outputs a classification score for each region. Again, this is a classification over C + 1 categories: it tells us whether that region is a background region with no object, or, if it's not background, what category label that region should be assigned. You can imagine training this thing using all the standard machinery we know for training classification networks, and it actually works pretty well.

But there's a slight problem: what happens if the region proposals we get from selective search don't exactly match up with the objects we want to detect in the image? Right now, all of the bounding boxes are just coming out of this black-box selective search method, and there's no learning happening that actually outputs the boxes. To overcome that, we use a multi-task loss similar to the simple mechanism we saw earlier: the CNN is going to output an additional thing, a transformation that transforms the region proposal box into the final box that we actually want to output for that object of interest. One thing to point out about this idea of bounding box regression is that we're not inventing a box from scratch; we just want to modify the region proposal we were given as input, because we think the proposal was probably pretty good, but we might need to tweak it a little bit to make it fit the object better. And because a bounding box can be parametrized with four numbers, we can also parametrize a delta on top of an existing bounding box using four numbers. There are a lot of different parametrizations of these bounding box transformations that you'll see people use, but I think the most common one is the one on the slide. The idea is that we're given a region proposal with its center at (p_x, p_y) and with width and height p_w and p_h, and our convnet outputs a transformation giving four numbers (t_x, t_y, t_w, t_h); the final output box combines the transformation the CNN outputs with the coordinates of the region proposal we were given as input. The parametrization is relative to the overall box size in translation: the x coordinate of the output bounding box is b_x = p_x + t_x * p_w, the original x coordinate of the proposal plus the x transform times the width of the box, and similarly b_y = p_y + t_y * p_h in the vertical direction. Parametrizing the translation relative to the box size works out nicely because we had to warp the original image regions before feeding them into the CNN, so the transformations we output are invariant to that warping. Concretely, if we output t_x = 0, that means leave the region proposal alone in x position, it was pretty good; if we output t_x = 1, that means shift the region proposal over in x by an amount equal to the width of the region. For the scale, the parametrization is logarithmic: we scale the width and height of the region proposal by exponentiating the transform and multiplying, b_w = p_w * exp(t_w) and b_h = p_h * exp(t_h), which again makes things scale-invariant to the fact that we had to warp the input regions before feeding them to the CNN.

So that gives us our first full object detection method using convolutional neural networks. The pipeline at test time looks something like this: we're given a single RGB image, we run the selective search algorithm on the CPU to generate something like 2000 region proposals, and for each of those proposals we resize it to a fixed size like 224 x 224 and run it independently through our convnet to predict both a classification score (category versus background) and this box transform that adjusts the coordinates of the original region proposal. Then, because at test time we actually need to output some finite set of boxes to use in our downstream application, there are a lot of different ways to do that, which depend on exactly the application you're targeting; one idea is that you want to somehow use the predicted scores for all the region proposals to output some small, finite set of boxes for the image.
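The proposal-to-box transformation described above can be written down directly. A minimal sketch, assuming the center/size box parametrization from the slide; the function name is illustrative, not from any particular library.

```python
import math

def apply_box_deltas(proposal, deltas):
    """Apply an R-CNN-style box transform to a region proposal.

    proposal: (p_x, p_y, p_w, p_h) -- center x/y, width, height of the proposal
    deltas:   (t_x, t_y, t_w, t_h) -- transform predicted by the network
    Translation is relative to the proposal's size; scale is logarithmic.
    """
    p_x, p_y, p_w, p_h = proposal
    t_x, t_y, t_w, t_h = deltas
    b_x = p_x + t_x * p_w      # shift center by a fraction of the box width
    b_y = p_y + t_y * p_h      # shift center by a fraction of the box height
    b_w = p_w * math.exp(t_w)  # logarithmic (multiplicative) width scaling
    b_h = p_h * math.exp(t_h)  # logarithmic (multiplicative) height scaling
    return b_x, b_y, b_w, b_h
```

As described in the lecture, all-zero deltas leave the proposal unchanged, and t_x = 1 shifts the center by exactly one box width.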
um so one idea here is that maybe if maybe you always want to output like 10 objects per image um you don't really care what the categories are then you could imagine thresholding based on the background score and output the 10 region proposals that had the lowest background score and maybe this would be that this would give you 10 boxes that you could output for for your final prediction another option here would be to set a threshold per category right because our classification network is actually outputting a full distribution giving us a score for background as well as a score for each of the categories so another option here is to set some threshold up where for each category such that if the classification score for the category is above the threshold then we output the box as a final as a final detection otherwise we don't emit the box and the exact and the exact mechanisms of exactly how you convert these scores into final detections kind of depends on exactly what downstream application you're working at yes yeah so uh these these combinations all share weights so uh these con these combats we use the exact same combination with the exact same weights and we just apply it to each image region that is coming out of our region proposal mechanism yeah um because if they didn't share weights it wouldn't really work out because we might have slightly different num well we might have different numbers of region proposals for each image um and even if we did have a fixed number of like 2000 proposals per image it would be sort of infeasible to train 2000 separate comnets for each region proposal and it wouldn't really make sense because um right these things are just looking trained we just want to train them to look at image regions and then tell them what whether there's an object essentially cropped in that region yes yes so i haven't really told you about the training of this um because there's there's a couple of subtleties in exactly how you train this 
thing um but kind of to imagine how you might train this thing um you're gonna form batches where the batches consist of image read different image regions from the same image or maybe different image regions across different images so then when you when you when you run this thing at training time it's basically going to have a batch of image regions and then for each of those image regions it's going to output a classification score as well as these bounding box transformation parameters and then you'll you'll use this idea of multi-task loss to compute a single loss that sums both the regression loss and the classification loss and then you'll back propagate into the weights of the cnn and make a gradient step but there's a little bit of subtlety here in exactly how you decide which region proposals should be considered objects versus background but i'll kind of gloss over that hear that here for the sake of simplicity yeah the question is do you input the the rotation to the comnet or the location so yeah we actually do not input the location of the box to the comnet because this should be somehow translation invariant that maybe um and scale invariant as well because maybe a cat in the upper left-hand part of the image should look the same as a cat in the lower right hand part of the image so we actually do not input the location information usually here and actually if you look back at the way that we parameterized this box transformation this box transformation was parameterized in such a way that it will work even though we're not putting in the location information because the way that we're parametrizing these transformations is kind of invariant to the location and scale of the box in the image yeah yeah the question is how do you choose that the size at which you warp the boxes um and usually that's tied like this you have the same hyper parameter when you're doing image classification right so you know whenever we do image classification you always um 
We build a CNN that operates on some fixed image resolution, and then we have to warp our images to fit that resolution. It's exactly the same thing here, except now we're warping image regions rather than full images. That resolution would be a hyperparameter, but in general for detection we tend to use the same resolution for the region warping as we would have used in the classification case. For classification networks we usually train at a resolution of 224×224, so for detection we'll also warp the regions to 224×224.

Another question, about the top-k proposals and the thresholds: those would be used only during testing. During training you always train on all of the potential region proposals; the thresholds and the top-k are set at test time, usually by tuning on some validation set for your downstream application. They don't enter into the training process at all. Any more questions on this R-CNN algorithm?

Okay. So now we have a practical algorithm that can input an image and output a set of bounding boxes for all the objects detected in that image. Of course, we need some way to evaluate our results, to say whether the boxes we output are actually similar to the boxes we should have output, so we can report a performance number in a paper and make it bold to show that we're better than other people. To do that, we first need some mechanism that can compare two bounding boxes.

Suppose that in this image of an adorable puppy, the green box is the true box that the system should have output. These things will never be perfect, so suppose our algorithm outputs the blue box instead. We need some way to compare whether the blue box matches the green box, and because these are all real numbers it will never match exactly. The way we normally compare two bounding boxes is with a metric called intersection over union, usually abbreviated IoU; you'll sometimes see it called the Jaccard similarity or Jaccard index in other contexts, but in object detection we usually say IoU. It's a similarity measure between two boxes, and we compute it exactly as the name says: you compute the intersection of the two boxes, which is again a box, shown here in reddish orange on the slide, and you separately compute the area of the union of the two boxes, which is the purple region together with the orange region. The intersection over union is then just the ratio of the intersection area to the union area. For this example, the IoU is something like 0.54.

This will always be a number between zero and one: if the two boxes coincide perfectly then the intersection equals the union, so the ratio is one, and if the two boxes are completely disjoint and don't overlap at all then the intersection is zero while the union is nonzero, giving a ratio of zero. So IoU is always between zero and one, where higher numbers mean a better match between the two bounding boxes.

To give you a sense of what people usually look at with these IoU numbers: an IoU greater than 0.5 is usually considered an okay, decent match between two bounding boxes. The green box and the blue box here have an IoU of 0.54, so it's not perfect, but it got the general gist of where the object was supposed to be. An IoU of 0.7 is usually pretty good: maybe we made some slight errors and cut off a little bit of the bounding box, but overall we did a pretty good job of localizing the object. And any IoU greater than 0.9 is nearly perfect; once we move to 0.9, I actually had to reduce the width of the lines for you to be able to see any gap between the two boxes. In fact, for a lot of real applications, depending on the resolution of your image, an IoU of 0.9 might be only a couple of pixels off of the true box. You'll basically never get an IoU of exactly one, so 0.9 is usually an "almost perfect" sort of threshold. This IoU metric is something we use all over the place whenever we need to compare two bounding boxes.

Now there's another problem, pointed out a little bit earlier, which is that these practical object detection methods will often output a set of overlapping boxes, many of them clustered around the same object. These are not real object detections; I made them up to put on the slide. But for this example of two puppies in the image, an object detector usually will not output exactly one box per object; instead it will output a whole bunch of boxes that are all grouped very near each object in the image that we actually care about. So we need some kind of mechanism to get rid of these overlapping boxes.
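As a concrete sketch, IoU for axis-aligned boxes in (x1, y1, x2, y2) format can be computed in a few lines:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    # Intersection rectangle: max of the left/top edges, min of right/bottom.
    ix1 = max(box_a[0], box_b[0]); iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2]); iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter  # don't double-count the overlap
    return inter / union

# Identical boxes give 1.0, disjoint boxes give 0.0:
assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0
```

Note the `max(0, ...)` clamps: without them, disjoint boxes would produce a negative "intersection."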
The way we do that is by post-processing the boxes coming out of our object detection system with an algorithm called non-max suppression, or NMS. There are a lot of different ways you can implement non-max suppression, but the simplest is a fairly straightforward greedy algorithm. The raw outputs from the object detector are this whole set of boxes in the image, and for each box we have some probability that it is each of our categories; so for each of these four boxes output by the detector, we have different probabilities of "dog" coming out of the classification scores.

The greedy algorithm for non-max suppression is: first, select the highest scoring box, in this case the blue box with the highest probability of dog as output by the classifier. Then compute the intersection over union between that highest scoring box and every other box output by the detector; by construction, because we started with the highest scoring box, all the other boxes have lower scores. We set some threshold, often something like 0.7, relatively high, and say that if the detector output two boxes whose IoU exceeds this threshold, they probably did not correspond to different objects; instead they were probably duplicate detections that fired multiple boxes on the same object. So any box whose IoU with our highest scoring box exceeds the threshold is simply eliminated.

In this example, the blue box and the orange box have an IoU of 0.78, so we eliminate the orange box. After eliminating the orange box, we go back to step one and choose the next highest scoring box output by the detector, in this case the purple box with P(dog) = 0.75, and again compute the IoU between that box and all the remaining lower scoring boxes. Here only one lower scoring box remains, the yellow box, and these have an IoU of 0.74, so we eliminate the yellow box. The final outputs from our object detector are the blue box and the purple box, which are now fairly separated and don't have high overlap.

This seems like a pretty reasonable algorithm, and basically all object detectors you'll see out in the wild rely on some kind of non-max suppression to eliminate duplicate detections. But there's a subtle problem with NMS: it gets us into trouble in cases where there actually are a lot of objects in the image with high overlap, and we don't really have a good solution for this right now as a community in computer vision. This is a big failure mode of object detectors: when you have really crowded images with lots and lots of objects that are all highly overlapping, it becomes very difficult to tell the difference between nearby boxes that are actually different objects and nearby boxes that are duplicate detections of the same object. So this is somewhat of an open challenge in object detection right now, but people are working on it; it's an exciting field.

Okay, so another thing we need to talk about: we've seen how to compare individual boxes using this IoU metric, but we also need some kind of overall performance metric.
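The greedy procedure above can be sketched directly (a small IoU helper is inlined so the snippet stands alone; the 0.7 default threshold follows the lecture's example):

```python
def _iou(a, b):
    # Intersection-over-union of (x1, y1, x2, y2) boxes.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, iou_threshold=0.7):
    """Greedy non-max suppression: repeatedly keep the highest-scoring
    remaining box and discard every lower-scoring box whose IoU with it
    exceeds the threshold. Returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        # Drop remaining boxes that overlap the chosen box too much.
        order = [i for i in order
                 if _iou(boxes[i], boxes[best]) <= iou_threshold]
    return keep
```

For example, two heavily overlapping boxes plus one far-away box collapse to two detections: `nms([(0,0,10,10), (0,1,10,11), (20,20,30,30)], [0.9, 0.8, 0.7])` keeps indices 0 and 2.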
We need something that tells us how well our object detector is doing on the test set overall. This was a fairly trivial thing to do for image classification: for every image in the test set we take the argmax score, check whether it equals the true label, and compute an accuracy on the test set. That simple accuracy metric was really easy to compute and let us tell whether one image classification model was doing better than another. This task of object detection complicates things a lot, and because detection is a much more complicated problem, the metric we use to quantify overall performance on the test set is unfortunately a little bit hairy. I'll walk you through it to give a sense of what it's computing.

Suppose we've trained an object detector and want a single number to put in our paper telling us how well it does on a dataset. What we compute is a metric called mean average precision (mAP), and this is basically the standard metric that everyone in object detection uses to compare detectors. To compute it, we first run the trained detector on all images in the test set, and then use non-max suppression to eliminate duplicate detections. That leaves us with a set of detected boxes for each image in the test set, and for each box a classification score for each category we care about. Then, for each category separately, we compute a number called the average precision (AP), which tells us how well we're doing on just that one category. If you're familiar with these metrics, the average precision is the area under the precision-recall curve, but in case you're not, let's step through it a little more.

For each category we care about, we sort all of the detections on the entire test set by their classification score, independently per category. Here the boxes in blue represent all of the detections output by our detector across the entire test set: maybe the highest scoring "dog" region across the whole test set had P(dog) = 0.99, the second highest had P(dog) = 0.95, and so on. The orange boxes represent all of the ground-truth dog regions in the test set. Now we march down our detected regions in sorted order of score, and for each region we try to match it with a ground-truth region using some IoU threshold; 0.5 is a common choice.

Suppose our highly confident dog detection with P(dog) = 0.99 does indeed match some ground-truth dog in its image with IoU greater than 0.5. Then we flag that detection as a true positive, a correct detection, and it gives us one point on a precision-recall curve. Considering only the top-scoring detection so far: precision is the fraction of the detections considered that are actually true, and recall is the fraction of the ground truth that we've hit. In this case precision is one out of one, 100%, and recall is one third, because among the detections we're considering we cover one of the three ground-truth regions. That gives one point on a plot of precision versus recall.

Then we repeat this process for the next detection. Suppose our second highest scoring dog region also matches some ground truth; that gives another point on the precision-recall curve, where we're now considering two detections, both true positives, so precision is 1, and we've hit two of the three ground-truth regions, so recall is 0.67. Now suppose the third region is a false positive that does not match any ground-truth region in its image; that gives another point on the curve. Suppose the fourth is another false positive, giving yet another point. And suppose the final one is a true positive that matches the last ground-truth region. For that final point, precision is three out of five, because of the five detections we considered, three were true positives, and recall is 100% because we've hit all of the ground-truth regions.

Once we've plotted all of these points, we draw the full precision-recall curve and compute the area under it. That area is a number between 0 and 1, where 0 means we did terribly and 1 means we did really well, and it is the average precision for that category. For this example, our dog average precision is 0.86.

It's interesting to think about what these AP numbers mean, because this is not a very intuitive metric at all. First, think about how you could possibly get an AP of 1.0: all of the true positives would have to come before all of the false positives in the sorted order. The only way to get AP 1.0 is if all of the top detections output by the model are true positives, with no duplicate detections and no false positives, each matching some ground-truth region with at least IoU 0.5. That is going to be very hard to achieve, and in practice you'll basically never see object detectors that actually reach AP 1.0.

You might be wondering why we use this complicated AP metric for evaluating object detectors. The reason is that different applications want different trade-offs between how many objects you hit and how many objects you miss. For some applications, like self-driving cars, it's really important not to miss any cars around you, so you want to make sure you don't miss anything. In other applications false positives are not so bad, and you just want to make sure that all of your detections are indeed true. Different use cases require different thresholds and different trade-offs between precision and recall, and by computing this average precision metric we summarize all possible points on that trade-off in a single number. That's why people tend to use this metric for evaluating object detection.

Of course, this was only the dog average precision, so in practice we repeat this whole procedure for every object category, get an average precision for each, and then take the mean of these per-category APs.
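The AP computation above can be sketched compactly. This uses one common convention (accumulate the precision at each true positive, divide by the number of ground-truth boxes); the exact value depends on how the curve is interpolated, so the slide's 0.86 may come from a slightly different curve-drawing convention.

```python
def average_precision(is_true_positive, num_ground_truth):
    """Average precision from detections sorted by descending score.
    is_true_positive[i] says whether the i-th ranked detection matched a
    ground-truth box. We add one precision sample at each recall step
    (each true positive), which equals the area under a step-wise
    precision-recall curve."""
    ap, tp = 0.0, 0
    for rank, hit in enumerate(is_true_positive, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at the rank where recall increases
    return ap / num_ground_truth

# The lecture's worked example: TP, TP, FP, FP, TP with 3 ground-truth dogs.
# This convention gives (1/1 + 2/2 + 3/5) / 3 ≈ 0.87, close to the 0.86
# reported on the slide.
print(average_precision([True, True, False, False, True], 3))
```

Note how the two false positives at ranks 3 and 4 contribute nothing directly but drag down the precision of every later true positive, which is exactly the "true positives should come before false positives" intuition.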
The mean average precision is then just the average of these per-category average precisions. Okay, so that was a lot of work just to evaluate our object detector, but it actually gets a little bit worse, because the way we computed this mean average precision didn't actually care about localizing boxes really, really well. Remember, when we matched the detected boxes to the ground-truth boxes we only used an IoU threshold of 0.5, so it didn't actually matter whether we got really accurate localizations. In practice we therefore also tend to repeat this whole procedure for different IoU thresholds, and then take an average over all of the mean average precisions computed at the different thresholds.

Wow, so that was kind of a disaster; that's a lot of work just to evaluate your object detector. But I thought it was useful to walk through this in detail, because this is actually how people evaluate these things in practice. Whenever you read object detection papers, they'll always report this mean average precision number, but it's actually kind of hard to find a definition of what it actually is when you're reading papers, so I wanted to walk through it very explicitly. Any questions on this computation or the metric?

Okay, but forget about the metric; let's go back to object detection methods. At this point we've got this R-CNN, this region-based convolutional neural network, and it works pretty well for detecting objects. But there's kind of a problem here, which is that it's actually pretty slow, because we need to run a forward pass of the network for each region proposal coming out of our region proposal method. For something like selective search there will be around 2,000 region proposals, which means that to process an image for object detection we have to do about 2,000 forward passes of the CNN. That's pretty expensive; it's not going to run in real time if we have to do 2,000 forward passes for every image we want to process. So we need some way to make this process faster, and the way people have made it faster is basically to swap the CNN and the warping, which allows us to share a lot of computation across different image regions.

How does that work? If we take this R-CNN method, which nowadays people call "slow R-CNN" just because it's so slow, the alternative method is of course Fast R-CNN. Fast R-CNN is basically the same as slow R-CNN except that we swap the order of convolution and region warping. Now we take the input image and process the whole image at high resolution with a single convolutional neural network, with no fully connected layers, just convolutional layers, so the output is a convolutional feature map giving us features for the entire high-resolution image. As a bit of terminology, the convnet that we run on the whole image is often called the backbone network; it could be an AlexNet or a VGG or a ResNet or whatever your favorite classification architecture is. We still run a region proposal method like selective search on the raw input image, but now, rather than cropping the pixels of the input image, we project those region proposals onto the convolutional feature map and apply the cropping there, on the feature map itself rather than on the raw pixels. We do this cropping and resizing on the features coming out of the convolutional backbone, and then run a tiny, relatively lightweight per-region network that outputs our classification scores and bounding box regression transforms for each detected region.

This is going to be very fast, because most of the computation happens in the backbone network, while the network we run per region is relatively small, relatively lightweight, and very fast to run. If you imagine doing something like Fast R-CNN with an AlexNet, the backbone would be all of the convolutional layers of the AlexNet, and the per-region network would just be the two fully connected layers at the end; those are relatively fast to compute even when we need to run them for a large set of regions. For something like residual networks, we'd take basically the last convolutional stage and run that as the per-region network, and use all of the rest of the network as the backbone. So we save computation because most of it is shared among all of our region proposals in this backbone network.

But then there's a question of what exactly it means to crop these features, because in order to train we need to backpropagate into the weights of the backbone network as well, so we need to crop the features in a way that is differentiable, and that ends up being a little bit tricky. One way to crop features differentiably is an operator called RoI pool, region of interest pooling. Here we have our input image and some region proposal computed on that image, and we run the backbone network to get convolutional features across the entire input image. To put some numbers on this, the input image might have three channels, RGB, with spatial size 640×480, and the convolutional features might be 512-dimensional with spatial size 20×15.

Because this network is fully convolutional, each point in the convolutional feature map corresponds to points in the input image, so we can just project the region proposal onto the feature map instead. After that projection, the region proposal might not align perfectly to the grid of the feature map, so the next step is to snap it to the grid and then divide it into subregions: say we want 2×2 pooling, then we divide the snapped region proposal into roughly equal 2×2 subregions, as close as we can get while staying aligned to grid cells, and perform max pooling within each of them. The blue subregion here would be 512×2×2, and we max pool within that 2×2 area to output a single 512-dimensional vector in the pooled output; the green subregion has spatial size 3×2 with a 512-dimensional vector at each point, and we spatially max pool again to get a single 512-dimensional vector for that subregion.

What this does is that even though our input region proposals might have different sizes, the output of this RoI pool operator is always a tensor of the same fixed size, which means we can feed it to the downstream CNN layers that do the per-region computation. And we can backpropagate through this thing just as we would normally backpropagate through max pooling: when we get upstream gradients for the region features, we propagate them down into the corresponding positions in the image features, and when we train on batches containing many regions from the same image, we end up getting gradients over most of the entire image feature map.
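The projection-and-snapping bookkeeping in RoI pool can be sketched as follows. This is a simplified, hypothetical version (it only computes which feature-map cells land in each pooling bin, one axis at a time; real implementations also handle edge cases like boxes that shrink to zero cells):

```python
def roi_pool_bins(box, img_size, feat_size, out_size=2):
    """Project a box (x1, y1, x2, y2) from image coordinates onto a
    feat_size feature grid, snap it to whole cells, then split it into
    out_size x out_size roughly equal bins; each bin would then be
    max-pooled over all 512 channels to give one output vector."""
    bins = []
    for axis in (0, 1):
        scale = feat_size[axis] / img_size[axis]
        # Project and snap the box edges to the feature grid.
        lo = int(box[axis] * scale)
        hi = max(lo + 1, int(round(box[axis + 2] * scale)))
        # Split [lo, hi) into out_size roughly equal integer ranges;
        # this is why the bins can end up slightly different sizes.
        edges = [lo + (hi - lo) * i // out_size for i in range(out_size + 1)]
        bins.append([(edges[i], edges[i + 1]) for i in range(out_size)])
    return bins  # bins[0]: x-ranges, bins[1]: y-ranges of the pooling cells

# A 320x240 box in a 640x480 image, on a 20x15 feature map:
print(roi_pool_bins((0, 0, 320, 240), (640, 480), (20, 15)))
```

The integer snapping here is exactly the source of the misalignment that RoI align later removes.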
This is a little bit complicated, and there's a slight problem: there is a bit of misalignment in these features, because of the snapping and because the green and blue subregions can be different sizes. There's a slightly more complicated version of this operator that people sometimes use, called RoI align, that I don't want to go into in detail due to time constraints, but basically it avoids snapping and instead uses bilinear interpolation to make everything nicely aligned. You can go through it later on your own; the idea is that RoI align is very similar, we're still doing a sort of cropping in a differentiable way, but with better alignment between the input features and the output features.

So that gives us Fast R-CNN here on the left and slow R-CNN here on the right, and the difference between them is that we've swapped the order of convolution and cropping/warping. Fast R-CNN is much faster because it can share computation across all of these different proposal regions. How much faster, you might ask? We can look at training time as well as inference time. Training R-CNN took something like 84 hours (on GPUs from a couple of years ago, so really old hardware), and training Fast R-CNN on the same GPU setup is something like ten times faster overall. At inference time, Fast R-CNN is a lot, lot faster than R-CNN because we're sharing so much computation across image regions. But an interesting thing is that once we have Fast R-CNN, most of the time is actually spent computing the region proposals, because remember, those proposals were being computed by this heuristic algorithm called selective search that runs on the CPU. Something like almost 90 percent of the runtime is taken up just computing region proposals.

So these proposals were being produced by a heuristic algorithm, and since we're deep learning practitioners who just want to replace everything with deep neural networks, we'd like some way to compute the region proposals as well with a convolutional neural network, hopefully in a way that's efficient, which should then improve the overall runtime of these object detection systems. The method that does that is called Faster R-CNN, because it's even faster than Fast R-CNN; these authors were really, really creative with their names. The idea is to eliminate this heuristic selective search algorithm and instead train a convolutional neural network to predict our region proposals for us. It works very much like the Fast R-CNN algorithm we just saw, except that after we run the backbone network, we insert another little tiny network called a region proposal network, or RPN, that is responsible for predicting region proposals. The pipeline is: take the input image, run it through the backbone network to get image-level features, pass those features to the region proposal network to get region proposals, and then do everything else the same as Fast R-CNN: use the proposals for differentiable cropping on the image features, and run a little per-region network to predict the final classification scores and bounding box transformations. The only new part here is the region proposal network.

So the question is how we can use a convolutional neural network to output region proposals in a trainable way, and for that we need to dive a little into the architecture of this region proposal network. Again, we take the original input image and feed it through the backbone network to get image features at relatively high resolution, maybe again 512×20×15 in the same example. And recall that the convolutional image features coming out of the backbone are all aligned to positions in the input image. So at each point in this feature map we can imagine an anchor box: a bounding box of fixed size and fixed aspect ratio that just slides around, with one anchor placed at every position of the feature map. Our task is to train a little convolutional network that classifies these anchor boxes as either containing an object or not containing an object. This is a binary classification problem, so we output a positive score and a negative score for each anchor box, and because there's one anchor per point in the image-level features, we can output these scores with just another convolutional layer, maybe a single 1×1 convolution that outputs, for each position, scores for whether the corresponding anchor should or should not contain an object. We can train this thing with a softmax loss over two categories: yes, there is an object here, and no, there is no object here.

Of course, these fixed anchor boxes might be a pretty poor fit to any objects that actually appear in the image, so we use a familiar trick: in addition to a score for each anchor box, we also output a box transform for each position in the feature map, a transformation that turns the raw anchor box into the actual region proposal we're going to use. Here the anchor box is shown in green and the resulting region proposal box in yellow, and these box transforms can be trained with a regression loss just like the ones used in the previous approaches; again, we can predict the transform values with just another convolutional layer. And because this was not complicated enough already: in practice, using one anchor box of fixed scale and size per position in the feature map is usually not expressive enough to capture all the types of objects we want to recognize, so instead we'll typically use a set of K different anchor boxes of different scales and aspect ratios at every point in the feature map.

Question: for the anchor objectness scores, should the output be K or 2K values? It's a bit of an implementation detail. The original paper uses 2K: it outputs a positive score and a negative score per anchor and uses a softmax loss. Equivalently, you could output a single score per anchor, where high values mean object and low values mean not an object, and use a logistic regression loss instead. These are pretty much equivalent and it doesn't really matter which one you do, but you're right that most actual implementations will usually output an
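The anchor grid described above can be generated with straightforward bookkeeping. This is a minimal sketch; the `stride`, `scales`, and `aspect_ratios` values are illustrative hyperparameters I'm assuming, not numbers from the lecture.

```python
def make_anchors(feat_h, feat_w, stride, scales, aspect_ratios):
    """At every position of a feat_h x feat_w feature map, place one
    anchor per (scale, ratio) combination, centered on the corresponding
    input-image location. Returns (x1, y1, x2, y2) boxes in image
    coordinates; anchors near the border may extend outside the image."""
    anchors = []
    for fy in range(feat_h):
        for fx in range(feat_w):
            # Center of this feature cell, mapped back to image pixels.
            cx, cy = (fx + 0.5) * stride, (fy + 0.5) * stride
            for s in scales:
                for r in aspect_ratios:
                    # Area s*s with width/height ratio r.
                    w, h = s * r ** 0.5, s / r ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return anchors

# K = len(scales) * len(aspect_ratios) anchors per feature-map position,
# so a 20x15 map with 3 scales and 3 ratios gives 20 * 15 * 9 anchors.
```

The RPN's 1×1 convolutions then just need 2K objectness channels and 4K transform channels per position, one slot per anchor in this list.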
explicit positive score and an explicit negative score per anchor and then use a softmax loss; but you could use logistic regression and output one score, it doesn't really matter. Okay, so the solution is that we'll consider K anchors at every position in the feature map, and the sizes and aspect ratios of these anchors are all hyperparameters: how many anchors you use, and what the scale and size of those anchors are, these are all hyperparameters that you need to set for object detection, so that's a little bit of a mess. Okay, so that gives us the full Faster R-CNN method, which now has four different losses that we need to train this thing. In the region proposal network we have two losses: the classification loss that classifies the anchor boxes as object or not object, and the regression loss that outputs transformations from the raw anchor positions to the region proposal positions. Then we have all the stuff from Fast R-CNN, which means that for each of the region proposals coming out of the region proposal network, we run this per-region network on each proposal, which gives us two more losses: one is the object classification loss that tells us, for each proposal, its object category, or whether it's background, and the other is the final object regression loss that regresses from the region proposal output by the RPN to the final box that we'll output from the object detector. So this is kind of a mess, and there are even more details that I didn't have time to go into; it turns out object detection is a pretty complicated thing to get to work. Okay, but once you actually do all this, then Faster R-CNN is really, really fast. Because we've eliminated the bottleneck of computing region
proposals on the CPU, and instead we're computing region proposals using this really tiny convolutional network on top of the image features, this thing is really fast and can actually run in real time. Faster R-CNN on a GPU is something like 10 times faster than Fast R-CNN, and SPPNet is a different method that we don't have time to talk about, but it's kind of in between Fast R-CNN and R-CNN. Okay, so Faster R-CNN is usually called a two-stage method for object detection, because there are two conceptual stages inside the method. The first stage, in blue, is where we run one network on the entire image: we run the convolutions that give us the convolutional features for the image, and then we run the region proposal network, which is again a couple of convolutional layers that give us the region proposals. Once we've got these region proposals, the second stage runs once per region, and this second stage outputs the final classification scores and regression parameters for each region. But there's a question here: do we really need the second stage? It kind of seems like we could get away with using just the first stage and ask it to do everything; that would simplify the system a little and make it even faster, because we wouldn't have to run separate computation per region. That basically works, and there are methods for object detection called single-stage object detectors that look just like the RPN in Faster R-CNN, except that rather than classifying the anchor boxes as object or not object, we just make a full classification decision for the category of the object right there. So now suppose again we have C object categories that we want to
classify. When producing outputs for the anchors, again we've got maybe a 20 by 15 spatial size in our image feature map and K anchor boxes at each position in the feature map, and now for each of those anchor boxes we want to directly output classification scores for the C categories plus the background category. Again, we can do this directly using a convolutional layer or a couple of convolutional layers, and then you train this thing up; now, rather than using a binary two-class loss as in the RPN, you just directly use your full classification loss. You still output these box transforms, one per anchor, but it actually tends to work a little better if you output a separate box transform per category. This is called category-specific regression, as opposed to category-agnostic regression, where you output one box transform that is supposed to work for any category; category-specific regression lets the model specialize its behavior a little bit to different categories. Okay, so basically object detection is really, really complicated, and there are a lot of different choices you need to make when doing object detection. You've got these different meta-algorithms: Faster R-CNN is a two-stage method, you've got single-stage methods like SSD, and you've got hybrid approaches like R-FCN that we didn't have time to talk about. That's one major choice you need to make when you're doing object detection. There's another choice you need to make, which is the architecture of the underlying backbone network, and that affects performance as well; all of the network architectures that we've talked about for classification can also be used for object detection. And there are a bunch of other choices as well, like what is the image resolution,
what is the cropping resolution, how many anchors do we use, what are the sizes of the anchors, what are the IoU thresholds that we use; there's a whole bunch of hyperparameters that go into this, and it's really hard to get fair comparisons between different object detection methods due to the massive number of hyperparameters, settings, and choices that you need to make. But there's actually one really great paper from two years ago, in 2017, that tried really hard: they basically re-implemented all the object detection methods that were available at the time and tried to do a really fair head-to-head comparison of all these different choices you can make in object detection. I'd really recommend reading this 2017 paper if you want more insight into the trade-offs of these different choices. They produce this amazing plot in the paper where each point is a trained object detector: the x-axis gives the speed at test time on a GPU, and the y-axis gives the overall mean average precision, which is the performance metric we said we can compute for object detection. The color of each point corresponds to the architecture of the backbone network, whether it's an Inception network, a MobileNet, a ResNet, or a VGG, and the shape of each dot gives the meta-architecture, that is, whether it's a two-stage method like Faster R-CNN or a single-stage method like SSD. The takeaways here are that two-stage methods tend to work better: using a two-stage method allows the network to have multiple glimpses at the image information, and that tends to improve performance; but they're a lot slower, because you need to run computation independently for each region proposal that you want to consider. Now a second
takeaway is that single-stage methods tend to be a lot faster, because they don't have any per-region computation; instead they share all computation across the entire image. So they tend to be a lot faster, but they tend to be less accurate, because they get fewer opportunities to look at the raw image information. The other takeaway is that bigger networks tend to work better: if you compare using a tiny network like a MobileNet to a very large network like a ResNet-101 or an Inception-ResNet-v2, then as you use larger backbone networks, your performance tends to improve. Okay, so those are the takeaways you should have about these different high-level choices in object detectors. But it turns out this paper was from 2017, and now it's 2019, and this is a fast-moving field, so there have been some changes since then. Let's look at what is the current state of the art on this task. Well, first off we need to shrink the chart in order to fit the current state-of-the-art methods on it. Basically, since 2017, GPUs have gotten faster and people have figured out more tricks to get object detectors to train better. One trick is that we can just train our networks longer, and that tends to make them work better; as GPUs get faster, we can afford to train them for longer. Another thing is a trick called feature pyramid networks, which we don't have time to get into, that gives us a multi-scale feature representation in the backbone. If you combine those two approaches, you get a pretty big performance boost: this green dot here is now Faster R-CNN with a ResNet-101 feature pyramid network, and it gets 42 mean average precision on this COCO dataset with a runtime of only 63 milliseconds at test time; and if you use an even bigger network like ResNeXt-101, it works even better.
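As a small sketch of the bookkeeping for the single-stage detection head discussed a moment ago: with K anchors per position, the head must emit classification scores for C categories plus background, and either one shared box transform per anchor or one per category. The function name and exact channel layout here are assumptions for illustration, not from any particular codebase.

```python
def single_stage_head_channels(k, num_classes, category_specific=True):
    """Channels a 1x1 conv head must output at every spatial position.

    Scores: k anchors times (C categories + 1 background class).
    Boxes: 4 transform values per anchor, either shared across categories
    (category-agnostic) or one set per category (category-specific).
    Conventions vary between implementations; this is one common choice.
    """
    score_channels = k * (num_classes + 1)
    box_channels = k * 4 * (num_classes if category_specific else 1)
    return score_channels, box_channels

# e.g. K = 9 anchors per position, C = 20 object categories
print(single_stage_head_channels(9, 20))                            # (189, 720)
print(single_stage_head_channels(9, 20, category_specific=False))   # (189, 36)
```

So on a 20 by 15 feature map, the category-specific head would produce a 189 by 20 by 15 score tensor and a 720 by 20 by 15 box-transform tensor, each from a single convolutional layer.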
Single-stage methods have gotten quite a lot better as well since 2017. In 2017, single-stage methods tended to work a lot worse than two-stage methods, but nowadays single-stage methods are almost as good: one of these state-of-the-art single-stage methods is actually very competitive in performance with two-stage methods. Another thing is that very, very big models tend to work even better: this is now a 152-layer ResNeXt trained using this feature pyramid network approach, and trained for a very long time, and now it gets all the way up to 49 mean average precision. If you do some additional tricks at test time, like running the image at multiple scales, multiple sizes, and multiple orientations and ensembling all of those predictions, you can get even better improvement; and if you train a bunch of models on a lot of data and ensemble all their predictions, you can get even better performance. The current state of the art, which I checked on the leaderboard for this dataset right before this lecture, was all the way up to 55 mean average precision, which is a pretty massive jump over the state of the art in 2017, which was 35.
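One of the hyperparameters mentioned earlier was the IoU threshold, and IoU (intersection over union) between boxes also underlies the mean average precision numbers quoted here. A minimal sketch of the standard IoU computation (the function name is just for illustration):

```python
def box_iou(box1, box2):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format.

    This is the standard geometric definition; the thresholds applied to
    it (e.g. for labeling anchors or matching detections in mAP) are
    hyperparameters of the detection pipeline.
    """
    # coordinates of the intersection rectangle
    ix1 = max(box1[0], box2[0])
    iy1 = max(box1[1], box2[1])
    ix2 = min(box1[2], box2[2])
    iy2 = min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes: 1.0
print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half overlap: about 0.333
```

A detection is typically counted as correct in COCO-style mAP when its IoU with a ground-truth box exceeds some threshold, averaged over a range of thresholds.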
So this is a super active area of research, and we just don't have time to get into all the tricks that are necessary to achieve these high results, but there's a lot of literature you can read to get some sense of how things are working. Okay, but basically there's a whole lot of stuff going on in these object detection methods, so my general advice is: don't try to implement them yourself, because there's way too much stuff going on in these things, and you're going to get it wrong, and you're not going to match this performance. So really don't try to implement these things yourself, unless you're working on Assignment 5, in which case you will implement an object detector from scratch; hopefully we'll be able to frame it in a way that's reasonably easy for you to understand. But in practice, if you do find yourself needing to use object detection for some real application, you really should not implement it yourself, just because there are way too many tricks involved. In practice there are a bunch of really high-quality open-source codebases that do object detection now, and if you find yourself needing to do object detection in practice, you should really consider building on top of one of these. Google has this TensorFlow Object Detection API that implements pretty much all of these different object detection methods in TensorFlow, and Facebook (of course, we like PyTorch in this class) just released, like a month ago, a brand-new object detection framework called Detectron2, which implements all of these in PyTorch. In fact, if we look back at this chart, these purple dots and green dots are all pre-trained models that are available in Detectron2.
So this orange star dot is not in Detectron2, but other than that, you can just download the code and the model and get that really complicated purple dot over there. That's what I would recommend you do in practice if you find yourself needing to do object detection. So today, I know we've covered a lot, but I thought it was important that we try to get a sense of how things work in object detection. We talked about these different methods for object detection, moving from slow R-CNN to Fast R-CNN to Faster R-CNN, and then moving from there even to these single-stage methods that are really fast. So that gives us our quick one-lecture overview of object detection. I hope I didn't lose too many people on that, because I know it was a lot of material, but next time we'll talk about even more of these localization methods: probably some more object detection stuff, and some methods for segmentation and keypoint estimation.
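As a recap of the anchor machinery shared by the RPN and single-stage detectors discussed in this lecture, here is a minimal sketch of placing K anchors of different sizes and aspect ratios at every feature-map position. The function name, box format, and centering convention are assumptions for illustration, not taken from any particular codebase.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride, sizes, aspect_ratios):
    """Place k = len(sizes) * len(aspect_ratios) anchors at every cell of a
    feat_h x feat_w feature map. Returns an array of shape
    (feat_h * feat_w * k, 4) in (cx, cy, w, h) image coordinates."""
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            # project the feature-map cell back to input-image coordinates
            cx = (j + 0.5) * stride
            cy = (i + 0.5) * stride
            for s in sizes:
                for ar in aspect_ratios:
                    w = s * np.sqrt(ar)   # preserve area s*s across ratios
                    h = s / np.sqrt(ar)
                    anchors.append((cx, cy, w, h))
    return np.array(anchors)

# the running 20 x 15 feature-map example, with 3 sizes x 3 ratios = 9 anchors
A = generate_anchors(15, 20, stride=32, sizes=(64, 128, 256),
                     aspect_ratios=(0.5, 1.0, 2.0))
print(A.shape)  # (15 * 20 * 9, 4) = (2700, 4)
```

Each of these 2700 anchors then gets an objectness (or full classification) score and a box transform from the convolutional head, which is why the number and shapes of anchors are such important hyperparameters.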
Deep Learning for Computer Vision
Lecture 7: Convolutional Networks
All right, welcome back to lecture seven. Today we're going to talk about convolutional neural networks, which is finally the major class of models that we'll use to process images going forward. If you'll recall, in the last lecture we talked about the backpropagation algorithm, which we could use to compute gradients in arbitrarily complex computational graphs. In particular, we saw that this computational graph data structure made it very easy to compute gradients without having to use tons and tons of paper, or lots of whiteboard space, to derive complex expressions; instead we could compute gradients in expressions of arbitrary complexity by using the backpropagation algorithm to walk forward over the graph in the forward pass to compute the outputs, and then walk backward over the graph in the backward pass to compute the gradients. And we had this local viewpoint of the backpropagation algorithm, where in order to add a new function inside a computational graph, we needed to implement a little local operator that knows how to compute its outputs during the forward pass given its inputs, and then in the backward pass knows how to compute the gradients of the loss with respect to its inputs, given the upstream gradient of the loss with respect to its outputs. So now, all we need to do in order to plug new types of functions into our computational graphs is have them conform to the little modular gate API that we talked about last time. And at the end of last time, we ran into a bit of a problem. So far in this class we've talked many times about the linear classifier, and now we've hopefully gotten some familiarity with these fully connected neural network classifiers. We saw that especially the fully connected neural network classifier was a very, very powerful model that could flexibly represent many
different functions, but both of these classifiers had a problem so far, and that problem is that neither of them respects the 2D spatial structure of our input images. If you'll recall, both of these classifiers required us to take our input image, which has spatial structure and RGB color values at every point in space, and destroy all of that spatial structure by flattening the image into a long vector that we could feed into our linear classifiers or fully connected networks. This seems like a problem if we're going to work with image data: whenever you build a machine learning model, it's very useful to build models that somehow take advantage of the structure of the input data. Now, to take advantage of the spatial structure of our input image data, the solution is relatively simple, given all the machinery we've built up around computational graphs and backpropagation: in order to build neural networks that respect the spatial structure of our input data, all we need to do is define a couple of new operators that know how to operate on images, or on spatially structured data. So that's what we're going to talk about today: a couple of new operators that we can introduce into our neural networks that will operate on this two-dimensional spatial data. So far, with fully connected networks, we're very familiar with their two basic components: we have the fully connected layer, which gives the fully connected network its name and is a matrix multiply with the input vector that produces an output vector, and recall that the other critical component of these fully connected neural networks is the nonlinear activation function; I'm showing the ReLU activation function here on the slide. Now, when we move from fully connected neural networks to convolutional neural networks, we need to introduce a couple of extra basic
operations that we can use inside the computational graphs of our models. Today we'll talk about three of these operations that let us move from fully connected to convolutional networks; in particular, we'll talk about convolution layers, pooling layers, and normalization layers. First, let's see how we can extend the idea of the fully connected layer, which, as you'll recall, destroyed all of the spatial information in the input, and move on to the convolution layer, which will serve a similar role in that it will have learnable weights, but will now respect the 2D spatial structure of its inputs. A note before we go on: in today's lecture and going forward, we're only talking about forward passes, and it's up to you to use the machinery from last lecture to figure out how to derive gradients for all of these expressions; we'll be talking exclusively about forward passes from this point on. So then, to recap a little bit, one way to look at the fully connected layer is that during the forward pass it receives some vector; if that vector is a flattened CIFAR-10 image, then it would be a vector of 3072 scalar elements, being 32 by 32 by 3. During the forward pass of the fully connected layer, we simply multiply that vector with a weight matrix to produce an output vector. Now, the convolutional layer will still have this flavor of operating on an input with a weight matrix in some way and then producing an output of the same general type. In particular, the convolution layer will input a three-dimensional tensor, that is, a three-dimensional volume that is no longer a flattened vector. So for something like a CIFAR-10 image, at the very first layer of a convolutional network that's operating on a CIFAR-10 image,
that input volume would now be a three-dimensional tensor of 3 by 32 by 32, where the 3 is called the channel or depth dimension of the input tensor; in the case of a CIFAR-10 image there are three channels, the red, green, and blue color channels of the raw input image, and the two 32s are the height and the width of this three-dimensional tensor. Now, just as our input tensor has some three-dimensional spatial structure, with a convolution layer our weight matrix will also have a three-dimensional structure. In particular, the weight matrix, also sometimes called a filter in the terminology of a convolutional layer, will be a little three-dimensional chunk; in one example, we might have a convolutional filter of size 3 by 5 by 5. The idea is that we're going to take this little convolutional filter and slide it over all spatial positions in the input image to compute another three-dimensional output. But first, notice that there's a constraint between the shape of the input tensor and the shape of one of these convolutional filters: the number of depth channels in the input tensor always has to match the number of depth channels in one of our convolutional filters. That is to say, this convolution operation always extends over the full depth of the input tensor. We'll see some examples of convolution operations that relax this, I think next lecture, but for the purposes of today you should always consider the convolution as extending over the full depth of the input tensor. Now, in order to compute our output, what we're going to do is take that little 5 by 5 by 3 filter and stick it somewhere inside the input image, so that the 3 by 5 by 5 chunk of filter aligns itself to some little 3 by 5 by 5 chunk of the input tensor, and
then, once we've aligned the filter to some spatial position in the input tensor, we can compute a dot product between the filter and the corresponding elements of the input tensor. This is just a dot product, as we've seen in fully connected networks before, but now rather than taking an inner product between a row of a matrix and the entire vector, it's an inner product between one filter and a little local spatial chunk of the input tensor. So in this example, our 3 by 5 by 5 chunk of the input image would result in a dot product of seventy-five elements. It would also be common to add a bias; as with most machine learning models, it's very common to have a bias whenever you have a weight, but for purposes of clarity on the slides we'll often omit biases, though you should remember that they're usually there. So then, by positioning this filter at one position in the input and computing this inner product, we end up computing a single scalar number that tells us, effectively, how much this position in the input tensor matches up with this one filter. That gives us one element of the output tensor. Now we repeat this process: we take this filter and slide it around to every possible position in the input tensor, and each one of those positions results in a single number giving the dot product between the weight tensor and the local chunk of the input at that position. So in this example, if we had a 32 by 32 input and a 5 by 5 filter, if you imagine how that's going to work in your head, it results in a 28 by 28 grid of possible positions at which we could stick that filter, and each of those positions results in a single number from that dot product. So the result of convolving this one convolutional filter, or kernel, with our input tensor will be an output tensor of shape 1 by 28 by 28. But of
course it's never enough to just have one convolutional filter, so in fact a convolutional layer will always involve convolving the input image with a set, or bank, of different filters with different weight values. So we can consider convolving our image with a second convolutional filter, shown here in green to represent that it has maybe different values of the weights. When we convolve with the second, green convolutional filter, we perform the exact same operation: we take this 3 by 5 by 5 convolutional filter, slide it over all positions in the input, and compute an inner product at each position, and this produces a second 1 by 28 by 28 output plane giving all the responses of the input image to that second convolutional filter. These 28 by 28 output planes we sometimes refer to as activation maps of the neural network, because they are two-dimensional maps showing how much each position in the input responds to each one of the convolutional filters in the layer. And of course we don't have to stop at two filters: in general, a convolutional layer will involve convolving with some arbitrary number of filters, which is a hyperparameter that you can set. So in this example, we're convolving our input tensor with six convolutional filters, and at the bottom you can see our six convolutional filters: each of those filters has size 3 by 5 by 5, 3 being the number of input channels from the input tensor, and 5 by 5 being the spatial size of the filter. Now we can collect all six of our convolutional filters into a single four-dimensional tensor that has shape 6 by 3 by 5 by 5, and this four-dimensional tensor has this particular interpretation as being a set of six three-dimensional filters. Then, when we
convolve each of those filters with the input image, we get one activation map per filter. So now we can consider concatenating all of those activation maps, which are the responses of the input image to each of our six convolutional filters, into a single three-dimensional tensor, which in this example has size 6 by 28 by 28. So this looks just like another input image, because the convolutional layer has taken a three-dimensional tensor with depth dimension 3 and height and width 32 by 32, and converted it into another three-dimensional tensor where maybe the height and width have changed, but the spatial structure has been preserved, and all of the computation inside the convolutional layer respects the local structure of the image. And of course, I mentioned that these convolutional layers always have a bias, so for completeness, here we're showing it explicitly. We always have one bias term per convolutional filter, so in this example with six convolutional filters, the bias is simply a vector of six elements that gives us a constant offset; we offset each of the feature maps in the output by the corresponding bias value. So the output is then a 28 by 28 grid, and so far there are two useful, equivalent ways to think about the output of a convolutional layer. One is this notion of activation maps: we can think of the different 28 by 28 maps, or slices, of the output, where each activation map represents the degree to which the entire input image responded to one of those filters, and that's one useful way to think about the spatial structure of the output of a convolutional layer. But a second way to think about the output of a convolutional layer is that it gives us a 28 by 28 spatial grid, which corresponds roughly to the same
spatial grid as the input tensor, and at each position in that spatial grid, the convolution layer computes a feature vector, in this example a six-dimensional feature vector, which tells us something about the structure or appearance of the input tensor at that position in the spatial grid. This maybe seems like a trivial distinction, whether you're slicing the output one way or the other, but depending on how you're thinking about convolution in different contexts, it's sometimes useful to think of the output either as a collection of feature maps or as a grid of feature vectors, so it's useful to have both of those concepts in mind when you think about the output of a convolutional layer. And of course, when actually performing convolution in practice, it's very common to perform it on batches of images. So rather than operating on just a single three-dimensional tensor giving a single input image, it'll be common to operate on a batch of three-dimensional tensors, and given a collection of three-dimensional tensors, we can group them into a single four-dimensional tensor, where the batch dimension at the beginning corresponds to independent images that we're processing in this convolution layer. So the general form of a convolution layer looks something like this: it receives a four-dimensional tensor as input, and that four-dimensional input has shape N, the batch dimension, the number of elements in our mini-batch, by C_in, the number of channels in each of those input images in the batch, by two spatial dimensions H and W, which give us the spatial size of each of the input elements. The output will always have the same batch dimension, because this convolution layer processes each element in the batch independently, and now the output will have a channel dimension C_out, and the
C_out channel dimension might be different from the C_in channel dimension, and the output will of course also have some new spatial extent H' and W', which might be different from the spatial extent of the input. Is this operation of what's happening inside the convolution layer all clear? Very good. So now, because this convolutional layer takes as input a three-dimensional or four-dimensional tensor and produces a four-dimensional tensor as output, we can imagine stacking a whole sequence of these convolutional layers end to end, and by doing so build up a neural network whose basic elements are no longer fully connected layers but are instead convolutional layers. Here's an example of what that might look like for a little cartoon convolutional network with three convolutional layers. Here we're imagining working on CIFAR-10, so the input image has 3 channel dimensions, for red, green, and blue, and 32 by 32 height and width spatial dimensions. We operate on it with our first convolutional layer, whose weight matrix here is 6 by 3 by 5 by 5; you should interpret that, recall, as a set of six convolutional filters, each of which has an input channel dimension of 3 to match the input image and a local spatial size of 5 by 5. We convolve each of those 5 by 5 filters with the input to get our 28 by 28 spatially sized output, and the depth channel of that output will now be 6, for the six filters in that first convolutional layer. Then we can repeat the process: now we've got another three-dimensional tensor, and we can pass it on to another convolution operation. For the second convolution operation, you can see the weight matrix has shape 10 by 6 by 3 by 3, which means we have ten convolutional filters, each of which has a depth dimension of 6 to match the input tensor, and the spatial
size of these filters is now three by three. That produces another output, and then we can keep stacking more and more convolutions on top of each other. Just as we used the terminology of hidden layers when we were talking about fully connected networks, we can use the exact same terminology for these convolutional networks: here this would be a three-layer convolutional network, with our input in red, our first hidden layer in blue, and our second hidden layer in green. But there's actually a problem with the convolutional network that I've written down on this slide. Can anyone spot what might be bad about this particular design? Yeah, so what happens if we stack multiple convolution layers directly on top of each other? Well, each convolution operation is itself a linear operator, so when we compose one convolution with another convolution, the result is itself just another convolution. Just as you might recall from the example of a fully connected network: if we had tried to build one by stacking two fully connected layers directly on top of each other, it had the same representational power as a single fully connected layer. The same thing happens with convolution layers, because they are also linear operators: if we stack two convolutional layers directly on top of each other, the result still has the same representational capacity as a single convolutional layer, perhaps with a different filter size or a different number of channel dimensions, but still a convolutional layer. To overcome this problem, we use the exact same solution that we saw with fully connected networks: in between each of our linear convolution operations, we insert some kind of nonlinear activation function. We will very commonly use the ReLU activation function, which operates element-wise on each element of
this three-dimensional tensor, just as it did in our fully connected networks. Yeah, question? The question was why there are five bias terms for the first convolutional layer, and the answer is that I have a typo on the slide, so thank you for pointing that out. Maybe I was testing you. That was supposed to be six bias terms, one for each of the six filters in that first convolutional layer. Good, that hopefully means it's been very clear what these layers are supposed to do. So then another question you might ask: recall from our study of linear classifiers and fully connected neural networks that we were always able to visually inspect the weights learned at the first layer of the network. We might ask the same question for a convolutional network: is there some way we can visually inspect or interpret the weights at the first layer of a convolutional neural network? You've seen this many times now: the linear classifier had the interpretation of learning a bank of templates, one template per class, and this was expanded with our fully connected neural networks, which learned a set of templates in the first layer that were not tied to any particular class, but where each template extended over the full size of the input image. The convolutional network has a very similar interpretation, except that rather than learning a set of templates the same size as the full input image, it learns a set of templates that are small and local in size. Here I'm showing some learned templates from AlexNet trained on ImageNet: the first layer of AlexNet is actually an 11-by-11 convolution with 64
filters, so we can visualize each of those 64 filters as a little 11-by-11 RGB image and get some sense of what these filters are learning. The filters from AlexNet are very typical of what you tend to see in the first layer of a convolutional network: many of them learn something like an oriented edge detector, detecting edges at different orientations and different frequencies, maybe vertical or horizontal, so they look like local edge detectors or local wavelets. Another thing that's very common to see in first-layer convolutional filters is opposing colors: some of these filters have a green blob next to a red blob, which is somehow looking for opposing colors in a particular orientation in the image. So then the interpretation of the feature map after we apply this first convolution operation is that each of the activation maps in that 3D output tensor gives the degree to which each position in the input image responds to each of those 64 filters. Equivalently, with the viewpoint of the convolution output as a grid of feature vectors, AlexNet gives us a 64-dimensional feature vector at every position in the input image, and the elements of that feature vector correspond to the degree to which the corresponding chunk of the input matches up with each of the templates learned in the first layer. And if you recall Hubel and Wiesel experimenting on the cat, they found that the cat visual system tended to respond to local regions of edges, local patterns in the visual field of the cat's eyes, and a similar effect is going on with these learned convolutional filters in the first layer of a convolutional network. So then we can dive in and
look in a little more detail at the exact spatial dimensions of a convolution operation. Here we have an input image where I've transposed things and dropped the depth dimension, so the depth dimension now points into the screen and I'm hiding it, because it's not relevant when thinking about the spatial dimensions. The input image has spatial size seven by seven, and we imagine convolving with a convolutional filter of size three by three. To see what spatial size the output should be, we just need to count the number of spots where we can drop the filter down into this input image: one, two, three, four, five. There were five positions where we could drop a 3x3 filter into a 7x7 image, which means the spatial size of our output will be five by five. In general, if our input has some size W and the filter has a kernel size of K, then the size of the output will be W minus K plus 1. The idea is that the number of positions where we can place the filter is less than the number of positions in the input, because we bump up against the edges and corners of the input image. Now this seems like a potential problem: it means that every convolution operation performed in this way is going to reduce the spatial dimensions of our input tensor. That puts some constraints on the depth of the networks we might be able to train. For example, if we use three-by-three convolutions, we lose two pixels of resolution every time we do a convolution, which puts an upper bound on the number of layers we could put in our network, because eventually the spatial size of the image would just evaporate away to nothing if we used enough convolutional layers. That seems like a problem: we don't want the
number of layers in our model to be constrained by this evaporative nature of the convolution operation. To fix that, we often introduce padding around the borders of the image before we apply the convolution operation. Here's an example where we're applying a padding of one, which means that before we perform the convolution, we add an extra ring of pixels around the border of the image and fill them all with zeros; this is called zero padding. You might imagine different strategies for padding out the input: for example, you might pull the nearest-neighbor value from the border of the image, or use circular padding, where as you go off the right-hand side of the image you start copying values over from the left, or other schemes like that. But it turns out that in practice, the most common thing we do for padding when training convolutional neural networks is simply to add zeros: it's simple, it's easy, and it seems to work quite well. This introduces an additional hyperparameter into a convolution layer: when we're building a convolution layer, we now need to choose the filter size, the number of filters, and also the amount of padding to apply inside the convolutional layer. Once we've generalized our convolution layer to accept padding, the output size becomes W minus K plus 1 plus 2P, where P is the padding value. A very common way to set the hyperparameter P is to set it equal to the kernel size minus one, over two: if we're doing a 3x3 convolution we pad with 1, and for a 5x5 convolution we pad with 2 on each side. That is called same padding, because when we apply the convolution, the output will have the exact same spatial size as the input.
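To make this arithmetic concrete, here's a small sketch in plain Python (my own illustration, not code from the lecture) of the stride-1 output-size formula, checking that same padding really does preserve the spatial size:

```python
def stride1_out_size(w, k, p=0):
    # Output spatial size of a stride-1 convolution: W - K + 1 + 2P
    return w - k + 1 + 2 * p

# No padding: every conv shrinks the input (7x7 input, 3x3 filter -> 5x5)
assert stride1_out_size(7, 3) == 5

# "Same" padding P = (K - 1) // 2 preserves the spatial size for any odd K
for k in (3, 5, 7):
    p = (k - 1) // 2
    assert stride1_out_size(32, k, p) == 32
```

The loop shows why the rule P = (K - 1) / 2 works for any odd kernel size: the 2P padded pixels exactly cancel the K - 1 pixels lost at the borders.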
So even though padding is technically an extra hyperparameter you can play around with, the most common choice is same padding, which makes it easier to reason about spatial sizes, because the spatial size then tends not to change when we perform a convolution. Now, another useful way to think about what the convolution is doing is the notion of a receptive field. Here we're showing the spatial grid of the input image and the spatial grid of the output after performing a convolution, and recall that we had this interpretation of the convolution as taking our filter matrix and sliding it around, taking inner products at every position of the input. What this means is that each spatial position in the output now depends only on a local region of the input image; in particular, for a 3x3 convolution, one element of the output tensor depends only on a 3x3 region in the input tensor. This 3x3 region is called the receptive field of that value in the output tensor. That's a relatively straightforward thing to think about in the context of one convolution layer, but it's also interesting to think about what happens to these receptive fields as we start stacking convolution layers together. Here we're showing a stack of three convolution layers. On the very right-hand side, one element in the rightmost output tensor depends on a 3x3 region in the second-to-last activation map, and each of those elements in turn depends on a 3x3 region in the activation map before it, which in turn depends on a 3x3 region in the input. So that means that, transitively, this
green region in the final output actually depends on a fairly large spatial region of the input tensor on the far left. In this example with 3x3 convolutions, you can visually see that when we stack two 3x3 convolutions one after another, the output depends on a 5x5 spatial region in the original input, and if we stack three 3x3 convolutions, it depends on a 7x7 region in the input feature map. Now, the term receptive field is sometimes overloaded to mean two different things: the receptive field of a neuron in the previous layer, which is equal to the kernel size of the convolution, and the receptive field of an activation all the way back in the original image, which is the spatial size in the input image that has the potential to affect the value of that neuron after however many convolution layers. What we can see from this diagram is that as we stack convolution layers on top of each other, the effective receptive field in the input grows linearly with the number of convolution layers that we add. But this is maybe a slight problem, because suppose we want to work with very high-resolution images, maybe 1024 by 1024. For the values in the output tensor to have the ability to see a large region of that high-resolution input, the only way we can do that is by stacking up a very, very large number of convolutional layers: each 3x3 convolution only adds two pixels to the receptive field, so we would need something like 500 convolutions for the final output features to depend on the full 1024 by 1024 input image.
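The receptive-field arithmetic can be sketched like this (my own illustration, not from the lecture): each stride-1 K x K convolution adds K - 1 pixels to the receptive field, so growth with depth is linear.

```python
def receptive_field(num_layers, k=3):
    # Receptive field in the input after stacking stride-1 KxK convolutions:
    # starts at 1 pixel and grows by (K - 1) per layer
    rf = 1
    for _ in range(num_layers):
        rf += k - 1
    return rf

assert receptive_field(1) == 3    # one 3x3 conv sees a 3x3 region
assert receptive_field(2) == 5    # two stacked 3x3 convs see a 5x5 region
assert receptive_field(3) == 7    # three see a 7x7 region
# ~500 layers of 3x3 convs are needed to cover a 1024x1024 input
assert receptive_field(512) == 1025
```

The last assertion is the lecture's point: without stride or pooling, covering a 1024-pixel input with 3x3 convolutions takes on the order of 500 layers.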
Having these large receptive fields in the input image seems like a good idea, because the neural network needs to be able to get some global context about the full image it's looking at. A solution to this problem is to add another hyperparameter to our convolution operation. Yeah, was there a question? The question is that the zero padding doesn't seem like it's adding any information to the network. Well, it's actually not meant to add any information; it's more a notational convenience, to prevent the features from shrinking inside the network. Although there is actually an implicit way in which zero padding does add some information to the network: it breaks translation invariance. A convolution should be translation equivariant, to be more technical: if we shift the entire image, the output should shift correspondingly. But once you add zero padding, it actually gives the network the ability to count out from the border; you could imagine it learns a convolutional filter that looks for that row of zeros to know where it is in the input image. So I think that adding zero padding in this way somehow breaks the translational equivariance of the convolution operation and gives the network some latent or implicit ability to know where it is in the input image. I don't know if that's a bug or a feature, but that is something that zero padding actually adds to the representational power of the network. But back to the problem: in order to achieve these very large receptive fields, we would need to stack many, many convolution layers, and we can overcome that by adding another hyperparameter to our convolution, called stride. Now we go back to our example of a seven-by-seven input with a three-by-three
convolutional filter, but now we want it to have a stride of two. That means that rather than placing the convolutional filter at every possible position in the input image, we place it at every other position. The first place we can put it is still the upper left-hand corner, and then we skip over one potential position, because our stride is two, and place it again. With a stride of two, there are only three positions across the input where we can place the convolutional filter, which means the output is now spatially downsampled quite a lot. And once we add stride to the network, it can build up receptive fields much more quickly, because a layer with a stride of two effectively doubles how quickly the receptive field grows in all the layers after it. Now for the more general formulation of how to compute the output size of a convolution: if our input has size W, our filter has size K, our padding is P, and we have a stride of S, then the size of the output is W minus K plus 2P, all divided by S, plus one. You might ask, since we're dividing by the stride, what happens when that expression is not evenly divisible by the stride? That's implementation dependent: usually you truncate, rounding down or up depending on the implementation. But usually we don't do that at all; usually we set up our convolutional layers in such a way that the stride always divides the expression W minus K plus 2P.
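The general formula can be sketched like this, with floor division standing in for the truncation behavior just mentioned (a hypothetical helper of my own, not from the lecture):

```python
def conv_out_size(w, k, p=0, s=1):
    # General output size: (W - K + 2P) // S + 1,
    # flooring when the stride doesn't divide evenly
    return (w - k + 2 * p) // s + 1

assert conv_out_size(7, 3, s=2) == 3     # the strided example from the lecture
assert conv_out_size(32, 3, p=1, s=2) == 16   # 3x3, pad 1, stride 2 halves 32
assert conv_out_size(224, 7, p=3, s=2) == 112 # a common 7x7 stride-2 stem conv
```

With s=1 this reduces to the earlier W - K + 1 + 2P formula, so one helper covers both cases.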
use in a CIFAR-10 network, to recap all of that. We have an input volume of size three channels by 32 by 32 spatial size, and a convolutional layer with ten filters of five by five, with stride 1 and pad 2. Given these settings for the convolutional layer, what should the output size of this tensor be after the convolution? We can apply the formula from the previous slide, and it turns out that the spatial size stays the same, because we're using stride one and same padding, and the number of channel dimensions equals the number of filters, so the output size is 10 by 32 by 32. Now, as a recap, what would be the number of learnable parameters in this layer? We have ten convolutional filters, each of size three by five by five, and each filter has an associated bias, so each filter has 76 learnable parameters; since we've got ten filters, that's 760 learnable parameters. And now another question: how many multiply-adds does this convolution operation take to compute? The output tensor has shape 10 by 32 by 32, and to compute each element of the output we need an inner product between two tensors, each of shape three by five by five. If you multiply all that together, you can see that this convolution operation takes quite a lot of multiply-add operations to compute its output. Also, by the way, this is something that sometimes trips people up, but one thing that's actually used sometimes is a one-by-one convolution, where the kernel size is just one by one. This seems kind of weird, but it actually makes perfect sense. For a one-by-one convolution we might have, in this case, an input tensor with 64 channels and 56 by 56 spatial size, and a convolutional kernel with 32 filters, where each filter is one by one in spatial size and extends over the full 64 channels of the input depth. What this basically means is that we're doing independent dot products at each position. Remember we had this interpretation of these three-dimensional tensors as a grid of feature vectors; well, when you apply a one-by-one convolution, it basically looks like a linear layer that operates independently on each of the feature vectors in that grid. Because of that interpretation, you might sometimes see neural network structures with a one-by-one convolution, then a ReLU, then another one-by-one convolution and another ReLU: some sequence of one-by-one convolutions and ReLUs. That's sometimes called a network-in-network structure, because it's effectively a fully connected neural network that operates independently on the feature vector at every position in space. This seems like kind of a weird thing to do, but you'll actually see it used in practice sometimes. So then, to recap the convolution layer: it takes a three-dimensional input; it has hyperparameters for the kernel size (in general you might imagine non-square kernels, and they do show up sometimes, but the overwhelming majority of kernels are square), the number of filters, the padding, and the stride; it has a four-dimensional weight matrix and a single bias vector; and it produces a three-dimensional output according to this particular formula. Since there are a lot of hyperparameters here, there are a couple of very common settings: it's very common to use square filters, and it's very common to use same padding, so that the output has the same spatial size as the input. There are also a couple of very common overall configurations for convolution layers.
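As a sanity check on the worked example above (10 filters of 5x5, stride 1, pad 2, on a 3 x 32 x 32 input), the counts can be verified with a few lines of plain Python (my own sketch, not from the lecture):

```python
# Worked example: Conv(3 -> 10 channels, kernel 5x5, stride 1, pad 2)
c_in, c_out, k, w, p = 3, 10, 5, 32, 2

out_w = (w - k + 2 * p) + 1                    # stride-1 output size formula
params = c_out * (c_in * k * k + 1)            # +1 bias per filter
macs = c_out * out_w * out_w * (c_in * k * k)  # one inner product per output element

assert out_w == 32         # same padding keeps 32x32
assert params == 760       # 10 filters x (3*5*5 weights + 1 bias)
assert macs == 768_000     # multiply-adds for the whole output tensor
```

The last number makes the lecture's "quite a lot of multiply-adds" concrete: 768,000 multiply-adds for this one small layer.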
It's very common to see stride-1 convolutions with 3x3, 5x5, or 1x1 kernels; you'll see these configurations very commonly. It's also very common to see a convolution layer with 3x3 kernels, padding of one, and a stride of two, which is a spatial downsampling by a factor of two implemented with a convolution layer. These are all settings that you'll see very commonly in convolution layers. Yeah? The question is whether it would be preferable to use a one-by-one convolution instead of a fully connected layer, and I think those have slightly different interpretations. A one-by-one convolution has the interpretation of changing the number of channel dimensions in our three-dimensional tensor, whereas a fully connected layer has the interpretation of flattening the whole tensor and producing a vector output. So the fully connected layer is for cases where you want to destroy the spatial structure in the input; it would be common to see that at the end of the network, when you need to produce category scores. The one-by-one convolutions, by contrast, are often used as a kind of adapter inside a neural network: when you need a three-dimensional chunk of activations to match up with something else that expects a different number of input channels, it's very common to use a one-by-one convolution to adapt, or change, the number of channels in the tensor. So they're not really used in the same way. I should also point out that so far we've talked about two-dimensional convolution, where we have a three-dimensional input and a convolutional kernel that we move around every position in 2D space, but there are other types of convolution that you'll see used out there sometimes. You might imagine a one-dimensional convolution. With a
one-dimensional convolution, our input would be two-dimensional: a channel dimension and one spatial dimension. The weight matrix would then be three-dimensional: again a bank of filters, C-out of them, where each individual filter has size C-in, to extend the full depth dimension, by K, the kernel size. It has the interpretation of plopping down this filter at every position in 1D space and sliding it over the input. These 1D convolutions are sometimes used to process textual data that occurs as a sequence, or to process audio data: if you have an audio waveform that you want to process with a convolutional network, you might use a one-dimensional convolution. And we can go the other way: you'll sometimes see a three-dimensional convolution. With a three-dimensional convolution, each element of the batch is a four-dimensional tensor. Now, I can't really draw a four-dimensional tensor on a two-dimensional whiteboard, but we have the interpretation of a three-dimensional grid where at every point in the grid we have a feature vector of size C-in. The filter in a three-dimensional convolution then has a kernel with three spatial dimensions that extends over the full number of feature dimensions of the input vector, and we have a collection of those things, so the weight matrix is five-dimensional. Each of those filters gets slid to every position in 3D space over that 3D input tensor. These 3D convolutions are sometimes used to process point-cloud data or other types of data that actually live in some native 3D space.
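One way to summarize the weight shapes across 1D, 2D, and 3D convolutions is a small illustrative helper (my own sketch, not from the lecture):

```python
def conv_weight_shape(c_out, c_in, kernel, ndim):
    # Filter-bank shape for an N-dimensional convolution:
    # (C_out, C_in, K, ..., K) with ndim spatial kernel dimensions
    return (c_out, c_in) + (kernel,) * ndim

assert conv_weight_shape(6, 3, 5, 2) == (6, 3, 5, 5)       # the 2D example above
assert conv_weight_shape(64, 16, 3, 1) == (64, 16, 3)      # 1D: text / audio
assert conv_weight_shape(32, 8, 3, 3) == (32, 8, 3, 3, 3)  # 3D: volumetric data
```

This makes the pattern in the lecture explicit: the weight tensor always has one output-channel axis, one input-channel axis, and as many kernel axes as there are spatial dimensions, so a 3D convolution ends up with a five-dimensional weight matrix.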
So then, we've seen that these convolution layers come with quite a lot of hyperparameters to set: their inputs and outputs, their padding, their strides, and so on. If you look up the convolution operation in PyTorch, for example, you'll see that you can change all of these different settings in the convolutional layer, and of course you'll find 1D and 3D convolutions there as well. So we've seen that convolution is a layer with a learnable weight matrix that transforms the tensor as we go along, which brings us to the next key ingredient of a convolutional network: the pooling layer. The way you should think about pooling layers is that they are a way to downsample inside your neural network that does not involve any learnable parameters. We have already seen that we can spatially downsample our inputs in a convolutional network by using a convolution layer with a stride greater than one; another way to downsample the spatial dimensions of our tensor is to use a pooling layer instead. A pooling layer involves no learnable parameters; we just have hyperparameters, namely the kernel size, the stride, and the pooling function. The way this pooling layer functions is very similar to a convolutional layer: we operate on local receptive fields in the input tensor, of some local spatial size given by the kernel size, and within each of those local regions we apply a pooling function, which is some way to collapse that set of input values into one output value. We apply this operation on every slice of our input tensor, and that results in a spatially downsampled output. One very common way
to set up pooling, which maybe makes this more concrete, would be 2x2 max pooling with a stride of two. Here the stride and the kernel size mean the exact same thing that they did in convolution: we're going to carve our input tensor into spatial regions, each of size two by two, and those regions move by two pixels each step. When the stride and the kernel size are equal to each other, the pooling regions are non-overlapping, and it's very common to use this setting of stride equal to kernel size for pooling. So if we have a four-by-four spatial input, it gets carved up into 2x2 regions, given by the different colors here, and within each of those 2x2 regions we want to compute a single output number that summarizes the values within that region. When we use max pooling, we use the max function to compute that number: within each 2x2 region we pick the biggest value, and that ends up as the corresponding bin in the output. So this red region in the input gets collapsed into the single number six, because six is the largest value; the green region gets collapsed into the eight; and so on and so forth.
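Here's a minimal sketch of 2x2, stride-2 max pooling on a 4x4 grid. The example values are my own (the slide's exact grid isn't in the transcript), chosen so the first two pooled outputs are the six and the eight mentioned above:

```python
def max_pool2x2(x):
    # 2x2 max pooling with stride 2 on a 2D grid given as a list of lists;
    # assumes even height and width so regions tile the input exactly
    h, w = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

x = [[1, 1, 2, 4],
     [5, 6, 7, 8],
     [3, 2, 1, 0],
     [1, 2, 3, 4]]
assert max_pool2x2(x) == [[6, 8], [3, 4]]
```

Note that there are no weights anywhere in this function, which is the point: pooling downsamples with a fixed rule rather than learned parameters.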
One reason we might prefer pooling over a strided convolution is that it doesn't involve any learnable parameters; a second is that it introduces some amount of invariance to translation, especially in the case of max pooling. You can imagine that because the max operation selects the largest value within each of these regions, if some of the stuff in the input image had moved around a little bit, it's conceivable that the max value within the region might not have changed, even if the exact position of something in the image changed a little. What that means is that max pooling introduces a small amount of translational invariance into the model, which might be useful for different types of problems. And by the way, this was max pooling; another thing that's very common to see is average pooling, where rather than computing the output by taking the max in the region, we take the average over the region. So the summary for pooling is that it ends up looking quite similar to convolution, with the same stride and kernel size parameters, but rather than taking inner products, we apply some kind of fixed pooling function within each receptive field to compute our output values. So now we've got fully connected layers, activation functions, convolutional layers, and pooling layers, and given all of these we can build a classical convnet. A convolutional network is some kind of neural network that is a composition or combination of all these different operations, and you've got a lot of freedom in how you might choose to hook them up, because there are a lot of hyperparameters and a lot of different types of layers. But a very classical design that you'll see in a convolutional network is some number of repetitions of conv, ReLU, pool, followed by some number of fully connected layers; that's a very classical design that you'll see in convolutional networks. As a concrete example of that classical convnet design, we can look at the LeNet-5 network that Yann LeCun used for character recognition back in 1998. LeNet-5 takes as input a single grayscale image that is 28 by 28 in spatial size, and because it's grayscale, there's only one
input channel, which is the intensity of each pixel. The first thing is a convolutional layer with 20 convolutional filters, each of 5x5 spatial size, using same padding, so the output after the convolutional layer is 20 by 28 by 28. After the convolution we put a ReLU; it's pretty common to put the nonlinearity right after a convolution layer. The next thing is to apply a 2x2 stride-2 max pooling that halves the spatial size of the tensor, taking it from 20 by 28 by 28 down to 20 by 14 by 14. Then we apply another convolutional layer, which now has 50 filters, so the output after that second convolutional layer and its corresponding ReLU has 50 depth dimensions and 14 by 14 spatial dimensions. Then we have another max pooling that again halves the spatial dimensions, putting us down to 50 by 7 by 7. Then, before we go into the fully connected layers, we have a flatten operation that takes this three-dimensional tensor and flattens it out into a vector, like we did with the fully connected networks on CIFAR-10: this flattens the 50 by 7 by 7 tensor into a single vector of size 2450. Then we have a fully connected layer with 500 output channels, followed by another ReLU, which gives us a vector of size 500. Yeah, what's your question? The question is what happens with max pooling if the max is not unique. That's implementation dependent: often you'll just pick one, and which one you pick will depend on the implementation. But I also think it's not very common for that to happen in practice; I guess for that to happen, it might be that the entire region is zero, from all the ReLUs being zero, and I think that's quite rare to actually happen in practice. So even if it happens once in a while, it probably shouldn't be a problem. Yeah?
That's a good question: the question is, max pooling introduces some kind of nonlinearity, so why do we need the ReLU here? And I think you're right that you don't strictly need the ReLU here. I put it in because it's common to use ReLU everywhere in modern convolutional networks, but if you actually look at Yann LeCun's paper, they didn't use any nonlinearity in those layers; they just used the max pooling as the nonlinearity. So I think you're right, but in more modern networks it's common to put the ReLU in even when you do have a max pool and even if it might not be strictly necessary, which just gives you more regularity in your network design. And then of course at the end we have another fully connected layer to produce 10 class scores, because we want to do digit classification and recognize the digits zero through nine. One thing you can notice about this classical convnet design is that as we go through the network, the spatial size tends to decrease, either through pooling layers or through strided convolution layers, while the number of filters in the depth dimension increases. So as the spatial size decreases, the depth increases; the total volume is roughly preserved, we're just squeezing it down one way and stretching it out the other. That's a very common paradigm in convolutional networks. Yeah? The question is, how the heck do you come up with one of these things, right? I think that's the gist of the question. Well, it's a lot of trial and error, but to save you that pain, next lecture I'm going to talk in detail about the history of many convolutional architectures and how they've evolved over time. For the purpose of today's lecture, you should just think of this as magic that Yann LeCun handed to you in 1998; it
happens to work pretty well. But now there's a problem with this classical design. We've seen that it's very common to stack up conv-relu-pool, conv-relu-pool, and you can imagine writing down networks that are arbitrarily deep and arbitrarily big, and you'll be excited about training deep networks on big data; it's awesome. But you'll run into a problem: if you use this very classical design of a convnet, you'll find it's very difficult to get networks to converge once they become very deep. To overcome that, a more recent innovation is to add some kind of normalization layer inside the network that makes it easier to train very deep networks, and the most common of these is called batch normalization. The idea is that we want to receive the outputs from some previous layer and normalize those outputs in some way so that they have a zero-mean, unit-variance distribution. Why? That's a great question. If you read the original paper, they say that it reduces something called internal covariate shift, and it's not well understood exactly what that is or exactly what they were trying to say. But the rough idea is that when you're training a deep neural network, each layer is looking at the outputs of the previous layer, and because all these weight matrices are training simultaneously, as the weight matrix of the previous layer changes over the course of optimization, the distribution of outputs that the next layer sees is also going to change over the course of optimization. Somehow the fact that this second layer sees a changing distribution of inputs during training might be bad for optimization in some way; that's the very coarse, non-rigorous idea of what they mean by internal covariate shift. So then, to overcome this potential problem of internal covariate shift, the idea is that we want to
standardize all of the layers to fit some target distribution. In particular, we want to force the outputs of every layer to be distributed with zero mean and unit variance, since that means the next layer consuming those activations is hopefully seeing inputs from a more stationary distribution over the course of training, which can hopefully stabilize or accelerate the optimization of these deep networks in some way. So how exactly can we do this? Well, given a set of samples x_k from some distribution, we can empirically normalize them by subtracting off the mean and dividing by the standard deviation. And it turns out that computing and subtracting the mean and dividing by the standard deviation is itself a differentiable function, and we know from the idea of computational graphs that when you have a differentiable function, you can just slide it in as a layer in your neural network. So what we'll do with batch normalization is insert a layer into the network whose purpose is to convert its inputs to this more standardized distribution. More concretely, you can imagine a fully connected version of batch normalization that receives an input of size N for the batch dimension and size D for the number of dimensions of each vector; so it's a batch of N vectors, each of size D. Now what we're going to do is, for each element of the vector dimension, compute an empirical mean over the batch dimension: we use the different samples in the batch to compute the average value for each slot in that vector. So we simply compute the empirical mean over the batch dimension to get this vector mu of size D. Then, remembering the expression for the variance, we know that we can compute the standard deviation in a similar way, and that will
then give us the standard deviation (or variance) per channel, for each of those D slots in our input, again averaging over the batch dimension. And then finally we can normalize, to give us zero mean and unit variance, by subtracting the empirical mean and dividing by the empirical standard deviation. Of course, you'll notice that in the denominator we have this plus-epsilon term; that's to avoid dividing by zero. It's a small constant, technically a hyperparameter, but people don't usually play with it much. Now there's a slight problem: we said we wanted to make our inputs zero-mean, unit-variance, and maybe that's a good thing for optimization, but it's actually quite a stiff constraint to place on the network to force all of these layers to always exactly fit this zero-mean, unit-variance distribution. So in practice it's common to add an additional operation after this normalization, where we add learnable scale and shift parameters gamma and beta into the network; each of these is a vector of dimension D. We take our normalized outputs x-hat, which now have zero mean and unit variance, multiply by this learnable scale, and add back in this learnable shift. These basically allow the network to learn for itself what means and variances it wants to see in each element of the vector. In particular, the network now has the capacity to learn gamma equals sigma and beta equals mu, which would cause the batch normalization layer to recover the identity function (in expectation). So that's one intuition for why we want to add these learnable scale and shift parameters back into the network. But now there's a problem with batch normalization, which is the batch part, right? This mu and this sigma are computed by averaging over the batch dimension of our input tensors, and
that's a very weird thing to do, something we have not seen so far in any operation in neural networks. So far, whenever we've had a batch of inputs, all of our operations acted independently on every element of the batch. What that meant is that we could stuff whatever we wanted into a batch: having a picture of a cat in the same batch as a picture of a dog would not change either one's classification scores. Once we have batch normalization, that's no longer the case: now the outputs you produce for each element in the batch depend on every other element in the batch, and that is a very bad property to have at test time. Suppose you're running a web service and you want to process whatever users are uploading at every point in time; it would be really bad to have a network whose predictions depend on what different users happen to be uploading at the same time. That would be a very bad property in a machine learning model. So in any test-time setting, it's always preferable for your model to be independent over the elements of the batch. In batch normalization we achieve that by having the layer operate differently during training and during testing. During training, the batch normalization layer takes these empirical means and standard deviations over the batch of data it sees. But during testing, it will not compute them empirically over the batch; instead, over the course of training we keep track of a running exponential average of all the mu vectors and sigma vectors seen during training, and those become fixed values that are like the average mu and the average sigma seen over the course of
training, and those are now constants. So at test time, rather than using the empirical means over the batch, we use those constant mu and sigma, the average means and standard deviations over the course of training. Doing that allows us to recover this independence among elements of the batch at test time, which is a very good property. And there's another really nice property of using these running means and variances at test time: if mu and sigma are constants, then this batch normalization operation becomes a linear operation. Look at it: for the normalization step we're subtracting a constant and dividing by a constant, and then for the scale-and-shift step we're multiplying by a learned weight and shifting by a learned weight. So at test time the batch normalization operator becomes a linear operator, which means that if our convolutional network had a design with convolution followed by batch normalization, then, since two linear operators can be fused into one linear operator, what's very common to do in practice is to perform that fusion at inference time: take the running means, running standard deviations, and learned scale and shift parameters, and fuse them into the previous convolution operator in the network. What that means is that batch normalization becomes free at test time; it has zero computational overhead, because we can fuse it into the previous linear operator. That's a very nice thing about batch normalization in practice. So we've seen batch normalization in the context of fully connected networks; it's also very common to use batch normalization in convolutional
networks. To see how that works: in the context of fully connected networks, we had this input x of size N by D, and we averaged over the batch dimension to produce empirical means of size 1 by D, and then we applied scale and shift to produce the output. For convolutional networks it looks very much the same, except that rather than averaging only over the batch dimension, we average over the batch dimension as well as over both spatial dimensions of the input. That means our mean and standard deviation are now vectors of size C, our learned scale and shift are also vectors of size C, and we can compute the outputs using the broadcasting functionality that you're familiar with in PyTorch by now. So it's very common to add batch normalization in your networks directly after a convolutional or fully connected layer, and what's very nice empirically about batch normalization is that it makes your networks train a lot faster. Here's a plot from the paper that introduced batch normalization: the black dashed line is a baseline network using only convolution, pooling, and the other operations, with no batch normalization, and the red dashed line is the result of simply adding batch normalization after all the convolution layers, with no other changes to the training or the architecture of the network. You can see that simply by throwing batch normalization into this model, it trains much, much faster. Another nice property of batch normalization is that, empirically, when you train networks with batch normalization, you can increase the learning rates much higher without diverging during training. So here this blue solid line is the same network with higher learning
rates during training plus batch normalization, and you can see that by combining batch normalization in the model with higher learning rates, we're able to train this deep network much, much faster. This is actually a very robust finding that occurs across many different convolutional network architectures. But there are some big downsides to batch normalization. One is that it's really not well understood theoretically; there's this kind of hand-waving around internal covariate shift, but there's not really a clear understanding of exactly why it helps optimization the way it seems to. Another problem with batch normalization is that it does something different between training time and test time, and that is actually a source of bugs in many, many applications. I've personally gotten bitten by this on multiple research projects, by the fact that batch normalization does different things during training and testing. Sometimes it's just a bug in your code and you forget to flip the mode between train and test, and then you're very sad. Or sometimes, if your data is somehow imbalanced, it might actually be inappropriate for your model to be forcing this normalization constraint on the data. For problems like image classification with balanced image classes, maybe this zero-mean, unit-variance normalization is appropriate, but for other types of models where you expect very imbalanced inputs or very imbalanced datasets, it can actually be a big problem, and I've been bitten by batch normalization multiple times. But for good old feed-forward convolutional networks, it tends to work really, really well. Now, one variant of batch normalization that you'll sometimes see: we said that one of the problems with batch normalization is that it behaves differently at training time and test
time, and that's maybe bad for a lot of reasons, right? You want your network to do the same thing during training and testing; during training it was trained to do one thing, and if at test time you swap out the way one of the layers functions, the rest of the model wasn't trained to operate properly with that layer in another mode. So in general we prefer layers that operate the same way at training and test time. One variant of batch normalization that's been proposed, which does operate the same at training and test time, is called layer normalization. Here the idea is very similar: we're still going to compute some means and standard deviations and do this empirical normalization, but the difference is that rather than computing the average over the batch dimension, we compute the average over the feature dimension D. Now this normalization no longer depends on the other elements in the batch, so we can use the same operation at training and test time. This layer normalization operator is actually used fairly commonly in recurrent neural networks and transformers, which we'll talk about in later lectures. Another thing, roughly equivalent to layer normalization, that you'll see for images is called instance normalization: here, rather than averaging over the batch and spatial dimensions, we average only over the spatial dimensions, and again, because our means and standard deviations don't depend on the batch dimension, we can do the same thing at training and test time. There's this beautiful figure that gives some intuition about the relationship between these different types of normalization: if we have a tensor with a batch dimension, a channel dimension, and some spatial dimensions, then you can see that batch normalization averages over the batch
and spatial dimensions, layer normalization averages over the spatial and channel dimensions, and instance normalization averages only over the spatial dimensions. And because there's an empty slot on the slide, you should expect that there's another type of normalization: group normalization, proposed in a paper a year or so ago. Here the idea is that you split the channel dimension into some number of groups and normalize over different subsets of the channel dimension, and that actually tends to work quite well in some applications like object detection. So now we've seen these different components of a convolutional network, and you might be wondering about the freedom you have in how you can recombine these things. I've given you a set of ingredients that you can use to build neural network models that are aware of the 2D structure of images, but I haven't really told you any best principles for how to combine them and go about building neural networks that actually work well. That will be the topic of next week's lecture, so come back and actually learn how to build neural networks. I have a lot of other stuff here that we're not going to be able to get to; maybe that'll come some other time. So let's skip to here: the problem is how we actually build these things in a way that makes sense, and we'll talk about that in next week's lecture.
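The fully connected batch normalization computation described above, including the different behavior at training and test time, can be sketched in a few lines of NumPy. This is a minimal illustration under my own naming, not the lecturer's code; the momentum and epsilon values are typical defaults, not prescribed by the lecture:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, running_mean, running_var,
                      training=True, momentum=0.9, eps=1e-5):
    """Fully connected batch norm over an (N, D) input.
    Training: normalize with per-batch statistics, update running averages in place.
    Testing: normalize with the (constant) running statistics instead."""
    if training:
        mu = x.mean(axis=0)    # (D,) empirical mean over the batch dimension
        var = x.var(axis=0)    # (D,) empirical variance over the batch dimension
        running_mean *= momentum; running_mean += (1 - momentum) * mu
        running_var *= momentum; running_var += (1 - momentum) * var
    else:
        mu, var = running_mean, running_var   # constants at test time
    x_hat = (x - mu) / np.sqrt(var + eps)     # zero mean, unit variance
    return gamma * x_hat + beta               # learnable scale and shift

rng = np.random.default_rng(0)
N, D = 8, 4
x = rng.standard_normal((N, D)) * 3.0 + 5.0   # inputs far from zero-mean/unit-variance
gamma, beta = np.ones(D), np.zeros(D)
rm, rv = np.zeros(D), np.ones(D)
y = batchnorm_forward(x, gamma, beta, rm, rv, training=True)
print(y.mean(axis=0))  # approximately 0 in each of the D slots
print(y.var(axis=0))   # approximately 1 in each of the D slots
```

With gamma fixed to ones and beta to zeros, the output is exactly the normalized x-hat; learning gamma and beta would let the network undo the normalization if that's what minimizes the loss.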
Deep_Learning_for_Computer_Vision
Lecture_10_Training_Neural_Networks_I.txt
Okay, welcome back to lecture 10. We made it to double digits; that's very exciting. So today we're going to be talking about lots of tips and tricks for how you actually go about training neural networks in practice. Last time we left off talking about the hardware and software of deep learning: we talked about different types of hardware that you run these things on, like CPUs and GPUs and TPUs, and we also talked about different software systems you can use for implementing these networks. In particular, we talked about the difference between static and dynamic computational graphs, and some of the trade-offs of both PyTorch and TensorFlow. At this point we've pretty much seen all of the stuff that you need to know to train neural networks, but it turns out there are still a lot of little bits and pieces you need in order to be super effective at training networks. This is kind of a bunch of potpourri that you need to know to be good at training neural networks, and I'd like to break it up into maybe three categories. One is the one-time setup at the beginning, before you start the training process: that's where you choose the architecture, the activation functions, and so on; there's a lot to do before you go and hit that train button. Then, once you begin training, there are certain things you might need to do during the process of optimization, like schedule your learning rates or scale up to many machines. And then, after you're done training, you might need to do some extra stuff on top of your trained networks, like model ensembles or transfer learning. Over the course of today's lecture and Wednesday's lecture, we're going to walk through a lot of these nitty-gritty details about how you actually go about training neural
networks in practice. So today, let's start off by talking about activation functions. You'll recall that in our little model of an artificial neuron, we always have an activation function: some kind of linear function collects the inputs from the neurons in the previous layer, those get multiplied by your weight matrix and summed, and the result is passed through some kind of nonlinear activation function before being passed on to the next layer. And as we recall, having a nonlinear activation function in our networks was absolutely critical for their processing ability, because if we remove the activation function, then all of our linear operations just collapse into a single linear layer. So the presence of one of these activation functions, recall, was absolutely critical in the construction of our neural networks. We saw that there's this big zoo of activation functions, but we didn't really talk much about the different types and their trade-offs when we last saw this slide. So today I want to talk in more detail about some of the pros and cons of these different activation functions, and the other considerations that go into choosing or constructing activation functions for neural networks. Probably the most classic activation function, used in neural network research going back several decades, is the sigmoid activation function, sigmoid because it has this S-curved shape. This was a popular activation function because, for one, it has an interpretation as a probability: one way you might think about neural networks is that each neuron is either on or off, and maybe we want some value between zero and one that represents the probability of that feature being present. The sigmoid activation function has this nice interpretation as the probability for the
presence or absence of a boolean variable. It also has an interpretation in terms of the firing rate of a neuron: recall that biological neurons receive signals from other incoming neurons and then fire off signals at some rate, but the rate at which they fire is nonlinearly dependent on the total rate of all the inputs coming in, and the sigmoid function is a simple way to model that kind of nonlinear dependence on firing rates. So those are a couple of reasons why, classically, the sigmoid nonlinearity was very popular. But there are several reasons why it's actually not such a great nonlinearity in practice. One problem is that it has these flat regimes at the beginning and the end, these saturated regimes with zero gradient, which effectively kill the gradient and make it very difficult to train networks in a robust way. To think about this, what happens to the sigmoid function when x is very small, like minus 10 or minus 100? In that case we'll be in the far-left regime of the sigmoid nonlinearity, so the local gradient will be very close to zero, and that means all of our weight updates will also be very close to zero. Remember: we take our upstream gradients and multiply them by the local gradients to produce the downstream gradients, and if the local gradient is a value very close to zero, then no matter what the upstream gradients were, the downstream gradients will also be very close to zero. This has the effect of making learning very slow, because all of the gradients of the loss with respect to our weight matrices will be very small, and it also gives very problematic training dynamics once we move to deep networks. Suppose we've got a network that's maybe a hundred layers deep, and
then we immediately kill the gradient at some layer; we'll basically have no signal to train any of the lower layers. This problem happens both when x is very small, like minus 10, and when x is very large, like plus 10. So with a sigmoid, if x gets too big or too small, then learning kind of dies for that layer, and the only way learning can proceed is if the activation is somewhere within the sweet spot near x equals 0, where the sigmoid function behaves somewhat linearly. That's the first major problem with the sigmoid activation function: these flat regimes kill the gradient and make learning very challenging. A second problem with the sigmoid nonlinearity is that its outputs are not zero-centered; clearly the outputs of the sigmoid are all positive, since the curve lies entirely above the x-axis. To think about why not having zero-centered outputs is problematic, let's consider what happens when the input to one of our neurons is always positive. Here's our little diagram of one neuron inside our neural network, and we're zooming in on just one of them. Suppose we're building a multi-layer neural network where at every layer we use a sigmoid nonlinearity. That means the inputs to this layer, the x_i, are the result of applying a sigmoid function at the previous layer, which means in particular that all of the inputs x_i to this layer are positive. Now, given that all the x_i that are inputs to this layer are positive, what can we say about the gradients of the loss with respect to the w_i? Remember that to compute the gradient of the loss with respect to w_i, we take the local gradient and multiply by the
upstream gradient. Now, the local gradient is always going to be positive, because the local gradient with respect to w_i is just x_i, and x_i is positive; so the local gradients are all positive. Then we multiply by the upstream gradient, which could be positive or negative, but the upstream gradient is just a scalar: if the upstream gradient is positive, then all of the gradients of the loss with respect to the w_i will be positive, and similarly, if the upstream gradient is negative, then all of the gradients of the loss with respect to the w_i will be negative. So all of the gradients with respect to the w_i are going to have the same sign, and that seems like a bad property for learning: it could be very difficult for gradient descent steps to reach certain values of the weights, because of this constraint that the gradients are either all positive or all negative. As a pictorial example of why this might be a problem, consider the cartoon picture on the right: it's a plot of w1 and w2, and imagine that our initial value for the weights is the origin, and the value of the weights we want to reach in order to minimize the loss is somewhere down at the bottom right. To traverse from the origin to the bottom right, we want to take positive steps along w1 and negative steps along w2, but with the constraint that the gradients of the loss with respect to the weights always have the same sign, there's no way we can take steps that point into that quadrant. So the only possible way for a gradient descent procedure to make progress toward that direction
is to have this very awkward zigzagging pattern, where it moves up while all the gradients are positive, then moves back down to the left while all the gradients are negative, then moves up again; a very awkward zig-zaggy pattern. By the way, this maybe doesn't look so bad in two dimensions, but as we scale to weights with thousands or millions of dimensions, this property becomes very bad: if we have a weight vector with D dimensions, then if you partition up all the possible sign patterns of the elements of that vector, there are 2^D quadrants, or orthants, in that high-dimensional weight space, and under the constraint that the gradients are all positive or all negative, any of our update directions can only move into two of those 2^D possible high-dimensional orthants. So even though this problem looks bad in two dimensions, it gets literally exponentially worse as we move to weight matrices of higher and higher dimension. That seems like a bad property of the sigmoid nonlinearity: the fact that it's not zero-centered, and in particular that its outputs are always positive, leads to these unstable and potentially awkward dynamics during training. I should point out, though, that this whole analysis about the gradients on the weights being all positive or all negative only applies to a single example. In practice we'll often perform mini-batch gradient descent, and once we take an average over multiple elements in a mini-batch, this constraint gets relaxed: even though for a single example in the mini-batch the gradients at each layer would be all positive or all negative, when we consider the gradients with respect to a full mini-batch of examples, this is less of a
problem, because even though the gradients with respect to each element are all positive or all negative, when you average and sum them over the mini-batch, you can end up with gradients for the mini-batch that are sometimes positive and sometimes negative. So I think this is less of a problem in practice than some of the other concerns around the sigmoid nonlinearity, but it is something to keep in mind nevertheless. That was our second problem with the sigmoid nonlinearity: the fact that its outputs are not zero-centered. A third problem with the sigmoid nonlinearity is this exponential function. I don't know if you know how these mathematical functions get implemented on CPUs, but something like the exponential function is fairly expensive, because it's a complicated transcendental function, so it can actually take many clock cycles to compute. I did a small experiment timing this on my MacBook CPU the other day: if I want to compare a
then you'll find that they often all come out to about the same speed; you really need to move to a CPU device to see speed differences between these different nonlinearities. That gives us three problems with the sigmoid function, and of the three I really think number one is the most problematic. Numbers two and three are things you should be aware of, but it's really the saturation killing the gradient that is the most problematic aspect of the sigmoid nonlinearity.

So we can move on from sigmoid and look at another popular nonlinearity that people sometimes use: tanh. Tanh is basically just a scaled and shifted version of sigmoid. If you go look up the definitions of sigmoid and tanh in terms of exponential functions, a bit of algebra shows that tanh is literally just a shifted and rescaled sigmoid. So it inherits many of the same problems: it still saturates for very large and very small values, which still makes learning difficult in those regimes. But unlike sigmoid, it is zero-centered, so if for some reason you have the urge to use a saturating nonlinearity in your neural networks, I think tanh is a slightly better choice than sigmoid. It's still a pretty bad choice, though, due to those saturating regimes.

The next nonlinearity is our good friend the ReLU, the rectified linear activation, and this one is very nice; we're very familiar with it by now. It's very cheap to compute because it only involves a simple threshold: for a naive implementation, all we have to do is check the sign bit of the floating-point number; if it's negative we set the value to zero, and if it's positive we leave it alone.
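Going back to that tanh claim for a moment: you can verify numerically that tanh is just a shifted, rescaled sigmoid. This is a quick NumPy sketch of that identity, not code from the lecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 101)
# tanh is a shifted, rescaled sigmoid: tanh(x) = 2 * sigmoid(2x) - 1
print(np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0))  # True
```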
So ReLU is about the cheapest possible nonlinear function you could imagine implementing: it's very, very fast and can typically be done in one clock cycle on most hardware. It does not saturate in the positive regime, so as long as our inputs are positive we never have to worry about saturation killing our gradients. And in practice, when you try to train the same network architecture with sigmoid versus tanh versus ReLU, you very often find that the ReLU version converges much, much faster, up to six times faster as reported by AlexNet. When you go to very deep networks, say 50 or 100 layers, it can be very challenging to get sigmoid networks to converge at all unless you use something like normalization.

There are still some problems with ReLU. One, of course, is that it's not zero-centered. Just like the sigmoid nonlinearity, whose outputs are always positive, all of the ReLU's outputs are non-negative, so ReLU suffers from the same problem of gradients being all positive or all negative. But since we know in practice that ReLU networks can be trained without much difficulty, that suggests this non-zero-centering problem is less of a concern than the other problems we saw with the sigmoid function. The big problem with ReLU, of course, is what happens when x is less than zero: there the gradient is exactly zero. Imagine what happens when training with this ReLU function. When x is very large, like +10, the local gradient is one and learning proceeds just fine. When x is very negative, like -10, the gradient is identically zero, which means our local gradient is exactly zero, which means our downstream gradients are also exactly zero.
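To make that concrete, here's a minimal NumPy sketch (not the lecture's code) of a ReLU forward and backward pass, showing how the upstream gradient is completely killed for negative inputs:

```python
import numpy as np

def relu_forward(x):
    # Simple threshold: max(0, x)
    return np.maximum(0.0, x)

def relu_backward(x, dout):
    # Local gradient is 1 where x > 0 and exactly 0 where x < 0,
    # so upstream gradients are entirely killed for negative inputs
    return dout * (x > 0)

x = np.array([-10.0, -0.1, 0.1, 10.0])
print(relu_backward(x, np.ones_like(x)))  # [0. 0. 1. 1.]
```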
In some ways this is even worse than a sigmoid: with a sigmoid in the very negative regime the gradients weren't exactly zero, just very, very small, so even with tiny gradients we still had some hope that learning could proceed. But with a ReLU, once our values are less than zero, all of our gradients are identically zero and learning cannot proceed; the unit is completely dead. This leads to a potential problem that people sometimes worry about, called a dead ReLU. The idea is this: suppose the weights feeding one of our ReLU units become large in magnitude in such a way that the unit has a negative activation for every data point in our training set. In that case, the weights corresponding to that unit will always have gradients identically equal to zero as we iterate over the training set. That's sometimes referred to as a dead ReLU, because it never has the potential to learn: once the ReLU gets knocked off the data cloud of your training examples, all the future updates it will ever receive are zero, and it will be stuck in limbo, hanging outside your data cloud for the rest of eternity, no matter how long you train. That seems like a problem. In contrast, we need our ReLUs to always stay intersecting with some part of the data cloud; those are the active ReLUs, and they will receive gradients and they will train. I should point out that this problem only occurs if the activation is negative for your entire training set; as long as there is some element of your training set where the unit receives a positive activation, that weight has the potential to get some gradient and the potential to learn. So one trick I've seen people sometimes do, though I think it's less popular now, to avoid this potential
problem of dead ReLUs, is to initialize the biases of layers that use ReLU to a slightly positive value. That makes it harder to fall into the negative regime, and harder to end up with dead ReLUs.

So the two big problems with ReLU are that it's not zero-centered and that it has zero gradient in the negative regime. There was an alternative proposed, called the leaky ReLU, that addresses both of these problems. Leaky ReLU is very simple: it looks just like ReLU in the positive regime, where it computes the identity function, but when the input is negative we multiply it by a small positive constant. So rather than being identically zero in the negative regime, the leaky ReLU instead has a small positive slope there. You can imagine it as a ReLU that leaks out a little bit of its information in the negative regime. This 0.01, the slope of the leaky ReLU in the negative regime, is a hyperparameter that you need to tune for your networks. The advantages of the leaky ReLU are that it does not saturate in the positive regime, it's still computationally efficient because it only takes a couple of instructions to execute, and, unlike ReLU or sigmoid, it never dies: the local gradient is never actually zero, because in the negative regime it's 0.01, or whatever other value we chose for that hyperparameter. What this means is that leaky ReLUs don't suffer from the dying ReLU problem; instead, in the negative regime they just receive smaller gradients and keep the potential to learn.
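As a minimal sketch of the leaky ReLU just described (my own NumPy illustration, not lecture code), note how the backward pass never produces an exactly-zero local gradient:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Identity in the positive regime; small positive slope alpha
    # in the negative regime instead of a hard zero
    return np.where(x > 0, x, alpha * x)

def leaky_relu_backward(x, dout, alpha=0.01):
    # The local gradient is alpha (not zero) for x < 0, so the unit can't die
    return dout * np.where(x > 0, 1.0, alpha)

x = np.array([-100.0, -1.0, 2.0])
out = leaky_relu(x)        # [-1.0, -0.01, 2.0]
grad = leaky_relu_backward(x, np.ones_like(x))
```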
But now an annoyance with this leaky ReLU is that 0.01, the leak hyperparameter that we need to set. As you've probably experienced when trying to tune even linear classifiers on the first couple of assignments, the more hyperparameters you need to search over, the more pain and frustration you have when training your models. So whenever you see a hyperparameter, one instinct you should have is: maybe we should try to learn that value instead. Indeed, that is exactly the idea behind the parametric ReLU, or PReLU, which looks just like the leaky ReLU except that the slope in the negative regime is now a learnable parameter of the network. This is kind of a funny thing, because now the nonlinearity itself has learnable parameters, but we can compute with it just fine: in the backward pass we back-propagate into this value alpha, compute the derivative of the loss with respect to alpha, and make gradient descent steps on alpha. This alpha might be a single constant for each layer, or it could be a separate value for each channel of your convolution or each output element of your fully connected layer. You'll see people use that sometimes. One concern you might have with any of these ReLU-like functions is that they have a kink at zero: they're actually not differentiable there at all. You might ask what happens at zero. It actually doesn't really matter, because inputs land exactly at zero very rarely, so you can pretty much choose whatever you want to happen at zero and it's probably going to be fine; in practice you usually just pick one side or the other.
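Here is a sketch of a PReLU with a single shared learnable slope, including the gradient with respect to alpha itself. This is my own NumPy illustration under the assumption of one alpha shared across the whole layer, not the lecture's code:

```python
import numpy as np

def prelu_forward(x, alpha):
    # Like leaky ReLU, but alpha is a learnable parameter
    return np.where(x > 0, x, alpha * x)

def prelu_backward(x, alpha, dout):
    # Gradient w.r.t. the input, exactly as in leaky ReLU
    dx = dout * np.where(x > 0, 1.0, alpha)
    # Gradient w.r.t. alpha: the output depends on alpha only where x <= 0,
    # and there d(alpha * x)/d(alpha) = x; sum because alpha is shared
    dalpha = np.sum(dout * np.where(x > 0, 0.0, x))
    return dx, dalpha

dx, dalpha = prelu_backward(np.array([-2.0, 3.0]), 0.1, np.ones(2))
# dx = [0.1, 1.0]; dalpha = -2.0
```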
But one slightly more theoretically grounded nonlinearity that you'll see sometimes is the exponential linear unit, or ELU. This attempts to fix some of the problems of ReLU: it's basically a ReLU that is smooth and tends to be more zero-centered. You can see the mathematical definition of the exponential linear unit at the bottom of the slide: in the positive regime it computes the identity function, just like ReLU, but in the negative regime it computes an exponential instead. On the left-hand side it looks a little bit like the tail end of a sigmoid, and we also see that it asymptotes to some nonzero negative value, like -1, as we go to the left. This is designed to avoid the zero-gradient problem we were concerned about with the normal ReLU. Because the negative regime is actually nonzero, this nonlinearity can have closer-to-zero-centered outputs, and there's some math in the paper to support that conclusion. The problems here are that the computation still requires this exponential function, which is maybe not so good, and that it has an additional hyperparameter alpha that we need to set. Although I guess you could try to learn alpha as well and come up with a parametric exponential linear unit, a "PELU"; I've never actually seen anyone do that in practice, but maybe you could try it and write a paper, who knows. And it doesn't stop there. I think you should get the idea by now that the nonlinearity is a small modular piece of a neural network that you can swap out to try new things and run controlled experiments.
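The ELU definition from the slide can be sketched in a few lines of NumPy (a hedged illustration of the standard formula, with alpha = 1 as a default assumption):

```python
import numpy as np

def elu(x, alpha=1.0):
    # Identity for x > 0; alpha * (exp(x) - 1) for x <= 0, which is
    # smooth at 0 and asymptotes to -alpha far in the negative regime
    return np.where(x > 0, x, alpha * np.expm1(x))

v = elu(np.array([-100.0, 0.0, 5.0]))  # approaches -1 on the far left
```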
So it's very appealing for a lot of people to try to modify neural networks by coming up with new nonlinearities, and you'll see a lot of papers that propose slight variants and argue for why their ideas are slightly better than what came before. As a result, there are a lot of these things out there. One that's kind of fun is the SELU, or scaled exponential linear unit. This one is fun because it's just a rescaled version of the ELU that we saw on the previous slide; the only difference is that we set alpha equal to one very long, seemingly arbitrary constant and lambda equal to another very long, seemingly arbitrary constant. The reason you might want to do this is that if you choose alpha and lambda in this very particular way, then a very deep neural network with the SELU nonlinearity has a kind of self-normalizing property: as your depth goes to infinity, the statistics of your activations stay well behaved and even converge to some finite value. What this means is that if you have very, very deep networks with SELU nonlinearities, people can sometimes get these things to train even without using batch normalization or other normalization techniques. Unfortunately, in order to understand exactly why this is true, you have to work through 91 pages of math in the appendix of the paper, so if you have a lot of patience you can go through it and figure out exactly why those constants are set to those particular values. I think that's a little bit fun, but the bigger takeaway around all these nonlinearities is that in practice they really don't vary too much in their performance.
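For reference, those two "seemingly arbitrary" constants are fixed values from the SELU paper. A minimal sketch of the function (my own NumPy illustration, assuming the standard published constants):

```python
import numpy as np

# The two long constants from the SELU paper (Klambauer et al.)
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    # SELU is just lambda times an ELU with this particular alpha
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))
```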
Here's a plot from another of these nonlinearity papers that compared the effects of different nonlinearities on different network architectures, all on the CIFAR-10 dataset. If you look at this plot, some of the bars are higher than others, but the most important thing to take away is just how close all of these are. Something like ReLU on ResNet gives us 93.8; leaky ReLU is 94.2; Softplus, a different one, is 94.6. Basically all of these are within a percent of each other in final accuracy on CIFAR-10, and the trends here are just not consistent: if we look at a ResNet, then something called a GELU or a Swish nonlinearity slightly outperforms ReLU, but if we look at a DenseNet, then ReLU is slightly better than or equal to anything else. So the real takeaway for nonlinearities, I think, is just: don't stress out about it too much. As long as you don't make a bad choice like sigmoid or tanh, if you use any of these more reasonable, more modern nonlinearities, your network is going to work just fine, and maybe for your particular problem you'll see a variance of one or two percent in final accuracy depending on which nonlinearity you use; but that's going to be very dependent on your dataset, your model architecture, and all your other hyperparameter choices. So my advice is: don't stress out too much about activation functions; basically, don't think too hard, just use ReLU and it'll work just fine. Now, if you're in some situation where you really must squeeze out that last percentage point, or half or a tenth of a percentage point, of performance, that is the time to consider swapping out and experimenting with these different nonlinearities. In that case, something like a leaky ReLU, an ELU, a SELU, a GELU, or whatever other thing you can
think of that rhymes with "LU" is probably a reasonable choice to try, but don't expect too much from it. And basically don't use sigmoid or tanh; those are terrible ideas, your network will not converge, so just don't use those. That's my executive summary on activation functions. Any questions on activation functions before we move on? Yeah. The question was: all of these activation functions are monotonic, they increase, so why don't we use something like sine or cosine? Well, I actually lied a little bit: there's this GELU nonlinearity that I didn't talk about that is actually non-monotonic. But the general reason is that if your activation function is non-monotonic, like sine or cosine, if it increases and then decreases, then there exist multiple values of x that map to the same y, and that can be problematic for learning because it destroys information in some way: if your activation function is not invertible, it's destroying information. And the reason we wanted activation functions in the first place was not so much to perform useful computation; it was more just to add some nonlinearity into the system to allow it to represent nonlinear functions. I have seen people try to show off and train with sine or cosine or something; if you use batch normalization and are very careful, I think you could probably get that to train, but I would not recommend it. That's a very astute observation, though, and the GELU nonlinearity is a slightly interesting one to check out; I think you should read that paper if you're interested. The idea there is that they actually interpret the non-monotonicity of the activation function as a bit of regularization: they view it as an expectation, kind of combining it with something like dropout, which we'll talk about later, and they show that if you take this
expectation, combined with some stochastic regularization, it's roughly equivalent, in expectation, to a non-monotonic activation function. But in general those are not very widely used, and most activation functions you'll see in practice are indeed monotonic. Any other questions on activation functions? Great: just use ReLU.

So the next thing we need to talk about is data preprocessing, and this is something you've been doing already in all the notebooks. If you've read through the starter code and the data loading code, the parts outside where we asked you to write code, then you'll have seen that you've already been doing data preprocessing at the beginning of all your homework assignments. Basically, the idea is that before we feed our data into the neural network, we want to perform some preprocessing on it to make it more amenable to efficient training. As a cartoon, you can imagine the data cloud of your training set, shown here in red on the left, where the x and y values are two features of your dataset, maybe the red value of one pixel and the blue value of another pixel if you're looking at images. The idea is that your original data cloud could be very long and skinny and shifted very far away from the origin. In particular, if we're thinking about images, the way we natively store image data is usually as pixel values between 0 and 255, so the data cloud for raw image data is located very far from the origin. Now, before we feed the data into the neural network, we want to standardize it in some way: pull it in toward the origin by subtracting the overall mean of the training dataset,
and then rescale each feature so that every feature has the same variance, which we can do by dividing by the per-feature standard deviations computed on the training set. The reason we might want to perform preprocessing in this way also ties back to the discussion we had about the bias problem with sigmoid nonlinearities: recall that if all of the inputs are positive, then all of the gradient directions are always all positive or all negative, and by the same logic, if all of our training data is positive, then all of our weight updates are also constrained to have one sign or the other. But for training data we can easily fix this problem by just rescaling everything before we feed it into the neural network. So for images it's very common to subtract the mean and divide by the standard deviation, but for other types of data, maybe non-image data, you will sometimes see other types of preprocessing, called decorrelation or whitening. The idea here is that if we compute the covariance matrix of the data cloud over the entire training set, we can use it to rotate the data cloud so that the features are uncorrelated; that would be the green data cloud in the middle of the slide, where we've taken the input data cloud, moved it to the origin, and then rotated it. Another thing you'll sometimes do is perform this zero-mean, unit-variance fixing after decorrelating the data; that corresponds to stretching the data cloud out into the very well-behaved circle at the center of the coordinate axes, shown in blue on the right.
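The steps just described, standardization, decorrelation via the covariance eigenvectors, and whitening, can be sketched in NumPy. This is my own illustration on a toy two-feature dataset, not the lecture's code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "training set": 1000 points, 2 correlated features far from the origin
X = rng.normal(size=(1000, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]]) + 100.0

# Zero-center and rescale each feature (statistics from the training set)
mean, std = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mean) / std

# Decorrelate: rotate by the eigenvectors of the covariance matrix
cov = np.cov(X_norm, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
X_decorr = X_norm @ eigvecs

# Whiten: decorrelate and rescale each direction to unit variance
X_white = X_decorr / np.sqrt(eigvals + 1e-8)
```

After these steps the covariance of `X_white` is approximately the identity, i.e. the round blue data cloud from the slide.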
If you perform both decorrelation and this normalization, that's often called whitening your input data. This is fairly common if you're working on non-vision problems where your inputs are specified as low-dimensional vectors in some way, but for image data it's not so common. Another way to think about why data preprocessing is helpful is to think about what happens if we try to learn even a linear classifier on non-standardized, non-normalized data. On the left, suppose we have a data cloud located very far from the origin, and we want to find a linear classifier that separates the blue class from the red class. If we initialize our weight matrix with very small random values, as we've typically done, then we can expect that at initialization the boundary the linear classifier learns passes near the origin. Now, if our data cloud is very far from the origin, then making very small changes to the values of the weight matrix results in very drastic changes to the way that decision boundary cuts through the training data cloud. Intuitively, that means that if your data is not normalized, you get a more sensitive optimization problem, because very small changes in the weight matrix can result in very large changes to the overall classification performance of the system. In contrast, on the right, if before training we had normalized our data by moving it to the center, the entire data cloud now sits over the origin, and we can expect this to result in a better-conditioned optimization problem. Yeah, a question? The question is: do people ever use color spaces other than RGB? I think people do use that sometimes in practice. I've seen it less for image classification and more for image-processing-type tasks like super-resolution or
denoising, where it's more common to feed inputs to the network in some other color space. In practice, though, it usually doesn't matter too much which color space you use, because the equations for converting from one color space to another are usually fairly compact, simple equations, and the network could learn to perform that conversion implicitly within the first couple of layers if it really wanted to. So you do sometimes see people feed input data in different color spaces, but in practice it usually doesn't make a massive difference to the final results.

Talking a bit more concretely about what people actually do in practice for images, there are a couple of things that are very common. One is to compute the mean image over the training set; this, for example, was done in AlexNet. If your training dataset is something like N_train x 32 x 32 x 3, then averaging over the entire training set gives you a mean image of 32 x 32 x 3, and one thing you can do is just subtract that mean image from all of your samples. Another very common choice is to subtract the per-channel mean: you effectively compute the mean RGB color across the entire training dataset and then subtract only that mean color from each pixel; this is the way that VGG was trained, for example. Another very common choice is to subtract the per-channel mean and divide by the per-channel standard deviation: we compute the mean RGB color and the standard deviation of the three color channels over the training set, which gives us two three-element vectors, and we use them to subtract the mean and divide by the standard deviation. That's the standard preprocessing used for residual networks.
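The ResNet-style per-channel statistics can be sketched in a few lines of NumPy; this is a hedged illustration on fake data, not the actual ResNet training pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake training set: N x H x W x 3 images with raw pixel values in [0, 255]
train = rng.uniform(0.0, 255.0, size=(100, 32, 32, 3))

# Per-channel mean and std over the whole training set: two 3-vectors
channel_mean = train.mean(axis=(0, 1, 2))
channel_std = train.std(axis=(0, 1, 2))
train_norm = (train - channel_mean) / channel_std

# At test time, reuse these *training* statistics on new images
```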
Any more questions about data preprocessing? Yeah. The question is: what do we do during training versus testing? Whenever you're doing preprocessing, you always compute your statistics on the training set, and the reason we do that is not computational efficiency; it's to simulate how we would actually use the network out in the wild. In the wild there is no such thing as a training set; there's just the real-world data on which we want to run this classifier, and there's no reasonable way you could expect to recompute those statistics all the time over whatever real-world data comes in. So to simulate that process, we always compute our statistics on the training set and then use the exact same normalization statistics on the test set. Yeah. The question was: if we're using batch normalization, do we still need data preprocessing? This is a very intuitive thought: what if I just put batch normalization as the very first thing in my network, before any convolution or fully connected layers? I think that would actually work, but I think it works a little bit worse than doing the preprocessing explicitly, so in practice people still prefer to preprocess explicitly and also use batch normalization.

OK, so the next thing we need to talk about is weight initialization. Whenever you start training your neural networks, you have to initialize your weights in some way. So here's a question: what happens if we just initialize all of the weights to zero and all the biases to zero? This is going to be very bad, because if all the weights are zero and all the biases are zero then, assuming we have ReLU or some other reasonable nonlinearity, the outputs are all zero and don't depend on the inputs,
and even worse, the gradients are all zero, so we're totally stuck. In practice we cannot initialize to zero; that's going to be very bad. This problem of initializing to zero, or more generally of initializing to a constant, is sometimes described as a failure of symmetry breaking, because there is nothing to distinguish different neurons in the network. With constant initializations, a very common failure mode is that we compute the same gradient for every neuron and the network isn't actually able to learn in any useful way. So in practice what we've always done instead is just kind of hand-wave and say: let's initialize with small random numbers, and this is what you've mostly done on your homework assignments so far. For example, we could initialize from a zero-mean Gaussian and set the standard deviation as a hyperparameter. But let's dig into this a little more, because it turns out that this strategy of initializing with small random Gaussian values works reasonably well for shallow networks, but it does not work so well once you move to deeper and deeper networks. To see why, let's think about activation statistics and how they propagate as we move forward through a deep network. This little snippet of code computes the forward pass for a six-layer fully connected network with hidden dimension 4096, using a tanh nonlinearity (so not following the advice I gave you a couple of slides ago), and then plots the statistics of the hidden unit values at each of the six layers. Here, each of these plots shows a histogram of the activation values for one of the six layers of this deep
neural network. What we can see is that for the first layer, after a single weight matrix and a tanh nonlinearity, we get a reasonable distribution of values, but as we move deeper and deeper into the network, the activations all collapse toward zero. This is going to be very, very bad for learning; think about what it means for the gradients. If all of the activations are zero, then all of the gradients are also approximately zero, because remember that whenever we compute the local gradient on a weight, that local gradient is equal to the activation at the previous layer, if you recall that equation from a few slides back. What that means is that for this very deep network, when the activations collapse to zero, the gradient updates will also all be very close to zero and the network will not learn very effectively. OK, so maybe our problem is that this weight initialization was too small. Instead of initializing with standard deviation 0.01, we could try 0.05 instead, in case the problem was just that our weight values were too small. It turns out this is also really, really bad: if our weight matrices are too big, then all of our activations get pushed into the saturating regime of the tanh nonlinearity. If we look at these histograms again, when we initialize with these larger weight matrices, all the activations sit in the saturating regimes of tanh, and again this means the local gradients will be zero, everything will be zero, and learning will also not work.
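The activation-collapse experiment from the slide can be reproduced in a few lines. This is my own NumPy sketch in the spirit of the slide's snippet (six tanh layers of width 4096, weights from a Gaussian with std 0.01), not the exact lecture code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4096))

stds = []
# Six-layer fully connected net with tanh, weights from N(0, 0.01^2):
# the "small random numbers" initialization that fails for deep nets
for i in range(6):
    W = 0.01 * rng.normal(size=(4096, 4096))
    x = np.tanh(x @ W)
    stds.append(x.std())
    print(f"layer {i + 1}: activation std = {x.std():.4f}")
# The activation std shrinks layer by layer, collapsing toward zero,
# and with it the local gradients on the weights (which equal the
# previous layer's activations)
```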
So we saw that when we initialize the weights too small, the activations all collapse to zero, and when we initialize the weights too big, the activations all spread out into the saturating regimes of the nonlinearity. Somehow we need to find a value for this weight initialization that lands in a Goldilocks zone between too small and too large. It turns out that one such Goldilocks initialization for these networks is the Xavier initialization, named after the first author of the paper cited at the bottom. Here, rather than setting the standard deviation as a hyperparameter, we instead set the standard deviation to 1 over the square root of the input dimension of the layer. It turns out that if we initialize our weights in this very special Goldilocks way, we get very nicely scaled distributions of activations no matter how deep our network goes. I should also say that this derivation is for a fully connected network; for a convolutional network, Din becomes the number of inputs to each neuron, which is the number of input channels times the kernel size times the kernel size, if you want to do the same thing for convolutional networks. To derive the Xavier initialization, the trick is that we want the variance of the output to be equal to the variance of the input. The way we set this up is we imagine that the linear layer computes y = Wx, a matrix multiply of the weight matrix W and the input activation vector x. Then each scalar element of the next layer, ignoring the bias, is an inner product between one row of W and the entire input vector x. Now, if we want to compute the variance of one of these outputs y_i, and we make the simplifying assumption that all the x's and all the w's are independent and identically distributed, then we can use the properties of variance to
simplify in this way, and then we know that the variance of each element of the output layer, y_i, will be equal to Din times the variance of x_i times w_i. Then we make a couple of other simplifying assumptions. We assume that x and w are independent random variables, and when you take the variance of the product of two independent variables, you look up on Wikipedia what to do and you get a formula that looks something like the one on the slide. The next assumption is that all of the inputs x are zero-mean, which may be reasonable if they're coming out of a batch normalization layer in the previous layer, and we also assume that the w's are zero-mean, which may be reasonable if we assume they were initialized from some kind of Gaussian distribution. Once we have all these assumptions, we can see that if we set the variance of the w_i's to be 1 over Din, then the variance of y_i will be equal to the variance of x_i. That motivates this derivation of the Xavier initialization: it's this idea of trying to match variances between the inputs of the layer and the outputs of the layer, and we see that if we set the standard deviations in this very special way, then everything works out. But there's a problem: what about ReLU? This whole derivation also hinges on the nonlinearity, and that's the reason I actually used tanh in this experiment, because this derivation only talks about the linear layer, but that linear layer will be followed by a nonlinearity. For something like tanh, which is symmetric around zero, as long as we match the variances of the linear layer, then things will generally be OK as we move through this zero-centered nonlinearity. But for ReLU, things would
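To make this concrete, here is a minimal NumPy sketch of the Xavier rule (the function name and sizes are mine, not from the lecture): initialize each weight matrix with standard deviation 1/sqrt(Din) and push a batch of activations through a deep tanh network; the activation statistics stay in a healthy range instead of collapsing to zero or saturating.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(d_in, d_out):
    # Xavier: std = 1 / sqrt(d_in), so Var(y_i) = d_in * Var(w) * Var(x) = Var(x)
    return rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, d_out))

# Simulate a deep fully connected tanh network at initialization.
x = rng.normal(size=(256, 512))            # a batch of 256 input vectors
for _ in range(10):                        # 10 hidden layers
    x = np.tanh(x @ xavier_init(512, 512))

final_std = float(x.std())                 # stays well away from 0 and from +/-1
```

Swapping the std for something much smaller or larger reproduces the collapsed and saturated histograms described above.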
be very bad. If we do this exact same initialization but now build our deep network with ReLU instead of tanh, then our activation statistics again go totally out of whack. These histograms look a little funny because there's a huge spike at zero, since ReLU kills everything below zero, but if you zoom in on the nonzero part of these histograms, you'll see that for a deep network the activations are all collapsing towards zero again, which will again give us zero local gradients and no learning. So to fix this we need a slightly different initialization for ReLU nonlinearities, and the fix is just to multiply the variance by two. So for Xavier we have standard deviation sqrt(1/Din); now for ReLU we have standard deviation sqrt(2/Din). Intuitively, that factor of 2 deals with the fact that ReLU is killing off half of its inputs and setting them to zero. This is called the Kaiming initialization, after the first author of the paper that introduced it, or sometimes MSRA initialization, after Microsoft Research Asia, where he worked when writing this paper. So when you're initializing networks with ReLU nonlinearities, you need to use this MSRA initialization for things to work out well. And it turns out that, remember a couple of lectures ago we talked about VGG, and that in 2014 there was no batch normalization and people could not get VGG to converge without crazy tricks. Well, it turns out that once you have this Kaiming weight initialization scheme, that is sufficient for getting VGG to train from scratch, and actually the paper that introduced it claimed exactly that: they got VGG to train from scratch by simply changing the initialization strategy. But remember, we've kind of moved on from VGG now, and we talked in the CNN architectures
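The same experiment with the Kaiming/MSRA correction can be sketched like this (a minimal illustration with names of my choosing, not the paper's code): scaling the std by sqrt(2/Din) keeps the post-ReLU activation statistics stable across many layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def kaiming_init(d_in, d_out):
    # Kaiming / MSRA: std = sqrt(2 / d_in); the factor of 2 compensates for
    # ReLU zeroing out half of its inputs.
    return rng.normal(0.0, np.sqrt(2.0 / d_in), size=(d_in, d_out))

x = rng.normal(size=(256, 512))
stds = []
for _ in range(10):
    x = np.maximum(0.0, x @ kaiming_init(512, 512))  # linear layer + ReLU
    stds.append(float(x.std()))
```

With plain Xavier in place of `kaiming_init`, `stds` shrinks toward zero layer by layer; with the factor of 2, it stays roughly constant.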
lecture that the standard baseline architecture you should really consider is a residual network, and it turns out that this MSRA or Kaiming initialization is not very useful for residual networks. The way we can see this is: suppose we have a residual network, and we've somehow constructed the initialization of those internal conv layers such that the variance of the output matches the variance of the input. Well, if we wrap those layers with a residual connection, then the variance of the output after the residual connection will always be strictly greater, because we're adding the input back in again. That means that if we were to use this MSRA or Xavier initialization with a residual network, we would expect the variances of our activations to grow and grow and grow over many, many layers of the network, and that would be very bad; that would lead to exploding gradients and bad optimization dynamics again. The solution for residual networks is fairly simple. What we normally do is initialize the first layer of each block with your MSRA initialization, and then initialize the last layer of the residual block to be zero, because that means that at initialization the block computes the identity function, and the variances will again be perfectly preserved at initialization. So that's the way we prefer to initialize residual networks. I'd also point out that this whole area of how to initialize your neural networks, and the reasons you might prefer one initialization scheme over another, is a super active area of research, and you can see papers even from this year that are giving new takes on ways to initialize neural networks. So, are there any other questions on initialization? [Student question] That's not quite correct. So the idea in the question was that maybe with initialization we just want to be as close as possible to the global
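A sketch of that residual-block trick, using a simple fully connected block for illustration (the structure is mine; real ResNets use conv layers and batch norm): the first layer gets an MSRA initialization and the last layer is all zeros, so at initialization the whole block is exactly the identity and variances are preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128

W1 = rng.normal(0.0, np.sqrt(2.0 / d), size=(d, d))  # first layer: MSRA init
W2 = np.zeros((d, d))                                # last layer: zero init

def residual_block(x):
    # out = x + F(x); with W2 == 0, F(x) == 0 and the block is the identity.
    return x + np.maximum(0.0, x @ W1) @ W2

x = rng.normal(size=(32, d))
out = residual_block(x)
```

Since `out == x` at initialization, Var(out) == Var(x) exactly, and stacking many such blocks cannot blow up the activation variance.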
minimum of the loss function, but before we start training we don't know where that minimum is. So instead, the perspective we normally take on initialization is that we want to initialize in such a way that all the gradients are well behaved at initialization, because if you choose bad nonlinearities or bad initialization schemes, you could end up with zero, or close to zero, gradients right off the bat at the beginning of training, and then nothing will train and you won't make any progress towards the goal. The way I like to think about it is: imagine the loss landscape surface that we're optimizing; we want to initialize in a place where it's not flat. We'd prefer not to initialize in a local minimum, and that's the main constraint we want to take into account when constructing initialization schemes. So far we've talked about ways of getting your model to train: we set up the activation function, we set up the initialization. But once you get your model to train, if you've done a really good job at optimizing it, you might see it start to overfit, and it might start to perform better on the training set than it does on the test set, and that would be a bad thing. So to overcome this, we need some strategies for regularization. We've already seen one simple scheme for regularization, which is to add another term to the loss function. The very common one you've used on your assignments so far is L2 regularization, also sometimes called weight decay, which just penalizes the L2 norm of your weight matrices. This is very common, and a very widely used regularizer for deep neural networks as well. But there's a whole host of other regularization schemes that people use in deep learning, and one of the most famous is this idea called dropout. Dropout is kind of a funny idea. What we're going to say is that when we add dropout to a
neural network, we're going to explicitly add some randomness to the way the network processes the data. In each forward pass, we're going to randomly set some of the neurons in each layer equal to zero: we compute one layer of the forward pass, randomly set some of the neurons to zero, compute another layer, randomly set some of those to zero, compute another layer, and so on and so forth. The probability of dropping any individual neuron is a hyperparameter, but a very common choice is 0.5, so any individual neuron has probability one half: we flip a coin whether to keep it or throw it away. Seems crazy, right? But it's very simple to implement. The slide shows a two-layer fully connected neural network with dropout, and you can see that the implementation is very simple: we just compute a binary mask after each layer and use it to kill off half of the neurons after we compute the matrix multiply. So then the question is, why would you ever possibly want to do this? Well, one interpretation of what dropout is doing is that it forces the network to have a kind of redundant representation. Another way this is phrased is that we want to prevent the co-adaptation of features: we want to encourage the network to develop representations where different slots in the vector represent different, robust ways of recognizing the object. For example, if we are building a cat classifier, maybe a bad thing would be for each element of the vector to just learn independently whether it's a cat, but if we add dropout, then maybe we force it to learn more robust representations: maybe some neurons should learn about ears, some should learn about fur, and different neurons should focus on different high-level aspects of cat-ness, such that if we randomly knock out half of these neurons, it can still robustly recognize the cat, even if
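The two-layer forward pass with dropout described above looks roughly like this (a sketch of the kind of code on the slide, not a verbatim copy; `p` here is the probability of keeping a neuron):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5  # probability of *keeping* each neuron (a hyperparameter)

def forward_train(x, W1, W2):
    h = np.maximum(0.0, x @ W1)      # first layer + ReLU
    mask = rng.random(h.shape) < p   # binary dropout mask, ~half ones
    h = h * mask                     # zero out roughly half the activations
    return h @ W2                    # second layer (scores)

W1 = rng.normal(0.0, 0.01, size=(64, 256))
W2 = rng.normal(0.0, 0.01, size=(256, 10))
scores = forward_train(rng.normal(size=(8, 64)), W1, W2)
```

A fresh mask is drawn on every forward pass, so the same input produces different hidden activations from one training iteration to the next.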
we mess with its representation. Another interpretation of dropout is that it's effectively training a large ensemble of neural networks that all share weights, because if you imagine this process of masking out half of the neurons in each layer, what we've effectively done is build a new neural network that is a sub-network of the original full network. At each forward pass we train a separate sub-network of the full network, and so we have this exponentially large number of sub-networks that all share weights, and the full network is in some sense an ensemble of this very large number of weight-sharing sub-networks. These are both, admittedly, hand-waving explanations of what dropout might be doing, and there's a whole host of theory papers that try to give more concrete explanations for why this works. But now, a problem with dropout is that it makes the test-time operation of the neural network actually random, because we were randomly knocking out half the neurons in each layer on each forward pass. This seems bad, because if you're deploying these networks in practice, you'd like their outputs to be deterministic. You wouldn't want to upload a photo to your web hosting service one day and have it recognized as a cat, and then the next day the same photo is recognized as something else; that would be a bad property for your neural networks when they're deployed in practice. So at test time we want some way to make dropout deterministic, and what we really want to do is average out this randomness. Once we've built dropout into our neural network, we can imagine that we've rewritten the network to take two inputs: one is our actual input image x, and the other is this random mask z, a random variable that we draw before we run the forward pass of the
network, and now the output that our network computes depends both on the input data and on this random variable. In order to make the network deterministic, we want to somehow average out the randomness at test time, so we can define the test-time forward pass to be this expectation that averages out the random variable. If we wanted to compute this analytically, we'd compute an integral to marginalize out the random variable z, but in practice, computing this integral analytically for arbitrary neural networks seems very hard and very intractable; we have no idea how to do it. So instead, let's think about what this expectation looks like for a single neuron. For a single neuron that receives two inputs x and y, has connection strengths w1 and w2, and produces a single scalar output a, the normal forward pass we might imagine doing at test time is the inner product of the weights and the two inputs: a = w1 x + w2 y. Now if we're using dropout, then there are four different random masks that we might have drawn during training, all with equal probability: we could have kept both inputs, knocked out x but kept y, knocked out y but kept x, or knocked out both of them. In this case we can write out the expectation exactly, because we've got exactly four equally likely outcomes, and what we see is that for this example, the expected output is equal to one half times the normal forward pass. This turns out to hold in general: if we want to compute the expectation of a single layer with dropout, all we need to do is multiply by the dropout probability. So simply multiplying by the dropout probability allows us to compute
this expectation for a single dropout layer. So this derivation means that at test time, we want the output to be equal to the expected output at training time, and at test time we want all neurons to be active: we want all of our learned weights to actually do something. So at test time we use all the neurons, but then we rescale the output of the layer by the probability we used to drop individual neurons during training. This gives us our summary of implementing dropout, and it's actually quite straightforward: during the forward pass at training time, we generate random masks and use them to drop out, or zero out, random elements of the activation vectors, and at test time we simply use the dropout probability to rescale the output, with no randomness. Now, this expectation is exact only for individual layers, and this way of computing expectations is not actually correct if you stack multiple dropout layers on top of each other, but it seems to work well enough in practice. I'd also like to point out that a slightly more common thing you'll see for implementing dropout is a variant called inverted dropout. It's fundamentally the same idea, just a different implementation, and the question is where we want to do the rescaling: during test time, or during training time? Maybe we would prefer not to do rescaling at test time, because at test time we want to really maximize the efficiency of the system, since maybe it's going to run on mobile devices, or on servers over lots of images, or whatever, so maybe we'd prefer to pay a little bit of extra cost at training time instead. So in practice, a very common thing you'll see with dropout is to
generate these random masks at training time, and then, if you have drop probability one half, during training we drop half the neurons and multiply all of the remaining neurons by 2, and at test time we just use all the neurons with the normal weight matrix. In this way, again, the expected value of the output at training time equals the actual output at test time; it's just a question of whether we put the rescaling at training time or test time. Then there's another question: now that we've got this idea of a dropout layer, where do we actually insert it into our neural network architectures? Well, if we remember back to the AlexNet and VGG architectures, the vast majority of the learnable parameters for those architectures lived in the fully connected layers at the end of the network, and that's indeed the exact place where we tend to put dropout in practice: in these large fully connected layers at the end of our convolutional neural networks. But if you'll recall, as we moved forward in time and looked at more recent architectures, things like ResNet or GoogLeNet actually did away with these large fully connected layers and used global average pooling instead, so these later network architectures actually did not use dropout at all. Prior to 2014 or so, dropout was really a critical, essential piece of getting neural networks to work, because it helped a lot in reducing overfitting for something like AlexNet or VGG, but it has become slightly less important in these more modern architectures like ResNets and so on. Now, this idea of dropout is actually something of a common pattern that we see repeated a lot in different types of neural network regularization: basically, during training we add some
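Here is a minimal sketch contrasting the two conventions (function names are mine). In inverted dropout, the rescaling by 1/p happens at training time, so the test-time forward pass is just the plain, deterministic network:

```python
import numpy as np

def dropout_train(h, p, rng):
    # Inverted dropout: drop units, then rescale by 1/p at *training* time...
    mask = (rng.random(h.shape) < p) / p
    return h * mask

def dropout_test(h):
    # ...so at test time we use all neurons with no rescaling and no randomness.
    return h

rng = np.random.default_rng(0)
h = np.ones((500, 500))
train_avg = np.mean([dropout_train(h, 0.5, rng).mean() for _ in range(20)])
```

Averaged over many masks, the training-time output matches the deterministic test-time output, which is exactly the expectation-matching property derived above.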
kind of randomness to the system, for example by adding a dependence on some other random source of information, and then at test time we average out the randomness to make it deterministic. For dropout, the randomness took the form of random masks, but you can see many other types of regularization that use other sorts of randomness instead. We've actually already seen another regularizer in this class with this exact same flavor, and that regularizer is batch normalization. If you recall, batch normalization adds randomness at training time, because it makes the output for each element in the batch depend on all the other elements in the batch: during training, remember, batch normalization computes per-minibatch means and standard deviations, which depend on which random elements happen to get shuffled into each minibatch at each iteration of training. So batch normalization adds randomness through the way we form batches at training time, and then at test time it averages out this randomness by using running averages of the means and standard deviations, substituting these fixed values at test time. In fact, for these later architectures like residual networks and other more modern architectures, batch normalization has somewhat replaced dropout as the main regularizer in deep neural networks. For something like a residual network, the regularizers used in training are L2 weight decay and batch normalization, and that's it, and that tends to be a very successful way to train large deep neural networks: just relying on the stochasticity of batch normalization. But actually I lied a little bit; there's one other source of randomness that happens a lot in practice, though most people don't refer to it as a
type of regularization, and that's this notion of data augmentation. So far in this class, whenever we've talked about training iterations, we always imagined that we load up our training data and its label, like this picture of a cat and the label "cat", run the image through the network, compare the predicted label to the true label, and use that to get our loss and compute our gradients. But it's actually very common in practice to perform transforms on your data samples before you feed them to the neural network: to manipulate or modify the input image in a random way that preserves the label of the data sample. For example, for images, one common transform is the horizontal flip, because we as humans know that if we flip the image horizontally, it's still a cat. Other common transforms are random crops and scales: at every training iteration we might resize the image to a random size, or take a random crop of the image, because we expect that a random crop of a cat image should still be a cat image, and should still be recognized as a cat by the neural network. The idea is that this effectively multiplies your training set, because we add these data transformations in a way that we know doesn't change the training label, so your network gets trained on more raw inputs for free. But this is again adding randomness at training time. For this example of random cropping, flipping, and scaling, the way something like ResNets are trained is that at every iteration, for every training image, we pick a random size, resize the image to that size, then take a random 224 by 224 crop of the randomly resized image, and we do this random cropping and resizing and flipping for each element at
every iteration. But now, again, this is adding randomness to the network, so in keeping with this idea of marginalizing out randomness at test time, we want some way to marginalize out this other source of randomness in our neural network. The way this is done with data augmentation is to pick some fixed set of crops and scales to evaluate at test time. For example, in the ResNet paper, they take five different image scales, and for each scale they evaluate five crops of the image (the four corners and the center), as well as the horizontal flips of all of these, and then average the predictions after running each of those crops through the network. Again, this is a way of adding randomness to the network at training time and then averaging out that randomness at test time. Sometimes people also play tricks with randomly jittering the color of the images during training, but in general this idea of data augmentation is a way that you can get creative and add some of your own human expert knowledge to the system, because depending on the problem you're trying to solve, different types of data augmentation might or might not make sense. For example, if you were building a classifier to tell right and left hands apart, then horizontal flipping would probably not be a good type of data augmentation to use, but if we want to recognize cats versus dogs, then horizontal flipping is very reasonable. And sometimes, as in medical imaging contexts where we want to recognize slides or cells, even random rotations are reasonable, because for that source of data you don't really know what orientation it might have come in at. So data augmentation is really a place where you can inject some of your own human expert knowledge into the training of the system, about what types of
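The training-time crop-and-flip augmentation described above can be sketched like this (a minimal illustration, assuming height-width-channel images; the sizes are illustrative rather than the exact ResNet recipe):

```python
import numpy as np

def augment(img, crop_size, rng):
    # Random crop: pick a random top-left corner, then slice out the crop.
    H, W, _ = img.shape
    top = rng.integers(0, H - crop_size + 1)
    left = rng.integers(0, W - crop_size + 1)
    out = img[top:top + crop_size, left:left + crop_size]
    # Random horizontal flip with probability 1/2 (label-preserving for cats,
    # but a bad idea if you're classifying left vs. right hands).
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out

rng = np.random.default_rng(0)
img = rng.random((256, 320, 3))        # stand-in for a resized training image
crop = augment(img, 224, rng)
```

Calling `augment` freshly on every training iteration gives the network a slightly different view of each image each epoch, which is where the "free" multiplication of the training set comes from.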
transformations do and do not affect the labels you're trying to predict. So now we've seen this pattern in regularization of adding randomness at training time and then marginalizing out the randomness at test time, and I just want to very quickly go through a couple of other examples of this pattern; I don't expect you to know these in detail, just to give you a flavor of other ways it has been instantiated. One idea is DropConnect. This is very similar to dropout, but rather than zeroing random activations, we instead zero random weights during every forward pass of the network, and again we have some procedure to average out the stochasticity at test time. Another idea, which I find very cute, is this notion of fractional max pooling. Here we randomize the sizes of the receptive fields of the pooling regions inside each of the pooling layers of the network, so that maybe some neurons have a two by two pooling region and some neurons have a one by one pooling region, varying on every forward pass. It's called fractional max pooling because this randomness between a one by one receptive field and a two by two receptive field means you can have something like 1.35 pooling in expectation. Another crazy one is that we can build deep networks with stochastic depth: we can build something like a hundred-layer ResNet, and on every forward pass use a different random subset of the residual blocks during training, and then at test time use all of the blocks. So we saw dropout, which drops individual neuron values; we saw DropConnect, which drops individual weight values; and this drops whole blocks from a deep residual network architecture. Another one that's actually more commonly used is this notion of cutout. Here we're simply going to
set random regions of the input image to zero on every forward pass during training, and then at test time we use the whole image instead. Again, you can see that we introduce some kind of randomness to corrupt the neural network at training time and then average out that randomness at test time. Now this last one is really crazy, and I can't believe it works: it's called mixup. With mixup, what we're going to do is train on random blends of training images. Rather than training on just a single image at a time, we form our training samples by taking a cat image and a dog image and blending them with a random blend weight, and then the target should now be something like 0.4 cat and 0.6 dog, where the target is given by the blend weight. This seems totally crazy, like how can this possibly work? But the reason it's maybe slightly more reasonable is that these blend weights are actually drawn from a beta distribution, whose PDF is shown up there, which means that in practice the blend weights are very close to zero or very close to one. So in practice, rather than being 0.4 cat and 0.6 dog, it's more likely to be something like 0.95 cat and 0.05 dog, so it's not as crazy as it initially seems. So my takeaways for which regularizers you should actually use in practice: consider dropout if you're facing an architecture with very, very large fully connected layers, but otherwise dropout is really not used so much these days, and in practice batch normalization, L2 weight decay, and data augmentation are the main ways we regularize neural networks today. And actually, surprisingly, these two wild ones, cutout and mixup, end up being fairly useful for small
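A sketch of mixup (a hypothetical helper of my own naming; `alpha` parameterizes the beta distribution, and a small `alpha` pushes the blend weight toward 0 or 1, which is why the blends look more like 0.95/0.05 than 0.4/0.6):

```python
import numpy as np

def mixup(x1, y1, x2, y2, rng, alpha=0.2):
    lam = rng.beta(alpha, alpha)        # blend weight, usually near 0 or 1
    x = lam * x1 + (1.0 - lam) * x2     # blended training image
    y = lam * y1 + (1.0 - lam) * y2     # soft target, e.g. 0.95 cat / 0.05 dog
    return x, y

rng = np.random.default_rng(0)
cat_img, dog_img = rng.random((32, 32, 3)), rng.random((32, 32, 3))
cat_lbl, dog_lbl = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y = mixup(cat_img, cat_lbl, dog_img, dog_lbl, rng)
```

The blended label `y` is still a valid probability distribution over the two classes, so the usual cross-entropy loss applies unchanged.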
datasets like CIFAR-10; I think the state of the art on CIFAR-10 actually uses both of these techniques, but for larger datasets like ImageNet, cutout and mixup are usually not so helpful. So that gives us our summary of part one of these nitty-gritty details about choices you need to make when training neural networks, and in Wednesday's lecture we'll talk about some more of the details you need to know in order to get your neural networks to train.
Deep_Learning_for_Computer_Vision
Lecture_9_Hardware_and_Software.txt
today is lecture nine of the class, and today we're going to talk about deep learning hardware and software. This will hopefully be a very applied, practical lecture; we'll be seeing lots of code on the screen and walking through it, so hopefully that'll be fun. As you'll recall, last lecture we talked about CNN architectures, and we saw how the field has progressed from architectures like AlexNet to VGG to ResNet and onward, this huge proliferation of different convolutional neural network architectures that people have used throughout the years. We talked about the trade-offs of computation and memory, and saw how later advancements like VGG and especially ResNet gave us very regular designs for CNNs that made it easy to scale their sizes up and down. But now that we have some understanding of the types of architectures that people implement with CNNs, it's useful to think about the actual hardware and software systems on which those architectures will ultimately run. That will be the topic of today's lecture: first hardware, then software. So first, deep learning hardware. Here's a picture of a computer; this is actually my computer from grad school, and you can see there are a couple of interesting components inside. One is the central processing unit, or CPU, at the top; it's stuck under this giant heatsink and fan. But underneath that you see these two giant other things that say GeForce GTX on them, and those are graphics processing units, or GPUs. Just by looking at the size of these things and the space they take up in the case, you can see that the GPUs take a lot more physical space than the CPU in the machine, which should give you some hint that these are very important components. Now, if you're perhaps an avid computer gamer, you're probably very familiar with GPUs and different models of GPUs,
and if that's the case, you know that there's a long-running bit of tension in the gaming community around GPUs, and that's the question of Nvidia versus AMD. I'm guessing people are familiar with this debate, and maybe are fanboys on one side or the other when it comes to gaming, but when it comes to deep learning there's a clear winner in this fight, and that's Nvidia. Whenever people use GPUs to accelerate their computation for neural networks, it's almost exclusively on Nvidia GPUs. I think AMD actually has really great hardware, but the software stack for utilizing that hardware for general-purpose computing, and especially for deep learning, is really just not as advanced as the software stack on the Nvidia side. So for that reason, whenever you see GPUs in deep learning, it really just means Nvidia GPUs; any other type of GPU, be it integrated Intel or AMD, is just not a mainstream thing at all when it comes to deep learning. Now, it's also interesting to look historically at the trends in the computational power of both CPUs and GPUs. I made this plot a couple of years ago; it's a little bit out of date now, because you can see the x-axis ends at 2017. Here the metric we want to look at is gigaflops per dollar. Each point on this graph is either a CPU model or a GPU model at the time it was released, and for each of these models we compute the peak computing power in gigaflops. What is giga? Kilo is a thousand, mega is a million, giga is a billion: so this is the number of billions of floating-point operations these things can do per second. Then we take that peak computing capability of the device and divide it by the number of dollars it cost at the time the device was released, and this gives
You can see some clear trends here. One is that flops per dollar has been increasing over time for both CPUs and GPUs, but for GPUs there's been a dramatic reduction in the cost of computing since around 2012. If we rewind back to about 2006, there was the GeForce 8800 GTX GPU from NVIDIA, which was an amazing graphics card for gaming, but it was also notable for being, I think, the first NVIDIA graphics card that supported CUDA, their general-purpose computing framework. So around 2006, well before AlexNet became mainstream, NVIDIA was already investing heavily in this idea of using GPUs not just for gaming but for general-purpose scientific computing, and the whole software stack we use now really started to develop around that time. Then you can see that as we move from 2006 to 2012, there was a huge divergence in the cost of computing between CPUs and GPUs. Here I've pointed out the GTX 580, the GPU that Alex Krizhevsky used for training AlexNet, which as we saw last lecture was an absolute breakthrough result in deep learning. By the time we got to 2012 and the GTX 580, performing computation on a GPU was much, much cheaper than performing it on a CPU, which gives some sense of why AlexNet was able to be so much larger than any of the convolutional neural network models that came before. If we fast-forward from 2012 to 2017, you can see that even though CPUs have somewhat flattened in their cost of computing, GPUs have continued accelerating. You may have heard that Moore's law is dead; that really does not seem to be the case with GPUs. They continue to get faster, better, and cheaper every year.
My personal opinion is that this period from 2012 until now has seen a massive explosion in deep learning, with people training ever bigger models on bigger datasets, and I think a big reason for that has been the rising availability of very cheap compute due to these advancements in GPU computing. So that gives you a sense of the general trends; let's also drill down into some specifics. The previous chart was, as we said, a little out of date, so I went and pulled some numbers for current CPUs and GPUs. A top-of-the-line consumer CPU today (actually, this thing will be released a month from now, in November) is the Ryzen 9 3950X. It has 16 cores, runs at a base clock of 3.5 gigahertz, relies on the system RAM of the overall computer rather than having its own integrated memory, and will retail for about $750. Now, I said that chart was a little out of date, and in fact I think there's been a pretty dramatic increase in the compute capability of CPUs in the last couple of years, especially as the Ryzen line has really shaken up the CPU field. Previously Intel was kind of the undisputed champion of CPUs, which I think explains the flattening of CPU progress for several years, but since I made that chart AMD has become a real competitor to Intel: they produce really good CPUs, and they've become very cost-efficient as well. If we multiply everything out, we see that this top-of-the-line consumer CPU hits about 4.8 teraflops, which is fairly fast; that's a lot of computation.
But if we compare that to the current top-of-the-line consumer GPU, the NVIDIA Titan RTX, which is a bit more expensive, you'll see that it is significantly more powerful computationally and can achieve more than three times the total number of floating-point operations per second. The cartoon picture you should have in your head of what a CPU can do versus a GPU is that CPUs tend to have fewer cores, but those cores are much faster and much more powerful: they have better branch prediction, better caching strategies for memory, and the individual cores in a CPU can each do a lot more. GPUs, on the other hand, have cores that are relatively simple compared to the innovations you see in CPU cores, and they run at lower clock speeds. This consumer CPU runs at 3.5 gigahertz while the GPU runs at about 1.35 gigahertz, so the individual cores are something like 2 to 3 times slower on the GPU, but there are just a lot of them: 16 cores on the CPU versus 4,608 cores on the GPU. The GPU simply has a lot more individual compute elements, which lets it do more computation overall. I should also point out that these head-to-head comparisons of core counts between CPUs and GPUs are a little unfair, because CPUs now have vector arithmetic units as well that can operate on multiple data elements in a single clock cycle, so it's not really fair to compare the core count of a CPU to the CUDA cores you'll see advertised by NVIDIA. Still, it is true that GPUs tend to have far more computing elements. To make that a little more concrete, we can dive inside this Titan RTX GPU.
If you take one of these GPUs and crack it open, you'll see something like this. The GPU is basically a little mini computer unto itself. It has its own fans, because it gets really hot and needs to cool itself, and those are integrated into the unit. It has its own memory modules: each of these blue boxes is a 2-gigabyte memory module, and together they give 24 gigabytes of memory inside the GPU device itself. Inside this red box is the actual heart of the GPU, the processor itself. So these GPUs really are little mini computers that have everything they need to do computation in one compact device. If we zoom into the processor, we see a giant grid of compute elements, and you can see that it's very homogeneous: lots of repeated compute elements stacked side by side in a grid. In particular, the core computing element of an NVIDIA GPU is the so-called streaming multiprocessor, or SM, and this Titan RTX has 72 identical streaming multiprocessors inside it. These streaming multiprocessors are maybe somewhat akin to a core on a traditional CPU. If we zoom inside one of them, we see still more homogeneous compute elements inside each streaming multiprocessor. Here, finally, after multiple levels of zoom, we've found the actual computing elements of the GPU: this little red box is one of the 32-bit floating-point cores that actually performs floating-point arithmetic deep inside the GPU, and each streaming multiprocessor has 64 of these FP32 units.
When you multiply all this out: there are 72 streaming multiprocessors in the GPU, each of those has 64 FP32 cores, and each of those individual cores can do two floating-point operations per clock cycle, because it can do a fused multiply-accumulate in a single cycle, which counts as a multiply and an add. Multiplying all that together, you get the 16.3 teraflops number that NVIDIA advertises. But if we look at this streaming multiprocessor, there's more going on inside it beyond the FP32 cores. I'm not sure the text is big enough to read in the back, but next to the FP32 cores is something with a very suggestive name: a tensor core. This is a new architectural element that NVIDIA has introduced into its GPUs in the last several years, and it's basically a specialized bit of hardware deep inside the GPU that is specifically meant for deep learning. Deep learning has been so important to NVIDIA that they've actually changed the underlying architecture of their hardware to better suit it. In particular, these tensor cores are specialized hardware that do little chunks of matrix multiplication: if we have 4x4 matrices A, B, and C, then a tensor core can compute A times B plus C, where each is a 4x4 matrix, doing that matrix multiply plus addition in a single clock cycle. It's a special-purpose bit of hardware whose only point in life is to do a 4x4 matrix multiply plus a bias term. As we've seen, in deep learning we use a lot of matrix multiplication, and convolution is basically matrix multiplication plus a bias, so you can imagine how you could break up the implementation of a matrix multiply or a convolution into little 4x4 matrix multiplies that can then be mapped onto these tensor core elements deep inside the SMs.
If you count the number of operations it takes to multiply two 4x4 matrices and then add another 4x4 bias matrix, you'll see it takes 128 floating-point operations, so each tensor core does a lot of computation per clock cycle. That seems like much more computation than the FP32 elements in the GPU can do, so there's got to be some catch; we have to give something up. It turns out what we give up is the precision of the arithmetic. Normally when we do deep learning, we do our arithmetic in 32-bit floating point, which means each number is represented by four bytes; but the tensor cores actually use multiple levels of precision for computing this matrix multiply, which allows NVIDIA to build more compact hardware that is faster and more energy efficient. In particular, these tensor core units perform the multiplication steps using 16-bit floating point and the addition steps using 32-bit floating point, and this idea of mixed precision lets them pack a lot more compute into a very tiny, efficient space. So now let's ask how much computation this device has when we consider the tensor core units. We still have 72 streaming multiprocessors; within each of those there are 8 tensor cores; each tensor core can do 128 floating-point operations per cycle; and if you multiply that by the boost clock speed of 1.77 gigahertz, you get a total throughput for this device of 130 teraflops, which is a lot of computation.
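As a quick sanity check, here's that back-of-envelope arithmetic written out (a sketch using the figures quoted in the lecture; real sustained throughput will be lower than these peak numbers):

```python
# Back-of-envelope peak throughput for the Titan RTX, using the
# figures quoted above; sustained throughput will be lower.
sms = 72           # streaming multiprocessors
fp32_per_sm = 64   # FP32 cores per SM
tc_per_sm = 8      # tensor cores per SM
tc_flops = 128     # flops per tensor core per clock (4x4 multiply + add)
boost_ghz = 1.77   # boost clock in GHz

# cores * ops/clock * GHz gives gigaflops; divide by 1000 for teraflops.
fp32_tflops = sms * fp32_per_sm * 2 * boost_ghz / 1000  # 2 = multiply + add
tc_tflops = sms * tc_per_sm * tc_flops * boost_ghz / 1000

print(round(fp32_tflops, 1))  # 16.3
print(round(tc_tflops))       # 130
```

Both advertised numbers fall out of the same multiplication; only the per-clock work per unit differs.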
Now if we return to this table of CPU versus GPU, you can see that we were actually quite underestimating the true power of these GPUs. It's true that when you consider only 32-bit floating-point arithmetic, some modern CPUs with a lot of cores and a lot of vector arithmetic are maybe catching up to GPUs to some extent, but when you consider the special-purpose tensor core hardware now shipping inside NVIDIA GPUs, you can see that GPUs are still dramatically more efficient and have dramatically more computational ability than CPUs. If we revise our chart, we can add a new green dot on the upper right showing the first consumer NVIDIA GPU that shipped these tensor cores. Question: how do you utilize these tensor cores in PyTorch? All you have to do is flip your input data type to 16-bit, and if you've got the right hardware and the right NVIDIA drivers installed, then any computation that can be accelerated will automatically go onto the tensor cores. So it's actually very easy to utilize them from the user's perspective, although I should point out that optimizing models becomes a little more finicky when you're doing arithmetic in mixed precision. There are tricks people play around which parts of the model to compute in full precision versus mixed precision, and sometimes people play tricks with the optimization techniques to keep things numerically stable as you move to lower precision. That said, I think it's worthwhile, because as you can see, the tensor cores give nearly a 10x speedup over the FP32 cores in the GPU.
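To make the mixed-precision idea concrete, here's a small NumPy emulation of one tensor-core step (my own illustration, not real GPU code): inputs are stored in fp16, but products are formed and accumulated at fp32.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float16)  # fp16 inputs
b = rng.standard_normal((4, 4)).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)              # fp32 accumulator

# Emulate D = A @ B + C with fp16 storage but fp32 accumulation,
# roughly the operation one tensor core does in a single clock cycle.
d = a.astype(np.float32) @ b.astype(np.float32) + c
```

On the PyTorch side, as mentioned in the answer above, casting tensors to 16-bit (for example with `x.half()`) is enough to let eligible operations run on the tensor cores, given suitable hardware and drivers.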
The prototypical example of an operation that is much, much faster on a GPU than on a CPU is matrix multiplication. Recall that in a matrix multiply, each element of the output matrix is an inner product between two big vectors: a row of one input matrix and a column of the other. This is a trivially parallelizable problem, because every element of the output matrix can be computed independently. You can imagine that this type of problem maps very nicely onto a GPU: you can take the elements of the output matrix and assign them to different streaming multiprocessors, or to different sets of FP32 cores within the streaming multiprocessors, and in doing so you get a perfectly parallelizable problem that is extremely well suited to the hardware in a GPU. Contrast this with a traditional single-core CPU, which has to iteratively compute each of these outputs one by one and doesn't have the same ability to parallelize over many computing elements. You can also see how matrix multiplication is a perfect example of an operation that can be accelerated by tensor cores: if we have a computing element inside the GPU that can compute a 4x4 matrix multiply plus a bias in a single clock cycle, you can imagine breaking the output matrix into 4x4 chunks and assigning those chunks to different tensor core elements, again perfectly distributing the matrix multiplication over all the available computing elements. Question: is 4x4 the limit for what we can do in a single round? That's true for the current generation of tensor cores, because this thing is a specialized piece of hardware:
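To make the "every output element is an independent inner product" point concrete, here's a naive pure-Python matmul sketch; each iteration of the two outer loops could in principle be handed to a different GPU core:

```python
def matmul(A, B):
    """Naive matrix multiply over lists of lists."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            # C[i][j] depends only on row i of A and column j of B, so
            # every (i, j) pair could be computed by a different core.
            C[i][j] = sum(A[i][t] * B[t][j] for t in range(k))
    return C
```

The inner sum is one inner product; nothing about it depends on any other output element, which is exactly why the work distributes so cleanly.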
it literally takes in a 4x4 matrix and another 4x4 matrix and produces an output 4x4 matrix, so that size is hardwired into the hardware. Of course, you can emulate different matrix multiplies in software: if you wanted to do, say, a 5x5 matrix multiply, you could simulate it with four 4x4 matrix multiplies where you pad things out with zeros. If you compute large matrix multiplies with funny sizes in PyTorch, the underlying matrix multiplication routines will do this padding and splitting up for you. That's actually a good point: you'll sometimes see that powers of 2, especially large powers of 2, are the most efficient sizes on GPUs, because then you don't end up wasting compute in the way things get split up across the device. You'll often see neural networks that have all their sizes in powers of 2, and that's really a result of the underlying hardware they're running on. Now, GPUs can be programmed, and to program NVIDIA GPUs we write in a programming language called CUDA. CUDA is sort of an extension of C or C++ that lets you write code that runs directly on the GPU. I think it's actually pretty fun to write CUDA code; it's a very different way of thinking about decomposing problems, but CUDA programming is unfortunately beyond the scope of this class. However, there is another EECS 598 running this semester, Applied GPU Programming, that is entirely about CUDA programming, so that could be a good resource for learning CUDA if you're so inclined. In practice, though, it's actually relatively rare for deep learning practitioners to need to program in CUDA.
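The zero-padding trick for odd sizes mentioned above can be sketched like this (a NumPy illustration of the idea, not how the real GPU libraries implement it): pad each matrix up to a multiple of the tile size, then accumulate 4x4 block products, each of which is the unit of work a tensor core handles.

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    # Round each dimension up to a multiple of the tile size.
    n = -(-A.shape[0] // tile) * tile
    k = -(-A.shape[1] // tile) * tile
    m = -(-B.shape[1] // tile) * tile
    Ap = np.zeros((n, k)); Ap[:A.shape[0], :A.shape[1]] = A  # zero padding
    Bp = np.zeros((k, m)); Bp[:B.shape[0], :B.shape[1]] = B
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for t in range(0, k, tile):
                # Each tile product + accumulate is what one tensor
                # core executes per clock.
                C[i:i+tile, j:j+tile] += Ap[i:i+tile, t:t+tile] @ Bp[t:t+tile, j:j+tile]
    # Crop away the padding to recover the true result.
    return C[:A.shape[0], :B.shape[1]]
```

You can see why power-of-2 sizes are efficient: the padding is empty, so no tile does wasted work on zeros.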
That's because NVIDIA provides very heavily optimized routines for matrix multiplication, convolution, batch normalization, and all the other operations we want to use in neural networks. So in practice most people will just stick with PyTorch, and even PyTorch doesn't implement everything in CUDA itself; internally it relies on these heavily optimized matrix multiply and convolution routines written by NVIDIA. I think it's a fun exercise to learn CUDA, but it's not always a necessary skill for being productive in deep learning. So far we've talked only about single-GPU devices and how great they are, but that's not the end of the story; people are more and more interested in scaling their compute beyond a single GPU. In practice it's very common to buy servers that have eight GPUs each and distribute your computation across all the GPUs in a server, or even to stack multiple eight-GPU servers in a data center and distribute your training across multiple GPUs, across multiple servers, across an entire data center. So you get this hierarchical decomposition of data centers into servers, into GPUs, into streaming multiprocessors, into tensor cores; there are lots of levels of hierarchy in parallel computing. For many years NVIDIA had been the only game in town when it comes to hardware for deep learning, but in the last couple of years another competitor came onto the scene, and that was Google. For the last several years Google has been producing their own specialized hardware devices for performing deep learning computation. The first such piece of hardware they talked about publicly was the Cloud TPU v2, which has 180 teraflops of compute on one of these boards.
That's on a similar order of magnitude as the tensor cores inside the latest NVIDIA cards, and it also has quite a lot of memory inside the card itself. Google is a little secretive about exactly how these chips work, but my impression is that they are fairly similar in design to the tensor cores in the NVIDIA cards, in that they also contain specialized hardware that performs low-precision or mixed-precision matrix multiplies in one or a few clock cycles. You cannot buy these things, but you can rent them on Google Cloud for $4.50 an hour. Also, I don't know if you've poked around in the settings on Colab, but there's actually an option to use these Cloud TPU v2s for free on Colab. We haven't used them for assignments, but I think it's very cool that you can use them for free. As we said, people are more and more interested in scaling beyond single compute devices to entire data-center-level compute, and TPUs really shine when they are assembled into so-called TPU pods. Google builds these large rack-scale solutions: a TPU pod has 64 of these boards in one integrated machine, and if you add it all up, that's 11.5 petaflops of computation, which you can rent on Google Cloud for the low, low price of $384 per hour. That was the Cloud TPU v2, so you can imagine that the following year they followed it up with the Cloud TPU v3, which just multiplies all the numbers on the previous slide by some large factor. The Cloud TPU v3 has 420 teraflops in one device and rents for $8 an hour, and the Cloud TPU v3 pod has 256 of these TPU v3 devices, for a total of more than a hundred petaflops of compute in one piece of programmable hardware that you can run your neural network models on.
Unfortunately, for the TPU v3 pod you're going to need to call for pricing; I guess those are too expensive to even put prices on the website, so you need to talk to a sales rep if you think you're willing to spend enough on one of those things. One big caveat about TPUs is that in order to use them you need to use TensorFlow, Google's deep learning framework, which we'll talk more about later in the lecture. That might not always be the case, though: if you go on the PyTorch GitHub and look through some of the commits from the last couple of months, you start to see some suggestive ones, like a commit that purports to add a TPU device type and backend for PyTorch tensors. So maybe this is something they're working on; I don't know, but it's fun to look at the commit logs of these open-source projects to get a sense of what features might be coming in the future. Question: how does the memory differ between GPUs built for gaming and GPUs built for deep learning? The memory actually differs in two ways. One is the amount of memory: GPUs built for compute tend to have more memory, which lets you train bigger models on the device. Remember how backpropagation works: during the forward pass you build up this computational graph that needs to store all the activations, and those activations are stored in GPU memory, so the size of the model you can train is constrained by the amount of memory on the device itself. Consumer GPUs tend to have less memory because they're really meant for gaming, and you don't need tens of gigabytes of GPU memory to play games; but for deep learning you want a lot of GPU memory so you can store all the activations of your computational graph for backpropagation.
The other big difference between consumer GPUs and compute GPUs is not just the amount of memory but also the type of memory. Consumer GPUs often use something like GDDR6 in the most recent models, whereas compute-oriented GPUs use something called high-bandwidth memory instead. The difference is the bandwidth between the compute elements and the memory of the GPU. It turns out that GPUs are so fast that for many operations, moving the data between GPU memory and the compute elements is actually much more expensive than performing the computation itself. For something like an elementwise operation that only performs one floating-point operation per element of the tensor, you're not bound by compute speed at all; you're bound by the speed at which the GPU can shuffle data back and forth between the compute elements and the memory. So increased memory bandwidth also increases the overall speed of training on these devices, even when the compute speed looks the same on paper. That's pretty much all I have to say about deep learning hardware; any questions about GPUs or TPUs or CPUs before we move on? OK, but I think those are super fun to think about. The next topic is deep learning software. You probably won't have a chance to play with a lot of different hardware computing environments, because these things are pretty expensive, but on the software side you actually have a lot of choices when you're doing deep learning. Deep learning has had this kind of zoo of different frameworks that have popped up over the years.
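A back-of-envelope calculation (my own, not from the slides) of flops per byte of memory traffic shows why an elementwise op is memory-bound while a big matrix multiply is not:

```python
# Arithmetic intensity = flops performed per byte of memory traffic,
# assuming fp32 (4 bytes per element) and an N x N problem size.
N = 4096

# Elementwise op (e.g. ReLU): one flop per element, read + write each element.
relu_intensity = (N * N) / (2 * 4 * N * N)       # 0.125 flops per byte

# Matrix multiply: ~2*N^3 flops, but it only reads A and B and writes C.
matmul_intensity = (2 * N**3) / (3 * 4 * N * N)  # ~683 flops per byte

print(relu_intensity, matmul_intensity)
```

The elementwise op does well under one flop per byte moved, so its speed is set by memory bandwidth; the matmul does hundreds of flops per byte, so it can actually keep the compute units busy.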
In the early days, some of the big ones used a couple of years ago were things like Caffe, Torch, and Theano, which came out of academic groups like UC Berkeley, NYU, and Montreal. But as we moved to second- and third-generation deep learning software systems, we increasingly see that these are no longer built in academia and maintained by grad students; instead they're built by big industry groups that have a lot of staff to properly engineer and support them. Some of the bigger frameworks of recent years are Caffe2 and PyTorch, built by Facebook; TensorFlow, built by Google; MXNet, which Amazon supports heavily; CNTK, the Cognitive Toolkit, from Microsoft; and PaddlePaddle from Baidu. There are some others, like Chainer, and JAX is this really cool upcoming framework from Google, but we'll see if it gets traction in the next few years. For a while in deep learning it felt like every time you did a project you had to learn a new deep learning framework, because these things were constantly evolving; that also made it hard to give this lecture, because I needed to redo everything at least every year. But thankfully, in the last few years things have settled a bit, and the two big mainstream deep learning frameworks today are PyTorch and TensorFlow, so those are the ones I want to focus on. Recall that one of the central ideas in deep learning is this notion of a computational graph: when we build and train neural network models, we build up a computational graph in the forward pass for each of the operations we perform inside the model, and then during the backward pass we traverse that graph backwards in order to compute the gradients of the loss with respect to the weights.
Then we ultimately make our gradient step to update the model. This idea of a computational graph is really central to all deep learning frameworks. If you think about the killer features we expect from any deep learning framework, I think it boils down to three. First, the framework should enable rapid prototyping of new ideas, which means it should provide a lot of common layers and utilities so we don't have to rewrite them ourselves for every project. Second, it should automatically compute gradients for us using this abstraction of a computational graph. In this class we've forced you to write gradients yourself, because I think it's very instructive, but out in the world it's much more efficient in practice to let the software compute them for you, so a really efficient and effective mechanism for computing gradients with backpropagation and computational graphs is the second key feature of these frameworks. And third, it should run all this stuff on GPUs, or TPUs, or whatever other exotic hardware devices might come out in the future. This is maybe a little hard to appreciate today, because in PyTorch it's so easy to run things on GPU that it seems trivial, but if you look at the development of deep learning frameworks over time and the history of GPGPU computing in general, it wasn't so long ago that running general-purpose code on GPUs was really pretty painful. I think things like TensorFlow and PyTorch have been a triumph not just for deep learning but for GPU computing more generally: they've made it really easy and accessible to write code that runs parallelized operations on GPUs even without knowing the specifics of how the hardware works.
With that context in mind, let's dig a little more into some parts of PyTorch that we haven't explored yet. One big caveat we always need to mention is software versions. For this class we are now using PyTorch version 1.2, because apparently Colab updated in the last week or so from 1.1 to 1.2, so we're using 1.2 from now on; sorry about that. I was actually surprised by this: I was running some examples and saw all of a sudden that PyTorch was a new version. There was no public announcement, no release notes; they just silently swapped the PyTorch version for everyone using Colab. I think that actually bit some people on the homework around random seeds, because when PyTorch switched from 1.1 to 1.2, the outputs you get for a fixed random seed changed, and I think we saw some confusion on Piazza around that point, since we had developed the assignment on 1.1. I apologize for that, but it just happened silently. Another big caveat is that if you're looking at older PyTorch code, especially pre-1.0, there were a lot of breaking changes in the PyTorch API, especially around version 0.4. PyTorch 1.0 has been relatively stable in its API for about the last year, but if you're out in the wild on the internet looking at random GitHub repos, you'll still see a lot of really old PyTorch code that might not work under the more stable releases of today; that's just a caveat to watch out for. The way I think about PyTorch is that there are three different levels of abstraction it gives you for building your neural network models. The lowest level of abstraction is the idea of a tensor.
That's the level of abstraction you've been working with so far in all the homework assignments: a PyTorch tensor is just a multi-dimensional array that runs on GPUs and that you can do operations on; it's basically like numpy, but it runs on the GPU. In the first couple of homework assignments you've seen how you can use only this tensor API for building neural network models, computing gradients, and performing gradient descent; you can do all of that using just tensors. But PyTorch gives us a couple of higher levels of abstraction for building neural network models. The second is the autograd level, for automatic gradients: this is the part of PyTorch that lets us automatically build up computational graphs and backpropagate through them to compute gradients. Finally, there's yet another level of abstraction called the module level, where a module is something like an object-oriented neural network layer that stores state, like learnable weights, inside itself; by composing modules, it becomes very easy to build big neural network models. The way this breaks down is that in the first three assignments we constrained you to use only the tensor interface, but starting on assignments 4, 5, and 6 you'll be using the full generality of these different layers of abstraction. As a running example throughout the rest of the lecture, we're going to train a two-layer fully-connected network with ReLU nonlinearities and an L2 loss function, and see how this looks in different frameworks and at different levels of abstraction. Something like the code on the screen now should be very familiar to you by this point in the class.
with by this point in the class. This is training a neural network using only tensor operations. At the top we're creating random tensors for our data and our weights; here we're doing a forward pass, which is a matrix multiply, then a ReLU, then another matrix multiply, and an L2 loss function; here we're computing the backward pass, where we manually compute the gradients of the loss with respect to the weights; and here is the gradient descent step, where we actually update the weights. This type of code should be very familiar to you at this point in the semester, and again, you know that in order to move all this computation onto the GPU, all you need to do is change the device argument that the tensors are placed on, and then all of your compute transparently runs on the GPU. Now we can move to the next level of abstraction, which is autograd, and the first observation is that the code is quite a lot shorter, which is hopefully a good thing. The idea with autograd is that whenever you construct a tensor in PyTorch, there's another flag you can set, called requires_grad, and all you have to do to make PyTorch build computational graphs for you is set requires_grad=True on the tensors you want tracked. So here's an example of training the exact same fully-connected neural network model, but using the autograd level of abstraction in PyTorch. We're still initializing random tensors for the weights and the data, but now for our weight matrices w1 and w2, when we construct them, we pass this additional flag requires_grad=True, which tells PyTorch that these are tensors we want it to track and build computational graphs for.
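For comparison, before going further into autograd, the tensor-only training loop just described might be sketched like this. The sizes, seed, and learning rate here are illustrative choices, not the lecture's exact values:

```python
import torch

torch.manual_seed(0)
device = torch.device('cpu')  # change to 'cuda' to run transparently on GPU

# Random tensors for data and weights
N, D_in, H, D_out = 16, 8, 32, 4
x = torch.randn(N, D_in, device=device)
y = torch.randn(N, D_out, device=device)
w1 = torch.randn(D_in, H, device=device)
w2 = torch.randn(H, D_out, device=device)

lr = 1e-4
losses = []
for t in range(50):
    # Forward pass: matmul -> ReLU (via clamp) -> matmul -> L2 loss
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)
    loss = (y_pred - y).pow(2).sum()
    losses.append(loss.item())

    # Backward pass: manually backprop gradients of the loss w.r.t. weights
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Gradient descent step
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2
```

Every intermediate (h, h_relu) has to be kept around by hand, because the backward pass refers to it explicitly; autograd removes exactly this bookkeeping.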
Now our forward pass is much abbreviated, because we no longer need to explicitly keep track of all the intermediate results ourselves; any intermediates that will be needed for backpropagation are stored by PyTorch automatically, somewhere in the computational graph that is being built up for us. These lines compute the forward pass; it's the exact same sequence of operations you saw in the non-autograd example, except now we can just throw away the intermediates because we don't need to store them explicitly. After we've computed the forward pass and the loss, we have this very magical line, loss.backward(), and this one line of code tells PyTorch to traverse the graph for us and compute all the gradients with respect to all of the weights. To walk through this a bit more concretely: whenever PyTorch performs a primitive operation on tensors, it checks whether any of the inputs to that operation have the requires_grad=True flag set, and if any of them do, PyTorch will silently start building up a computational graph data structure in the background that represents the computation. So when this first matrix multiplication between x and w1 runs, because w1 has requires_grad=True, PyTorch starts building up a computational graph that has the inputs x and w1, a node for the matrix multiplication, and an output tensor that is stored in the graph but not given a name, because we have no need for it in our code. The other rule is that when
PyTorch performs an operation on any input that has requires_grad=True, the output also gets requires_grad=True, so this all works out recursively. When the .clamp line executes, it's operating on that anonymous output tensor, which also has requires_grad=True, so this line builds up a new chunk of the computational graph; when the matrix multiply line runs, we build up more graph, now roping in w2; the subtraction adds more graph; the power adds more; and finally the sum is the last part of the graph. Basically, every time PyTorch performs a primitive operation, it is just adding onto whatever computational graph has been built so far. Now, when you call loss.backward(), note that loss is the scalar at the very end of the computational graph, and a couple of things happen. One is that PyTorch searches through the graph to find any leaf nodes that have requires_grad=True. The leaf nodes are the inputs to the graph, which here would be x, w1, w2, and y, and of those, w1 and w2 have requires_grad=True set. PyTorch then does a graph search to find the paths between that output node, loss, and all of the input nodes that require gradients, and after finding those paths it actually performs backpropagation through each of those nodes, one at a time. After backpropagation finishes, it throws away the graph and frees all the memory that was used for it,
and it stores the gradients that were computed for the inputs in w1.grad and w2.grad, which are new tensors containing the gradients computed during backpropagation. Now that loss.backward() has magically computed all the gradients for us, we can use w1.grad and w2.grad to perform our gradient descent step. Then there's a very important step, and a very common source of errors: you need to explicitly set those gradients to zero after you perform your gradient descent step. The idea is that some of your tensors might already have gradients hanging around from before, and when you call loss.backward(), it doesn't overwrite the existing gradients; instead it computes the new gradients and adds them to whatever old gradients were already there. Normally we want to compute fresh gradients at every iteration, which means you need to explicitly zero the gradients of your tensors on every iteration. I'm embarrassed to admit I've made this bug more times than I want to say; it's easy to forget these lines, and sometimes things will still sort of train even if you forget to zero the gradients, which can make it very, very difficult to debug. I think it's maybe a bit of a design flaw in PyTorch; it might have been better to overwrite by default and let you opt into accumulation, but this is the API we have to live with, now that it's 1.0 and supposed to be stable. Another bit of weirdness you see in this code is that the gradient updates are scoped under a with torch.no_grad(): context manager, which tells PyTorch not to track, not to build a computational graph for, any
operations that happen within that context, even if some of the tensors involved do have requires_grad=True; the context manager just overrides whatever the requires_grad flag was on individual tensors. The reason for this is that we don't want to backpropagate through our SGD steps: that would cause memory to leak from iteration to iteration, it would be very confusing, and it would not be the SGD algorithm we mean to implement. So as a rule of thumb, whenever you're doing an update rule, or zeroing gradients, or anything else that sits outside the computational graph, you want to scope it under one of these no_grad context managers. PyTorch autograd is also extensible; in this example we've written out the forward pass by calling these basic PyTorch operators. Yes, there was a question. The question was whether the gradients are calculated numerically or analytically. They use the backpropagation algorithm we've talked about so far in this class, which is not finite differences; that's what we usually think of when we say numeric gradients. Really it's not quite either: numeric gradients usually means something like a finite-differences approximation using the limit definition of the derivative, and backpropagation is not that. When you say symbolic differentiation, that usually means you build up some symbolic data structure and then manipulate those structures symbolically to compute a new expression for the gradients, and backpropagation is also not quite that. Backpropagation is instead this structured application of the chain rule, using the chain rule in the right way at every point in the computation in order to compute our gradients. It looks a little bit like symbolic differentiation, but it's not quite traditional symbolic differentiation either; it's the backpropagation algorithm that we've been covering in this class.
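Putting these pieces together, the autograd version of the training loop might be sketched like this (sizes and learning rate are again illustrative):

```python
import torch

torch.manual_seed(0)
N, D_in, H, D_out = 16, 8, 32, 4
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# requires_grad=True tells PyTorch to track these tensors and build
# computational graphs involving them
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

lr = 1e-4
losses = []
for t in range(50):
    # Forward pass: intermediates are tracked in the graph for us
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    losses.append(loss.item())

    # Backprop through the graph; fills w1.grad and w2.grad
    loss.backward()

    # Update weights outside the graph, then zero the accumulated grads
    with torch.no_grad():
        w1 -= lr * w1.grad
        w2 -= lr * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
```

Note how the update and the grad.zero_() calls both sit under no_grad, exactly as the rule of thumb above says.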
Now, in this example we've implemented the forward pass of the network using only basic operators in PyTorch, like matrix multiplication, clamping, subtraction, and so on, and it would be kind of a pain if you had to do all of your computation that way. Thankfully, PyTorch integrates very nicely with basic software abstractions in Python. For example, you can define a Python function that inputs PyTorch tensors and outputs PyTorch tensors, and then use that Python function inside the forward pass of your neural network, and it will work just fine. In this example we're defining a sigmoid function, using the mathematical definition of the sigmoid, and then using it in the forward pass of our network; this gives us some modularity in the way we implement our networks. But it's important to point out that when you use Python functions to modularly structure your neural networks, the computational graph level does not know about Python functions. The way it really works is that when you call the Python function, each primitive PyTorch operation that happens inside of it just keeps adding to the overall computational graph. Another way to put that is that defining things with Python functions lets your code look nice and modular and structured, but every time your code runs there will just be one giant computational graph that is a flattened version of all the operations performed as your program traced through all the different functions you called.
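As a small sketch of this idea, a plain Python sigmoid used with autograd tensors:

```python
import torch

def sigmoid(x):
    # An ordinary Python function on tensors; each primitive op here
    # (negation, exp, add, division) adds its own node to the graph
    return 1.0 / (1.0 + (-x).exp())

torch.manual_seed(0)
x = torch.randn(4, 3, requires_grad=True)
out = sigmoid(x).sum()
out.backward()
# x.grad now holds d(sum sigmoid(x))/dx, computed by backpropagating
# through the flattened graph of those four primitive operations
```

Nothing in the graph remembers that these nodes came from a function called "sigmoid"; the function boundary exists only in the Python source.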
In particular, when this sigmoid function runs, you can see that it just adds more nodes to the computational graph, one for each of the primitive operations used to implement it: the negation, the exponential, the plus one, and the division. When computing gradients through the sigmoid this way, it will backpropagate through each of these primitive nodes one by one using normal backpropagation. But you may know that computing gradients through the sigmoid function in this way is actually quite numerically unstable: if you implement the backward pass of sigmoid by backpropagating through this graph, you'll fairly frequently get NaNs (not-a-number), overflow errors, infinities, or other bad numerical behavior in your computation. If you recall from a couple of lectures ago, for the particular case of the sigmoid function the local gradient has a very nice mathematical form that you can work out on paper: backpropagating through the sigmoid as an entire unit unto itself gives a very nice expression for its local gradient. In cases like this, where we have some special knowledge about the way gradients should be computed, PyTorch gives us another layer of abstraction: implementing a new autograd function, which is very similar to the little forward/backward modular APIs that we've talked about previously. By defining a new autograd function, you can define new primitive operations that give rise to just one node in the computational graph. When defining a new autograd function, you define a forward method that computes the forward pass, and also an explicit backward method.
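Such a function might be sketched like this. This is a sketch following the pattern described, not PyTorch's actual built-in implementation of sigmoid:

```python
import torch

class Sigmoid(torch.autograd.Function):
    # One graph node with a hand-derived backward, instead of
    # backpropagating through neg/exp/add/div separately
    @staticmethod
    def forward(ctx, x):
        y = 1.0 / (1.0 + (-x).exp())
        ctx.save_for_backward(y)  # stash what backward will need
        return y

    @staticmethod
    def backward(ctx, grad_y):
        y, = ctx.saved_tensors
        # Local gradient of sigmoid has the closed form y * (1 - y);
        # downstream gradient = upstream gradient * local gradient
        return grad_y * y * (1.0 - y)

torch.manual_seed(0)
x = torch.randn(4, 3, requires_grad=True)
out = Sigmoid.apply(x)
out.sum().backward()  # uses our backward for this single node
```

Calling it through Sigmoid.apply is what registers the single node in the graph.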
The backward receives the upstream gradients, computes the local gradient, and returns the downstream gradients. Now, if we use this autograd-function version of sigmoid in our computational graph, it gives rise to only a single node in the graph, and to backpropagate through it, PyTorch just uses the backward function we implemented for that one node. It's very nice that PyTorch gives you this flexibility and freedom to easily implement new basic elements of computational graphs, but in practice this is less common to see; I think it's much more common to just use Python functions to implement most things, though sometimes you do need this mechanism to define your own new primitive operators. Then the next layer of abstraction in PyTorch is the nn module, which gives us an object-oriented API for building up neural network models, and this becomes very expressive very quickly. Here you can see that nn gives us this object-oriented API: torch.nn.Sequential is a container object that maintains a sequence of layer objects, and inside it we provide layer objects, like a linear layer, that store the learnable weight and learnable bias as attributes of the object. That means we can define the structure of our neural network model just by composing these layer objects and sticking them into containers, and when we compute the forward pass, all we need to do is pass the data to the object we built. torch.nn also gives you common loss functions, so you don't need to implement those from scratch anymore; we can still call loss.backward() to compute gradients, and the gradient descent step now looks very similar: we iterate over all the learnable parameters in the model and update them using our gradient descent rule.
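That nn-level loop might be sketched like this, with a manual parameter update over model.parameters() (layer sizes and learning rate are illustrative):

```python
import torch

torch.manual_seed(0)
N, D_in, H, D_out = 16, 8, 32, 4
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Layer objects store their own learnable weight and bias as attributes
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')  # common losses come for free

lr = 1e-3
losses = []
for t in range(50):
    y_pred = model(x)              # forward: just call the container
    loss = loss_fn(y_pred, y)
    losses.append(loss.item())
    loss.backward()
    with torch.no_grad():          # update every learnable parameter
        for param in model.parameters():
            param -= lr * param.grad
    model.zero_grad()              # still have to zero the grads
```

model.parameters() walks all the weights and biases of every layer in the container, so the update loop doesn't need to name them individually.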
Of course, it's kind of annoying to implement your own gradient descent rules all the time, so PyTorch also provides optimizer objects that implement common gradient descent rules. Here you can see that we build an optimizer object that encapsulates the Adam optimization algorithm, and we pass it the model parameters that we'd like to optimize, as well as the hyperparameters like the learning rate. Now in our training loop, after computing gradients by calling loss.backward(), all we need to do is call optimizer.step(), and that automatically makes the gradient step for us. Of course, we also need to remember to explicitly zero the gradients after we step, and this is again a common source of bugs. Another very common thing to do in PyTorch is to define your own new nn modules. In this example our model had a structure that made sense as a sequence of layer objects, but in more general situations that aren't just sequences, you'll need to define your own module subclass that represents your computation. Here we're again defining a two-layer neural network with a nonlinearity, by defining our own custom subclass of the Module class. In particular, you can see that the initializer of our custom subclass takes in the sizes of the hidden layer and of the outputs we need, constructs layer objects (these Linear objects) in the initializer, and assigns them as member variables of our module object. Then in the forward pass we can use any of the module objects built in the initializer to perform our computation: we compute the forward pass by passing the input to the first layer object, clamping its output to apply a ReLU, and passing the result to the next layer object to predict the final scores.
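A sketch of that pattern, combining a custom Module subclass with an Adam optimizer (names and hyperparameters here are illustrative):

```python
import torch

class TwoLayerNet(torch.nn.Module):
    # Build layer objects in __init__, use them in forward
    def __init__(self, D_in, H, D_out):
        super().__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        h_relu = self.linear1(x).clamp(min=0)  # ReLU via clamp
        return self.linear2(h_relu)

torch.manual_seed(0)
x = torch.randn(16, 8)
y = torch.randn(16, 4)
model = TwoLayerNet(8, 32, 4)

# Optimizer object encapsulating Adam, given the parameters to optimize
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

losses = []
for t in range(50):
    loss = torch.nn.functional.mse_loss(model(x), y, reduction='sum')
    losses.append(loss.item())
    optimizer.zero_grad()   # again: don't forget to zero the grads
    loss.backward()
    optimizer.step()        # Adam update for all parameters
```

Assigning the Linear layers as attributes is what registers their weights with model.parameters(), which is how the optimizer finds them.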
The rest of the training loop looks very similar. I should also point out that a very common pattern is to mix and match modules and Sequentials, or to nest custom modules inside other custom modules, and this is a way to very powerfully, quickly, and easily build up very complex neural network architectures. Here's a little toy example that gives you a hint of what you can do with this. Remember, in the last lecture we talked about how many neural networks are built up of homogeneous blocks: something like a residual network has this residual block design, and the overall network repeats that same block design over and over again. In situations like that, it's very common to define a custom module subclass for the block you want to use, and then build your model by instantiating that subclass multiple times and stacking the instances together, maybe in a Sequential container. In this example we've defined a kind of weird little neural network block structure that I don't think is actually a good idea, but it fits on the slide: a little block design that computes two linear layers in parallel. Our input goes through one fully-connected layer on the left and a separate fully-connected layer on the right, with different weights and biases; the outputs of those two fully-connected layers are multiplied elementwise, and the result of that multiplication then goes through a ReLU nonlinearity. I suspect this would actually not perform very well at all, but it's an instructive little example, and you can see that it's very easy to implement this idea by defining our own module subclass, where in the initializer we define two separate nn.Linear objects.
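The toy block from the slide might be sketched like this (the class name and sizes are illustrative):

```python
import torch

class ParallelBlock(torch.nn.Module):
    # Two parallel linear layers whose outputs are multiplied
    # elementwise, then passed through a ReLU
    def __init__(self, D_in, D_out):
        super().__init__()
        self.fc1 = torch.nn.Linear(D_in, D_out)
        self.fc2 = torch.nn.Linear(D_in, D_out)

    def forward(self, x):
        return torch.relu(self.fc1(x) * self.fc2(x))

# Mix the custom block with a Sequential container, stacking it twice
model = torch.nn.Sequential(
    ParallelBlock(8, 32),
    ParallelBlock(32, 32),
    torch.nn.Linear(32, 4),
)
out = model(torch.randn(16, 8))
```

The Sequential container neither knows nor cares that its layers are custom blocks; anything that is a Module composes the same way.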
Then in the forward pass we use those two linear objects to compute two parallel outputs and do the elementwise multiplication between them, and you can see that when we build the model, we build a Sequential container that contains multiple instantiations of this little parallel block design. This is a paradigm you'll see very commonly in PyTorch code: mixing and matching your own custom module subclasses with Sequential containers. PyTorch also gives you some nice mechanisms for loading data that can automatically build minibatches, iterate through datasets, and do all that kind of good stuff for you. PyTorch also provides a bunch of pretrained models that you can literally get in one line: all you have to do is import torchvision, and then if you want a pretrained AlexNet, you just say alexnet = torchvision.models.alexnet(pretrained=True). It will go out on the internet, download the pretrained weights of the model automatically, cache them on disk for you, and return those weights for you to use right away in your code. This makes it very easy to quickly use pretrained models to build up your own designs and architectures. Now, a major point of the design in PyTorch is this idea of a dynamic computation graph. What this means is that every time we run the forward pass, we build up a new graph data structure, and then when we call loss.backward(), we throw away that graph data structure; the next time we run an iteration, we build up another new graph data structure from scratch, and then again throw it away. This maybe seems a little inefficient: it seems kind of silly to build up the graph data structure at every iteration and then throw it away, just to rebuild the exact same thing at the next iteration.
But the benefit of dynamic computation graphs is that they let you use normal, regular Python control flow to control the flow of information through your neural network models, and that lets you do very strange and funny and crazy things using very simple, intuitive code. Here's an example that, again, doesn't really make sense and that I don't recommend anyone use in practice. What we're doing is initializing two different weight matrices for the second layer of our fully-connected network, w2a and w2b, and the choice of which weight matrix we use at each iteration of training depends on the loss at the previous iteration of training. Again, this is probably a terrible idea and I don't encourage anyone to write models that do this, but if you have such a crazy idea that you want to implement, you can see that it's very easy to do in PyTorch, just by using normal, regular Python control flow. In this way, on one iteration we might build up a computational graph that involves w2a, then throw it away, and on the next iteration build up a new graph that has w2b instead. The main benefit of these dynamic computational graphs is that they let the structure of the computational graph be determined by normal, regular Python control flow, in cases where you want to perform slightly different operations on different iterations. Yes, there was a question: the question was, what about TensorFlow? The quick answer is that yes, it does this too, as of yesterday. Now, the big alternative to dynamic computation graphs is the notion of a static computation graph. Here what we want is a two-stage procedure: one stage where we build up a graph and then fix that graph for all time, and a second stage where we iterate through and reuse the same computational graph many times.
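The dynamic w2a/w2b example described above might be sketched like this (again, a toy idea, with illustrative sizes):

```python
import torch

torch.manual_seed(0)
x = torch.randn(16, 8)
y = torch.randn(16, 4)
w1 = torch.randn(8, 32, requires_grad=True)
w2a = torch.randn(32, 4, requires_grad=True)
w2b = torch.randn(32, 4, requires_grad=True)

prev_loss = 5.0
used = []
for t in range(10):
    # Ordinary Python control flow picks which weight matrix goes into
    # this iteration's graph, based on the previous iteration's loss
    w2 = w2a if prev_loss < 5.0 else w2b
    used.append('a' if w2 is w2a else 'b')
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    loss.backward()           # builds and consumes this iteration's graph
    prev_loss = loss.item()
    with torch.no_grad():
        for w in (w1, w2a, w2b):
            if w.grad is not None:   # the unused branch has no grad yet
                w -= 1e-4 * w.grad
                w.grad.zero_()
```

Each iteration the graph contains only the branch that was actually taken; the other weight matrix simply doesn't appear in that graph.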
This is actually new functionality in more recent versions of PyTorch: PyTorch now gives you the ability to use static computation graphs via the JIT, or just-in-time, compiler. What this means is that we can define our model as a Python function that takes tensors as input and returns tensors as output, and then there's this very, very magical line, graph = torch.jit.script(model). What this magical line does is introspect the Python source code of that function, parse the abstract syntax tree of the function's source, and then build a computational graph for you automatically by traversing the source code, returning it to you as a graph object that you can call. In particular, this model function has a conditional statement for that funny thing we were doing, so the graph that gets built for you has to include a node that captures the conditional statement, and now on every forward pass we simply reuse that same graph object. This can be even more succinct: you don't even have to compile the thing explicitly; you could just add a @torch.jit.script annotation to your code, and this compilation process will happen for you automatically when the function is first imported into Python. Now, one big benefit of static computation graphs over dynamic ones is the potential for optimization. Imagine the graph you write is some long sequence of convolutions and batch norms and ReLUs and things like that; with a static computation graph, you can imagine using compiler techniques to try to rewrite that graph in a way that is computationally more efficient. For example, you might want to fuse some operations, like convolution and ReLU, and actually rewrite the graph in some nontrivial way that makes the computation faster.
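A sketch of the scripted version of the dynamic example above, using the decorator form:

```python
import torch

# The decorator parses this function's Python source and compiles it
# into a static graph; the conditional becomes a node inside the graph
@torch.jit.script
def model(x, y, w1, w2a, w2b, prev_loss: float):
    w2 = w2a if prev_loss < 5.0 else w2b
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    return (y_pred - y).pow(2).sum()

x = torch.randn(16, 8)
y = torch.randn(16, 4)
w1 = torch.randn(8, 32)
w2a = torch.randn(32, 4)
w2b = torch.randn(32, 4)
loss = model(x, y, w1, w2a, w2b, 10.0)
graph = model.graph  # the compiled static graph, inspectable as text
```

Every call now reuses the same compiled graph; printing model.graph shows the graph's intermediate representation, including the conditional.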
With a static computation graph, you can amortize the cost of computing those optimizations and graph rewrites: just do it once at the beginning of the program, and then enjoy the speedups for the rest of the iterations, whereas it might not make sense to separately re-optimize the graph at every iteration; that might be too slow. Another big benefit of static computation graphs is this idea of serialization. What happens in practice with machine learning models is that people want to train their models in some very expressive programming language like Python, but once they have their models trained, they'd like to deploy those models in environments that do not depend on Python. For example, with a static computation graph, you can train your model in Python, export the static computation graph as a data structure on disk, and then load that static graph object into a C++ API, to run your trained model in a way that no longer depends on the Python interpreter. With dynamic computation graphs, by contrast, the code that builds the graph and the code that executes the graph are all intertwined, so if you want to use the thing in production, you'll probably need to depend on a Python interpreter. This was a big motivation for all of these tech companies building strong static-graph functionality into their deep learning frameworks. A big downside of static computation graphs is debugging. If any of you have used TensorFlow before, you've seen that it's sometimes very difficult to debug, because there ends up being a lot of indirection between the Python code that you write and the code that eventually gets executed, so you can sometimes get very confusing error messages; it can be very hard to know what's going on and what broke, and very difficult to profile
performance or other things like that, whereas with a dynamic computation graph, the code you write is pretty much the code that runs, so dynamic graphs tend to be much easier to debug. Now, some applications of dynamic computation graphs are cases where the structure of the model depends in some way on the input to the model. A canonical example is a recurrent neural network (we'll talk about these in detail in a later lecture), where the idea is that we input a sequence, and the number of time steps in the model equals the length of the sequence, so we want to perform different amounts of computation depending on the length of the sequence passed into the model. There are also examples like recursive neural networks, which get used in NLP: there the input to the model is some kind of semantic parse of a sentence, and the way the neural network model performs its computation varies dynamically based on the structure of the parse tree that gets passed as input. I don't expect you to understand the details of these; they're just meant to give you some flavor of where dynamic computation graphs can really shine. And here's an example from Johnson et al., two years ago, that is another example of dynamic computation graphs: here one part of the model predicts what structure the second part of the model should use. The first part of the model actually predicts some kind of program, and that program is then implemented by the second part of the neural network model. So not only does the computation of the model depend on the input; the computation the model performs depends on the output of a previous part of the model. To implement models like this, I used PyTorch, because you need this heavy dependence on dynamic computation graphs to build crazy models like this.
I think there's a lot of open area for people to try out really crazy ideas once we have this ability to build dynamic computation graphs very efficiently. So that gives us just a couple of minutes to talk about TensorFlow. TensorFlow, as I mentioned, has actually been going through kind of a schism in the last year or so. The classic version is TensorFlow 1.0, and actually yesterday was the release candidate of the final release of TensorFlow 1.x. TensorFlow 1.0 used static computation graphs by default everywhere; some of the later 1.x versions added options to use dynamic computation graphs, but the main mode for doing computation in TensorFlow 1.0 was static computation graphs. Now, TensorFlow 2.0 was actually released this week, on Monday, and in TensorFlow 2.0, dynamic computation graphs are the default, with an option to use static computation graphs instead. So I think right now is a very dangerous time to read TensorFlow code on the internet, because you'll see some horrible, horrible mix of 1.0 and 2.0, and sometimes even when you google bits of documentation, they link between each other and it's just a complete mess. So I think you should be very careful about reading TensorFlow code over the next couple of months, but hopefully in a year or so things will settle on the 2.0 API. To give you a flavor of classic TensorFlow 1.0: it had a structure kind of like this, and I don't want to walk through the details, but in TensorFlow 1.0 your code always had two big chunks, one at the top where you define your computational graph, and one at the bottom where you actually repeatedly run the graph. This is the classic way TensorFlow code was written, and it can be very difficult to debug, because
what can happen is that in the piece of your code where you build the computational graph, maybe you have a shape error, or a data type error, or you mismatch the API in some way and pass a tensor that doesn't make sense to some function. You don't actually get the error on the line of your code that caused the problem; you only get an error message when you actually try to run the graph. So when you have an error, your stack trace will point to this mysterious session.run line, and you'll get a stack trace deep into the guts of TensorFlow, while the thing that caused the problem was maybe a shape error ten lines earlier, and that can make debugging classic TensorFlow code very challenging. Now, in TensorFlow 2.0, it seems like they basically copied PyTorch's API to some extent, because the dynamic-graph API in PyTorch had been very popular, very easy to work with, and very easy to debug. So if you look at this two-layer network example in TensorFlow 2.0, it actually looks a lot like PyTorch: at the top we're defining some TensorFlow tensors (not PyTorch tensors) to store our weights and our data. Remember that in PyTorch, in order to track gradients, we needed to set requires_grad=True; the equivalent in TensorFlow is to wrap them in a tf.Variable object. (It turns out PyTorch 0.4 used to wrap things in a Variable object too, but that was an annoying API and it got deprecated; maybe TensorFlow will catch up in 3.0.) Once we've done that, we tell TensorFlow that we want it to track gradients for us by scoping our computation under a tf.GradientTape object, and that means any operations that happen under this gradient-tape scope build up a computational graph, much like the way PyTorch builds computational graphs when it encounters tensors with requires_grad=True.
PyTorch builds up computational graphs when it encounters tensors with requires_grad=True. Now, to compute our gradients, after we exit the tf.GradientTape scope, we call tape.gradient of the loss with respect to the parameters, and that's a very nice line, it lets you remember what you're taking derivatives of and with respect to what. That returns us new TensorFlow tensors containing the gradients, and then we can perform our gradient descent step as usual. So this should look very similar to the autograd version of the PyTorch code that we've seen. Now, TensorFlow 2.0 also offers a very similar annotation-based JIT mechanism for static computation graphs that is very similar to the TorchScript annotation we saw in PyTorch; it's kind of nice to see these two frameworks converging on some similar ideas, when in the past they used to be very different. We can use static computation graphs in TensorFlow 2.0 by defining our step function as some Python function that takes the inputs, and then annotating it with this tf.function annotation; again, this will perform a lot of magic and introspect the Python source code and build up a computational graph for us. One thing to note about the TensorFlow version of these things is that in TensorFlow, the gradient computation and the update can actually be part of your static graph as well, so everything you do in one training iteration is now folded into the computational graph. Then, in the training loop, all you need to do is call the step function, and inside the step function it will compute the forward pass, compute the gradients, and make a gradient step, all for us. TensorFlow 2.0 has also standardized on this package called Keras, which gives a high-level API for working with neural network models that is very
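The gradient-tape idea just described, recording operations during the forward pass and replaying their local derivatives in reverse, can be sketched in a few lines of plain Python. All names here (Tape, record, gradient) are hypothetical, for illustration only; this is not the TensorFlow or PyTorch API.

```python
# Toy "gradient tape": record each op's local derivatives during the
# forward pass, then replay them in reverse to get gradients.
class Tape:
    def __init__(self):
        self.backward_fns = []   # one closure per recorded operation
        self.grads = {}          # variable name -> accumulated gradient

    def record(self, out, local):
        # local maps each input name to d(out)/d(input)
        def backward():
            up = self.grads.get(out, 0.0)          # upstream gradient
            for name, d in local.items():
                self.grads[name] = self.grads.get(name, 0.0) + up * d
        self.backward_fns.append(backward)

    def gradient(self, out):
        self.grads = {out: 1.0}                    # base case: d(out)/d(out) = 1
        for fn in reversed(self.backward_fns):     # replay the tape backward
            fn()
        return self.grads

# Forward pass for f = a*b + c, recording local derivatives as we go.
a, b, c = 2.0, 3.0, 4.0
tape = Tape()
q = a * b
tape.record("q", {"a": b, "b": a})     # d(a*b)/da = b, d(a*b)/db = a
f = q + c
tape.record("f", {"q": 1.0, "c": 1.0})

grads = tape.gradient("f")
# df/da = 3.0, df/db = 2.0, df/dc = 1.0
```

This is the core mechanism both frameworks share; the real libraries differ mainly in how the recording is triggered (a tape scope versus requires_grad) and in operating on tensors rather than scalars.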
similar in some ways to the nn package in PyTorch. So here's an equivalent of training this two-layer neural network in TensorFlow 2.0 using Keras. You can see that there is again an object-oriented API that lets us build up our models as some sequence of layer objects; it also defines loss functions and optimizer objects for us, and now in our training loop we just need to compute the forward pass of the model, compute gradients, and use the optimizer to make our gradient step. Then it turns out there's a slightly different bit of API we can use that actually has the optimizer call backward for us. This ends up looking very similar to the training loop using nn and autograd in PyTorch, and again it lets you build up very powerful, complex neural network models with very small amounts of code. Another very nice thing I should mention about TensorFlow is the TensorBoard functionality. This is basically a web server that lets you track statistics about your model, and it's really great: a lot of people use it and a lot of people love it. Basically, what you do is add some logging code inside the forward pass of your model that says things like "at iteration 10 my loss was 25" or "at epoch one my accuracy was 50%", or any other statistics you want to track over the course of training. After you add this little bit of logging into your training loop, you can start up the TensorBoard server and get all these beautiful graphs to visualize the results of your models. TensorBoard was so widely loved that the PyTorch folks actually provided an API to let PyTorch talk to TensorBoard, so TensorBoard is this very nice thing a lot of people are using in both frameworks now. So then, my summary of PyTorch versus TensorFlow: I think you should have guessed that PyTorch is my personal favorite, because we've used it for all the homework assignments in the class and I talked about it first, before TensorFlow. But some of the big downsides of PyTorch right now are that it cannot use TPUs, although maybe that's coming; today, if you want to use TPUs to accelerate your machine learning models, you have to use TensorFlow. Another big downside of PyTorch right now is that it's not very easy to run PyTorch models on mobile devices. I think the JIT mechanisms in PyTorch have made it fairly easy to deploy PyTorch models in some non-mobile contexts, but if you want to deploy your trained models on an iPhone or something, that's actually quite difficult to do in PyTorch right now. As for TensorFlow: TensorFlow 1.0 is very confusing, with static graphs by default, pretty messy APIs, and difficult debugging, but it's what you'll find if you look at any TensorFlow code online right now; it'll mostly be TensorFlow 1.0 code. I think TensorFlow 2.0 actually looks quite nice, but the jury is still out; it's very new, and we'll see whether or not it ends up getting adoption or being a smoother way to develop models in TensorFlow. I'm hoping TensorFlow 2.0 will be great, but we'll see. So then, to summarize today: we talked about three different bits of hardware, CPUs, GPUs, and TPUs, and we talked about software, where the main takeaways were static versus dynamic graphs and PyTorch versus TensorFlow. With all of that in mind, come back next time and we'll talk about some nuts-and-bolts details about getting your neural network models to converge.
Deep Learning for Computer Vision
Lecture 6: Backpropagation
So, welcome back to the class. We're up to lecture six, and today we're going to talk about backpropagation. Where we are in this class: last time we talked about neural networks, and we saw that neural networks are this very powerful category of classifiers that let us do much more powerful computation than had been possible with the linear classifiers we had been considering so far. You'll recall that neural networks had this fairly simple functional form of a matrix multiply, then an element-wise non-linearity that we called an activation function, together with another matrix multiply, and that we could chain these things together to get really deep neural networks. We saw this notion of space warping to demonstrate one way in which neural networks are much more powerful than linear classifiers: neural networks are able to have nonlinear decision boundaries in the input space. We also talked about neural networks as universal approximators, to give another sense in which neural networks are a very powerful class of functions. And we saw this notion of non-convexity: neural networks, despite their very powerful ability to represent many functions, result in non-convex optimization problems that have very few theoretical guarantees. Then we were left with a bit of a problem at the end of the last lecture, which is that we now have the ability to write down very complicated expressions describing loss functions that we want to minimize using stochastic gradient descent in order to train our classifiers, be they neural networks or linear classifiers or other types of deep learning models. The problem is: how do we actually go about computing gradients in these models? We know that we can write down arbitrary loss functions, and if we can find some way to compute the gradient of the loss with respect to all the weight matrices of a
model, then we know we can use the optimization algorithms we talked about a few lectures ago to actually minimize the loss and find good models that fit our training data. The topic of today's lecture is how we actually go about computing these gradients, these derivatives, for arbitrarily complex types of neural networks or other types of functions. The first strategy you might try if you attack this problem naively is to just derive the gradients on paper. You know that we can write down these loss functions, and you can expand out a loss function on paper as an equation with many terms; here I've expanded out, I think, an SVM loss function with a linear classifier. So one strategy is to write it all down on paper, expand out all the terms, and end up with a giant equation that represents the loss as a function of your data and the weights of your model; then, if you're very familiar with the rules of matrix calculus, you could imagine trying to churn through this and compute expressions on paper for the gradients of all the learnable weight matrices that appear in the model. This turns out to not be a very scalable solution, so I apologize if anyone actually attempted this for the second assignment. If you did go this route and tried to compute these gradients on paper for the second assignment, you will have noticed some of the shortcomings of this approach. One is that it's extremely tedious: you probably needed quite a lot of paper to get this thing right when working with loss functions like the cross-entropy loss or the SVM loss. Another problem is that it's not very feasible for complex models. For something like a linear model you can probably get by with this approach, but as we scale to much more complex models, this approach of writing down gradients and deriving them on paper just will not scale. And a final, somewhat subtle problem with deriving everything on paper is that it does not lead to a modular design. Suppose that, once you've derived your loss function for a linear classifier with an SVM loss, tomorrow you want to derive the gradients for a linear classifier with a softmax loss, or a two-layer neural network with a softmax loss, or a five-layer neural network with an SVM loss, or any other combination of losses and architectures and regularizers that you might imagine. If you were deriving these things from scratch on paper for every combination of loss function and architecture, you would have to re-derive everything from scratch every time, and in practical situations it's much nicer to have some modular approach where you can swap different types of models and architectures and loss functions in and out, which allows you to iterate much more quickly as you try to find models that work well. So the approach that we tend to take in deep learning is, as you may have guessed, not deriving gradients on paper. Since we're computer scientists, we'd like to find data structures and algorithms that can help us solve tedious problems, and the data structure we use to help us solve this problem of computing gradients is called a computational graph. A computational graph is a directed graph that represents the computation we perform inside our model. Here on the left we can see the inputs to the model: the data x and the learnable weights W coming in as nodes on the left of the graph. And as we proceed from left to right in this graph, we see nodes that represent bits of fundamental computation that we want to perform in the
process of computing this function. We see this blue node that represents the matrix multiplication between the input x and the weight matrix W; we have this red node that represents our hinge loss, if we're using an SVM classifier; we have this green node that represents the regularization term in our model; we have a sum node that represents the sum of the data loss and the regularization loss; and then finally, on the right, we have the output of the computational graph, which is the scalar loss L that we want to compute when training our model. Now, this computational graph formalism, when applied to something like a linear model, might seem a little bit silly and trivial, because in a linear model, as we said, there are only a couple of operations we need to perform in order to compute the loss, and the formalism of writing them down as a graph might seem like overkill. But this will become critical as we move to more complex and larger models. For example, something like AlexNet is a deep convolutional neural network with five convolutional layers and three fully-connected layers, with non-linearities and regularizers at every layer and a loss function at the end, where the images are coming in at the top, going through many, many layers of processing, and our final scalar loss is coming out at the bottom. For something like this, you probably do not want to derive the gradients on paper; instead, you really want to use this computational graph formalism to build up a data structure that represents all of the computation the model will perform in order to compute the loss. And these things can get arbitrarily crazy. Here's an example of a model called a neural Turing machine. If you remember your intro theory of computation class, you remember that a Turing machine is a formalized model of computation. Well, it turns out that a couple of years ago some folks wrote a neural network that is
kind of like a soft, differentiable approximation to the Turing machines that you learn about in that intro to computation class. Here on the screen we're showing the computational graph that arises from this differentiable neural Turing machine, and you can see that it's very big and complex: you definitely don't want to compute gradients in this model by hand; you really want to rely on the computational graph formalism to compute gradients for you. But it actually gets even worse than this, because for the neural Turing machine this is showing only one time step of the model, and in practice this model gets unrolled over many time steps as a kind of recurrent network. So you can see that once you get into these very complex models, you very quickly get computational graphs that are much, much too large to even fit on a slide, and you definitely want to use some kind of directed-graph traversal algorithm to automatically compute gradients on top of this computational graph structure. Hopefully this has motivated why it's going to be really critical for us to use computational graphs in order to compute gradients in our big, complex neural network models. Now that we've got this motivation, let's see a concrete example of how we can use a computational graph to help us compute gradients in a little tiny model. In order to actually fit an example on a slide, we have to use a very trivial computation, but as you've seen, real models will be doing much more complicated processing. Here we're showing a very simple function of three scalar variables x, y, and z: to compute the output, we're going to add x and y and then multiply by z. This is maybe a weird loss function, a weird learning problem that doesn't quite make sense, but hopefully this simple example will help us walk through exactly what it means to compute gradients in a computational
graph. By the way, backpropagation is the algorithm that we use for computing gradients in a computational graph. Now suppose that we want to evaluate this function at a particular point in the input space, say x is minus 2, y is 5, and z is minus 4. The first step in using this computational graph is called the forward pass. In the forward pass, we proceed with computation from left to right, and we perform all of the operations specified by the nodes of the graph in order to compute the output values from the input values. So in this example, we'll simply add x and y to get an intermediate that we're going to give the name q, and then, to compute the final output value f, we're going to multiply q by the input value z; by running the forward pass of this graph, we end up computing our final output value of minus 12 in this case. Now, in the backward pass, our goal is to compute the derivatives of the output with respect to each of the inputs. In this case our output was f, so we want to compute the derivatives df/dx, df/dy, and df/dz for the three inputs that appear on the left side of the graph. And we'll proceed from right to left, because this is backpropagation, so it needs to proceed backward compared to the forward pass. We always start with the base case, and in the base case, on the right, we want to compute the derivative of f with respect to f. Anyone got an idea what that ought to be? Yeah, that's trivial, that's one, because if we change f a little bit, then f is going to change by the same amount, so the derivative is 1. By the way, when we're computing derivatives using backpropagation in a graph, we'll often draw a little diagram like this where we show the values that are
computed at each node above the corresponding line, and then we'll write down the gradients, or the derivatives, below the corresponding line during the backward pass. Now, the second step: we want to compute the derivative of f with respect to z. In order to do this, we can look at this little intermediate computation: we know that f was q times z, so we know that the derivative of f with respect to z should just be q, and then we can go back into the computational graph and look up what the value of q was. In this case it was three, so the derivative of f with respect to z in this little piece of the graph is going to be three, and now we've got one of the three gradients that we needed to compute. For the next piece, we need to compute the derivative of f with respect to q; you can see that we're kind of marching backward, in the reverse of a topologically sorted order of the graph. In order to compute the derivative of f with respect to q, we again know that f is q times z, so this local derivative should be z; we can look up the value of z from the forward pass of the graph and compute the derivative as minus 4. We can continue proceeding to the left: now we want to compute the derivative of f with respect to y, and here things get a little bit interesting, because now we need to remember the chain rule from calculus. Here the value y is not directly connected to the output value f, so in order to compute the derivative of f with respect to y, we need to take into account the influence of y on the intermediate variable q. The single-variable chain rule from calculus tells us that df/dy is equal to dq/dy times df/dq, and this is very intuitive: the idea is that if y changes by a little bit, then q is going to change by some little bit, dq/dy, and then if q changes, then f is going to change by some
little bit, which is the other derivative; to take into account these two effects, we need to multiply them. Now, in the context of neural networks, the three different terms in this equation have particular names that we'll use over and over again. This left-hand term, df/dy, we'll often call the downstream gradient, because this is the value of the derivative that we're computing at this step in the process. This value dq/dy is going to be called the local gradient, because this is the local effect of how much this value y affects the next intermediate output q. And this value df/dq is going to be called the upstream gradient, because if we kind of zoom in on this little piece of the graph around y, the upstream gradient tells us how much the output of this piece of the graph affects the final output at the very end of the graph. And then, of course, the chain rule tells us that to get the downstream gradient, we just need to multiply the local and upstream derivatives. Here we know that q is equal to x plus y, so the local gradient, or local derivative, in this case is just one; when we multiply these two together, we see that the derivative of f with respect to y is the same as the derivative of f with respect to q, so we get our downstream gradient of minus four. Was that clear to everyone? Okay, good. Now it's very similar when we want to compute the derivative with respect to x: we again multiply the upstream and local gradients, and again the local gradient is one because this was a simple addition, so we compute our final derivative value. So you can see in this relatively simple example how we can use computational graphs to help us mechanize the process of computing derivatives in very complex functions: during the forward
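To make the arithmetic concrete, here is the whole worked example as a few lines of plain Python (no framework needed), computing the forward value and all three derivatives exactly as just described:

```python
# Forward pass for f = (x + y) * z at the example point.
x, y, z = -2.0, 5.0, -4.0
q = x + y            # intermediate node: q = 3
f = q * z            # output node:       f = -12

# Backward pass, proceeding right to left.
df_df = 1.0          # base case: derivative of f with respect to itself
df_dq = z * df_df    # local grad of q*z w.r.t. q is z  -> -4
df_dz = q * df_df    # local grad of q*z w.r.t. z is q  ->  3
df_dx = 1.0 * df_dq  # chain rule: local grad of x+y w.r.t. x is 1 -> -4
df_dy = 1.0 * df_dq  # chain rule: local grad of x+y w.r.t. y is 1 -> -4
```

Note that the downstream gradients for x and y come out equal to the upstream gradient df/dq, because the local derivatives of the addition are both one.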
pass, we compute everything from left to right, and then in the backward pass we step backward through the graph and compute these little derivatives at every point in the graph. Now, this way of thinking about computing gradients is very useful because it's modular. One way to think about it is that we can zoom in on one little node inside this computational graph, and what's really great about this mechanism of using backpropagation to compute gradients is that each little piece of the graph does not need to know or care about the rest of the graph: we can just perform local processing within each node, and then, by aggregating all this local processing, we end up computing these global derivatives throughout the entire graph. If we step through this exact same process that we talked about on the previous slide, but in the context of a single local node, it looks something like this. Each node in the graph computes some little local function f; this local function f takes two inputs, x and y, and during the forward pass we apply the local function to compute the local output z. That's the forward operation of this little independent node. After the forward operation of this node runs, the output z will be passed off to some other part of the graph, and it might be reused by other nodes in arbitrarily complex ways; we don't know and we don't care, from the perspective of this one node. We just know that we computed an output and passed it on to someone else. But at the end of the process, somehow, at the end of the graph, someone far away from us will compute some final loss L, and then backpropagation will start and someone outside of us will pass gradients back to us, outside the purview of this little node, and eventually this backpropagation process will hit this one node that we care
about, and this one node will receive a message from upstream in the graph which tells us the derivative of the loss with respect to z. That is: how much does this loss, which may be very, very far away from this node, change if we change the local output of our node by a little bit? That's exactly what this upstream gradient dL/dz tells us. At this point, we can compute the local gradients that are internal to this node, which tell us, for each output of the node, how much that output is affected by each input of the node. And now this node can simply compute the downstream gradients by multiplying the local gradients and the upstream gradient, and these downstream gradients then get passed along to other nodes backward in the graph. Again, this node doesn't need to know or care exactly how those downstream gradients will be used elsewhere in the graph; they're simply used somewhere, and by the end, when this whole backpropagation process terminates, we'll be left having computed the gradients of the loss with respect to all of the original inputs of the graph. And we were able to compute this global property without really reasoning at all about the global structure of the function we were trying to compute; it only required us to think locally about what's going on inside each node of the graph, and then to have some data structure to track how all those nodes are connected together. So hopefully this is going to be a big improvement over trying to derive those big gradient expressions on paper. Here's another example of running a computational graph; this one should look something like a logistic classifier. The details of exactly what the function is computing are somewhat irrelevant for the purposes of this lecture; we just care about
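The modular node just described can be sketched as a small Python class: each node implements only its own forward function and its own local derivatives, and its backward method just multiplies local by upstream. This is an illustrative sketch, not the API of any particular framework.

```python
class MultiplyNode:
    """One self-contained graph node computing z = x * y.

    It knows nothing about the rest of the graph: forward() caches its
    inputs, and backward() turns an upstream gradient dL/dz into
    downstream gradients dL/dx and dL/dy via the chain rule.
    """
    def forward(self, x, y):
        self.x, self.y = x, y    # cache inputs for the backward pass
        return x * y

    def backward(self, upstream):
        dx = self.y * upstream   # local grad dz/dx = y, times upstream
        dy = self.x * upstream   # local grad dz/dy = x, times upstream
        return dx, dy

node = MultiplyNode()
z = node.forward(3.0, -4.0)      # z = -12.0
dx, dy = node.backward(2.0)      # pretend dL/dz = 2 arrived from upstream
# dx = -8.0 (y * upstream), dy = 6.0 (x * upstream)
```

A whole graph is then just a collection of such nodes plus bookkeeping for how their inputs and outputs are wired together, run forward in topological order and backward in reverse.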
computing gradients in arbitrarily complex functions. Here on the left we've got a function that takes five inputs: w0, x0, w1, x1, and w2. In the forward pass, we're going to compute the inner product between the first two elements of the weights and the first two elements of x, then add the bias term w2, and then compute a kind of e-to-the-minus-something on this computational graph. The computation will proceed much as we saw in the previous example: in the forward pass, we compute the outputs of the graph by evaluating the forward function for each of these nodes, which ends up computing the final scalar output value on the right, and then, during the backward pass, we iteratively multiply the upstream gradient by the local gradient at each node in the graph to compute the downstream gradients. We always start with the base case: the derivative of the output with respect to itself is always 1. Next, we look at this 1/x node; we know that the local derivative of 1/x is minus 1 over x squared, which gives us the local gradient, and we can multiply these to get the downstream gradient. We can step through again: adding a constant has a local gradient of 1, so we can easily pass those gradients backward; we can compute the local gradient of the exponential function, which is trivial, so that lets us easily compute the downstream gradient. This process steps backward one node at a time, where at each point we're just computing these local gradients and then multiplying the upstream and local gradients. But what's really interesting about this particular computational graph is that there are multiple ways in which we could have chosen to structure the computation. As I've written it, I've written it out in terms of very basic, very primitive, very
fundamental arithmetic operators: addition, multiplication, exponentiation, division, adding a constant. I have broken down this computation into its barest fundamental arithmetic primitives, and as you noticed by looking at this graph, breaking everything down into the barest arithmetic primitives ends up a little bit tedious. We actually have the freedom to define for ourselves the types of primitive operations we want to use in our graph. So here we've broken everything down into the basic primitives, but we also have the freedom to define arbitrary new types of nodes that can internally compute more complicated functions. An example of why this might be useful is that this little chunk of the graph that I've outlined in blue independently computes the so-called sigmoid function, that is, 1 over 1 plus e to the minus its argument. This sigmoid function shows up all the time in machine learning: we've seen it in the context of the binary cross-entropy when you're doing two-class logistic regression, and it also shows up in many other contexts. What's kind of nice is that we have the freedom, as the designers of this little graph language, to pick primitive graph elements that will be useful or easy to compute in the backpropagation process; in particular, we can choose primitive functions to assign to nodes in the graph such that the local gradients become easy to compute, and this turns out to be the case for the sigmoid function. If you work through some math on paper (I don't want to walk through the details of the expression here), computing the derivative of the sigmoid function with respect to its input, you can see that the local gradient of the sigmoid function actually has a
very simple functional form: the local gradient of the sigmoid function is simply equal to the output of the sigmoid function multiplied by 1 minus the output of the sigmoid function. What this means is that we can very easily compute the local gradient of this entire blue chunk of the graph without storing the whole intermediate chunk of the graph, and this is an example of us, the graph designers, cleverly choosing the primitives that we want to use in our graph language in such a way as to make it easier or more efficient to compute the derivatives during the backward pass. This is definitely something you should consider doing. You could imagine an equivalent version of this graph which collapsed the whole blue box into a single node that would then receive the upstream gradient on the right, compute the local gradient using this expression we've derived at the bottom of the slide, and immediately return the downstream gradient on the left, skipping over all of that intermediate computation inside the box. This idea of defining more complex primitives to use in our graphs is something that we'll use quite a lot, in general, in order to make our computational graphs either more efficient or more semantically meaningful. Another thing we can start to notice when we look at these computational graphs is that some patterns become apparent when you look at how information propagates forward during the forward pass and backward during the backward pass. The way I sometimes think about this is like a little circuit: during the forward pass, we're flowing information forward from the input to the output, and then, during the backward pass, we're flowing information backward, from the loss through each of these intermediate nodes, back to the original parameters of the model
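That sigmoid identity, the local gradient being sigma(x) times (1 minus sigma(x)), is easy to sanity-check numerically in plain Python by comparing it against a finite-difference estimate of the derivative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = 0.7
s = sigmoid(x)

# Local gradient via the closed-form identity from the slide.
analytic = s * (1.0 - s)

# Central finite-difference estimate of the same derivative.
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

assert abs(analytic - numeric) < 1e-8
```

This kind of numerical gradient check is also a generally useful debugging tool whenever you hand-write a backward pass for a new primitive.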
for which we wanted to compute gradients. When you have this kind of circuit interpretation of these computational graphs, you start to notice some patterns, some dualities, between how information flows during the forward and backward passes. The simplest example is that the add gate, the add function, acts as a gradient distributor during the backward pass. If we have a little function which locally computes its output as the sum of its two inputs, maybe here seven is three plus four, then during the backward pass, as we saw in our very first computational graph example, recall that the derivative of x plus y with respect to x is 1, and the derivative of x plus y with respect to y is also 1. The local gradients for both inputs are 1, which means the downstream gradients for both inputs are equal to the upstream gradient, and this generalizes to a sum with an arbitrary number of terms. What this means is that during the backward pass, a sum node is going to distribute and copy the gradient from upstream into the downstream, which is kind of a nice intuition about what's happening when you have addition inside your model. Dual to the sum node is the copy node. This is a kind of trivial node that receives some input and has two output values that are both identical copies of the input. This seems like maybe a stupid operation at first glance: why would you ever introduce such an operation in your graph? Well, you might want to do this if you want to use one term of your model in multiple places downstream in the graph. For example, in a regularization setting, we actually want to use each of our weight matrices in two ways in our model: first, we want to use the weight matrix to compute scores in the main branch of the model, and second, we need to use the weight matrix to compute
our regularization term like L 2 or L 1 regularization so in order to use our weight matrix in to downstream parts in the graph we might imagine inserting a copy node somewhere in the graph that now makes two identical copies of the weight matrix that now can be used in different parts of the graph and the important bit is that even when we've produced these two copies because they may have been used in different ways we might end up computing different gradients with respect to the two copies and now is then during the backward pass the the upstream gradients that the copy node receives might be different for the two outputs that it's produced but now during the backward pass we simply need to sum those two gradients which shows that kind of the the ad rate the add gate on a copy gate are somehow dual that the add gate forward operation is kind of the same as the copy gate backward operation and vice versa so these two operations are somehow dual to each other another another kind of funny thing that's going on is the multi is a multiplication you can think of this as a kind of swap multiplier because you know that the derivative of X Y with respect to X is y and derivative of X Y with respect to Y is X which means the local gradient is for one of the inputs is the other input and the local grading for the second input is the first input which means that when we compute the downstream gradient the downstream gradient is equal to the upstream gradients times the other input and this has kind of a funny implication if you think about now we have a multiplication inside your model it's gonna mix the gradients all up in some kind of a funny way and because during the backward pass the multiple multiple the backward pass of a multiplication gate also involves multiplication you can see that you're going to end up with some very law very large products in the backward pass I mean this can be of this you can imagine might be a problem in certain types of models another 
one that you might see a lot is a map ski so here the map skate is gonna take it to scaler inputs and return the maximum of the two inputs and here what what does that function look like that looks kind of like a Rayleigh function a little bit and you can imagine that for the input that was indeed the maximum the local gradient was one and for the input which was not the maximum the local gradient was zero so this so that then this has the interpretation that during the backward pass a Maps gate acts as a gradient router that it's going to take the upstream gradient and route that upstream gradient towards the one input that happens to be two that happens to be the max and the downstream gradients of all the other inputs that were not the max are all being are all going to be set to zero so then you can imagine that if we had a model that was taking a max of like many many many things then during the backward pass we'll end up with a gradient that is mostly zero so you can imagine maybe that's a problem for getting good grade and flow throughout the entire model so maybe we might not prefer to use max for that reason so this is these are all it'd be obviously sort of trivial mathematical expressions but it's sort of interesting to think about how these trivial derivatives of scalar functions actually can have non-trivial consequences for the way in which gradients tend to flow through these giant neural network models so now that we've hopefully gotten a bit of intuition about what is back propagation and how can it help us automate the process of computing gradients and big models I think it's helpful for you guys to talk about how you actually might implement this stuff in code because that's something you have to do in your homework so hopefully you will get good at that well I think there's I think about at least two major ways in which people tense in which I tend to think about you're amending backpropagation the first is what I what I call a flat 
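The gate rules above (add distributes, multiply swaps, max routes, copy sums) can be sketched as small helper functions. This is my own illustration, with made-up function names, not code from the lecture slides:

```python
# Local backward rules for the basic gates discussed above.
# Each takes the upstream gradient and returns downstream gradient(s).

def add_backward(upstream):
    # Add gate: gradient distributor. Both inputs receive the upstream
    # gradient unchanged, since d(x+y)/dx = d(x+y)/dy = 1.
    return upstream, upstream

def mul_backward(x, y, upstream):
    # Multiply gate: "swap multiplier". Each input's downstream gradient
    # is the upstream gradient times the OTHER input.
    return upstream * y, upstream * x

def max_backward(x, y, upstream):
    # Max gate: gradient router. The upstream gradient is routed to
    # whichever input was the max; the other input gets zero.
    return (upstream, 0.0) if x >= y else (0.0, upstream)

def copy_backward(upstream1, upstream2):
    # Copy gate: dual to add. The two upstream gradients flowing back
    # into the two copies are summed.
    return upstream1 + upstream2
```

Each rule is just the local-gradient-times-upstream-gradient computation specialized to one gate.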
So now that we've hopefully gotten a bit of intuition about what backpropagation is and how it can automate the process of computing gradients in big models, let's talk about how you actually implement this stuff in code, because that's something you have to do in your homework. There are at least two major ways in which I tend to think about implementing backpropagation. The first is what I call a flat implementation. Here the idea is that we write a single Python function that computes the entire computational graph. Maybe this function computes a linear classifier: it takes a mini-batch of data and your weights and computes the loss on that mini-batch; hopefully that sounds familiar from Homework 2. And now you're asked to write a single function that both computes the loss and computes the derivative of the loss with respect to each of those weight matrices. One thing you can do is go to town on paper, grind through the derivatives in a mess, and hopefully eventually pass the gradient check. But you can instead structure this computation in a much simpler way that makes writing the backward-pass code very easy.

As an example, take the little computational graph on the left, the sigmoid example from a couple of slides ago. We input our two weights w0 and w1, our two inputs x0 and x1, and our bias term w2. The forward pass of our code simply applies these multiplies and adds and computes our loss L. The backward-pass code goes right after the forward-pass code, and the trick is that the backward-pass code looks like a reversed version of the forward-pass code. What do I mean by that? The very first thing we do in the backward pass is the trivial base case: the gradient of the output with respect to itself is 1. You should probably omit this line in an actual implementation, but I wanted to be super pedagogical here. This first line of backward code corresponds to the rightmost thing in the computational graph. The second line of backward code corresponds to back-propagating through the sigmoid function, and it corresponds to the last line of the forward pass. In the forward pass, the sigmoid took s3 as input and returned L as output; the corresponding backward line inverts that around a little bit, taking grad_L as input and producing grad_s3 as output. So there's a one-to-one correspondence between each line in the forward pass and a line in the backward pass, with the inputs and outputs swapped between the two corresponding lines.

This correspondence continues. The second-to-last line of the forward pass adds s2 and w2, and because that operation has two inputs, it gives rise to two lines in the backward code; here we see the intuition of the add gate as a gradient distributor, since we are simply distributing, or copying, the gradient to the two inputs. A similar thing happens with the third-to-last line of the forward pass. In the fourth-to-last line we have a multiplication gate, so we get the local-gradient interpretation of the multiply as a swapper, and the final line is also a multiplication and works the same way. And this is kind of amazing: we actually wrote a correct implementation of backpropagation without writing out any math. We didn't write down any equations on paper. All we had to do was write the code for the forward pass and then, in our minds, transform that code to generate the code for the backward pass.

This is the way you should actually go about doing Homework 2 if you haven't completed it yet, and it will make your life much, much easier when it comes to computing gradients. It turns out that once you get enough practice with this, you almost never need to do math on paper to write gradient code: you simply look at the code you wrote for the forward pass and invert it, using all these little local rules that you pick up over time. This idea of flat backpropagation, implemented by inverting the forward-pass code, is something you should do in Assignment 2. For the SVM, for example, you might compute the scores, then the margins, then the data loss; in the backward pass you do all of those things in reverse, except that the exact operation in each backward line is the local computation, which is the multiplication of the upstream gradient and the local gradient to get the downstream gradient. You can do this for an SVM, and you can do this for a two-layer neural network; interesting that I chose these examples. So I highly recommend that you become familiar with this way of transforming your forward-pass code to compute the backward pass.

Of course, this mechanism of flat backpropagation is really useful when you just need to write one gradient function end to end, but it fails the modularity test: with a flat implementation, if we change the model, or the activation function, or the loss, or the regularizer, we have to rewrite our code, and that's going to be painful and annoying.
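The flat forward/backward pairing described above can be written out for the sigmoid example. This is a sketch with variable names of my own choosing; the backward pass is just the forward pass, line by line, in reverse:

```python
import math

def f_and_grads(w0, x0, w1, x1, w2):
    # Forward pass: L = sigmoid(w0*x0 + w1*x1 + w2)
    s0 = w0 * x0
    s1 = w1 * x1
    s2 = s0 + s1
    s3 = s2 + w2
    L = 1.0 / (1.0 + math.exp(-s3))

    # Backward pass: the forward code reversed, one line per line.
    grad_L = 1.0                      # trivial base case: dL/dL = 1
    grad_s3 = grad_L * L * (1.0 - L)  # local gradient of sigmoid
    grad_s2 = grad_s3                 # add gate distributes...
    grad_w2 = grad_s3                 # ...to both of its inputs
    grad_s0 = grad_s2
    grad_s1 = grad_s2
    grad_w1 = grad_s1 * x1            # multiply gate swaps its inputs
    grad_x1 = grad_s1 * w1
    grad_w0 = grad_s0 * x0
    grad_w0_x0 = grad_s0 * w0
    return L, (grad_w0, grad_w1, grad_w2, grad_w0_x0, grad_x1)
```

No equations on paper were needed: every backward line is the upstream gradient times a local gradient, in the reverse order of the forward lines.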
So there's a second, more industrial-strength way to implement backpropagation, which is to use a modular API, and this fits very much with the idea of local computation around nodes that we saw. With a modular implementation of backpropagation, we will typically define some kind of computational graph object, and this object can run forward and backward passes through the entire graph by doing a topological sort on all of the nodes, calling a little forward operation on each node during the forward pass, and calling the corresponding backward operation on each node during the backward pass. The piece of code I'm showing here is just pseudocode; it's not real code and could have typos, I don't know.

But this one actually is real code. In PyTorch you can define your own functions using this API by subclassing torch.autograd.Function. You are defining your own little computational node object that represents a node in a computational graph, and this object defines two functions, forward and backward. forward takes three inputs; the interesting ones are x and y, which correspond to the input values this node receives during the forward pass, specified as torch tensors. In this case we're just working with scalars, so these would be scalar tensors. We also receive a context object that we can use to stash arbitrary bits of information we want to remember for the backward pass. You can see that in the forward pass we simply define the output z = x + y and return z; these are all operations on torch tensors that you got familiar with in the first two assignments. In the backward pass, we write the function called backward, which receives that same context object from the forward pass, so we can use it to pop off any stuff we needed to
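The modular structure described above, paired forward and backward functions per operation chained by a graph object, can be sketched in plain Python. The class names here are hypothetical, for illustration only; this mirrors the shape of the API, not PyTorch's actual internals:

```python
# A tiny modular autograd sketch: each node stashes what it needs in
# forward, and computes downstream = upstream * local in backward.

class Multiply:
    def forward(self, x, y):
        self.x, self.y = x, y          # stash inputs for backward
        return x * y
    def backward(self, grad_out):
        # multiply gate: downstream gradients swap the inputs
        return grad_out * self.y, grad_out * self.x

class Add:
    def forward(self, x, y):
        return x + y                   # nothing to stash
    def backward(self, grad_out):
        # add gate: distribute the upstream gradient to both inputs
        return grad_out, grad_out

# Chain the nodes into a graph computing L = a*b + c, then run
# backward over the nodes in reverse topological order.
mul, add = Multiply(), Add()
L = add.forward(mul.forward(2.0, 3.0), 4.0)
grad_ab, grad_c = add.backward(1.0)    # base case: dL/dL = 1
grad_a, grad_b = mul.backward(grad_ab)
```

Swapping in a different node class changes the model without touching the backward machinery, which is exactly the modularity that the flat implementation lacks.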
remember from the forward pass in order to compute our derivatives; in this case we need to remember x and y. We also receive grad_z, the upstream gradient, also stored in a torch tensor, and internally we compute the product of the local gradient and the upstream gradient to get our downstream gradients, which are the derivatives with respect to the two inputs, and we simply return those. This is real torch code that you could use to implement your own sum of two scalars. Addition is of course already implemented in torch, so I don't recommend you actually use this implementation, but if for some reason you did want to define your own arbitrary function in torch, with your own forward and backward passes, this is how you can actually do it.

If you look deep into the guts of the PyTorch codebase, basically what PyTorch is is this autograd engine plus a ton of these little functions that define paired forward and backward operations. Here I'm showing you that there are a lot of files, somewhere on the PyTorch GitHub repo, and if we zoom into one of these files, this is actually one of many implementations of sigmoid, deep inside the guts of PyTorch. You can see that here we are defining the forward pass of sigmoid in deep C++ or C somewhere in the backend. Unfortunately it calls into this other function which is defined somewhere else; it's a bit of spaghetti code if you ever look at the backend of PyTorch, but we can ignore that. And there's the second paired function, THNN Sigmoid updateGradInput, which computes the backward pass. It does some boilerplate, unpacking tensors and checking inputs, since this is a real industrial-strength codebase. But then there's this critical line where you can see PyTorch actually computing the backward pass of the sigmoid layer, deep inside C, nested in some macros, with some crazy stuff going on. But basically, what PyTorch is is these paired forward and backward functions that can then be chained together into these big computational graphs.

Basically, up to this point we've really only talked about backpropagation and computational graphs using scalars, which is really easy and intuitive if you remember everything from single-variable calculus. But in practice we often want to work with vector-valued functions, or functions that operate on vectors or matrices or tensors of arbitrary dimension, so we also need to think about what it means to do backpropagation in computational graphs with vector or tensor values as well. Here we need to recap some of the different flavors of multivariable derivatives. You remember the normal single-variable derivative: given a function with a scalar input and a scalar output, the derivative of the output with respect to the input gives a local linear approximation; if we change the input by a little bit, how much does the output change? Then we've got the familiar gradient operation, and a gradient is the type of derivative that is appropriate when our function takes a vector as input and produces a scalar as output. The gradient dy/dx in this case is a vector of the same size as the input, where each element of the gradient vector says how much the output changes if the corresponding element of the input changes by a little bit; it's a vector of these classical single-variable derivatives. The next generalization is a function that inputs a vector and outputs a vector, possibly of a different dimension. These things all have different names, but they're all basically the same idea, they're all derivatives, and this one is called a Jacobian. The Jacobian is a matrix with N times M elements, if those are the dimensions of our input and our output, and the idea is that it says, for each element of the input and each element of the output, how much does changing that one element of the input affect that element of the output. Because we've got N input elements and M output elements, we need N times M scalar values to represent all the possible effects of inputs on outputs.

So now suppose we've got the same picture as before. We know that we don't really need to think about graphs as a whole; we only need to think about how backpropagation works for one node at a time. So what does it mean to do backpropagation in this vector-valued case, zooming in on one node? Our little function f is now inputting two vector values, x, a vector of dimension Dx, and y, a vector with Dy elements, and producing a vector output z with Dz elements. The forward pass is easy. Eventually we receive a gradient from upstream, and in this vector-valued case it's important to remember that the loss we compute at the very end of the graph is always a scalar, no matter whether we're working with vectors or tensors or whatever. The upstream gradient we receive is the derivative of the loss with respect to our outputs: for each of the outputs from this node, if we changed it by a little bit, how much would it affect the loss way down at the very end of our computation? The local gradients in this case become Jacobian matrices, because our function is now a vector-valued function that takes two vectors as input and produces one vector as output; these Jacobians again tell us, for each output element of this node, how much it is affected by changing each input element. The downstream gradients we want are always the derivatives of the loss with respect to the inputs, and the derivative of the loss with respect to a vector input is again a vector of the same size as the input: the downstream gradient we produce for x is dL/dx, a vector the same size as x, and the downstream gradient for y is dL/dy, a vector the same size as y. To actually produce these downstream gradients, we know we need to multiply the local and upstream gradients, but now that we're working with vectors, it's not a scalar multiplication anymore. It becomes a matrix-vector product: the local gradient is this Jacobian matrix, the upstream gradient is a gradient vector, and the downstream gradient is the gradient vector we produce by doing a matrix-vector multiply between the local Jacobian matrix and the upstream gradient vector, in such a way that the shapes work out. If you're ever confused about this, I always just recommend writing out the shapes of all these things, and hopefully that will help you clarify what's going on.
As a concrete example of doing backpropagation with vectors, let's see what this looks like for the ReLU function. Remember that ReLU is an element-wise max where we clip everything below zero. Given an example input vector x = [1, -2, 3, -1], applying ReLU replaces all the negative values with 0, so our output y is the vector [1, 0, 3, 0]. This vector-valued ReLU function is one little computational node embedded somewhere in our graph, and eventually we'll be handed an upstream gradient that tells us how much the final loss would change if any of the outputs of our ReLU changed. These can be arbitrary values, positive or negative; we don't know or care how they were computed, they're just handed to us by the automatic differentiation engine. Now, the Jacobian matrix tells us, for each input of our local function, how each output of our local function changes, and we can start to notice that the Jacobian of this element-wise ReLU has some special structure. Because this is an element-wise function, the first output depends only on the first input, the second output depends only on the second input, and so on; in particular, the first input does not affect the second, third, or fourth outputs. Each input affects only the output in the corresponding position of the vector. What does that structure look like in a Jacobian matrix? It means the Jacobian is diagonal. The off-diagonal elements of the Jacobian tell us how element i of the input affects element j of the output, for i not equal to j, and for this element-wise function all those off-diagonal elements are 0. The on-diagonal elements are the scalar derivatives that tell us how much the ReLU output changes as a function of its input: the positive-valued inputs have a local gradient of 1 on the diagonal, and the negative-valued inputs have a local derivative of 0 on the diagonal. Working through this lets us form the full Jacobian matrix for the input, and remember, to compute the downstream gradient we need the matrix-vector multiply between the local Jacobian matrix and the upstream gradient vector. You can compute that thing offline, and it produces the downstream gradient vector that gets passed to the other nodes that were feeding into us as input.

And now we start to realize something interesting. For this ReLU, the Jacobian matrix was sparse; it had a lot of zeros in it. This is actually the common case for most of the functions we use in deep learning: in general, most of the local Jacobian matrices we use are going to be very, very sparse. So in practice we will almost never explicitly form the Jacobian matrix, and we will almost never explicitly perform this matrix-vector multiply between the Jacobian and the upstream gradient. You can imagine that for this ReLU example it's maybe fine to form the Jacobian for a vector of four inputs, but what if our input was a mini-batch of 128 elements, and each of those elements was a vector of 4096 dimensions? This Jacobian matrix would be super, super gigantic and super, super sparse; only the diagonal would be nonzero. In general, explicitly forming those matrices would be hugely wasteful, and explicitly performing that multiplication with a general matrix-multiply function would be hugely inefficient. So really, the big trick in backprop is figuring out a way to express these Jacobian-vector multiplies in an efficient, implicit way. For the example of ReLU, this is trivial, because we know ReLU's Jacobian has this structure where the diagonal entries are either 1 or 0.
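The contrast between the explicit (wasteful) Jacobian and the implicit trick can be sketched for this ReLU example. The upstream gradient values here are arbitrary numbers of my own choosing:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, -1.0])       # input to ReLU
upstream = np.array([4.0, -1.0, 5.0, 9.0])  # some arbitrary dL/dy

# Explicit version: the Jacobian of element-wise ReLU is diagonal,
# with 1 where the input was positive and 0 where it was negative.
J = np.diag((x > 0).astype(np.float64))
downstream_explicit = J @ upstream

# Implicit version (what you would actually write): pass the upstream
# gradient through where x > 0, and zero it elsewhere. This is an
# implicit multiply by that large sparse diagonal Jacobian.
downstream_implicit = np.where(x > 0, upstream, 0.0)
```

Both give the same downstream gradient, but the implicit version never materializes the (mostly zero) Jacobian.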
This means that for ReLU, we can compute the downstream gradient by either passing the upstream gradient through, or killing it and clipping it to 0, depending on the sign of the corresponding value of the input. The way you should think about that expression is that it is a very efficient implementation of an implicit multiplication between this large, sparse local Jacobian and the upstream gradient vector. Is this clear?

So this has been talking about vectors, but of course we need to work with tensors of rank greater than 1: matrices, 3-dimensional tensors, 4-dimensional tensors, arbitrary things. The picture is very much the same. To understand backpropagation with matrices or tensors of arbitrary dimension, we have the same setup: our local function f inputs two values x and y, which are now matrices, X of size Dx by Mx and Y of size Dy by My, and the output is also a matrix. Remember, the loss is still a scalar, and the gradient of the loss with respect to anything is always a tensor of the same shape as that thing, which tells us how the final downstream loss changes as we vary each of the independent elements of that tensor. So when we finally receive the upstream gradient, it will be a matrix of size Dz by Mz, which again tells us how much the loss changes if we change any element of our output. The local Jacobians get very interesting in this tensor case, because recall that a Jacobian needs to be able to tell us, for each scalar element of the input, how much each scalar element of the output changes, which means the Jacobian is now a kind of generalized matrix. The number of elements in the local Jacobian between X and Z is something like Dx times Mx times Dz times Mz. I often think about grouping the dimensions of this Jacobian, with one group of dimensions corresponding to the shape of the input and one group corresponding to the shape of the output; that way the Jacobian is a very high-rank tensor whose size is the product of the sizes of the input and the output. The downstream gradients proceed in much the same way: they are again tensors of the same shape as the inputs, and to compute them we still need to do a kind of matrix-vector multiply between the local Jacobians and the upstream gradients. The problem is that these are not quite vectors and not quite matrices, so we need to think of this as a generalized matrix product: you can imagine flattening the Jacobian's two groups of dimensions, the one for the input and the one for the output, which would give us a literal matrix; you could flatten the output into a high-dimensional vector and the input into a high-dimensional vector, and after flattening you could literally perform a matrix-vector multiply to get the downstream gradients. But I don't know about you: whenever I think about these high-rank implicit Jacobian matrix-vector multiplies between super-high-dimensional tensors, my brain ends up exploding, and it becomes very difficult to think about how to write down an expression for implicitly computing this giant sparse matrix-vector multiply. It's a mess.

So to help you out, I'm going to work through a strategy that you can use to implement these types of operations without actually thinking about really high-rank tensors. For that, we're going to work through a concrete example of deriving the backpropagation expression for the case of matrix multiplication. This is going to be super pedagogical, but it is a general strategy that you can apply for deriving these backpropagation operations for arbitrary functions of tensors with crazy shapes. Here we're doing a matrix multiplication between an input X, a matrix of size N by D, where I've written out a concrete example of size 2 by 3, and a weight matrix of size D by M, with a concrete 3 by 4 example. Our little computational node does the matrix multiplication and produces this output Y, which hopefully I computed correctly. Then during the backward pass, we receive an upstream gradient from somewhere out in the graph, telling us how much each element of Y affects the final loss L; again, these can be arbitrary values. Our goal is to compute the downstream gradient: how much does each of our inputs affect the loss. I think you've got that by this point, hopefully. Now, if you imagine the actual sizes of these Jacobians, they're going to be pretty big, something like (N by D) by (N by M). On the slide I've shown you simple, small examples, but for a real neural network we might have N = 64 and D and M around 4096. If you multiply that out, each of those Jacobians takes something like 256 gigabytes of memory in fp32, and the biggest GPU you can buy on the market today has, I think, 48 gigabytes of memory. So explicitly forming these Jacobians is clearly not going to work, and that's for a pretty small neural network anyway.
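The memory figure above is easy to verify. For Y = XW with a mini-batch of N = 64 and D = M = 4096, the Jacobian dY/dX has (N*D) * (N*M) scalar entries, at 4 bytes each in fp32:

```python
# Sanity-checking the Jacobian memory claim for Y = X @ W.
N, D, M = 64, 4096, 4096

# dY/dX has one entry per (input element, output element) pair.
num_elements = (N * D) * (N * M)

bytes_fp32 = num_elements * 4        # 4 bytes per fp32 value
gib = bytes_fp32 / 2**30             # convert to GiB
```

The result is exactly 256 GiB, far beyond any single GPU's memory, which is why the Jacobian must stay implicit.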
So basically, the whole trick here is to find a way to express this computation without explicitly forming the Jacobian and without explicitly doing that matrix-vector multiply; you need to find a way to do it implicitly. The way to think about this is element-wise on the input. Think about one element of the input, say x11, and consider what happens just for x11. We can compute a slice of the local gradient: the full local gradient was this really big thing of shape (shape of the input) times (shape of the output), but this slice is the derivative of the output matrix Y with respect to the single scalar input element x11. Because this is the derivative of a matrix with respect to a scalar, it is an object of the same shape as Y, telling us how much each element of the output Y is affected by that one scalar element of the input X.

Now, what is the first element of this local gradient slice? It tells us how much y11 is affected by x11. We know that in matrix multiplication, y11 is computed as an inner product between the first row of X and the first column of W. So we can write out the expression at the bottom for how we actually computed y11, and then take the derivative of y11 with respect to x11. All the other terms fall away, and the only part that matters is the term x11 times w11. We know how to take derivatives of a product of two scalars, so that piece of the local gradient slice is just w11; in this case, that's 3.

We can repeat this process for the second element of the slice, and the story is much the same. This second piece of the slice asks how much this blue element of the input affects the purple element of the output, y12, and y12 was computed as an inner product between the first row of X and the second column of W. So again it looks like an inner product, again all but one term vanishes, and the gradient just picks out w12, so the second element of the slice is 2. It is simply copying over elements of the weight matrix. I'm not going to bore you with the next two, but at this point you should see the pattern: the first row of the local gradient slice is just copying over the first row of the weight matrix W.

Now, what about the second row of the local gradient slice? The second row asks the question, remembering that we are still working on this one element of the input, how much does this blue element of the input affect this purple element of the output, y21? But y21 is computed by an inner product between the second row of X and the first column of W, and you'll notice that it doesn't involve that x11 term at all. So the local gradient is zero for this little chunk of the slice, and you can expect this pattern to repeat for all the other elements in the second row of the local gradient slice.

So, through that excessively verbose explanation, we've finally computed the local gradient slice, and now we're ready to compute one element of the downstream gradient. We can finally compute this blue element of the downstream gradient by computing an inner product between this local gradient slice and the full upstream gradient, and this tells us how much this one element of the input affects the final loss at the very end. But because the local gradient slice was copying one row of the weight matrix and the rest of it was zeros, this element of the downstream gradient is really an inner product between the first row of the weight matrix W and the first row of the upstream gradient dL/dY. At this point we can throw away the local gradient slice and forget about it, and realize that we only needed that one row of the weight matrix and that one row of the upstream gradient to compute this element of the downstream gradient.

Now we could imagine doing the same thing for another element of the input, going through the whole same song and dance to compute its local gradient slice, reasoning one element at a time. If we pick the local gradient slice for x23, the bottom-right element of the input X, the slice has the same kind of structure: it copies one of the rows of the weight matrix and is zero everywhere else. When we take the inner product, we see that this other element of the downstream gradient is again an inner product between one of the rows of the weight matrix and one of the rows of the upstream gradient. You could work through this with some complicated indexing expressions on paper, but you end up with a general expression: any individual element of the downstream gradient is an inner product between one of the rows of the weight matrix and one of the rows of the upstream gradient.
downstream gradient it ends up being an inner product between one of the rows of the weight matrix and one of the rows of the upstream gradient and once you realize this relationship you don't actually have to form that upstream gradient that that local gradient slice at all and we can compute all of these inner products between the rows of the weight matrix and the rows of the upstream gradient compute them all at once using the single matrix product between DL dy the upstream gradient and W transpose the transpose of the weight matrix and what should be what you should be people get confused about this sometimes and people look at this expression and think that we are somehow forming the Jacobian here and we are not forming the Jacobian here what this expression is doing is that by taking this matrix product between the option gradient and the weight matrix this is actually an implicit matrix vector multiplication between this very large high dimensional sparse Jacobian and the upstream gradient even though it looks like a matrix product this is actually not that this is not explicitly the Jacobian ties depth ingredient this is somehow an exhibition way to compute on that sparse sparse product and by the way a really easy mnemonic to remember this expression is that it's the only way the shapes can work out so you know that when you compute a product of two things then the derivative should involve the upstream gradient and because product is a gradient swapper it should involve the other input value so then in order to compute the downstream gradient for X we know it has to involve the upstream gradient and we know it has to involve W and then like there's only one way to multiply them that results in shapes of the same shape as X so that's the protip is like yeah this is a actual trick to remember matrix multiplication just match up the shapes and it turns out that the exact same heuristic also works for the other input right so if we want to compute now the 
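As a sanity check on this rule, here is a small numpy sketch of my own (the shapes and values are made up, not from the lecture) that compares dL/dX = (dL/dY) W^T against a finite-difference estimate, without ever forming the Jacobian:

```python
import numpy as np

# Toy shapes: X is (N, D), W is (D, M), Y = X @ W is (N, M).
np.random.seed(0)
X = np.random.randn(2, 3)
W = np.random.randn(3, 4)
dY = np.random.randn(2, 4)           # stand-in upstream gradient dL/dY

# Backward rules from the shape-matching trick; no Jacobian is formed.
dX = dY @ W.T                        # shape (N, D), same as X
dW = X.T @ dY                        # shape (D, M), same as W

# Numerical check of dX, using the scalar loss L = sum(Y * dY),
# which by construction has dL/dY equal to dY.
def loss(X_):
    return np.sum((X_ @ W) * dY)

eps = 1e-6
dX_num = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        Xp = X.copy(); Xp[i, j] += eps
        Xm = X.copy(); Xm[i, j] -= eps
        dX_num[i, j] = (loss(Xp) - loss(Xm)) / (2 * eps)

assert np.allclose(dX, dX_num, atol=1e-5)
```

Perturbing one input element at a time is exactly the element-wise reasoning above; the closed-form product agrees with it while touching only one row of W and one row of dL/dY per element.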
If we want to compute dL/dW, again we know it has to involve the upstream gradient and the other input, X, and there's only one way to match up the product so the shapes work out: dL/dW = X^T (dL/dY). So this shape-matching heuristic is a super easy way to remember how to compute these things.

Another view of backpropagation is that we have a long chain of functions, f1, f2, f3, f4, that eventually produces a scalar loss L. By the multivariate chain rule, we can expand the gradient dL/dx0 as a product of all the intermediate Jacobian matrices with the final gradient vector on the far right. Matrix and vector products are associative, so in principle we could perform this multiplication in any grouping that makes sense. What happens in backpropagation is that we've chosen the particular grouping of computing these products right to left, and what's really nice about that grouping is that we never have to do any matrix-matrix multiplication: going right to left, every step is a matrix-vector multiplication, which is much more efficient. But this whole thing hinges on computing a final scalar loss at the very end; the algorithm only works for computing derivatives of that scalar loss with respect to everything else in the graph. There are other situations where you might want something else. What if, for instance, you have a single scalar input and want the derivatives of everything in the graph with respect to it? That corresponds to a different mode of differentiation.

By the way, because of this interpretation of multiplying the Jacobian matrices right to left, backpropagation is sometimes referred to as reverse-mode automatic differentiation. That fancy-sounding name explicitly calls out "reverse mode," which suggests there should be a forward mode as well, and it turns out there is. Forward-mode automatic differentiation handles that other case: we have a scalar input value, and we want the derivative of everything else in the graph with respect to it. If you think about it in this viewpoint of vectors and Jacobians, we can again multiply in any order, but if we perform the multiplication left to right, we again only ever do matrix-vector products. You might ask why you'd ever want this; in machine learning we always have a loss at the end and compute derivatives with respect to it in order to do gradient descent. Well, I know it's hard to believe, but there's more to the world than machine learning, and it's sometimes useful to have computer systems that automatically compute derivatives when we're doing things that are not minimizing a loss function. An example might be a physical simulation where a scalar input a is a parameter giving the gravity or the friction, and we want to know how much all of the outputs of the simulation would change if that scalar input changed. This idea of automatic differentiation is generally useful far beyond machine learning; it's really useful any time you want to compute derivatives for any kind of scientific computing application.
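To make forward mode concrete, here is a minimal dual-number sketch of my own (not code from the lecture): each value carries a pair (value, derivative), and a single forward pass seeded at one scalar input yields the derivative of every intermediate value with respect to that input.

```python
import math

# Minimal forward-mode AD with dual numbers: each value carries (x, dx),
# the value and its derivative w.r.t. one chosen scalar input.
class Dual:
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.x + o.x, self.dx + o.dx)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: d(uv) = du*v + u*dv
        return Dual(self.x * o.x, self.dx * o.x + self.x * o.dx)
    __rmul__ = __mul__

def sin(d):
    return Dual(math.sin(d.x), math.cos(d.x) * d.dx)

# One forward pass gives d(everything)/da for the single seeded input a.
a = Dual(2.0, 1.0)           # seed: da/da = 1
b = a * a                    # b = a^2, so db/da = 2a
c = sin(b) + b               # dc/da = (cos(b) + 1) * 2a
print(b.dx, c.dx)
```

This is the left-to-right grouping in miniature: derivatives flow forward alongside values, which is why one pass gives derivatives of all outputs with respect to one input, the mirror image of backprop.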
But the downside is that forward-mode differentiation is not implemented by PyTorch or TensorFlow or the other big frameworks. So unfortunately, even though it has these really cool applications in scientific computing and whatnot, it's not super easy to use; there have been issues about this opened up on GitHub, but it still hasn't been merged in. Thankfully, there's a clever algebraic trick you can do to compute forward-mode gradients using two backpropagation operations. There's a link to that here; it's a super clever piece of algebra that I'd really encourage you to check out if you ever find yourself wanting forward-mode gradients in a deep learning framework.

Another really useful trick, once we have this viewpoint of backpropagation as multiplying vectors and Jacobians, is that we can use the same backpropagation algorithm to compute not only gradients but higher-order derivatives as well. As an example, consider a very simple computational graph where an input vector x0 goes through f1 to produce an intermediate vector x1, which goes through f2 to produce a scalar loss L. So far we've only talked about first derivatives: gradients and Jacobians and normal derivatives. But the second derivative of the loss with respect to the input x0 is a matrix that collects all of these second derivatives: it tells us, if we were to change one element of x0 by a little bit and another element by a little bit simultaneously, how much the loss changes; or equivalently, if we change one element of x0, how fast the gradient changes. That's the Hessian matrix, not the Jacobian; these are easy to mix up, so to be very careful: the Hessian matrix is a second derivative; the Jacobian matrix is a first derivative for vector-in, vector-out functions; the Hessian is for vector-in, scalar-out functions. Simple.

It turns out you sometimes want to compute elements in your computational graph that are a function of this Hessian matrix. An example would be a Hessian-vector product: the Hessian is a matrix, we have a vector v, and we want to compute Hv. Why would you ever want to do this? There are reasons. For instance, there's an iterative algorithm that approximates the singular values of a matrix using only matrix-vector products, so if you wanted some kind of second-order information about the singular values of the optimization landscape, you could use Hessian-vector products to approximate them. And through a bit of clever algebra, since derivatives are linear and linear functions are amazing, we can rewrite the Hessian-vector product as the derivative of the inner product between the gradient and the vector. Of course this is only true if the vector is a constant that doesn't depend on x0; if it does, you get another cross term. And now we can do something very clever in our computational graph: extend it with the backward functions themselves.
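The identity being used here, Hv = d/dx of (gradient(x) dot v) for a constant v, can be checked numerically on a toy quadratic. This is my own made-up example, not from the lecture, and it estimates that derivative by finite differences rather than by an actual double backward pass:

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T A x + b^T x with symmetric A,
# so grad f(x) = A x + b and the Hessian is exactly A.
np.random.seed(0)
n = 4
A = np.random.randn(n, n); A = A + A.T
b = np.random.randn(n)
v = np.random.randn(n)     # constant vector, independent of x
x = np.random.randn(n)

def grad(x_):
    return A @ x_ + b

# Hessian-vector product via the identity Hv = d/dx [ grad(x)^T v ],
# estimated with central differences instead of forming H.
eps = 1e-6
hvp = np.array([(grad(x + eps * e) @ v - grad(x - eps * e) @ v) / (2 * eps)
                for e in np.eye(n)])

assert np.allclose(hvp, A @ v, atol=1e-4)
```

In a framework with differentiable backward passes, the finite-difference step would be replaced by a second call to backprop on the scalar grad(x) dot v, which is exactly the graph extension described next.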
After we compute the loss, we use the function f2' to compute the gradient of the loss with respect to x1, and f1' to compute the gradient of the loss with respect to x0; these are the little backward functions implemented by the backward pass of the f gates. Then we can implement the dot product with v as just another node in the computational graph, so the final output is the inner product between the gradient dL/dx0 and the vector v we've chosen. To compute the derivative of that output with respect to x0, we just backpropagate through this extended graph. This seemed kind of magical the first time I saw it, but basically what it means is that if all of your backward-pass operations are themselves implemented using differentiable primitive operations, then you get all of these higher-order gradients for free: you can use backpropagation through these extended computational graphs to compute functions of second derivatives. And you can similarly do higher-order things as well: a third derivative is a three-dimensional tensor, and you could compute a bilinear form on top of it, which is kind of hard to think about, but you could imagine extending this type of operation to compute derivatives of arbitrarily high order using the same simple backpropagation algorithm. Unlike forward-mode automatic differentiation, this actually is implemented in all of the major deep learning frameworks like TensorFlow and PyTorch, so you can do some crazy shenanigans and write down loss functions that involve gradients.

Why would you ever want to do that? It turns out people actually do. As an example, the paper "Improved Training of Wasserstein GANs" writes a regularization term that depends on the gradient of the loss with respect to the weight matrix. The interpretation is that the regularizer penalizes the magnitude of the gradient, meaning we want to find weight matrices that result in well-conditioned optimization landscapes, which is a pretty cool idea, and you can implement this kind of crazy regularizer using exactly this idea of higher-order differentiation through computational graphs.

So the summary of what we saw today: we can represent very complex functions using the computational graph abstraction, which is hopefully a lot nicer than working things out on paper. It has a forward pass that computes values and a backward pass that computes gradients, and you don't even really need to think about the full graph most of the time; you only need to zoom in on the local picture of a node that computes outputs and multiplies local gradients by upstream gradients to get downstream gradients. The really important part for your homework, which is due in a week, is the idea of implementing backpropagation in a flat style, where your backprop code looks like an inverted version of your forward-pass code. We also talked about a more modular API that lets you swap things out in a cleaner way, and on Assignment 3 we'll implement a modular API like that for other types of neural networks.

So at this point in the class we've seen linear classifiers, we've seen neural networks, and we've seen how to compute gradients for these things. But we've had a big problem, which is that both of these models start with an operation where we take the pixels of the input image and stretch it out into a
vector, which basically destroys the spatial information of the image. That seems like a bad thing, and we'll fix it in the next lecture when we talk about convolutional neural networks.
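To illustrate the flat backprop style mentioned in the summary, here is a tiny example of my own (L = sigmoid(w*x + b), not a network from the lecture): the backward pass reads as the forward pass inverted, line by line.

```python
import math

# Flat forward/backward pass for L = sigmoid(w*x + b).
x, w, b = 2.0, -1.0, 0.5

# forward pass: compute values
s = w * x + b
L = 1.0 / (1.0 + math.exp(-s))

# backward pass: same steps in reverse, values -> gradients
dL = 1.0
ds = dL * L * (1.0 - L)    # sigmoid local gradient: sigma * (1 - sigma)
dw = ds * x                # multiply gate swaps its inputs
dx = ds * w
db = ds                    # add gate is a gradient distributor
```

Each forward line that produced a value has a mirror-image backward line that produces its gradient, which is exactly the structure the homework asks for.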
Deep Learning for Computer Vision
Lecture 21: Reinforcement Learning
So today we're up to Lecture 21, the second-to-last lecture of the semester. I was waffling a little on what I wanted to cover here; if you looked at the syllabus, this slot kind of swapped between two topics, stochastic computation graphs and reinforcement learning, and I finally decided yesterday that I wanted to talk about reinforcement learning instead of stochastic computation graphs. So that's what we're going to talk about today.

So far in this class we've talked about a couple of major paradigms of machine learning. The first, of course, is supervised learning, which we've recapped a couple of times over the last few lectures: we get a big dataset of inputs x together with the outputs y that we want to predict from those inputs, and the goal is to learn some function that predicts the y's from the x's. We've seen many, many examples of supervised learning throughout the class, and it's very effective for a lot of different problems in computer vision. Then in the last two lectures we started talking about a different paradigm, unsupervised learning, where you get no labels, only data, and the idea is to learn some underlying hidden structure of the data to be used for some downstream task. We saw a bunch of examples of this: things like clustering, dimensionality reduction, and the different types of generative models we covered.

Today we're going to talk about a third major paradigm of machine learning that is really quite different from either the supervised or the unsupervised paradigms, and that's the notion of reinforcement learning. Reinforcement learning is about building agents that can interact with the world, that can interact with some kind of environment. Rather than just trying to model some function from inputs to outputs, there's going to be an agent, like a little robot, that goes and interacts with the world: it observes what it sees, performs some actions based on what it sees, and then gets some reward signal telling it how good its actions were. The goal is to have this agent learn to perform actions in such a way that maximizes the rewards it receives over its lifetime.

I should point out that reinforcement learning is a massive topic in machine learning, and people do in fact teach entire semester-long classes just on reinforcement learning. So this lecture is not meant to give you any kind of comprehensive understanding of the state of the art; it's really meant to give you an introduction and a brief taste of how reinforcement learning works, a couple of simple algorithms, and how it can be integrated into some of the deep neural network systems we've talked about this semester. The overview for today: first we'll talk in a little bit of generality about what the reinforcement learning problem is, what it can be used for, and how it differs from the other machine learning paradigms we've seen; then we'll cover two simple algorithms for actually solving reinforcement learning tasks, Q-learning and policy gradients. That should give you a very brief introduction and a taste of what reinforcement learning is and what it can do. So, to be a little bit more formal about the reinforcement
learning problem: there are going to be two major actors. One is the agent, which you can think of as a robot roaming around in the world and performing actions, and the other is the environment, which is the system the agent interacts with. The way to think about this is that the agent is the thing we're trying to learn; we have control over what the agent does. The environment is given to us from the outside; the agent just has to interact with it, and we typically don't have much control over what happens inside it.

Now, the environment and the agent communicate back and forth in a couple of different ways. First, the environment provides the agent a state s_t, which encapsulates the current state of the world. If we're building a robot, the state might be the image of what the robot is currently seeing. This state might be noisy, it might be incomplete, but it provides some kind of signal to the agent about what is happening in the world at this moment in time. After the agent receives this state, it has some understanding of what is around it in the world at this moment, and it communicates back to the environment by performing an action. Going with this running example of a robot moving around the world, the action might be the direction the agent decides to move in at each point in time. So the environment tells the agent what's going on, and the agent decides to do something, which modifies the environment in some way. Then, after the environment sends the state and the agent sends the action, the environment sends back a reward, which tells us how well the agent is doing at this moment in time. This reward is really a general concept; it might be any number of things. If this little robot is delivering things, the reward might be how much money the robot has made at this point in time: maybe the robot rolls around the world delivering coffees to people, and the reward signal is that the robot gets paid when a coffee gets delivered.

So those are the three mechanisms of communication between the environment and the agent: the environment tells the agent what's going on, which is the state; the agent does something, which is the action; and then the agent gets a reward, which tells it how well it is doing instantaneously at this moment in time. Of course, this would be kind of boring if it all happened in a single time step, so reinforcement learning allows this whole thing to unroll over time; it's a long-term interaction between the environment and the agent. In particular, after the agent makes its action, that action actually changes the environment in some way, and observing the state and the reward gives the agent a learning signal it can use to update its internal model of the world.
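The state, action, reward loop just described can be sketched as a toy interaction. This environment is entirely hypothetical, invented here just to show the plumbing: the agent lives on a one-dimensional line and earns reward for staying near position 0.

```python
# A toy environment/agent loop matching the state -> action -> reward
# cycle: the environment sends a state, the agent replies with an action,
# and the environment returns a reward and updates itself.
class LineEnv:
    def __init__(self):
        self.pos = 5
    def state(self):
        return self.pos
    def step(self, action):                     # action in {-1, +1}
        self.pos += action                      # action changes the environment
        return 1 if abs(self.pos) < 5 else 0    # reward signal

def agent_policy(state):
    return -1 if state > 0 else 1               # move toward position 0

env = LineEnv()
total_reward = 0
for t in range(10):
    s = env.state()        # environment sends the state
    a = agent_policy(s)    # agent responds with an action
    r = env.step(a)        # environment returns a reward, then updates
    total_reward += r
print(total_reward)
```

A real agent would also use the reward to update its policy at each step; here the policy is fixed, since the point is only the shape of the interaction loop.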
As well as updating its model of the world, the agent updates its internal model of what it wants to do in order to maximize its rewards in that world. So after this first round of state, action, and reward, the environment updates as a result of the action, the agent updates as a result of the learning signal, and the whole thing repeats: in the second time step the environment sends over a new state, the agent sends over a new action, the environment sends over a new reward, and both transition forward in time. This can continue for some very long period, with the environment and the agent interacting over a very large number of time steps. Is this formalism clear, what's going on between the environment and the agent in the reinforcement learning problem? Okay, good.

Here are a couple of examples to formalize this intuition. One classical problem that people solve with reinforcement learning is the cart-pole problem. Imagine a cart that can move back and forth on a one-dimensional track, with a pole on top that can pivot back and forth, and the objective is to move the cart in such a way that the pole stays balanced on top of it. That's the high-level objective of what the agent is trying to do, but we need to formalize it through states, actions, and rewards. The state is the current state of the system: something like the exact angle of the pole, the exact x position of the cart, and the velocities of those quantities, giving all the physical variables that describe the exact physics of the situation. The action the agent can apply at each time step is the horizontal force it wants to apply to the cart, moving left or right. And the reward signal at each time step might be a 1 if the pole is balanced and a 0 if the pole has fallen down. So this is our first example of formalizing an agent interacting with an environment through states, actions, and rewards.

Another example of a reinforcement learning problem is robot locomotion: we've got a robot that wants to learn how to walk through some environment. The state might again be all the physical variables describing the robot, the positions and angles of its joints and the velocities with which they're moving at this point in time. The action the agent can choose is applying muscular force to each of its joints: the torques it chooses to apply. And the reward needs to encapsulate maybe two notions. One is that the agent should not fall over, so maybe it gets zero reward if the robot falls and one if it's standing; but we might also want to reward the robot based on how far forward it has moved. So sometimes your reward will encapsulate multiple notions of success; in this case, both not falling over and actually moving forward in this simulated world.

Maybe another example would be learning to play Atari games, these old-school video games, where the high-level objective is to get the highest score in each game. The state the agent observes at each time step is the pixels on the screen, the action is pushing some combination of buttons on the controller, and the reward is the instantaneous increase or decrease in score that the agent receives at every time step of the game. This example is interesting because the state only gives partial information about the environment: many of these Atari games depend on some source of randomness. After you blow up a spaceship, other spaceships will fly in, but you don't know exactly which spaceship will appear next or where it will appear, and the pixels of the current frame might not give you enough information to fully predict what happens in the next time step of the game. So unlike the previous examples, the state the agent observes might be incomplete and might not allow it to perfectly predict the future.

Those are all examples of single-player settings, where one agent interacts with an environment and needs to learn to succeed in it. We can also have interactive games, where agents learn to compete against other agents. A very famous example of a task that has been solved with reinforcement learning in this way is learning to play competitive board games like Go. Here the objective is to win the game, the state is now the positions
of all the pieces on the board um the action at each time step is whether or not uh is exactly where the agent wants to place its next piece when playing the game of go and now the reward in this case may be something very uh long-reaching right so then the reward maybe maybe in this example of playing go the agent only gets a reward on the very last time step of the game where there are all the time steps when he's placing pieces and interacting with the with the opponent then it always gets rewards of zero maybe during all the intermediate terms turns of the game but once the game is over then on the very final term of the game then uh the agent gets a reward of one if they won if they if they won the game and beat their opponent and they get a reward of zero on that last turn if they if they lost their opponent so this gives us some sense that the rewards might actually uh capture some very non-local information about how well the agent is doing the rewards might be very sparse they might only and they might and how what what causes the rewards that we get might be affected by actions that have that happened very very far in the past um so those are all some interesting um facets of this particular example of plane go okay so then it's kind of interesting to contrast this notion of reinforcement learning with this very familiar notion of supervised learning that we've seen throughout the semester so um we've now seen this kind of abstract picture of reinforcement learning where we've got an environment and an agent and they communicate back and forth through states and actions and rewards and then transition over time but we can actually we can actually draw a quite similar picture about supervised learning too right so in supervised learning if we kind of draw an analogy with supervised learning then the environment is now a data set and the agent is now the model that we're learning and then these things these these the data set and the model also kind of 
interact over time in the supervised-learning setting. The dataset first gives the model some input x to make a prediction for, which is roughly equivalent to the state in reinforcement learning. The model receives that input and makes some prediction y, which is roughly equivalent to the action the agent makes. Then the dataset responds by giving the model a loss that says how good the prediction was, which is roughly equivalent to the reward signal. And similarly, in supervised learning these things unroll over time: the model gets inputs, makes predictions, gets a loss, learns from that loss signal, and the dataset moves on to the next example. If you flip back and forth between these two pictures, it can feel like reinforcement learning is not that different from supervised learning, but that would be a very incorrect assertion. There are a few big, fundamental reasons why reinforcement learning is fundamentally different from, and more challenging than, supervised learning.

The first reason is stochasticity. In reinforcement learning, everything might be noisy: the states we get might be noisy or incomplete information about the scene, the rewards might be noisy or incomplete, and the transitions the environment makes from one time step to the next can be some unknown, non-deterministic function. What do I mean by the reward signal being random? In the supervised-learning picture, when we have an input and make a prediction, we always get the same loss; typically the loss function is deterministic. But in reinforcement learning, if we receive a state and make an action, we might get different rewards at different time steps, even for the exact same state and the exact same action, simply because there is some underlying randomness in the problem. Our agent somehow needs to learn to deal with that.

Another big problem in reinforcement learning is the notion of credit assignment. As we saw in the Go example, the reward the agent gets at each time step might not reflect the action it took at that moment in time; the reward at time t+1 might be the result of actions taken very far in the past. Think back to our example of a robot delivering coffee: its reward signal was getting paid when it delivers the coffee, but getting paid was not just the result of the final action of handing the coffee to the person. To achieve that reward, it first had to find a person, take their order, go to the coffee shop, purchase the coffee, drive back, and then hand the coffee over; the reward was a result of all of those complex interactions and choices the agent made over a fairly long period of time. The technical term we use for this is credit assignment: when the agent receives a reward, it does not know what caused it; it does not know whether it was the action it just took or an action it took a year ago that is causing it to receive this reward right now. That is a really big difference from supervised learning, where after you make a prediction, you get a loss right away that tells you exactly how good that instantaneous prediction was; there is not as much need for long-term credit assignment.

A third big problem in reinforcement learning is that everything is non-differentiable. Ultimately the agent wants to learn to perform actions that maximize its reward, and the normal instinct of a deep-learning practitioner is to compute the gradient of the reward with respect to the actions, or the gradient of the reward with respect to the model weights, and perform gradient descent. The problem is that we cannot backpropagate through the world, because we do not have a model of exactly how the world behaves; computing the gradient of the reward with respect to the model's weights would require backpropagating through the real world itself, and that is something we fundamentally do not know how to do. So that is another big challenge: we somehow need to deal with this non-differentiability. The fourth big issue with reinforcement learning compared to supervised learning is a little bit subtle: the notion of non-stationarity, so
that means that in reinforcement learning, the states the agent sees depend on the actions the agent made in previous time steps. As the agent learns to make new actions in the world, it will explore new parts of the world and be exposed to novel situations and novel states. As a result, the data on which the agent is training is a function of how well the agent is doing at this point in time. For example, when the coffee-delivery robot is just starting out, it may only know how to go to a person and hand over a coffee that is right next to them, but as it gets better at the task, people start asking it to fetch coffee from the room next door or from the coffee shop across the street. The agent is seeing these novel states precisely because it has gotten better at the task it is trying to solve, so the data it is being exposed to is a function of how well it has learned to interact in the environment so far. We call this the non-stationarity problem, because the distribution of data the model is training on is not a stationary distribution: that distribution changes over time as the model itself learns to interact better with the environment. This does not happen in supervised learning, where we typically assume a static dataset and at every iteration try to classify one sample from it; the underlying dataset does not change as the model trains.

Actually, in the last lecture we saw an example of another deep-learning model that also suffers from this non-stationarity problem. Does anyone remember what it was? It was the generative adversarial networks from the previous lecture. In a GAN we have a generator and a discriminator; the generator is learning to fool the discriminator, and the discriminator is learning to classify the data coming out of the generator. From the perspective of the discriminator, it is learning on a non-stationary distribution, because the data it trains on is a function of how well the generator is doing, and similarly, the signal the generator uses to learn is a function of how well the discriminator is currently doing. This non-stationarity problem shows up in generative adversarial networks, and I think it is one of the reasons GANs can be difficult to train, but it also shows up in reinforcement learning.

So in reinforcement learning we are in a really bad situation: we have to deal with non-stationarity, non-differentiability, credit assignment, and stochasticity, and these are all really difficult things for a learning algorithm to overcome. As a result, reinforcement learning is a very hard problem, fundamentally much more challenging than a supervised-learning approach. In general, if you find yourself confronted with a problem in the world, and you can find a way to frame it as supervised learning rather than reinforcement learning, your life will typically be much, much easier. Reinforcement learning is more interesting and more general, but because it is so much harder, it is harder to get things to work really well in
reinforcement-learning contexts. Okay, so now that we have this overview of what reinforcement learning is, we can talk about the mathematical formalism we use to describe reinforcement-learning systems: the scary-sounding Markov decision process. This is a mathematical object consisting of a tuple of five things. There is a set S of possible states and a set A of possible actions; both of these might be finite or infinite sets. There is a function R that gives us a distribution over the rewards we could receive for every state-action pair, so it is a parametrized family of reward distributions. There is a transition probability function that tells us how likely the environment is to transition to each possible next state, as a function of the current state and the action we took in that state. And then there is a discount factor gamma, which tells us how the agent should trade off between getting rewards right now and getting rewards in the future.

The reason this is called a Markov decision process, or MDP, is that it has the Markov property: the state at the current time step t completely characterizes what is going to happen in the system. The current state and the action we take in it are sufficient to determine the distribution over the reward we get and the distribution over the next state the environment might transition to. In particular, which state we get to next does not depend on the full history of states we have seen up to this point; it depends only on the immediately preceding state. That property is called the Markov property of a Markov decision process, and it makes all the math a lot easier to deal with.

Okay, so to formalize what the agent is doing: we want to learn an agent that can interact with this environment, where the environment is formalized as a Markov decision process, and we formalize the agent by saying that it learns a policy, usually called pi. The policy pi gives a distribution over the actions the agent will take, conditioned on the state the agent observes at each moment in time. The goal is to find some really good policy, pi star, that maximizes the cumulative discounted sum of rewards over all time. When we were speaking informally, we said the agent wants to learn to get high rewards, but how exactly should the agent trade off between a reward at time step zero and a reward at time step 100? The discount factor gamma tells us exactly how to make that trade-off. It is like an inflation factor: in economics, money now is worth more than money later, and the discount factor gamma is sort of the inflation factor for the environment, saying how much the agent should prefer a reward now versus a reward in the future. If gamma equals one, a reward now is worth the same as a reward in the future; if gamma equals zero, we only care about rewards at the first time step and ignore all rewards after that; and when gamma takes values between zero and one, it interpolates between those two extremes of only caring about the reward right now versus caring about the
future just as much as the present. Okay, so that gives us our formalization of the environment as an MDP, and of the agent as a policy executing in the environment, trying to maximize this discounted sum of rewards. To talk a little more formally about what happens when we run the agent in the environment: at the very first time step, t = 0, the environment samples some initial state s0 from a prior distribution over initial states. Then we loop from t = 0 until whenever we are done. At each time step, the agent first selects an action a_t sampled from the policy pi(a | s_t); recall that s_t is the state the environment gave the agent at the current time step, and the policy pi gives us a distribution over actions conditioned on that state, so the agent samples from this distribution to get the action it performs at this time step. The agent passes that action a_t to the environment; the environment samples a reward for the time step from the reward function R, and then samples the next state s_{t+1} from the transition function, which gives a distribution over states conditioned on both the current state and the action the agent decided to take. The agent is then given the reward and the next state s_{t+1}, and this whole loop goes on, potentially forever. That, more formally, is the loop that happens when an agent interacts with an environment.
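As a minimal sketch, the interaction loop just described can be written out in a few lines of Python. The toy environment here (two states, two actions, the particular reward and transition rules) is entirely made up for illustration; only the shape of the loop reflects the formalism from the lecture.

```python
import random

# Hypothetical toy MDP, invented only to illustrate the agent-environment loop.
STATES = [0, 1]
ACTIONS = [0, 1]
GAMMA = 0.9

def sample_initial_state():
    # environment samples s0 from its initial-state distribution
    return random.choice(STATES)

def sample_transition(s, a):
    # s' ~ P(. | s, a); deterministic here for simplicity
    return (s + a) % 2

def sample_reward(s, a):
    # r ~ R(. | s, a); reward 1 for taking action 1 in state 1, else 0
    return 1.0 if (s == 1 and a == 1) else 0.0

def policy(s):
    # a ~ pi(a | s); a uniformly random policy, standing in for a learned one
    return random.choice(ACTIONS)

def rollout(num_steps):
    """Run the agent-environment loop and return the discounted return."""
    s = sample_initial_state()
    total, discount = 0.0, 1.0
    for t in range(num_steps):
        a = policy(s)                # agent samples a_t ~ pi(a | s_t)
        r = sample_reward(s, a)      # environment samples r_t
        s = sample_transition(s, a)  # environment samples s_{t+1}
        total += discount * r        # accumulate gamma^t * r_t
        discount *= GAMMA
    return total

print(rollout(100))
```

Since gamma is less than one, the discounted return of any rollout is bounded by 1 / (1 - gamma), which is one reason the discount factor makes the infinite-horizon objective well defined.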
Okay, so as a classical example of a Markov decision process that we can specify more formally, people often talk about so-called grid worlds. Here we imagine a spatial grid of positions over which the agent can move, and the agent can be in any one of these positions, so there are twelve states giving the position of the agent. The actions the agent can take move it one direction at a time: it can move left, move right, move up, or move down, and that causes a deterministic state transition based on where it is right now and the action it took. In this particular grid world, we want the agent to learn to go quickly from wherever it starts to one of the special star states, so at every time step the agent gets a negative reward if it is not in one of the goal states, and a zero or positive reward when it does happen to be in one of the star goal states.

A policy tells us what action the agent takes in every state of the environment. On the left we show a bad policy: no matter which state it is in, the agent flips a coin and goes either up or down with fifty-fifty probability. You can imagine this is probably not a very good policy, because in many cases the agent will not reach either of the goal states very efficiently. On the right we have the optimal policy for this particular Markov decision process. For instance, if the agent is directly underneath a goal state, it always moves up with probability one hundred percent, which puts it directly into the goal state; and in certain states where the agent is roughly equidistant from the two goal states, the optimal policy flips a coin and moves in the two or three equally good directions with equal probability. If an agent executes this optimal policy in this simple grid-world environment, it maximizes its expected sum of rewards; this is the best the agent can possibly do in this particular environment.

So now we have seen this idea of an optimal policy, the best thing the agent can possibly do in a system, and that is the goal of the learning process in this reinforcement-learning setting: we want the agent to find the optimal policy pi star that maximizes the discounted sum of rewards. But there is a big problem with trying to maximize this discounted sum of rewards, which we have already touched on: there is a lot of randomness in this situation. The actions we take might be random, and the rewards we get at each time step might also be non-deterministic. The solution is to maximize the expected sum of rewards, because the actual rewards we are going to get are somehow random, so the best we can do is maximize the expected value of the rewards we would achieve by following the policy. So we can define this idea of an optimal policy a little more formally: the optimal policy pi star is the policy that maximizes the expected sum of discounted rewards, where the expectation says that if we execute policy pi in the environment, we will take some actions, visit some states, and collect some rewards.
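A grid world like the one described can be sketched in a few lines of code. The specific grid size, goal positions, and reward values below are assumptions made up for illustration, not the exact numbers from the lecture's slide.

```python
# Minimal grid-world sketch. States are (row, col) cells on an assumed
# 3x4 grid; the agent moves one step at a time and gets -1 per step
# until it reaches one of the (hypothetical) "star" goal states.
ROWS, COLS = 3, 4
GOALS = {(0, 0), (0, 3)}
ACTIONS = {
    "up": (-1, 0), "down": (1, 0),
    "left": (0, -1), "right": (0, 1),
}

def step(state, action):
    """Deterministic transition: move if in bounds, else stay put.
    Returns (next_state, reward)."""
    if state in GOALS:
        return state, 0.0               # no penalty once at a goal
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    next_state = (r, c) if 0 <= r < ROWS and 0 <= c < COLS else state
    reward = 0.0 if next_state in GOALS else -1.0
    return next_state, reward

# One trajectory under a hand-written action sequence that happens to
# reach the (0, 0) goal from (2, 2).
s, total = (2, 2), 0.0
for a in ["up", "up", "left", "left"]:
    s, r = step(s, a)
    total += r
print(s, total)   # prints (0, 0) -3.0
```

The per-step penalty of -1 is what pushes a reward-maximizing agent toward the shortest path to a goal, which is exactly the behavior the optimal policy on the slide exhibits.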
The states we visit, the actions we take, and the rewards we collect are all random, but they all depend on the policy we are executing, so this expectation just averages out all of the randomness: it is the expected value of the sum of rewards if we use a particular policy pi when operating in the environment, and pi star is just the best possible policy.

Okay, so next we need to define a couple more bits of machinery in order to actually provide algorithms for learning optimal policies. Our whole goal in reinforcement learning is to somehow find this optimal policy. Suppose we have some policy, maybe not optimal, call it pi; executing that policy in the environment gives us some trajectory, a sequence of states and actions along the course of the run, and what we want to do is somehow measure how well we are doing in different states. One way to quantify this is with the value function, V of pi. The value function depends on a policy pi and takes as input a state s, and it tells us: if we were to execute policy pi starting from state s, what is the expected reward we would collect over the rest of time? So the value function is really telling us how good the state s is under the policy pi. If the value of a state is high, then operating with policy pi from that state will collect a lot of reward in the future; if the value is low, we will get very little reward going forward under that policy. This is quite intuitive, and it seems like a reasonable quantity to measure.
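Written out in the notation just introduced, the objective and the value function are (this is the standard way these objects are written, not a formula quoted verbatim from the slide):

```latex
% Optimal policy: maximize the expected discounted return
\pi^* = \arg\max_{\pi} \; \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^t r_t \,\middle|\, \pi\right]

% Value function: expected return starting from state s, following policy \pi
V^{\pi}(s) = \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^t r_t \,\middle|\, s_0 = s,\; \pi\right]
```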
So the value function measures how good each state in the environment is, as a function of the policy we are trying to execute, and that is something we might want to track during learning. But it turns out that even though this value function is quite an intuitive construct, we often want to use a slightly different, slightly more general version instead, which ends up being a lot more mathematically convenient for learning algorithms. This modified value function is called a Q function. The Q function depends on a policy pi as well as a state s and an action a, and it tells us: if we start in state s, then take action a, and after that operate according to policy pi, what is the expected sum of future rewards we will collect over the rest of time? So the value function tells us how good a state is if we start there and execute the policy, and the Q function tells us how good an initial state-action pair is, assuming we follow the policy for the rest of time.

Okay, are we clear up to this point? I know there is a lot of notation to introduce all at once. Any questions on these Q functions and value functions? Yes? Well, the Q function tells us, for any state, how much reward we will get if we happen to start in that state. In the grid world, if you started directly on the goal state, you might expect the Q function to be very large, because we are going to collect a lot of reward, but if we started farther away from the goal state, the total amount of reward we are going to accumulate will be less. And the environment chooses which state we start at; we do not get to choose the initial state. The Q function is just measuring, for any state we happen to find ourselves in, how much reward we can expect to get from there. Yes? So the question is that these two functions, the value function and the Q function, seem to measure quite similar things, and in fact you can write recurrence relations expressing one in terms of the other. There are algorithms that depend only on value functions and algorithms that depend only on Q functions; I think it is a little more intuitive to start with the value function, but in practice we will find that the algorithms we use more often work with Q functions. You are right, though, that they measure similar things: one tells us how much reward we will get from a state, and the other tells us, given a state and an action, how much reward we will get after that.

Okay, so once we have this value function and this Q function, our goal is to define the optimal Q function: Q star of a state and an action tells us the Q function of the optimal policy. That is, if we found the best possible policy that achieved the best possible rewards, what would the Q function of that policy be? Q star says: assume we start in state s, perform action a in state s, and after that do the best possible thing we could possibly do in this environment; then how much reward will we get for the rest of time? That is the optimal Q function, Q star. What is interesting is that there is a very simple relationship between Q star, the optimal Q function, and pi star, the optimal policy: Q star actually encodes pi star. The optimal policy tells us, for every state, the best possible action to take in that state to maximize our rewards for the rest of time, and the Q function tells us, for every state and action, the max possible reward if we took that action in that state, so we can write down the optimal policy pi star just by checking all of the actions a prime against the optimal Q function. One reason to define the Q function this way is that it lets us not really worry about policy functions anymore: by taking both the state and the action as input, the Q function is one function that encompasses both how good states are and how good actions are in those states. With this formulation we only need to worry about one thing, the Q function, whereas in other formulations you might need to learn two functions, a value function that depends on states and a policy function that gives you actions depending on states. That is why using Q functions in this way is so convenient. Okay, and now there is actually a kind of amazing recurrence relation on this optimal Q function, called the Bellman equation.
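In symbols, the Q function, the optimal Q function, and the policy it encodes look like this (again standard notation, not text copied from the slide):

```latex
% Q function: expected return after taking action a in state s, then following \pi
Q^{\pi}(s, a) = \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^t r_t \,\middle|\, s_0 = s,\; a_0 = a,\; \pi\right]

% Optimal Q function, and the optimal policy it encodes
Q^*(s, a) = \max_{\pi} Q^{\pi}(s, a), \qquad
\pi^*(s) = \arg\max_{a'} Q^*(s, a')
```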
The intuition behind this Bellman equation is the following. The optimal Q function tells us the total reward we will get if we start in state s, take action a, and act optimally after that. If we start in state s and take action a, then we get some immediate reward r, at that very time step, which depends only on that state s and that action a. After that initial time step, acting optimally means behaving according to the optimal policy pi star; but we know that pi star is encoded by the optimal Q function Q star. That gives us a very nice recurrence relation: we can define the optimal Q function in terms of the reward we get at the very next time step, plus the Q star value of the next state we land in, recursing over the Q function for the rest of time. So the Bellman equation says that right away we get some reward, and after that very first action we behave optimally, and the Q star function already tells us what we would get from that next state and next action. This beautiful recurrence relation, which the optimal Q function must satisfy, lets us take that infinite sum and turn it into something more tractable that we can work with. Okay, so one way we can try to solve this reinforcement-learning problem is to actually find an optimal Q function, and it turns out that if we find any Q function that satisfies the Bellman equation, it must be the optimal Q function. This is an amazing fact that we will have to state without proof for the purposes of this lecture.
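The recurrence just described is usually written as follows, with the expectation taken over the random reward and the random next state (standard notation for the Bellman optimality equation):

```latex
Q^*(s, a) = \mathbb{E}_{r,\, s'}\!\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\right],
\qquad r \sim R(s, a), \;\; s' \sim P(s, a)
```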
Once we have the optimal Q function, we can use it to perform the optimal policy, so what we want to do is find some function Q that satisfies the Bellman equation. One thing we can do is use the Bellman equation as an iterative update rule. We start with some random Q function, and at every step we use the Bellman equation to update it: we start with a random Q zero, compute an updated Q one by applying one recursion of the Bellman rule, compute the next Q function Q two by applying the next recursion step to the previous Q function, and iterate this process over and over. Another kind of amazing fact that we need to state without proof is that, under certain assumptions, this iteration of using the Bellman equation to update the Q function will actually cause the Q function to converge to the optimal Q function. So this is a real algorithm for reinforcement learning: we can write down a random Q function, apply the Bellman equation to perform iterative updates, and that will converge to the optimal Q function; once we have the optimal Q function, we are good to go, we have the optimal policy. Yes? Well, it does not need to be strongly connected, and that actually points to the problem here: for every iteration of this value-iteration update, we need to take an expectation over all possible rewards and all possible next states, and we need to do that expectation for every possible state and every possible action, so each iteration already touches all of the states in this formulation.
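This iterative procedure can be sketched concretely on a tiny made-up MDP; the two-state environment below is an assumption chosen so the fixed point is easy to check by hand, and the update inside the loop is exactly the Bellman recursion from above (here with deterministic rewards and transitions, so the expectation is trivial).

```python
# Q-value iteration on a tiny, made-up deterministic MDP.
GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]

def next_state(s, a):      # deterministic transition P(s' | s, a)
    return (s + a) % 2

def reward(s, a):          # deterministic reward R(s, a)
    return 1.0 if (s == 1 and a == 1) else 0.0

# Start from an all-zero Q function and repeatedly apply the Bellman
# update: Q(s, a) <- r(s, a) + gamma * max_a' Q(s', a')
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(200):
    Q = {
        (s, a): reward(s, a)
                + GAMMA * max(Q[(next_state(s, a), a2)] for a2 in ACTIONS)
        for s in STATES for a in ACTIONS
    }

# Greedy policy encoded by the (near-)optimal Q function
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

In this toy MDP the best strategy is to always take action 1, bouncing between the two states and collecting a reward of 1 every other step, and the iteration converges to Q*(1, 1) = 1 / (1 - gamma^2), which you can verify by substituting into the Bellman recurrence.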
perform a computation for every state for every action for every for every state that we could get to after performing that action so there's no notion of strongly we don't need to a strongly connected thing because it's already touching all the states in this in this formulation but that's actually brings us to the problem with this with this update rule is that we need to keep track we need to perform some explicit computation for every state and for every action and then for every state and every action we need to do something for every state that we might get to after performing that action in that state so this works fine if our states are small and finite and the number of actions we can perform in each state are small and finite but if the states if the state space is large if the action space is large or if either of them are infinite then we cannot actually perform this computation in any kind of tractable way so then the solution is that now now finally on slide 44 neural networks enter onto the scene so then the idea is that we'll train a neural network to approximate this uh this q function and then we will use the bellman equation to provide a loss function that we can use to train this neural network so then we've got this bellman equation and now what we want to do is train a neural network with state with uh with parameters theta that will in this neural network will input a state or some representation of a state input an action input the weights of the network and then tell us what is the value of this q star for that particular state action pair and then we can use the bellman equation to tell us kind of to give us kind of a loss function to train this neural network so from the bellman equation we know that we know that if the network was doing its job properly then the network outputs should satisfy the bellman equation so we can use now we can perform we can use the bellman equation to give some approximate target for a particular state for a 
particular action. We can sample a bunch of potential next states and potential rewards to give us a target y for what the network should predict based on the current state and action, and then use that target to train the network. So the current network output is Q(s, a; theta), and we want it to hit the target y(s, a; theta), which is computed using this Bellman update. The loss function we use to train the network is then just the squared difference between the current output of the network and what the Bellman equation says it should be outputting. That is just a loss function we can perform gradient descent on, and we can use it to train a neural network that approximates the optimal Q function. Hopefully, after we train this thing for a long time, the network converges to something approximating Q*, and then we can form a policy by just taking the argmax action over the Q values the network predicts. But there is a subtle problem with this approach: non-stationarity. The network inputs the state and the action, but the target it is supposed to predict depends on the current outputs of the network itself. That means that as the network learns, the targets it is expected to predict from different state-action pairs change over time, and that's the non-stationarity problem rearing its head again. There's another big problem here, which is how we actually choose the data used to train this model; there are a lot of sampling choices to make, and that's too complicated to talk about today. But then there's a lot of
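(As a concrete aside, the target-and-loss construction just described can be sketched as follows. The "network" here is just a lookup table so the snippet runs; in practice Q(s, a; theta) is a deep net, and all names are hypothetical.)

```python
def q_values(state, theta):
    # Stand-in for a neural network Q(s, .; theta): returns one value
    # per action. Here theta is literally a table of Q-values.
    return theta[state]

def dqn_loss(theta, transition, gamma=0.99):
    s, a, r, s_next, done = transition
    # Bellman target y = r + gamma * max_a' Q(s', a'; theta).
    # Note the target is computed from the network's own current
    # outputs -- the source of the non-stationarity discussed above.
    y = r if done else r + gamma * max(q_values(s_next, theta))
    q_sa = q_values(s, theta)[a]
    return (q_sa - y) ** 2        # squared Bellman error

theta = {0: [0.2, 0.0], 1: [1.0, 0.0]}
print(dqn_loss(theta, (0, 0, 0.5, 1, False)))  # (0.2 - (0.5 + 0.99*1.0))**2
```

In a real implementation the loss would be averaged over a mini-batch of sampled transitions and differentiated with respect to theta.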
decisions you need to make about exactly which state-action pairs you sample for training and how you form mini-batches, and that's a big, hairy problem you need to worry about in practice. By the way, this is called deep Q-learning, because we're using a deep neural network to approximate a Q function; there's also shallow Q-learning, where you use simpler function approximators to learn these Q functions. One case study where deep Q-learning has been very effective is the task of playing Atari games. Here, the objective is to observe the game state and predict what action to take to maximize the score in the game, and this was actually solved using deep Q-learning. The network is a convolutional neural network whose input is the last four frames that were shown in the game; those images are fed through some convolution layers and some fully connected layers, and at the end there is an output for every potential action, giving the Q values for each of the actions we could take from the state we passed in. You can imagine training this up using the Bellman-equation loss function we saw on the previous slide, and it actually works pretty well. Here's an example: this was a paper from DeepMind a couple of years ago that was fairly successful at using deep Q-learning to learn to play Atari games. It performs exactly the deep Q-learning algorithm we just talked about, and it's learning to play this Breakout game in Atari. At the beginning of training it's not doing so well, because we started with a random network: it's hitting the ball sometimes, but it's missing a lot of the time, and it's not very smart. When you start with a random neural network it performs pretty badly at first, but that's normal. After we train a bit longer, the network has gotten better and can actually hit the ball, so that's pretty exciting. And of course the network has no notion of what the paddle is, what the ball is, or what the rules of the game are; all it sees are these images from the screen and how much the score is incrementing, and it needs to figure out for itself which actions to take. The network actually gets really, really good at this game; it certainly plays better than I do. Eventually it discovers some really interesting strategies for solving Breakout: so far it hasn't missed, and now, oh boy. So we learned that it's able to discover pretty complex strategies in pretty complex environments even though it has no explicit knowledge of how the game works; all it does is receive states, which are images, and rewards, which reflect how well it's doing, and we train a deep network using this Q-learning formulation. So that works pretty well; that's the notion of Q-learning. Now, the Q function tells us, for a state and an action, the total reward we're going to get in the future. For some problems that makes sense, but for other problems that might be a very difficult function to approximate, and for some problems it might be better to
directly learn a mapping from states to actions. Imagine picking up a bottle: I just want to move my hand until I touch the bottle; once I touch the bottle, I want to close my hand; and once it's closed, I want to pick it up. That's a simple policy in which my actions are conditioned on the states I observe at every moment in time, and sometimes it is better to learn a neural network that parameterizes the policy directly, rather than indirectly through this Q function. That gives us the second category of deep reinforcement learning algorithms: policy gradient algorithms. In a policy gradient algorithm, we learn a neural network that inputs the state and outputs a distribution over the actions we should take in that state, so it directly parameterizes the optimal policy. The objective is to train the policy to maximize the expected sum of future rewards. We can write down an objective function that takes the weights of the neural network as input and gives the expected reward we would achieve by executing the policy encoded in that network in the environment, and then the plan is simply gradient ascent: compute the gradient of this objective J with respect to the parameters theta, and take gradient ascent steps on the network weights. The problem with this, of course, is non-differentiability: in principle we'd like to just perform gradient ascent on the reward directly, but we can't, because we would need to compute gradients through the environment. So that's a
big problem. To solve it, we need to do some tricky math. Let's take a slight generalization of the problem and write our cost function in the following way: J(theta) is the expectation, over x sampled according to some probability distribution p_theta, of a function f(x). You can think of x as the trajectory of states, actions, and rewards we would get by executing the policy; p_theta is the distribution over those trajectories implied by the policy; and f(x) is the reward we get after observing the trajectory x. Now what we want is the derivative of J with respect to theta. We can expand the expectation into an integral: the integral over x of p_theta(x) times f(x). Then we can push the derivative inside the integral (assuming all these functions are well behaved, this should work), and since f(x) does not depend on theta, we can pull f(x) out of the derivative. That leaves a term inside the integral which is the derivative of p_theta(x) with respect to theta, and that's something we'd like to get rid of, because p_theta involves the environment, and the integral runs over all possible trajectories, so we can't actually evaluate it in practice. We'd like to massage this equation into something we can work with. So let's do a little computation on the side: suppose, for some crazy reason, we decided to take the derivative with respect to theta of log p_theta(x). Why would we do that? I don't know,
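(As a compact reference, here is the identity this side computation will establish, and where it leads, written in the same symbols as the surrounding derivation; this is the standard log-derivative trick.)

```latex
% Log-derivative identity:
\frac{\partial}{\partial\theta}\log p_\theta(x)
  = \frac{1}{p_\theta(x)}\,\frac{\partial}{\partial\theta}p_\theta(x)
\quad\Longrightarrow\quad
\frac{\partial}{\partial\theta}p_\theta(x)
  = p_\theta(x)\,\frac{\partial}{\partial\theta}\log p_\theta(x)

% Substituting into the integral turns it back into an expectation:
\frac{\partial J}{\partial\theta}
  = \int f(x)\,\frac{\partial}{\partial\theta}p_\theta(x)\,dx
  = \mathbb{E}_{x\sim p_\theta}\!\left[ f(x)\,
      \frac{\partial}{\partial\theta}\log p_\theta(x) \right]
```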
but if we do happen to make that decision, then we see (remembering that the derivative of log of something is one over the something, times the derivative of the something) that this derivative is 1/p_theta(x) times the derivative with respect to theta of p_theta(x). And, oh boy, there is the term we wanted to get rid of in the previous equation. So we can rearrange and write d/d theta of p_theta(x) in this other form, just by multiplying the p_theta(x) over, and now we can sub out the blue term for the red term in the previous integral. That gives us another expression, and this one is interesting: it is an integral over x in which one of the factors is p_theta(x), which means the integral is itself an expectation. So we can rewrite it as the expectation, over x sampled according to p_theta, of the reward times the derivative of the log probability of the trajectory. This is good: we've managed to push the derivative inside the expectation and rewrite the result as an expectation again, and that expectation can be approximated by sampling a finite number of trajectories from the policy. But we still need to deal with this d/d theta log p_theta term. Using the definition of the Markov decision process, we can write out the probability of observing a trajectory x, and its log probability decomposes into two sums. One is a sum of log transition probabilities; those are properties of the environment that we don't get to observe, so that's bad, we can't compute them. The other is a sum of the log action probabilities that our model assigns, and that's something we can actually compute, because the model is exactly what we're learning. And when we take the derivative of this thing with respect to theta, the transition term does not depend on theta, so it goes away. That's good: the derivative of the log probability of a trajectory depends only on the action probabilities of our model, which we can compute. So now we pull up our other expression from before and put these pieces together to finally get an actual expression for the derivative of the cost function with respect to the model parameters, in which every term is something we can actually evaluate, which is finally good. The expectation means we're averaging over trajectories x sampled by following the policy in the environment, so we can get samples by just letting the policy run in the environment and collecting the trajectories. f(x) is the reward we observe for each trajectory, so as the policy plays out we observe those rewards too. And the remaining term is the gradient of the log action probabilities predicted by our model with respect to the model weights, which we can compute using backpropagation through the model pi, which we're going
to represent as a normal neural network. That gives us a very concrete algorithm for using this policy gradient approach to learn a policy for reinforcement learning. First, we initialize the weights of our policy network to some random value theta. Second, we run the policy pi_theta in the environment for some number of time steps to collect data: trajectories x and the rewards of those trajectories f(x). Third, once we've collected all that data, we plug those terms into the big expectation to compute an approximation to the derivative of the cost function with respect to the model weights. Fourth, we perform a gradient ascent step on the model weights, and then we go back to step two and loop, over and over again. This gives us a very different approach to learning in a reinforcement learning setting. Now, this equation looks kind of crazy, right? It's an expectation, and it's hard to see at a glance what's going on, but I think it's actually fairly intuitive when you think about it. The interpretation is this: we sample some trajectories x and get some rewards f(x) for those trajectories. When the reward f(x) is high, all of the actions we took in that trajectory should be made more likely; we're assuming everything we did on a high-reward trajectory was good, and we tell the model to do those things more. And similarly, when we run a trajectory x and get a low reward, then we're going to assume that everything
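(As an aside, the four steps above can be sketched end to end on a deliberately tiny problem. Everything here is invented for illustration: the one-step "environment" where action 1 pays reward 1, the bare logit parameters standing in for a network, and the sample counts.)

```python
import math, random
random.seed(0)

def softmax(logits):
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# 1. Initialize policy weights (here: just two logits over two actions).
theta = [0.0, 0.0]
lr = 0.1

for step in range(500):
    probs = softmax(theta)
    grad = [0.0, 0.0]
    # 2. Run the policy to collect trajectories x and rewards f(x).
    #    Our toy "trajectory" is a single action; action 1 pays reward 1.
    for _ in range(10):
        a = 0 if random.random() < probs[0] else 1
        f_x = 1.0 if a == 1 else 0.0
        # 3. Accumulate f(x) * grad_theta log pi(a); for softmax logits,
        #    d/d theta_k of log pi(a) = 1[k == a] - pi(k).
        for k in range(2):
            grad[k] += f_x * ((1.0 if k == a else 0.0) - probs[k]) / 10
    # 4. Gradient ASCENT step on the policy weights; then loop.
    theta = [t + lr * g for t, g in zip(theta, grad)]

print(softmax(theta))  # probability of the rewarding action approaches 1
```

Note that the gradient only pushes up actions taken on rewarded samples, which is exactly the "make everything on a high-reward trajectory more likely" intuition described here.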
the model did in that trajectory was bad, and all of the actions the model took along it should be made less likely. That's roughly what this policy gradient method is doing, and that's the rough intuition behind it. Yeah, question? "How do you prevent the model from taking the greedy option every time?" I don't know; it's actually very, very difficult, because there's this credit assignment problem. With a very long trajectory, the model doesn't really know which action was responsible for which reward. We're just saying that if the entire trajectory had a high reward then all of the actions were good, and if it had a low reward then all of the actions were bad, and hopefully that averages out if we get enough data and see enough trajectories. That's actually a big downside of these policy gradient methods: they tend to require a lot of data, because in order to tease out which actions were actually good and which were actually bad, we need to sample a lot of trajectories. So those are two different formulations for actually doing reinforcement learning in practice. They can definitely be made better; a common improvement to policy gradients is to add a thing called a baseline, which we won't talk about. But these are really just the beginning; they're fairly simple, straightforward algorithms, and there's a whole wide world of more interesting algorithms that are state of the art today. If you took a whole semester-long class on this stuff, I think you would cover a lot of them; instead, we'll just give a brief flavor. Another approach is this notion of actor-critic. Here, in actor-
critic, we actually train two networks: one is the actor, which predicts the actions we should take given states, and the other is the critic, which tells us how good those state-action pairs are. This looks like a combination of the policy gradient method and the Q-learning method, because we've got one network telling us which actions to take and another telling us how good our state-action pairs are. A whole different approach to reinforcement learning is this idea of model-based reinforcement learning. In all the algorithms we've talked about so far, the network has not explicitly tried to model the state transitions of the environment; it has never explicitly modeled how the environment will change in response to its actions. Another category of reinforcement learning methods attempts to learn a model of the world: based on our interactions with the world, we learn a model that tells us how the world is going to change in response to our actions, and if you've got a really good differentiable model of the world, you can perform some kind of planning through that learned model. That's a very different flavor of reinforcement learning algorithm. Another option is to just do supervised learning: say you want an agent to interact with an environment, then collect data of people who are good at interacting with that environment, and train a supervised learning model to do what those experts did. That's the notion of imitation learning. There's also inverse reinforcement learning, where we collect data of what expert agents did in the environment, try to infer what reward function those experts were optimizing, and then optimize our own model against that inferred reward function; that's a much more involved idea. You can also use adversarial learning for reinforcement learning: given a set of trajectories and actions produced by experts, train a discriminator that tries to say whether a trajectory was generated by our network or by an expert, and then learn to fool the discriminator in order to produce good trajectories in the environment. These just give you a very brief flavor of the wide variety of methods people use to attack reinforcement learning problems. A case study where this has been really successful is the task of using reinforcement learning algorithms to learn to play board games. This was a line of work from folks at Google DeepMind. Starting back in January 2016, they built a system called AlphaGo that combined a lot of these different ideas from reinforcement learning, trained on a lot of data, and actually learned to play the game of Go better than any human expert. At the time, they beat the very famous Go champion Lee Sedol, who had been something like an 18-time world champion, four games to one in a match using this AlphaGo system. He was not too happy about that. They followed it up in October 2017 with AlphaGo Zero, which simplified and cleaned things up even further, and they beat Ke Jie, who was at the time the number-one-ranked human player in the world; I assume he was not very happy either. Then, in December 2018, they generalized this approach even further, to play not just Go but other board games like chess and shogi, using a similar method; that was a year ago. And just last month, in November 2019, there was a new approach called MuZero that actually used this idea of model-based reinforcement learning: they learned a model of how the state transitions and then planned through that learned model in order to do really well at these board games, and they got it to work just as well. Another interesting piece of news around this, from just about two weeks ago, is that Lee Sedol announced his retirement from professional Go, and he said the reason he's retiring is that AI got too good. He said: "With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic effort. Even if I become the number one, there is an entity that cannot be defeated." This is kind of sad, in a way. This guy is brilliant; he worked his whole life to become very, very good at this task of playing Go, and then a machine comes along and these nerds from DeepMind just beat him at the thing he's trained for his whole life. I'd be pretty sad if I were him. But I think it's an interesting development in this history of using reinforcement learning to play games. Pushing this idea forward, people have started to apply it to even more complex games: there's follow-up work from DeepMind on a system called AlphaStar that learns to play StarCraft II at very high levels, apparently, and OpenAI has a system called OpenAI Five that learns to play Dota 2 very well. OpenAI doesn't seem to publish papers anymore; they just
write blog posts about what they do, so unfortunately there's no paper I can cite for that really cool system. OK. So far we've talked about reinforcement learning as a mechanism for learning systems that interact with the world, but I think another really cool application of reinforcement learning ideas is building neural network models that contain non-differentiable components. This is the notion of stochastic computation graphs. As a simple, instructive toy example (and this is actually not a good idea; I don't think anyone should do this in practice), imagine we wanted to build a neural network system doing something like plain image classification, not interacting with any environment at all, and we actually have four networks involved. The first network, shown in gray, inputs the image and makes a classification decision over the other three networks, telling us which of those other components we should actually use to classify this image. We sample from that distribution, maybe pick the green network, feed the image to the green network, and get a classification loss. Then we treat that loss as a reward and use a policy gradient method to update the first network, the one doing the routing, based on the loss of the second. Like I said, this is a stupid toy example that I don't recommend anyone use in practice, but it gives us the freedom to build neural network architectures that do very wild, very crazy, even non-differentiable things, where one part of the system decides which other parts of the system to use for downstream tasks, and we use these ideas from reinforcement learning to train these very complicated models that make complex decisions about how to process the data. A more real example of this in practice goes all the way back to something we saw a few lectures ago: attention. Remember, when we talked about image captioning with attention, we built models that at every time step used a soft mixture of information from different spatial positions in the image. While generating the caption "a bird flying over a body of water", the model focuses its attention on different spatial positions at each time step, but it always does so by taking a soft average, a weighted sum, of the features across all positions in space. There is another version called hard attention, where we want the model to select exactly one region in space to pull features from at every moment in time. It's called hard attention because we're selecting exactly one piece of the image to process at each time step, and you can train it using a reinforcement learning method, because one part of the network outputs a classification decision over which positions in the image to pull features from, and that part of the network can be trained using a policy gradient approach. So this is just a bit of a taste: you can actually use reinforcement learning
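(As a concrete aside on the routing toy example above: a minimal sketch, assuming a "router" that is just a bare logit vector and made-up constant per-branch losses standing in for the classification losses of the three downstream networks.)

```python
import math, random
random.seed(0)

def softmax(logits):
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

router = [0.0, 0.0, 0.0]          # routing distribution over 3 branches
branch_loss = [1.0, 0.2, 0.8]     # stand-in downstream losses per branch
lr = 0.5

for _ in range(200):
    probs = softmax(router)
    grad = [0.0, 0.0, 0.0]
    for _ in range(20):
        # Sample which branch to route the input through.
        r, k, acc = random.random(), 0, probs[0]
        while r > acc and k < 2:
            k += 1
            acc += probs[k]
        reward = -branch_loss[k]  # treat the downstream loss as -reward
        # REINFORCE update through the non-differentiable routing choice.
        for j in range(3):
            grad[j] += reward * ((1.0 if j == k else 0.0) - probs[j]) / 20
    router = [t + lr * g for t, g in zip(router, grad)]

probs = softmax(router)
print(probs)  # mass concentrates on the branch with the lowest loss
```

In a real system the router and the branches would all be networks conditioned on the input, but the sampling-plus-policy-gradient mechanism is the same.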
algorithms to do more than just interact with environments; you can use them to train neural network systems that do more complicated kinds of processing on their data, and I think that's a really powerful idea that can be leveraged to build really interesting network models. So, our summary for today: we had a very fast, one-lecture tour of reinforcement learning; hopefully we didn't lose everyone. The overall idea is that reinforcement learning is a very different paradigm for machine learning, one that allows us to build agents, build systems, that learn to interact with environments over time. And we saw two basic algorithms that can be used to train practical reinforcement learning systems: Q-learning and policy gradients. That's all we have for today. Next time will be our final lecture of the semester: a recap of what we've learned this semester, along with some of my thoughts about where computer vision will be going in the next couple of years. Thank you.
Deep_Learning_for_Computer_Vision
Lecture_16_Detection_and_Segmentation.txt
so welcome back to lecture 16 today we're gonna talk about some more optical section and as well as some different types of segmentation tasks so last lecture recall that we started to talk about all these different types of localization tasks that we can do in computer vision that go beyond this this this image classification problem that assigns single category labels damages and instead tries to localize objects within the input images and last time we really focused on this object detection task where we have to input this a single RGB image and then output a set of bounding boxes for giving all the all the objects that appear in the image as well as a category label for each of those boxes well I think that we went a bit fast through a lot of the concepts in object detection in the last lecture so I wanted to kind of recap a little bit on some of the important most important most salient points from last lecture and one point that actually we forgot to make last lecture is just how important deep learning has been to the task of object detection so this is a graph that shows the progress on object detection for about 2007 up to 2015 and you can see that from about 2007 to 2012 people were using other oh and the y-axis here is the is the performance on this object detection data set called Pascal vo C and the metric here is of course the mean average precision that we talked about in the last lecture and we can see that from about 2007 to 2012 people were using other types of non deep learning methods for object detection and there was some pretty steady progress on this task from about 2007 to about 2010 but from about 2010 to 2012 the progress has sort of plateaued on this object detection problem and then starting in 2013 when people first applied a deep learning to this object detection task then you can see there was a huge jump over all of the previous non deep learning methods and this jump was not a one-time of fact that as we moved on to better and 
better object detection methods with deep learning then gains continued to go up and up and up so by the way these dots that are in that are the the deep commnets detection are the the familiar fast faster and slow are CN n methods that we've talked about last time and each one of them just gave huge improvements over the previous generation and this led to a period from about 2013 to 2016 of just massive massive improvements in object detection that really overcame this plateau that the field had had from about 2010 to about 2012 um you'll notice here that this plot actually ends at 2016 and the reason is that after about 2015 people stopped working on this data set because this Pascale vo see data set was deemed too easy at that point so about so progress in object detection definitely did not stop in 2015 it's just sort of difficult to measure continuous progress in this way because of about that time people switched over to start working on more challenging data sets and the current state of the art on this on this Pascale vo see benchmark is well over it is well over 80% I don't actually know what the current state of the air is because most methods don't even bother to test on this data set anymore since it's deemed to be a fairly easy for object detection so then last time we really focused on this our CNN family of models we saw this slow our CNN that was sort of this first major jump over the non deep learning methods that was fairly slow but give debate but gave fairly good results compared to everything else that had come before in object detection and then we had seen this fast our CNN version that sort of swapped the order of convolution and pooling and late gave some some accuracy improvements but even more importantly gave some big speed gains but I actually wanted to dive in a little bit more detail today on the training procedure for these our CNN style networks because I realized that this is something we glossed over a little bit at the end of 
the last lecture, and after class a lot of students came up and asked questions, so I realized this was just not clearly discussed last time. Walking through it will also serve as a nice recap of how these R-CNN methods work.

Remember how slow R-CNN works: during training we receive a single RGB input image, and at training time we also have access to the ground-truth bounding boxes for that image, as well as the category labels for each of those boxes. We then run some region proposal method on top of the input image, which gives us a set of region proposals — regions that are likely to contain objects. Again, we're treating the region proposal method as a black box: back in the day people used heuristic methods like selective search, which later got replaced by neural network methods as well. And that's basically all we said last time about how boxes are handled during R-CNN training.

But there's actually another really important step here: during each iteration of training, we need to decide whether each region proposal is a positive, a negative, or a neutral proposal, and we do this by comparing the region proposals against the ground-truth boxes. There are a lot of colored boxes in this image now, so let's talk about them explicitly. The bright green boxes are the ground-truth bounding boxes of the three objects we want to detect: the two dogs and the cat. The region proposals are shown in cyan, covering all different parts of the image. Some of the proposals correspond pretty well to ground-truth objects — in particular, there are proposals very close to each of the two dogs and to the cat. But there is also a proposal on the face of one dog that only partially intersects its ground-truth box and is far from perfect, and a proposal in the background covering a piece of a chair, which is totally disjoint from all the objects we want to detect.

Based on this, we categorize each region proposal. A proposal is a positive if it is relatively close to a ground-truth box — those are the blue (or kind of purplish) boxes — and by "very close" we usually mean above some threshold on intersection over union. These are boxes that we want the network to classify as proposals that do indeed contain an object. Some proposals are negative samples that do not contain an object; an example is the red box in the background over the portion of the chair, since it doesn't cover any ground-truth box at all. The way we usually determine negatives is also by a threshold on intersection over union: it would be common to say, for example, that a proposal is negative if it has IoU less than 0.3 with every ground-truth box in the image, where that 0.3 is a hyperparameter you'd need to set via cross-validation. Interestingly, some proposals are in the middle — neither positive nor negative. An example is the cyan proposal over the face of the dog: it partially intersects the dog, so we don't really want to count it as a negative, but it's quite far from a perfect bounding box, so we don't want to count it as a positive either.

So after matching the proposals against the ground truth, we end up with three categories of region proposals: positives that should contain an object, negatives that definitely do not, and neutral boxes somewhere in the middle. When we train, we typically ignore the neutral boxes, and train the CNN to classify the positive proposals as positive and the negative proposals as negative — trying to train on the neutral boxes would likely just confuse the network, since they are neither one nor the other.

Then, to train the network, we crop out the pixels corresponding to all of the positive and negative region proposals, and reshape each crop to some fixed size like 224 × 224, the standard resolution we use in our classification networks. At this point it basically looks like an image classification problem, except that rather than working on whole images, we're training on crops coming out of images. Each cropped region is passed through the convolutional neural network — of course sharing the weights among all of the regions — and for each region we want to predict two things: a category label, and a bounding-box regression transform.
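To make this matching step concrete, here's a small sketch in plain Python. The 0.3 negative threshold matches the example given above; the 0.5 positive threshold is a hypothetical choice I'm adding for illustration — in practice both are hyperparameters.

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_proposal(proposal, gt_boxes, pos_thresh=0.5, neg_thresh=0.3):
    # A proposal is positive if its best IoU with any ground-truth box
    # clears pos_thresh, negative if it is below neg_thresh with all of
    # them, and neutral (ignored during training) otherwise.
    best = max((iou(proposal, gt) for gt in gt_boxes), default=0.0)
    if best >= pos_thresh:
        return "positive"
    if best < neg_thresh:
        return "negative"
    return "neutral"
```

A positive proposal would then inherit the category label of the ground-truth box it overlaps best.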
This regression transform maps from the region proposal to the object bounding box that we should have predicted. For the positive region proposals — the ones that matched up with some ground-truth box — we know their target category label: it's the category label of the ground-truth box they matched with, so for example the two dogs and the cat. A negative region proposal, which did not match any ground-truth box, should be classified as a background region — remember that we add this extra background class to our set of categories when we're doing object detection. The other wrinkle is predicting the bounding box: the region proposals coming out of our proposal method do not perfectly line up with the boxes of the objects we want to predict, and last time we talked about how to parameterize a bounding-box regression that transforms the raw box coordinates into the target output box that the detector should actually emit.

Question: since we're using a black-box region proposal method, how do we assign labels to those proposals? The labels really come from this matching step. You can think of it almost as a bipartite matching: you've got all your region proposals on one side and all your ground-truth objects on the other, and you pair them up based on the intersection over union between the proposals and the ground-truth boxes. The label then comes for free, because each positive region proposal ends up paired with the ground-truth bounding box it has the highest overlap with, and we assign the proposal the category label of that box. That's the part we completely glossed over in last time's lecture — is that a little more clear?

That gives us the category label for each proposal, but there's also the question of what the regression target should be. Because each positive region proposal has been paired with a ground-truth box, we can say exactly what box we should have predicted from that proposal: the regression target for a positive box is the transform that would map the coordinates of the raw region proposal onto the coordinates of the ground-truth box it was matched with. This is a little subtle, because the regression targets for the proposals depend on the outcome of the matching.

Another question: do we do this pairing up before training? If you're using external region proposals, what you'd typically do is run the region proposal method offline on your entire training set and then do the matching offline as well — since the matching is somewhat involved, you can compute all these labels and dump them to disk before you start training. Where it gets tricky is in something like faster R-CNN, where we're learning the region proposals jointly with the rest of the system; then some of this matching between proposals and ground-truth boxes has to happen online. But that's only the
case for something like faster R-CNN, where we're actually learning the region proposals online during training. The other wrinkle here is that for the negative boxes, it doesn't make sense to have any regression target at all: a box that should be classified as background was not paired with any ground-truth bounding box, so for negatives we have no regression loss. The regression loss is applied only to the region proposals that were marked positive during this pairing-up phase. That makes computing your losses a little more complicated than in other applications, because now we have a classification loss for everything, but a regression loss for only some of the inputs to the network. And the fraction of positives and negatives you use per minibatch is yet another hyperparameter you often need to set and tune when working with object detection methods. Hopefully this gives a clearer picture of exactly what happens at training time when you train one of these R-CNN-style networks.

Question? Yes, that's the idea. To repeat it: we're making the assumption that R-CNN is not inventing bounding boxes from scratch — the way R-CNN generates boxes is by refining, or perturbing, the input region proposal boxes a little bit. So during training, we compute what transform we would need to apply to snap the input region proposal onto a ground-truth object bounding box, and that snapping transform becomes the target for the regression loss in R-CNN. At test time, we then assume that the region proposals we see look similar to the ones we saw during training.
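Here is a sketch of one common way to parameterize that snapping transform — scale-invariant center offsets plus log-space size ratios, as in the original R-CNN line of work. The exact parameterization used in this course's assignments may differ; this is just an illustration.

```python
import math

def box_to_ctr(box):
    # (x1, y1, x2, y2) -> (center_x, center_y, width, height)
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def regression_target(proposal, gt):
    # Transform (tx, ty, tw, th) that maps the proposal onto the
    # ground-truth box it was matched with; this is the training target.
    px, py, pw, ph = box_to_ctr(proposal)
    gx, gy, gw, gh = box_to_ctr(gt)
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))

def apply_transform(proposal, t):
    # Inverse operation, used at test time: apply a predicted transform
    # to a raw region proposal to get the final output box.
    px, py, pw, ph = box_to_ctr(proposal)
    tx, ty, tw, th = t
    cx, cy = px + tx * pw, py + ty * ph
    w, h = pw * math.exp(tw), ph * math.exp(th)
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

Note that applying the target transform recovers the ground-truth box exactly, and the all-zeros transform leaves the proposal unchanged — which is why "refine the proposal a little bit" is a natural regime for this parameterization.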
So we get our final output boxes at test time by running the same region proposal method on the image and then applying the predicted per-proposal regression transforms. Of course, this points to one obvious failure case for these systems: what if the region proposals you see at test time are very different from the ones you used during training? Then you would expect the system to fail, because one of the central assumptions of machine learning is that the data we get as input at test time is similar to the data we saw at training time. For these R-CNN-style methods that rely on external region proposals, the proposals really are part of the input to the machine learning model, not part of its output — so it becomes very important that the statistics of the region proposals at test time closely match what they were at training time.

Question: how do you actually validate this? That's exactly why we use the mean average precision metric we talked about last time to evaluate object detection. One way you might imagine evaluating these things is with some accuracy-style metric — say, extract a fixed set of boxes from each image and check whether they match the ground truth — but that's a bad way to evaluate object detection, exactly because of the large number of background boxes. The reason we use mean average precision instead is that it helps factor out the effect of those large numbers of background boxes in a more robust way. That's why we need this more complicated mean average precision metric when we evaluate
object detection methods. Okay, so the story for fast R-CNN training is basically exactly the same. Remember, the main difference between slow R-CNN and fast R-CNN is that we swapped the order of feature extraction and cropping: in slow R-CNN we independently crop the pixels for each region proposal, whereas in fast R-CNN we run the entire input image through the backbone CNN to get high-resolution image features, and then crop out the features corresponding to each proposal from those image features. Other than that, we still have the same procedure of pairing up positives and negatives, and all the training targets for fast R-CNN are exactly the same as they were for slow R-CNN.

Now, faster R-CNN, remember, is a two-stage method: first we have a region proposal network (RPN) that operates on the backbone features and predicts our region proposals, and then a second stage crops the features for those proposals and makes the final classification decisions. One way to think about faster R-CNN is that we have two stages of transforming boxes. In the first stage we have the input anchors — recall that the anchors are a fixed set of boxes, of fixed sizes and aspect ratios, spread over the entire input image — and what the region proposal network does is transform this fixed set of anchor boxes into a set of region proposals. In the second stage, we transform the region proposals into our final output object boxes. The losses we use at each of these two stages in faster R-CNN are basically the exact same types of losses we used in slow and fast R-CNN.

To train the RPN — the part that transforms the fixed set of anchor boxes into region proposals — we need to do the exact same kind of pairing up: for each anchor box, we need to decide whether it should be positive, negative, or neutral. One difference is that rather than working on region proposals coming out of selective search, we're working on a fixed set of anchor boxes that we specified as hyperparameters of the network. The other difference is that the region proposal network makes only a two-way classification for each anchor: we just need to say whether each anchor box should become a positive region proposal or a negative one, without assigning a category label at this stage. Other than that, we use the exact same logic for pairing anchor boxes with ground-truth regions, and the exact same logic for determining the classification labels and the regression targets, in the region proposal network of faster R-CNN.

For the second stage of faster R-CNN, it's exactly the same procedure yet again: we pair up the region proposals coming out of the RPN with the ground-truth boxes in the image. As we discussed a little earlier, this part actually needs to happen online, because the region proposals produced for each image change over the course of training as we jointly train the region proposal network with the second stage. That becomes a somewhat tricky implementation detail as we move from fast to faster R-CNN.
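Since the anchors are just hyperparameters, generating them is straightforward. Here's a sketch of a typical anchor grid — one anchor per feature-map cell per (scale, aspect ratio) pair; the particular scales and ratios in the test are hypothetical example values.

```python
def generate_anchors(feat_h, feat_w, stride, scales, ratios):
    # One anchor per (feature cell, scale, aspect ratio), centered on
    # the cell and written in input-image coordinates (x1, y1, x2, y2).
    # ratio = width / height, and scale**2 is the anchor's area.
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return anchors
```

Each of these anchors would then go through the same IoU-based positive/negative/neutral labeling described above, except with a two-way objectness label instead of a category label.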
This pairing between region proposals and ground-truth boxes actually needs to happen online, but other than that the logic is still the same: we pair up our region proposals with the ground-truth boxes, and that pairing gives us our classification targets and our regression targets for the second stage of faster R-CNN. Hopefully walking through this a little more explicitly helps clear up some of the confusion from last time about exactly how these different networks are trained. Are there any more lingering questions on these R-CNN-style methods? Okay — I'm sure more will come up later.

Now, remember that one of the important new operators we introduced as we moved from slow to fast R-CNN was feature cropping, because in fast R-CNN we swapped the order of convolution and cropping. The goal of feature cropping is to crop the image features into per-region features in a way that is differentiable, so that we can backpropagate through the whole procedure. Last lecture we talked about the RoI Pool operation as one mechanism for cropping the features corresponding to regions. Recall how RoI Pool works: we take our region proposal in the image, project it onto the feature map, and then snap it to the grid cells of the feature map, since the feature map is probably at a lower spatial resolution than the raw input image. Another key requirement of these cropping operations is that the output must be a feature map of fixed size, so that we can feed it to the second-stage network downstream. Say in this example we want the output to have a fixed spatial size of 2 × 2: in RoI Pool, we divide the snapped region proposal into roughly equal-sized subregions — they won't be exactly equal, because after dividing we also snap the subregion boundaries to the grid cells of the feature map — and then within each of those 2 × 2 subregions, we do a max-pooling operation. That's the RoI Pool operation we talked about last time; it lets us swap the order of cropping and feature computation.

But there are actually a couple of problems with the RoI Pool operator. One is that the features are misaligned due to all the snapping. There are two places where we snap onto the grid in RoI Pool: first we take the whole region proposal and snap it onto grid cells, and then after subdividing it, we snap the subregion boundaries onto grid cells as well. Because of these two rounds of snapping, we end up with misalignment in the features computed by the operation. You can see this with a double projection: in the visualization, the green box shows the original region proposal in the input image; we projected it onto the feature map and snapped it to the blue box with its differently colored subregions; and now, projecting those subregions back into the original input image, the midpoints of the subregions — the average positions at which their features are computed — end up pretty misaligned with the input bounding box we originally wanted to work with. That misalignment is one big potential problem with RoI Pool.

There's another subtle problem, also related to the snapping. One way to look at this cropping operation is as a function that takes two inputs and produces one output: the inputs are the feature map for the entire image and the coordinates of the bounding box we want to crop, and the output is the features for that box. But because of the snapping, we cannot backpropagate into the coordinates of the bounding box — the coordinates are always snapped onto the grid cells of the feature map. So in RoI Pool we can backpropagate from the region features back to the image features, but there's no way to backpropagate from the region features back to the box coordinates. That's a hint that something is a little off inside this operation, because normally we like operations that are fully differentiable and can properly pass gradients between all of their inputs and outputs.

The fix for this is the RoI Align operation, which we didn't have time to cover in detail last time. I wanted to go over it today because you'll actually be implementing it on your homework assignment, so it seems like something we should talk about in lecture.
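To see exactly where the two rounds of snapping happen, here's a coordinates-only sketch of the RoI Pool grid computation (no feature values, just which integer feature-map cells each output subregion pools over). The specific box, stride, and output size in the test are hypothetical.

```python
import math

def roi_pool_grid(box, stride, out_size):
    # RoI Pool, coordinates only. Two rounds of snapping:
    # (1) snap the projected box (image coords / stride) to integer
    #     feature-grid cells;
    # (2) split it into out_size x out_size subregions whose boundaries
    #     are also rounded to integer cells (so subregions can end up
    #     unequal in size).
    x1 = math.floor(box[0] / stride)
    y1 = math.floor(box[1] / stride)
    x2 = math.ceil(box[2] / stride)
    y2 = math.ceil(box[3] / stride)
    xs = [round(x1 + (x2 - x1) * k / out_size) for k in range(out_size + 1)]
    ys = [round(y1 + (y2 - y1) * k / out_size) for k in range(out_size + 1)]
    # Each entry is the integer cell range (x1, y1, x2, y2) that one
    # output position would max-pool over.
    return [[(xs[i], ys[j], xs[i + 1], ys[j + 1])
             for i in range(out_size)] for j in range(out_size)]
```

Because `floor`, `ceil`, and `round` appear everywhere, the output cells depend only on snapped integer coordinates — which is precisely why no gradient can flow back to the real-valued box coordinates.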
The idea with RoI Align is that we want to fix these problems by removing all of the snapping that happens inside RoI Pool — making everything continuous, with no snapping anywhere in the operation. Just as before, we take the original region proposal and project it onto the grid of features. But now, rather than snapping, we divide the projected proposal into equal-sized subregions and sample a fixed number of equally spaced points within each subregion; these samples are the positions at which we want to read off the image features. The problem is that, because we didn't do any snapping, the positions at which we want to sample probably do not align with the grid of the image features. Normally we think of sampling as pulling out the feature vector at one integer position in the grid, but now we want to sample the feature map at arbitrary real-valued spatial positions. The way we do that is with bilinear interpolation, interpolating between the cells of the grid of image features.

To zoom in on what that looks like for one sample point: in the bottom right-hand subregion we want to sample four different points, and let's look at the bottom right-hand one. We want to sample a feature vector from the image features at the real-valued position (6.5, 5.8), but of course the grid is discrete, so we can't just pull out a feature vector at that point. What we can do instead is make a locally linear approximation to the feature grid, and compute a feature vector at this real-valued position as a linear combination of the nearest-neighbor features that actually do appear at integer positions in the grid. Specifically, with bilinear interpolation, in the x direction we look at the two nearest-neighbor features and blend them with weights that vary linearly with the distance to each, and we do the same in the y direction with the two nearest neighbors there.

To walk through it a little more explicitly: the feature vector at the real-valued position (6.5, 5.8) is a linear combination of the four nearest-neighbor feature vectors that fall on integer positions of the grid, and the weight on each of those four neighbors depends on the x and y distances between the sample point and that grid position. The x weight for the feature at grid cell (6, 5) is 0.5, because the sample position is exactly halfway between two integer grid cells in x, and its y weight is 0.8, because the sample is quite close to that cell in y. We repeat this for each of the four nearest-neighbor features in the spatial grid, where the linear weight for each depends on the distance between the actual grid cell position and the green position at which we want to sample.

One thing to notice is that this is a differentiable operation. During backpropagation, we receive an upstream gradient for each sampled real-valued point, and we can backpropagate that gradient both into the actual feature vectors of the grid and into the sample position itself — and since the real-valued sample positions were determined by the coordinates of the bounding box, we can backpropagate all the way into the box coordinates that were received as input to the RoI Align operation. So this is a differentiable operation we can backpropagate through very nicely.

Then we just repeat this procedure: within the region proposal, we divide it into equal-sized subregions, sample equally spaced points within each subregion, and compute a feature vector at each green sample point by bilinear interpolation. That gives us an equal number of feature vectors per subregion, so we can max-pool over the sampled points within each subregion to get one final feature vector per subregion, which we then propagate forward into the rest of the network.
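The bilinear sampling step above can be written in a few lines. This is a minimal scalar sketch for a 2D feature map stored as nested lists (a real implementation would vectorize over channels and batch, and would clamp at the borders; this version assumes the sample point is at least one cell away from the bottom and right edges).

```python
import math

def bilinear_sample(feat, x, y):
    # Sample a 2D feature map (feat[row][col]) at a real-valued position
    # (x, y) by blending the four nearest grid cells; the weight on each
    # neighbor varies linearly with distance along each axis, exactly as
    # in the (6.5, 5.8) example: x weights 0.5/0.5, y weights 0.2/0.8.
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = x0 + 1, y0 + 1
    wx1, wy1 = x - x0, y - y0          # fractional offsets
    wx0, wy0 = 1 - wx1, 1 - wy1
    return (wx0 * wy0 * feat[y0][x0] + wx1 * wy0 * feat[y0][x1] +
            wx0 * wy1 * feat[y1][x0] + wx1 * wy1 * feat[y1][x1])
```

Because the output is a smooth function of (x, y), gradients flow back to the sample position — the property RoI Pool lacked.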
I know this RoI Align operation was a little bit complicated, but it basically solves the two problems we had with RoI Pool. Because there is no snapping anywhere in the sampling procedure — we use real-valued sampling and bilinear interpolation — all of our sampled features are perfectly aligned with positions in the input image, which solves the alignment problem. And it also solves the differentiability problem, since we can now backpropagate upstream gradients both into the image feature map and into the positions of the bounding boxes. You'll get a chance to implement this for yourself on assignment five, so hopefully it will be fully clear by then. Actually, let me pause here to let this sink in — were there any questions on this RoI Align operation? Okay, there will probably be questions on Piazza once we get to assignment five.

So that returns us to the set of object detection methods we talked about last time: slow R-CNN, fast R-CNN, and then faster R-CNN and the single-stage methods that no longer rely on external region proposals. One interesting observation about the faster and single-stage detection methods is that both of them still rely on some notion of anchor boxes: in faster R-CNN we have a fixed set of anchor boxes used inside the region proposal network, and in single-stage detectors we make a classification decision directly for each anchor box. So an interesting thought exercise is: is there any way to design an object detection system that does not rely on anchor boxes at all, and just
Instead, it would more directly predict bounding boxes in a natural way. There's actually a very cool paper that did this, done right here at the University of Michigan last year. Their really cool idea was the CornerNet architecture for object detection, which is very different from all of the approaches to object detection that we've seen so far. I don't want to go too much into the details; I just want to give you a brief flavor of what it does and how it differs from the detection methods we've covered, and this one will not be on the homework, so it's fine if you don't understand all the details. The idea with the CornerNet architecture is that we change the way we parameterize bounding boxes: we now represent a bounding box by its upper-left corner and its lower-right corner. To detect a bounding box, each pixel of the image needs to decide what the probability is that it is the upper-left corner of each object category, and what the probability is that it is the lower-right corner of each object category. So we run the image through some backbone CNN to get image-level features, and from those features we predict an upper-left corner heatmap for each object category we want to detect. This heatmap says, for every position in the feature map and every category, what is the probability that this location in space is the upper-left corner of some bounding box? You can train this with a per-pixel cross-entropy loss. Similarly, there's a second branch that predicts, for each position in space and each category, the probability that this position is the lower-right corner of some bounding box, and this you can also train with a cross-entropy loss. So you can imagine training this thing: for every position in the feature map you have targets for whether that position should be an upper-left corner and whether it should be a lower-right corner. But now there's another problem: at test time, how do we actually pair up each upper-left corner with some lower-right corner in order to emit a bounding box? The way we overcome that is by also predicting an embedding vector for every position in space. The upper-left branch emits an upper-left corner embedding vector, and the lower-right branch emits a lower-right corner embedding vector, and the idea is that for each bounding box, the embedding vector predicted at its upper-left corner should be very similar to the embedding vector predicted at its lower-right corner. At test time (and I'm hand-waving this part a little bit) we can use distances between these embedding vectors to pair up each upper-left corner with some lower-right corner, and emit a set of bounding box outputs. I thought this was a very clever and very cool approach to object detection, very different from the other approaches we've seen so far, and of course it happened right here, so I got to talk about it. I just wanted to give you a brief sense of this very different flavor of object detection. I'd also like to point out that this is very recent; it was just published last year, so it remains to be seen whether it will become an important paradigm in object detection moving forward. But it's so different that maybe it could be; who knows.
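To make the embedding-based pairing idea concrete, here is a deliberately simplified, hypothetical sketch: greedily match each upper-left corner to the closest unused lower-right corner by embedding distance. The real CornerNet matching is more involved (it uses push/pull losses and score thresholds); this function and its greedy strategy are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def pair_corners(tl_embed, br_embed):
    """Greedily pair each top-left corner i with the lower-right corner j
    whose embedding vector is closest in L2 distance, without reuse.

    tl_embed: (N, D) embeddings at detected top-left corners.
    br_embed: (M, D) embeddings at detected bottom-right corners.
    Returns a list of (i, j) index pairs.
    """
    pairs, used = [], set()
    for i, e in enumerate(tl_embed):
        dists = np.linalg.norm(br_embed - e, axis=1)
        for j in np.argsort(dists):          # nearest unused match wins
            if j not in used:
                used.add(int(j))
                pairs.append((i, int(j)))
                break
    return pairs
```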
So that gives us some more details on object detection, and now we've got a couple of other computer vision tasks to talk about. Next up is the task of semantic segmentation. To define the task: in semantic segmentation, we want to label every pixel in the input image with a category label. So for one of these input images of an adorable kitten walking around on the grass, we want to label every pixel of that image as kitten, grass, trees, sky, or whatever, given some fixed set of object categories that our system is aware of. One important point about the semantic segmentation task is that it is not aware of different object instances; the only thing it does is label all the pixels in the image. What that means is that if we have two objects of the same category next to each other, like the two cows in the example on the right, semantic segmentation does not distinguish the two instances of the category; it simply labels all the pixels. The output gives us this kind of mutant brown cow blob in the middle of the image, but it doesn't tell us which pixels belong to which cow, or even how many cows there are. We'll overcome that with some later tasks, but for now this is the definition of semantic segmentation. One kind of silly way you could imagine solving this task is again using the idea of sliding windows: for every pixel in the image, we could extract a small patch around that pixel, feed that patch to a CNN, and have the CNN predict a category label for the center pixel, repeating this for patches around every pixel in the image, over and over again. This would be very, very slow and very inefficient; it would be the equivalent of slow R-CNN, but for semantic segmentation. Of course we don't actually use this method in practice, but it's instructive to note that in principle it should work pretty well given infinite computation. Instead, what's very commonly used in semantic segmentation is a CNN architecture called a fully convolutional network. This is a convolutional network without any fully connected layers or global pooling layers; it's just a big stack of convolution layers. The input is an image of some fixed spatial size, and the final output is a set of class scores for every pixel. As an example, you can imagine stacking a whole bunch of 3x3, stride-1, pad-1 convolutions, so the output has the same spatial size as the input. We want the final convolution layer to have a number of output channels equal to the number of categories we want to detect, and then we can interpret the output of that final layer as a score for each pixel for each category. Then we can take a softmax over the scores at each pixel to get a probability distribution over labels at each pixel, and train the whole thing using a per-pixel cross-entropy loss. Yeah, question? The question is: how do we know how many categories we have in the image? Whenever you're training a system for semantic segmentation, it's just like image classification in that we select a set of categories beforehand, and that fixed set is all the system is aware of. So that would be determined by the dataset on which you train.
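The per-pixel loss just described can be sketched directly: a softmax over the channel dimension at every pixel, then the average negative log-likelihood of the true labels. The (C, H, W) layout and the function name are assumptions for this sketch.

```python
import numpy as np

def per_pixel_cross_entropy(scores, labels):
    """Average cross-entropy loss over all pixels.

    scores: (C, H, W) class scores per pixel from the final conv layer.
    labels: (H, W) integer ground-truth category per pixel.
    """
    C, H, W = scores.shape
    s = scores - scores.max(axis=0, keepdims=True)           # numerical stability
    logp = s - np.log(np.exp(s).sum(axis=0, keepdims=True))  # log-softmax per pixel
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return -logp[labels, yy, xx].mean()                      # NLL of true label, averaged
```

With uniform scores over C categories, every pixel contributes log(C), which is a handy sanity check when wiring this up.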
There will be some set of categories that the dataset has labels for. And unlike bounding-box object detection, we don't have any kind of variable-size output problem, because there's a fixed number of object categories that the system is aware of, and we simply make a prediction for every category at every pixel; the size of the output is fully determined by the size of the input, so we avoid the variable-output problem we had to deal with in detection. So we can imagine training this thing with a per-pixel cross-entropy loss, and that should work quite nicely. But there are a couple of problems with the architecture I've drawn on the screen. One is that in order to make good semantic segmentation decisions, we might want to make decisions based on relatively large regions of the input image. If we imagine stacking up a whole bunch of 3x3 convolutions with stride 1 and pad 1, then the effective receptive field size grows only linearly with the number of convolution layers. Remember that if we stack two 3x3 convolutions on top of each other, the output of the second convolution is effectively looking at a 5x5 region of the input, and if we stack three 3x3 convolutions, the output is effectively looking at a 7x7 region. If you generalize that argument, you see that a stack of L 3x3 convolutions gives an effective receptive field size of 1 + 2L, which means we would need a very large number of layers to get a big receptive field. The other problem is computation: in segmentation we often want to work on relatively high-resolution images.
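The 1 + 2L receptive-field growth is easy to check numerically. This tiny helper (an illustration, not anything from a library) generalizes the formula to any kernel size: each extra stride-1 KxK layer adds K - 1 pixels of extent.

```python
def receptive_field(num_layers, kernel=3):
    """Effective receptive field of a stack of stride-1 convolutions.

    One layer sees `kernel` pixels; each additional layer extends the
    field by (kernel - 1) on top of that, giving 1 + L*(kernel - 1),
    i.e. 1 + 2L for 3x3 kernels.
    """
    return 1 + num_layers * (kernel - 1)
```

For 3x3 kernels you would need 112 stride-1 layers just to cover a 224-pixel extent, which is the point of the complaint above: linear growth is far too slow without downsampling.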
People sometimes apply this not just to internet images but also to things like satellite images that are megapixels in all directions, so it's important that this be relatively computationally efficient, and doing all of this convolution at the original image resolution would be very expensive. As a result, nobody actually uses architectures that look like this for semantic segmentation. Instead, you'll often see architectures with more of this flavor, which use some kind of downsampling followed by some kind of upsampling. The advantages are twofold. One is that by downsampling in the beginning of the network, we get a lot of computational gains, just as we did in the image classification setting. The other is that downsampling allows the effective receptive field size to grow much more quickly. Now, downsampling we're very familiar with from convolutional networks for classification: we can use something like average or max pooling, or strided convolution, to downsample inside a neural network. But upsampling is something we really haven't talked about, and we don't yet have any tools in our neural network bag of tricks for performing upsampling inside a network. So let's talk about a couple of options. Because downsampling is often called pooling, upsampling should clearly be called unpooling, since it's the opposite of pooling. One option is so-called bed-of-nails unpooling. Here, given an input with C channels and spatial size 2x2, we want to produce an output which is twice as large spatially, with the same number of channels at each position. The reason this is called bed-of-nails unpooling is that the output is filled with zeros, and then we copy the feature vector for each position in the input into the upper-left corner of the corresponding region of the output. It looks kind of like one of those beds of nails that people lie on sometimes: zero everywhere, with feature vectors sticking up in a grid-like pattern. This is actually not such a great idea; it probably has bad aliasing problems, so people don't use it much in practice anymore. Another unpooling method that people use more commonly is nearest-neighbor unpooling. Here we just duplicate: each feature vector in the input is copied a fixed number of times to produce a larger output, so a 2x2 input becomes a 4x4 output, and each position in the 2x2 input gets copied four times to give rise to a 2x2 region in the output. Also, remember we went through all that song and dance about bilinear interpolation inside the RoI Align operation; it turns out we can use bilinear interpolation for upsampling as well. Here, the input has C channels and is 2x2 in space; we can imagine dropping a 4x4 grid of equally spaced sampling points over the input feature grid, and for each of those points using bilinear interpolation to compute the output features. This gives a smoother version of upsampling compared to nearest-neighbor. And if you're familiar with image processing, you know that another thing we can do is bicubic interpolation.
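The two simplest unpooling schemes just described, nearest-neighbor and bed-of-nails, can each be written in one or two lines; this single-channel sketch uses assumed names and a 2x upsampling factor for illustration.

```python
import numpy as np

def nearest_unpool(x, factor=2):
    """Nearest-neighbor unpooling: repeat each value `factor` times
    along both spatial axes, so a 2x2 input becomes 4x4."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def bed_of_nails_unpool(x, factor=2):
    """Bed-of-nails unpooling: place each value in the upper-left of
    its output region and fill everything else with zeros."""
    H, W = x.shape
    out = np.zeros((H * factor, W * factor), dtype=x.dtype)
    out[::factor, ::factor] = x
    return out
```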
One way to think about bilinear interpolation is that we're using the nearest neighbors in a little 2x2 region to compute a locally linear approximation to the input. With bicubic, instead, we use a larger region of the input feature map to compute a locally cubic approximation, and then sample according to this cubic approximation rather than a linear one. I don't want to go into the details here, but this is basically the standard: whenever you resize an image in your browser or an image editing program, it's usually using bicubic interpolation by default, and if it's good enough for resizing JPEG images for the web, it's a reasonable thing to try for resampling or resizing feature maps inside your neural network. These are all fairly simple approaches to upsampling, and they're all implemented in standard frameworks like PyTorch or TensorFlow. Another slightly more exotic form of upsampling is the opposite of max pooling, which of course is called max unpooling. The idea here is that the unpooling operation is no longer an independent operator in the network; instead, each unpooling or upsampling operation is tied to a corresponding downsampling operation that took place earlier in the network. When we do a max pooling operation to downsample, we remember the position inside each pooling region where the max value occurred, and when we unpool, we do something like bed-of-nails unpooling, except that rather than placing each feature vector into the upper-left corner of the region, we place it into the position where the max value was found in the corresponding max pooling region earlier in the network. The reason this might be a good idea is that when you train a network with max pooling, the forward pass introduces a kind of weird misalignment, because max pooling selects different points in each pooling region. If we unpool in a way that matches the positions the maxes were taken from in the corresponding pooling operation, we'll hopefully end up with better alignment between the feature vectors compared to something like bed-of-nails or bilinear. The rule of thumb is: if the downsampling portion of your network used something like average pooling, then for upsampling you should probably consider nearest-neighbor, bilinear, or bicubic; but if your downsampling operation was max pooling, then you should probably consider max unpooling as your upsampling operation. So which upsampling operator you choose depends on which downsampling operator you chose in the other half of your network. All of these options for upsampling have no learnable parameters; they're just fixed functions. In contrast, there's another upsampling operator that people sometimes use that is a learnable form of upsampling, called transposed convolution. We'll see in a couple of slides why it has this funny name, but to motivate it, let's remember how normal convolution works. Consider a 3x3 convolution with stride 1 and pad 1. Each position in the output is the result of a dot product between the filter and some spatial region of the input, and because it's stride 1, we move one position in the input for each position we move in the output.
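The max pooling / max unpooling pairing described above can be sketched as two coupled functions: the pooling step records where each max came from, and the unpooling step puts values back at exactly those positions. The loop-based implementation and 2x2 windows here are simplifying assumptions for clarity.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling that also records the argmax (as a flat index
    within each window), mirroring what a framework would remember."""
    H, W = x.shape
    out = np.zeros((H // k, W // k), dtype=x.dtype)
    idx = np.zeros((H // k, W // k), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            window = x[i*k:(i+1)*k, j*k:(j+1)*k]
            idx[i, j] = window.argmax()
            out[i, j] = window.flat[idx[i, j]]
    return out, idx

def max_unpool(x, idx, k=2):
    """Place each value back at the position where its max was taken,
    zeros everywhere else (bed-of-nails, but at the remembered spots)."""
    H, W = x.shape
    out = np.zeros((H * k, W * k), dtype=x.dtype)
    for i in range(H):
        for j in range(W):
            di, dj = divmod(idx[i, j], k)
            out[i*k + di, j*k + dj] = x[i, j]
    return out
```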
OK, this should be very familiar at this point; I'm just walking through it very carefully. Now let's change to a stride-2 convolution. With stride 2 it's exactly the same, except we move two pixels in the input for every one pixel we move in the output; the stride gives us a ratio between the number of pixels we move in the input and the number of pixels we move in the output. So in normal convolution, when we set the stride greater than one, we end up downsampling the input feature map because of this ratio. But is there some way we could set the stride less than one, and somehow stride multiple points in the output for every one point in the input? If we could figure out a way to do that, it would be a learnable upsampling operation that we could learn with a convolution, and that operation is transposed convolution. The way it works is that our input is now a low-resolution thing of maybe 2x2 spatial size, and our output is a higher-resolution thing of maybe 4x4 spatial size, and the way the input interacts with the filter is a little bit different. We take our 3x3 filter and multiply it by one element of the input tensor; that's a product between the scalar value in the input and the filter, and we copy this weighted version of the filter into the corresponding position in the output. Then we move two positions in the output as we move one position in the input. So for the second input position, we again take the filter, weight it by the second blue pixel in the input, and copy the weighted filter into this shifted position in the output; in the region where we have outputs from two different filter placements, we sum where they overlap. If we repeat this procedure over each of the positions in the 2x2 input, it gives rise to a native 5x5 output that is an overlapping sum of four different 3x3 regions, and we can then get a 4x4 output by trimming one of the rows and one of the columns; this trimming operation is kind of akin to padding in the transposed convolution operation. To make this more concrete, we can look at an example in one dimension. Here the input is a vector of length two and the filter has length three. To compute the output, we take the filter (x, y, z), multiply it by the first value a in the input, and copy it into the first three positions of the output. Then we move two positions in the output for every one position we move in the input: at the second position, we again take the filter, weight it by the second input value b, and copy it into this shifted position in the output, and in the overlapping region we sum where the two copies overlap. Is this operation clear, what's going on with transposed convolution? OK. Now, if you read different papers, you'll find that this operation goes by different names; different authors call it different things. Sometimes people call it deconvolution, which is a really technically wrong name in my opinion, but you'll see that used in papers.
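The 1-D example above translates directly into code: each input value scales a copy of the filter, the copies are placed `stride` apart in the output, and overlaps sum. This is a sketch of the arithmetic, not a framework implementation; with input (a, b) and filter (x, y, z) at stride 2 it reproduces the lecture's output (ax, ay, az + bx, by, bz).

```python
import numpy as np

def transposed_conv1d(x, w, stride=2):
    """1-D transposed convolution with no output trimming.

    Each input value x[i] scales a full copy of the filter w, the copy
    is written starting at position i * stride, and overlapping regions
    are summed.
    """
    out = np.zeros(stride * (len(x) - 1) + len(w))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(w)] += v * w
    return out
```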
People sometimes call it upconvolution, which is kind of catchy. People sometimes call it fractionally strided convolution, because it's like a convolution with a fractional stride, which is kind of nice. People sometimes call it backward strided convolution, because it turns out that the forward pass of transposed convolution is the same as the backward pass of a strided convolution, so that's also a reasonable name. But the name that I like best, and that I think the community has converged on, is transposed convolution. Now, why the heck would we call this thing a transposed convolution? It turns out we can express the convolution operator as a matrix multiply. Here we're showing an example in one dimension: the convolution between a length-3 vector, which should read (x, y, z), not (x, y, x) as it is on the slide (I think I've had that typo on there for like two years), and a vector (a, b, c, d). The output is the normal convolution operator we're familiar with, with zero padding, and we can perform the whole convolution as a matrix multiply: we duplicate one of the vectors along the diagonals of the matrix on the left, and then do a matrix-vector multiply between that matrix and the other vector. This lets us do convolution in one matrix-vector multiply. Transposed convolution then means we do the exact same thing, except we transpose the data matrix before performing the matrix multiply. And if you look at the form of this matrix multiply, you can see that transposed convolution with stride 1 actually corresponds to a different normal stride-1 convolution: the transposed data matrix has the same kind of sparsity pattern in transposed form as it did untransposed. What that means is that for stride 1, convolution and transposed convolution are basically the exact same thing, modulo maybe slightly different rules for how padding works; mathematically they're essentially equivalent. But things get interesting when we consider stride greater than one. Here we're showing an example of stride-2 convolution on the left, expressed as a matrix multiply, and on the right the transposed version of the same operator. You can see that the sparsity pattern in the transposed stride-2 data matrix is very different; there's no way we could get this sparsity pattern with any striding pattern of normal convolution. This shows that transposed convolution with stride greater than 1 corresponds to a genuinely new operator that we cannot express with normal convolution. By the way, looking at convolution in this way is another easy way to derive the backward pass for convolution; in particular, it makes it easy to see that the forward pass of transposed convolution is the same as the backward pass of normal convolution, so there's a nice symmetry between the two operators. And that's basically it for semantic segmentation: we build a network that involves some kind of downsampling and some kind of upsampling, we talked about many different choices for upsampling, and the loss function is a per-pixel cross-entropy.
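The matrix view of convolution above can be made concrete in a few lines. This sketch builds the data matrix for a 1-D "valid" convolution (filter values duplicated along shifted rows); multiplying by it performs the strided convolution, and multiplying by its transpose performs the transposed convolution. The function name and the valid-padding choice are assumptions for illustration.

```python
import numpy as np

def conv_matrix(w, n, stride=1):
    """Matrix X such that X @ signal computes a 'valid' strided 1-D
    convolution (cross-correlation) of filter w with a length-n signal.
    Each row holds a copy of w shifted by `stride` positions."""
    rows, i = [], 0
    while i + len(w) <= n:
        row = np.zeros(n)
        row[i:i + len(w)] = w
        rows.append(row)
        i += stride
    return np.array(rows)
```

For stride 1, the transpose has the same diagonal-band sparsity pattern, so X.T acts like another ordinary convolution; for stride 2 the transpose spreads each input over stride-spaced, overlapping filter copies, which is the new upsampling operator no ordinary stride can reproduce.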
But of course, semantic segmentation is not really the end of it. So far we've talked about object detection, which detects individual object instances and draws bounding boxes around them, and semantic segmentation, which gives per-pixel labels but is not aware of different object instances. You might wonder if there's some way to overcome this and get the fine object boundaries of semantic segmentation while also keeping the object identities from object detection. Before we talk about that, there's another technical wrinkle: computer vision people often divide object categories into two types. One is "thing" categories: things are objects for which it actually makes sense to talk about instances, like coffee cups, cans, dogs, cats, or people. It's kind of awkward terminology, but I guess people are things in the mind of a computer vision researcher; a thing category is one where it makes sense to talk about discrete object instances. On the other hand, "stuff" categories are more amorphous, and it doesn't make sense to talk about individual instances: stuff would be things like sky, grass, water, or trees, where there's just a kind of amorphous blob of stuff in the image. Computer vision researchers often distinguish between these two types of categories, and this brings us to another distinction between object detection and semantic segmentation: object detection only handles thing categories, because it needs to distinguish individual object instances, while semantic segmentation handles both things and stuff, but throws away the notion of object instances. So a kind of hybrid task is instance segmentation, where we jointly detect objects, and then for each object we detect, we output a segmentation mask for the detected object. Instance segmentation only applies to thing categories, because it only makes sense for types of objects where we can talk about individual instances. For example, running instance segmentation on the image of the two cows would give two detected cows, and for each detected cow it would tell us which pixels of the image belong to that cow. The approach here is fairly straightforward: we do object detection, and then for each detected object we do segmentation to determine which pixels of the detected region correspond to the object versus the background. To do this, we build on our favorite object detector, faster R-CNN, which I think you should be familiar with by now. To move from object detection to instance segmentation, it gets a new name: it goes from faster R-CNN to Mask R-CNN. The main difference in the method is that we attach an extra branch, an extra head, that operates on the features of each region and predicts a foreground/background segmentation mask for each of the objects we detect; other than that, everything is the same as object detection with faster R-CNN. So the way this works is that we still take our input image, run it through the backbone to get image-level features, run the region proposal network to get region proposals, and use RoI Align to get feature maps for each region proposal. Then, for each region proposal, we basically run a little tiny semantic segmentation network that predicts a segmentation mask for each of our detected objects. You can imagine training this thing by jointly training object detection and semantic segmentation, where we do semantic segmentation for each object we detect. What this means is that the training targets for these segmentation masks are aligned to the bounding boxes of the region proposals. For example, if our region proposal were this red box in the image, and the category we were trying to detect were "chair", then the target segmentation mask for Mask R-CNN would be the foreground/background mask warped to fit within the chair's bounding box; had our region proposal been different, our target segmentation mask would have been different as well. Similarly, the target segmentation mask depends on the category of the object: the target mask for this red bounding box for the class "couch" would be the pixels within the box that correspond to the couch, and if we were detecting a person, the target mask would be the pixels of the person within the bounding box. And that's basically it for Mask R-CNN. You might have thought this was going to be challenging, and it turns out there are a lot of small details in the paper that matter a lot, but the big picture is that we're just attaching an extra head on top of our favorite detector, faster R-CNN, to predict an extra thing per region. This gives really good results when you train it up. These are some actual predicted instance segmentation outputs from Mask R-CNN.
You can see that it's jointly detecting bounding boxes using all the object detection machinery, and then for each bounding box it outputs a segmentation mask saying which pixels of the box actually correspond to the detected object. This works really well, and it's a fairly close to state-of-the-art baseline for instance segmentation and object detection these days. OK, now there's this other thing: instance segmentation, we said, only works on things, and semantic segmentation handles things and stuff. Yeah, question? The question is: can we do single-stage instance segmentation? It turns out there are a couple of methods for that, but they're very, very recent; I think the first really successful single-stage instance segmentation method was published just about a month ago, so that's something people are actively trying to figure out right now. Actually, remember a couple of weeks ago when you had guest lectures for a week? I was away at a conference, and one of the first really successful single-stage instance segmentation methods was presented there. This is super hot-off-the-presses stuff, so I wasn't quite comfortable putting it on slides yet; people are working on it, but I don't think it works super robustly yet. Then there's a kind of hybrid task that people sometimes work on called panoptic segmentation, which blends semantic segmentation and instance segmentation. The idea is that we want to label every pixel in the image, but for the thing categories we also want to distinguish the object instances. I don't want to go into the details of how these methods work; I just want to give you the reference to let you know that this is a task people sometimes work on, and some of these methods for panoptic segmentation actually work really, really well. These are predicted outputs from a pretty high-performing method for panoptic segmentation, and you can see that it's labeling all the pixels in the image: for the thing categories, like the people and the zebra, it's actually disentangling the different instances, but for the stuff categories, like the grass or the trees or the road, it just gives us amorphous blobs. Again, I don't want to go into details; I just want to give you a pointer so you know this exists. Another cool task that people sometimes work on is keypoint estimation. You saw that we were able to output segmentation masks for people, but sometimes we want more fine-grained detail about the exact pose of people in images. One way to frame this is as a keypoint estimation problem: in addition to detecting all the people in the image, and in addition to saying which pixels belong to each person, we might want to say what their pose is, where their arms and legs are, and how their body is situated in the image. One way to formulate that is by defining a set of keypoints, like the ears, the nose, the eyes, and all of the joints, and then predicting the locations of each of these joints for each person in the image. It turns out we can do this with Mask R-CNN as well: to move from instance segmentation to keypoint estimation, we again attach an additional head on top of Mask R-CNN that predicts these keypoints for each detected person. The way that works is that we have the same formalism as Mask R-CNN, except that rather than predicting segmentation masks, we predict a keypoint mask for each of the K keypoints; there are something like seventeen keypoints for the different body parts, and for each of them we predict a keypoint mask, which we can train with a cross-entropy loss. I'm intentionally going quickly over some of these later examples; these are things I don't expect you to know in detail, just to let you know what's out there. So if we use this idea of keypoint estimation, we can do joint object detection, keypoint estimation, and instance segmentation all with a single Mask R-CNN network, and this can give really pretty amazing results: it can detect all the people sitting in a classroom, tell us how many instances there are, and give each person's exact pose in the image. That actually works pretty well. This is a general idea: any time you have a computer vision task where you want to make novel types of predictions for different regions of the input image, you can frame it as an object detection task where you attach an additional head onto an object detector that makes new types of outputs per region. Another cool example, from Johnson et al., is the idea of dense captioning. Here we want to merge object detection and image captioning, and output a natural-language description for lots of different regions in the input image. This is a thing you can do by following the same paradigm: attach an LSTM or another kind of RNN captioning model on top of each region processed by an object detector. We don't have time to watch the video, but I wrote a web demo that would run the thing in real time on a laptop and stream it to a server, so we could walk around the lab and watch in real time what kinds of sentences were being generated. You can see it has a lot to say about my lab space. This is actually Andrej Karpathy, who was my co-author on this paper, and you can see it's detecting him as "man with beard" and "man in a blue shirt", and it's talking about the black monitor screen. So it's doing joint object detection, and then for each detected region it makes a natural-language description, which is pretty cool. Another recent example, where I'm just shamelessly promoting my own work (hopefully you don't mind that too much), is 3D shape prediction. Here we want to do joint object detection, and for each detected object we want to predict a full 3D shape. We can do this following the exact same paradigm: take an object detection system and attach a new additional head that works on each region and predicts a 3D shape. We'll talk in more detail about how to handle 3D shapes in the next lecture. So our summary from today: we had a whirlwind tour of a whole bunch of different computer vision tasks. Of these, object detection is the one you'll be expected to know best and be most familiar with for the homework, but being aware of all these different tasks, and having a brief flavor of the fact that they exist and how they're done, is useful background when you consider applying different types of computer vision models in practice. So that was our whirlwind tour of a lot of different localization tasks in computer vision, and next time we'll talk more
about how we can process 3d data with deep throat networks that it will also see for example how to predict meshes with neural networks so then come back on Wednesday and we can talk about those
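The recurring idea above, attaching an extra head that makes new per-region predictions on top of an object detector, can be sketched in a few lines. This is a toy NumPy sketch with random weights and made-up names, not any real detector's API; it only illustrates how several heads (class, box, mask, keypoint) share the same per-region features.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    # One fully connected layer applied to each region's feature vector.
    return x @ w + b

# Pretend an RoI pooling/align step already produced one feature vector
# per detected region (e.g. 3 regions, 256-dim features).
num_regions, feat_dim = 3, 256
region_feats = rng.standard_normal((num_regions, feat_dim))

num_classes, num_keypoints, mask_size = 10, 17, 14

# Each "head" is just an extra set of layers on top of the shared features.
heads = {
    "class_scores": (rng.standard_normal((feat_dim, num_classes)),
                     np.zeros(num_classes)),
    "box_deltas":   (rng.standard_normal((feat_dim, 4)),
                     np.zeros(4)),
    "mask_logits":  (rng.standard_normal((feat_dim, mask_size * mask_size)),
                     np.zeros(mask_size * mask_size)),
    "keypoints":    (rng.standard_normal((feat_dim, num_keypoints * 2)),
                     np.zeros(num_keypoints * 2)),
}

# Every head produces one output per region; adding a new task is just
# adding another entry to this dictionary.
outputs = {name: linear(region_feats, w, b) for name, (w, b) in heads.items()}
for name, out in outputs.items():
    print(name, out.shape)
```

Real systems use small convolutional or recurrent sub-networks per head rather than a single linear layer, but the structure (shared region features, one extra head per task) is the same.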
Deep Learning for Computer Vision
Lecture 20: Generative Models II
It appears the microphone is gone from the room today, so I'll just have to shout; hopefully everyone can hear me. Everyone in the back, you can hear me okay? Yeah? Okay, good. Today we're up to lecture 20, and we're going to continue our discussion of generative models, so this will be generative models part two.
Remember, last time we started our discussion of generative models by recapping a couple of big distinctions in machine learning that we need to be aware of. One of these was the distinction between supervised learning and unsupervised learning. Recall that in supervised learning we have both the raw data x, which is our image, as well as the label y, which is the thing we want to predict, and what we want to do is learn some function that predicts the label from the image. This has been very successful; as we've seen throughout the semester, supervised learning lets us solve a lot of different computer vision tasks. But supervised learning requires us to build a big dataset of images that have been labeled by people in some way. So one of the holy grail problems in computer vision, or even machine learning more broadly, is figuring out ways to learn useful representations of data without those labels, which brings us to unsupervised learning: we have no labels, just data, and somehow our goal is to learn some underlying structure of the raw data even without any human-provided labels. If we could do this it would be awesome, because you can go out on the internet and just download tons and tons of data, and if we could do unsupervised learning in the right way, we could just download more and more data without having to label it, so it would come for free, and our models could get better and better. This is one of the holy grail challenges in machine learning; I think we're not there yet, but that's one direction we're pushing with generative models.
Remember, last time we also talked about the distinction between discriminative models and generative models; this was about the probabilistic formalism we use when building our concrete machine learning models. A discriminative model tries to model the probability distribution of the output, or label, y conditioned on the input image x. Because of the way probability distributions work, they have to be normalized, they have to integrate to one, and this constraint induces a sort of competition among the elements of the distribution. Recall that when we're building a discriminative model, this means we have a competition among the different labels the model might assign to the input image: for this input image of a cat, the labels "dog" and "cat" are competing with each other for probability mass. Remember, for discriminative models this fact that the labels compete was a bit of a downside, because it meant the model had no way to reject unreasonable data: if we gave it an image of a monkey, even though "monkey" is not a valid label, the model has no choice but to output a full, valid probability distribution over the label set, even though the image itself was unreasonable. With a generative model, by contrast, what we do is learn a probability distribution, a density function, over the images themselves.
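To make the normalization point concrete: a discriminative classifier's softmax output is forced to sum to one over the label set no matter how unreasonable the input is. A small NumPy sketch with made-up logits (not from any real model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp(logits - np.max(logits))
    return z / np.sum(z)

labels = ["cat", "dog"]

# Logits a classifier might produce for a cat image...
p_cat_image = softmax(np.array([3.0, 0.5]))
# ...and for a monkey image, which matches neither label.
p_monkey_image = softmax(np.array([0.1, 0.2]))

# Both are forced to be valid distributions over {cat, dog}:
print(p_cat_image, p_cat_image.sum())        # sums to 1
print(p_monkey_image, p_monkey_image.sum())  # sums to 1 even for a monkey
```

The model has no "none of the above" probability mass to assign, which is exactly the limitation described above.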
Again, because of this constraint that density functions need to integrate to one, now the things competing with each other are the images themselves. With a generative model, we need to assign a likelihood to each possible image that could appear in the universe, and all of those densities need to integrate to one, which means the model needs to decide, without any labels, which combinations of pixels are more likely to be valid images; this requires a very deep understanding of visual data. That's the generative model we're trying to learn. We also saw a third option, the conditional generative model, which tries to model the images conditioned on the label, and we saw that we can use Bayes' rule to write a conditional generative model in terms of the other components: a discriminative model and an unconditional generative model. Later in this lecture we'll see some more concrete examples of conditional generative models built out of neural networks.
After this introduction we saw a big taxonomy of generative models: this idea of building probability distributions over raw data is quite a large and rich area of research, and a lot of smart people have spent a lot of effort building different sorts of generative models with different properties. Last time we talked about one type, the autoregressive generative model. Recall that an autoregressive model explicitly writes down a parametric form of the density function: if we're trying to model the likelihood of an image x, we break x down into a set of pixels x1 through xT, assign some order to those pixels, and write down a function that outputs the likelihood of each pixel conditioned on all of the previous pixels in the image. This was just like the types of models we had built for modeling sequences with recurrent neural networks; we saw this exact same type of model when doing image captioning or language modeling. With these autoregressive models we model the pixels of the image one at a time, either with some kind of recurrent neural network, which gives rise to PixelRNN, or with PixelCNN, where we model this dependence using a convolutional network over a finite window rather than a recurrent network. Either way, with these autoregressive models we write down a parametric function with a neural network that directly parametrizes the likelihood of an image, and we train the model by doing straightforward maximum likelihood estimation: we maximize the likelihood the model assigns to the training data, which allows us to sample, or generate, new data at test time after the model is trained. These autoregressive models are simple and straightforward: they directly learn a density function over images and maximize it on the training data.
After these autoregressive models, we moved on to a more interesting category of generative models called variational autoencoders. With variational autoencoders we lost something compared to autoregressive models, but we also gained something. What we gained is that, in addition to modeling the likelihood of the data, we introduced a latent variable z, which is supposed to be a latent representation containing characteristics or attributes of the data at a hopefully higher semantic level than the raw pixel values. With a variational autoencoder, what we wanted to do was learn a generative model that could produce images conditioned on this latent variable z, but we found, in trying to manipulate the math, that it was completely intractable to directly maximize the likelihood of the data once we introduced this latent variable. So last time we went through a long extended proof (you can look back at the slides), and at the end of the day we derived a lower bound on the data likelihood: on the left we have the log-likelihood of the data, and on the right a lower bound on it consisting of two terms. To derive this lower bound we had to introduce an auxiliary encoder network alongside the decoder: the encoder network tries to predict the distribution of the latent variable z conditioned on the image x, and the decoder network tries to model the likelihood of the data x conditioned on the latent variable z. Where we left off last time is that we had introduced these two networks and used them to derive this lower bound. What we do with a variational autoencoder is train these two networks, the encoder and the decoder, jointly, learning their parameters to maximize this lower bound on the data likelihood. We can't compute the true likelihood of the data, but we can compute this lower bound: maybe the true likelihood sits up here and the bound sits somewhere below it, and we train the two networks to push the lower bound up, hoping that maximizing the lower bound also, in some indirect way, maximizes the likelihood of the data. This lower bound on the slide gives us our training objective for a variational autoencoder.
Now, both the encoder network and the decoder network need to output a probability distribution, which is a different sort of thing than most of the neural networks we've seen. The encoder inputs a concrete sample of data x and outputs a full probability distribution over the potential latent variables z, and outputting a probability distribution from a neural network is a funny thing we haven't really seen in other contexts so far. So we needed an additional trick to allow neural networks to have probability distributions as outputs, and the trick we used is that we simply decided all of these distributions would be Gaussian, in particular diagonal Gaussian, and we train the encoder network to output both the mean and the diagonal covariance matrix of this Gaussian. The decoder is similar: it inputs a concrete sample of the latent variable z and outputs a distribution over images x, and again we decide that this distribution is a diagonal Gaussian and have the network output the mean and the diagonal covariance matrix of that Gaussian.
To be a bit more concrete, imagine writing down a fully connected variational autoencoder architecture to train on the MNIST dataset. All of our images are grayscale, of size 28 by 28, so we can flatten them to a single vector of size 784. We could decide that the dimension of our latent variable z is going to be 20; the size of the latent variable z is a hyperparameter we need to set before training. A concrete architecture could look like this: the encoder network inputs the vector x and passes it through a linear layer going from 784 down to 400 units, and then from that hidden layer we have two other linear layers going from 400 hidden units to 20 units. One of them outputs the mean of the distribution: because z is a 20-dimensional vector, the mean of the Gaussian is just another 20-dimensional vector, so the network has a linear layer that directly outputs it. The parallel layer outputs the diagonal covariance of the Gaussian: because z is 20-dimensional, a full covariance matrix would be 20 by 20, but because of our simplifying assumption of diagonal covariance, all the off-diagonal entries are zero, and the only non-zero entries are the 20 elements along the diagonal. So the network just needs to output 20 numbers for the mean and 20 numbers for the elements along the diagonal of the covariance matrix. That gives us a concrete encoder architecture for this fully connected variational autoencoder, and the decoder looks very similar: it inputs a vector z and has a couple of linear layers that output the mean and covariance of the pixels themselves, where we again use the simplifying assumption that the pixels are distributed according to a Gaussian with a mean output by the network and a diagonal covariance output by the network. Of course, I've only drawn linear layers on the slide, but every linear layer should have some kind of nonlinearity between it and the next; that's implied in the diagram.
Once we've got this concrete architecture for a variational autoencoder, we need to think about how to train it. Recall that... oh yeah, question? The question is about the decoder's output size. The dimension of the decoder's output is 768 because we assumed we're working with a 28 by 28 image, and 28 by 28 is 768... wait, maybe I did the math wrong. What is it, 28 times 28? 784? Okay, yeah, I messed up the multiplication; thanks for pointing that out. It's common to see 768 elsewhere because 768 is 512 plus 256, which is a pretty common number to use, so I think I just typed that instead of actually multiplying.
Okay, so how do we actually train this thing, now that we've got a concrete architecture? Remember that our training objective is to maximize this variational lower bound, and the bound looks kind of scary: it has an expectation and a KL divergence, and these are things we usually don't see in loss functions.
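Concretely, the whole forward pass and both loss terms fit in a few lines. Here is a minimal NumPy sketch of one forward pass of the fully connected VAE described above, with random (untrained) weights; the sampling step is written via the reparameterization trick, which is how this is usually implemented in practice, and the closed-form KL used here is the standard diagonal-Gaussian formula against a unit-Gaussian prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

D, H, Z = 784, 400, 20  # input, hidden, latent sizes from the example above

# Random weights for illustration; a real model would learn all of these.
W_enc = rng.standard_normal((D, H)) * 0.01
W_mu = rng.standard_normal((H, Z)) * 0.01
W_logvar = rng.standard_normal((H, Z)) * 0.01
W_dec = rng.standard_normal((Z, H)) * 0.01
W_out = rng.standard_normal((H, D)) * 0.01

x = rng.random(D)  # a fake flattened 28x28 "image"

# Encoder: predict the mean and (log of the) diagonal variance of q(z|x).
h = relu(x @ W_enc)
mu, logvar = h @ W_mu, h @ W_logvar

# Sample z from q(z|x) via the reparameterization trick.
eps = rng.standard_normal(Z)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: predict the mean of p(x|z) (unit variance assumed for simplicity).
x_mean = relu(z @ W_dec) @ W_out

# KL( q(z|x) || N(0, I) ) in closed form for diagonal Gaussians:
#   -1/2 * sum(1 + log sigma^2 - mu^2 - sigma^2)
kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# Reconstruction term: negative Gaussian log-likelihood of x under p(x|z),
# up to a constant, i.e. a squared-error term.
recon = 0.5 * np.sum((x - x_mean) ** 2)

loss = recon + kl  # minimizing this maximizes the variational lower bound
print(loss, kl)
```

Note the KL here is always non-negative and is exactly zero when the encoder outputs mu = 0 and logvar = 0, i.e. when the predicted distribution matches the prior.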
but it turns out it's actually not as bad as it looks so that we can kind of walk through then what it actually looks like when we're training a variational autoencoder so when we train a variational autoencoder first we take some mini batch of data um x here which is our input data from our data from our training data set and then we pass that input data or that midi batch input data through our encoder network and that encoder network is then going to spit out a probability distribution over the latent variable z for for our in for that input element x and now we now immediately we can use this this predicted probability distribution to compute the second term in the very in the in the variational lower bound so what is this what is the second term in the variation lower bound saying it's it's saying that we want to compute the kl divergence between two distributions one distribution on the left here is this q theta of z given x so that is the predicted distribution of z um that is predicted by the the encoder network when we feed it with the with the input data x so that distribution is just this diagonal gaussian that our encoder that our encoder has spit out for us and now the second the second distribution p of z is the prior distribution over the latent variable z which we decided is going to be some simple distribution like a unit gaussian and that is not learned that that prior distribution over z is something that we fix at the beginning of training so now we all we need to do is compute the kl divergence between uh this this distribution that was output by the network which is a diagonal gaussian and this prior distribution which is a unique gaussian and now it's clear why we chose everything to be gaussian because if we all choose all these distributions to be gaussian then we can actually compute this kl divergence in closed form so i don't want to walk through exactly the derivation here but it turns out that um if you sort of expand out the 
definition of the kl divergence then by the fact that these two distributions are both diagonal gaussians then we can just compute this kl divergence in closed form so then uh yeah question yeah the question is um can we choose sort of other prior distributions for p of z so i think in a classical variational auto encoder we we tend to use a unit gaussian because it allows us to compute this term in close form but it's definitely an active area of research to choose other types of prior distributions for z um and the problem is that so sometimes you'll see people you try to use like a bernoulli distribution and then you have categorical variables or maybe like a laplacian distribution and it implies some like different sparsity pattern of late variables so you definitely can choose different prior distributions for z in a variational autoencoder but being able to compute this kl divergence term might become difficult depending on the particular prior distribution that you choose um so we often use the gaussian just for computational simplicity but it allows us to compute this term in closed form yeah yeah so the question is should we assume sort of different priors for different data sets well i think this is actually that's actually a very interesting question because this this prior is over the latent variables right so what does it mean if we have a diagonal gaussian and so one is that this priors over the latent variables and the latent variables are not observed in the data set the model is sort of learning the latent variable representation jointly with everything else so actually um the choice of prior is sort of our way to tell the model what sorts of latent variables that we want it to mark so then when we if we choose this like diagonal this unit uh this unit gaussian as a prior then that's telling the model that we want it to learn uh latent variables which are independent because it's a u because it's a diagonal gaussian and then all have zero median of 
variance um so i think that because the latent variables are being discovered jointly by the model for the data that's why i think it's okay maybe to use the same prior distribution even for different data sets but again it's sort of active area of research to try out different sorts of prior distributions in variational models yeah question question is um could we train sort of z different binary z dimension of z different binary classifiers instead of a diagonal gaussian and i think that would be equivalent but the difference is that um we actually want to we want to share the computation within the encoder network so right now the variation auto encoder is kind of interesting because we've got sort of two levels of modeling inside the model one is like the neural network which is computing many layers and the other is kind of the probabilistic formulation so it's true that even though we want that we're telling the model we wanted to learn a set of latent variables that are uncorrelated the way that we're computing those means and standard deviations of those latent variables is through a neural network that is going to share a lot of parameters and a lot of weights through shared hidden layers so i think it's a computational reason that we choose to do it in this way okay that gives us our first term of our very of our variation objective and really what this term what this term is just saying is that we want the distributions which are predicted by the encoder to sort of match the prior that we've chosen and the kl divergence is just penalizing the difference with disparity between the predicted distribution and the prior distance okay so then once we've got a player so that that allows us to compute this first term of the loss so then once we've got um now that we've got our distributions over those things over those over those latent variable z then we can sample from the predicted distribution to actually generate some concrete samples z which are now 
sampled from the distribution which was predicted by the encoder network and then we can take these samples z and feed them to the decoder network and now the decoder network is going to predict a distribution over the images x and now this this leads us to our second term in the objective so what does this second term in the objective say well it's we're taking an expectation and this expectation the variable over which we're taking the expectation is z the latent variable and the distribution over which z is drawn should be q theta of z given x so um sorry a q phi of z given x so q phi of z given x is the predicted distribution over z that is predicted by the encoder q when presented with the input uh with the with the with the input x right so then we fee was that that's exactly what we've done is that we've fed the input x to the encoder we've gotten this distribution z given x and now we've taken uh some samples from that distribution in order to have some sampling based approximation to this to this objective right so then this this term isn't a is an is expectation and the thing over which we're taking the expectation are latent variables which have been sampled according to the predicted distribution okay so that's kind of the the first half of the of this objective now the second question is what is the thing inside the expectation so now the thing inside that expectation is that we want to maximize the likelihood of the data x under the predicted distribution of the decoder when we feed it a sample z so then we want to uh so that this is kind of an auto encoder objective right that basically this is a data reconstruction term then it's saying that what we want to do is we take the data x we feed it to the encoder we sample some and then we get a predictive distribution over z we sample some z according to the distribution we feed those samples back to the decoder and now we and now the d now the predicted distribution of the z of the decoder um under that 
predicted distribution over x the original data x should have been likely so this is really a data reconstruction term it means that if we take our data and then use it to get a latent code and then use that that same latent code the original data should be likely again so that's that so this term is really why this is called an autoencoder right because remember an autoencoder was a function that tried to predict its input by bottlenecking through some latent representation and that's exactly what this term is doing except now it's sort of a probabilistic formulation of an autoencoder but it looks exactly the same it's a data reconstruction term but now um then then then we can easily compute this this uh this second term in the loss function right because we've got some samples from our latent codes and then we can run those samples through the decoder to get our distribution and then we can just use a maximum likelihood estimate like a maximize the likelihood of the predicted data under the predicted distribution from the decoder um so that we can then we can compute the second term in the objective once we've gotten these predicted distributions of x given z and that gives us our full training objective for the variational auto encoder so then uh the kind of every forward pass in our variation auto encoder we would give these two terms in the loss and then we would use that to train the to train the two networks jointly so then basically these two objectives are kind of fighting against each other right because the the blue term is this data reconstruction term it's telling us that if we take the data give it back to the latent code and then get the latent code it should be easy to reconstruct the data but now the the green term is kind of saying that the predicted distribution over the latent variables should be simple and it should be gaussian so that's sort of could it putting some kind of constraint on the types of latent codes that the encoder is allowed 
to predict right so then the the the kl divergence is sort of like forcing the latent codes to be as simple as possible by forcing it to be close to this this simple prior and the data reconstruction term is encouraging the latent codes to contain enough information to reconstruct the input data so somehow these two terms in the variational autoencoder are kind of fighting against each other but then once this thing is trained then of course we could uh sample a reconstruction of our original data by sort of sampling from a new reconstructed data from this uh this final predictive distribution of the data okay so then this is how you would train a variational auto encoder but once it's trained we can actually do some cool things so one thing is that we can generate new data um from the to the trained variational auto encoder so we can ask the variational auto encoder to just invent new data for itself that is sort of sampling from the underlying distribution from which the training data was drawn so the way that we can do that is that we we're going to use only the decoder so here we're going to first sample a a random latent variable from the prior distribution over z and then we'll take that random latent variable and then feed it to the decoder network to get a distribution over the over new data x and then we can sample from that predicted distribution over new data x to give some some invented sample from the from the data set so this means that after we've trained a variational auto encoder we can use it to just like synthesize new images that are hopefully similar to the images that were seen during the training set so now first now we actually get to see some some example results of exactly this process so now these are example images which have been synthesized from a variational auto encoder which has been trained on different data sets so on the left we see um some examples where we well not me but the authors of the paper had trained some variational 
autoencoder on the CIFAR dataset, and these are generated images that look vaguely like CIFAR images invented by the model. On the right, they trained on a dataset of faces, and you can see the model inventing new faces that look similar to the faces it saw during training. This is a generative model, so we should be able to generate data, and that's exactly what we're doing here. Another really cool thing we can do with variational autoencoders is play with the latent variable z. Remember, we forced structure onto the latent variables through the prior distribution; in particular, choosing a diagonal Gaussian prior means each latent dimension should be roughly independent. In this visualization we vary two dimensions of the latent code and feed the different latent vectors z to the decoder to generate new images. As we vary z2 in the horizontal direction, the generated images transition smoothly from a seven on the left to a slanted one on the right; as we vary a different dimension of the latent code in the vertical direction, they transition smoothly from a six at the top, through fours and nines in the middle, down to sevens at the bottom. This showcases some of the power of the variational autoencoder over something like the PixelCNN: the variational autoencoder doesn't just learn to generate data, it also learns to represent data through these latent codes z, and by manipulating the latent codes we can affect how the data is generated. That's a really powerful aspect that the autoregressive models just can't match. Another thing we can do with variational autoencoders is edit images: take an input image and modify it. First we train on our dataset; after training, given an image x we'd like to edit, we pass x through the encoder to predict its latent code, sample a latent code from that distribution, modify some of the values in that code, and feed the edited latent code back through the decoder to generate a new, edited data sample. Why does this make sense? Because we wanted the latent codes to represent some higher-order structure in the data, and the model is supposed to discover that structure through the process of maximizing the variational lower bound. So here we have some initial face image; we feed it to the variational autoencoder to get the latent code for the face, then change different elements of the predicted latent code and feed them back to the decoder to get edited versions of the initial image. You can see that as we vary one latent dimension along the vertical direction, at the top the guy looks really angry, not smiling at all, and at the bottom he's smiling and looks very happy; somehow this one dimension of the latent code encodes something like the facial expression or happiness level of the face. As we vary z2 along the horizontal direction, modifying a different dimension of the predicted latent code, the guy turns his face from one side to the other; the model has learned to encode the pose of the person's face into one dimension of the latent code, and by editing that dimension we can edit input images. It's important to point out that we have no control up front over which elements of the latent code will correspond to which properties of the input image; those are decided by the model itself. But by playing around with them after the fact, we see the model has in many cases assigned semantically meaningful factors to different dimensions of the latent code. Here's another example, from a slightly more powerful variational autoencoder, of this image-editing idea: the left column shows the original image, the second column shows the reconstruction if we feed the unedited latent code back to the decoder, and the next five columns show edited versions of the initial image where we change one of the values in the predicted latent code.
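The encode-edit-decode loop described above can be sketched directly; `decode` here is any function mapping a latent code to data, a hypothetical stand-in for the trained decoder network:

```python
def edit_latent(z, dim, new_value):
    # Copy the latent code and overwrite one dimension, leaving the rest intact.
    z_edit = list(z)
    z_edit[dim] = new_value
    return z_edit

def traverse(z, dim, values, decode):
    # Sweep one latent dimension across a range of values and decode each variant,
    # as in the smile / head-pose editing grids shown in the lecture figures.
    return [decode(edit_latent(z, dim, v)) for v in values]
```

In a real model, `z` would come from the encoder applied to the image being edited, and `decode` would be the trained decoder network.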
You can see on the left that by changing one dimension of the latent code we again change the direction of the head, and in the example on the right, a different dimension of the latent code corresponds to the direction of the illumination, the direction of the light in the scene. So again this shows how we can use variational autoencoders to do image editing through these latent codes. And this is really the payoff: variational autoencoders took a lot of ugly math, and they're conceptually more complicated than the autoregressive models, but we went through all that additional work so that we could learn these useful latent codes for images in addition to solving the sampling problem. I think that's most of what I want to say about variational autoencoders. In summary, they're a probabilistic spin on traditional autoencoders, a principled approach to generative modeling, and they're powerful because they learn distributions over latent codes from the data itself. One downside is that they don't actually maximize the data likelihood, only a lower bound on it, so all the probabilistic machinery is approximate. Another problem is that the generated images often tend to be a bit blurry, and I think that has to do with the diagonal-Gaussian assumptions we make about the data. So far we've seen two types of generative models: autoregressive models that directly maximize the probability of the data and give us pretty high-quality, sharp images.
But autoregressive models are slow and don't give us latent codes. And we've seen variational autoencoders, which maximize a lower bound: the images are kind of blurry, but generation is very fast, just a forward pass through a feedforward network, and they learn these very rich latent codes, which is very nice. So is there some way to get the best of both worlds and combine the autoregressive models with the variational models? This is a bit of a teaser, and I don't want to go into too much detail, but there's a very cool paper, to be presented at a conference next month, that does exactly this. It's called the vector-quantized variational autoencoder, VQ-VAE-2, and the idea is to combine the strengths of variational autoencoders and autoregressive models. On the left, we first train a variational-autoencoder-style model that learns a grid of latent feature vectors: it looks like training a variational autoencoder, but rather than a single latent vector, we learn a latent grid of feature vectors. On the right, once we've learned that latent grid, we use a PixelCNN as an autoregressive model that operates not in raw pixel space but in the latent code space: it samples a latent code, then, conditioned on the predicted code, steps to the next element of the grid, samples the next latent code, and so on. This speeds up generation a lot, and the hope is that it combines the best of both worlds between variational autoencoders and PixelCNNs. This model gives amazing results; what follows are generated images.
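The "vector-quantized" part can be illustrated with a tiny nearest-codebook lookup (my own sketch of the core VQ operation, not the paper's implementation): each continuous feature vector in the latent grid is snapped to its nearest entry in a learned codebook.

```python
def quantize(feature, codebook):
    # Vector quantization: replace a continuous feature vector with its nearest
    # codebook entry under squared Euclidean distance, as in VQ-VAE's latent grid.
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(codebook)), key=lambda k: sqdist(feature, codebook[k]))
    return best, codebook[best]
```

The returned index is what makes the latent space discrete, which is exactly what lets a PixelCNN model the latent grid autoregressively, one codebook index at a time.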
These are 256-by-256 images generated by this vector-quantized variational autoencoder model, and it's actually a conditional generative model: it's conditioned on the class it's trying to generate. The model is super successful: it generates really high-quality, high-resolution 256-by-256 images even when trained on large-scale, complicated datasets like ImageNet, so I think this is a pretty exciting direction for future research in generative models. Where this model works really, really well is on human faces. These are generated faces: not real people, but fake people invented by this vector-quantized variational autoencoder model working at an extremely high resolution of 1024 by 1024. It can model people with crazy hair colors, it can model facial hair in a lot of detail; these are also generated faces from the model, and it's kind of astounding to me just how well it models the very complicated structure of people's faces. So personally I'm pretty excited about this as a possible future direction for generative models, but like I said, this paper will be presented at a conference next month, so it's up in the air whether it becomes the next big thing. This was just meant as a sneak peek at the state of the art in autoregressive and variational models. So where we are so far in generative models: we've seen autoregressive models that directly maximize the likelihood of the training data, and we've seen variational autoencoder models that give up on directly maximizing the likelihood and instead maximize a variational lower bound, which lets them learn latent codes jointly while optimizing that bound. Now we need to talk about another big category of generative models: generative adversarial networks, or GANs. These are a very different idea. Here we completely give up on explicitly modeling the density function over images: we no longer care about computing the density, or even a lower bound or approximation to it. With a generative adversarial network, the only thing we care about is being able to sample data from the distribution; we don't care about being able to write down the likelihood of our training data. So how do we do that? The setup is that we assume we have some training data x_i, a finite sample drawn from some true distribution p_data. So p_data(x) is the true probability distribution of images in the world, the density function of nature; there's no way to evaluate this density or write it down, but we assume the natural images in our dataset were sampled from it. What we want is to learn a model that lets us draw new samples from p_data, without caring about evaluating likelihoods. The way we'll do this is that, like a variational autoencoder, we introduce a latent variable z, but the way we use z is going to be a bit different. Just like in
variational autoencoders, we assume a latent variable z with some fixed prior p(z), which can be something simple like a uniform distribution or a diagonal Gaussian. We sample a latent variable z from the prior and pass the sample through a function G, called the generator network, which outputs a sample of data x. The generator network G implicitly defines some probability distribution over images that we'll call p_G. It's difficult to write down that density exactly (you'd have to use something like the change-of-variables formula for probability distributions), but because we sample a latent z from the prior and pass it through the generator to get a data sample x, the generator implicitly defines this distribution p_G over data samples. We can't explicitly evaluate p_G, but we can sample from it: just sample from the prior and pass the sample through the generator. What we want is to train the generator network G so that p_G, the distribution implicitly modeled by the generator, equals the true data distribution p_data. Pictorially: draw a sample z from the prior p(z), feed it to the generator network G, and get a generated sample. So the generator's job is to turn a sample from p(z) into a sample from p_G, but we need some mechanism to force p_G to end up close to p_data. To do that, we introduce a second neural network called the discriminator network. The discriminator performs an image-classification task: it inputs images and tries to classify them as real or fake. It's trained both on samples from the generator and on real samples from the dataset, which makes it a supervised learning problem: we have samples from the generator that we know are fake and samples from the real data that we know are real, and the discriminator is trained on this binary classification task. Now we train the two networks jointly: the generator tries to fool the discriminator. The discriminator is trying to learn to classify images as real or fake, and the generator is trying to get its images classified as real. Intuitively, the two networks are fighting each other: the discriminator tries to learn all the ways the generator's images look fake, and the generator tries to learn how to get its images to pass as realistic. The hope is that if both networks get really good at their jobs, then p_G will converge to p_data, and samples from the generator will end up looking a lot like samples from the real data. That's the intuition behind generative adversarial networks. More concretely, the loss function we use to train a generative adversarial network is the following minimax game between G and D. There's a big hairy objective function in the middle that we'll go through piece by piece: the discriminator D tries to maximize all the terms in this objective, and the generator G tries to minimize them. We can color-code it against our previous picture to make each term easier to understand, and look at the terms one by one. The first term is an expectation over x drawn from p_data, which we can approximate by averaging over the real data samples in our training set. The discriminator tries to maximize log D(x); D(x) is a number between zero and one, and log is a monotonic function, so in maximizing this term the discriminator is trying to get the real data classified as real, pushing its output on real data toward one. The generator is trying to minimize the objective, but this term doesn't depend on the generator at all, so the generator doesn't care about it: this term just says the discriminator should correctly classify real data as real. The second term, which the discriminator also tries to maximize, is an expectation over latent variables z drawn from the prior p(z): given a sample z from the prior, we pass it through the generator to get a fake sample, then pass that to the discriminator, which outputs a number between zero and one. Log of one minus something is maximized when the something is minimized, so the discriminator maximizing this term is trying to drive D(x) toward zero when x is fake data: classify fake data as fake, as a binary classification problem. But we can also look at this same term from the generator's perspective. The generator is trying to minimize the whole objective, and minimizing this term means the generator is trying to adjust itself so that its generated samples are classified as real by the discriminator. That gives us the training objective for this minimax game. The idea is to train it with alternating gradient steps, jointly training the generator and the discriminator, one trying to maximize the objective and the other trying to minimize it. For notational convenience, write the whole messy expression as V(G, D). Then we run in a loop: on each iteration, first update the discriminator by computing the derivative of V with respect to the discriminator weights; since the discriminator is trying to maximize the objective, we do gradient ascent, taking a step in the direction of the gradient. Then compute the gradient of the objective with respect to the generator weights; the generator is trying to minimize, so take a gradient descent step to update G. We just update these two one after another, looping and hoping things work out.
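The minimax value V(G, D) can be sketched numerically (plain Python; batch averages over discriminator outputs stand in for the two expectations):

```python
import math

def gan_objective(d_real, d_fake):
    # V(G, D) = E_{x ~ p_data}[log D(x)] + E_{z ~ p(z)}[log(1 - D(G(z)))],
    # approximated by averaging over a batch of discriminator outputs.
    # d_real: D's outputs on real images; d_fake: D's outputs on generated images.
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake
```

A strong discriminator (outputs near 1 on real data, near 0 on fake data) drives V toward 0, its maximum; when D outputs 1/2 everywhere, V equals -log 4, the same constant that falls out of the optimal-discriminator derivation later in the lecture.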
But it turns out there's actually a problem. Normally when you train neural networks, you can just watch the loss go down and know that everything is working well. That's not the case at all for generative adversarial networks, because the generator and the discriminator each have their own loss, and the two losses depend on each other in complicated ways: for example, if the discriminator is really good and the generator is really bad, the discriminator will have low loss and the generator will have high loss. So when you train GANs, the losses usually don't just go down; if you plot them, they're all over the place, and you can't really gain any intuition by looking at the loss curves. Training generative adversarial networks tends to be a pretty tricky process, and I don't know if I can give you that much good advice on how to do it properly; suffice to say it's challenging. And there's another problem. We can plot the term log(1 - D(G(z))) as a curve, with D(G(z)) on the x-axis and log(1 - D(G(z))) on the y-axis, and think about what happens at the very start of training. At the start, the generator is probably producing random garbage, and it's very easy for the discriminator to tell real data from random garbage; the discriminator will usually get that within a couple of gradient steps. So at the very beginning of training, D(G(z)) is close to zero, because the discriminator is really good at catching the fake data. But if D(G(z)) is close to zero, we're on the part of this curve indicated by the red arrow, where the gradient is flat, so the generator gets almost no gradient: we have a vanishing gradient problem at the very beginning of training. That's bad. To fix it, in practice we train the generator on a different objective: rather than minimizing log(1 - D(G(z))) as in the raw formulation written up here, the generator instead minimizes -log(D(G(z))), which is the same as maximizing log(D(G(z))). (In the lecture I briefly said "maximize minus log D(G(z))" and a student caught the slip; minimize is correct.) This still has the same interpretation, getting the generator's data classified as real, but it's realized in the objective differently: if we plot -log(D(G(z))), the generator gets good gradients at the beginning of training, exactly where D(G(z)) is near zero. So this is how we train generative adversarial networks in practice: the discriminator tries to classify data as real or fake, and the generator tries to get its data classified as real by the discriminator, but the exact objectives the two optimize are slightly asymmetric, just to account for the vanishing gradient problem.
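We can check the vanishing-gradient claim numerically; these are just the derivatives of the two generator losses with respect to the discriminator's output d = D(G(z)) (a sketch, ignoring the chain rule back through the networks):

```python
def saturating_grad(d):
    # d/dd of log(1 - d): the original (saturating) generator loss log(1 - D(G(z))).
    return -1.0 / (1.0 - d)

def nonsaturating_grad(d):
    # d/dd of -log(d): the non-saturating generator loss -log(D(G(z))).
    return -1.0 / d
```

Early in training d is near 0: the saturating loss has gradient magnitude near 1, while the non-saturating loss has magnitude near 1/d, so the generator gets a much stronger learning signal from the modified objective.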
So we have this intuition that the generator is trying to fool the discriminator, and there's a question of why this particular objective is a good way to accomplish that goal. It turns out this particular minimax game achieves its global minimum exactly when p_G equals p_data, and to see that we need a little math. Here's our objective so far (ignoring the fact that in practice the generator optimizes a slightly different objective; we'll pretend both optimize this one). First we do a change of variables on the second expectation: rather than writing it as an expectation over z drawn from the prior, we write it as an expectation over x drawn from p_G, the distribution the generator implicitly models. Next we expand both expectations into integrals using the definition of expectation. If all our functions are well behaved, we can exchange the order of the max and the integral, pushing the max inside the integral. Now we want to compute that max: the integral is over all x, so we want the optimal value of the discriminator for each possible value of x. With a little side computation, we can recognize the expression inside the max as a function of the form f(y) = a log y + b log(1 - y), where a = p_data(x), b = p_G(x), and y = D(x). Taking the derivative of f and setting it equal to zero, we find that f has a local maximum at y = a / (a + b). Plugging that back in gives us the optimal discriminator, the one achieving the maximum inside the integral, and it depends on the generator: the optimal discriminator D*_G for generator G takes the value D*_G(x) = p_data(x) / (p_data(x) + p_G(x)). It's important to point out that although we can derive this optimal value, we can't actually evaluate it, because it involves p_data(x), which we already can't evaluate, and p_G(x), which we also can't evaluate. So it's a nice mathematical formalism: we know the value the optimal discriminator must take, but we can't compute that value, because it involves terms we can't compute. What we can do is substitute the optimal discriminator back into the objective, which eliminates the inner maximization: we've performed the inner max over the discriminator, so we plug its optimal value into every term of the integral. This is getting messy, so we push it up and use the definition of expectation to rewrite the integral back as a pair of expectations: one is an expectation of x over p_data of the log of this ratio, and the other is an expectation of x over p_G of the log of that ratio, again just using the definition of expectation.
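We can verify the D* = a/(a+b) result numerically by checking that it maximizes f(y) = a log y + b log(1 - y) (a plain-Python sketch of the side computation above):

```python
import math

def optimal_discriminator(p_data_x, p_g_x):
    # D*_G(x) = p_data(x) / (p_data(x) + p_g(x)): the maximizer of
    # f(y) = a*log(y) + b*log(1 - y) with a = p_data(x), b = p_g(x).
    return p_data_x / (p_data_x + p_g_x)

def f(y, a, b):
    # The per-x term inside the integral, viewed as a function of y = D(x).
    return a * math.log(y) + b * math.log(1.0 - y)
```

Nearby values of y give a smaller f than y* = a/(a+b), consistent with the derivative-equals-zero calculation.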
Now we need a little algebraic manipulation: multiply by a constant, pull it out, and pull out a log 4, and we end up with a particular form. This is getting messy, so let's push it up again. If you've taken enough information theory, you can recognize an important term here: there's this thing called the Kullback-Leibler divergence, or KL divergence, which measures a kind of distance between two probability distributions, and we've actually got two KL divergence terms sitting right here. By definition, the KL divergence between a distribution p and a distribution q is the expectation, over x drawn from p, of the log of the ratio between them, and we can see two KL divergence terms inside these two expectations. So we can rewrite this as two KL divergences: one between p_data and the average of p_data and p_G, the other between p_G and that same average distribution, with the log 4 still hanging around. Now we can recognize another fact from information theory: there's a quantity called the Jensen-Shannon divergence, yet another way to measure distance between probability distributions, which is defined in terms of the KL divergence, and we've got exactly a Jensen-Shannon divergence sitting right here in this equation. So we can simplify even further and write the whole objective as just the Jensen-Shannon divergence between p_data and p_G, minus log 4. This is actually quite interesting: we took this minimax objective that we were trying to minimize and maximize, reshuffled things, and actually computed the maximum with respect to the discriminator.
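For discrete distributions, the KL and Jensen-Shannon divergences can be sketched directly, and we can check the facts the argument relies on (JSD is non-negative, and zero exactly when the two distributions are equal):

```python
import math

def kl(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i); terms with p_i = 0 contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q)/2.
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

With natural-log KL, the JSD of two disjoint distributions is log 2, its maximum; scaling by 2 and subtracting log 4 recovers exactly the objective's range.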
respect to the discriminator and then we boiled this all down so now we just need to fight and then this whole objective reduces to the minimum of the jensen shannon divergence between the true data distribution p data and the implicit distribution the generator is modeling pg minus this constant log four and now there's an amazing fact about the jensen shannon divergence that i'm sure you're all aware of is that the jensen shannon divergence is always non-negative so it's always greater than equal to zero and in fact it only achieves zero when the two distributions are equal so that means that the that now this whole expression we were trying to minimize find the generator that minimizes this expression and it turns out that the unique minimizer of this expression occurs when p data is equal to pg qed right so that means that um the optimal so the unique that means that the global solution the global the global minimizer of this whole objective happened so then kind of summarizing this we kind of rewrote this whole thing as this minimum as this this minimization function now the summary of all this is that the overall global minimum of this minimax game happens is that when the discriminator assigns this particular value this is this particular ratio um to all of to any data sample and then when the when the generator just models directly the true data distribution so that's kind of the beautiful math that underlies why generative adversarial networks have the potential to work and why training with this midi max objective actually has the capacity to cause the generator to learn the true data distribution but of course there's a lot of caveats here right so that um this this is sort of a proof that makes us feel good but there's some holes in this when it comes to applying this proof in practice so one is that in fact um we've kind of done this minimization assuming that g and d can just represent any arbitrary function but in fact g and d are represented by 
neural networks with some fixed, finite architecture, and we're only allowed to optimize the weights. So it's possible that the optimal generator and the optimal discriminator just might not be within the space of expressible functions for our generator and discriminator; the proof doesn't actually tell us whether fixed architectures can represent these optimal solutions. It also doesn't tell us anything about convergence: it does not tell us whether we can actually converge to this solution in any meaningful amount of time. So I think this proof is nice to be aware of, since it shows us that we are hopefully learning the true distribution, but there are a lot of caveats left. Okay, that's hopefully enough math for one lecture, so let's look at some pretty pictures. Here are some results from the very first paper on generative adversarial networks, back in 2014. You can see that back in 2014 we were able to generate GAN samples that could reproduce faces to some extent, and reproduce these handwritten digits to some extent. For comparison, we're showing the nearest neighbor in the training set for each of these generated samples; the fact that the nearest neighbor is not exactly the same as the generated image means that this model is not just regurgitating training samples, and that it's hopefully learning to generate new samples that look like plausible samples from the training set. Okay, so this was the beginning of generative adversarial networks, but this was 2014, five years ago, and this is a fast-moving field, so we've had a lot of advancements since then. The first really big successful result in generative adversarial networks was the so-called DCGAN architecture, which used a five-layer convolutional network for both the generator and the discriminator, and they got this
thing to train much better than some of the original papers, and some of the generated samples from DCGAN ended up looking quite nice. Here what we're doing is training DCGAN on a dataset of photos of bedrooms, and then sampling new photos of bedrooms from the trained DCGAN model. You can see that these generated samples are actually quite complicated: they're capturing a lot of the structure of bedrooms, with beds and windows and furniture and a lot of interesting structure being captured by this generative model. What's even cooler about these networks is that we can do interpolation in the latent space. Remember that a generative adversarial network takes a latent variable z and passes it to the generator to generate a data sample x. So what we can do is sample one z over here and another z over here, linearly interpolate a bunch of z's in between, and then feed each of those linearly interpolated z's to the generator to generate interpolated images along this path in latent space. Each row in this figure shows an interpolation in latent space between one bedroom on the left and a different bedroom on the right, and you can see that the images somehow continuously morph into each other in a really non-trivial way. That suggests this adversarial network has learned something really non-trivial about the underlying structure of bedrooms: it's not just doing an alpha-transparency blend of the two images, it's learning to warp the spatial structure of those images into each other. Another really cool thing we can do with generative adversarial networks is a kind of vector math on these latent vectors. What we can do is sample a bunch of samples from the network and then manually categorize them into a couple of different categories. Here on the left we've got a bunch of samples of smiling
women, and they do look kind of like smiling women if we look at the generated images; in the middle we've got non-smiling women, and on the right we've got non-smiling men. For each of these data samples we have the latent vector which generated it, so for each of these columns we can compute the average latent vector along the column and then feed that average latent vector back through the generator to generate a kind of average smiling woman, an average neutral woman, and an average neutral man according to this trained model. And now we can do vector math. What happens if we take a smiling woman, subtract a neutral woman, and then add a neutral man? A smiling man, there we go. Then you could sample some new vectors around that smiling-man vector and get other smiling-man images. Or we could do something similar with man with glasses, minus man without glasses, plus woman without glasses; what are we going to get? A woman with glasses, there we go. So somehow these generative adversarial networks let us do a kind of semi-interpretable vector math in latent space, which is really cool. This was in 2016, and I think after this paper people got really, really excited about generative adversarial networks and the field went crazy. This is a graph showing the number of GAN papers as a function of year, from 2015 to 2018, and you can see that the number of GAN papers being published is just exploding at a ridiculous rate. There's a website called the GAN Zoo where they try to keep track of all the different papers that are being published about GANs. Here I took a screenshot of the GAN Zoo; this goes through B, and they're alphabetized, so the GAN Zoo has captured hundreds and hundreds and hundreds of research papers that are being written about GANs. So there's no way that we can possibly talk about all the advancements in GANs since 2016.
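The latent-space interpolation and vector-arithmetic tricks described above boil down to simple operations on z vectors. Here is a minimal numpy sketch; the `generate` function is a hypothetical stand-in for a trained generator (any map from z to an image), not part of any real library:

```python
import numpy as np

def interpolate_latents(z_start, z_end, n_steps):
    """Linearly interpolate between two latent vectors (endpoints included)."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * z_start + a * z_end for a in alphas]

def latent_arithmetic(pos, neg, base):
    """e.g. smiling_woman - neutral_woman + neutral_man -> 'smiling man' vector."""
    return pos - neg + base

def average_latent(latents):
    """Average the latent vectors behind one manually labeled column of samples."""
    return np.mean(np.stack(latents), axis=0)

# Hypothetical generator: a real GAN maps z to an image; tanh is a placeholder.
generate = lambda z: np.tanh(z)

rng = np.random.default_rng(0)
smiling_woman = average_latent([rng.normal(size=64) for _ in range(3)])
neutral_woman = average_latent([rng.normal(size=64) for _ in range(3)])
neutral_man = average_latent([rng.normal(size=64) for _ in range(3)])

# One row of an interpolation figure, plus the "smiling man" vector-math example.
row = [generate(z) for z in interpolate_latents(smiling_woman, neutral_man, 8)]
smiling_man = generate(latent_arithmetic(smiling_woman, neutral_woman, neutral_man))
```

Sampling a few new z's near `latent_arithmetic(...)`'s output (z plus small noise) is what produces the "other smiling man images" mentioned above.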
But I wanted to try to hit a couple of the highlights. One is that we've got improved loss functions for GANs now. There's an improvement called the Wasserstein GAN, which changes the loss function that we use for training GANs, and you can see that when we use this Wasserstein loss function, the generated samples tend to be a little bit better. Another thing we've gotten better at is improving the resolution of images with GANs. Here are some samples from a model called Progressive GAN, which was published just last year, in 2018. On the left we're showing 256 by 256 generated images of bedrooms, on the same bedroom dataset that we've been working with; these are fake images of bedrooms, and I would stay there if that was on Airbnb. Those look pretty good. On the right we're seeing high-resolution 1024 by 1024 generated faces from this Progressive GAN architecture. But of course that was 2018 and we're in 2019, so things have gotten even better since then. The same authors behind Progressive GAN wrote a new one called StyleGAN, which was published just this year, in 2019, and which also pushes toward higher resolution. Here are some results of StyleGAN generating images of cars, which look pretty realistic to me, and on the right are again 1024 by 1024 generated faces using this StyleGAN model. Now, what's really cool is that we saw GANs could be used for interpolation in latent space, and we can apply interpolation in latent space to these high-resolution faces generated by StyleGAN. You can see that by continuously moving the latent vector z in latent space, the generated faces are kind of continuously deforming into each other. The fact that the
transitions between the faces are so smooth gives us a very strong indication that this model is not memorizing the training data; it seems to be learning some important structure of the generated faces, because otherwise there's no way it could possibly interpolate between them in such a smooth way. So that's early-2019 GANs. Okay, another thing we might want to do is conditional GANs. All of the samples we've seen so far have been unconditional: we train on a dataset and then just sample to get new images from that dataset. But what we might want is more control over the types of images that are generated from GANs. To do that, we can use a conditional generative model, and model the distribution of the image x conditioned on some label y. The way that we do that is to change the architecture of our generator to input both the random noise z and the label y in some way, and the particular way that we tend to input label information into GANs these days is a trick called conditional batch normalization. Recall that in batch normalization we normalize the data and then apply a learned scale and shift, gamma and beta. In conditional batch normalization, we instead learn a separate gamma and beta for each category label y that we want the model to handle, so the way we input the label y into the generator is just by swapping in the gamma and beta that we learned separately for that class. This seems like kind of a weird trick, but it actually seems to work quite well for fusing label information into GANs. So then, once we have this trick of conditional batch normalization, we can train conditional GANs. This is an example of a conditional GAN
model which was trained on ImageNet, but now, rather than just inputting random noise, we actually tell the generative model which category we want it to generate. So on the left we have generated Welsh springer spaniels, in the middle we have generated fire trucks, and on the right we have generated daisies, and all these images are generated from the same model, but we control which category it generates by feeding different y's to the model. This paper also introduced a new normalization method called spectral normalization, which we can't get into. We've seen self-attention be really important for different types of applications throughout the semester; we saw this in transformers, and we saw it in other contexts as well, and it turns out that self-attention is also useful for GANs. If we put self-attention into these big GAN models, then we can train even better conditional GAN models on ImageNet. Again, these are all conditional samples from the same model, where we tell the generator which category we want it to generate at test time. And now here, I think, is the current state of the art in GAN technology: the so-called BigGAN paper from Brock et al., which was just published earlier this year, in 2019. These are again conditional samples, generated images from a conditional GAN model that was trained on ImageNet, and now these are 512 by 512 images that are all generated from the same model, where we tell the generator which category we want to generate at test time. So I think if you want to understand all the latest and greatest tricks to get your GANs to work really well, this is the paper to read right now. Then of course GANs don't have to stop with images; there's some initial work on generating videos with GANs. Here on the left are some generated videos from GANs where we're generating 48 frames of 64 by 64 images using some
kind of GAN model, and on the right we're generating 128 by 128 images, but only 12 frames. So I think this is maybe the next frontier in GAN technology; hopefully we'll come back in 2020 and be able to see even more beautiful videos like this. Then it turns out people want to use GANs to condition on more types of information than just labels, so there's been work where we want to train models of p(x given y), where y is not just a category label but some other type of information. That y can be a whole sentence: there's work that inputs a sentence and then outputs an image using some kind of conditional GAN model. We can also have that conditioning variable y be an image itself. One example is image super-resolution: we input a low-resolution image as the conditioning variable y, and then have the model output a realistic high-resolution image as the output x. Here the bicubic column shows the low-resolution input y, and the GAN generator then outputs this high-resolution upsampling of the image. We can also do image editing with GANs: we can train GANs that convert between different types of images, for example converting Google Street View photos into map images, or converting semantic label maps into real images, or converting sketches of handbags into real handbags, and we can do all of these with some kind of conditional GAN formulation. A really famous example of this is the so-called CycleGAN work, which is actually able to train these translations in an unpaired way, which I think we don't have time to get into. But what's really cool is they can train GAN models that convert images of horses into images of zebras using some kind of conditional GAN formulation. So here the input y is the image of a horse and the output x is the
image of a zebra, and they found a very clever way to train this thing even when we don't have paired couples of zebra images and horse images. There's also work on converting label maps to images. Here there are sort of two inputs y: one is the layout of the scene that we want, on the top, where the blue is the sky, the green is the grass, and the maroon is the clouds; and on the left is a second input y that gives the type of artistic style that we want the image to be rendered in. Then we can train a GAN model that generates images which match the layout given by the semantic map but also match the artistic style of the input style images on the left. So there's just a whole wide world of work on different types of models that we can build with GANs. Then I'd also like to point out that GANs are not just for images; you can actually use GANs for generating any type of data, really. This is a paper that I did last year where we wanted to use GANs to generate predictions of where people might want to walk in the future. The input to the model is some history of the previous few seconds where a group of people are walking, and what it tries to predict is where the people will walk going into the future. We can train this as some kind of conditional GAN model where the conditioning variable y is the past, where people have walked, and the generated data x is the future, where they will walk, and this needs to be realistic as judged by the discriminator. So the summary of GANs is that we're jointly training these two networks, the generator and the discriminator, and that under some assumptions the generator learns to capture the true data distribution. And if we kind of zoom out to this taxonomy of generative models, at this point we've seen
three very different flavors of generative models with neural networks. We've seen autoregressive models that directly maximize the likelihood of the data; we've seen variational models that jointly learn latent variables z together with the data x and maximize a variational lower bound; and we've seen generative adversarial networks, which give up entirely on modeling p(x) and instead just learn to draw samples. And these GAN models, as we've seen, have tons and tons of applications, and they can be used to generate really, really high-quality images. So that's pretty much all we have to say about generative models. Next time we'll talk about mechanisms for dealing with non-differentiability inside your neural network models; that will lead us to some discussion of stochastic computation graphs, and I think we'll also touch a little bit on reinforcement learning as well. So come back for that, and hopefully get started on your homework.
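For reference, the key identities from the theory portion of this lecture can be written compactly (this is the standard GAN analysis, with G and D assumed unconstrained, as in the proof discussed earlier):

```latex
% Optimal discriminator for a fixed generator:
D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_G(x)}

% KL and Jensen-Shannon divergences:
\mathrm{KL}(p \,\|\, q) = \mathbb{E}_{x \sim p}\!\left[\log \frac{p(x)}{q(x)}\right]
\qquad
\mathrm{JSD}(p, q) = \tfrac{1}{2}\,\mathrm{KL}\!\left(p \,\middle\|\, \tfrac{p+q}{2}\right)
                   + \tfrac{1}{2}\,\mathrm{KL}\!\left(q \,\middle\|\, \tfrac{p+q}{2}\right)

% Plugging D^* back in, the minimax objective collapses to:
\min_G \max_D V(G, D) = \min_G \left[\, 2\,\mathrm{JSD}(p_{\text{data}}, p_G) - \log 4 \,\right]

% JSD >= 0, with equality iff the two distributions match, so the unique
% global minimum is attained at p_G = p_data.
```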
Deep Learning for Computer Vision
Lecture 4: Optimization
So welcome back to lecture four; today we're going to talk about optimization. It seems like the mic in the room doesn't have a clip, so I'm just going to talk loudly; hopefully everyone can hear me in the back. Okay, good. As you recall, over the last couple of lectures we've been developing this idea of linear classifiers. First we talked about how we can use linear classifiers to solve the image classification problem, and how we can view linear classifiers from three different perspectives: algebraic, visual, and geometric. In the last lecture we then also talked about how we can use loss functions to quantify our preferences over different values of the weights when we're using linear classifiers. In particular, we talked about the softmax and SVM loss functions, which impose different preferences over the sets of weights that we're going to use in our linear classifiers. We also talked about the use of regularization, which can likewise express some preference over which values of the weights we prefer and don't prefer, with the intuition that regularization helps us generalize beyond the training set and build classifiers that are simpler and thus generalize better to the test set. But at the end of last lecture we were left with this open question: given some linear classification setup and given a loss function, how do we actually go about finding a value of this weight matrix W that minimizes that loss? How do we actually fit our linear classifier to the data that we have in our training set? Well, that's going to be the topic of today's lecture. Broadly, this question of finding weight matrices that minimize a loss function is the topic of optimization. The general framework here is that we have some loss function L(W) that's going to input our weight matrix and output a scalar loss. In the last lecture we talked about the internal
components of that loss, but for the purposes of today we're mostly going to abstract all of that away and just think about the loss function as an abstract function that inputs the weight matrix and outputs the scalar value of the loss. Now, during the process of optimization, what we want to do is find the value of the weight matrix, W*, that minimizes the loss function. This is why this topic of general optimization is indeed very general: it's a very broad topic across all of mathematics and applied computational mathematics. There's no way we can cover all possible pieces of optimization in one lecture, so instead we're going to focus on the bits that will be most salient for training large neural network models as we move forward in the semester. That's the formal way that we write down an optimization problem, but we often think about it intuitively as traversing a very large, high-dimensional landscape. I often think about this process of optimization as trying to walk toward the bottom of some beautiful landscape, where the intuition is that each (x, y) point on the ground is a different value of our weight matrix W, and the height of that point on the ground is the value of the loss function L(W). Now, during the process of optimization, we're some little blindfolded person, with no eyes if you look at the figure on the slide, who doesn't actually know which direction leads to the bottom of this valley. Somehow, despite having no eyes, we need to explore around this high-dimensional landscape and find the point at the very bottom of the optimization landscape, which looks like a nice place to visit; maybe we'd like to get to the bottom somehow. So then the question is: how do we actually get to the bottom?
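The formal problem statement mentioned above is simply:

```latex
W^* = \arg\min_{W} L(W)
```

where L is the full training loss from last lecture (data loss plus regularization), treated here as a black-box function of the weights.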
One thing we might be able to do, in some special situations, is to just write down the answer. For certain types of optimization problems, like linear regression if you're familiar with it, we might be able to write down an explicit equation for the bottom of this objective landscape. But in general that's not going to work, so we're going to use iterative methods instead, to iteratively improve our solutions and move toward the bottom of this objective landscape. The first iterative algorithm you might imagine for optimizing an objective landscape, which by the way is a terrible idea but is instructive to think about, is random search. Here what we might try to do is simply generate many, many random values of the weight matrix, then evaluate the loss of these different random matrices on the training set, and just keep track of the overall lowest value of the loss that we find over this random search procedure. If we've abstracted away our loss function into a single function, like we've been talking about, then such a procedure can be implemented in just a couple of lines of Python. Here we're using random search to train a linear model on the CIFAR-10 dataset, and after running this for some amount of time we're actually able to get something like 15.5 percent accuracy, which, you know, is not bad, because this is a pretty stupid optimization algorithm; even random search is able to make some kind of non-trivial progress on this optimization problem. But of course on this dataset the state of the art is something like 95 percent, so we've got a bit of a gap to close in the rest of this lecture. So this idea of random search is really a stupid algorithm that you probably don't want to
use in practice, but it maybe doesn't work quite as badly as you might think, and that motivates the use of slightly smarter algorithms than random search. Idea number two is what we will actually do in practice: follow the slope of this landscape downhill. Here the idea is that even though this little man walking around the objective landscape doesn't have eyeballs, he can feel around on the ground with his feet and sense which direction the ground is sloping in a local region around where he's currently standing. Given only this local information about how the objective landscape is changing in a local neighborhood of our current point, our strategy is to simply step a little bit in the direction of greatest decrease, and then we can repeat this over and over, and hopefully this will lead us toward some lower point in the objective landscape. This is a fairly simple, straightforward algorithm, but it actually works quite well. To make this a little bit more formal, we need to talk about derivatives. Recall that for a scalar-valued function that inputs a single scalar and outputs a single scalar, we can define the derivative, which tells us the slope at any point in the domain of the function: for any point, if we change x by a little bit, how much does y change in correspondence? That's the familiar derivative from single-variable calculus, and of course it extends very naturally to multiple dimensions. For a function that inputs a vector and outputs a scalar, which is what we're talking about in this optimization setup, we can
define the gradient, where the gradient at a point is now a vector telling us the direction of greatest increase in the function, and if you recall your vector calculus, you'll know that the magnitude of the gradient tells us the slope in the direction of greatest increase. It's also a nice fact that the direction we actually care about, the direction of greatest decrease, since we want to walk downhill on this objective landscape, turns out to be exactly opposite the direction of greatest increase, which is maybe not obvious but happens to be true. So we can simply step in the direction of the negative gradient, and this will lead us downhill through the objective landscape. So then the question is how we might actually go about computing these gradients for arbitrary functions that input a vector and output a scalar. Well, we can actually implement the limit definition of the gradient directly in software. You could imagine that we have some current value of the weight matrix W, here on the left shown as a big vector, and now on the right we want to compute the gradient of the loss with respect to the weight matrix. Recall that the loss is a single scalar value, so the gradient of the loss with respect to the weight matrix will be a vector that has the same shape as the weight matrix itself, where each slot in that gradient vector will tell us the slope: how the loss changes if we change the corresponding element of the weight matrix by just a little bit. So we could just implement this limit definition of the gradient numerically: given our function that computes the loss, we compute the loss at our current W, then we perturb the value of W by increasing the first slot of the weight matrix by a small step h, and now run that
perturbed value of W through our loss function to compute a new loss, and then apply the limit definition of the derivative to compute an approximation to the first slot of the gradient. Remember, it's rise over run: we look at the difference in loss value divided by the amount by which we changed that slot of the input weight matrix W. That gives us an approximation to the slope of the function along the coordinate axis of the first slot of the weight matrix. And of course we can repeat this procedure for the second dimension: perturb the second slot, recompute a new value of the loss, and then again apply the limit definition of the derivative to get a numeric approximation to the second slot of the gradient; then we repeat for the third slot, et cetera, et cetera. This is called the numeric gradient, and it's a reasonable thing to think about, but the problem is that it's very slow. In this particular example our weight matrix W has maybe 10 or 12 slots, I don't remember exactly how many I wrote down, so this would actually not be super expensive to compute. But as we move to larger neural-network-based systems, our weight matrices will become very, very large; they might have thousands or millions or tens of millions, or in the biggest cases perhaps even billions, of learnable parameters. Now, in order to compute a gradient using this numeric approximation, this limit definition, we need to compute a separate forward pass of our loss function once for each slot of the weight matrix, so this becomes very impractical as we move to very high-dimensional optimization problems. It's also approximate, because we're relying on a finite-differences approximation to the full true gradient. So the numeric gradient is not only slow, it also doesn't compute exactly the right thing, but it's still useful to think about in terms of what
is actually going on when we try to compute gradients in these large-scale systems. Okay, so now at this point you might have thought that this was kind of a stupid thing to have done, because we've all hopefully taken some vector calculus, since I think I put that down as a prerequisite for the class, and you know that you could just write down the loss function as an equation and then derive the gradient of this loss function with respect to the weight matrix W. If you're familiar with all the proper rules for manipulating these vector and matrix equations, then we can get some help from these guys and just write down an expression that directly defines the gradient. Do you know who these guys are? Newton and Leibniz; do you know which one is which? Yes, okay; I think people all just looked kind of funny back in those days. So what we want to do is use our knowledge of vector calculus to write down an analytic expression for the exact gradient as a function of the weight matrix W. Back to our running picture: we have our current weight matrix W, we somehow derive this expression for the derivative of the loss with respect to the weight matrix, and that allows us to compute the gradient dL/dW in a single operation, a single pass, which no longer requires a number of forward passes linear in the number of dimensions of the weight matrix. The details of exactly how we will derive these gradient expressions in practice will often rely on backpropagation, which we'll talk about in much more detail in lecture six, a week from today. For the time being you can just fall back on your knowledge of calculus.
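To make the cost difference concrete, here is a sketch contrasting the two approaches on a toy loss L(w) = sum(w^2), whose analytic gradient is 2w. Note that the numeric version needs one extra loss evaluation per slot of w, while the analytic one is a single expression:

```python
import numpy as np

def numeric_gradient(loss_fn, w, h=1e-6):
    """Finite-difference approximation: perturb each slot of w by h in turn."""
    grad = np.zeros_like(w)
    base = loss_fn(w)
    for i in range(w.size):                        # one extra forward pass per dimension
        w_perturbed = w.copy()
        w_perturbed.flat[i] += h                   # bump one slot by a small step h
        grad.flat[i] = (loss_fn(w_perturbed) - base) / h   # rise over run
    return grad

loss = lambda w: float(np.sum(w ** 2))             # toy loss with known gradient 2w
analytic_gradient = lambda w: 2.0 * w              # exact, computed in a single pass

w = np.array([1.0, -2.0, 3.0])
g_numeric = numeric_gradient(loss, w)              # approximately [2, -4, 6]
g_analytic = analytic_gradient(w)                  # exactly [2, -4, 6]
```

For a real linear classifier, `loss` would be the softmax or SVM loss from last lecture, and `analytic_gradient` would be the expression you derive on paper or via backpropagation.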
But moving forward, we'll use backpropagation as a more structured algorithm to derive gradients for arbitrarily large and arbitrarily complex expressions. For the linear classifiers we've considered so far, hopefully you should be able to work the gradients out on paper, and if not, you can use backpropagation even for those, as we'll talk about. So the summary up to this point is that we've decided on a strategy of minimizing a loss function using gradient information; that is, we're going to compute gradients at every point and then use those gradients to iteratively improve our estimate of where exactly we want to land in the loss landscape. And now we've talked about two different methods of computing gradients that are applicable to essentially any function you might imagine. We talked about the numeric gradient, which is very simple to understand, very intuitive, and very easy to implement, but it's approximate and it's slow. So in practice we'll typically use the analytic gradient, which is exact and fast; but because deriving analytic gradients requires you to actually do some calculus on paper, it can sometimes be error-prone, because you might just make a mistake in your derivation, which definitely happens when you're computing gradients of these large, complicated expressions. So in practice, what we'll often do is actually use both: we typically use the numeric gradient as a debugging tool to make sure that we properly implemented the analytic gradient. Here we would typically write our code in a way that's agnostic to the number of dimensions in the weight matrix, and then we can test the analytic version of our gradient computation on a very low-dimensional version of the problem
Then we check that we're computing approximately the same value between the numeric approximation and the analytic gradient we derived on paper. This is always a really good idea; you should always do this whenever you're implementing your own gradient code, because it's just a really good way to make sure you don't have any stupid mistakes in your gradient implementation. We will use this strategy on the homework assignments: starting on assignment two (which, again, will be released later today), you will be implementing your own gradient computations for linear classifiers, and to debug those we've written a grad-check function that will internally compute the numeric gradient and compare it to the analytic gradient that you guys will derive. This is a very useful strategy, and it's applicable beyond homework assignments as well. If you look in the PyTorch documentation, you'll find that PyTorch provides a function called gradcheck that does something very similar, so if you ever find yourself writing your own gradient computations for code out there in the wild, I would strongly encourage you to use this gradcheck function provided by PyTorch to make sure that you properly implemented your gradients. If you look further in the documentation, you'll also find gradgradcheck, which helps you properly implement second derivatives, which is also a thing you sometimes need to do in PyTorch; I think we'll see a couple examples of that later in the course. Surprisingly, there's no gradgradgradcheck, because it turns out that, due to the way PyTorch implements gradients, once you've got gradcheck and gradgradcheck to pass, then grad*check will definitely work for any number of grads you want to take.
So again, if you ever find yourself implementing gradient code out there in the wild, you should probably use these tools to make sure it's implemented properly. Now that we have some sense of how to compute gradients, this finally brings us to the algorithm of gradient descent. Recall the strategy we had talked about for optimizing the loss function: we start somewhere, feel around to find the local gradient direction, step in that direction, and repeat. It turns out that this algorithm is very simple to implement; it fits in four lines of code. We initialize the weights somewhere, we loop for some fixed number of iterations, we compute the gradient at our current point, and then we step in the negative gradient direction. You'll notice this algorithm has a couple of hyperparameters, and unlike the kNN hyperparameters, which you probably won't play around with much in practice, these gradient descent hyperparameters will become some of your best friends, or maybe worst enemies, as you're trying to build neural-network-based systems. The first hyperparameter is your method of initializing the weights. Intuitively we want to initialize the weights to some random values, but it turns out that for various applications, the exact distribution from which you draw those random values matters quite a lot downstream; we'll talk in later lectures about the proper way to initialize weights when training neural network systems. The second hyperparameter is the number of steps. We need to run this algorithm in finite time, because we actually want our models to converge before the homework is due.
So we need to run this algorithm for some finite number of steps. There are many different stopping criteria you might imagine using in gradient descent, but the most common one in deep learning is simply to run for some fixed number of iterations; usually running more iterations tends to work better for most deep learning applications, so this hyperparameter is typically constrained by your computational budget and how long you're willing to wait for your models to train. The third hyperparameter is the learning rate. When we take a step along the negative gradient, we need to say how much we actually trust this gradient: how big of a step do we want to take? The local gradient tells us the direction of greatest decrease of the function, but we need to decide how far to step in that direction. The hyperparameter controlling this step size is usually called the learning rate, because it controls how fast your network learns. Higher learning rates mean larger steps, and you might hope your algorithm converges faster; lower learning rates are maybe less prone to numeric explosion, but will take longer to converge. These three hyperparameters will be some of your friends throughout the rest of the semester. You can also picture this algorithm with one of these heat maps, where the x and y axes show the values of two dimensions of the weight matrix and the color at each point represents the height of the objective function. We imagine starting at some point, computing the negative gradient (the direction of greatest decrease in the loss function), taking a small step along that direction, and iteratively repeating as we step downhill.
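A minimal sketch of that four-line loop, written against a toy quadratic loss so it runs on its own (the names compute_gradient and gradient_descent are illustrative, not from the course's starter code):

```python
# A minimal sketch of the full-batch gradient descent loop described
# above, using the toy loss L(w) = sum(w_i^2) so it is self-contained.

def compute_gradient(w):
    return [2.0 * wi for wi in w]      # dL/dw for L(w) = sum(w_i^2)

def gradient_descent(w0, learning_rate=0.1, num_steps=100):
    w = list(w0)                       # 1. initialize the weights
    for _ in range(num_steps):         # 2. loop for a fixed number of steps
        dw = compute_gradient(w)       # 3. compute gradient at current point
        w = [wi - learning_rate * gi   # 4. step in the negative gradient
             for wi, gi in zip(w, dw)]
    return w
```

All three hyperparameters from the lecture appear here explicitly: the initial point w0, the number of steps, and the learning rate.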
As we actually run this algorithm, it might look something like this: we start at some point in the blue region of this taco-shell-shaped, or bowl-shaped, objective landscape, and by iteratively following the gradient direction, the algorithm very quickly converges to the red region of low loss at the middle of the taco. Just by watching this animation, you can recognize a couple of interesting features of the gradient descent algorithm. One is that it does not go straight to the bottom: because this is a taco-bowl-shaped landscape, the gradient direction is at an angle to the straight path to the bottom, so the trajectory arcs and curves around to the side before coming back into the bottom of the objective landscape. You can also notice that it starts out going very fast (oh no, maybe I didn't set up the PowerPoint to loop properly, oh well). Because we formulated gradient descent with a multiplicative learning rate on top of the gradient, when the gradient has a large magnitude we take larger steps; and as we approach the bottom of the bowl, the objective landscape flattens out, the magnitudes of our gradients become small, and our step sizes naturally become smaller as well. This is a nice way to parameterize gradient descent, because it lets the algorithm naturally slow down as it approaches flat regions or minima of the objective function. The version of gradient descent we've been talking about so far is sometimes called batch gradient descent, or full-batch gradient descent, because, as you recall, our loss function is this giant sum of the individual loss functions for all the examples in our training data set.
Here (x_i, y_i) is one of the training pairs in our data set, and L_i is the loss function measuring how well our classifier is doing on that one individual example. Our full loss function on the training set, as you'll recall, is a sum over all the individual training examples in our data set; and correspondingly, because the gradient is a linear operator, the gradient is again a sum of the gradients over all of the individual examples in our training set. The problem is that this sum can become very expensive as your data set becomes large. If N is something like a million, or ten million, or a billion (if you're lucky enough to have a very large data set), then simply computing the value of this loss function involves a loop over tens of thousands, or millions, or billions of training samples; to compute just a single step of gradient descent, you'd be waiting a very long time to loop through your entire data set. In practice this is not feasible for training on the large data sets that we need in order to get good performance, so we often use a variant of gradient descent called stochastic gradient descent, or SGD. The idea is that rather than computing a sum over the full training data set, we approximate the loss function, and approximate the gradient, by drawing small subsamples of the full training data set. These small subsamples are called minibatches, and typical sizes are often something like 32 or 64 or 128 elements at a time. We modify our algorithm so that in each iteration, rather than computing the loss on the full data set, we sample a small minibatch of items from the data set and then compute our gradient and our loss using only that small sampled subset.
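A sketch of the minibatch idea on a toy one-dimensional least-squares problem (the function name, the data, and the hyperparameter values are all illustrative assumptions, not from the lecture):

```python
# Sketch of the stochastic (minibatch) variant: instead of summing the
# gradient over all N examples, sample a small minibatch each iteration.
import random

def minibatch_sgd(xs, ys, lr=0.05, batch_size=4, num_steps=500, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(num_steps):
        batch = rng.sample(range(len(xs)), batch_size)   # draw a minibatch
        # gradient of (1/B) * sum (w*x - y)^2 over the minibatch only
        dw = sum(2 * (w * xs[i] - ys[i]) * xs[i] for i in batch) / batch_size
        w -= lr * dw                                     # noisy gradient step
    return w

# Toy data generated from y = 3*x; SGD should recover w close to 3.
xs = [0.1 * i for i in range(1, 21)]
ys = [3.0 * x for x in xs]
```

Each step touches only batch_size examples instead of all N, which is the whole point: the per-step cost no longer grows with the size of the training set.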
This type of stochastic gradient descent is, in practice, what we'll almost always use. Once we move from full-batch to stochastic gradient descent, we introduce a couple more hyperparameters into the system. We still have our old friends: weight initialization, number of steps, and learning rate. We've also introduced batch size as a new hyperparameter: when we're computing these minibatches, how many elements should be in each minibatch? Thankfully, it turns out empirically that this tends not to be too sensitive a hyperparameter, and the general heuristic is just to make the minibatch size as large as you can until you run out of GPU memory. If you happen to have access to multiple GPUs or multiple machines, it turns out that you can distribute this and have very large batches spread over multiple GPUs or multiple machines. Empirically speaking, for many neural network systems the exact batch size you use doesn't matter too much, as long as you properly account for other things in the system; we'll talk more about those details later in the class. But the general rule of thumb is: don't worry too much about the batch size as a hyperparameter, just try to make it as big as you can fit. Then we've introduced another hyperparameter, which is the method by which we sample our training data at every iteration. For classification problems like we've been talking about so far, this tends not to matter too much. On the homework assignments, we'll just draw the data at random for each iteration. Another common strategy is to shuffle the data set at the beginning, march through the shuffled data set in order, then shuffle the data set again and march through it again in order.
For classification problems like we've been considering, the latter is more common, but I think it tends not to matter too much for your final results. If you think about other types of problems, though, things like structured prediction problems, or maybe triplet matching problems or ranking problems, then the way in which you iterate over your data can sometimes be an important hyperparameter; but thankfully that's not the case for many image classification problems. Now, you might be wondering why this is called stochastic gradient descent; it's kind of a funny name. The reason is that we now think of our loss function probabilistically. We imagine an underlying joint probability distribution over the data x and the labels y, from which our training set was sampled. When we think of our data as having been sampled from some underlying probability distribution, we can write down our loss function as an expectation over all possible samples from that true underlying data distribution, and the form of the loss function we've seen thus far, an average of the individual loss functions over all the samples in our data set, is then a Monte Carlo estimate of this full expectation. Clearly, when you think about loss functions in this way, we have a choice of how many samples to take in order to approximate the full expectation when computing the loss. This probabilistic way of thinking is where the term stochastic comes from in the name stochastic gradient descent. And of course the gradient is much the same: we can approximate the gradient as well by taking a small Monte Carlo estimate over samples from this distribution.
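In symbols (using standard notation for this probabilistic view; the transcript doesn't write the equations out), the loss and its gradient are expectations over the data distribution, and the sums over the training set are Monte Carlo estimates of them:

```latex
L(W) \;=\; \mathbb{E}_{(x,y)\sim p_{\mathrm{data}}}\!\left[\, L(x, y, W) \,\right]
\;\approx\; \frac{1}{N}\sum_{i=1}^{N} L(x_i, y_i, W)

\nabla_W L(W) \;=\; \mathbb{E}_{(x,y)\sim p_{\mathrm{data}}}\!\left[\, \nabla_W L(x, y, W) \,\right]
\;\approx\; \frac{1}{N}\sum_{i=1}^{N} \nabla_W L(x_i, y_i, W)
```

Minibatch SGD simply uses fewer samples in the Monte Carlo sum: a batch of 32 or 128 rather than all N.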
We have an interactive web demo (if it's still up on stanford.edu; I guess I need to fix that) that lets you play with linear classifiers and use gradient descent to train different types of linear classifiers interactively in a web browser. I would encourage you to check it out to gain some intuition: you can change the loss function, change the values of the weights, change the step size, and see interactively how changing all these hyperparameters affects the way decision boundaries shift around during the training of linear classifiers. So this actually works, and at this point you know enough to go and implement the linear classifier portion of assignment two. But it turns out that even though this simple algorithm of stochastic gradient descent is quite effective, there are some potential problems with it. Let's think about a couple of situations where this basic version of stochastic gradient descent might run us into trouble. One problem: what might happen if our loss landscape looks something like this? Here we're showing a contour plot of a loss landscape, a very exaggerated taco-shell-type landscape that changes very quickly in one direction and very slowly in the other. What might happen if we apply stochastic gradient descent, or full-batch gradient descent, on an objective landscape with this shape? One problem is that our steps might oscillate around. We have a trade-off in setting the step size: if the step size is too large, then when we compute the gradient we might overshoot in the fast-moving dimension and then have to correct that overshoot and come back in the other direction.
This can cause a kind of zigzagging pattern, where we zigzag toward the minimum and end up taking many more steps than we would have needed if we had somehow been able to use a smarter optimization algorithm. And there's a trade-off, because we could avoid this by setting the step size very small, which would prevent the overshooting, zigzagging pattern, but would then cause the algorithm to converge very slowly. So you're in a tough spot in this situation: if you've got an objective landscape that's very fast-moving in one direction and very slow-moving in the other, then you're kind of screwed no matter how you set the step size. If you set it too big, you'll overshoot in the fast-moving dimension; if you set it too small, you'll make no progress toward the goal. This is technically sometimes referred to as the problem of having a high condition number, which you can estimate numerically by looking at the ratio between the singular values of the Hessian matrix at any point; we don't need to go into detail about that here, but this is one potential problem with vanilla stochastic gradient descent. Another potential problem is functions that exhibit local minima or saddle points. A local minimum is a point where the function has zero gradient but is not actually the bottom of the function. Here we show a one-dimensional example: we go down into a local minimum, and to escape it we would need to climb over some hill.
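A tiny numeric illustration of this trade-off (my own toy example, not from the lecture): plain gradient descent on an ill-conditioned quadratic f(x, y) = (1/2)(a x^2 + b y^2) with a much larger than b.

```python
# Toy illustration of gradient descent on an ill-conditioned quadratic
# f(x, y) = 0.5 * (a*x**2 + b*y**2). With a >> b the condition number is
# high: a step size small enough to keep the fast dimension x stable
# makes the slow dimension y crawl.

def grad(x, y, a, b):
    return a * x, b * y  # analytic gradient of f

def run_gd(lr, steps=100, a=50.0, b=1.0):
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx, gy = grad(x, y, a, b)
        x, y = x - lr * gx, y - lr * gy
    return x, y
```

With a = 50, each update multiplies x by (1 - 50*lr): at lr = 0.039 that factor is -0.95, so x flips sign every step (the zigzag); just above lr = 0.04 the factor passes -1 and x diverges; at a "safe" lr = 0.002, y has barely moved after 100 steps.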
You can imagine that if we were using this basic stochastic gradient descent algorithm on a function with a local minimum of this form, we would get stuck: the gradient there is zero, so our steps would shrink to zero and we would be unable to escape. There's a related problem that I think is much more common in high-dimensional optimization, which is the notion of a saddle point. A saddle point is a point where the function is increasing in one direction and decreasing in another; it's called a saddle point because it looks kind of like a saddle on a horse. The problem is that at the tip of the saddle, the gradient is also zero, so if we imagine trying to optimize a function with a saddle point, we might get stuck there, just as we might get stuck at a local minimum, because the gradient at the saddle point is identically zero. The problem in both situations is that, because the gradient is zero, we might get stuck. And just to point out: this notion of a saddle point becomes much more common in high dimensions. If we have an objective landscape with something like 10,000 dimensions, or a million dimensions, then it seems very plausible, actually very frequent, that at many points in the landscape we are increasing along some of those dimensions and decreasing along others. So the intuition is that saddle points are perhaps a big problem in high-dimensional optimization. Another potential problem with stochastic gradient descent is the stochastic part itself.
Because we are computing our gradients using only a small subsample of our full data set, those gradients are noisy, and the gradients we use to make our updates at any step of the algorithm may not correlate very well with the true direction we want to go in order to reach the bottom. On the right we're showing an animation that simulates this problem: we're running gradient descent on the same bowl-shaped objective landscape as before, but now adding some amount of noise to the gradients we compute while running the algorithm. In contrast to the earlier version of gradient descent, which marched straight down toward the bottom, once our gradients are noisy the algorithm kind of meanders around the objective landscape and takes its time getting to the bottom. You might hope that if you wait long enough this will average out, but it's still not obvious what the trade-offs are, so I think it's potentially a problem that our gradients in stochastic gradient descent are not exact: they are stochastic approximations to the true gradient we want to descend upon. To overcome all these problems, it's actually not so common to use this very vanilla form of stochastic gradient descent; in practice we often use slightly smarter versions when training neural networks. The most common of these is called SGD plus momentum. On the left we're showing our old friend, the stochastic gradient descent algorithm (for the remainder of the slides I've cut out the weight initialization step, because it's common to all the algorithms), and you can see that at every iteration we simply step in the direction of the gradient computed on our minibatch of examples.
Now with SGD plus momentum, on the right, we imagine a kind of physical intuition: a ball rolling down this high-dimensional surface. At every point we integrate the gradients over time to compute a sort of velocity vector for this ball rolling down the hill. At every point in time, we update this velocity vector by taking a weighted combination of the current value of the gradient and this historical velocity, this historical moving average of gradients; and when we take our step, we do not step along the true gradient direction, but instead along this velocity vector that we've accumulated over the whole course of training. You should imagine this like a marble rolling downhill: as it speeds up, it picks up velocity and continues moving in that direction even if the local gradient is not directly aligned with its direction of motion. Concretely, we implement this by introducing a new scalar hyperparameter called rho, which you can think of as something like a friction or decay rate. At every point in time we keep track of two things: our position x_t and our velocity vector v_t. We update the velocity vector by first decaying it (multiplying by the scalar friction value) and then adding in the value of the gradient computed at the current point; and when we make our step, we step according to this velocity vector. As a technical note: depending on which papers or textbooks you read, you'll sometimes see two slightly different formulations of SGD plus momentum, with slightly different-looking equations.
As an exercise to the reader, you can go back and work this out at home and show yourself that these two formulations are actually equivalent, in the sense that they visit the same sequence of weight values no matter which formulation you use. So it's something of an implementation detail which of the two formulations you choose, but I wanted you to be aware that you might see different formulations depending on which source you're reading. Now that we have this notion of SGD plus momentum, we can think about how it helps solve all three of the problems we pointed out with the basic SGD algorithm. Recall that one potential problem with SGD was local minima: because the gradient is zero at the bottom of a local minimum, stepping according to the true gradient would leave us stuck forever. But once we augment stochastic gradient descent with momentum, you should imagine it like a ball rolling down the hill: as we descend from the top and pass through the local minimum, our point might still have some velocity that can carry us up the other side and hopefully escape the local minimum, using our velocity to power through it. A similar intuition applies to saddle points: once we've built up velocity rolling downhill, we can hopefully roll right over those saddle points and continue descending in another direction. Momentum can also help a bit with the problem of poor conditioning, because you can think of the velocity vector as an exponentially weighted moving average of all the gradients we see during training.
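The SGD-plus-momentum update described here can be sketched as follows, again on a toy quadratic loss so it is self-contained (illustrative names, not the course's code):

```python
# Sketch of the SGD+momentum update: keep a velocity v that is a decayed
# running sum of gradients, and step along v instead of the raw gradient.
# rho is the "friction" hyperparameter; toy loss L(w) = sum(w_i^2).

def grad(w):
    return [2.0 * wi for wi in w]            # dL/dw for L(w) = sum(w_i^2)

def sgd_momentum(w0, lr=0.05, rho=0.9, num_steps=200):
    w = list(w0)
    v = [0.0] * len(w)                       # velocity starts at zero
    for _ in range(num_steps):
        dw = grad(w)
        v = [rho * vi + gi for vi, gi in zip(v, dw)]   # decay, then add grad
        w = [wi - lr * vi for wi, vi in zip(w, v)]     # step along velocity
    return w
```

This is the "decay then add the gradient" formulation; the other common formulation folds the learning rate into the velocity, and as noted above the two visit the same sequence of weights.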
So if we start off seeing this oscillatory behavior during training, you could imagine that the velocity vector helps smooth it out: the quick oscillations along the y axis get averaged out, hopefully giving us relatively low velocity in that direction, while we might even accelerate our velocity along the horizontal direction, helping us deal better with these high-condition-number objective landscapes. Gradient descent with momentum can also help with the problem of stochasticity. On the right we're showing again the same animation: the black line is gradient descent with some amount of random noise added to the gradient at every point, and as we saw before, the black line meanders all over the objective landscape due to the effect of the noise on the gradients. In blue we're showing the same sequence of gradient updates, but using stochastic gradient descent with momentum, and we can see that by adding momentum, the algorithm is somehow able to smooth out the noise and take a more direct path toward the bottom of the objective landscape. Another way to think about what the momentum update is doing is with this little diagram: at every point in time we're sitting at this red dot, and at the red dot we have this green vector, which is our velocity, something like our historical average of the gradients we've seen during training. We also compute this red vector, which is the instantaneous gradient at this point, and then we average them to get the new velocity in blue, which is the actual step direction we'll use to update our weight matrix.
The intuition is that we're combining the gradient at the current point with this historical average of gradients, and this helps smooth out our optimization procedure. There's another version of momentum you'll sometimes see, called Nesterov momentum, which has a similar intuition but interleaves these steps in a slightly different order. With Nesterov momentum, we imagine a bit of a look-ahead. We still start at this red point at every iteration, and we still have this historical green vector of velocities, our moving average of all the directions we've seen during training. But the difference from traditional stochastic gradient descent plus momentum is that with Nesterov momentum, we look ahead into the future and compute what the gradient direction would have been if we had stepped according to the velocity vector. We look ahead in the direction of the velocity vector and compute the new value of the gradient at that look-ahead point, and then when we take the actual step, it is a linear combination of the velocity direction and the direction computed via the look-ahead step. Nesterov momentum has a similar effect to ordinary gradient descent with momentum; it just integrates the past and present versions of the gradient in a slightly different way. So you'll sometimes see people use either of these two formulations. We can write this down mathematically and look at Nesterov momentum as follows: we still keep a running tally of our velocity vector as well as our position, but now when we update the velocity vector, we compute the gradient at the look-ahead point x_t + rho * v_t.
That look-ahead point is where we would be if we stepped in the direction of the velocity vector; we compute the gradient there, our new velocity is a combination of the old velocity and the gradient at the look-ahead point, and then we update the position x using this computed velocity. Unfortunately, this is kind of an awkward formulation for an optimization algorithm: usually it's more convenient to implement optimization algorithms that depend only on the current point and the gradient at the current point, which this Nesterov update doesn't seem to do, because it has this funny structure forcing us to compute gradients at a look-ahead point. It doesn't fit nicely into the API of optimization functions that we expect. Yeah, question? The question is whether our velocity vector is in the direction of the gradient or in the direction of the negative gradient, and relatedly, whether the velocity vector incorporates the learning rate or whether you apply the learning rate afterward. These are all kind of equivalent; you can imagine pulling these different factors out, so it doesn't really matter which one you use, but here's what's going on in this version: the velocity is actually in the negative direction of the gradient, and we're rolling the learning rate alpha into the velocity vector as well, so when we step in the direction of the velocity vector, we're actually stepping along the negative gradient. Thanks for pointing that out. So we saw that this Nesterov update has a slightly awkward formulation that doesn't quite fit into the API of functions we might want to write, but it turns out there's a simple change of variables we can use that I don't want to work through here.
Just for you to be aware: it's kind of fun to work through this on paper and see that with this change of variables, Nesterov momentum can actually be rewritten in terms of only the current gradient and the current position, which is kind of cool given that it has this look-ahead intuition. It's a trick you'll sometimes see. Now we can return to this simple example and compare our SGD algorithm in black with SGD plus momentum in blue and Nesterov momentum in green, and we can see that both momentum methods accelerate the training process quite a lot: because we're building up this velocity vector over time, they can accelerate through quickly moving regions of the objective landscape and then slow down and converge toward the end. You can see another feature that's very common with momentum methods: they tend to overshoot at the bottom. Because we build up velocity over time, with either traditional or Nesterov momentum, we've got some non-trivial velocity by the time we reach the bottom of the bowl. Yeah, question? The question is why the Nesterov line is green with a black overlay. I think it's because I'm bad at JavaScript and didn't implement this in a good way; it's not significant, it should just be a nice clean green line. But you can see that both Nesterov and traditional momentum have a similar character: they build up velocity over time, then overshoot at the bottom and come back. Any questions about these momentum-based methods? Okay. These are actually very commonly used in practice to train not only linear models but also a lot of deep learning models as well.
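A sketch of the look-ahead form of Nesterov momentum, with the learning rate folded into the velocity as discussed in the Q&A above (toy quadratic loss; names are illustrative):

```python
# Sketch of Nesterov momentum in its "look-ahead" form: evaluate the
# gradient at w + rho*v (where the velocity would carry us), then combine
# with the old velocity. Toy loss L(w) = sum(w_i^2).

def grad(w):
    return [2.0 * wi for wi in w]

def nesterov(w0, lr=0.05, rho=0.9, num_steps=200):
    w = list(w0)
    v = [0.0] * len(w)
    for _ in range(num_steps):
        look = [wi + rho * vi for wi, vi in zip(w, v)]     # look-ahead point
        dw = grad(look)                                    # gradient there
        v = [rho * vi - lr * gi for vi, gi in zip(v, dw)]  # lr folded into v
        w = [wi + vi for wi, vi in zip(w, v)]              # step by velocity
    return w
```

Note that here the velocity already carries the negative sign and the learning rate, which is exactly the convention the question in the transcript was about.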
Moving ahead, there's another category of optimization ideas that we sometimes see, this notion of adaptive learning rates, and a classic example here is the AdaGrad algorithm. Here we similarly want to overcome some of the problems we talked about with SGD, but AdaGrad overcomes them in a slightly different way. Rather than tracking a historical average of gradient values, we instead keep a historical sum of squares of gradient values: you can see in the algorithm that we keep a running sum of the element-wise squared values of the gradients we see over time, and when we make our step we divide by the square root of this historical sum. This seems like kind of a funny thing to do, and it's not immediately clear what's going on, so to motivate AdaGrad we can think back to the taco-shell-shaped landscape and ask what AdaGrad does with that objective. Remember, the problem was that this landscape had one direction where the gradient was changing very fast; along that direction the accumulated squares are large, so AdaGrad ends up dividing by a large value, which damps down progress along directions where the gradient is changing very fast. In the other direction, where the gradient changes very slowly, the accumulated squares stay small, so AdaGrad ends up dividing by a small value (this stuff is confusing; you ought to work it through for yourself), which has the effect of accelerating motion along directions where the gradient is very small. So we can hope this helps us overcome these kinds of ill-conditioned objective landscapes.
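The AdaGrad update just described can be sketched as follows; this is a toy implementation, with the "taco shell" objective and all names chosen by me rather than taken from the lecture:

```python
# AdaGrad sketch: per-coordinate running sum of squared gradients.
def adagrad(grad_fn, w, steps=100, lr=1.0, eps=1e-8):
    grad_sq = [0.0] * len(w)
    for _ in range(steps):
        g = grad_fn(w)
        for i in range(len(w)):
            grad_sq[i] += g[i] * g[i]                      # ever-growing sum of squares
            w[i] -= lr * g[i] / (grad_sq[i] ** 0.5 + eps)  # large history -> small step
    return w

# An ill-conditioned "taco shell" bowl: L(w) = 50*w0^2 + 0.5*w1^2,
# so the gradient is (100*w0, 1*w1).
def taco_grad(w):
    return [100.0 * w[0], 1.0 * w[1]]

w = adagrad(taco_grad, [1.0, 1.0])
```

Note how the fast-changing w0 direction and the slow-changing w1 direction both get steps of comparable size once each is divided by its own accumulated history.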
But there's maybe a problem with AdaGrad: what happens if we run it for a very long time? Well, the grad-squared accumulator just keeps growing, because squares are always positive, so it grows and grows over the course of optimization, which means we end up dividing by a larger and larger value. That has the effect of continually decaying the step size, or the learning rate, over the course of learning, and it's possible that the accumulator gets so big that we stop making progress in the direction we want and end up stopping before we reach the bottom of the objective landscape. So in practice people often don't use AdaGrad directly; a way to fix this problem is another optimization algorithm called RMSProp, which you can think of as a leaky version of AdaGrad. Recall that in stochastic gradient descent plus momentum we had a friction coefficient decaying the velocity at every iteration before we compute the update; RMSProp looks very much like AdaGrad with an extra friction term that decays our running average of squared gradients. The hope is that by adding this friction, the algorithm keeps making good progress instead of slowing down more and more over the course of training. It's instructive to look back at the same optimization problem, now comparing SGD in black, SGD plus momentum in blue, and RMSProp in red.
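Here is a minimal sketch of the RMSProp update, again with toy names and an example objective of my own choosing; the only change from the AdaGrad idea is the leaky (decayed) running average:

```python
# RMSProp sketch: like AdaGrad, but the squared-gradient history "leaks".
def rmsprop(grad_fn, w, steps=500, lr=0.01, decay=0.99, eps=1e-8):
    avg_sq = [0.0] * len(w)
    for _ in range(steps):
        g = grad_fn(w)
        for i in range(len(w)):
            # the decay (friction) term keeps the denominator from growing forever
            avg_sq[i] = decay * avg_sq[i] + (1.0 - decay) * g[i] * g[i]
            w[i] -= lr * g[i] / (avg_sq[i] ** 0.5 + eps)
    return w

# Same ill-conditioned toy bowl as before: L(w) = 50*w0^2 + 0.5*w1^2.
def taco_grad(w):
    return [100.0 * w[0], 1.0 * w[1]]

w = rmsprop(taco_grad, [1.0, 1.0])
```

Because old squared gradients leak away, the effective step size no longer decays to zero over long runs the way AdaGrad's does.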
Sorry about the red-on-orange color choice; maybe that was not so great. But what you can hopefully make out, if you've got really good color vision, is that the blue and the red have qualitatively different behavior in how they navigate this landscape. The momentum-based method in blue tends to overshoot and come back, whereas the adaptive learning-rates idea from AdaGrad or RMSProp arrests progress along the fast-moving direction while simultaneously accelerating progress along the slow-moving direction, which helps it bend directly toward the bottom. Okay, so now we've got two different ways to augment the basic SGD algorithm and hopefully improve its convergence and its stability in these weird situations. Given these two good ideas, why not put them together? There's another very common optimization algorithm you'll see used a lot in deep learning called Adam, and Adam is basically RMSProp plus momentum: it combines the two good ideas we've seen into one optimization algorithm that uses both momentum and the adaptive learning-rates idea from AdaGrad. The exact formulation is that we keep track of two running quantities during optimization: one is the first moment, which is somewhat analogous to the velocity in SGD plus momentum, and the other is the second moment, which is the same leaky exponential average of squared gradients we saw in RMSProp. When we make our final update we use both: the part shown in red on the slide is the momentum idea from SGD plus momentum, the part in blue is the adaptive learning-rates idea from RMSProp, and we basically put them together, hopefully getting a really good optimization algorithm since we
put two good ideas together. But there's a slightly subtle problem that could happen with Adam, which, by the way, also applies to RMSProp. That's the question of what happens at the very beginning of optimization, especially if our beta2 constant, the friction on the second moment, is some very large value like 0.999. Something bad could happen at the very first step: if you look at the algorithm again, we initialize the second moment to zero, so when we make our first gradient step, if beta2 is very close to one, the second moment will still be very, very close to zero. That means our very first step divides by the square root of something very close to zero, so we could end up taking a very large gradient step at the very beginning of optimization, which can sometimes lead to very bad results. So the full form of Adam actually has a third good idea, which is to add a bit of bias correction to overcome this problem at the very beginning of optimization. The basic idea is that we want to make smaller steps at the very beginning as we try to build up robust estimates of the first and second moments. You can look at the paper for a full derivation of exactly why this form of bias correction is correct, but the idea is just to overcome the fact that the moment estimates are biased toward zero at the beginning of optimization. By the way, this Adam algorithm works really well in practice for a lot of deep learning systems; it's definitely my go-to optimizer when I'm trying to build a new deep learning system.
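Putting the pieces together, the full Adam update with bias correction can be sketched like this (a toy implementation, with names and the example objective chosen by me, not code from the lecture):

```python
# Adam sketch: momentum-style first moment + RMSProp-style second moment,
# with bias correction so early steps are not divided by a near-zero estimate.
def adam(grad_fn, w, steps=500, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    m = [0.0] * len(w)   # first moment (velocity-like)
    v = [0.0] * len(w)   # second moment (leaky average of squared gradients)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        for i in range(len(w)):
            m[i] = beta1 * m[i] + (1.0 - beta1) * g[i]
            v[i] = beta2 * v[i] + (1.0 - beta2) * g[i] * g[i]
            m_hat = m[i] / (1.0 - beta1 ** t)   # bias correction: both moments
            v_hat = v[i] / (1.0 - beta2 ** t)   # start at zero and need rescaling
            w[i] -= lr * m_hat / (v_hat ** 0.5 + eps)
    return w

# Same ill-conditioned toy bowl: L(w) = 50*w0^2 + 0.5*w1^2.
def taco_grad(w):
    return [100.0 * w[0], 1.0 * w[1]]

w = adam(taco_grad, [1.0, 1.0])
```

Without the two bias-correction lines, the very first step would divide by the square root of a near-zero second moment, exactly the failure mode discussed above.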
As a bit of a pro tip: if you use Adam with beta1 = 0.9, beta2 = 0.999, and a learning rate somewhere in the regime of 10^-3 or 10^-4, that surprisingly tends to work kind of out of the box on a very wide variety of deep learning problems. To drive that point home, I took some excerpts from some of my own recent papers: these five papers use deep learning for very different types of systems, but they're all using Adam. In this one we did a bad job and didn't record the learning rate (that was bad); this one used Adam at 10^-4; then Adam at 10^-3, and Adam at 10^-3 again. These papers were all doing very different tasks, but it turns out Adam is a very robust optimization algorithm that tends to work across a wide variety of tasks with fairly minimal hyperparameter tuning. So when you're designing your own network from scratch, it's usually a pretty good go-to optimizer when you're first trying to get things off the ground. Now, after hyping up Adam, we've got to look at this diagram to see what it actually does. Since we had this notion of Adam as combining the good parts of momentum with the adaptive learning rates of AdaGrad, when we look at this picture we can see that it has properties of both of those optimization algorithms: like the momentum-based method, it tends to build up some velocity, overshoot, and come back, but its overshoots are maybe less drastic than with plain SGD plus momentum, and like RMSProp or AdaGrad it tends to bend the right way toward the minimum. It tends to work well for a lot of problems for that reason. Again, though, I need to caution you against making
intuitions about high-dimensional spaces based on low-dimensional problems. Even when I'm putting up all these pictures about how these optimization algorithms behave, you really should take them with a huge shaker of salt, because at the end of the day we're going to be training in very high-dimensional spaces, and the behavior in those spaces could look quite different from the low-dimensional projections I'm showing you. I think it's still useful to look at these kinds of things to get a very coarse sense of intuition about what the algorithms are doing, but try not to put too much faith in exactly what you see in these animations. Okay, as a bit of summary, I put together this little comparison table of the different optimization algorithms we've talked about, and you can see that they iteratively build up and add more features as we go along; this is meant mostly as review. Now, all of these algorithms are what we call first-order optimization algorithms, because they use only information about the gradient, only the first derivative, to make their steps. What you can think of them as doing is forming a linear approximation to the objective function we're trying to minimize, using the gradient: in a high-dimensional situation you might imagine a tangent hyperplane computed from the value of the gradient at the current point, and then we step down on that tangent hyperplane to try to minimize our objective function. Of course, we can naturally extend this thinking to use higher-order gradient information: in addition to using the gradient, which is the first derivative, we could form a
quadratic approximation to our objective function, using both the gradient and the Hessian, the matrix of second derivatives, at every point. The idea is that we form a quadratic surface that locally approximates the objective function in question and then step toward the minimum of that quadratic approximation. This idea is nice because it lets the algorithm more adaptively choose how big a step it should take: at a point where the gradient is rather high and the curvature is rather high, maybe we should take a fairly small step, while at another point the second-order curvature information might tell us it's safer to take a larger step. So this idea of second-order optimization is maybe more robust in setting learning rates or step sizes for itself, and indeed you can see algorithms like AdaGrad and SGD plus momentum as a kind of diagonal approximation to these second-order algorithms: by integrating first-order information over time, they form some type of approximation to the second-order curvature information, which again gives us a bit more intuition about why momentum-based or adaptive-learning-rate methods might be a good idea. Put in more formal math, we can write down second-order optimization by writing the quadratic Taylor approximation to our function and solving for the minimizer, w* = w - H^-1 grad L(w), the point that steps to the minimum of the quadratic approximation. But there's a problem, which is why this is actually not used so much in practice for a lot of deep learning systems; can anyone spot it? The problem is that we want to work in high-dimensional spaces: we want to be able to optimize models with tens of thousands or millions of parameters.
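As a concrete illustration of the Newton step on a problem small enough to afford it (a toy example of mine, with a diagonal Hessian so the inverse is just an element-wise division):

```python
# Newton step on a diagonal quadratic: w_new = w - H^{-1} * grad.
# In general the Hessian has N^2 entries and inverting it costs O(N^3);
# here the Hessian is diagonal, so the "inverse" is an element-wise divide.
def newton_step_diagonal(w, grad, hess_diag):
    return [wi - gi / hi for wi, gi, hi in zip(w, grad, hess_diag)]

# L(w) = 50*w0^2 + 0.5*w1^2: gradient (100*w0, 1*w1), Hessian diag (100, 1).
w = [1.0, 1.0]
w = newton_step_diagonal(w, [100.0 * w[0], 1.0 * w[1]], [100.0, 1.0])
# On an exactly quadratic objective, a single Newton step lands on the minimum.
```

Notice that one step solves this ill-conditioned bowl exactly, which is precisely the appeal of second-order information; the cost argument below is what makes it impractical at deep-learning scale.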
The gradient itself is fine: it has the same number of elements as the weight matrix, so if we can store and manipulate the weight matrix, we can also store and manipulate the gradient. The problem is the Hessian, which is a matrix: if our weight matrix has N values, the Hessian has N squared values, and if we have something like a hundred million parameters, a hundred million squared is way too big; you're not going to have enough memory. Even worse, if you look at the equation, we actually need to invert the Hessian, and inverting a general matrix is cubic in the size of the matrix, so inverting this thing would be like a hundred million cubed, a heat-death-of-the-universe situation; we don't want to go there. So in practice these second-order optimizers are sometimes used for low-dimensional optimization problems, but they're not used so much for very high-dimensional ones. So, in practice, when you're training your own models, Adam is a very nice default choice in a lot of situations, a good go-to algorithm when you're trying to build a new deep learning system, although I should also point out that SGD plus momentum is used in practice quite a lot as well. You'll typically see most deep learning papers use one of these two optimization algorithms. My general rule of thumb is to go with Adam at the very beginning, because it's fairly easy to get to work and tends not to require too much tuning of the hyperparameters (the default values I gave you tend to work out of the box for a lot of different problems), whereas SGD plus momentum can sometimes give better results but might require a bit more tuning. In future lectures we'll talk more about some of the strategies you might employ for
tuning the hyperparameters of something like SGD plus momentum. And if for some reason you can afford some kind of second-order optimization, maybe because your problem is low-dimensional, or your problem is not stochastic (second-order optimizers tend not to do well with stochasticity), then you might consider a second-order optimizer; L-BFGS is a good one. But in practice, the ones you'll usually see are Adam and SGD plus momentum. By this point, we've talked about how we can use linear models to solve image classification problems; in the last lecture we talked about how we can use loss functions to quantify how much we care about different values of the weight matrix in our linear models; and in today's lecture we've seen how we can use stochastic gradient descent and its cousins to efficiently optimize these high-dimensional loss surfaces. So we've now equipped ourselves with a lot of different tools, working up toward solving this deep learning problem. In the next lecture we'll finally start talking about neural networks, and we'll see that we can simply replace the linear classifier with more powerful neural network classifiers, and the rest of what we've been talking about will allow us to train much more powerful models.
Deep Learning for Computer Vision
Lecture 12: Recurrent Networks
Okay, welcome back. Today we're on lecture twelve, and we're going to talk about a new species of neural network called recurrent neural networks. Before we talk about recurrent neural networks, I wanted to back up: remember we had this slide from a couple lectures ago, where we were talking about hardware and software, and these were our TL;DR conclusions about PyTorch versus TensorFlow. If you recall, my biggest gripes with PyTorch as of that lecture were that it didn't have good TPU support, for Google's specialized Tensor Processing Unit hardware, and that it did not have good support for exporting your trained models onto mobile devices like iOS and Android devices. Well, since that lecture, PyTorch has actually released a new version, 1.3, and two of the biggest features in the new version actually address these two big concerns I had with PyTorch. PyTorch 1.3 now offers an experimental mobile API that theoretically makes it really easy to export your models and run them on mobile devices, which seems awesome, and there's now also experimental support for running PyTorch code on Google TPUs, which also seems really cool. I just thought it's nice to keep you guys up to date when things change in the field of deep learning; as you can see, things are changing even within the scope of one semester, and some of our earlier lecture content becomes outdated even just a week or two later sometimes. So this is the new PyTorch version 1.3; we're going to continue sticking with version 1.2 for the rest of this quarter, unless Colab silently updates to 1.3 on us again, which may happen, I don't know. These are really cool new features, but like I said, this was just released, so it's going to take a little bit of time before we see whether these new features are really as awesome as they're promised to be. Then, stepping back to last lecture: remember, the last two
lectures we've been talking about all these nuts-and-bolts strategies for how you can actually train your neural networks. We talked in great detail about things like activation functions, data preprocessing, weight initialization, and many other strategies and little details that you need to know in order to train your networks. Hopefully by this point in the class you're all experts at training deep convolutional neural networks for whatever type of image classification problem I might want to throw at you. Since you're now experts at that problem, it's time in the semester to start thinking about new types of problems we can solve with deep neural networks, and that brings us to today's topic of recurrent neural networks. Basically, all the applications of deep neural networks we've considered in this class so far have been what is called a feed-forward network. These feed-forward networks receive some single input at the bottom of the network, like a single image, go through one or multiple hidden layers, maybe with special fancy layers like convolution or batch normalization, but each layer feeds into the next layer, and at the very end the network outputs some single output. The classic examples of these feed-forward networks are the image classification networks we've been working with so far: there's a single input, which is the image, and a single output, which is the category label we want our network to assign to that image. The reason we covered image classification in such detail is that I think it's a really important problem that encapsulates a lot of important features of deep learning, but there's a whole lot of other types of problems we might imagine wanting to solve using deep neural networks. For example, we might have problems
that are not one-to-one but instead are one-to-many, where the input is a single item, like a single image, and the output is no longer a single label but a sequence. An example of this would be a task like image captioning, where we want a neural network to look at an image and then write a sequence of words that describe the content of the image in natural language; you can imagine this is much more general than the single-image, single-label classification problem we've considered so far. Another type of application is a many-to-one problem, where now the input is no longer a single item like an image but a sequence of items, for example the sequence of video frames that make up a video, and at the end we want to assign a label to this sequence of inputs; maybe we want to look at a video sequence and say what type of event is happening in that video. This is an example of a many-to-one problem because the input is a sequence and the output is a single label. Of course you can generalize this: you can imagine many-to-many problems that input a sequence and output a sequence. An example of this type of problem would be machine translation, where we want a neural network that can input a sentence in English, a sequence of English words, and output a translation of that sentence into French, a sequence of French words; on every forward pass the network might get input sequences of different lengths and produce output sequences of different lengths. This is an example of what we call a many-to-many or sequence-to-sequence problem. There's another sort of sequence-to-sequence problem where we want to process an input sequence and make a decision for each element of that
input sequence; this is another example of a many-to-many classification problem. Here an example might be processing the frames of a video, and rather than making a single classification decision about the entire content of the video, we instead want to make a decision about the content of each frame: maybe say that the first three frames were someone dribbling a basketball, the next ten frames were someone shooting a basketball, the next frame was him missing the shot, and the next couple of frames were him being booed by his team, something like that. We'd like to build this network in a way that can process sequences. So you can see that once we have the ability to work not just with a single input and a single output, but with sequences of inputs and sequences of outputs, we can build neural networks that do much more general types of things. The general tool we have in deep learning, or rather one of the general tools, for working with sequences at the input and output level is a recurrent neural network, which is what the rest of the lecture will talk about. Whenever you see a problem that involves sequences at the input or the output, you might consider using some kind of recurrent neural network to solve it. An important point here is that for all these problems we might not know the sequence length ahead of time: for each of these tasks we want to build one neural network that can process sequences of arbitrary length. We'd like to use the same video classification network to process a very short video clip or a very, very long video sequence. So this recurrent neural network will be a very general tool that allows us to process different types of sequences in deep learning problems. But
recurrent neural networks are actually useful even for processing non-sequential data: it turns out that people sometimes like to use recurrent neural networks to perform sequential processing of non-sequential data. What do I mean by that? As an example, this is a project from a couple years ago where they were doing our favorite image classification task. Remember, in image classification there are no sequences involved: it's just a single image as input and a single category label as output. But the way they want to classify images is actually not with a single feed-forward pass; instead they build a neural network that takes multiple glimpses at the image. It looks at one part of the image, then another part, then another, where at each point in time the network's decision of where to look in the image is conditioned on all the information it has extracted at all previous time steps, and after taking many glimpses at the image, the network finally makes some classification decision about what object it's seeing. This is an example of using sequential processing inside a neural network even to process non-sequential data. In this visualization the network is doing digit classification, and each of these little green squares is one of the glimpses the network chooses to make, looking at one little sub-portion of the image in order to make its classification decision. Another example of using sequential processing for non-sequential data is sort of the inverse: generating images. Rather than taking the image as input and using a sequence of glimpses to make classification decisions, as on the previous slide, now we
have tasks where we want to build neural networks that can generate images of digits, and the way the network does this is by painting onto the output canvas one piece at a time. At each point in time the neural network chooses where it wants to write and what it wants to write, and over time those writing decisions are integrated to let the network build up these output images using a kind of sequential processing, even though the underlying task is not sequential. Those examples are from a couple of years ago; here's one I saw on Twitter just last week. The idea is again that we're building a neural network that produces images, a non-sequential task, using sequential processing. Here, what they did is integrate the neural network with an oil paint simulator: at every time step the network, conditioned on what it saw at the previous time step, chooses what type of brushstroke it wants to make on this virtual oil paint canvas, and over time it builds up these stylized, artistic images of faces. These are all examples of where you might imagine using a recurrent neural network. So we've seen that recurrent neural networks both open the door to new types of tasks involving sequential data, and also give us a new way to solve our old types of problems, where we might want to use sequential processing even for inherently non-sequential tasks. Hopefully this is good enough motivation for why a recurrent neural network is an interesting thing to learn about. Given that motivation, what is a recurrent neural network, and how does it work? The basic intuition behind a recurrent
neural network is, like I said, that we're processing sequences. At every time step, the recurrent neural network receives some input x, shown here in red, and emits some output y, shown in blue. The network also has some internal hidden state, which is some kind of vector, and at every time step the network uses the input at the current time step to update its hidden state using some kind of update formula; then, given the updated hidden state, it emits its output y for the current time step. Concretely, in order to define the architecture of a recurrent neural network, we need to write down some kind of recurrence formula or recurrence relation f_W. We have this intuition that the network operates on a sequence of hidden states, where h_t, the hidden state at time t, is just a vector, like the hidden-state activations of the fully-connected networks we've worked with in the past. We write down this recurrence relation f_W, which depends on learnable weights W: the function takes as input the hidden state at the previous time step, h_{t-1}, as well as the input at the current time step, x_t, and it outputs the hidden state at the next time step, h_t. You can imagine writing down different formulas f_W that relate these inputs and hidden states using different algebraic formulations. The critical point is that we use this same function f_W, with the same weights W, at every time step in the sequence; by sharing weights and using the exact same weight matrix to process every point in
the sequence, this construction allows us to have just a single weight matrix that can process sequences of arbitrary length, because again we're using the exact same weight matrix at every time step. With this general definition of a recurrent neural network, we can see our first concrete implementation. The simplest version is sometimes called a vanilla recurrent neural network, or sometimes an Elman recurrent neural network after Professor Jeffrey Elman, who worked on these some time ago. Here the hidden state consists of a single vector h_t at every time step, and we have two learnable weight matrices: W_hh, which is multiplied by the hidden state at the previous time step, and W_xh, which is multiplied by the input at the current time step. So we take the input at the current time step and multiply it by one weight matrix, take the previous hidden state and multiply it by the other weight matrix, add them together (also adding a bias term, which I've omitted from the equation for clarity), and then squash the result through a tanh, the nonlinearity that I told you not to use. After squashing through the tanh, this gives us our new hidden state h_t at the new time step, and we can produce our output y_t at this time step with another weight matrix that applies a linear transform to that hidden state h_t. Is this definition of the Elman recurrent neural network clear? Great. Another way to think about the processing of a recurrent neural network is to think about the computational graph that we build when unrolling the network over time.
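The vanilla (Elman) RNN step just described can be sketched as follows; a toy pure-Python implementation with tiny hand-picked weights of my own, biases omitted as in the lecture's simplified equation:

```python
import math

# One step of a vanilla (Elman) RNN:
#   h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t),   y_t = W_hy @ h_t
# Bias terms are omitted, matching the simplified equation on the slide.

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def rnn_step(W_hh, W_xh, h_prev, x):
    pre = [a + b for a, b in zip(matvec(W_hh, h_prev), matvec(W_xh, x))]
    return [math.tanh(p) for p in pre]

def rnn_forward(W_hh, W_xh, W_hy, xs, h0):
    # The same weights are reused at every time step, so this one network
    # handles input sequences of any length.
    h, ys = h0, []
    for x in xs:
        h = rnn_step(W_hh, W_xh, h, x)
        ys.append(matvec(W_hy, h))
    return ys, h

# Tiny hand-picked example: hidden size 2, input size 2, output size 1,
# initial hidden state all zeros.
W_hh = [[0.5, 0.0], [0.0, 0.5]]
W_xh = [[1.0, 0.0], [0.0, 1.0]]
W_hy = [[1.0, 1.0]]
ys, h = rnn_forward(W_hh, W_xh, W_hy, [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

Feeding the loop a longer list of inputs produces a correspondingly longer list of outputs with no change to the weights, which is the weight-sharing point made above.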
At the very beginning of processing the sequence, we have the first element x1, and we need to get an initial hidden state h0 from somewhere to kick off the recurrence. It's very common to initialize that first hidden state to all zeros; sometimes you'll also see people learn the initial hidden state as another learnable parameter of the network. Those are both implementation details — you can just imagine the initial hidden state is all zeros, and that usually works pretty well. Given this initial hidden state and the first element of the sequence, we feed them to the recurrence function f_W, which outputs the first hidden state h1; then we feed h1 back into the same function f_W, along with the next element of the sequence, x2, to produce the next hidden state, and so on and so forth. What's important here is that we use the exact same weight matrix at every time step of the sequence, and in the computational graph this is manifested as a single node W for the weight matrix that is used at every time step. So during backpropagation — if you remember the rule for copy nodes in a computational graph, where the forward pass uses the same node in multiple parts of the graph — what do we have to do in the backward pass? Yes: we need to sum. This will be important when you implement recurrent neural networks on Assignment 4. You can also see from this design that, again because we use the exact same weight matrix at every time step, this one recurrent neural network can process sequences of arbitrary length: if we receive a sequence of two elements we just unroll this graph for two time steps, and if we receive a sequence of 100 elements we'll
unroll this graph for 100 time steps, and no matter what length of sequence we receive, we can use the same recurrent neural network and the same weights. That's the basic operation of a recurrent neural network. Now remember, we saw all these different types of sequence tasks — one-to-many, many-to-many — and we can use this basic recurrent neural network to implement all of them. In the many-to-many case, where we receive a sequence of inputs and want to make a decision for each point in the sequence — again, this might be something like video classification, where we want to classify every frame of a video — we can have another weight matrix, maybe W_out or W_y, that produces an output y at every time step of the sequence. If we have a desired label at each step, then to train this thing we apply a loss function at each time step: for example, if this were video classification and we're making a classification decision at every point in the sequence, we might have a ground-truth label at every point in time and apply a cross-entropy loss to the prediction at every time step, giving a loss per element of the sequence. To get our final loss, we sum all of these per-time-step losses, and that's the loss we backpropagate through. That would be the full computational graph for a many-to-many recurrent neural network making one output per time step of the input sequence. But maybe we're in a many-to-one situation instead — something like video classification where we just want to produce a single classification label for the entire video sequence.
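The many-to-many setup with a summed per-time-step loss might look like this sketch — random data and illustrative sizes, just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
D, H, V, T = 5, 8, 5, 6                   # input, hidden, vocab sizes; sequence length
Wxh, Whh, Why = (rng.standard_normal(s) * 0.1
                 for s in [(H, D), (H, H), (V, H)])

xs = rng.standard_normal((T, D))          # an input sequence of T vectors
targets = rng.integers(0, V, size=T)      # a ground-truth label for every time step

h, total_loss = np.zeros(H), 0.0
for t in range(T):
    h = np.tanh(Whh @ h + Wxh @ xs[t])    # the same weights reused at every step
    scores = Why @ h                      # per-step class scores
    p = np.exp(scores - scores.max()); p /= p.sum()   # softmax
    total_loss += -np.log(p[targets[t]])  # cross-entropy loss at this time step
print(total_loss)                         # the sum of per-time-step losses
```

Backpropagating this summed loss through the unrolled loop is exactly the training procedure described above.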
Then we can just hook up the model to make a single prediction at the very end of the sequence, operating only on the final hidden state of the recurrent neural network. That final hidden state depends on the entire input sequence, so the hope is that by the time we get to it, it encapsulates all the information the network needs to know about the whole sequence in order to make its classification decision. If instead we're in a one-to-many situation — something like image captioning, where we want to input a single element, like an image, and then output a sequence of elements, like a sequence of words describing the image — we can also use this recurrent neural network: we pass a single input x at the beginning and then use the recurrence relation to produce a whole sequence of outputs. There's another very common application of recurrent neural networks, the so-called sequence-to-sequence problem. This is often used in something like machine translation, where you want to process one input sequence and then produce another sequence whose length might be different — again, this might be inputting a sequence of words in English and outputting a sequence of words in French, a translation of the sentence. I don't speak French, but I believe an English sentence does not always have the same number of words as its corresponding translation in French, so it's important that we can build recurrent neural networks that process an input sequence of one length and produce an output sequence of another length. The way we implement this is the so-called sequence-to-sequence recurrent neural network architecture, which is basically a many-to-one recurrent neural network feeding directly into
another one-to-many recurrent neural network. The way this works is that one recurrent neural network, called the encoder, receives the input sequence — this might be the English sentence we're receiving as input — and processes it one element at a time; the content of the entire sequence is then summarized in the hidden vector produced at the very end of the sequence. We take that hidden vector from the end of the encoder and feed it as a single input to a second recurrent neural network, called the decoder. This decoder network is a one-to-many network: it receives the single vector output from the first network and produces a variable-length sequence as output. From this computational graph you can see that we use different weight matrices in the encoder and the decoder, shown in orange and purple here, which is pretty common in these sequence-to-sequence models. Question: why do we separate them like this? Well, one problem is that the number of output tokens might be different from the number of input tokens — we might process the English sentence and then output the French sentence without knowing in advance how many words the output needs — so it's important that we separate them somehow. You could imagine using the same weight matrix for both the encoder and the decoder; that would be like processing one long sequence where we give an input for the first k time steps and then, for the last time steps, give no input at all and just expect it to produce outputs — and people do do that sometimes. Another question: how do we know how many tokens we need in the second network? That's a detail we'll get to in a couple of slides.
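A bare-bones sketch of the encoder/decoder split, with separate (illustrative) weights for each network, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
D, H = 3, 4                                      # input and hidden sizes (illustrative)
# separate weight matrices for the encoder and the decoder
We_x = rng.standard_normal((H, D)) * 0.1         # encoder input-to-hidden
We_h = rng.standard_normal((H, H)) * 0.1         # encoder hidden-to-hidden
Wd_h = rng.standard_normal((H, H)) * 0.1         # decoder hidden-to-hidden

def encode(xs):
    """Many-to-one: process the whole input sequence, keep only the final hidden state."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(We_h @ h + We_x @ x)
    return h

def decode(h, T_out):
    """One-to-many: unroll T_out steps starting from the encoder's summary vector."""
    outs = []
    for _ in range(T_out):
        h = np.tanh(Wd_h @ h)
        outs.append(h.copy())
    return outs

summary = encode(rng.standard_normal((5, D)))    # 5-step input sequence
outs = decode(summary, 7)                        # 7-step output sequence
print(len(outs))                                 # 7
```

Note how the input and output lengths (5 and 7) are independent, which is the whole point of the split.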
Was there another question over here? Okay, good — I'm glad you're thinking about that, because we'll get there. As a more concrete example of how this works, we can talk about the so-called language modeling task. Here the idea is that we want to build a recurrent neural network that processes a stream of input data and, at every point in time, tries to predict the next character in the sequence. This is called a language model because it lets the neural network score the probability of any sequence being part of the language it's learning. The way we set this up is that we fix some set of vocabulary the network knows about — in this case our vocabulary consists of the letters h, e, l, and o — and this fixed vocabulary is chosen at the beginning of training. Now suppose we want to process the training sequence "hello", h-e-l-l-o. We encode this input sequence by converting each of its characters into a one-hot vector: given our vocabulary of size four, the letter h becomes the vector [1, 0, 0, 0], with a one in the first slot because h is the first element of our vocabulary. Converting the input sequence into these one-hot vectors gives us a sequence of vectors we can feed to the recurrent neural network, which processes them to produce a sequence of hidden states; then, at every time step, we use our output matrix to predict a distribution over the elements of the vocabulary.
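Building the one-hot inputs — and the next-character targets we're about to discuss — for "hello" takes only a few lines of plain Python (the helper names here are my own):

```python
vocab = ['h', 'e', 'l', 'o']                  # the lecture's 4-character vocabulary
stoi = {ch: i for i, ch in enumerate(vocab)}  # character -> index

def one_hot(ch):
    """Encode a character as a one-hot list over the vocabulary."""
    v = [0] * len(vocab)
    v[stoi[ch]] = 1
    return v

text = "hello"
inputs  = [one_hot(ch) for ch in text[:-1]]   # h, e, l, l
targets = [stoi[ch] for ch in text[1:]]       # e, l, l, o  -- the inputs shifted by one
print(inputs[0])   # [1, 0, 0, 0]  <- 'h'
print(targets)     # [1, 2, 2, 3]
```

The targets being the inputs offset by one position is the entire supervision signal for a character-level language model.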
The task of the network at every point in time is to predict the next element in the sequence. So after it receives the first element of the input, h, it tries to predict the next element, e — that's a cross-entropy classification loss at that time step. Once it has received the first two input characters, h and e, it needs to predict l, the third character in the sequence, and so on and so forth. You can see that the target outputs are equal to the inputs, just offset by one. So that's what a language model looks like during training: you take your input sequence, shift it to get the targets, and try to predict the next character at every time step of processing. But once you've got a trained language model, you can actually do something really cool with it: you can use it to generate new text in the style of the text it was trained on. As an example of what that might look like: given a language model we've trained on a set of sequences, we feed it some initial seed token, like the letter h, and have the recurrent neural network generate new text conditioned on that seed token. We give it the input token h with the same one-hot encoding, unroll one tick of the recurrent neural network, and get a predicted distribution over the next character. Because the model has predicted a distribution over what character it thinks should come next, we can sample from that distribution to get a new, invented character for the next time step.
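The sampling loop can be sketched as below — the weights here are random and untrained, so the output is gibberish, but the mechanics (sample a character, then feed the sample back as the next input) are the same as with a trained model:

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = ['h', 'e', 'l', 'o']
V, H = len(vocab), 8                       # vocab size; hidden size (illustrative)
# random, *untrained* weights, just to demonstrate the sampling mechanics
Wxh, Whh, Why = (rng.standard_normal(s) * 0.1
                 for s in [(H, V), (H, H), (V, H)])

def sample(seed_idx, n_chars):
    """Generate n_chars characters, feeding each sampled character back as input."""
    h = np.zeros(H)
    idx, out = seed_idx, []
    for _ in range(n_chars):
        x = np.zeros(V); x[idx] = 1.0      # one-hot for the current character
        h = np.tanh(Whh @ h + Wxh @ x)     # one RNN step
        scores = Why @ h
        p = np.exp(scores - scores.max()); p /= p.sum()   # softmax distribution
        idx = rng.choice(V, p=p)           # sample the next character...
        out.append(vocab[idx])             # ...and it becomes the next input
    return ''.join(out)

print(sample(vocab.index('h'), 10))        # 10 characters drawn from the model
```

With trained weights, this exact loop is what produces the Shakespeare and LaTeX samples shown later in the lecture.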
Then we take that sampled character and feed it back as input at the next time step of the network. So we have this sampled character e that we feed back as input, and we go through another step of processing: compute the next hidden state, compute a new distribution over predicted outputs, and get a new distribution over what the model thinks the next character should be. You can imagine repeating this process over and over: given your trained language model, you seed it with some initial token and have it generate new tokens that it thinks are likely to follow. Now, one slightly ugly detail: so far we've talked about encoding the input sequence as a set of one-hot vectors. Think about what happens in the first layer of the recurrent neural network — remember that in this vanilla network we take the input vector and multiply it by a weight matrix. But if the input is a one-hot vector, with a one in one slot and zeros in all the other slots, that matrix multiply is trivial: multiplying a matrix by a one-hot vector just extracts one of the columns of the matrix. So you don't actually need to implement it with a matrix multiplication routine; you can implement it much more efficiently with an operator that simply extracts rows (or columns) of a weight matrix. For that reason, it's very common to insert another layer between the input and the recurrent neural network, called an embedding layer, that does exactly this.
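The equivalence between the one-hot multiply and a plain row lookup is easy to check (sizes here are illustrative):

```python
import numpy as np

V, D = 4, 3                     # vocab size, embedding dimension (illustrative)
W_embed = np.arange(V * D, dtype=float).reshape(V, D)   # one row per vocab entry

idx = 2                         # e.g. the character 'l' in an h/e/l/o vocabulary
one_hot = np.zeros(V); one_hot[idx] = 1.0

via_matmul = one_hot @ W_embed  # the "official" one-hot matrix multiply
via_lookup = W_embed[idx]       # what an embedding layer actually does
print(np.allclose(via_matmul, via_lookup))   # True
```

The lookup touches D numbers instead of doing a full V-by-D multiply, which is why embedding layers are implemented this way.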
The input sequence is still encoded as a set of one-hot vectors, and the embedding layer performs this one-hot sparse matrix multiply implicitly. Effectively, the embedding layer learns a matrix in which each row (or column) is an embedding vector corresponding to one element of the vocabulary. So a very common design for these recurrent neural networks is to have this separate embedding layer between the raw one-hot input sequence and the recurrent neural network that computes the sequence of hidden states. Now, to train these things: we've already seen an example of a computational graph using recurrent neural networks, and what we saw is that to train one, we unroll the computational graph over time; every time point in the sequence gives rise to a loss, and these get summed over the entire length of the sequence to give a single loss. This is sometimes given a fancy name: backpropagation through time, because during the forward pass we step forward in time through the sequence, and during the backward pass we backpropagate backwards in time through the graph we unrolled in the forward pass. But one problem with this backpropagation-through-time algorithm is that if we want to train on very, very long sequences, it takes an enormous amount of memory. Say we want to train on sequences that are a million characters long — then we'd need to unroll the computational graph for a million time steps, and that's probably not going to fit in your GPU memory. So in practice, when training recurrent neural
network language models on very, very long sequences, we sometimes use an alternative, approximate algorithm called truncated backpropagation through time. The idea is that we want to approximate training on the full, possibly infinite sequence, but we take only a subset of it — maybe the first ten or the first hundred tokens. We unroll the forward pass of the network for that short prefix, compute a loss only for that first chunk, backpropagate through that chunk alone, and make an update to the weights. Then we record the values of the hidden states at the end of that initial chunk. When we receive the second chunk of the sequence, we pass in those recorded hidden states and unroll the second chunk — say the next hundred characters of our possibly million-character sequence — compute a loss for the second chunk, and backpropagate not through the entire sequence but only through the second chunk. That computes gradients of the loss with respect to our weight matrix, so we can make another update on the weight matrix and continue: take the next chunk, remember the hidden state from the end of the second chunk, use those recorded hidden states to keep unrolling the sequence forward in time, do another truncated backpropagation through time, and another weight update. What this algorithm does is this: because we're always carrying hidden information forward
through time via these remembered hidden states, the forward pass is still effectively processing a potentially infinite sequence, but we only backpropagate through small chunks of the sequence at a time, which drastically reduces the amount of stuff we need to keep in GPU memory. This trick of truncated backpropagation through time makes it feasible to train recurrent neural networks on even infinite sequences, even though you only have finite amounts of GPU memory. Question: for truncated backpropagation through time, how do you set h0 when processing the second chunk? You use the final hidden state from processing the first chunk: that final hidden state from the first chunk becomes the initial hidden state when processing the second chunk. That's the trick that means everything is still processed forward in time, potentially infinitely, because all the information is carried forward through the hidden states, while we only backpropagate through finite chunks of the sequence at a time. Does that clarify? Another question, about weight updates: when doing truncated backpropagation through time, you usually go forward through a chunk, backward through the chunk, update the weight matrix; then copy the hidden state over, go forward, backward, update; then copy the hidden state forward again, and so on. Every time you backpropagate through some portion of the sequence, you compute the derivative of that chunk's loss with respect to the weight matrix, and you use that to make a weight update.
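Here is a forward-only sketch of the chunking in truncated backpropagation through time — the backward pass and weight update are only indicated in comments, since this plain-NumPy sketch has no autograd, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
D, H, chunk_len = 3, 5, 100
Wxh = rng.standard_normal((H, D)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1

long_sequence = rng.standard_normal((1000, D))   # stand-in for a very long sequence

h = np.zeros(H)                                  # carried forward across chunks
n_chunks = 0
for start in range(0, len(long_sequence), chunk_len):
    chunk = long_sequence[start:start + chunk_len]
    for x in chunk:                              # forward pass through this chunk only
        h = np.tanh(Whh @ h + Wxh @ x)
    # Here we would: compute the chunk's loss, backpropagate through this chunk
    # only, and update the weights. The hidden state h is *kept* across chunks
    # (gradients just don't flow through it), so the forward pass still spans
    # the whole sequence while memory usage stays bounded by chunk_len.
    n_chunks += 1
print(n_chunks)   # 10 chunks of 100 steps each
```

In a framework with autograd you would detach the carried hidden state between chunks to cut the backward graph while keeping its value.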
That's the update on the weights of the network. And yes, exactly: with truncated backpropagation through time, once you've processed one chunk of data, you can throw it away — evict it from your computer's memory — because all the information about that part of the sequence that's needed for the rest of training is stored in the final hidden state of the recurrent neural network at the end of processing the chunk. All this may sound kind of complicated, but in fact you can implement this whole process of truncated backpropagation through time, for training recurrent network language models and then sampling from them to generate new text, in about 112 lines of Python with NumPy — no PyTorch, no autograd; all the gradients computed manually. I did a version of this in PyTorch, and with PyTorch you can do it in about 40 or 50 lines, so it's really not a ton of code. What's fun is that once you've implemented these things, you can train recurrent neural network language models on different types of data and use them to generate new text, to get some insight into what these networks learn. For example, we can download the entire works of William Shakespeare, concatenate them into a giant text file — a very, very long sequence of characters — and train a recurrent neural network whose entire purpose in life is to predict the next character given something like the previous hundred characters of Shakespeare. Once we've trained this thing, we can sample from it, and after the first couple of iterations it doesn't look like it's doing
too good. Remember, what we're doing is sampling what the network thinks the next character is and feeding that sample back into the network as the next input, repeating this process to generate new data. At first this thing basically generates garbage, because the weights are fairly random. Train it a little longer and it starts to recognize some structure in the text: it makes things that look like words, puts spaces in there, maybe puts in some quotes — but if you actually read it, it's still garbage. Train a bit more and it almost looks like sentences: there are some spelling errors, but it says something like "after fall, such that the hall for a prince" — it's starting to say something. Train it even longer and it starts to get really, really good, generating text that looks fairly realistic: now it says "why do what they day, replied Natasha, and wishing to himself the fact the princess Mary was easier" — the grammar isn't perfect, but this does look kind of like real English. Train this thing for a very long time and sample longer sequences, and it generates very plausible-looking Shakespeare text. You can see what look like stage directions: "Pandarus: Alas, I think he shall come approached in the day with little strain would be attained into being never fed, and who is but a chain and subjects of his death, I should not sleep." This sounds very dramatic, very much like Shakespeare — but unfortunately it's still gibberish. You can go further and train these things on different types of data. That was the entire concatenated works of William Shakespeare; years ago I did another one. Has anyone ever taken an abstract algebra course, or an algebraic geometry course? That's a very abstract part of mathematics, and it turns out there's an open-source textbook for algebraic geometry
that's something like many thousands of pages, written in LaTeX. What I did was download the entire LaTeX source code of this several-thousand-page algebraic geometry textbook and train a recurrent neural network to generate the next character of LaTeX source given the previous hundred characters. Then you can sample fake math that the recurrent neural network invents out of the weights it was trained on. Unfortunately, it tends not to compile — it's not so good at producing exactly grammatically correct LaTeX source code — but you can manually fix some compile errors and actually get it to compile. So these are examples of generated text from a recurrent neural network trained on this algebraic geometry textbook, and it kind of looks like abstract math, right? It has lemmas, it has proofs, it even puts the little square at the end of a proof when it's done proving things. It tries to refer to previous lemmas that may or may not have been proven elsewhere in the text. And it's kind of adversarial and rude, in the way some math books are — for instance, "Proof: see discussion in Sheaves of Sets", as if you should clearly have read some reference elsewhere in this textbook in order to understand the proof. Looking at more of it: in algebraic geometry you have these cool commutative diagrams that people draw, showing relationships between different mathematical spaces, generated with some LaTeX package — and the recurrent neural network attempts to generate commutative diagrams to explain the proofs it's generating, which are also nonsense. One of my favorite examples is actually on this page as well: at the top left it says "Proof omitted", which is
definitely something you'll see in math books sometimes. We can go further: at this point you've got the idea that once you have these character-level recurrent neural network language models, you can train them on basically any kind of data you can imagine. At one point we also downloaded the entire source code of the Linux kernel and trained a recurrent neural network language model on the C source code of the kernel. Then you can sample from it and generate invented C source code, and this looks pretty reasonable: if you're not looking carefully, it definitely looks like it could be real kernel source code. It writes things like "static void do_command(struct seq_file *m, void *v)"; it puts the bracket, it indents, it writes things like "int column = 32 << ..."; it even puts in comments, like "free our user page pointer to place camera if all" — the comments don't really make sense, but it knows you're supposed to put comments. It also knows that you're supposed to recite the copyright notice at the top of files, so when you sample from this thing, it outputs the copyright notice. And it kind of knows the general structure of C source files: after the copyright notice it has a lot of #includes — Linux kernel headers and a bunch of others — then it defines some macros, defines some constants, and after all of that it starts defining functions. So just by doing this very simple task of trying to predict the next character, our recurrent neural network language model has somehow been able to capture a lot of structure in this relatively complex data, the C source code of the Linux kernel. One question you might want to ask
is: how is it doing this? What kinds of representations are these recurrent neural networks learning from the data we train them on? There was a paper from Karpathy, Johnson, and Fei-Fei a couple of years ago that attempted to answer questions like that. The idea is to try to gain some interpretability into what these recurrent neural network language models learn when trained on different types of sequence datasets. The methodology is to take the recurrent neural network, unroll it for many time steps, and ask it to perform this next-character prediction task. In the process of predicting the next character, the network generates a sequence of hidden states — one hidden state per input character — and we can ask what the different dimensions of those hidden states capture. For example, we can look at, say, dimension 56 of the hidden states. Because the hidden state is the output of a tanh nonlinearity, we know each element of the hidden state vector is a real number between negative one and one. So we can take the activation of element 56 of the hidden state and use its value to color the text the network was processing, which can give us some sense of when different elements of the hidden state light up. Here's an example of a not-very-interpretable result: we trained our network on the Linux kernel dataset, asked it to predict the next character, and at every time step picked one element — like element 56 — of the recurrent network's hidden
state, using the value of that hidden element at each character to color the text being processed. Is this visualization clear? When a character is colored red, the value of that cell is very high, close to positive one; when it's blue, it's very low, close to negative one. Then you can look at these different hidden cells and try to get some intuition for what they might be looking for. A lot of them look like this — you have no idea what they're looking for; they just look totally random. But sometimes you get very interpretable cells in these recurrent neural networks. Here's an example where we trained on Tolstoy's War and Peace, and we found that one of the cells is actually looking for quotes. What you can see is that this one particular cell of the recurrent neural network is all blue, meaning it's off, off, off — and then once it hits a quote, that one cell flips all the way on and turns red, and it remains red all the way until the closing quote, when it flips all the way back to blue. This gives us the intuition that somehow the recurrent neural network has learned a kind of binary switch that keeps track of whether or not we are currently inside a quote. We found another cell that tracks where we are in the current line: after we hit a carriage return it resets to negative one, and then it slowly increases over the course of the line. Because this dataset always had line breaks after about 80 characters, once we get to about 80 characters the network knows a new line is coming, and the cell resets back to blue. Training on the Linux source code, we found a
cell that tracked the conditions inside if statements, which was very interesting. We also found one checking whether or not we were inside a comment in the Linux source code, and ones that were tracking the indentation level of the code. I thought this was really cool: it means that even though we're just training these neural networks on the seemingly stupid task of trying to predict the next character, in the process the recurrent neural network learns features inside its hidden state that detect all these different types of meaningful structure in the data it's trying to process. That's a really cool result that gives us some insight into what these recurrent neural network language models are learning. Now, coming back to computer vision, one thing we can use these recurrent neural network language models for is the task of image captioning. Here's what we want to do — this is an example of a one-to-many problem: we input an image, feed it to a convolutional network (which you're all experts on now) to extract features of the image, and then pass the features from that convolutional network to a recurrent neural network language model that generates words one at a time to describe the content of the image. If you had a dataset of images and associated captions, you could train this whole thing using normal gradient descent. To look at this concretely, this is an example of transfer learning: step one, download a CNN model that has been pretrained for classification on ImageNet; then chop off the last two layers of that network. And here
we actually want to operate on sequences of finite length. Unlike the language modeling case — where we're mostly concerned with operating on these infinite streams of data and doing truncated backpropagation through time — in image captioning we actually want to focus on sequences that have some actual start and actual end. So the first element of the sequence is always a special token called START, which just means: this is the start of a new sentence that we want you to generate. But now we need to somehow connect the data from the convolutional neural network into the recurrent neural network. The way we do this is we slightly modify the recurrence formula that we use for producing hidden states of the recurrent neural network. Recall that previously we had seen this recurrent neural network that applies a linear transform to the input, a linear transform to the previous hidden state, and then squashes through a tanh to give the next hidden state. Well, now, to incorporate the image data, we're going to have three inputs at each time step: the current element of the sequence that we're processing, the previous hidden state, and this feature vector that was extracted from the top of the convolutional neural network pre-trained on ImageNet. Given these three inputs, we apply a separate weight matrix or linear projection to each of them, add the results together, and again squash through a tanh. So now you can see that we've modified the recurrence relation of this recurrent neural network to allow it to incorporate this additional type of information, which is the feature vector coming out of the image. After that, things proceed very much like they did in the language modeling case.
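The modified recurrence just described — a separate weight matrix for the current token, the previous hidden state, and the image feature, summed and squashed through a tanh — can be sketched in NumPy. All names and dimensions here are illustrative assumptions, not the course's actual implementation:

```python
import numpy as np

def rnn_step_with_image(x_t, h_prev, v, Wx, Wh, Wv, b):
    """One step of the captioning RNN: three inputs (current token
    embedding x_t, previous hidden state h_prev, image feature v),
    each with its own weight matrix, summed and squashed through tanh."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + v @ Wv + b)

# Toy dimensions (hypothetical): embedding 10, hidden 8, image feature 16
rng = np.random.default_rng(0)
D, H, V = 10, 8, 16
Wx = rng.normal(size=(D, H))
Wh = rng.normal(size=(H, H))
Wv = rng.normal(size=(V, H))
b = np.zeros(H)
h = rnn_step_with_image(rng.normal(size=D), np.zeros(H), rng.normal(size=V), Wx, Wh, Wv, b)
print(h.shape)  # (8,)
```

Note that the image feature v is the same vector at every time step; only x_t and h_prev change as the sequence unrolls.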
At test time, the model predicts a distribution over the tokens — the words in our vocabulary. We sample from that distribution to get the first word, in this example "man"; we pass that back to be processed by the recurrent neural network as the next element of the input sequence, then sample the next word, and this repeats. So this would be "man in straw hat", and here's the answer to your question: a special token called STOP, or END. Whenever we're processing sequences of finite length, it's very common to add these two extra special tokens into the vocabulary: a start token that we put at the beginning of every sequence, and an end token. During training we force the network to predict the end token at the end of every sequence, and then during testing, once the network chooses to sample the end token, we stop sampling, and that's the end of the output that we generate. Did that answer your question about how we know when to stop? Good. There was a question here — yes, the question is, what's the difference between blue and purple? So here there are three inputs. Green is the input of the sequence at the current time step; that would be one of these input tokens — START, man, in, straw, hat — the x at the current time step. The blue thing is the previous hidden state from the previous time step: when we're trying to predict h2, that would be h1, the hidden state at the previous time step. And the purple is the feature vector coming from the convolutional neural network, which we're calling v — a single vector that we extract once from the convolutional network and then pass to each of the time steps of the recurrent neural network.
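The sampling procedure just described — feed START, sample a word, feed the sample back in, and stop once END is sampled — can be sketched as a small loop. The `step_fn` here is a hypothetical stand-in for one step of a trained model (a real captioner would also carry the hidden state and image feature):

```python
import numpy as np

def sample_caption(step_fn, start_id, end_id, vocab_size, max_len=20, seed=0):
    """Autoregressive sampling: feed the START token, sample a word from
    the predicted distribution, feed the sample back in, and stop once
    the END token is sampled (or max_len is reached)."""
    rng = np.random.default_rng(seed)
    tokens, tok = [], start_id
    for _ in range(max_len):
        scores = step_fn(tok)                    # unnormalized scores
        probs = np.exp(scores - scores.max())    # stable softmax
        probs /= probs.sum()
        tok = int(rng.choice(vocab_size, p=probs))
        if tok == end_id:
            break
        tokens.append(tok)
    return tokens

# Dummy "model": after START (id 0) it strongly prefers token 2,
# and after anything else it strongly prefers END (id 1).
def dummy_step(tok):
    scores = np.full(5, -10.0)
    scores[2 if tok == 0 else 1] = 10.0
    return scores

print(sample_caption(dummy_step, start_id=0, end_id=1, vocab_size=5))  # [2]
```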
For each of these three inputs we have a separate associated weight matrix, with dimensions such that they can be added in a way that doesn't crash. Does that clarify a little bit? OK, so once we've got this, it's fun to look at some results for these things — this is computer vision, we get to look at images and results and have a little fun. Sometimes, when you train this thing on a dataset of images and associated captions, these image captioning models seem to produce really shockingly good descriptions of images. Here, the one at the upper left says "a cat sitting on a suitcase on the floor", which is pretty good — that's a lot more detail than we were able to get out of our previous image classification models that just output a single label. Maybe in the upper right it says "a white teddy bear sitting in the grass" — that looks pretty correct. At the bottom it says "two people walking on the beach with surfboards", "a tennis player in action on the court". So it's giving us these really non-trivial descriptions that seem really exciting. When the first papers doing these image captioning results came out, they got people really excited, because for the first time these networks were saying very non-trivial things about the images they were looking at; they were no longer just single labels like dog or cat or truck. But it turns out these image captioning models are actually not that smart, and it's really instructive to look at the cases where they fail as well. Here's an example: if we feed this image to a trained image captioning model, it says "a woman is holding a cat in her hand", which I think it says because somehow the texture of the woman's coat maybe looks like the texture of cat fur that it would have seen in the training set. Or here, it says "a person holding a computer mouse on a desk" — well, that's because this dataset came out before iPhones were prevalent, so
whenever someone was holding something near a desk, it was always a computer mouse, not a cell phone. Here it says "a woman is standing on a beach holding a surfboard", which is completely wrong, but the dataset this was trained on has a lot of images of people holding surfboards on beaches. So basically, whenever it sees someone standing near water, it just wants to say it's a person holding a surfboard on the beach, even if that's not actually what's in the image at all. We have a similar problem in this example — this is, maybe it's hard to see, a spiderweb on a branch, and it says "a bird is perched on a tree branch". Again, maybe it's just copying: whenever it saw a tree branch in training, there was always a bird there, so whenever it sees that branch it just wants to say a bird perched on a tree branch, even if that's not what's in the image at all. Maybe one more example: here it says "a man in a baseball uniform throwing a ball". Now this one I think is really interesting, because it knows it's a man in a baseball uniform, but it gets confused about exactly what action is happening in the scene. When we look at this, we have this human understanding of physics, and we know there's no way he could have been throwing a ball from that position, so he's probably diving in there to try to catch the ball — but that kind of fine-grained distinction is just completely lost on this model. So I think these image captioning models are pretty exciting, but they're actually still pretty dumb; they're pretty far from solving this computer vision task, and you really get that sense when you look at these failure modes. By the way, you'll get to implement your own image captioning model on the fourth homework assignment, which will be out shortly after the midterm. So now another thing we need to talk about is
gradient flow through these recurrent neural networks. Here is a little diagram that illustrates pictorially what's going on in this vanilla or Elman recurrent neural network that we've been considering so far. This shows the processing for one time step: we have the input x_t coming in at the bottom, the previous hidden state h_{t-1} coming in at the left; you can imagine that these are concatenated, then we have this linear transform by the weight matrix, and then we squash through this tanh non-linearity. Now you should be able to recognize some problems with what's going to happen to the gradients in the backward pass of this model. During backpropagation we receive the derivative of the loss with respect to the output hidden state h_t, and we need to backpropagate through this little RNN cell to compute the gradient of the loss with respect to the input hidden state h_{t-1}. There are two really bad things happening in this backpropagation. One is that we're backpropagating through a tanh non-linearity, and we told you repeatedly that tanh non-linearities are really bad and you shouldn't use them — so that already seems like a potential problem. But, you know, these RNNs were invented in the 90s; they didn't really know better back then, so maybe we can excuse that. The other big problem is that when we backpropagate through this matrix multiply stage, it's going to cause us to multiply by the transpose of the weight matrix — because when you backpropagate through a matrix multiplication, you multiply by the transpose of that weight matrix. So now think about what happens when we have not just a single recurrent
neural network cell, but we're unrolling this recurrent neural network for many, many time steps. As the gradient flows backward through this entire sequence, every time we flow through this recurrent neural network cell, it's going to multiply the upstream gradient by this weight matrix. So during backpropagation we take our gradient and just multiply it over and over and over again by this exact same matrix W transpose — and this is basically really bad. I'm only showing four cells on the slide, but imagine we're unrolling the sequence for 100 or 200 or a thousand time steps: then during backpropagation we're multiplying by the same matrix a thousand times. That can go bad in two ways. One is that, intuitively, if the matrix is too big — its largest singular value greater than one — then multiplying by the same matrix repeatedly causes the gradient to blow up and explode to infinity. On the other hand, if the weight matrix is somehow too small — its largest singular value less than one — then those gradients will tend to shrink and vanish towards zero during backpropagation. So we're caught on this knife edge: if the singular value is just a little bit greater than one, our gradients will explode to infinity; if it's just a little bit less than one, our gradients will vanish to zero. We'll have either the exploding gradient problem or the vanishing gradient problem, as they're called, and the only way we can get this thing to train is if somehow we arrange for our weight matrix to have all its singular values exactly 1 — that's the only way we're going to get stable training out of this kind of network over very, very long sequences. So that seems like a problem.
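This knife-edge behavior is easy to see numerically. The toy sketch below (not a full RNN — just the repeated multiply by the same matrix that dominates backpropagation through time) uses a scaled identity so the largest singular value is known exactly:

```python
import numpy as np

def backprop_norm(s, T=100, dim=5, seed=0):
    """Norm of a gradient after T backprop steps, each multiplying by W^T,
    where W has all singular values equal to s. The norm scales like s**T."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=dim)      # some initial upstream gradient
    W = s * np.eye(dim)           # singular values all exactly s
    for _ in range(T):
        g = W.T @ g               # one "time step" of backprop
    return np.linalg.norm(g)

for s in (1.1, 1.0, 0.9):
    print(f"sigma={s}: ||grad|| after 100 steps = {backprop_norm(s):.3e}")
```

With sigma just above 1 the norm grows by a factor of roughly 1.1^100 ≈ 1.4e4 (exploding); just below 1 it shrinks by roughly 0.9^100 ≈ 2.7e-5 (vanishing).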
There is one kind of hack that people sometimes use to deal with this exploding gradient problem, called gradient clipping. Here, we're not using the true gradient when we're doing backpropagation: after we compute the gradient of the loss with respect to the hidden state, we check the Euclidean norm of that vector — the gradient of the loss with respect to the hidden state is just a vector — and if the Euclidean norm of that local gradient is too high, we just clip it down, cause it to be smaller, and continue backpropagation. So what we're doing with gradient clipping is computing the wrong gradients on purpose — gradients that will hopefully not explode, or at least not explode as badly. This is kind of a horrible dirty hack; it means you're not actually computing the true gradients of the loss with respect to the model weights anymore, but it's a heuristic that people sometimes use in practice to overcome the exploding gradient problem. Now the other problem: how do we deal with these vanishing gradients, and how do we avoid this problem of singular values being very small? Well, the basic thing that people do is throw away this architecture and use a different flavor of recurrent neural network instead. So far we've been talking about the vanilla recurrent neural network, but there's another very common variant people use instead called long short-term memory, or LSTM — a very common acronym you should get to know very well. This is a slightly complicated and confusing-looking functional form, and it's not really clear at the outset, when you first see these equations, what's going on or why this is solving any problems whatsoever. But basically the intuition of this LSTM is that rather than keeping a single hidden vector at every time step, we're going to keep two different hidden vectors.
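The gradient clipping heuristic described above is simple enough to sketch directly — if the gradient's Euclidean norm exceeds a threshold, rescale the whole vector down to that threshold:

```python
import numpy as np

def clip_gradient(grad, max_norm):
    """Gradient clipping by Euclidean norm: if the gradient's norm exceeds
    max_norm, rescale it so its norm is exactly max_norm. This is the
    'wrong' gradient on purpose, but it is guaranteed not to explode."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

print(clip_gradient(np.array([3.0, 4.0]), 1.0))  # rescaled: [0.6 0.8], norm 1
print(clip_gradient(np.array([0.3, 0.4]), 1.0))  # unchanged, norm already 0.5
```

Note the direction is preserved; only the magnitude is capped, which is why it's a tolerable hack in practice.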
One is called c_t, the cell state, and the other is h_t, the hidden state. At every time step we use the previous hidden state as well as the current input to compute four different gate values — i, f, o, and g — and those are used to compute the updated cell state, and then also used to compute the updated hidden state. I also think it's interesting to point out that the paper that proposed this LSTM architecture was actually published in 1997, and maybe for the first ten years after it came out it wasn't very well known; then, starting around 2013 or 2014, people rediscovered this LSTM architecture and it became very popular again. Nowadays the LSTM is one of the most commonly used recurrent neural network architectures for processing sequences in practice. To unpack a little bit more about what this thing is actually doing, let's look at it this way. At every time step we receive some input x_t, and we also receive the previous hidden state h_{t-1}. Just like the vanilla recurrent neural network, we concatenate the input vector x_t and the previous hidden vector h_{t-1} and multiply them by some weight matrix. In the vanilla or Elman recurrent neural network case, the output of this matrix multiplication basically was the next hidden state, up to a non-linearity. But for the LSTM, instead, we carve the output of this matrix multiplication up into four different vectors, each of size H, where H is the number of elements in the hidden state. These are called the four gates: the input gate, the forget gate, the
output gate, and this other one — I don't know what to call it, I just call it the gate gate because I can't think of a better name. The intuition here is that rather than directly predicting the output from this matrix multiply, we predict these four gates, and then use the four gates to update the cell state and the hidden state. You'll notice that these four gates each go through different non-linearities: the input, forget, and output gates all go through a sigmoid non-linearity, which means they're all between zero and one, and the gate gate goes through a tanh non-linearity, which means it's between minus 1 and 1. So now if you look at the equation for c_t, you can see that to compute the next cell state, we take the previous cell state c_{t-1} and multiply it element-wise by the forget gate. The forget gate, remember, is all elements between 0 and 1, so it has the interpretation that it tells us, for each element of the cell state, whether we want to reset it to 0 or continue propagating that element forward in time — that's how we use the forget gate. Then we also add on the element-wise product of the input gate and the gate gate. The gate gate is between negative 1 and 1, so that's kind of what we want to write into each element of the cell state, and the input gate is again between 0 and 1, because it's a sigmoid, and we multiply it element-wise with the gate gate. So the interpretation is that the gate gate tells us that at every point in the cell we can either add 1 or subtract 1, and the input gate tells us how much we actually want to add or subtract at each point. That's how we use the input, forget, and gate gates. Now, to compute the final output state, what we're going to do is
take the cell state, squash it through a tanh non-linearity, and then multiply element-wise by the output gate. The interpretation here is that the cell state is an internal state, internal to the processing of the LSTM, and the LSTM can choose to reveal parts of its cell state at every time step, as modulated by the output gate. The output gate tells us how much of each element of the cell we want to reveal and put into the hidden state — if we set some element of the output gate to 0, it would be hiding that element of the cell, keeping it as kind of a private variable internal to the LSTM. There's kind of a tradition in explaining LSTMs that you've got to have a number of very confusing diagrams, so here's mine. One way to look at the processing of a single LSTM cell is that we receive two things from the previous time step: the previous cell state c_{t-1} and the previous hidden state h_{t-1}. We concatenate the previous hidden state with the current input x_t, multiply them by this weight matrix W, divide the result up into the four gates, and then use those four gates to compute the next cell state c_t, which we pass along to the next time step, as well as the next hidden state h_t, which we also pass on to the next time step.
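Putting the four gates and the two state updates described above together, one LSTM time step can be sketched as follows (single example, toy dimensions, a single shared bias vector — a sketch of the standard formulation, not the course's exact code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: concatenate [x, h_prev], multiply by one big
    weight matrix, carve the result into four gates, then update the
    cell state and hidden state."""
    H = h_prev.shape[0]
    a = np.concatenate([x, h_prev]) @ W + b   # shape (4H,)
    i = sigmoid(a[0:H])          # input gate, in (0, 1)
    f = sigmoid(a[H:2*H])        # forget gate, in (0, 1)
    o = sigmoid(a[2*H:3*H])      # output gate, in (0, 1)
    g = np.tanh(a[3*H:4*H])      # "gate gate", in (-1, 1)
    c = f * c_prev + i * g       # additive update: the gradient superhighway
    h = o * np.tanh(c)           # reveal part of the cell via the output gate
    return h, c

rng = np.random.default_rng(0)
D, H = 6, 4                      # toy input and hidden sizes
W, b = rng.normal(size=(D + H, 4 * H)), np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

The key line for the gradient-flow discussion that follows is `c = f * c_prev + i * g`: the path from c to c_prev involves only an element-wise multiply and a sum, no matrix multiply and no tanh.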
What's interesting about looking at the LSTM this way is that it gives us a different perspective on gradient flow, especially compared to the vanilla RNN. If you imagine backpropagating from the next cell state c_t back to the previous cell state c_{t-1}, you can see that this is a fairly friendly gradient pathway. We first backpropagate through a sum node — and remember, when we backpropagate through a sum node, what happens? Yeah — we copy the gradients. So backpropagating through a sum is very friendly; that's not going to kill the information. The sum node just distributes the gradients down to the inner parts of the LSTM as well as backward in time to the previous cell state. Then we backpropagate through the element-wise multiplication with the forget gate. Now, this has the potential to destroy information, but this is not directly backpropagating through a sigmoid non-linearity: from the perspective of computing the derivative with respect to the previous cell state, this is just backpropagating through an element-wise multiply by a constant between zero and one. So this again has the potential to destroy information if the forget gate is very close to zero, but if the forget gate is very close to one, then backpropagating to the previous cell state is basically not going to destroy any information. What you'll notice is that when we backpropagate from the next cell state to the previous cell state, we are not backpropagating through any non-linearities, and we're also not backpropagating through any matrix multiplies. So this top-level pathway through the LSTM is a very friendly pathway for gradients to propagate backwards during the backward pass. If you imagine chaining together multiple LSTM cells, one after another, to process a long sequence, then this upper pathway through all the cell states forms an uninterrupted gradient superhighway along which the model can very easily pass gradients backwards through time, even through many, many time steps. So now we see that the whole point of this kind of funny formulation of the LSTM is basically to achieve these better dynamics of
gradient flow during the backward pass compared to the vanilla RNN. Yeah — so the question is, we still do need to backpropagate through to the weights, so that could potentially give us some problem. But the hope here is that for the vanilla RNN our only source of gradient comes through this very long chain of time dependencies, so if our information gets very diluted across these many, many time steps, then we can't learn. For the LSTM, there's always going to be some pathway along which information is preserved through the backward pass. You're correct that when we backpropagate into the weights we are going to backpropagate through a matrix multiply and through these non-linearities, but there exists a pathway along which we do not have to backpropagate through any matrix multiplies or any non-linearities, so the hope is that that would be enough to keep the flow of information going during the learning process. Yeah — so on this question: for an LSTM, the cell state is usually considered kind of a private variable to the LSTM, and usually you use the hidden state h_t to do whatever prediction you want on the output of the LSTM. Now, this design of the LSTM as giving us this uninterrupted gradient superhighway should remind you of another architecture that we've already seen in this class — can anyone guess? Yeah, the ResNet. Remember that in residual networks we had this problem of training very, very deep convolutional neural networks with perhaps hundreds of layers, and there we saw that by adding these additive skip connections between layers in a deep convolutional network, we got very good gradient flow across many, many layers. It's basically the same idea with the LSTM: we've got these additive connections that are now giving us this uninterrupted gradient flow, not across many, many
layers, but across many, many time steps in time. So I think the LSTM and the ResNet actually share a lot of intuition. As a fun pointer, there's another thing called the highway network that actually came out right before ResNets and looks even more like an LSTM, which cements these connections a little bit more — you can check that out if you're interested. Any questions about this LSTM before we move on to something else? Yeah — the question is, how do you possibly come up with this? Well, it's called research. It's this iterative process: you have some idea — maybe I think if I do this thing then things will improve — and then I try it and it doesn't work; then I have another idea and I try it and it doesn't work; and eventually somebody is going to come up with an idea — sorry, I don't mean you specifically, I mean you generically, or me generically, any individual person — as a community, hopefully over time, eventually someone comes up with an idea that actually works well, which then gets adopted by the community. And if you look at the development of the LSTM, it actually got more complex over time — it would be kind of fun to go read this history of papers and see exactly how it developed — they start with one thing, then make it a little bit more complicated and it works better, and you iteratively refine these things over long periods of time. But yeah, if I knew how to come up with things as impactful as the LSTM — oh man, that'd be awesome, I wish. OK, any other questions on LSTMs? OK, so then: so far we've talked about single-layer RNNs, and this is something I just want to briefly mention. We've got this sequence of
inputs that we process to produce a sequence of hidden vectors, and we use that sequence of hidden vectors to produce a sequence of outputs. Well, we've seen from processing images that more layers is often better, and more layers often gives us models that perform better on whatever task we care about. So clearly we'd like some way to apply this intuition of stacking layers to recurrent neural networks as well. We can do that by just applying another recurrent neural network on top of the sequence of hidden states produced by a first recurrent neural network. This is called a multi-layer or deep recurrent neural network. The idea is that one recurrent neural network processes our raw input sequence and produces a sequence of hidden states, and that sequence of hidden states is then treated as the input sequence to another recurrent neural network, which produces its own second sequence of hidden states. And you don't have to stop with two — you can stack these things as far as your GPU memory will take you, or as much space as you have on the slide in this case. You can imagine stacking these recurrent neural networks to many, many layers, though in practice you'll often see improvements from maybe up to three or five layers or so; for recurrent neural networks it's really not so common to have the extremely deep models that we have for convolutional networks. Yeah, there was a question — the question is, do we use the same weight matrix for these different layers, or different weight matrices? Usually you would use different weight matrices for the different layers. You should think of each layer of the RNN as kind of like a layer in a convolutional network.
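The stacking just described — each layer treating the hidden-state sequence of the layer below as its input sequence, with separate weights per layer — can be sketched with the vanilla RNN update (toy dimensions, illustrative only):

```python
import numpy as np

def deep_rnn_forward(xs, layers):
    """Multi-layer vanilla RNN sketch: the first layer processes the raw
    input sequence; each subsequent layer treats the hidden-state sequence
    of the layer below as its own input sequence. `layers` is a list of
    (Wx, Wh, b) tuples, one per layer -- separate weights per layer."""
    seq = xs
    for Wx, Wh, b in layers:
        h = np.zeros(Wh.shape[0])
        outs = []
        for x in seq:
            h = np.tanh(x @ Wx + h @ Wh + b)   # vanilla RNN update
            outs.append(h)
        seq = outs             # hidden states become the next layer's inputs
    return seq                 # top layer's sequence of hidden states

rng = np.random.default_rng(0)
D, H, T = 3, 4, 5              # input dim, hidden dim, sequence length
layers = [(rng.normal(size=(D, H)), rng.normal(size=(H, H)), np.zeros(H)),
          (rng.normal(size=(H, H)), rng.normal(size=(H, H)), np.zeros(H))]
hs = deep_rnn_forward([rng.normal(size=D) for _ in range(T)], layers)
print(len(hs), hs[0].shape)  # 5 (4,)
```

Note that the first layer's input-to-hidden matrix has shape (D, H) while higher layers use (H, H), since their "inputs" are the hidden states from below.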
At every layer in a convolutional network we typically use different weight matrices, and very similarly, we'll use different weight matrices at every layer in these deep recurrent neural networks. So then, going back to this other question about how people come up with these things: there are actually other architectures, and this is something people play around with a lot. It's very appealing to think, oh, maybe I can just write down a better equation for a recurrent neural network and all of our problems will go away. So you'll see a lot of papers that play around with the exact architecture and update rules of different recurrent neural networks — there's a ton of papers about this. One that I want to point out and highlight is this one on the left, called the gated recurrent unit, which looks kind of like a simplified version of an LSTM. I don't want to go into the details here, but it has this similar interpretation of using additive connections to improve the gradient flow. And as compute power has gotten cheaper and cheaper, people have also started to take brute-force approaches to this. There was a paper from a couple of years ago that I liked where they used evolutionary search on the space of update formulas: they did a brute-force search over tens of thousands of update formulas, where each update formula gives rise to a different recurrent neural network architecture. So rather than trying to come up with it yourself, you have some algorithm that automatically searches through update formulas and tries them out on different tasks. Here are examples of three of the update formulas that this paper found, but the general takeaway is, you know, nothing really works that much better than an LSTM, so let's just stick with that. So I think maybe the interpretation here is that
there are actually a lot of different update formulas that would result in similar performance across a large number of tasks, and maybe the LSTM just happens to be the one that people discovered first; it has a nice historical precedent, so people continue using it. We also talked a couple of lectures ago about this area of neural architecture search, where you train one neural network to produce the architecture of another neural network — we saw that very briefly in the context of convolutional networks — and it turns out people have applied similar ideas to recurrent neural network architectures as well. There was a paper from Google where they have one recurrent neural network predict the architecture of a recurrent neural network cell, encoded as a sequence, and then train that thing on hundreds of GPUs for a month; eventually, at the end of training, you get this learned architecture on the right that, I guess, worked a little bit better than the LSTM. But I think the takeaway from a lot of these papers is that the LSTM and GRU architectures we already have are actually pretty good in practice, and even if they're not perfectly optimal, they tend to perform well across a wide variety of problems. So then, a summary for today: we introduced this whole new flavor of neural networks called recurrent neural networks — I hope I convinced you that they're cool and interesting and let you solve a lot of new types of problems — we saw in particular how the LSTM improves the gradient flow compared to vanilla recurrent neural networks, and we saw this image captioning example that lets us build neural networks that write natural language descriptions of images, which you'll get to do on your fourth assignment. But before we get to the fourth assignment, coming up next time will be the
midterm — so hopefully that will be fun on Monday, and I'll see you there.
Deep_Learning_for_Computer_Vision
Lecture_17_3D_Vision.txt
so welcome back to lecture 17 that's a prime number it's very exciting and in today's lecture we're gonna talk about 3d vision so in the last two lectures we had really done this kind of whirlwind tour of a bunch of different tasks in computer vision where we wanted to not only identify objects in images but really localize which parts of images correspond to the different objects that we recognized so we talked about this wide array of different types of tasks in computer vision including semantic segmentation object detection instance segmentation and we talked about others like keypoint estimation that are also not on the slide but the last two lectures basically have been a whirlwind tour of different ways that you can predict the two-dimensional shape or the two-dimensional structure of objects that appear in images and these have a lot of really useful real-world applications so these are types of computer vision tasks that actually get used a lot in practice out there in the world but it so happens that the world we live in is actually not two-dimensional right the world that we live in is actually three-dimensional so there's a whole separate research area which is how can we add this third spatial dimension to our neural network models and build neural network models that can not just operate on two-dimensional data but somehow push into this third dimension and understand the 3d structure of the world and the 3d structure of different types of data and that's going to be the topic of today's lecture how can we add three-dimensional information into different types of neural network models so we're gonna focus on two categories of 3d problems today one is the task of predicting 3d shapes from single images so here we're going to want to input some single RGB image on the left and then output some representation of the 3d shape of the objects in that image and the second
and here for this task of 3d shape prediction we're using the 3d data as the output of the neural network model so the input is still going to be our favorite two-dimensional image representation that we've used many times before but now the output of the model will somehow be a three-dimensional representation of the objects but sometimes you might not want to produce three-dimensional outputs from your neural network models instead it's also common that you might want to ingest or input three-dimensional information into your neural network models and then perform maybe some classification decision or some segmentation decision based on 3d input data so that's what we're gonna focus on today how can we structure our neural networks to both predict different sorts of 3d information as well as ingest different sorts of 3d information and for both of these problems we'll focus on the fully supervised setting so for all the tasks we talk about today we'll assume that we have access to a training set that has maybe the input image and the corresponding 3d shape or the 3d shape and the corresponding semantic label that we want to predict so everything today will make this fully supervised assumption that we have access to a full training set to supervise everything we want to predict but I should point out that this is really just scratching the surface of what types of topics are possible in 3d computer vision and it's really sort of impossible to do proper justice to this topic in just a single lecture so instead we'll have to just point out on one slide that there's a lot more to 3d computer vision than just these shape prediction or shape classification types of problems and what's really interesting about 3d computer vision is that this is one area where there's actually a lot of non deep learning methods that are still alive and well and still very
important to know about and that's because there's a lot of geometry involved in the 3d structure of objects and the 3d structure of the world so there's a whole large set of tasks that people work on in 3d computer vision that we just don't have time to talk about today but to give you a sense one thing you might want to do is this concept of structure from motion where you maybe want to input a video sequence that's a sequence of 2d image frames and now you want to predict or reconstruct the trajectory of the camera through the 3d world over the video sequence and that's just one flavor of problem that you can work on in 3d computer vision that's beyond the scope of what we have time to talk about in a single lecture so I just want to give you that context that there's a lot more to the world of 3d vision than we're going to talk about today but today we'll just focus on these two particular problems of supervised shape prediction and supervised shape classification so any questions on this sort of preamble about 3d computer vision before we really dive into these different types of models alright so if we're going to talk about 3d shape prediction and 3d shape classification then I've been a little bit cagey here with this term 3d shape so that's kind of a loose ill-defined term but in practice there's a lot of different representations that people use to model 3d shapes and 3d information so we're going to structure this by talking about five different types of 3d shape representations that people work with often in practice that all have their different pros and cons and we'll see how each of these five different types of 3d shape representations can be processed or predicted with neural network models and if these cartoon graphics of the five
shape representations are maybe not super clear at the outset hopefully by the end you'll understand that each of these five little cartoon pictures is meant to show a different representation of the same underlying 3d shape so the first representation to talk about is the depth map so a depth map is conceptually a very simple 3d shape representation and basically what a depth map does is assign to each pixel in an input image the distance from the camera to that pixel right because each pixel in the image corresponds to some object out there in the real world and now a depth map tells us for each pixel in the image what is the distance in meters between the camera and that position out there in the real world that the pixel represents so compare this to a traditional RGB image that we're used to working with which is a height by width grid of pixel values where each pixel value gives the color of a pixel the depth map is a similar 2d grid where now the value at each pixel is not a color but the depth in meters at that pixel and it's very common to combine an RGB image with this fourth channel of depth information which gives an RGB-D image and sometimes these RGB-D images are actually called 2.5D images because they're not really full 3d one of the trade-offs of these depth images is that they can't capture the structure of occluded objects right because say in this example here there's actually part of the bookcase that's occluded by the couch in the top image but the depth map doesn't have any 3d representation for the portion of the bookcase which is behind the couch instead this RGB-D or depth map representation is only able to represent the visible portions of the image so for that
reason we sometimes think of it as a slightly less powerful or less general 3d representation which we encapsulate by calling it 2.5D to mean that it's not fully 3d but one reason why this depth map type of data is very important is that we can actually capture depth map data with various types of raw 3d sensors so something like the Microsoft Kinect that you may be familiar with actually uses a form of structured light to estimate these depth images directly using some fancy sensor tricks or if you look at something like the Face ID sensor on an iPhone that's also capturing some kind of depth image because it's projecting out these infrared dots and is then able to use that information to estimate the depth at each pixel of the image that it captures so understanding how to work with these depth map images is super important because this is actually a type of 3d data that we can just capture from the world using actual sensors and then one task that you might want to do is to input a vanilla RGB image and then try to predict this depth channel that is try to predict for every pixel in the input image what is the distance from the camera to the object out in the world that that pixel is covering and it turns out that the architecture we could use to predict this is actually something that we saw a lot in the last lecture and that's this idea of a fully convolutional network so if you recall in the last lecture when we talked about this task of semantic segmentation there we wanted to predict for every pixel in the input image the semantic category label of the pixel and there we saw that a fully convolutional network with some pattern of downsampling and upsampling inside the network was a useful neural network
architecture that let us make one prediction per pixel and we can reuse that exact same type of architecture for predicting these depth maps so then in order to predict the depth map from a single image you would input your RGB image to some fully convolutional network the final convolutional layer of that network would have one filter or one output channel whose output would be interpreted as the depth that we're trying to predict and then we would train this thing using some loss function that compares for every pixel the predicted depth against the corresponding ground truth depth and well obviously we want our predicted depth to be the same as the ground truth depth or do we it turns out that's actually not quite possible with 3d vision and the reason is this fundamental problem that we run into with 3d representations which is the problem of scale-depth ambiguity right if you're looking at a single image you can't really tell the difference between a large object that's very far away and a small object that's very close to you so in particular if we had this image of a cat and if you were looking at a cat that was right in front of your eye versus a cat that was twice as large and two times as far away from you they would look exactly the same they would project the exact same image onto the retina in your eye or onto the sensor in any kind of digital camera and for that reason the absolute scale and the absolute depth are actually ambiguous from a single 2d image so whenever you're working with any kind of 3d representation or 3d prediction problem it's always important to think about this potential problem of scale-depth ambiguity and it's often the case that we need to actually change
something in the structure of our neural network model in order to deal with this problem of scale-depth ambiguity so I don't want to walk through the math here in concrete detail but it turns out that there's a very clever loss function that we can use for this depth prediction problem that is actually scale invariant and what I mean by that is suppose that our neural network predicted a depth that was correct up to some constant scaling factor of the true depth right like maybe all of the predicted depths from our network were one half of what they were supposed to be from the ground truth well then this scale-invariant loss function would still assign zero loss to that situation the way you get zero loss with this loss function is that if there exists a scalar that you can multiply your predictions by to then match the ground truth perfectly then in that case you get zero loss so this loss function does not penalize a global multiplicative offset in scale and there's some clever math behind this loss function and to see exactly why and how that works I suggest you look into the 2014 NeurIPS paper that's referenced on the slide which will walk you through the mathematical details of exactly how this scale invariance is achieved with this particular mathematical form so then there's actually another very related 3d shape representation that is very similar in spirit to this idea of RGB-D or depth images and that's the idea of a surface normal map or surface normal image so just like a depth image assigns to each pixel the distance in meters between the camera and the object out there in the world what a surface normal representation will do is tell us for each pixel what is
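To make the scale-invariant idea concrete, here is a small numpy sketch of a loss in the spirit of the one just described (the function name and the lambda weighting are my own choices; see the 2014 NeurIPS paper referenced on the slide for the exact published form):

```python
import numpy as np

def scale_invariant_loss(pred_depth, gt_depth, lam=1.0):
    # Per-pixel difference in log-depth space; a global scale factor on
    # pred_depth becomes a constant additive shift in d.
    d = np.log(pred_depth) - np.log(gt_depth)
    n = d.size
    # With lam=1.0 the second term exactly cancels any constant shift in d,
    # so predictions that are a constant multiple of the truth get zero loss.
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2
```

For example, predicting exactly half the true depth at every pixel gives zero loss, while errors that vary from pixel to pixel are still penalized.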
the orientation of the surface of that object out there in the world so for every pixel we have some unit vector that tells us the orientation of the surface of the object that that pixel is showing and it's typical to draw these surface normal images using RGB colors so for this particular example the image on the right is showing a normal map version of the image on the left here blue represents pointing up so you can see that the floor and the top of the bed are colored to mean that those normal vectors are pointing up red is pointing one way to say that the side of the bed is pointing that way and green is pointing the other way so if you look at the cabinets you can see that they're colored green and the exact mixture of RGB tells us the exact orientation of the surface normal at every point in the image and now we can imagine predicting these surface normals using a very similar technique right we can just take our RGB image run it through a fully convolutional network and predict at the output a three-channel image that tells us this three-dimensional vector at every position in the input image and now for the loss function here we want to compare the angles between two vectors so we can use a per-pixel dot product normalized by the norms of the vectors so that our loss function is related to the angle between the vector that our network is predicting and the one that is present in the ground truth so it actually turns out that you can train one joint network that does semantic segmentation and surface normal estimation and depth estimation all at once one network that will input a single RGB image and then predict for you all of
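The per-pixel normalized dot product loss just described can be sketched in numpy like this (a minimal illustration of mine, not code from any particular paper):

```python
import numpy as np

def normal_loss(pred, gt, eps=1e-8):
    # pred, gt: (H, W, 3) arrays of surface normal vectors.
    # Normalized dot product = cosine of the angle between the two vectors.
    cos = (pred * gt).sum(axis=-1) / (
        np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1) + eps)
    # 0 when predicted and true normals point the same way, 2 when opposite.
    return (1.0 - cos).mean()
```

Because the dot product is normalized, only the direction of the predicted vector matters, not its length.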
those things all in a single forward pass so that's a fairly conceptually simple 2.5D representation but I think it's actually pretty useful in practice for a lot of different applications because once you have a depth map and a surface normal map that actually gives you quite a lot of information about the 3d structure of the image that you're looking at but of course the drawback of these surface normal or depth map representations is that they can't represent occluded parts of the image so if we want a more complete 3d representation of our scenes we need to move on and consider other types of 3d shape representations so the next 3d shape representation we can think about is a voxel grid now a voxel grid is conceptually very simple right what's going on in a voxel grid is we're just going to represent the 3d world as some 3d grid and within each cell of the grid we have it turned on or off to say whether or not that cell in the grid is occupied so this is basically like a Minecraft representation of the world right we're assuming the world is built from blocks and then there's some property at each grid cell in the world and these voxel representations are conceptually pretty easy and straightforward to think about it's basically like the mask representation that we used in Mask R-CNN for representing foreground or background of an object except extended into 3d so you can imagine that all of the similar machinery that we used for processing two-dimensional occupancy grids in Mask R-CNN or for segmentation can be used to represent these 3d voxel occupancy grids but now a big problem with voxel representations is that you actually need to use a very high voxel resolution if you're
going to capture the very fine details of the objects in the scene so as an example if you look at this chair there's actually a lot of very fine detail over the part of the chair where you might put your hand there's a very subtle curvature there that looks like it would be very comfortable to rest your hand on but if you look at the voxel representation on the right you can see that it's very blocky and a lot of this really fine-grained geometry of the chair has been lost as we move from the actual input image to some voxelized representation of the scene and to actually recover these very fine details we would need to use a very very high voxel resolution and that could be computationally expensive but it turns out that actually processing voxel grids is fairly conceptually simple right so suppose that we were given a voxel grid as input like we received as input this voxel representation of the chair on the right and now our task was to classify these different voxel grids and say is this grid the shape of a chair or an airplane or a couch or something else well if we wanted to do this kind of classification of a voxel grid we can use a very familiar sort of convolutional neural network architecture the difference is that now we need to use 3d convolution or three-dimensional convolution as our basic building block so here the input to such a 3d convolutional neural network would be this raw voxel grid telling us for every point in 3d space whether that voxel is occupied or not and now every layer of the model would be some 3d convolution operation where now our convolutional kernel is some little three-dimensional cube that we're going to slide over every point in 3d space over the previous feature map compute inner products and that will give us a scalar output at the next layer so now you can imagine building up three-dimensional
convolutional neural networks that are very similar in structure to the familiar 2d convolutional neural networks that we've seen many times before so you can build up maybe several layers of 3d convolution followed by a couple fully connected layers or some kind of global average pooling layer and then finally go to some classification layer and basically all of the types of neural network architectures that we're familiar with for two-dimensional images you can imagine porting over to these 3d voxel grids in a fairly straightforward way oh yeah so to explain the input dimension a little bit so here the one is the feature dimension so at every stage of the network we have a four-dimensional tensor we have three spatial dimensions and one channel or feature dimension so the input to the network is a voxel grid so there are three spatial dimensions of 30 by 30 by 30 and then for the input we have one feature at every point in that voxel grid which is whether or not that voxel is occupied but now as we move through the 3d convnet we will still have three spatial dimensions at each layer but now we might have a whole feature vector at every point in that 3d grid which gives us a 4d tensor so if you look at the second layer in this network it's a 48 by 13 by 13 by 13 grid well the 13 by 13 by 13 is the spatial size and within every point of that grid we have a 48-dimensional vector which means that this layer was produced using a 3d convolution operator that had 48 filters because each of our three-dimensional filters is going to give rise to a full cube with one scalar value at every point and then we need to stack up those cubes to give us a four-dimensional tensor does that clarify the dimensions of these networks a little bit yeah yeah
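As a concrete sketch, here is a tiny PyTorch classifier over 30-cubed occupancy grids; the first layer mirrors the 48 by 13 by 13 by 13 example just discussed, but the remaining channel counts and the 10 output classes are illustrative choices of mine, not any specific published architecture:

```python
import torch
import torch.nn as nn

# Tiny 3D-convolutional classifier for binary 30x30x30 occupancy grids.
model = nn.Sequential(
    nn.Conv3d(1, 48, kernel_size=5, stride=2),   # (1, 30^3) -> (48, 13^3)
    nn.ReLU(),
    nn.Conv3d(48, 96, kernel_size=3, stride=2),  # (48, 13^3) -> (96, 6^3)
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(96 * 6 * 6 * 6, 10),               # scores for 10 shape classes
)

voxels = (torch.rand(4, 1, 30, 30, 30) > 0.5).float()  # batch of binary grids
scores = model(voxels)                                  # shape (4, 10)
```

Note that only the input is binary here; the kernels and all intermediate activations are real valued, exactly as discussed below.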
so then the input would be some binary tensor that just says whether every point in space is occupied or not although if you had some other type of information like if you were literally trying to work on Minecraft then you might actually have the block type at every position in space which you might represent as an integer or some one-hot representation but if your grid is representing raw 3d shapes then the input would usually be some binary representation just saying whether or not each point in space is occupied oh yeah but the kernel does not need to be binary so typically only the input to this network would be binary and everything else would be real valued just like it has been in all the applications we've seen so far so the kernels would be real valued and each of the intermediate layers would be real valued it's just the input that is going to be binary in this case okay so then the next task you might want to do with voxels is actually predict voxels from an input image so in the previous case we were assuming we receive voxels as input and then we want to classify them or make some prediction on the input voxels a related task is say we receive an input image and now we want to predict a voxel grid that gives the 3d shape of that input image so now the input on the left is our familiar three-dimensional tensor giving two spatial dimensions and one RGB channel dimension and all the way out at the other side on the right we need to somehow end up with a four-dimensional tensor that has three spatial dimensions and one channel dimension giving us the occupancy probability at every point in the voxel grid so then we need some architecture that lets us add an extra spatial dimension somewhere inside the model and then assuming we had some way to convert the spatial dimensions in the right way then the output we
could imagine training this thing with some cross-entropy loss because ultimately we predict an occupancy probability or occupancy score for every point in the voxel grid and then we compare that with the binary occupancies that we have in our ground truth so you could imagine training this thing with a logistic regression type of binary classification loss but now as for the network architecture one fairly common way to predict voxel grids would be to bridge the gap between 3d and 4d tensors using a fully connected layer so what we can imagine is processing our input image with a familiar two-dimensional CNN and then at the end of the 2d CNN we would have a three-dimensional tensor that has two spatial dimensions H and W and one channel or feature dimension C and then you could imagine flattening this three-dimensional tensor into a big vector having a couple fully connected layers and then from the output of the fully connected layers we could reshape back into a four-dimensional tensor so we use these fully connected layers in the middle to add an extra spatial dimension between our three-dimensional input and the four-dimensional output that we want to predict and then once we've got this initial three-dimensional output then you can imagine doing some three-dimensional convolution with three-dimensional spatial upsampling on the right-hand side of the figure in order to go from this 3d representation with small spatial dimensions up to a 3d representation with the large spatial dimensions that we finally want to predict at the output of the network and you can imagine that the second half of the network would have 3d analogs of all the different unpooling or upsampling operations that we talked about last lecture in the context of semantic segmentation so this is
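A minimal PyTorch sketch of this fully-connected bridge might look as follows; all sizes here (64-channel 4x4 features in, a 16-cubed grid out) are hypothetical numbers chosen to keep the example small:

```python
import torch
import torch.nn as nn

class VoxelHead(nn.Module):
    """Bridge 2D CNN features to a 3D voxel grid: flatten -> FC -> reshape
    into a coarse 4D tensor, then upsample with 3D transposed convolutions."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(64 * 4 * 4, 32 * 4 * 4 * 4)
        self.up = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 4^3 -> 8^3
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 8^3 -> 16^3
        )

    def forward(self, feats2d):           # feats2d: (N, 64, 4, 4) from a 2D CNN
        x = self.fc(feats2d.flatten(1))   # (N, 2048)
        x = x.view(-1, 32, 4, 4, 4)       # reshape adds the third spatial dim
        return self.up(x)                 # (N, 1, 16, 16, 16) occupancy scores

scores = VoxelHead()(torch.randn(2, 64, 4, 4))
```

The reshape after the fully connected layer is where the extra spatial dimension appears, and the transposed 3D convolutions play the role of the 3D upsampling on the right-hand side of the figure.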
a fairly straightforward way to deal with predicting 3d voxel information but it turns out that this type of architecture is very computationally expensive because any kind of 3d convolution operation is going to be extremely computationally expensive right the number of receptive fields is now going to scale cubically with the spatial size of the feature grid which means that the computational cost of performing 3d convolutions is very high compared to the cost of doing 2d convolutions so as a result sometimes people try to predict 3d voxel grids using only 2d convolutions and this is sometimes called a voxel tube representation and here the idea is that our input image is going to be a three-dimensional tensor with three channel dimensions and two spatial dimensions and then we're going to go through some two-dimensional convolutional network where at every point in the network we will still have two spatial dimensions and one channel dimension but now the very last layer of our network will be very special say that we want to predict an output voxel grid of size V cross V cross V then for the last layer in our 2d CNN we will arrange the convolutions such that the 2d spatial size of the final layer of our 2d CNN will be V cross V and then the number of output channels or output filters of the final 2d convolution will be V and now at the very end of the network we will play a bit of a trick where the output of the convolution will literally have two spatial dimensions and one channel dimension but when computing the loss we will interpret that channel dimension as actually the depth dimension of the output tensor and by kind of
using this voxel tube representation it lets us predict voxel outputs using only 2d convolutions which is much more computationally efficient and this is called a voxel tube representation because it has this interpretation that we're doing 2d convolution that looks at the input image and then the final layer of the convolution is predicting a tube along the channel dimension that gives us a whole tube of voxel probabilities or voxel outputs as the channel outputs of our final 2d convolutional layer so are these two different approaches of 3d convolution and voxel tube representations clear for predicting voxel outputs yeah that's a good question so the question is do we sacrifice anything when we move from this 3d convolution model to this voxel tube representation model and what we lose is actually translational invariance in the Z dimension right so one nice property of convolutions is that they don't care about the position in space where the inputs are located right so suppose that we were doing 2d convolution and trying to recognize a cat then recognizing a cat in the upper left-hand corner and recognizing a cat in the lower right-hand corner should be exactly the same because if we're sliding two-dimensional filters over the image then wherever our cat filters intersect the cat they'll fire cat features but now with a 3d CNN we would also get three-dimensional spatial invariance so if there was some particular 3d structure in the input data that could occur at any point in 3d space then we could imagine having a 3d kernel that is invariant to arbitrary 3d translations of the input but when you're using a voxel tube representation that's not the case because now suppose that you wanted to represent somehow in the model different shifts in the Z dimension then you
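The trick at the end of the voxel tube head, reinterpreting output channels as the depth dimension, can be sketched in a few lines (V and the 64 input channels here are illustrative values of mine):

```python
import torch
import torch.nn as nn

V = 32                                  # desired V x V x V voxel resolution
# Final layer of the 2D CNN: exactly V output filters.
last_conv = nn.Conv2d(64, V, kernel_size=3, padding=1)

feats = torch.randn(2, 64, V, V)        # 2D feature map already at V x V
tube = last_conv(feats)                 # (2, V, V, V): purely 2D convolution
# Reinterpret the channel dimension as the depth dimension of the grid:
voxel_scores = tube.unsqueeze(1)        # (2, 1, V, V, V) per-voxel scores
```

No 3D convolution ever runs here; the third spatial dimension exists only in how we interpret the channel axis when computing the loss.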
would actually need to learn separate 2d convolutional filters to represent each possible offset in the Z dimension so basically what you're giving up with this voxel tube representation is translational equivariance in the Z direction but you still get translational equivariance in the XY plane so I think that's what you're giving up here okay but then a big problem with voxel representations is that they take a lot of memory we already noted that if we wanted to represent the very fine-grained structure of objects then we would need to use very high resolution voxel grids and it turns out very high resolution voxel grids take a lot of memory and GPUs don't actually have enough memory to work with them so as an example suppose we wanted to represent a voxel grid that was 1024 by 1024 by 1024 and because this is a neural network within each cell of the voxel grid we want to have a 32-bit floating-point number that represents the occupancy probability or occupancy score at every point in this high-resolution voxel grid well then just storing this tensor would take almost four gigabytes of memory and that's not counting all of the convolutional layers that we would need in order to actually predict this high-resolution voxel grid so as a result of these very high memory requirements people consider it not feasible to use naive voxel grids at very high spatial resolutions but there are some tricks that people sometimes play in order to scale voxel representations up to higher spatial resolutions so one trick is to use a kind of multi-resolution voxel grid and one way to do this is this idea called an octree so I don't really want to go into too much detail here but the idea is that we're going to kind of
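The four-gigabyte figure for the 1024-cubed grid is easy to verify directly:

```python
# One float32 occupancy score per cell of a 1024^3 voxel grid.
bytes_needed = 1024 ** 3 * 4          # 4 bytes per float32 value
gib = bytes_needed / 1024 ** 3        # = 4.0 GiB for a single grid,
print(gib)                            # before any intermediate activations
```

And that is just one tensor; the activations of every convolutional layer that produces it would each need comparable amounts of memory.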
One trick is to use a kind of multi-resolution voxel grid, and one way to do this is an idea called an octree. I don't really want to go into too much detail here, but the idea is to represent the voxel grid at multiple resolutions: we capture the coarse spatial structure of the object using a low resolution voxel grid, maybe 32 cubed, and then fill in the fine details by turning on a sparse subset of voxel cells at higher spatial resolutions like 64 cubed or 128 cubed. Implementing these things gets quite tricky, because you need to deal with mixing multiple resolutions and with sparse representations of the voxel grids, so the implementation of these structures is a bit non-trivial — but if you can manage that implementation hurdle, you can use this kind of trick to scale voxel representations to fairly high spatial resolutions. Another trick that I thought was kind of cute is this idea of nested shape layers, which is kind of like those nested matryoshka Russian dolls. The idea is that rather than representing the full 3D shape as a dense voxel grid, we instead represent the shape of the object from the inside out: we have some coarse outer layer, minus some negative voxels that are inside, plus some more positive voxels, minus another layer of negative voxels — and we can represent all of these layers sparsely. We don't have to represent the full voxel grid in a dense way; we just represent it as this sum and difference of a few sparse voxel layers. So this is another way that people are able to scale voxel representations to higher spatial resolutions. OK, so that's the voxel grid representation, and it's one that actually gets used quite a lot in practice. Now, another really interesting 3D shape representation is the implicit surface. The idea with an implicit surface is that we want to represent our 3D
shape as a function. What we're going to do is learn some function that inputs a coordinate in 3D space and outputs the probability that that arbitrary position in 3D space is occupied by the object. With a voxel grid, what we're essentially doing is sampling such a function at some finite set of points in 3D space and storing those samples in an explicit grid representation; with an implicit function, we instead use the mathematical function itself to represent the 3D shape implicitly. We can sample this function at arbitrary points in 3D space, and it should tell us whether each position is inside or outside the object. The exterior surface of the object is then represented as the level set of points in 3D space where the occupancy probability equals 1/2. We can see this representation visually on the left, where the color of each position in space shows the value the implicit function would take if we evaluated it at that point — blue corresponds to values very close to 1, red to values very close to 0, and the white region in the middle is the 1/2 level set that represents the actual surface of the 3D shape. You'll also sometimes see this called a signed distance function, where the idea is that the function gives the Euclidean distance from a point in 3D space to the surface, with the sign of the distance indicating whether the point is inside or outside the object. These are basically equivalent representations — it's just a question of whether the output of the function is between 0 and 1 or between minus infinity and infinity. And of course, whenever you see a complicated function that you might want to represent or learn, what we're going to do is learn it as a neural network. So we learn a neural network that inputs a 3D coordinate and outputs a probability saying whether that coordinate is inside or outside the shape. You can imagine training such a function by having a dataset of samples from your 3D shape and training the network to classify those coordinates as being inside or outside. Now, if we actually wanted to extract an explicit shape representation from this learned function, we could imagine sampling the learned function at some grid of points in space — the function tells us whether each one is inside or outside — and then going back and resampling the function at new points near the boundary between inside and outside. You can imagine iteratively resampling new points from this learned implicit function in order to extract an explicit representation of the boundary of the shape that the implicit function represents. Of course, this has a lot of hairy implementation details, as you might imagine: the exact procedure for hooking up these architectures, how you connect image information into these SDF neural network functions, and the exact algorithm for extracting a 3D shape from a trained SDF are all complicated details that I don't really want to get into. I just thought this is an interesting way to represent 3D shapes, because we're representing the shape implicitly as the values computed by a learned function.
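To make the implicit-function idea concrete, here is a tiny analytic example — a signed distance function for a sphere, written in plain Python. In the lecture's setting, this closed-form function would be replaced by a learned neural network; the sphere here is purely an illustrative stand-in:

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance to a sphere centered at the origin:
    negative inside, zero on the surface, positive outside."""
    return math.sqrt(x * x + y * y + z * z) - radius

def occupied(x, y, z, radius=1.0):
    """Hard occupancy view of the same shape: 1 inside, 0 outside.
    A learned network would instead output a smooth probability
    whose 1/2 level set is the surface."""
    return 1.0 if sphere_sdf(x, y, z, radius) < 0 else 0.0

print(sphere_sdf(0.0, 0.0, 0.0))  # -1.0 (deep inside)
print(sphere_sdf(1.0, 0.0, 0.0))  # 0.0  (on the surface, the level set)
print(occupied(2.0, 0.0, 0.0))    # 0.0  (outside)
```

Extracting an explicit surface then amounts to hunting for the zero level set of `sphere_sdf` by sampling it at points in space, exactly as described above.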
Most of the other representations we use, by contrast, explicitly represent the shape of the object using primitives in 3D space — so I think this is an interesting 3D shape representation to be aware of. OK, so the next 3D shape representation to think about is the point cloud. A point cloud representation basically says we're going to represent a 3D shape as a set of points in 3D space, where the set of points somehow covers the surface of the 3D shape we want to represent. For example, if we wanted to represent this airplane as a point cloud, we might represent it with many, many points lying on the surface of the airplane. One nice property of point cloud representations is that they're somehow more adaptive than voxel grids. We saw that using a voxel grid to represent 3D shapes with very fine details and high fidelity requires a very high voxel resolution; with a point cloud, we can instead represent fine details by varying the density of the points in different regions of space. For the parts of the object that require very fine detail — maybe the tips of the wings or the tail fins of this airplane — you can imagine putting more points there to capture those details, whereas other parts of the object, like the fuselage of the plane, that don't have as much fine structure can be allocated fewer points and represented with less fidelity. That means that even with a fixed, finite number of points to allocate to your 3D shapes, you can position those points in space in different ways to more flexibly
or adaptively represent areas of high and low detail in the shapes you want to represent. One downside of point cloud representations, though, is that you need to do some kind of post-processing if you actually want to extract a 3D surface to visualize. That's clear even from the visualization on the screen: mathematically, each point in the cloud is infinitesimally small, and there's no way to actually visualize infinitesimally small points — so even just to visualize a point cloud, we need to inflate the points to balls of some finite size and render those balls, which is what we're showing here. It also means that the raw point cloud representation isn't something we can work with directly for a lot of applications; to get a dense surface, we might need to post-process the point cloud and convert it into some other format or representation to render, visualize, or work with. That said, the point cloud representation is still a very useful thing to work with in neural networks, and it's actually a very common representation in, for example, self-driving car applications. A self-driving car has a spinning lidar sensor on the roof that collects a point cloud representation of the environment around it — so for these applications, the raw data the system is ingesting is a point cloud of the world around it, and it's important that we're able to build neural network models that can ingest these raw point cloud representations and make decisions based on raw point cloud inputs. So then, one kind of neural
network architecture that people often use for ingesting point cloud inputs is this so-called PointNet architecture. This is a simplified version, but what we want to do is input a point cloud with P points, where each point has an (x, y, z) position in 3D space, and then make some classification or regression decision based on that point cloud input. Maybe one thing we might want to do is classify the category of the shape represented by the input point cloud. So we need some kind of neural network architecture that can input a set of points and output a classification score. What's interesting here is that we don't want the order of the points in the cloud to matter: the point cloud is really a set of 3D points, and the order in which the points happen to be laid out in memory should not affect the result. That means — like the transformer we talked about several lectures ago — we want the operations performed by our neural network to be invariant to the order of the input points. One way to achieve this is the PointNet architecture. What we're going to do is have a little MLP — a multi-layer perceptron, a fully connected neural network — that inputs a single 3-dimensional coordinate, passes it through several fully connected layers, and finally outputs a feature vector of dimension D. We run this fully connected network independently on each point in the cloud, which gives us a feature vector for every point in the cloud. And because the MLP is applied independently per point, you could imagine running it on point
clouds with arbitrary numbers of points. Once we've used this fully connected network to extract a feature vector for every point in the cloud, we use a max pooling operation over all of the points, which collapses the whole cloud down to a single feature vector of dimension D. That single feature vector can then be fed to some other fully connected network to output our final class scores or class probabilities. Because the max function doesn't care what order its inputs were in, this architecture doesn't care what order the points were laid out in the input tensor — it really operates on sets of input points, so it's quite appropriate for dealing with point cloud representations. This is actually quite a simplified version of a PointNet architecture: it does one layer of global aggregation across all the points. In more complicated versions, you could imagine taking this pooled vector, concatenating it back onto the feature vectors of all the points in the cloud, doing more independent MLPs and more pooling, and iterating this procedure of independent per-point processing, pooling across points, and concatenating the pooled vector back to the per-point features. You can imagine more complicated variants of this type of architecture, but this basic structure is very commonly used for processing point cloud inputs. Now, another thing we might want to do is generate point cloud outputs: here we might want to input an RGB image and output a point cloud representing the 3D shape of the object in the image.
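As a quick aside, the per-point-MLP-plus-max-pool idea above can be sketched in a few lines of plain Python. Here the learned shared MLP is replaced by a fixed toy per-point function — an assumption purely for illustration — but the key property survives: because max pooling ignores order, the pooled feature is identical for any permutation of the cloud:

```python
def per_point_feature(p):
    # Stand-in for a shared MLP applied independently to each (x, y, z) point.
    x, y, z = p
    return (x + y + z, x * y * z, max(x, y, z))

def pointnet_feature(points):
    """Toy PointNet forward pass: per-point features, then max pool
    each feature dimension over all points in the cloud."""
    feats = [per_point_feature(p) for p in points]
    return tuple(max(f[d] for f in feats) for d in range(3))

cloud = [(0.0, 1.0, 2.0), (3.0, -1.0, 0.5), (2.0, 2.0, 2.0)]
shuffled = [cloud[2], cloud[0], cloud[1]]
print(pointnet_feature(cloud) == pointnet_feature(shuffled))  # True
```

Any function that is symmetric in its inputs (max, sum, mean) would give the same permutation invariance; PointNet uses max.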
Then what we could imagine doing is hooking up some neural network architecture that spits out a 3D point cloud giving the 3D shape. Maybe we can skip over the exact details of this architecture, because I think the interesting point about generating point clouds is that we need some kind of loss function that can compare the point cloud our network predicted with the point cloud we should have predicted. This is a new thing that we haven't really seen before: we need to write down a loss function that operates on two sets — the one we predicted and the one we should have predicted — and tells us how similar they are. And of course this loss function needs to be differentiable, so we can backpropagate through it and use it to train the network. One function that's often used to compare point clouds is called the chamfer distance. It has the mathematical form shown here, but I think it's easier to understand if you walk through it visually. The idea is that we input two sets of points — the orange points and the blue points — and the chamfer distance should tell us how different these two sets are. There are two terms in this loss function. For the first term, for each blue point we find its nearest neighbor orange point, compute the Euclidean distance between the blue point and that nearest neighbor, and sum those distances over all of the blue points. The second term does the equivalent thing in the other direction: for each orange point, we find its nearest neighbor blue point, compute the distance to that nearest neighbor, and sum all of those distances.
Our final chamfer loss is then the sum of these two nearest-neighbor matching terms. You can see that the only possible way to drive this chamfer loss to zero is if the two point clouds coincide perfectly — every orange point exactly on top of some blue point and vice versa. And because of the nearest-neighbor operation, the order of the points within each cloud does not matter. So the chamfer loss is matching two sets of points in 3D space via L2 nearest-neighbor distances. We can then use this chamfer loss to train a neural network that predicts point clouds: take the point cloud predicted by our model and the ground truth point cloud from the dataset that we should have predicted, compute the chamfer distance between them, and backpropagate through everything to train the weights of the network. So that gives us the point cloud 3D shape representation. The final representation is the 3D triangle mesh. A triangle mesh is a very commonly used representation in computer graphics — you'll know it if you've ever taken a graphics or rendering class. Here we represent the 3D shape as a set of vertices in 3D space — which is basically a point cloud — plus a set of triangular faces whose corners are those vertices. This represents the 3D shape of the object not as a set of points in space, but as a set of triangular faces connected through shared vertices. This is also adaptive, because we can have bigger or smaller triangles in different parts of the model.
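Since the chamfer distance will come back again shortly as the loss for comparing meshes, here is a minimal plain-Python sketch of it. This version uses squared L2 distances, which is one common convention; the lecture's formula may use plain L2, but the structure — nearest neighbors summed in both directions — is the same:

```python
def chamfer(a, b):
    """Chamfer distance between point clouds a and b (lists of
    coordinate tuples): for each point, the squared L2 distance to
    its nearest neighbor in the other cloud, summed both ways."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b)
    return a_to_b + b_to_a

# Identical clouds (in any order) have zero chamfer distance...
a = [(0, 0, 0), (1, 1, 1)]
print(chamfer(a, list(reversed(a))))  # 0
# ...and any mismatch makes it positive.
print(chamfer(a, [(0, 0, 0), (1, 1, 2)]))  # 2
```

This brute-force version is O(n·m) in the cloud sizes; practical implementations batch it on the GPU, but the math is exactly this.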
It's also very useful for computer graphics because it gives us an explicit representation of the surface of the object, which means we can interpolate arbitrary data over the surface of a triangle mesh using something like barycentric coordinate interpolation. The exact details of that don't matter, but what it means is that if we attach data at each vertex — a color, a normal vector, a texture coordinate, or some other piece of data — we can interpolate those pieces of data over the triangular faces of the mesh. That lets us extend our finite samples of data over the entire surface in 3D space, and it's how we often render textures in computer graphics engines. For all those reasons, I think the triangle mesh representation is really nice, especially for graphics-type applications. But actually processing triangle meshes with neural networks is non-trivial — you need to invent a lot of new structures to process meshes with neural networks. There was a very nice paper presented at ECCV in 2018 that I think had a lot of really cool ideas for processing meshes with neural networks. It was called Pixel2Mesh, because what they wanted to do was input a single RGB image on the left and output a triangle mesh giving the full 3D shape of the object in the image. They had three or four key ideas for processing meshes inside a neural network that I thought were quite interesting. The first is the idea of iterative mesh refinement. Ultimately we want to build a neural network that can emit a triangle mesh representing a 3D shape, but it's difficult to generate 3D meshes from scratch in a differentiable way. So instead, what we're going to do is
input some initial template mesh to the neural network, and then throughout the processing of the network, deform that initial template to give our final mesh output. So we input to the network an initial sphere or ellipsoid mesh, which gives us initial positions for all the vertices and a set of triangular faces over those vertices, and then we iteratively refine the positions of those vertices. In the first stage of processing, we process that input mesh in some way, see how well it matches the image, and move the vertices around in 3D space to give an updated, refined version of the triangle mesh. Given that refined mesh, we again somehow compare it to the input image, see how well it matches, and again move each vertex a little bit to further refine the exact structure of the mesh to be output. You can imagine going through several stages of this, and hopefully by the end we can output a 3D triangle mesh that matches the geometry of the input image very nicely. So that's the first useful idea for processing triangle meshes with neural networks. The second is that we need some kind of neural network layer that can operate over mesh-structured data, and the way we do that is with an operator called graph convolution. We're very familiar with two-dimensional and three-dimensional convolution: the idea with normal 2D convolution is that we have some grid of feature vectors at the input, and at the output we compute a new grid of feature vectors, where every output feature vector depends on some local receptive field — a local neighborhood of features in the input grid — and we slide that same function over every point in the grid to compute all of
our feature vectors in the output. Graph convolution is very similar, but extends this idea not to 2D or 3D spatial grids, but to arbitrary graph-structured data. The input to a graph convolution layer is a graph with a feature vector attached to every vertex, and the output is a new feature vector for every vertex, where each output feature vector depends on a local receptive field of feature vectors in the input graph. We can use the mathematical formalism at the top, which computes a new output feature vector f'_i for vertex v_i that depends both on the input feature vector f_i and on all the neighboring feature vectors f_j for the vertices that are neighbors of v_i in the graph. There are a lot of different low-level ways to implement graph convolution — lots of different papers show slightly different versions of this operator — but they all share the same intuition: compute the output feature vector in a way that depends on the local neighborhood of feature vectors in the input graph, and apply the same function at every vertex. We think of this as a convolution because we're sliding one function over every vertex in the graph to compute the output feature vectors. And just as image convolution can be applied at test time to grids of arbitrary sizes, a single graph convolution layer can operate on graphs with arbitrary numbers of vertices and arbitrary topology or connectivity patterns at test time.
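The neighborhood-aggregation intuition above can be sketched in plain Python. This is just one of the many possible formulations mentioned: each vertex's new feature combines its own feature with the average of its neighbors' features. Real layers use learned weight matrices over D-dimensional features; here `w_self` and `w_neigh` are scalar stand-ins, an assumption for illustration:

```python
def graph_conv(features, neighbors, w_self=0.5, w_neigh=0.5):
    """One toy graph-convolution step.
    features: list of per-vertex scalar features.
    neighbors: list of neighbor-index lists, one per vertex."""
    out = []
    for i, f in enumerate(features):
        nbrs = neighbors[i]
        # Aggregate the local neighborhood (mean of neighbor features).
        avg = sum(features[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
        # The same function is applied at every vertex, like a conv filter.
        out.append(w_self * f + w_neigh * avg)
    return out

# A tiny triangle graph: every vertex connected to the other two.
feats = [0.0, 3.0, 6.0]
adj = [[1, 2], [0, 2], [0, 1]]
print(graph_conv(feats, adj))  # [2.25, 3.0, 3.75]
```

Note that the same `graph_conv` runs unchanged on a graph with any number of vertices and any connectivity, which is exactly the property the lecture emphasizes.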
So we can use a single neural network layer to process graphs of arbitrary topology. In order to process triangle meshes with graph convolution, the main body of the network is going to be a graph convolutional network, where we stack up many of these graph convolution layers. Inside the body of this network, we always attach a feature vector to every vertex in the mesh, and every graph convolution layer computes a new feature vector for each vertex in a way that depends on the local neighborhood of that vertex in the mesh — where the neighbors are the vertices connected along edges, as indicated by the faces. You can imagine stacking up many graph convolution layers to process feature vectors and propagate them along the edges of the mesh. So I thought this was a very nice way to come up with a neural network structure for mesh-structured data. But the problem is that our initial task was to input an RGB image and output the triangle mesh, so we need some way to mix image information into this graph convolutional network. That brings us to what I think is the third really cool idea from the Pixel2Mesh paper: vertex-aligned features. What we want is, for every vertex in the mesh, a feature vector from the image that represents the visual appearance of the image at the spatial position of that vertex. So we take our input image and run it through a 2D CNN, which gives us a 2D grid of image features — something we've seen many, many times. Then, if we know the camera intrinsics, we can take our 3D triangle mesh and project the vertices of the mesh onto the image plane using a kind
of 3D-to-2D projection operator, which takes the 3D position of each vertex out in space and projects it down onto the image plane. For each of those projected vertex locations, we use bilinear interpolation to sample a feature vector from our convolutional network features, giving us a feature vector for each vertex that is perfectly aligned to the position in the image plane where that vertex projects. This bilinear interpolation is basically the same operation we saw last lecture in the RoI align operator in Mask R-CNN: we still want to sample feature vectors at arbitrary positions in the 2D image plane, but rather than sampling them at a regularly spaced grid like we did in RoI align, we now sample a feature vector at every projected vertex position. That lets us mix image information into our graph convolutional network. OK, the final thing we need to figure out for processing meshes is the loss function. Our model predicts a 3D triangle mesh, we have a ground truth 3D triangle mesh, and we need a loss function that compares triangle meshes and tells us how similar the prediction is to the ground truth. The problem is that we can represent the exact same 3D shape using different triangle meshes. As an example, we could represent a square using two big triangles or using four small triangles, and both of those meshes represent the exact same shape. We want our loss function to be invariant to the particular way we triangulate the shape — it should depend on the underlying shape itself, and not on the particular way we decide to carve it up into triangles.
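Backing up a moment: the bilinear sampling step used for vertex-aligned features can be sketched on its own. Here `grid` is a toy single-channel feature map indexed `[row][col]`, and `(x, y)` is a continuous position in grid coordinates — the output of projecting a vertex onto the feature map. Edge clamping and multi-channel features are omitted for clarity:

```python
import math

def bilinear_sample(grid, x, y):
    """Sample a value at a continuous (x, y) position by blending the
    four surrounding grid cells, weighted by overlap. This is the same
    operation used to pull a feature vector for each projected vertex."""
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    return (grid[y0][x0] * (1 - dx) * (1 - dy) +
            grid[y0][x1] * dx * (1 - dy) +
            grid[y1][x0] * (1 - dx) * dy +
            grid[y1][x1] * dx * dy)

grid = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear_sample(grid, 0.5, 0.5))  # 1.5, the average of all four cells
```

Because the blend weights are smooth in `(x, y)`, gradients can flow back through the sampling location — which is what makes this usable inside an end-to-end trained network.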
The idea to get around that is a loss function we've actually already seen: we take our meshes, convert them to point clouds by sampling points on the surface of each mesh, and then use our chamfer distance to compare the two point clouds. In practice, we sample points from the ground truth mesh on the right and from the predicted mesh on the left, and compare these two point cloud representations using the chamfer loss we've already seen. The catch is that for the ground truth mesh you can do that sampling offline — sample all the points once and cache them to disk — but to sample points from our prediction on the left, we need to do the sampling operation online, which is a difficult bit of implementation, and we also need to be able to backpropagate through this sampling operation. It turns out there is a way to backpropagate through the sampling operation nicely; you can check out this ICML 2019 paper for the exact details. OK, so once we've seen all four of these key ideas, that gives us a way to operationalize a neural network that inputs an RGB image and outputs a triangle mesh representing the 3D shape of that image: we use iterative refinement, we use graph convolution at several points in the network, we mix in image information using vertex-aligned features, and we train the whole thing using a chamfer loss. So that gives us our four 3D shape representations. There are actually a couple more issues we need to deal with when
dealing with 3D shapes, though maybe we won't go into full detail on these. We need some metrics to compare 3D shapes, to actually tell whether our models are working well. In 2D we used intersection over union to compare bounding boxes, and we can use a similar intersection over union to compare 3D shapes — but it turns out 3D IoU is maybe not as meaningful or useful a metric as we might like. Another metric we can use is the chamfer distance we've already seen: sample point clouds from each of the two shape representations and compare them with the chamfer distance. The problem with the chamfer distance is that, because it relies on these L2 distances, it's very sensitive to outliers — if you look at these two examples, the example on the left has very different chamfer distances to the two examples on the right because of the L2 nature of the loss. As a result, I think a better metric for comparing 3D shapes is an F1 score, which also operates on point clouds. This is similar to the chamfer setup in that we take our two 3D shape representations and sample point clouds from them — say our predicted point cloud in orange and our ground truth point cloud in blue. Then we compute the precision: the fraction of predicted points that are actually correct, where a predicted point counts as correct if it is within some threshold radius of a ground truth point. You can imagine expanding a sphere around each predicted point; if some ground truth point falls within that sphere, the predicted
point is counted as correct. So in this example the precision would be 3 over 4, because 3 of our 4 predicted orange points are correct — each has a blue point fall within its radius. Then we go the other way and compute the recall: the fraction of ground truth points that were hit by a predicted point within the radius. Here the recall would be 2/3 — at first glance it looks like all three blue points are hit, but the one at the lower right doesn't quite make it, it's just barely touching — so two of the three blue points are hit by a predicted point within the radius, and the third is not. The F1 score is then the harmonic mean of the precision and the recall, which in this case comes out to about 0.7. This is a number between 0 and 1, and the only way to get 1 is if both the precision and the recall are 1. The F1 score is a nicer metric for comparing 3D shapes because it's more robust to outliers — I think it's the nicest metric we have right now for comparing 3D shapes. OK, another thing you need to worry about when working with 3D shapes is the camera coordinate system, because camera systems get kind of complicated once you're working in 3D. Suppose we're working on this task where we input an image and want to output a representation of the 3D shape of that input image. We have to answer the question: what coordinate system are we going to use to represent the 3D shape out in 3D space?
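Going back to the F1 example for a moment, the whole computation can be written out in plain Python. The 1-D points below are a made-up stand-in chosen so that, as in the worked example, 3 of 4 predictions hit and 2 of 3 ground truth points are hit:

```python
def f1_score(pred, gt, threshold):
    """F1 between two point clouds: a point counts as a hit if some
    point from the other cloud lies within the threshold radius."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    precision = sum(any(dist(p, q) <= threshold for q in gt) for p in pred) / len(pred)
    recall = sum(any(dist(q, p) <= threshold for p in pred) for q in gt) / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # harmonic mean

pred = [(0.1,), (0.2,), (10.1,), (50.0,)]  # 3 of 4 within radius of a gt point
gt = [(0.0,), (10.0,), (20.0,)]            # 2 of 3 hit by a prediction
print(round(f1_score(pred, gt, threshold=0.5), 3))  # 0.706
```

With precision 3/4 and recall 2/3 the harmonic mean is 12/17 ≈ 0.706, matching the roughly 0.7 from the lecture's example.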
One option is to use a so-called canonical coordinate system, which means that for each object category we fix canonical directions of front, left, up and down. For example, if we're training a network to predict 3D shapes of chairs, we might say that the +z direction is always the front of the chair, the +y direction always points up, normal to the seat of the chair, and the +x direction is always to the right. Another option is to predict in view coordinates, which means using a 3D coordinate system for the target that is aligned to the input image.

If you read a lot of papers in detail, I think many people actually use canonical coordinates, just because they're easier to implement: the 3D models are stored on disk in some canonical coordinate system, so you can load them up and predict in the coordinate system they're stored in natively. But there's a problem with canonical coordinates, which is that the features of your outputs are no longer properly aligned to the features of your inputs, whereas if you use a view coordinate system, the positions of the features corresponding to each thing you output are better aligned to the original input you processed. For this reason I think it's preferable to make predictions in view coordinates, and there's actually some research that backs this up: in one experiment, you train two networks that are identical except that one predicts in view coordinates and one in canonical coordinates, and the network in canonical coordinates tends to overfit to the training shapes it sees during training, whereas the network trained in view coordinates tends to generalize better at test time
to either novel 3D shapes or to novel object categories. So for those reasons, I think it's usually preferable in most scenarios to make predictions in view coordinates. As an example of what that looks like: if we're going to make view-centric voxel predictions, then the voxel-tube representation we talked about for predicting voxels becomes very natural, because for every aligned position in the input image we predict an aligned tube of voxel occupancies, which is natural to process with that 2D voxel-tube convolution idea we talked about.

Okay, so the last thing is to cover a couple of the datasets people often use for these tasks. One is ShapeNet. We're all very familiar with ImageNet, this large-scale dataset of images that led to the deep learning revolution, and because ImageNet was so successful, everyone wanted to build a dataset and call it "something-Net", so we have ShapeNet, which is supposed to be the ImageNet of 3D shapes. ShapeNet is fairly large scale: it has about fifty thousand 3D CAD or mesh models spread across about fifty categories, and it's common to render these CAD models from a variety of viewpoints, which generates a fairly large dataset of maybe around a million images, so it's fairly similar to ImageNet in scale. But it's synthetic: these objects are isolated CAD models, not real images, and they have no real context, so it's nice for playing around with 3D representations, but it's not a realistic dataset. Another dataset that I like a lot is the Pix3D dataset from some people at MIT,
which actually has real-world images, together with 3D mesh models of furniture that are aligned pixel-wise to the input images. The way they collected this was quite ingenious. It turns out that people love to shop at IKEA, and when people buy IKEA furniture they love to go online and say, hey, look at my new IKEA bed, whatever the model is, so you can just Google an IKEA model number and get a lot of images showing that exact bed in a lot of different rooms. And it turns out that IKEA actually publishes 3D mesh models of their furniture. So what they could do is download all these 3D mesh models from IKEA, search Google Images for photos of the specific IKEA models, and then pay people to align the IKEA models to the input images. I thought that was a very clever way to collect a dataset, and because of it we get real-world images of cluttered, messy bedrooms and so on that show off different types of IKEA furniture in real-world scenarios.

Okay, so that finally brings us to the Mesh R-CNN architecture that we teased in the last lecture. Here we're basically combining the whole detection pipeline that we built up in the previous lectures, but now, in addition, we want to input a real-world RGB image, detect all the objects in the image, and then for each detected object emit a full 3D triangle mesh giving the 3D shape of that object. The way this works is that, as we said last time, all the detection stuff is basically Mask R-CNN, and the 3D shape prediction part is a blend of the 3D shape prediction methods we've talked about in this lecture. One of the things we do in Mesh R-CNN is use a hybrid 3D shape representation: we ultimately want to
predict a mesh, so we liked this idea of mesh deformation from Pixel2Mesh. But the problem with mesh deformation is that it constrains the topology of the 3D shapes you can output. If you recall, the idea of mesh deformation was that the model inputs some initial 3D mesh and then repositions all its vertices to give the final output mesh, but this only works if the output shape has the same topology as that initial mesh. If you've taken a topology class, you know there's no way to continuously deform a sphere into a doughnut, so it's fundamentally impossible to input an ellipsoidal or spherical mesh and deform it into a doughnut-shaped output. That's a fundamental limitation of this iterative refinement idea. But it turns out that the other shape representations we've talked about today do not suffer from this.

So basically, we want to overcome this limited-topology problem of iterative refinement by first making coarse voxel predictions, converting the voxel predictions to a mesh, and then running iterative refinement from there. The pipeline is: given an input image, we do all the 2D object recognition stuff we're familiar with from Mask R-CNN. There's an RPN, and for each region proposal we regress boxes, and then a second stage does classification and instance segmentation, which gives us boxes for the detected objects in the scene. Then, for each detected object, we use a voxel-tube network in the second stage of the Mask R-CNN pipeline to predict a coarse voxel representation of the 3D
of each shape. Then we'll convert each of those predicted voxel representations into a blocky mesh, and use that blocky mesh as the initial mesh for iterative refinement, which lets us finally output a high-fidelity mesh for each detected object. The result is that we can now predict things with holes, and that's the big thing: because we go through this intermediate voxel representation, we can output meshes with essentially arbitrary topology.

So here are some example results, where the top row shows the input RGB image and the middle row shows examples from the iterative refinement approach that deforms an initial sphere mesh. That approach just can't get rid of holes in objects: for the chair on the left, it seems like the network kind of knows there should be a hole, so it pushes the vertices away from the hole, but it just can't delete the faces, so it has no way to properly model the holes in these objects. Whereas in our results, because we go through the initial coarse voxel representation, the voxels can model the holes in the object, and the mesh refinement can then give very fine-grained outputs. There's a slight problem, though, which is that if we train only with the chamfer loss we get really ugly results, so we find that we need a regularization term that encourages the generated meshes to be less degenerate and more visually pleasing. The way we do that is that, in addition to minimizing the chamfer loss between the predicted mesh and the ground-truth mesh, we also minimize the L2 norm of each edge in the mesh.
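As a sketch of the edge-length regularizer just described: here I use the mean squared edge length, a common form of this penalty, though the exact form and weighting in the actual Mesh R-CNN loss may differ:

```python
import numpy as np

def edge_length_regularizer(verts, edges):
    """Average squared length of mesh edges; adding this term to the
    chamfer loss discourages long, degenerate edges in the predicted mesh.
    verts: (V, 3) float array of vertex positions.
    edges: (E, 2) int array of vertex-index pairs."""
    verts = np.asarray(verts, dtype=float)
    edges = np.asarray(edges, dtype=int)
    deltas = verts[edges[:, 0]] - verts[edges[:, 1]]  # (E, 3) edge vectors
    return float(np.mean(np.sum(deltas ** 2, axis=1)))
```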
It turns out this relatively simple regularizer lets the model predict very well-structured meshes. The result is that we can input real-world images on the left, detect the objects in them, and predict output 3D mesh models that give very fine-grained 3D shapes for each detected object. You can see that they represent not only the visible portion of each object but also the invisible back sides, and from the bookshelf example in the upper left you can see that we can output 3D meshes with very complicated topology, with a lot of different holes, so that's a nice result. And because this is built on top of an object detection framework, it can detect many objects per scene, though of course it doesn't work perfectly.

Another nice feature of this architecture is that it makes what we call amodal predictions: it predicts not only the visible parts of objects but also the occluded, invisible parts. For example, in the image on the left, part of the couch is occluded by the dog's head, but in our prediction on the right we predict the full 3D shape of the couch, even in the region of the image where the couch was covered by the dog's head. That's called amodal prediction, and it's something a system like Mask R-CNN usually will not do. And notice that we don't predict the dog, because IKEA doesn't sell dogs. It's also interesting to look at the failure modes here, and an interesting one is that places where the 2D recognition fails are also places where the 3D recognition tends to fail;
here, if you look at the predicted segmentation mask for the bookcase, you see that the regions of the image where we miss the segmentation mask are also the regions where we miss on the predicted mesh. That makes me think that improvements in 2D recognition could also lead to future improvements in 3D recognition.

So, to recap where we got today: we talked about these two fundamental problems of understanding 3D shapes with neural networks, predicting 3D shapes from images and classifying 3D shapes, and we had a very fast walkthrough of how you can handle all these different types of 3D shape representations with neural networks. Basically, today was about adding a third spatial dimension to our neural networks, and next time we'll talk about videos, which is adding a temporal dimension to our neural networks, so it'll be a different way to extend our networks by an extra dimension. So come back next time and learn about that.
Deep Learning for Computer Vision, Lecture 18: Videos
So welcome back to lecture 18, and today we're going to talk about videos. For the majority of the class we've been talking about this task of 2D image recognition: we spent a long time diving into the details of building image classification systems with deep learning, and a couple of weeks ago we spent two lectures moving beyond image classification into understanding and recognizing the 2D shapes of objects in images, which led us to tasks like semantic segmentation, object detection, instance segmentation, keypoint prediction, panoptic segmentation, and all these other 2D shape prediction tasks. Then in the last lecture we pushed beyond the second dimension and started talking about ways to represent 3D shapes with deep neural networks: taking a 2D image as input and predicting a 3D shape, or processing 3D input shapes and making classification decisions. To do that, we needed to define different types of 3D shape representations, because it turns out it's pretty complicated to represent shapes in 3D; they all had their own pros and cons, and we saw how to build neural network models that could deal with each of them.

So the last lecture was really about pushing convolutional networks from two-dimensional stuff to three-dimensional stuff by adding a third spatial dimension, and today there's another way we can imagine augmenting our neural network models with an additional dimension. Last time we added an extra dimension of space; today we're instead
going to add an extra dimension of time. When you think about what a video is, a video is basically a sequence of images that unfold over time. So this is another sort of 3D representation we can work with in deep neural networks, except that now, unlike the three spatial dimensions of the previous lecture, we have two spatial dimensions and one temporal dimension. That will lead to additional challenges, because we may want to treat the spatial dimensions and the temporal dimension differently depending on the structure of the task we're working on.

For pretty much everything we'll do with videos, our networks move from the three-dimensional tensors we used for 2D data to four-dimensional tensors. For example, we can view a video as a four-dimensional tensor of shape T x 3 x H x W, where T is the temporal dimension, 3 is the channel dimension (the RGB channels of the raw input video), and H and W are the two spatial dimensions. Depending on the exact architecture you use for video, you might sometimes transpose the first two dimensions: sometimes we'll want to put the temporal axis first and sometimes the channel axis first, and we'll see cases where we want each of those layouts. But the basic idea is that a video is just a four-dimensional tensor with one temporal dimension, two spatial dimensions, and one channel dimension, and we need to figure out ways to build deep neural networks that can work with this sort of data.
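As a tiny sketch of the two tensor layouts just mentioned (the shapes here are just illustrative):

```python
import numpy as np

T, C, H, W = 16, 3, 112, 112                 # frames, RGB channels, height, width
video = np.zeros((T, C, H, W))               # time-first layout: a stack of images
video_c_first = video.transpose(1, 0, 2, 3)  # channel-first layout: (C, T, H, W)
```

Either layout stores the same 4D video tensor; which one is convenient depends on the architecture.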
As a motivating task for a lot of our video architectures, we'll imagine the task of video classification. This is basically the analog of the image classification task we've seen many, many times, extended in time: the network accepts an input video, which is a stack of RGB frames, and must select a category label that classifies the action or activity of that input video. Just as in image classification, the system is aware of a fixed set of categories at training time, and we have a training dataset that associates each video with a category label. I think we don't even need to talk about loss functions at this point, because it's clear we would train the system with a cross-entropy loss, as we've done for all the other classification problems this semester. So the entire trick we need to solve is how to go from this four-dimensional input tensor to a vector of class scores that we can feed to the cross-entropy loss to train these video classification systems.

Before we talk about concrete architectures, it's useful to point out what types of things we want to recognize in videos. In 2D recognition tasks we're often recognizing objects, which are things that have some kind of spatial extent or identity in the world: dogs or cats, different types of animals, or inanimate objects like bottles or cars. When we work with video sequences, the things we usually want to classify are actions or activities, like maybe swimming or running or jumping
or eating or standing. So in the two-dimensional case we're trying to recognize nouns, grammatically, and in the video case we're often trying to recognize verbs. The nature of the categories we try to recognize in video is often very different from the types of categories in 2D images. Another thing to point out is that most of the verbs we care to recognize in videos are not arbitrary actions but actions that human beings are performing. The majority of video classification datasets out there have category labels corresponding to different types of actions or activities that people can do, because one really important thing we want to use deep learning for in video is recognizing what people are doing. So keep that noun-versus-verb distinction in mind as we move through the concrete architectures.

Now, the big problem with videos is that they are really big. This is the main constraint we need to overcome when dealing with video data. If we approach this naively: a TV show or movie is often shot at 30 frames per second for most TV shows, or 24 frames per second for most movies, and if we work out the size of raw, uncompressed video files, they end up absolutely massive. Imagine we have a standard-definition video stream with a spatial resolution of 640 by 480 pixels at 30 frames per second; then just storing the uncompressed video stream of a
standard-definition video stream comes out to about 1.5 gigabytes of data per minute. That's because this is raw, uncompressed video, where we represent each pixel with three bytes: one for red, one for green, one for blue. When you multiply all that out, it takes a lot of data just to store raw uncompressed video. If we move to high-definition video, a full HD stream at 1920 by 1080 is up around 10 gigabytes per minute in this raw, uncompressed form.

This is going to be absolutely catastrophic for dealing with videos in neural networks: if the uncompressed video, just using a byte representation of pixel values, is this big, there's no way we can fit such large video sequences into GPU memory. A standard state-of-the-art GPU might have something like 12 or 16 or 32 gigabytes of memory, and that memory needs to hold not just the raw video but also all the activations of the network that we compute on top of those pixels. So just from running these numbers, we can see that it's completely infeasible to build deep neural networks that process the full temporal and spatial resolution of the kinds of videos we usually watch. The solution, of course, is to make the data a lot smaller. When we talk about video classification, usually what we mean is training neural networks that classify very short clips of video, typically something like three to five seconds in length, and to make it even more tractable we'll often temporally subsample those clips to a very low frame rate, something like five frames per second rather than thirty.
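The back-of-the-envelope numbers above are easy to reproduce, using three bytes per pixel and gibibytes (2^30 bytes):

```python
def raw_video_gib_per_minute(width, height, fps, bytes_per_pixel=3):
    """Storage for one minute of raw, uncompressed video, in GiB."""
    bytes_per_minute = width * height * bytes_per_pixel * fps * 60
    return bytes_per_minute / 2**30

sd = raw_video_gib_per_minute(640, 480, 30)    # about 1.5 GiB per minute
hd = raw_video_gib_per_minute(1920, 1080, 30)  # about 10.4 GiB per minute
```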
And then we'll also tend to heavily downsample the spatial resolution of the video clips we work with. Recall that for image classification it was common to use image resolutions of something like 224 by 224; because of the constraints of working with video, when we classify video clips we'll often downsample the spatial resolution to something like 112 by 112. That's a very low resolution: if your Netflix stream dropped to 112 by 112 you'd be very sad as a viewer, but due to computational constraints, that's the world our neural networks have to live in, at least until our memory gets much, much bigger. This is just to ground the problem: when we think about processing videos with neural networks, we imagine processing hours-long video, but in practice that's really not what we do; these computational constraints force us to work only on very short clips that are small in space and short in time.

So to make this a bit more concrete: the raw video may be very long with a very high frame rate, but during training we take very short clips of the long video sequence and subsample them in time to reduce the frame rate, and we train our models on those short clips. The idea is that if the input video sequence had the category label of running, then probably each of the little temporal clips we can sample from it should also
have the exact same label. So when training video classification models, we usually take these very short clips from the slightly longer videos in the dataset, and the model is trained to work on these very short, low-fps clips. Then at test time, we often take that model, which was trained on short clips, and apply it at different positions in the full original video. That gives us many different classification decisions for different sub-clips of the raw input video, and we make our final prediction for the video by averaging the predictions of the classifier over those sub-clips. That's a little trick we have to play when working on video classification: we get this mismatch between training and testing, where training uses short clips and testing ensembles a set of clips over the longer video sequence.

Okay, so now let's actually talk about our very first video classification model, and this is going to seem stupid, but it actually works really, really well. The idea is: let's forget that we're working with video, and instead train a standard two-dimensional CNN to classify the individual frames of the video completely independently. For example, if a video clip is supposed to have the action label of running, we just chop the clip up into individual 2D RGB images and train our favorite standard 2D image recognition model on the individual frames, using the label assigned to the entire clip. Then at test time, we simply run this single-frame model on every frame of the longer video sequence and average the predictions over all of the frames.
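A minimal sketch of this single-frame baseline at test time; the classifier here is a stand-in for any 2D CNN:

```python
import numpy as np

def classify_video(frame_model, video):
    """Single-frame baseline: run a 2D classifier independently on each
    frame, then average the per-frame class scores over time.
    video: (T, C, H, W) array; frame_model: (C, H, W) -> (num_classes,)."""
    per_frame_scores = np.stack([frame_model(frame) for frame in video])
    return per_frame_scores.mean(axis=0)  # (num_classes,)
```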
This seems like a really stupid model, because it basically ignores all the temporal structure in the data, but it turns out to be a really strong baseline for many different video classification tasks. We'll see some concrete numbers later, but basically my advice is: if you find yourself needing to build a practical video classification system, do this first, because it's very likely that this simple, straightforward single-frame baseline will work well enough in practice for many applications. All of the fancy temporal modeling we'll talk about for the rest of the lecture will usually not make the difference between the system working and not working; usually it just bumps the accuracy up from pretty good to a little bit better. So really don't discount this very simple single-frame baseline when you're confronted with any kind of video recognition task; definitely always try it first.

Now we can talk about a slightly more complex model that actually does take into account the temporal structure of the video sequences we're presented with, and this is the idea of late fusion. What we did in the single-frame baseline was run a CNN independently on every frame of the video, extract an independent classification decision for every frame, and then average those at test time using some averaging mechanism. The late fusion approach is very similar, except that we're going to somehow build this notion of averaging across time
into the network itself, so that the network can be trained in a way that's aware of the fact that we'll do this averaging at test time. More concretely, with late fusion you're presented with the sequence of frames from your input video at the bottom, with a temporal size T, then RGB channels, and spatial size H by W, and we run our favorite 2D CNN architecture independently on each frame of the video sequence to extract frame-level features. These all operate completely independently, and the backbone can be your favorite 2D CNN, like a ResNet or an AlexNet or a MobileNet, whatever suits your fancy. After running those networks independently on each frame, we get frame-level features with one channel dimension, two spatial dimensions, and one temporal dimension: convolutional features for each frame, extracted completely independently.

Now we need some way to collapse, average, or combine all of these independent per-frame features. One way is to use fully connected layers, as in early image classification architectures: take the per-frame features, flatten them into one big vector of shape T x D x H' x W', apply a set of fully connected layers that map the flattened features to the final class scores C, and train the whole thing with a cross-entropy loss as usual. This is called a late fusion approach to video classification, because we're fusing the temporal information at a very late phase of the pipeline: all the independent per-frame modeling with the CNNs happens first, and at the very end of the architecture
we're fusing information temporally. This was one of the earliest approaches to late fusion, using fully connected layers, but just as we saw with 2D CNN architectures, lots of fully connected layers add a lot of learnable parameters to the network and can lead to overfitting. So another way to do late fusion is the same trick we used in image classification models: replace the flatten-and-fully-connect operations with global average pooling followed by a linear layer. We still get independent per-frame features from our favorite 2D CNN, and then apply a global average pooling layer that pools over all of the spatial dimensions and all of the temporal dimensions. That collapses out all the spatial and temporal information and leaves us with just a D-dimensional vector, to which we apply a linear layer to get the final class scores. This is again a fairly simple, straightforward thing to try, and again an example of late fusion, since we do all the independent modeling and fuse the temporal information very late in the classification architecture.

Now, a problem with this late fusion approach is precisely the fact that it's late: it's difficult for the network to model very low-level motion of pixels or features between adjacent video frames.
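A minimal numpy sketch of the pooled late-fusion head described above; the per-frame features would come from a 2D CNN, but here they're just an array:

```python
import numpy as np

def late_fusion_head(frame_features, weight, bias):
    """Late fusion via global average pooling: pool per-frame CNN features
    over time and space, then apply one linear layer for class scores.
    frame_features: (T, D, H, W); weight: (num_classes, D); bias: (num_classes,)."""
    pooled = frame_features.mean(axis=(0, 2, 3))  # (D,) after pooling T, H, W
    return weight @ pooled + bias                 # (num_classes,)
```

The fully connected variant instead flattens the (T, D, H, W) features into one long vector and applies fully connected layers, at the cost of many more parameters.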
One useful signal the model might want to use when classifying this video as "running" is the fact that the runner's foot moves up and down: in one frame the foot is down, and in the same location in the next frame the foot is up, and this motion repeats periodically at the very low pixel level of the input video sequence. With the late fusion approach, it is difficult for the network to learn to model these very low-level interactions between adjacent pixels in the input video frames, because we summarize the entire information about each frame in a single vector, so it is hard for the network to compare low-level pixel values between adjacent frames. That is one intuitive shortcoming of late fusion. Given that this is called late fusion, you should anticipate that the proposed solution is early fusion. With late fusion we performed independent processing on all of the video frames and fused the temporal information at the end of the architecture; we can instead do the opposite and fuse all of the temporal information at the very beginning, in the very first layer of the CNN. And we can do this using our familiar friend, the two-dimensional convolution. Concretely, we again have our input sequence of video frames at the bottom, with a temporal dimension, a channel dimension with RGB color values, and two spatial dimensions of H by W. We can reshape this four-dimensional tensor and interpret the temporal dimension as a set of channels: we take our set of input video frames and stack them along the channel dimension.
That gives us a three-dimensional tensor with a channel dimension of 3T — all of our RGB frames stacked along the channel dimension — and the same two spatial dimensions. Having concatenated all of the frames along the channel dimension, we can fuse all of this temporal information using a single two-dimensional convolution operator, where the number of input channels to the convolution is 3T and the number of output channels is whatever we want in our convolutional architecture. After this very first layer, which performs the early fusion, the rest of the network can be whatever standard two-dimensional convolutional architecture we want, because after the first layer we have collapsed all of the temporal information and just have a three-dimensional tensor to work with, so we can use our favorite 2D architecture. This hopefully overcomes the earlier problem, because now the network can better model very low-level pixel motion between adjacent video frames: it could, for example, learn convolutional kernels that compare whether adjacent video frames have local pixel motion between them. But the problem with this approach is that we are maybe too aggressive in the way we aggregate the temporal information. With early fusion, we destroy all of the temporal information after one 2D convolution layer, and one 2D convolution layer might simply not be enough computation to properly model all the types of temporal interactions that can happen in a video sequence. For that reason, we might want to consider an alternative that fuses neither early nor late.
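Here is a tiny NumPy sketch of the channel-stacking step. A 1x1 "convolution" — just a per-pixel matrix multiply — stands in for the first 2D conv, purely to show the 3T-channel fusion; all shapes are illustrative:

```python
import numpy as np

# Early fusion: stack T RGB frames along the channel dimension, then one
# 2D convolution with 3*T input channels fuses all time at once.
T, H, W = 20, 64, 64
rng = np.random.default_rng(0)
clip = rng.standard_normal((T, 3, H, W))           # T frames of 3-channel RGB

stacked = clip.reshape(T * 3, H, W)                # (3T, H, W): frames as channels

D = 12                                             # output channels of the fusing conv
Wconv = rng.standard_normal((D, T * 3)) * 0.01     # weights of a 1x1 conv
fused = np.einsum('dc,chw->dhw', Wconv, stacked)   # (D, H, W), time collapsed
```

After this one operation the temporal axis is gone, which is both the appeal and, as discussed above, the weakness of early fusion.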
Instead, maybe we want some mechanism that allows us to fuse slowly over the course of processing a video sequence, and for that we can use a three-dimensional CNN, also sometimes called a slow fusion network. The idea is much like the CNNs we used in the previous lecture for processing voxel grids: at each layer, the CNN maintains a four-dimensional tensor with a channel dimension, a temporal dimension, and two spatial dimensions, and each layer of processing uses three-dimensional analogs of convolution and pooling, which fuse information slowly over the course of many layers. It is useful to draw clear distinctions between the late fusion, early fusion, and 3D CNN approaches, so let's walk through three tiny toy examples of each. These are all tiny architectures meant only to give you the flavor of the different types of models; in practice we would use much deeper, larger models. For the late fusion approach, the input might have 3 channels, 20 temporal steps, and 64 by 64 height and width. The first layer is a two-dimensional convolution — say a 3x3 convolution that outputs 12 features — applied independently to every video frame, so it fuses information across space but not across time. We can then write down both the size of the resulting tensor after each operation and the effective receptive field in the original input video.
After a single 3x3 2D convolution, each output is looking at a 3x3 region in space but only a single input frame in time. We can then build up receptive fields just as we did in the familiar 2D convolution case. Suppose we add a 4x4 pooling layer after the 3x3 conv, again applied independently to every slice in time; that builds up a larger receptive field in space, but none in time. We could follow this with another 3x3 convolution, which again grows the receptive field in space but not in time, and finally a global average pooling layer where we flatten everything and predict our output categories. At that global average pooling layer, we all at once build up a giant receptive field over the entire temporal extent of the input video, while also extending the spatial receptive field over the entire spatial size of the video. So the late fusion approach builds its receptive field slowly in space, but all at once in time at the very end of the network. In contrast, the early fusion approach is an identical architecture except for the first convolutional layer: because we stacked the input frames along the channel dimension, the first layer immediately builds a temporal receptive field over the entire temporal extent of the video, while the spatial receptive field is still built up slowly over the course of many layers.
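This receptive-field bookkeeping can be automated with a few lines of Python. The calculator below mirrors the toy walkthrough; the specific layer list is illustrative rather than the exact numbers on the slide:

```python
# A tiny receptive-field calculator for a stack of conv/pool layers.
# Each layer is described as (kernel, stride).
def receptive_field(layers):
    rf, jump = 1, 1                    # rf: field size; jump: input stride so far
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Spatial path of the toy late-fusion net: 3x3 conv, 4x4 pool (stride 4), 3x3 conv.
spatial_rf = receptive_field([(3, 1), (4, 4), (3, 1)])   # grows layer by layer
# Temporal path: every layer has kernel 1 in time, so the temporal field
# stays 1 until the final global pool collapses it all at once.
temporal_rf = receptive_field([(1, 1), (1, 1), (1, 1)])
```

Running this, the spatial field grows to 14 pixels across the three layers while the temporal field stays at a single frame, which is exactly the "slow in space, all-at-once in time" behavior just described.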
Now if we look at a 3D CNN in contrast, we replace all of these two-dimensional convolution and pooling operations with three-dimensional analogs. We maintain four-dimensional feature tensors at every layer of processing inside the network; instead of a 3x3 convolution that slides the filter over 2D space, we use a 3x3x3 convolution that slides the filter over both space and time, and instead of collapsing a 4x4 pooling region in space, our pooling operation averages over a 4x4x4 three-dimensional region in both space and time. By extending these familiar two-dimensional operators of convolution and pooling into the temporal domain, this architecture builds up its receptive field slowly over both space and time across many layers of processing. For this reason, three-dimensional CNNs are sometimes called a slow fusion approach: they slowly fuse the spatial and temporal information over many layers. Again, in practice these are tiny micro-architectures, shown just to give you a sense of what early fusion versus late fusion versus 3D CNNs look like; real models would be much larger and deeper, with many more layers, and would probably work on bigger images as well. One thing that always tripped me up when I first thought about video models is: what exactly is the difference between the 2D convolution we do in early fusion and the 3D convolution we do in a 3D CNN? It seems that in both contexts we are using a convolution operator to build up a receptive field over both space and time.
The exact mechanics and trade-offs of these two forms of spatio-temporal convolution are worth looking at in a bit more detail. In the early fusion approach, recall that we take our input tensor — draw it as a three-dimensional grid with two spatial dimensions, H and W, and one temporal dimension, T — and imagine that a feature vector is attached to every point in this 3D grid. If we process this input with a 2D convolution, the convolutional filter extends over a tiny region of space but over the full depth in time; that is, the 2D convolutional filter has the same temporal size as the input video sequence. We slide this weight over every position in space — at each spatial position the filter covers the full length of time, so we can compute a familiar inner product — and that gives us a two-dimensional output that we can pass to further layers of the network. This is appealing because it feels like an easy, straightforward way to use convolution on temporal data, but one big shortcoming of this early fusion approach is that it is not temporally shift invariant. What I mean is this: suppose an important feature the network wants to recognize is a global transition in color, say the entire frame shifting from blue to orange at some point in time.
Because our filters extend over the entire length of time, if we want to detect that change in color at different positions in time, we have to learn separate filters for each position. We might learn one filter that detects a shift from blue to orange at time three, but to recognize a shift from blue to orange at time seven, we would need a whole separate filter for the same temporal change at a different position in time. That is a limitation of using 2D convolution to process 3D data: it is simply not temporally shift invariant. In contrast, if we use 3D convolution to process our temporal data, we accept as input the same cube — two spatial dimensions, one temporal dimension, and a feature vector at every point — but now the convolutional kernel extends over only a small region in both space and time. Since the filter has a small extent in time, we slide it over all the spatial dimensions as well as over the temporal dimension, and at every position in 3D space we compute the inner product between the filter and the chunk of input at that position, producing a single scalar output and hence a full 3D output from the layer. The key difference is that 3D convolution fixes the temporal shift invariance problem we had with 2D convolution: we can recognize transitions from blue to orange at all points in time using only a single three-dimensional convolutional filter, because we slide that filter over all positions in time.
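This sliding-in-time property is easy to check numerically. The naive single-filter 3D convolution below (valid padding, illustrative shapes) shows that shifting the input in time simply shifts the output in time, with no new filter needed:

```python
import numpy as np

# Naive single-filter 3D convolution (valid padding) to illustrate temporal
# shift equivariance. All shapes here are illustrative.
def conv3d(x, w):                      # x: (T, H, W), w: (kt, kh, kw)
    kt, kh, kw = w.shape
    T, H, W = x.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(x[t:t+kt, i:i+kh, j:j+kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 8, 8))
w = rng.standard_normal((3, 3, 3))     # one 3x3x3 filter, slid over space AND time

y = conv3d(x, w)
y_shift = conv3d(np.roll(x, 2, axis=0), w)  # same content, two frames later
# Away from the wrap-around boundary, the response simply moves in time:
# y[0] matches y_shift[2].
```

The same check with the "full-time-depth" 2D filter from early fusion would fail, because that filter has no temporal position to slide over.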
So we no longer need to learn separate filters to recognize the exact same thing happening at different moments in the input video, which makes 3D convolution more representationally efficient: we do not need as many filters to represent all the things we might want to learn from video sequences. A cool thing about these 3D CNNs is that we can visualize the learned filters as video clips themselves. On the right we show learned 3D convolutional filters from a three-dimensional CNN; because each filter has a small extent in both space and time, we can visualize it as a little moving RGB video clip, which gives us some sense of the types of features these 3D convolutional networks learn to recognize. It is interesting to dig into these: some of them do not seem to learn any motion at all — they just detect the same oriented edges, blobs, or opposing colors that we often see in 2D CNNs — while other filters seem to detect local motion in different directions. That gives you some intuition about the low-level features these three-dimensional convolutional networks can learn to represent. Now, to benchmark these different approaches and see their trade-offs, we need a dataset to train on. One dataset people often use for this video classification problem is Sports-1M: some folks at Google took a million YouTube videos and annotated each one with a sports category.
There are a lot of videos of people playing different sports on YouTube, and the sports categories in this dataset are really fine-grained — I don't actually know what all of them are. Here are some example frames from videos in the dataset: blue shows the ground-truth sports category for each video, and below it are the top five predictions from a model trained on this classification task. For example, in the first one — the text may be too tiny to read — the correct label is track cycling, but the top model prediction was just cycling, which is wrong; the second prediction was track cycling, the correct one, and the next predictions were road bicycle racing, marathon, and ultramarathon, all different fine-grained sports in this dataset. The second example is interesting: the correct label was ultramarathon, and the next predicted labels were half marathon, running, and marathon. I don't know how a model is supposed to tell the difference between a full marathon and a half marathon from a short video clip, but somehow this model managed it on this clip, which is pretty surprising to me. All of this is to say that this is a really challenging dataset, since it involves really fine-grained distinctions between types of sports. We can now compare a single-frame version of the model, an early fusion model, a late fusion model, and a 3D CNN, and what is shocking here is just how well the single-frame baseline does.
If we just train our favorite two-dimensional CNN on independent frames of these videos, we can already get over 77% accuracy, which is shocking for such a simple single-frame baseline. So again: always try single-frame baselines first whenever you are confronted with a video classification problem. As we move from single frame to early fusion to late fusion to the 3D CNN, we see that early fusion actually works a little worse than the single-frame baseline, which is a little surprising, while late fusion and the 3D CNN do a little better than the single-frame approach — but not massively so, which is again evidence of how strong the single-frame model is as a baseline for video recognition tasks. Of course, if you read the tiny citation in the lower left corner of the slide, you will see this paper is from 2014, which is quite a long time ago: in 2014 people were not even using GPU clusters at Google to train their models — these models were trained on CPU clusters — and there is a great line in the paper saying it took all the models a month to train. The state of the art in convolutional architecture design has advanced a lot since 2014, so these results are a bit old and should be taken with a big grain of salt. People have since made continual advances in both 2D and 3D CNN architectures. One example of an improved 3D CNN architecture is the very famous C3D model ("Convolutional 3D"), which is basically the VGG of 3D CNNs. Recall that VGG was a very simple 2D convolutional network consisting entirely of 3x3 convolutions and 2x2 poolings — conv, conv, pool; conv, conv, pool — and C3D is the three-dimensional analog of that architecture: it consists entirely of 3x3x3 convolutions and 2x2x2 poolings, with the slight exception that the very first pooling pools only in space and not in time. It is a pretty straightforward architecture, and it was particularly influential because, unlike the previous Google paper, the C3D authors released the pre-trained weights of their model. As a result, a lot of people who could not afford to train video models on their own data would use the pre-trained C3D model as a fixed video feature extractor for their downstream video tasks — just as it was common in images to pre-train VGG on ImageNet and reuse the features for other tasks, in video it was common to take C3D, pre-trained on a large video dataset, and reuse its features for other video tasks. The problem is that C3D is shockingly computationally expensive. 3D convolution is really expensive because we need to take a three-dimensional kernel and slide it over an entire three-dimensional spatio-temporal grid, so everything scales cubically, which is a really bad way for your models to scale. If we compute the number of floating-point operations needed for a forward pass of C3D and sum over the entire model, we see that it is really, really expensive.
Even though this model takes a very tiny, low-resolution input — 16 frames in time at 112 by 112 spatial resolution per frame — the total computational cost is almost 40 gigaflops for a single forward pass. For comparison, VGG-16 is about 13.6 gigaflops, so C3D is almost three times as expensive as VGG; and AlexNet, on 224 by 224 image inputs, was only about 0.7 gigaflops, so C3D is way, way more expensive than something like AlexNet. That is a common problem with these video models: any time you use 3D convolution, the models quickly become very computationally expensive. That said, C3D did a pretty good job of pushing up the state of the art on the Sports-1M classification task. It is a similar story to what we saw in images: for a period of time, people built ever bigger, deeper, more computationally expensive image models, and that led to improvements in 2D image recognition; here, building a bigger, deeper, more expensive video model likewise led to higher classification accuracy on this large-scale video benchmark. So there is a question of the best way to build a convolutional network architecture that works over both space and time. One option is to keep scaling up C3D — make it bigger and deeper, add residual connections and batch normalization and all the stuff we know works well in images. But I think there is another interesting approach, which is to think a bit more deeply about the fact that space and time maybe should be treated differently in our models.
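To get a feel for why 3D convolution scales so badly, here is a back-of-the-envelope FLOP count for a single 3D conv layer. The layer shape is an illustrative C3D-style first layer, not the paper's exact specification, and multiply-adds are counted as 2 FLOPs:

```python
# Rough FLOP count for one 3D convolution layer.
def conv3d_flops(c_in, c_out, k, t, h, w):
    # each of the c_out * t * h * w outputs costs c_in * k^3 multiply-adds
    return 2 * c_out * t * h * w * c_in * k ** 3

# 3x3x3 conv, 3 -> 64 channels, on a 16-frame 112x112 clip:
first_layer = conv3d_flops(3, 64, 3, 16, 112, 112)
print(first_layer / 1e9)   # about 2 GFLOPs for this one layer alone
```

With the k**3 factor in the cost and many such layers stacked up, it is easy to see how a full 3D network on even this tiny input reaches tens of gigaflops.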
When we use 3D convolution and 3D pooling, the model treats space and time as essentially interchangeable, because the way we process both of them is exactly the same. Instead, we might want to think carefully about ways to represent space and time differently, and one interesting way is to represent motion more explicitly inside our neural network models. To motivate why representing motion might be a good idea, it is interesting to realize that humans can recognize a lot using only motion. With just a few moving dots, you get some sense of what is going on in a video sequence — anyone want to guess what is going on there? This one is super clear: that is definitely a person walking. This one is climbing and waving; this one is maybe riding a unicycle. This is kind of amazing: human brains must be doing something different when processing motion versus processing visual appearance, because all of us have no trouble recognizing the actions in these videos from very low-level motion cues alone. It turns out we do not need to see the images or the pixels at all to tell what actions these people are performing, or even to recognize that they are people. Motivated by the fact that humans seem able to do a lot with just motion information, there is a class of neural network architectures that try to represent motion explicitly as a primitive inside the network. To make this concrete, we need to talk about how to measure motion quantitatively with computers, and one common way is optical flow.
Optical flow takes as input a pair of adjacent video frames and computes a flow field, or displacement field, between them: for every pixel in the first frame, it gives a vector displacement telling where that pixel will move in the second frame. There are many algorithms for computing this, and many details about optical flow that we will not go into here; the point is simply that algorithms exist for computing optical flow on input frames, and that optical flow gives us local motion cues about how pixels move between adjacent video frames. Optical flow highlights local motion in the scene. In this visualization, the top shows the horizontal component of the optical flow and the bottom shows the vertical component: given these two input frames of a person drawing a bow and arrow, the optical flow highlights the local motion of the person's arm and of the bow between the adjacent frames. Optical flow is thus a very low-level motion signal that we can feed to CNNs, letting them more explicitly disentangle the appearance of what is in the video from the motion of how the pixels move. That leads us to a very famous architecture for video recognition called the two-stream network. A two-stream network has two parallel convolutional network stacks. The top stack is the spatial stream, which processes the appearance of the input video: because we know the single-frame baseline is so strong, the spatial stream takes a single frame from the video clip and, from that single frame, tries to predict a classification distribution over all the categories we care about.
The lower stream is the temporal stream, which processes only motion information. Given a clip of T video frames, we compute optical flow between every adjacent pair of frames, which gives us T − 1 optical flow fields; each flow field contributes two channels, one for x and one for y. We concatenate all of these into a single big tensor with 2(T − 1) channels — stacking the flow fields along the channel dimension — and then use an early fusion approach in the temporal stream, fusing all the flow fields in the first convolutional layer and using normal 2D CNN operations for the rest of the network. The temporal stream is trained to independently predict classification decisions from the motion information alone. At test time, the spatial stream predicts one distribution over classes, the temporal stream predicts another, and we simply average the two probability distributions. Yes, question? The question is where the 2 comes from in the stack of optical flow. It is because optical flow gives a displacement vector in the image plane: at every point in the image, optical flow is a two-dimensional vector with an x component and a y component, and that gives us the 2. So how does this two-stream network do?
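The two shape conventions just described — the 2(T − 1)-channel temporal input and the test-time averaging of the two streams — can be sketched as follows. The random flow values and per-stream scores are hypothetical stand-ins:

```python
import numpy as np

# Temporal-stream input in a two-stream network: T frames give T-1 flow
# fields, each with an x and a y channel, stacked to 2*(T-1) channels.
T, H, W = 16, 224, 224
rng = np.random.default_rng(0)
flow = rng.standard_normal((T - 1, 2, H, W))       # (T-1) flow fields, x and y
temporal_input = flow.reshape(2 * (T - 1), H, W)   # early-fusion input tensor

# At test time, each stream predicts a class distribution and we average.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

spatial_scores = np.array([1.0, 2.0, 0.5])         # hypothetical per-stream scores
temporal_scores = np.array([0.2, 2.5, 1.0])
fused = 0.5 * (softmax(spatial_scores) + softmax(temporal_scores))
```

Because each stream outputs a valid probability distribution, their average is also a valid distribution, so the fusion step needs no extra normalization.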
I have done a bit of a bait and switch on you — this is actually a different dataset than in the previous slides — but looking at the two-stream network's results, we see that the temporal stream alone actually outperforms the spatial stream. So this idea of recognizing activities from motion is not particular to our own human brains: neural networks are also pretty good at recognizing activities using only motion information. And when we fuse the appearance stream and the temporal stream, we get a slight further improvement over using the temporal stream alone. At this point, we have seen a bunch of different ways of modeling short-term structure with different CNN architectures — 2D convolution, 3D convolution, and optical flow as ways to process the temporal dimension of our input videos. But all of these operations are very local in what they can do: early fusion just did not work that well, 3D convolution looks at only a tiny receptive field in time, and optical flow only looks between adjacent frames to compute motion. What if we want to build networks that can recognize structure that is more distant in time? Then we need some other kind of operator. It turns out we have already seen one type of neural network architecture in this class that handles very long-term sequence information: recurrent neural networks. So why don't we combine these two ideas?
The idea is that we have our video stream in time at the bottom, and we apply some kind of CNN — either a 2D CNN or a 3D CNN — to extract local features at every point in time; then we use some kind of recurrent neural network to fuse information temporally, given the outputs of those per-timestep CNNs. We could use a many-to-one setup if we want a single classification decision at the end of the video, using the final RNN hidden state to make a prediction over the whole sequence, or a many-to-many setup, making a prediction at every point in time. It turns out there was a paper all the way back in 2011 that proposed this exact architecture — 3D CNNs to extract spatio-temporal information locally, fused over the long term with an LSTM. I think that paper was way ahead of its time and people should cite it more, but the paper people tend to associate with this idea is from 2015 and got a bit more recognition. Another thing to point out: one trick when combining CNNs and RNNs for processing videos is to backpropagate only through the RNN. Remember that with C3D we talked about people using it as a fixed feature extractor; we can do the same thing in the CNN + RNN setup. We can take a C3D model pre-trained on a video dataset, use it to extract features at every point in the video, and run an RNN on top of those pre-extracted features. That gets us around the memory constraint: if we only use the CNN as a feature extractor, we do not need to backpropagate into it, and we can train these models over very wide time windows, which is an appealing approach.
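The "frozen CNN features + RNN" recipe can be sketched in a few lines. The precomputed features, shapes, and weights below are illustrative stand-ins for the outputs of a pre-trained clip encoder, and note that no gradient ever needs to flow back into the CNN:

```python
import numpy as np

# Pretend per-timestep features from a pre-trained encoder are precomputed,
# then a vanilla tanh RNN fuses them over time.
T, D, H = 12, 64, 32
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, D))        # one precomputed feature per timestep

Wxh = rng.standard_normal((H, D)) * 0.1
Whh = rng.standard_normal((H, H)) * 0.1
h = np.zeros(H)
for x in feats:                            # recurrent fusion over time
    h = np.tanh(Wxh @ x + Whh @ h)

# Many-to-one: classify the whole video from the final hidden state.
Wout = rng.standard_normal((5, H)) * 0.1
video_scores = Wout @ h
```

For the many-to-many variant, one would simply apply `Wout` to the hidden state at every timestep instead of only the last one.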
Okay, so now we've seen two different approaches for merging temporal information: one uses operations inside the CNN that merge information locally, and the other uses some kind of recurrent network to fuse information at the top, after the CNN runs. Is there some way to combine both approaches? It turns out there is, and we can take another inspiration from recurrent neural networks by looking back at the idea of a multi-layer RNN. Remember that in a multi-layer RNN we had this two-dimensional grid where, at every point in the grid, the vector depends both on the vector from the previous layer at the same time step and on the vector from the previous time step at the same layer, and we could process sequences of arbitrary length by sharing weights over the sequence. We can do the exact same thing with convolutional networks and build a multi-layer recurrent convolutional network. Now we build up a two-dimensional grid of features where every point in the grid is a three-dimensional tensor with two spatial dimensions and one channel dimension, and at every point in the grid the feature tensor is computed by combining the features from the previous layer at the same time step with the features from the previous time step at the same layer, fusing the two with some kind of convolution operation. To see exactly how we might fuse them, remember that in a normal 2D CNN we input one 3D tensor, run a 2D convolution, and output another 3D tensor. Now we want to build a recurrent convolutional network that takes as input two 3D tensors: one gives us features from the same layer at the previous time step, and
the other gives us features from the previous layer at the same time step. We want to fuse these using some RNN-like recurrence formula that gives us the features for the current layer at the current time step, and this structure looks very similar to the one we saw when using recurrent neural networks. To get the exact functional form, we can again copy and paste from our favorite recurrent networks. Recall that in a vanilla tanh RNN we take our two inputs, project each of them with a separate learned weight matrix, add the two results, and squash with a tanh, and this gives us the next hidden vector. To convert this vector version of an RNN into a recurrent convolutional form, we replace all the matrix multiply operations with 2D convolution operations instead. So given our two feature tensors, one from the same layer at the previous time step and one from the previous layer at the same time step, we project each of them to a new feature tensor with a 2D convolution, sum them, and squash the result with a tanh, and that gives us the output feature tensor at the next time step. And we could do this with any RNN architecture: take your favorite, like a GRU or an LSTM or something else, and translate it from an RNN that operates on vectors into one that operates with convolutions over three-dimensional feature maps, simply by converting all the matrix multiplies into convolutions.
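A minimal NumPy sketch of one step of this recurrent convolutional cell (toy shapes are mine, and a naive loop stands in for a real convolution layer): the vanilla tanh recurrence with both matrix multiplies replaced by same-padded 2D convolutions.

```python
import numpy as np

def conv2d(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3), zero padding for 'same' output
    C_in, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + 3, j:j + 3]        # (C_in, 3, 3)
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
Wx = rng.standard_normal((C, C, 3, 3)) * 0.1       # input-to-hidden kernel
Wh = rng.standard_normal((C, C, 3, 3)) * 0.1       # hidden-to-hidden kernel

def conv_rnn_step(x_t, h_prev):
    # vanilla tanh RNN update with the matrix multiplies replaced by 2D convs:
    # h_t = tanh(conv(Wx, x_t) + conv(Wh, h_prev))
    return np.tanh(conv2d(x_t, Wx) + conv2d(h_prev, Wh))

h = np.zeros((C, H, W))
for _ in range(5):                                 # unroll 5 steps, shared weights
    x_t = rng.standard_normal((C, H, W))
    h = conv_rnn_step(x_t, h)
```

The same substitution (matrix multiply to convolution) would turn a GRU or LSTM cell into its convolutional counterpart.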
To contrast: the previous approach on the left combined two different ways of modeling spatial and temporal information, since with a CNN followed by an LSTM the CNN models local structure between adjacent frames and the RNN processes long-term global structure. With this recurrent CNN, it's as if every neuron inside the convolutional network is itself a little recurrent network, so spatial and temporal fusing happen together in a very nice way inside every layer. This is actually a very beautiful idea, I think, but it turns out people have not used it much in practice. To see why, note that even though the idea is beautiful, it's not very computationally effective: RNNs are very slow for long sequences, because they force you to compute the previous time step before you can compute the next one. That means RNNs just don't parallelize well over very long sequences, and I think that's why this kind of recurrent CNN architecture hasn't been used much in practice. If you think about ways to solve this problem, you should think back to our discussions in previous lectures about different ways to model sequences and their pros and cons. In a previous lecture we talked about several approaches for processing long sequences. One was recurrent neural networks, which are nice because they're good at modeling long-term temporal information but bad because of their sequential processing; in the video domain, that's equivalent to the architecture of a CNN followed by an LSTM. Then we had this
other approach of 1D convolution for processing sequences, which was nice because it's computationally effective: we can parallelize the computation over the sequence, and at training time we no longer have the sequential dependence on time steps. In video, the equivalent of this is 3D convolution. But recall that we saw yet another mechanism for processing sequences: self-attention. Self-attention is really good at processing long sequences because it computes attention between all pairs of vectors, and it's highly parallelizable because, unlike with RNNs, there's no temporal dependency in the computation. So the question is whether we can apply this idea of self-attention to process video sequences as well, and of course the answer turns out to be yes. A quick reminder on self-attention: the input is a set of vectors, and for every input vector we predict keys, queries, and values using learned linear projections. We compute an affinity matrix that tells us how much every pair of vectors is related, by taking the dot product between keys and queries, scaled by the square root of the dimension, and the output for each vector is a weighted linear combination of the values, weighted by the entries of the affinity matrix. Now we can port this whole architecture over into 3D: we process part of our network with a 3D CNN, which gives us a four-dimensional tensor with a channel dimension, a temporal dimension, and two spatial dimensions, and we can interpret this tensor as a set of vectors of dimension C, where the number of vectors is T
times H times W. Then we can run the exact same self-attention operator on the set of vectors from some layer of a 3D convolutional network. We actually saw this exact slide in a previous lecture, just with 2D convolution instead of 3D convolution. The idea is that we receive a set of input features from a 3D CNN; we compute keys, queries, and values using 1x1x1 convolutions; we compute an inner product between every key and every query and take a softmax over one of the dimensions, giving us a big affinity matrix, a set of attention weights; and we use these to linearly weight the values and sum them with a matrix multiply. It's also very common to project the output of this operation with another 1x1x1 conv and to wrap the whole thing in a residual connection. Putting all of this together gives us a block we can slot into our 3D CNNs that computes a kind of spatiotemporal self-attention; it's sometimes called a non-local block, after the paper that introduced it. A neat trick is that if we initialize the last conv layer to all zeros, then everything inside the operator computes zeros, which means the whole block computes the identity because of the residual connection. So a common trick with these non-local blocks is to initialize them to compute the identity function by zero-initializing that last layer; then we can take a non-local block that computes the identity, insert it into an existing pre-trained 3D CNN, and continue fine-tuning with the inserted block. That's a trick people use.
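The non-local block just described can be sketched in a few lines of NumPy (toy shapes and weight names are mine; on a (C, T, H, W) tensor, 1x1x1 convolutions reduce to per-position matrix multiplies over the channel dimension). Note how zero-initializing the final projection makes the block exactly the identity at insertion time.

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, H, W = 8, 4, 6, 6

# 1x1x1 convs on a (C, T, H, W) tensor are matrix multiplies over channels
W_theta = rng.standard_normal((C, C)) * 0.1   # query projection
W_phi   = rng.standard_normal((C, C)) * 0.1   # key projection
W_g     = rng.standard_normal((C, C)) * 0.1   # value projection
W_z     = np.zeros((C, C))                    # final projection, zero-initialized

def softmax(a, axis):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x):                       # x: (C, T, H, W)
    v = x.reshape(C, -1)                      # a set of T*H*W channel vectors
    theta, phi, g = W_theta @ v, W_phi @ v, W_g @ v
    attn = softmax(theta.T @ phi, axis=1)     # (THW, THW) affinity matrix
    out = g @ attn.T                          # attend over all space and time
    return x + (W_z @ out).reshape(x.shape)   # residual connection

x = rng.standard_normal((C, T, H, W))
y = non_local_block(x)
# with W_z all zeros the block is exactly the identity, so it can be dropped
# into a pre-trained 3D CNN without changing its behavior before fine-tuning
assert np.allclose(y, x)
```

As W_z moves away from zero during fine-tuning, the block smoothly starts contributing global spatiotemporal context on top of the pre-trained features.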
So it looks something like this: we have some kind of 3D CNN and we slot in these non-local blocks at different points in it. The 3D CNN chunks do slow fusion over space and time, and each non-local block gives us global fusion over all of space and all of time. This is a pretty powerful, pretty close to state-of-the-art architecture for video recognition. But while non-local blocks give us this powerful way to introduce global temporal processing into our 3D convnets, we still need to choose some 3D CNN architecture to insert the non-local blocks into. So what is actually the best 3D CNN architecture to build? One really cool idea is to inflate existing two-dimensional networks from 2D into 3D. We know there's been a lot of effort put into designing really good two-dimensional CNN architectures, and we wouldn't want to repeat all of that effort as a community or start over from scratch inventing the best 3D CNN architectures. Instead, what if we could take a 2D CNN architecture that we know works well on images and adapt it in a very tiny way so that it also works on video? That's the idea of inflating a two-dimensional convolutional network: blowing it up like a balloon and adding a third dimension to an existing network architecture. The recipe is to take an existing two-dimensional CNN architecture with 2D convolutions and 2D pooling operations, and replace each 2D convolution with a 3D convolution and each 2D pooling with a 3D pooling. The only choices
that we need to make are the temporal extents of those inflated kernels. Here's an example using the Inception block: the paper that introduced this idea was from Google, and the Inception architecture was from Google, so of course they had to apply it to their homegrown network architecture. The 2D Inception module has these parallel branches of convolution and pooling and then concatenates the results after the parallel processing, and after inflating to three dimensions, all we do is add an additional temporal dimension to each convolution and each pooling operation. This gives us a super simple recipe for taking a 2D architecture and applying it in 3D. But this is just about transferring architectures: so far the trick lets us take an architecture that works on 2D images and mechanically change it so it can be applied to 3D video. It turns out we can go a step further and transfer not only the architecture but also the trained weights, using weights trained on images to initialize the inflated version of our 3D CNN. To see how that works, remember that when we inflate the CNN we add an extra temporal dimension to all of the convolution layers. So to initialize the weights of the inflated architecture from the image version, we copy the convolutional kernel of each convolution layer T times in time; we're not only adding the extra temporal dimension, we're copying the weights of each convolutional layer in time, just duplicating
them in time. After you duplicate them, you divide by the duplication factor. The reason we do this is that if you imagine taking an original input image and making the world's most boring video by copying that image many times in time, you get a trivial, constant input video; running the 3D convolution with duplicated weights on the boring video then computes the exact same result as running the 2D convolution on the original image. You can think through what's going on, but basically this works because convolution is a linear operator: when we duplicate the frames in time, we can duplicate the weights in time and divide by the duplication factor, and we end up computing the exact same thing. This super cool trick lets us pre-train a model on images, inflate the architecture and the weights, and then continue fine-tuning the model on a video dataset. Moreover, it lets us recycle all the existing architectures we know work well on images, and all the pre-trained models we have lying around that work well on images. And this works quite well in practice: on yet another video dataset, called Kinetics-400, the blue bars show training from scratch on the video data and the orange bars show models initialized from an image model. Moving from a per-frame baseline to the CNN-LSTM model to the two-stream CNN gives steady improvements, but moving from the two-stream CNN to an inflated CNN does much better.
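The inflation recipe is easy to check numerically. Below is a small NumPy sketch (helper names are mine) that inflates a single 2D kernel by copying it T times and dividing by T, then verifies that on a "boring" constant-in-time video the inflated 3D convolution reproduces the original 2D convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4                                    # temporal extent of the inflated kernel
k2d = rng.standard_normal((3, 3))        # one 2D kernel from an image model
k3d = np.repeat(k2d[None, :, :], T, axis=0) / T   # copy T times, divide by T

img = rng.standard_normal((8, 8))
boring_video = np.repeat(img[None, :, :], T, axis=0)  # same frame repeated

def conv2d_valid(x, k):
    H, W = x.shape[0] - 2, x.shape[1] - 2
    return np.array([[np.sum(x[i:i + 3, j:j + 3] * k)
                      for j in range(W)] for i in range(H)])

def conv3d_valid_center(v, k):           # one temporal position, full kernel depth
    H, W = v.shape[1] - 2, v.shape[2] - 2
    return np.array([[np.sum(v[:, i:i + 3, j:j + 3] * k)
                      for j in range(W)] for i in range(H)])

# the inflated 3D conv on the constant-in-time video matches the 2D conv
assert np.allclose(conv3d_valid_center(boring_video, k3d),
                   conv2d_valid(img, k2d))
```

This is exactly the linearity argument from the lecture: summing T copies of the frame against T copies of the kernel divided by T leaves the result unchanged.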
So this gives us some empirical evidence that inflating 2D convnets in time is a pretty good way to generate good 3D CNN architectures. And it turns out two-stream networks are still a good idea: we can take two inflated networks, one for the appearance stream and one for the optical flow fields, and get a two-stream inflated network that works quite well. As a fun aside, now that we have these two-stream networks that are really good at classifying videos, we might want to visualize what these video models have learned, and we can use the exact same trick we used for visualizing models trained on images. We randomly initialize an input image and a flow field, compute a forward pass through the trained network to get the classification score, back-propagate to get the gradient of the score with respect to the input image and the flow field, and then use gradient ascent to find the image and flow field that maximize the classification score for a particular category. So what does this look like? In this example, on the left we show the optimized image for the appearance stream; in the middle, an optimized flow field with temporal constraints added to keep the flow from changing too fast in time; and on the right, the same thing with that constraint lifted, allowing the flow field to change faster. Can anyone guess the action in this video? I heard weightlifting, which is a good guess, and right. So this is actually pretty cool.
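The visualization procedure here is ordinary activation maximization: start from a random input and take gradient ascent steps on the input, not the weights, to increase a class score. A toy NumPy sketch (the linear-plus-regularizer scorer below is a stand-in of my own; in the real setting, the score and its gradient would come from a forward and backward pass through the trained video network):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64
w = rng.standard_normal(D)          # stand-in for the class direction; a real
                                    # model supplies score and gradient via backprop

def score(x):                       # class score plus an L2 regularizer on the
    return w @ x - 0.05 * (x @ x)   # input, as in standard activation maximization

def grad(x):                        # gradient of score with respect to the input
    return w - 0.1 * x

x = rng.standard_normal(D) * 0.01   # randomly initialized "input image"
history = [score(x)]
for _ in range(100):
    x = x + 0.5 * grad(x)           # gradient ascent on the input itself
    history.append(score(x))
```

The same loop, run separately on the appearance image and on the flow field, produces the kinds of optimized inputs shown on the slides.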
We see that the appearance stream is looking for barbells anywhere in the image, the slow-motion stream looks like it's looking for the bar wiggling at the top of the lift, and the fast-motion stream looks like it's looking for the part where the bar is pushed overhead. That's pretty exciting. Let's try it again: here's the appearance stream, here's the slow motion, and here's the fast motion. I don't think anyone's going to guess this one. Eyes? Eyes or face is a good guess; it's actually applying eye makeup. There are a lot of YouTube videos of people doing makeup tutorials, and the appearance stream is looking for eyes anywhere in the input image, the slow motion looks like maybe the motion of the head or the hands, and the fast motion looks like the local brushing motion of actually applying the makeup. I thought it was really cool that we can take all these techniques we know and love for visualizing images and just apply them to video sequences as well. I also wanted to briefly mention that these two-stream networks still rely on an external optical flow algorithm, and it would be really nice to relax that constraint and build networks that train on raw pixel values without relying on external optical flow. That brings us to the state-of-the-art SlowFast network, published just about a month ago, which is the current state of the art on a lot of video recognition tasks. The idea is that we still have two parallel network branches, as in the two-stream network, except that both of them operate on raw pixels; the difference is that they are going
to operate at different temporal resolutions. One branch, the slow branch, operates at a very low frame rate but is a very expensive network that uses a lot of channels at every layer of processing. To visualize this network we have three dimensions to play with: the spatial dimension of the input image, the channel dimension (the number of features at every layer of the network), and the temporal dimension (the frame rate at which the network is applied). So the slow branch uses a very low temporal frame rate but a lot of channels: it uses fewer frames but puts a lot of processing on each frame. The second branch, of course, is the fast branch, which operates at a higher frame rate but uses a much thinner network: a high temporal frame rate but only a little processing on each frame, with a very small number of channels at every layer. Moreover, there are lateral connections that fuse information from the fast branch back into the slow branch, so unlike traditional two-stream networks that only fuse information at the very end through averaging, SlowFast networks fuse information at multiple stages inside the network; at the very end there's a fully connected layer that makes the predictions. I'm not going to walk through this in detail, but this gives us the concrete architecture instantiation of SlowFast networks, and it combines all the techniques we've talked about in this lecture: it has two streams, one operating mostly on appearance and one on temporal information, and each of these streams is itself an inflated
ResNet-50 architecture, which brings us back to the idea of inflation: they take an existing ResNet-50 architecture that we know works well on images and inflate it in time to form each of the two streams, with the slow pathway operating at a very low frame rate and the fast pathway at a very high frame rate. It turns out we can also add non-local blocks to this model and get it to work really well. I don't want to go through this in detail, but I think this is the current state of the art in video understanding and video recognition for a lot of tasks, so if you're looking for the model to use today, I think it's this one. I also wanted to very briefly mention that so far we've talked mostly about the task of classifying very short video clips, taking maybe three to five seconds of video and predicting a classification score. That was a good task to motivate a lot of our spatiotemporal convnet architectures, but of course there are a lot of different video understanding tasks that people try to tackle with convolutional neural networks. As a very brief sample: another thing people sometimes want to do is temporal action localization. Here we're given a very long, untrimmed video sequence; there might be multiple activities that happen in the video, and we want the model both to detect all the activities and to tell us, for each activity, the span in time during which it occurred. You can imagine building an architecture similar to Faster R-CNN for this, and that's indeed what people have done: an architecture that does some
lightweight computation over the video sequence to get action proposals, instead of region proposals, telling us which regions of the video are likely to contain an action, and then a second stage, some kind of 3D convnet, that processes the raw pixels or the features of each of those proposals. So you can imagine building a Faster R-CNN-like architecture that works over time rather than over space. And of course, now that we can detect objects in space and detect actions in time, we've got to do both: there's also work on spatiotemporal action detection. Here the input is a long video sequence, and the task is to detect all the people in each video frame and also give the temporal span of the actions those people are performing. This is a super challenging task because it involves both spatial detection in each frame and temporal detection in time. One really exciting dataset for this spatiotemporal detection task is the very recent AVA dataset, published just last year, in 2018. They went and annotated a bunch of movies with different kinds of actions that people are doing, and this also brings in the problem of very long temporal dependencies: now the input is something like 15 minutes of video, not a little two-to-three-second clip, and you need both to detect all the people in those 15 minutes and to say what activities they're performing. There hasn't yet been a ton of work on this dataset, but I predict that in the next couple of years we'll see a lot of people moving to this kind of spatiotemporal activity detection on long, untrimmed video, because I think that's an exciting task that people will
probably start working on. So that was our whirlwind tour of video models today. We saw many different video classification models, building all the way up to SlowFast networks, which again were published just about a month ago and are, I think, the current state of the art; if you're looking to work with video, that's a good one to play with, and there's code available as well. That's our brief summary of video models. Next time we'll move to a completely different topic and start talking about generative models, and in particular about generative adversarial networks.
Deep_Learning_for_Computer_Vision
Lecture_22_Conclusion.txt
All right, welcome to lecture 22. We made it to the end of the semester, or at least I did; you guys still have finals and a couple more assignments to turn in. Today we'll give an overview, a recap, of all the major points we covered this semester, which will take about half the lecture, and in the second half I want to share some of my thoughts about the big open challenges facing computer vision and deep learning as we move beyond the content of this class. This semester we've really talked about deep learning for computer vision, and we've spent a lot of time on both deep learning and computer vision. To recap: zooming out from all the details we've been dealing with, computer vision is about building artificial systems that can process, perceive, and reason about various types of visual data. The big challenge in getting this to work is the semantic gap: we can easily look at these images and understand that it's a cat, but computers just see giant grids of numbers, and somehow we need to deal with that. This visual data is super complex: as we change our viewpoint or the illumination, or as our cats deform or hide under the couch, the pixel grids of these images can totally change, but our semantic understanding needs to remain the same. That's the big challenge we were dealing with in all of our computer vision applications. The solution we hit upon this semester was the data-driven approach: using machine learning to solve these problems. For all of the tasks we considered, all
the types of complex visual data we wanted to understand, we collected a dataset of images and labels and then used some machine learning algorithm to compress or distill the knowledge in that labeled dataset into some kind of classifier object, which for us was typically a neural network, and then used that classifier to make predictions on novel images. This machine learning paradigm has become the dominant approach in computer vision; it's just the way people solve most types of computer vision problems nowadays. And of course the model we're all very familiar with by now is the deep convolutional neural network, applied to a lot of different problems. But models alone are not enough to solve the computer vision problem: we need both models and large datasets on which to train them. The ImageNet dataset, which came out all the way back in 2009, was the super influential large-scale dataset that gave us data on which to train our machine learning models. Then around 2012 all the magic happened: we combined these large datasets with large, powerful convolutional neural network models and with the increasing speed of GPUs and other computing devices, and that led to lots and lots of progress in computer vision. As measured by progress on ImageNet, this dataset used to be considered super hard back in 2010, when error rates were really high, and 2017 was actually the end of the challenge, because error rates had become so good. This led to a massive explosion in all kinds of
computer vision research across the world. One of the main venues where computer vision researchers share their results is CVPR, the Conference on Computer Vision and Pattern Recognition. This is a photograph I took at CVPR 2019 over the summer, where in the opening ceremony they gave some statistics about how CVPR has grown as a conference over the past 20 years or so, and you can see there's been exponential growth in both the number of submitted papers (in blue) and the number of accepted papers (in green) at this top computer vision conference. It turns out this trend has continued even this semester: history has been made in computer vision while you guys have been taking this class. I took that picture at CVPR 2019, but the CVPR 2020 submission deadline was about a month ago, on November 15th, and I saw on Twitter some stats about the number of submitted papers: CVPR 2020 had more than 6,500 submissions. We'll see exactly how many of those get accepted, but I think it's safe to say that the exponential growth of computer vision as a research field has continued even as we speak. Despite all this success, we know that this deep learning stuff was not invented overnight; our success draws on a long history of a lot of really smart researchers who developed a lot of ideas throughout the past several decades, and we can see echoes of their work even in our modern deep learning systems. For example, if we look all the way back in 1959,
we saw the perceptron model from Frank Rosenblatt, which basically implemented a linear classifier, except that they did not have general-purpose programming languages: no Python, no PyTorch, no Google Colab. They had to build physical hardware to do these things, but in the mathematical formulation of what they were building we would now recognize a linear classifier. In computer vision, Hubel and Wiesel, also in 1959, were probing the visual representations that cat neurons use to recognize the visual world, which gave us the notion that oriented edges and motion are really important cues in the mammalian visual system. We have seen over and over again that these cues of motion, edges, and color show up whenever we try to visualize the representations our deep neural networks are learning.

Moving forward to 1980, we had the Neocognitron model from Fukushima, which looks a lot like a modern convolutional neural network, all the way back in 1980. Fukushima was directly inspired by Hubel and Wiesel's idea of a hierarchy of simple and complex cells in the mammalian visual system and wrote out a computational model to process visual data electronically, yielding an architecture very much like our modern convolutional networks. By 1998, when Yann LeCun published his now famous work on convolutional networks, these were convolutional networks in nearly their complete modern form. Moving ahead another 14 years, we got to AlexNet, the big breakthrough on the ImageNet dataset, and fundamentally it was not that different from LeCun's 1998 work. The difference was that we had faster computers, more data, and a few small tricks: ReLUs instead of sigmoids was a big one for AlexNet, and momentum rather than vanilla SGD. So AlexNet was in some ways a minor intellectual leap over all the work that came before, but it happened at an amazing moment in history; everything came together and led to the deep learning revolution of the past ten years. In recognition of all this influence, the 2018 Turing Award, the highest award in computer science, sometimes considered the Nobel Prize of the field, was awarded to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun for pioneering and popularizing deep learning methods. And of course the final point in the history of deep learning is fall 2019: this class.

So what did we cover in this class, concretely? We started off pretty simple, with some basic ways to use data-driven approaches to solve machine learning problems. We got familiar with k-nearest-neighbor classifiers, which led to discussions of train/test splits, hyperparameters, and other critical components of the machine learning pipeline. We talked about linear classifiers, our first parametric classifiers, where we write down a functional form that inputs the data x and outputs scores y telling us what to predict, and we train these things by learning a weight matrix W that fits the classifier to our dataset. That paradigm of writing down a parametric model and using data to fit its parameters is the same mechanical process we used for just about all the machine learning methods we saw this semester.

We talked about optimizing those things: once we have a W to find, we use gradient descent, or stochastic gradient descent, computing gradients and following the loss landscape downhill, where the loss quantifies how well the model is doing on the data. By running gradient descent we try to find model weights that work well for the dataset at hand. We saw that gradient descent has some problems: local minima, saddle points, poorly conditioned optimization problems, and stochasticity in the learning process. To overcome these we added tweaks like momentum and fancier optimizers like Nesterov momentum, AdaGrad, RMSProp, and Adam, and these simple changes really helped with the problems of the vanilla stochastic gradient descent formulation. And as I said, moving from SGD to SGD with momentum was in fact one of the small tricks that differed between LeCun's ConvNets in 1998 and AlexNet in 2012.
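Since the momentum trick is only described verbally here, a minimal sketch of the difference between vanilla SGD and SGD with momentum on a toy 1-D quadratic loss may help. The loss, learning rate, and momentum coefficient below are illustrative assumptions, not values from the course.

```python
# Toy 1-D quadratic loss L(w) = 0.5 * a * w**2, whose gradient is a * w.

def sgd_step(w, grad, lr=0.05):
    # vanilla SGD: step directly along the negative gradient
    return w - lr * grad(w)

def momentum_step(w, v, grad, lr=0.05, rho=0.9):
    # SGD + momentum: accumulate an exponentially decaying velocity
    v = rho * v - lr * grad(w)
    return w + v, v

a = 10.0                   # curvature of the toy loss
grad = lambda w: a * w

w_plain, w_mom, v = 1.0, 1.0, 0.0
for _ in range(50):
    w_plain = sgd_step(w_plain, grad)
    w_mom, v = momentum_step(w_mom, v, grad)

# Both optimizers drive w toward the minimum at w = 0; momentum
# overshoots and oscillates on the way but still converges.
print(w_plain, w_mom)
```

On better-behaved (less toy) loss surfaces with ravines and noise, the velocity term is what lets momentum power through small local bumps and damp zig-zagging.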
These tweaks were actually pretty important in the modern history of getting deep learning to work on large datasets. Once we had optimization in hand, we moved on to neural network models. Our first neural networks were fully connected networks, which followed the same idea as linear classifiers: write down a functional form that inputs the data x and outputs the scores y, and learn the weights on the dataset using gradient descent. One interpretation of these fully connected models is that they learn a bank of reusable templates in the first layer and then recombine those templates in the second layer to recognize the different categories they need to represent. That gave us a hint that fully connected networks do something very powerful compared to linear classifiers, since they can recombine templates across categories. We made this intuition more formal with the notion of universal approximation: a fully connected network with a single hidden layer and a regular nonlinearity can approximate any continuous function, subject to various mathematical constraints. That gave us powerful theoretical intuition that neural networks are a very flexible class of function approximators, usable for many problems in computer vision and beyond.

Fully connected networks are theoretically powerful, but we wanted to return to computer vision and think about how to build neural network models specialized to the spatial structure of the problems that arise there. That led to our discussion of convolutional neural networks, where we augmented fully connected layers and activation functions with additional differentiable operators: the convolution operator, which maintains the spatial structure of input images and shares weights across spatial positions; pooling layers, which reduce the spatial size of the representation and introduce additional invariances into the model; and normalization layers, like batch normalization, instance normalization, and group normalization, which allow deep models to be trained and optimized much more efficiently.

Once we have this set of components, it is tricky to know the right way to put them together, so we talked about classical architectures in computer vision: different paradigms for stacking these basic building blocks into high-performance models for computer vision tasks. We covered the AlexNet architecture from 2012, then bigger and bigger models like VGG and GoogLeNet, and then ResNets, which finally let us train models hundreds of layers deep. We also saw that a big trend in architecture design over the past year or two has been a focus on efficiency: now that we know how to build big, high-capacity models like residual networks and train them on large datasets, a big goal is not just the best performance we can get, but the most efficient model that still achieves high performance.
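The weight-sharing idea at the heart of the convolution operator can be sketched in one dimension: the same small kernel slides across the whole input, so every spatial position is processed with the same weights. The input and kernel values below are made up for illustration.

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (really cross-correlation, as in deep
    learning libraries): slide the kernel across x, applying the same
    shared weights at every position."""
    k = len(kernel)
    return [sum(kernel[j] * x[t + j] for j in range(k))
            for t in range(len(x) - k + 1)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
edge_kernel = [-1.0, 1.0]      # a tiny finite-difference "edge detector"
print(conv1d(x, edge_kernel))  # the ramp input has constant slope 1
```

The 2-D version used on images works the same way, just sliding over two spatial dimensions and summing over input channels as well.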
People have become much more sensitive to building models that are not only accurate but efficient, models that could actually be deployed on mobile devices or run inference across server farms at big tech companies. Architectures like ResNeXt and MobileNets focus on improving the efficiency of convolutional network architectures, often by playing tricks with the convolution operator itself. The ResNeXt architecture, for example, uses grouped convolutions, which give multiple parallel branches of convolution inside the model and improve its overall computational efficiency.

Once we have these very deep, very complicated neural network models, we need some way to think about them more formally as mathematical objects, and one way to do that is the notion of a computational graph. Rather than writing a neural network down as one giant equation full of learnable weight values, we represent it as a graph data structure, where each node is a primitive operator in the model and the edges pass data between these differentiable operators. This lets us easily represent even very complicated models. We then saw the backpropagation algorithm, which efficiently computes gradients in computational graphs of arbitrary complexity. The beauty of backpropagation is that it takes the global problem of computing gradients in an arbitrarily complex graph and converts it into a local problem: each node only needs to know how to compute its own local derivatives, receiving upstream derivatives and passing downstream derivatives to the left, without caring about the global topology of the graph into which it is embedded.

Building on backpropagation, we talked about the practical hardware and software platforms people use to run these deep neural network stacks. On the hardware side there has been a shift from CPUs, back in 1998 when Yann LeCun was training his models, to GPUs, one of the key components in the success of AlexNet in 2012, and more recently to specialized hardware for neural network computing, like the tensor processing units (TPUs) from Google. There have also been some really powerful software systems built for these big neural networks. I think you are all experts in PyTorch by now, since you have been using it for all your homework assignments, and there is another deep learning framework called TensorFlow, from Google, that you will see out in the world as well. One of the top-level differences between the two frameworks is the idea of static versus dynamic computational graphs, different software paradigms for organizing the computational-graph abstraction inside these systems.

I always thought this topic was fascinating: we set out to solve computer vision, a particular application domain, and ran into this new technique called convolutional networks; but then, to make convolutional networks really good, we had to branch out into other areas of computer science.
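The local-gradient view of backpropagation described earlier can be sketched on a tiny made-up graph, z = (x * y) + x: each node combines the upstream gradient with its own local derivative and knows nothing about the rest of the graph.

```python
def forward_backward(x, y):
    # forward pass through the graph
    m = x * y               # multiply node
    z = m + x               # add node
    # backward pass, starting from dz/dz = 1
    dz = 1.0
    dm = dz * 1.0           # add node: local derivative w.r.t. m is 1
    dx_from_add = dz * 1.0  # add node: local derivative w.r.t. x is 1
    dx_from_mul = dm * y    # multiply node: local derivative w.r.t. x is y
    dy = dm * x             # multiply node: local derivative w.r.t. y is x
    dx = dx_from_mul + dx_from_add   # gradients sum over branches
    return z, dx, dy

print(forward_backward(2.0, 3.0))   # (8.0, 4.0, 2.0)
```

Note how x feeds two nodes, so its gradient accumulates over both paths; that accumulation rule is exactly what autograd systems implement at scale.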
And then we had to build new hardware platforms on which to run those algorithms, and think about new ways to organize complicated software systems so we could build those powerful algorithms. I always found this topic quite interesting, since it forces us to think about how computer vision and machine learning interact with a whole collection of different sub-areas within computer science.

Then we got back to the machine learning problem itself and talked about a lot of nitty-gritty details of actually getting your convnets to work. You saw a bunch of different activation functions. We talked about data preprocessing, which matters for getting networks to learn efficiently, and about weight initialization: it turns out that the way you initialize the weights in your models is really important, and now you know the right ways to do it. There is data augmentation, which lets you bake additional invariances into your models by artificially expanding the dataset at training time, and which can be really important for building high-performance models.

We saw a bunch of regularization techniques. Regularization comes from the idea that we do not want models that only work on the training set; we want models that generalize to new images they never saw during training. Regularization constrains the capacity of the model, perhaps making it work a little worse on the training set so that it works better on the test set, the images we really care about in the end. A really common paradigm for regularizing deep networks is to add some kind of stochasticity or randomness to the processing of the model at training time and then average out that randomness at test time. Fractional pooling, dropout, DropConnect, and even batch normalization all fit this paradigm, among a whole bunch of other techniques for regularizing neural network models.

It turns out learning rates are important too, so we saw different schedules for changing the learning rate over the course of optimization, and we talked about mechanisms for choosing hyperparameters. Hopefully, having trained a bunch of models by now, you have gained some intuition: you can often tell whether your models are working well just by looking at the learning curves of the loss and the training and validation accuracies, and those curves can suggest what you might change to make the model work better. As for hyperparameter search, you felt the pain on your homeworks; it is tough to choose good hyperparameters, but hopefully going through that process yourself was enlightening.

After all these tips and tricks we were pretty good at training neural network models, and at that point we knew pretty much how to train state-of-the-art models for image classification. Armed with that knowledge, we started to branch out and consider other applications of these deep learning models. One problem is understanding what it is our neural network systems have actually learned, so we talked about techniques for visualizing and understanding what a trained network has learned on different computer vision datasets. And we saw that some of those same techniques enable fun applications, like making artwork.
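The learning-rate schedules mentioned above can be sketched concretely. Two common ones are step decay and cosine decay; the constants below (initial rate, decay factor, horizon) are illustrative assumptions, not recommended values from the course.

```python
import math

def step_decay(lr0, epoch, drop=0.1, every=30):
    # multiply the learning rate by `drop` every `every` epochs
    return lr0 * drop ** (epoch // every)

def cosine_decay(lr0, epoch, total_epochs):
    # smoothly anneal the learning rate from lr0 down toward 0
    return 0.5 * lr0 * (1 + math.cos(math.pi * epoch / total_epochs))

print(step_decay(0.1, 65))           # dropped twice: 0.1 -> 0.01 -> 0.001
print(cosine_decay(0.1, 50, 100))    # halfway through training: about lr0 / 2
```

In practice you would query one of these at the start of each epoch and hand the result to your optimizer.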
The DeepDream and neural style transfer algorithms let us use feature-visualization techniques to generate beautiful pieces of artwork. From there we got even more adventurous and considered applications to non-visual data. We talked about recurrent neural networks, a general mechanism for learning to process sequential data with deep learning architectures, which opened up a whole new realm of possible applications in computer vision and beyond. We saw concrete architectures for recurrent units: the vanilla RNN, which you implemented in your homework, and the LSTM, which is much more robust.

One application I always thought was really fun is image captioning, where we teach a neural network system to write natural-language descriptions of images by combining a big convolutional network that processes the image features with a recurrent network that spits out the language. You implemented this on your homework; it is a nice application combining computer vision, via convolutional networks, with natural language processing, via recurrent networks. We saw that this basic captioning recipe could be improved with the idea of attention: at each processing step, the system looks at different parts of the image through a soft, differentiable attention mechanism, which also adds some interpretability to the captioning system.

But attention turned out to be more than a captioning trick; it is a general mechanism for processing sets of data. Generalizing the notion of attention led us to the self-attention layer, which inputs a set of vectors, computes attention across all vectors in the set, and outputs a new set of vectors. We saw this idea of self-attention over and over for different applications: attention mechanisms augmenting recurrent networks for captioning, notions of attention in video classification networks, and self-attention in big generative models, where adding it also improves performance. The self-attention layer is really important and general, a genuinely interesting new basic component of deep learning architectures that has come to the forefront in the last couple of years. And to really drive home the idea of attention as a basic building block, we saw the Transformer architecture: it turns out attention is all you need, and you can build really high-performance systems for natural language processing using only self-attention as the main computational primitive.

Then we moved back to computer vision and talked about a number of more involved tasks. We covered object detection (you have your object detection homework due today, so you should know all about single-stage and two-stage methods), where we want systems that draw boxes around objects in images, and we saw different ways of hooking up convolutional networks to solve those problems for us. Semantic segmentation was another way of adding spatial information to our computer vision tasks.
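The self-attention layer just described can be written out minimally: input a set of vectors, compute scaled dot-product attention among all of them, and output a new set of vectors. For clarity the learned query/key/value projections are replaced by the identity here, a simplifying assumption rather than the full layer.

```python
import math

def softmax(xs):
    m = max(xs)                     # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    d = len(X[0])
    out = []
    for q in X:                     # every vector in the set attends to the set
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]       # scaled dot products
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])   # weighted sum of the values
    return out

Y = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(len(Y), len(Y[0]))   # same set size and vector dimension as the input
```

Because each output is a weighted average over the whole input set, the layer is permutation-equivariant, which is exactly what makes it a natural primitive for sets and sequences.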
In semantic segmentation we want to label every pixel of the input image with one of our category labels, and we saw that we can do this with a fully convolutional network that has both learnable downsampling layers, like strided convolution, and learnable upsampling layers, like transposed convolution or various types of interpolation. We then combined the ideas of object detection and semantic segmentation into a new task, instance segmentation, where the model must both detect all the objects in an image and tell us which pixels belong to each detected object. Even this seemingly complicated task turns out to be fairly straightforward: we build on our object detection systems by attaching an additional mask prediction head on top of a two-stage detector. We saw a bunch of other applications with the same flavor, producing other types of region-based outputs by attaching additional heads to the output of an object detection system. As another application of that paradigm, we saw the Mesh R-CNN system, which predicts full 3D triangle meshes, giving not just the 2D shapes of objects in images but their 3D shapes, again by attaching an additional mesh-processing head onto the detection systems we had come to know.

It turned out 3D computer vision is a rich and interesting domain unto itself, and to process 3D data or generate 3D shapes we talked about a whole bunch of different 3D representations people use, which led to discussions of the neural network architectures suited to processing each type of data. Then we added not just a spatial but a temporal dimension to our models and talked about mechanisms for classifying and processing videos with deep learning. There we saw ideas like 3D convolutional networks, which convolve not just over the two spatial dimensions but also over time; two-stream networks, which combine one stream processing motion in the form of optical flow with another stream, a normal RGB ConvNet, processing appearance; and, again, self-attention and recurrent networks as mechanisms for fusing information across time in long video sequences.

Having become experts in video, we spent a while on generative models: models that do not just ingest visual data but learn to generate, produce, or output novel visual data. We talked about three major paradigms. Autoregressive models directly maximize the likelihood of the training data under a parametric function represented by a neural network. Variational autoencoders give up on explicitly maximizing that likelihood and instead introduce a latent variable, aiming to jointly learn latent variables for the data while maximizing the data likelihood; to make that tractable we give up exact maximization and instead maximize a variational lower bound, which lets us jointly learn distributions over the data and over their latent variables.
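The variational lower bound mentioned here can be written out explicitly; this is the standard evidence lower bound (ELBO) in textbook notation, not notation copied from the lecture slides:

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p(z)\right)
```

Maximizing the right-hand side jointly trains the encoder \(q_\phi\) and the decoder \(p_\theta\): the first term rewards reconstructing the data, while the KL term keeps the learned latent distribution close to the prior.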
That lets us do things like edit images and learn latent representations using only image data. We also saw a lot of generative adversarial networks, which are the state of the art for generating beautiful images, along with many examples of using GANs in different contexts to generate different types of visual data. Finally, in the last lecture, we covered a whole different paradigm of machine learning: reinforcement learning, where rather than just learning to fit a dataset we want to train agents that interact with the world. This introduced a lot of extra mathematical formalism, so in one lecture we only got the barest taste of the reinforcement learning problem, but we saw two basic algorithms, Q-learning and policy gradients, two different ways of attacking the problem of training models that interact with the real world.

So that is basically the whole semester of content in about 25 minutes. We could have saved ourselves a lot of time and just done that at the beginning, right? Or maybe not. Now that we have covered all of this, there is a big question: what's next? This is an active research field; we saw that almost 7,000 papers were submitted to CVPR, and there are thousands more people around the world writing new research papers in this area as we speak. So what are some of the big topics I think will be interesting in computer vision, machine learning, and deep learning going forward? It is impossible to predict the future; these are just my hypotheses, and you may think other things will be important, but that is what makes it interesting and exciting.

One somewhat safe prediction is that we will discover new and interesting types of deep learning models. Throughout the history of deep learning for computer vision, people have continually invented new architectures that let us build bigger, more interesting models and tackle more interesting tasks, and I think this will continue: the set of things we consider deep learning will keep expanding over time. One example of a really novel architecture that changes our perception of what a deep learning model can be is the neural ODE, which won a best paper award at NeurIPS in 2018. We are familiar with residual networks: a residual network computes a sequence of hidden states (the activations of the model), and to compute the next layer of activations it takes the previous activations, applies a function that depends on both those activations and some learnable parameters, and adds the result back. That gives a formulation like h_{t+1} = h_t + f(h_t, theta_t), where the thetas are our weights. Some really clever people noticed that this equation looks a lot like numerically solving a differential equation: to numerically integrate a differential equation, you start at an initial point and take many small steps, using those many small steps over time to integrate the equation.
From that perspective, the number of approximation steps taken to numerically integrate a differential equation is like the number of layers in a residual network. If we take that number to infinity, we get a neural ODE, where the states of the network are a continuous solution to a differential equation: we write that the derivative of the hidden state with respect to a continuous time variable equals a parametric function represented by a neural network, dh/dt = f(h(t), theta). Solving this differential equation gives a trajectory of hidden states over time that behaves like a neural network of infinite depth. Even though that sounds crazy, there are actually ways to train these models and thus represent neural networks as differential equations. It is a whole different way of looking at what a neural network model can be, and I found it really exciting. I do not know what the practical applications will be, but it is a hint that we will keep discovering new types of neural network models that push our perception of what it means to be a deep learning model. Some will be dead ends, some might end up being the next big thing in computer vision or deep learning, and it is impossible to say which will be which at this point.

Another safe, boring prediction is that deep learning will continue to find new applications. At this point we know that supervised learning on large labeled datasets works really well for a lot of problems.
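The ResNet/ODE analogy described above can be sketched with simple Euler integration: one Euler step h <- h + dt * f(h) has the same shape as a residual update h_{t+1} = h_t + f(h_t). The dynamics function f below is a made-up stand-in for a learned network, chosen so the exact solution is known.

```python
import math

def f(h):
    return -h            # toy "network": pulls the state toward 0

def integrate(h0, dt, steps):
    h = h0
    for _ in range(steps):
        h = h + dt * f(h)    # one Euler step == one residual block
    return h

# 1000 tiny steps out to t = 1 approximate the exact solution
# h(t) = h0 * exp(-t), i.e. h(1) = e**-1:
approx = integrate(1.0, dt=0.001, steps=1000)
print(approx, math.exp(-1))
```

Taking dt to zero (steps to infinity) is exactly the limit in which the residual network becomes a continuous-depth neural ODE.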
for many such problems if you get enough data and spend enough time tuning the model you can probably train a neural network that works pretty well for a lot of those applications and i think that people will use those basic ideas in supervised learning and just apply them to more and more and more things out in the world even beyond the beyond the small set of data sets that we work on within computer vision so i think we'll see a lot more deep learning for many different types of scientific and medical applications moving uh more and more throughout time so i i think that like medical imaging or different types of medical applications i think it'll be more and more common for people to try to train computer vision systems that try to diagnose or or aid in the diagnosis of different types of diseases by using deep learning models on different types of medical data and i think in a lot of different scientific disciplines people are like scientists in lots of different disciplines are always generating more and more types of data that they need to be able to analyze and i think that deep learning will help build help scientists in different domains just analyze the data that they're already collecting and i think it will lead to improvements across many different areas in science now i think also that deep learning will as these are all kind of obvious applications i think that anything that uses images i think we will see deep learning applications for in the future but i think that sort of interesting and surprising applications of deep learning will pop up as well so as kind of an example of that there's this really interesting paper um at sigmod 2018 about using deep learning to improve traditional computer science data structures like a hash table right so how can deep learning right a hash table should be an implementation detail inside the deep learning system um now they're kind of flipping it around and using deep learning to improve these basic data 
structures like hash tables and here the idea is what is a hash table well if you remember back to your data structures course then a hash table is going to input some key and then that key is going to go through some kind of a hash function the hash function is going to assign the key to one of these buckets and then when there's a hash collision then you'll maybe have some kind of linked list of all of the bits of data that have been hashed to each bucket and now to get a really good performing hash table you need a really good hash function that is going to minimize the collisions of your data set and assign different data elements to different buckets and usually these hash functions are hand designed functions but it turns out you can put a neural network in there instead and learn a neural network that learns to assign your data elements to hash buckets inside a hash table and the idea here is that then you could learn a good hash function for your hash table that is customized to the type of data that you want to hash because to get good performing hash tables you need to reduce collisions but even if we're always working with images maybe that whale data set is going to collide in different ways from that galaxy data set and now you can use a neural network to learn a hash function that will minimize the hash collisions for the particular data set or types of data on which you want to learn things so i thought this was a beautiful idea and this is just a surprising example of a place i think deep learning will find homes in more and more areas across science broadly and also within computer science and we'll just continue to see more and more surprising applications where we can slot neural networks into things and then lead to better versions of those things kind of another
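As a toy version of this learned-hash idea — a least-squares linear fit standing in for the neural network, with invented names throughout:

```python
import numpy as np

# Toy "learned hash": approximate the CDF of the key distribution,
# then map each key to a bucket via its predicted CDF position.
# A real learned index would use a small model hierarchy; this
# linear fit is only a stand-in.
rng = np.random.default_rng(0)
keys = np.sort(rng.normal(loc=100.0, scale=15.0, size=10_000))
cdf = np.arange(len(keys)) / len(keys)

# "Train" the model: least-squares fit of cdf ~ a*key + b.
a, b = np.polyfit(keys, cdf, deg=1)

def learned_bucket(key, n_buckets=64):
    pos = np.clip(a * key + b, 0.0, 1.0 - 1e-9)  # predicted CDF position
    return int(pos * n_buckets)

buckets = [learned_bucket(k) for k in keys]
counts = np.bincount(buckets, minlength=64)
# Because the hash tracks the data's CDF, keys spread fairly evenly
# across buckets for this particular key distribution.
max_load = int(counts.max())
```

The point is only that a model of the key distribution can spread keys across buckets for the data you actually have, which is the intuition behind the learned index structures paper (Kraska et al., SIGMOD 2018).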
example that i really like in the last year or so is this idea of using deep learning for symbolic mathematics this is sort of surprising to me that this can work in some way right the idea is suppose you want to do things like automated theorem proving or symbolic integration the kinds of things that mathematica is usually used for then you can actually train neural networks to do these types of tasks as well and the idea is we need to convert the data into some format that can be processed with a neural network so it turns out that we can take mathematical formulas and convert them in some nice way from a sequence of symbols into some kind of graph structure that represents the underlying structure kind of like a parse tree of that sequence of symbols and then we can run graph neural networks on these sequences of symbols and that lets us process these mathematical expressions using deep neural networks as well and there have been applications and this is not just theoretical people have actually done this and people have then for example used deep learning to do theorem proving so what is theorem proving you start at some original set of mathematical statements that you take as assumptions you want to arrive at some mathematical statement at the end and there's a very wide tree of possible mathematical transforms you can imagine applying at every step so this is kind of like a massive tree search problem where from every equation that you know is true there's a large number of potential mathematical transforms that you could imagine applying to those equations well that actually looks like a reinforcement learning problem where the state of the reinforcement learning system is all of the mathematical statements that you
currently know to be true the actions that we can take in this reinforcement learning system are the potential mathematical transforms that we can make on the statements we know to be true and then we can use a deep reinforcement learning system that is trained to discover mathematical proofs and there have been papers about this that use deep reinforcement learning to improve upon some aspects of mathematical theorem proving or you can use this for symbolic integration where you write down a random equation and then actually want to generate another equation that represents the integral of that input equation and people have been training neural networks to do that kind of thing as well so i think deep learning will continue to find new and surprising applications to lots of different areas within science within computer science within mathematics and i think that deep learning will just become a standard scientific or engineering tool that gets slotted into all kinds of different disciplines so then safe boring prediction number three is that deep learning will continue to use more and more data and more and more compute so we've seen this plot a couple times before in the semester here on the x-axis it's showing us time from 2004 to 2017 and each dot is a gpu or other computing device and the y-axis shows us the cost of computation in terms of how many gigaflops of computation you can buy for a dollar and you can see that the cost of computation has just been decreasing at an exponential rate as gpus have gotten better and better in the last 10 years and i think that this exponential increase in the affordability of gpu computing has really allowed us to scale up our models and build ever bigger models on ever bigger data sets
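Going back to the theorem-proving framing for a moment, the state/action structure can be made concrete with a toy prover that uses plain breadth-first search instead of a learned policy; the two inference rules below are invented for illustration:

```python
from collections import deque

# States are frozensets of known-true facts; actions are inference
# rules that combine two known facts into a new one. This is plain
# BFS, not deep RL -- it only illustrates the state/action framing.
rules = {
    ("p", "p->q"): "q",   # modus ponens instances (toy rule set)
    ("q", "q->r"): "r",
}

def prove(axioms, goal):
    """Breadth-first search over sets of known-true statements."""
    start = frozenset(axioms)
    queue, seen = deque([start]), {start}
    while queue:
        state = queue.popleft()
        if goal in state:
            return True
        for (a, b), conclusion in rules.items():
            if a in state and b in state and conclusion not in state:
                nxt = state | {conclusion}
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

provable = prove({"p", "p->q", "q->r"}, "r")    # p => q => r
unprovable = prove({"p", "q->r"}, "r")          # missing p->q
```

A deep RL prover replaces the exhaustive expansion loop with a learned policy that scores which transform to try next, which is what makes the enormous branching factor tractable.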
that's led to a lot of improvements in deep learning and i think this trend will obviously continue going forward and i'm not the only one who thinks so there's this really cool plot from openai they have a nice blog post about this from i think last year where they tracked this trend of the role of computing in ai systems going all the way back to the perceptron in the 1950s so the x-axis is showing different milestone high-profile projects in artificial intelligence starting with the perceptron back in the 1950s going up to something like alphago zero that we talked about in just the last lecture and the y-axis is the amount of computation that was used to train these models and what's crazy is that this y-axis is actually on a log scale already and you can see that the amount of compute that has been used to train these state-of-the-art machine learning systems has been growing super exponentially since the 50s and this trend is likely to continue going forward but how can this trend possibly continue going forward we already see large industrial players like google and facebook training machine learning models distributed across maybe thousands of gpus in order to train the biggest machine learning models so in order to continue scaling deep learning models i think we'll also see innovations in hardware that will lead to new types of hardware that are specialized for building large-scale deep learning models so as a bit of free advertising for a startup one example of this is this company cerebras which makes wafer scale chips for deep learning so they basically build a gigantic computer chip that's absolutely massive and specialized for doing deep learning so on the right i
mean this is obviously marketing material from the startup so take it with a grain of salt but the idea is that the largest chip that we have from the biggest baddest nvidia gpu is this relatively small chip over here on the right and cerebras's wafer scale engine is this massive piece of silicon which is orders of magnitude bigger than the largest gpu that nvidia is currently making and the idea here is you've just got tons and tons of compute elements tiled out over this massive piece of silicon with a lot of memory and a lot of compute and this type of novel hardware platform it's impossible to say but maybe some kind of innovations in hardware maybe by cerebras maybe by others will help us push deep learning to the next level and train ever bigger models on these ever bigger data sets so those are my three safe predictions about things i'm relatively certain are going to happen in the future around deep learning models but i think there's also a lot of problems with the way that we're doing ai right now the way that we're doing machine learning and computer vision right now and some of these problems are things that i don't know how to solve but i think that as a community we need to find ways to solve them so one of the biggest problems i think facing machine learning models right now is that they're biased machine learning models are deployed in the world and they treat different types of people in different ways and that's not fair that's not a good thing that's just a thing that we need to avoid and as a concrete example of that you remember we could do something like vector arithmetic with generative adversarial networks remember with generative adversarial networks we could do something like taking the smiling woman vector subtracting the neutral woman vector adding the neutral man vector and then getting an image of a
smiling man well this idea of analogies or vector arithmetic with deep learning models actually didn't originate with generative adversarial networks this idea actually originated with another type of model called a word vector so here the idea is that this is a technique from natural language processing where you input a large corpus of text data and then somehow you process that corpus of text data to give some vector representation for every word in your corpus and then it turns out that you can use those learned word vectors from your corpus of text data to do these similar kinds of analogy problems so for example you could solve the analogy man is to king as woman is to what and the way that you can solve that query using a word vector approach is to take your learned vector for king subtract the learned vector for man add the learned vector for woman and then do a nearest neighbor search among all the other word vectors in your data set and it turns out that if you perform a large scale analysis of different types of analogies with word vector models that have been trained on large corpora of text data they actually reveal some pretty ugly gender biases that these machine learning models have picked up just by training on all the data out there on the web so for example you can probe these word vector models and you can ask what types of occupations are more situated in that learned word vector space corresponding to men or women and it gives these really troubling stereotypical occupations for men and women it says that extreme she occupations are things like homemaker nurse receptionist librarian and extreme masculine professions are things like maestro skipper prodigy philosopher captain architect so you
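The analogy arithmetic can be sketched with hand-made toy vectors (these 3-d vectors are invented for illustration — real word vectors such as word2vec or GloVe are learned from large corpora):

```python
import numpy as np

# Hand-made toy "word vectors" -- purely illustrative values.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via b - a + c, nearest neighbor."""
    target = vecs[b] - vecs[a] + vecs[c]
    best, best_sim = None, -np.inf
    for word, v in vecs.items():
        if word in (a, b, c):
            continue  # exclude the query words themselves
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

answer = analogy("man", "king", "woman")  # king - man + woman -> ?
```

The bias audits in the lecture run exactly this kind of query at scale (e.g. "man is to computer programmer as woman is to ?") against vectors trained on web text.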
can or if you look at solving some of these analogies with this idea of vector arithmetic you can see that they learn very gender stereotyped representations of the types of things that men and women do so they learn things like registered nurse on the female side is equivalent to physician on the male side or interior designer on the female side is equivalent to architect on the male side so somehow these machine learning models have become biased and slurped up these very ugly biases just based on the data on which they're trained and this is a problem this means that if we're training neural network models on data out there in the world they could slurp up biases in that data and then make predictions that stereotype different groups of people and this is not just a theoretical problem this has actually been observed in systems that have been deployed by big tech companies so as another example we can look at economic bias in visual classifiers here there was this nice paper from just this year in 2019 where they tried to make the argument that classifiers that have been deployed out in practice by major tech companies are actually biased towards high-income western households so for example they collected these data sets of household items from different cultures across the world and at different income levels across the world for this example this is an image of soap that was collected from a household in the united kingdom with a monthly income of about two thousand dollars and if you run this type of image through a lot of commercial image classification pieces of software you can see that it's not perfect it makes some errors but it produces reasonable predictions for this type of image but then what if you take an image of soap from a
different type of person at a different part of a part of the world so here's an image of soap that was taken from a household in nepal that earns just about 300 a month and if you take this image of soap and run it through the exact same deployed classifiers out there on the web that have been deployed by major tech companies you see that it just fails catastrophically it thinks that this this soap is food or cheese or bread um it like it just doesn't recognize that this is soap so somehow it seems that the types of data on which these systems have been trained has somehow biased their predictions towards the types of images that are seen in wealthy western households and that's a problem i think that we should be building machine learning systems that are good for everyone and as another really ugly example of this there's actually been racial bias in visual classifiers as well so this was a really uh high profile image that circulated twitter back in 2015 where google's image classification system categorized these african-americans as gorillas which was just shockingly racist and was a very bad thing and this is not something that we want our machine learning systems to do so i think that this is a really big problem in machine learning this is not a theoretical problem this is something that is facing all machine learning systems all computer vision systems today then i think it's important that we build machine learning systems that take all different viewpoints and all different types of people into account and actually actually treat them fairly in in unbiased ways so there's been some academic work that's really pushing towards understanding these biases um in machine learning models and measuring it and improving it but i think that this is still a really big open problem in computer vision and machine learning and machine learning more broadly so i put a couple of citations here if you want to get started in this research area but i think that this is a 
really big important problem that's facing a lot of machine learning models so then another maybe more academic problem with deep learning is that i think we might need new theory to actually understand what's going on inside of our machine learning models and the way that i see that is that there are a certain number of empirical mysteries experiments that we can run that give us very strange seemingly counterintuitive results and this makes me think back to the early 1900s right before quantum mechanics was discovered people knew about classical mechanics but there were certain experiments that you could run around things like black body radiation or other phenomena that just could not be explained with the classical physics of that time and it could be the case that there might be some experimental results in deep learning that hint that we might need a better theoretical understanding of the systems that we build so one problem that mystifies me is this empirical observation of really good sub networks within deep learning models so here's something that you can do step one start with a randomly initialized neural network model and step two train it on your favorite data set and then what you can do is this process of pruning the trained neural network so you can remove a large number of the learned weights inside that trained neural network model and it turns out that you can actually remove a large fraction of the weights of the trained model and still retain the same performance of the trained model so somehow these trained neural networks even though they might have a lot of weights it seems that they don't actually need all of those weights to get their good performance so that feels like something funny is going on and the mystery deepens it turns out we can take that trained pruned model and
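The prune-then-rewind recipe being described can be sketched as follows, with a plain weight matrix standing in for a trained layer and additive noise standing in for an actual training loop (both are illustrative stand-ins, not the real experiment):

```python
import numpy as np

# Sketch of magnitude pruning + "rewind to initialization", the two
# steps behind the lottery ticket hypothesis.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(100, 100))                     # step 1: random init
w_trained = w_init + 0.1 * rng.normal(size=(100, 100))   # pretend "training"

# Prune: keep only the largest-magnitude 20% of the *trained* weights.
k = int(0.8 * w_trained.size)
threshold = np.sort(np.abs(w_trained).ravel())[k]
mask = np.abs(w_trained) >= threshold

# Lottery ticket: apply the mask from the trained net back onto the
# original initialization; the real experiment then retrains this
# sparse "winning ticket" and finds it matches the dense network.
winning_ticket = w_init * mask
sparsity = 1.0 - mask.mean()
```

The surprising empirical claim is precisely that `winning_ticket`, retrained from those original initial values, roughly matches the full dense network.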
then go back to the initialization and take the initialized values of the non-pruned weights so that's kind of like we took step one we had a random network we trained it we pruned it and then we applied the pruning of the trained network back to the original initialization of the unoptimized weights from step one and now we can train the pruned network and it turns out that it works almost as well as training the full dense network from step two and this has been called the lottery ticket hypothesis of deep neural networks it's like each weight inside the neural network is playing the initialization lottery and then some weights inside the network have won the lottery and got good initializations and that caused good sub networks to emerge within these initialized neural network models and then the question is what the hell is going on this feels like we're maybe missing something fundamental in the way that neural networks learn on data or the way in which neural networks are initialized or optimized and the mystery deepens even further there was this paper from just a week or two ago on arxiv where you can take a randomly initialized neural network model and prune the random model without training we don't change the weights the only learning in this model is removing connections from this randomly initialized neural network model and it turns out that through some kind of gradient descent procedure you can prune an untrained model to result in a sub-network of the randomly initialized untrained neural network model that actually achieves non-trivial classification performance on data sets like cifar and imagenet this is very mysterious and very shocking to me this means that inside a random model there exist sub networks that actually do a good job on image classification and this actually happens at the scales of networks
like a resnet50 that we are using in practice so it feels to me like we're maybe missing something fundamental here in the way that neural networks learn or represent functions on their data i don't know what the answer is but i think that this is one of those empirical mysteries that maybe we should pay attention to that hint that maybe we need to develop some new theory another empirical mystery in deep learning is this question of generalization so there's been a lot of work on classical statistical learning theory going back over several decades and from classical statistical learning theory we can expect this plot on the left where on the x-axis we plot the complexity of our model that's maybe the size of our neural network or something like that and on the y-axis we plot the error rate on the training data and the test data then we see two regimes on the left as we start with a very small model we get high error on both the training and the test set as we increase the size of the model then both the error on the training set and the test set should decrease as the bigger model is able to better fit the training data but at some point when we make the model even bigger then classical statistical learning theory tells us that the model should start to get worse on the test set as it continues to get better on the training set and this is overfitting this is where a machine learning model has over fit to the noise in the training set and then no longer generalizes to the test set and this is a phenomenon that we have been familiar with in the community for decades from statistical learning theory but then here's some funny facts about neural networks one is that deep neural networks can actually fit random data on image classification data sets so what this means is if you take something like your favorite resnet50 classifier and you train it on cifar but on a version of cifar
where all of the labels have been assigned randomly to all of the images or where all the pixels of the images have been randomly shuffled or where the pixels are just random gaussian noise then it turns out that the same resnet50 that gives us really strong performance when trained on the real data can actually achieve perfect accuracy on the training set when trained on these random data sets so some classical results in statistical learning theory say that one way to measure the complexity of a machine learning model is something called the rademacher complexity which is something like the ability of the model to fit random data so classical statistical theory tells us that if a model is able to fit random data perfectly then that model is likely too big to generalize and give useful predictions on the test set but a resnet50 model can be trained to achieve perfect accuracy on random data and when we train it on real data it generalizes great and gives us really good and meaningful and useful predictions on the test set and that's a mystery i think that means that there's something we don't understand about the way that neural network models in particular generalize to unseen data now another mystery around generalization is this so-called double descent phenomenon so here's an empirical plot on the right corresponding to the theoretical plot on the left on the right the x-axis is the size of a fully connected neural network that has been trained on the cifar data set and the y-axis gives us the training and test performance of these models of increasing size and you can see that up to a model size of 40 the empirical results that we see actually match these theoretical predictions from classical statistical learning theory but as we make the model even bigger then something qualitatively different happens
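Circling back to the fit-random-labels experiment for a second, it can be miniaturized with no neural network at all: a 1-nearest-neighbor "model", which purely memorizes, already hits perfect training accuracy on noise labels, which is the capacity pathology that classical theory says should doom generalization:

```python
import numpy as np

# Random "images" and labels assigned uniformly at random.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y_random = rng.integers(0, 10, size=200)

def predict_1nn(X_train, y_train, x):
    # Pure memorization: return the label of the closest training point.
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

train_preds = np.array([predict_1nn(X, y_random, x) for x in X])
train_acc = (train_preds == y_random).mean()  # each point is its own
                                              # nearest neighbor
```

The mystery in the lecture is that a ResNet-50 has this same memorize-anything capacity, yet still generalizes well when trained on real labels.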
it seems that we push beyond this regime of overfitting into some other regime where now if the model is just in the middle then it overfits but if you make the model even bigger then it no longer overfits and this is something extremely mysterious about neural network models that i think is not well understood and this phenomenon was actually scaled up by a blog post from openai just a week or two ago the initial phenomenon was observed in a paper by belkin et al in pnas this year and openai observed similar results on a wide variety of deep learning models so here's a plot from this new openai paper where they're training now no longer fully connected networks but things like resnet18 the same type of residual networks that we're using in practice and we see the same empirical mystery of double descent in these practical large-scale convolutional neural network models so that means that there's just something we don't understand about the way that large neural network models generalize to unseen data and i think that there's a possibility that we might need some new statistical learning theory to explain some of these empirical observations okay now another big problem in deep learning is that we need a lot of training data and training data is expensive to collect so as a practical concern we would all like to be able to build deep learning models that rely less on large labeled data sets so one way that people have started to make progress on this problem is by building new data sets that are specialized towards this problem of low shot learning that is learning where you have very few samples per category that you want to learn so i think one of the initial high profile data sets in this regime was the omniglot data set from brenden lake et al that's kind of like a low shot learning version of mnist so mnist of course was this super classical image
classification data set classifying handwritten digits from zero to nine and it gives you six thousand examples in the data set of each of these categories and now the omniglot data set scales it up and gives us 1623 categories of handwritten symbols from 50 different alphabets of natural languages around the world and it provides just 20 examples of each of those types of letters and from this very small number of examples the goal is to build machine learning models that can learn to recognize these characters even though they only get a very few samples of each type of character during training another example of a data set in this flavor is the kuzushiji-kanji data set that provides a large data set of japanese kanji characters that now gives something like almost 4000 different categories but provides a variable number of samples per category so one way to make progress on this task of learning with little data is actually providing well-structured data sets and challenges around learning with small data these are some early data sets on learning with low data from a couple of years ago and they're definitely interesting and definitely important but the problem they're trying to solve is handwritten character recognition which is not really as realistic as many of the types of problems that we've been solving in computer vision this semester so then there's a new data set that just came out this year that i'm really excited about called lvis and lvis tries to push this idea of low shot recognition with learning from very few examples per category into this computer vision regime of very complicated images so here rather than doing image classification lvis is a data
set for instance segmentation and it annotates more than a thousand different category labels for instance segmentation and this data set is actually still being collected right now so v1.0 of the data set is not even out but v0.5 gives you about 57000 images and almost 700000 labeled object instances from these thousand different categories for this instance segmentation problem so i think that the coco data set from back in 2014 really drove a lot of progress on instance segmentation and i'm hoping that the lvis data set will similarly help to drive progress in low shot recognition or learning with small amounts of data for these complex real-world computer vision types of tasks so i expect to see a lot more work on this type of data set moving forward now another type of idea towards reducing our reliance on labeled data is this idea of self-supervised learning and self-supervised learning pushes towards this holy grail challenge of unsupervised learning that we talked about several lectures ago here the idea is that we're going to have a two-stage process when building our neural network system in stage one we're going to train the neural network on some kind of pretext task that we don't actually care about but that can be defined and trained without any kind of human labeling of the data and now in step two we'll take the network that we learned for this pretext task and fine-tune it on whatever small amount of labeled data we can afford to collect so we've seen this paradigm a little bit already in the context of learning generative models as a pretext task but it turns out there's been a lot of work on people defining other types of pretext tasks for self-supervised learning one example is solving jigsaw puzzles so here's a concrete example what we can do is take an image cut it up into a three by three grid of patches and then shuffle the patches and now these shuffled patches
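Building one jigsaw training pair — cut, shuffle, and keep the permutation as the free label — can be sketched like this (random noise stands in for a real photo; the patch sizes are arbitrary choices for illustration):

```python
import numpy as np

# One self-supervised jigsaw example: no human annotation needed,
# because the "label" is the permutation we applied ourselves.
rng = np.random.default_rng(0)
image = rng.random((96, 96, 3))   # stand-in for a real downloaded photo

# Cut the image into a 3x3 grid of 32x32 patches.
patches = [image[r * 32:(r + 1) * 32, c * 32:(c + 1) * 32]
           for r in range(3) for c in range(3)]

perm = rng.permutation(9)              # the label the network must predict
shuffled = [patches[i] for i in perm]  # the network's input

n_patches = len(shuffled)
```

A network then takes `shuffled` as input and classifies which permutation `perm` was applied; repeating this over millions of downloaded images gives a labeled training set for free.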
will be fed to a neural network system and the neural network system needs to solve the jigsaw puzzle so it needs to classify what was the correct permutation of those patches that would unscramble the puzzle pieces and now in order to solve this task it's likely that the network would have to learn something non-trivial about the visual world because for it to know that the tiger head should go in the upper right hand corner and the tail and the leg should go at the bottom then it should be able to recognize what these body parts of the tiger are and it should understand their relative orientation so the hope is that if a network could solve this type of task of solving jigsaw puzzles then it could be fine-tuned for a lot of other computer vision applications and critically you can prepare this data set of jigsaw puzzles without any human annotation whatsoever it just requires you to download a bunch of images from the web and then cut them up automatically to make these jigsaw puzzles another idea in this vein is colorization we download a lot of color images from the web convert them to black and white and then ask the network to predict the color and again this is a supervised learning problem because we're asking the network to predict one thing which is the color from another thing which is the black and white but we can generate these labels without having people annotate the data and again hopefully if the network can solve this colorization problem then it should have learned something about the world in order to colorize this photo it must know that this is a butterfly and understand what types of colors butterflies tend to be and so a network that can solve this problem is hopefully visually intelligent to some degree another idea is inpainting so we can take an image remove some portion of the image and then ask the network to predict back
the removed portion of the image and this is maybe an example of a generative model this is sort of a conditional generative modeling problem where you want to predict a part of the image conditioned on another part of the image and again you can train this type of model using the different approaches for generative modeling that we talked about and then at the end of the day throw away the generative model and just fine-tune the underlying convolutional network for whatever downstream task you care about so there's been a lot of recent interest in different mechanisms for self-supervised learning so i put a couple of these current state-of-the-art methods for self-supervised learning up here in case you want to dig more into these references and one of these pretext-invariant representations actually went up on arxiv just on december 4th like five days ago so i actually haven't read that paper in detail yet but i think that gives you a sense that this is a really active area of research discovering better ways to train networks in a self-supervised manner okay then i want to leave you with maybe one big underlying problem in deep learning models that i don't know how to solve and i don't know if anyone knows how to solve which is that deep learning models just don't learn to understand the world in the same way that humans do they learn to mimic the data on which they're trained but they just don't seem to get the world in the way that we do and one way that we can see this is through large-scale language models which just lack common sense so here we talked about this language modeling problem where you can download a lot of text from the internet and then train a neural network model to predict the next word conditioned on the previous words so then you can
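As a vanishingly small stand-in for such a language model — bigram counts over a ten-word toy corpus instead of a transformer trained on the web — the predict-the-next-word mechanic looks like:

```python
import random
from collections import defaultdict

# Toy bigram language model: count word-pair frequencies, then sample
# completions. Real systems like GPT-2 condition on much longer
# context with a transformer; only the mechanic is the same.
corpus = "the dog chased the cat and the cat chased the mouse".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(prompt_word, length=5, seed=0):
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # word never seen mid-sentence; stop generating
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

sample = complete("the")
```

Every completion is statistically plausible bigram-by-bigram yet carries no understanding at all, which is a tiny version of the common-sense failures shown next.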
condition one of these language models on some input piece of text and then ask the language model to just complete the rest of the sentence for you and then see what it thinks is a likely completion of this text that you've given it so i was playing around with this over the weekend and now there's this really cool website called talk to transformers.com where you can just uh have this nice web interface where you can type in your own queries and then sample sentence completions from this very large language model that was built by openai so here's an example if you i typed in i was born in 1950 actually i wasn't in the and in this year and in the year 2025 my age will be so that's the query that i provide to the system and then the neural network writes the rest it says 35 that was only a few years ago most things in life just continue to improve so it's like here's another example i see a black dog and a brown horse the bigger animal's color is well what do you guys think brown right because we know that horses are bigger than dogs for the most part but this guy says the bigger animal's color is black and the smaller is brown and actually if you kind of uh it's a stochastic process if you kind of sample many different completions it just like kind of gives you answers randomly i sampled another one where it said that the the biggest one was pink so it seems like or here's another example one of my parents is a doctor and the other is a professor my father is a professor my mother is obviously a doctor right because we know that people have two parents and we know that professions are kind of a mutually exclusive thing and this says my mother is a social worker they're super smart people so it seems that these neural network models just don't understand the world around us that even when you train them on all of the text around us in the in the world even when you download gigabytes or terabytes of textual data from the web and even when you train on thousands of 
GPUs for weeks at a time, you can get state-of-the-art numbers, but the models are still somehow missing something in the way they understand the world. I think there's something fundamental here. This idea of training ever bigger models on ever bigger datasets works for practical problems, but I'm a little bit skeptical sometimes about whether it will actually get us all the way to AI. One school of thought says maybe we just need to make things even bigger: another 10 times bigger, another 100 times bigger, another 1000 times bigger, and maybe if we keep going, the models will pick up common sense for themselves. Maybe that's true; I don't know. Or maybe there's some paradigm shift we need to make in the way we build our machine learning models, some drastically different approach to learning that we should be pursuing instead. I don't know.

Here's another example, more from computer vision. There's a really on-the-nose paper called "The Elephant in the Room", because I think this is a problem that everyone who works with deep learning models knows about, but that we don't usually talk about or give enough credit: machine learning models are brittle. They don't understand the world, and they don't work when you test them on things on which they were not trained. As an example, here's an image of some people in a room, and we ran a pretty good object detector on it. It works great: it tells us there's a person, a laptop, a cup, a chair, a handbag, books; it recognizes all the reasonable object categories. But now we do a really bad Photoshop job and just paste an elephant into the room. Obviously we humans have no trouble here. Maybe the contrast on the screen is kind of bad, so maybe you do have some trouble, but if the lights were dimmed properly you would have no trouble recognizing that there's a big elephant in the back of this room; if an elephant appeared in the back of this classroom, we would all notice it without any problem. But while these object detectors will sometimes recognize novel objects just fine, sometimes they miss them completely. If you just move the elephant to a slightly different position in the scene, sometimes it's simply not detected at all, and that's because the network has never seen images with elephants in living rooms; it's just not able to generalize in that way. It gets even worse: sometimes if you put the elephant in a different place, it gets recognized as a chair, and in recognizing the elephant as a chair the model also messes up some of its other predictions about the other objects in the room. You can see that it has flipped the chair to a couch and is no longer detecting the cup. This is a big problem. It means that convolutional neural networks, and really all of the vision systems we know how to build with deep learning, are seeing the world in a qualitatively different way than humans are, and they can fail catastrophically when we apply them to data that is even slightly different from the data on which they were trained. I don't know how to fix this, but I think it's a major problem that needs to be solved if we're going to build machine learning systems that make intelligent decisions out in the world for us, and it may be baked into some of our current approaches. So I don't know the solutions here, but I think these are some of the biggest, most pressing big-picture issues in computer vision and
machine learning more broadly right now. So that's my summary: we have the boring, safe predictions (we'll see new models, we'll see new applications, we'll see bigger compute), but we also have these more fundamental problems facing the field that I don't have answers to. People recognize that they're problems, but I think there's a lot of room to come up with creative new solutions that could have massive impact on this growing field of deep learning. So I think now is a really great time to be getting into this field, and a really great time for you to be learning about it; hopefully, with some of the stuff you've learned in this class, maybe some of you will go out and solve some of these big challenges facing the field. With that, I'd like to have a big round of applause for our GSIs, who really made the class work. I don't know if they're actually here, but I wanted to give them a shout-out and thank them for making the class run. And I wanted to thank you all for putting up with the first class that I've taught here at the University of Michigan. Thank you!
Deep_Learning_for_Computer_Vision
Lecture_5_Neural_Networks.txt
So welcome back to lecture five. Today the microphone is actually working, so hopefully that will give you a little bit better audio this time and I don't have to shout quite as loud. Today's topic is neural networks: we're finally getting to the meat of the course and talking about our first deep learning models. So far we've talked about using linear models to build parametric classifiers; we've talked about using different kinds of loss functions to quantify how happy or unhappy we are with different settings of the weights in our linear classifiers; and in the last lecture we talked about using stochastic gradient descent, or some of its slightly fancier relatives like momentum, Adam, and RMSProp, to actually minimize those objective functions and find values of the weights that satisfy the preferences specified by the objective function. Today we're going to iterate on point one: we'll step away from linear models for the first time and start to explore neural-network-based models that are much more powerful and will allow us to classify images with much higher accuracy.

Before we talk about neural-network-based models, though, we should step back and motivate them a little bit. As we've seen several times already this semester, linear classifiers, although they're very simple and easy to understand, are quite limited in the types of functions they can represent. We saw this from the geometric viewpoint: a linear classifier draws high-dimensional hyperplanes that carve a high-dimensional Euclidean space into two chunks, and in situations like the one on the left, it's just impossible for a linear classifier to carve up the space in a way that separates the green points from the blue points. When we thought about linear classifiers from the visual viewpoint, we saw them as learning just a single template per class, and therefore as unable to represent multiple modes of the same object category; for example, the horse template blends a horse looking to the left with a horse looking to the right. That is a representational shortcoming of the linear classification model.

Before moving to neural networks, I think we should discuss a different way to overcome this limitation of linear classifiers: the notion of feature transforms. The idea is that we take our original data, given to us in some native input space on the left, and apply some cleverly chosen mathematical transformation to it, hoping to make it more amenable to classification. As an example, on the left, our human intuition might suggest that transforming this data from Cartesian into polar coordinates would give a better representation for the purpose of classification. We can write down a feature transform that simply converts the Cartesian representation of each point into polar coordinates; after applying it, the data lives in a new space, called the feature space, defined by the mathematical form of the transform we've chosen. What's particularly useful about this feature transform for this problem is that in polar coordinates the dataset becomes linearly separable. So we could imagine training a linear classifier not on the original input data space, but instead on the feature space representation of our data. And if we port the resulting linear decision boundary in the feature space back into the original space on the left, we see that a linear classifier, or linear decision boundary, in the feature space corresponds to a nonlinear decision boundary, a nonlinear classifier, in the original space. By cleverly choosing a feature transform that suits the properties of your data, it may be possible to overcome some of the limitations we've seen with linear classifiers. This particular Cartesian-to-polar example seems kind of trivial, but when you apply feature transforms more broadly, you have to think very carefully about the structure of the data you're working with, and about what types of functional transformations might make it more amenable to linear classification downstream.

And this is not just a hypothetical thing: feature transforms were very broadly used in computer vision, and still are in some subdomains. One example of a feature transform we might use in computer vision is the notion of a color histogram. Here we divide the color space, the RGB spectrum, into some number of discrete bins, and for each pixel in the input image we assign it to the bin it falls into. The feature representation is then a normalized histogram over which colors happen to appear in the image. This color histogram representation throws away all the spatial information about the image and only cares about what colors are present in the image.
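To make this concrete, here is a minimal sketch of such a color histogram feature in NumPy; the bin count and the joint-binning scheme are illustrative choices of mine, not anything specific from the lecture slides:

```python
import numpy as np

def color_histogram(image, bins_per_channel=8):
    """Map an (H, W, 3) uint8 RGB image to a normalized color histogram.

    All spatial information is discarded; only the distribution of
    colors in the image is kept.
    """
    # Quantize each channel's 0..255 values into equal-width bins.
    quantized = (image.astype(np.int64) * bins_per_channel) // 256  # (H, W, 3)
    # Combine the three per-channel bin indices into one joint bin index.
    r, g, b = quantized[..., 0], quantized[..., 1], quantized[..., 2]
    joint = (r * bins_per_channel + g) * bins_per_channel + b
    hist = np.bincount(joint.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()  # normalize so the features sum to 1

# A solid red image puts all its mass in a single bin, no matter
# where in the frame the color appears: the spatial invariance
# discussed above.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[..., 0] = 255  # pure red everywhere
feat = color_histogram(img)
```

Shifting the red region around the image would leave `feat` completely unchanged, which is exactly the point of this representation.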
So you might imagine that a color histogram representation could, for example, be more spatially invariant. Suppose we had an image of a red car on a brown background, and all of our car images were of this nature, but the car might appear at different locations in the image. A linear classifier on raw pixels might have a hard time with that, but a color histogram representation would always represent the image as a bunch of red and a bunch of brown, no matter where exactly in the image the car (or the frog, in this case) is located. So that's one fairly simple feature representation you might apply to input images.

Another image feature representation that was very widely used is the so-called histogram of oriented gradients (HOG) approach. I don't want to talk about this in too much detail; the basic idea is that it's somewhat dual to the color histogram approach. The color histogram threw away all the spatial information, didn't care about textures or locations, and only cared about which colors were present in the image. The histogram of oriented gradients does the opposite: it throws away all of the color information, because it only cares about local edge orientations and strengths, and instead tells us about the orientations and strengths of edges at every position in the input image. For example, the HOG representation could tell us that in the red region there are fairly strong diagonal edges; that in the blue region around the frog's eyes there are edges in all kinds of directions; and that in the yellow region in the upper-right corner there is very little edge information at all, because it's a blurry, very photographic, beautiful background. So the histogram of oriented gradients representation is dual to the color histogram representation we saw before, and it was very widely used in computer vision for tasks like object detection in the mid-to-late 2000s.

One interesting property of both of these feature representations is that they require the practitioner to think ahead of time about which qualities of the data the features should capture, and to design the right types of feature transforms by hand. But there also exist feature transform methods that are data-driven, driven by the data we see in the training set. One canonical example of a data-driven feature transform is the so-called bag-of-visual-words representation. Here the idea is that we have some large training set of images, and from it we extract a large set of random patches of various scales, sizes, and perhaps aspect ratios, cropping out random bits of all our training images. We then cluster all of those random patches to get what is called a codebook, a set of visual words that represents the types of features that tend to appear in our images. The idea is that if there are common structures that appear in many images in your training set, you will hopefully learn a set of visual words that can capture or recognize many of those common features. Then, after this step one of building your codebook of visual words, there is a step two that encodes each image using the learned codebook: we take the codebook of visual words, that is, the cluster centers of all the local image patches, and compute some kind
of histogram representation for each input image, saying how much each of those visual words appears in that image. You can imagine that this is quite a powerful type of feature representation, because it does not require the practitioner to fully specify the functional form of the features; instead, the features used in the representation, the visual codebook words in this case, are learned from the training data to better fit the problem at hand. So this is maybe a bit more flexible than some of the other feature representations we saw previously.

Another common trick with image features is that you don't have to settle for just one: you can compute a bunch of different feature representations of your input image and concatenate them all together into one long feature vector. You might concatenate a color histogram representation at the top with a bag-of-words representation in the middle and a histogram of oriented gradients representation at the bottom, to get one long, high-dimensional feature vector describing your image, where different parts of the vector capture different types of information, like color or edges. This idea of combining multiple feature representations was very widely used in computer vision in the late 2000s and early 2010s. As a canonical example, the winner of the 2011 ImageNet challenge used a complicated feature representation of this kind; it was the state of the art in large-scale image classification as of 2011, which was of course the year right before the AlexNet architecture made deep learning popular across all parts of computer vision. If we look back at this 2011 ImageNet winner, we can see that they first
took a low-level feature extraction step, where they extracted a bunch of patches from the images and computed SIFT and color representations; they reduced the dimensionality of those using PCA; they applied something called Fisher vector encoding to compress the features and get another layer of features on top of the originals; they did a compression step; and after that they ended up with a feature representation on top of which they could learn linear SVMs to finally classify the images. So this pipeline of hand-building feature representations for images can actually perform reasonably well in some contexts.

But one way I like to think about feature extraction is something like the following diagram. At the top, imagine a machine learning image classification pipeline built with feature extraction. What we've done is decompose the system into two parts: first the feature extractor, and second the learnable model that operates on top of the resulting feature space. Even if the feature extraction stage has some data-driven component, as in the bag-of-visual-words example, it does not automatically tune itself to directly maximize the classification performance of the overall system. Instead, when we learn a linear classifier on top of a fixed feature representation, we end up with a very large, complicated system that goes from raw image pixels all the way to final classification scores, but only a small part of that system, at the very end, is actually tuning its weights in response to trying to maximize the classification accuracy of our system.
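As a toy illustration of this two-stage structure, here is a sketch in which gradient descent only ever touches the final linear layer. All names and sizes here are hypothetical, and the frozen random ReLU projection merely stands in for a hand-designed extractor like SIFT plus Fisher vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a *fixed* feature extractor. The frozen random ReLU
# projection below is just a stand-in for a hand-designed pipeline;
# the key property is that it is never updated during training.
D_in, D_feat = 10, 64
W_feat = rng.standard_normal((D_in, D_feat)) / np.sqrt(D_in)

def extract_features(x):
    return np.maximum(x @ W_feat, 0.0)

# Toy data: the label depends only on the first input coordinate.
x = rng.standard_normal((500, D_in))
y = (x[:, 0] > 0).astype(np.int64)
feats = extract_features(x)              # computed once, then frozen

# Stage 2: the only learnable part, a linear (softmax) classifier.
W = np.zeros((D_feat, 2))

def softmax_loss(scores, y):
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y)), y]).mean()
    return probs, loss

_, initial_loss = softmax_loss(feats @ W, y)     # ln(2) at W = 0
for _ in range(300):
    probs, _ = softmax_loss(feats @ W, y)
    probs[np.arange(len(y)), y] -= 1.0           # d(loss)/d(scores)
    W -= 0.1 * feats.T @ probs / len(y)          # only W is updated
_, final_loss = softmax_loss(feats @ W, y)
```

Training reduces the loss, but no matter how long we train, `W_feat` never changes: the feature extractor cannot adapt to the task, which is exactly the limitation discussed above.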
In contrast, clearly we might want to do something better: somehow automatically tune all parts of the system to maximize performance on the image classification task. That's one way to motivate what a neural network is doing. When we build a deep neural network system for image classification, we are building a single end-to-end pipeline that takes in the raw pixels of the image on the left-hand side and, on the right-hand side, predicts classification scores, or classification probabilities, or whatever else we actually want to predict. During training we tune not only that final linear classifier layer; we tune the entire system, all parts of it jointly, to maximize performance on classification or whatever other end task we're considering. From this point of view, a neural-network-based system is really not so different from those deep feature representation systems; the change is that a neural network jointly learns both a feature representation and a linear classifier on top of that representation, in such a way as to maximize the classification performance of the system.

With this introduction, we can finally look at a concrete example of what a neural network might look like. So far in this class we're very familiar with the linear classifier: given our input data, stored in a giant column vector x, the linear classifier has a learnable weight matrix W of size number of categories by number of input dimensions, and the classifier is just the matrix-vector multiply between this learnable weight matrix and the input data. Our simplest neural network model is not much more complex than the linear classifier. We still represent our input data as a single long column vector, stretching out the raw pixel values of all parts of the image into one big vector, but now we have two learnable weight matrices. The first is W1, of shape H by the number of input dimensions, where H is called the hidden size of the neural network. We first do a matrix-vector multiply between the input data x and the first weight matrix W1 to produce a new vector of dimension H; then we apply an element-wise maximum with zero to this vector of size H; and finally we perform a second matrix-vector multiply between this H-dimensional vector and our second learnable weight matrix W2, which is of size C by H, where C is the number of output categories. Oh, and I should also point out that in practice each of these matrix-vector multiplies will typically also have an associated learnable bias term that gets added, but writing down the biases clutters the notation quite a lot, so you'll often see people omit bias terms from equations even when they do use them when training their systems. I wanted to indoctrinate you into this confusing bit of notation from our very first slide on neural networks. And of course we can generalize this notion of neural networks beyond two weight matrices, to any number of weight matrices: at each stage we take the current vector, apply a matrix multiply, add a bias, apply the element-wise maximum function, and repeat, repeat, repeat, until we've applied all of the learnable weight matrices in the system.
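The two-layer form described above can be sketched in a few lines of NumPy. The sizes here are hypothetical (a CIFAR-10-style flattened input, hidden width 100, and 10 categories), and the biases are written out explicitly even though the equations on the slides omit them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a CIFAR-10-style flattened input (D = 3072),
# hidden width H = 100, and C = 10 output categories.
D, H, C = 3072, 100, 10
w1 = rng.standard_normal((H, D)) * 0.01   # first learnable weight matrix
b1 = np.zeros(H)                          # biases: used in practice,
w2 = rng.standard_normal((C, H)) * 0.01   # second learnable weight matrix
b2 = np.zeros(C)                          # often omitted from equations

def two_layer_net(x):
    """s = W2 max(0, W1 x + b1) + b2"""
    h = np.maximum(0.0, w1 @ x + b1)      # hidden vector: element-wise max with 0
    return w2 @ h + b2                    # class scores

x = rng.standard_normal(D)                # one flattened input image
scores = two_layer_net(x)
```

Stacking more layers just means repeating the "multiply, add bias, element-wise maximum" pattern with additional weight matrices before the final score computation.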
We'll often see this type of system drawn pictorially with a diagram like the following, where we imagine data flowing through the network from left to right. On the left is our input data, the column vector x containing all the pixels of the input image; in the middle is the hidden vector h, with maybe 100 elements in this example; and on the right is the final score vector s, giving, say, 10 classification scores for 10 categories. We imagine the weight matrices living in between these layers of the neural network, and we can interpret them as telling us how much each element of the previous layer influences each element of the next layer. For example, element (i, j) of the first learnable weight matrix W1 is a scalar value that tells us how much the element h_i in the hidden layer is influenced by the input element x_j, and the second weight matrix W2 has a similar interpretation, showing how much each element of the hidden vector affects each element of the output scores. Because these are dense, general matrices, each element of the input x affects each element of the hidden vector h, and similarly each element of h affects each and every element of the final score vector s. Because of this dense connectivity pattern, this type of neural network is typically called a fully-connected network: the units in adjacent layers are all connected to one another. This structure is also sometimes called a multi-layer perceptron, or MLP, in reference to the perceptron learning algorithm we talked about way back in the first lecture. It's maybe a strange bit of terminology, but you'll definitely see people refer to neural networks of this type as MLPs, so you should be aware of it.

Now we can add a little bit of extra interpretation to what exactly these neural networks are computing. Oh, sorry, was there a question? The question was: what is the purpose of the max? That's a great question; I'll get back to it in about four slides, so hold on and then ask again. One way we can think about this neural network classifier is in contrast to a linear classifier. Recall that we interpreted the linear classifier as learning a set of one template per class, and the scores were the inner products between each of those templates and the input image, saying how much the input image matched each template. We can interpret the first weight matrix of the neural network in a very similar way: the first layer, the weight matrix W1, also learns a set of templates, and one way to interpret the values in the hidden layer is as how much each of the learned templates responds to the input image x. Here we can see that most of these templates are not very interpretable; you can't always tell what's going on with them, but there is definitely some discernible spatial structure in the first-layer features, or first-layer templates, that this two-layer neural network has learned. Sometimes you get lucky, though, and you do get some beautiful templates in the first layer. I don't know if you noticed, but, and maybe I've looked at these too long, these two examples here actually look to me like one is a
horse facing one way and the other is a horse facing the other way. So this is finally overcoming the two-headed horse problem that's been plaguing us with linear classifiers for the past couple of weeks. The second layer of the neural network then predicts its classification scores by another weighted recombination of the responses of the input image to these templates. What that means is that the neural network can hopefully recognize multiple modes or multiple subsets of a category: it could recognize the left-facing horse using one template and the right-facing horse using another, and then use the weights in the second layer to recombine the information from both of those templates.

This is sometimes called a distributed representation, because this two-facing-horses example is really quite rare. The more common case is that the features, or image templates, learned in the first layer of the neural network are not very human-interpretable; they have some kind of spatial structure, and the network uses a so-called distributed representation, where some linear combination of these templates represents something about the image, but it's not very interpretable to us what exactly each template is trying to capture. Oh yeah, was there a question? The question is that there seem to be some repeated structures in these templates; a lot of them have this structure of a red blob and then a blue blob. I don't know, that's what it decided to learn; that's kind of the mystery and the magic of neural networks. You don't always know exactly what types of features they're going to learn, but they're going to learn something that tries to maximize the classification accuracy. My intuition here is that those are representing some kind of car, because I think there are a lot of cars in the CIFAR-10 dataset. Another really common pattern you'll often see in the learned first-layer features of neural networks is that they represent oriented edges. Just as we saw in the Hubel and Wiesel experiments, where they investigated the mammalian visual system by presenting different visual stimuli and seeing how cat neurons activate, we know that many of the neurons in our own visual systems end up being sensitive to oriented edges, and these neural network systems tend to learn similar types of features in many cases: either oriented edges or opposing colors. In this case the blue and the red is maybe some kind of opposing color scheme, and you can imagine that by recombining many of these features the network can represent something about the structure of the image.

Another question: is there some risk of redundancy, that maybe we learn multiple filters representing the same thing? That's definitely a possibility, but there exist techniques for network pruning, which I think we'll talk about later in the semester, whereby you first train a neural network that's maybe big and represents a lot of stuff, and then, as a post-processing step, try to prune out the redundant representations the network has learned. That's something people sometimes do in practice, but we're not going to cover it in this lecture.

Now, with this notion of neural networks, you can definitely imagine generalizing to many, many layers, as we've already seen. As a bit of notation: the depth of a neural network is the number of layers it contains, and when we count
layers, we usually count the number of weight matrices. So our two-layer network has two learnable weight matrices, and a six-layer network would have six learnable weight matrices. The width of the network is the number of units, the dimension of each of those hidden representations. In principle each layer could have a different feature dimension, but in practice it is more common to use the same width throughout every part of the network, just as a bit of convention.

Then we had a very astute question a couple of minutes ago: what is this funny max doing, hanging out in the neural network equation? This turns out to be quite an important component of the neural network. The function max(0, input) takes an element-wise maximum between zero and the input, which means we take a vector and set anything negative to zero. This function seems simple, but it is so important and so widely used that it gets the special name ReLU, for rectified linear unit. We have a plot of this beautiful function here on the left: if the argument is positive it passes through unchanged, and if the argument is negative we return 0 instead. We imagine this function being sandwiched between our two learnable weight matrices, and it is called the activation function of the neural network. It is actually completely critical to the functioning of the network.

As a thought exercise, ask yourself what would happen if we built a neural network with no activation function between the two weight matrices: we take our input vector, multiply by the first weight matrix w1, and then multiply by the second weight matrix w2. What would be wrong with that? Exactly: it is still a linear classifier, because matrix multiplication is associative. You can group those two matrix multiplies together, and the whole thing devolves back into a linear classifier. Without some kind of nonlinear function between the two matrix multiplies, we have no additional representational power beyond a linear classifier, so the presence of an activation function is absolutely critical. I will point out that networks with no activation function are sometimes called deep linear networks, and they are sometimes studied in the optimization community: even though their representational power is the same as a linear classifier, their optimization dynamics are much more complex, so people do study them in a theoretical context.

In this first example we chose the ReLU as our activation function, but there is a whole menagerie of activation functions that people work with. One very common choice, used mostly before the mid-2000s, was the sigmoid, which starts off at zero and ramps up to one, and there is a whole zoo of others. We will talk about the reasons for choosing one over another in a later lecture, but the TL;DR is that ReLU is a pretty good default choice, and for most applications you probably can't go wrong sticking with it. It is definitely the most widely used activation function in deep learning today.
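To make this thought exercise concrete, here is a tiny NumPy sketch. The weight matrices are made up for illustration and chosen so the arithmetic is easy to check by hand: without an activation, the two matrix multiplies collapse into a single linear map, while inserting a ReLU between them breaks linearity.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# hypothetical tiny weight matrices, chosen for easy hand-checking
w1 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])   # first layer: 2 -> 2
w2 = np.array([[ 1.0,  1.0]])   # second layer: 2 -> 1

def linear_net(x):               # no activation between the layers
    return w2 @ (w1 @ x)

def relu_net(x):                 # ReLU sandwiched between the layers
    return w2 @ relu(w1 @ x)

x_a = np.array([1.0, 0.0])
x_b = np.array([0.0, 1.0])

# the two layers collapse: w2 @ (w1 @ x) == (w2 @ w1) @ x for every x
assert np.allclose(linear_net(x_a), (w2 @ w1) @ x_a)

# with the ReLU, additivity fails, so the network is no longer linear:
# relu_net(x_a) = 1 and relu_net(x_b) = 1, but relu_net(x_a + x_b) = 0
assert not np.isclose(relu_net(x_a) + relu_net(x_b), relu_net(x_a + x_b))
```

In this particular example relu_net(x) works out to |x1 - x2|, which no single linear map can compute; that is exactly the extra power the activation function buys us.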
You should probably consider using it for most of your deep learning applications.

I also want to point out that this neural network system is actually super simple to implement: we can train a full neural network in just 20 lines of code. Here I am using NumPy instead of PyTorch, because I can't give away your homework. The first couple of lines set up some random data and some random weights. The second block does the forward pass, where we compute the score function as a function of the input; we have used a sigmoid nonlinearity here as well, which is that first line up top. Then we compute the gradients with respect to our weight matrices, and finally we take our gradient descent step. So this neural network system is quite simple to implement, and if you copy-paste this code and run it in your terminal, you will be training your own neural network in 20 lines of Python.

Now, when we talk about neural networks, there is one word people often get hung up on, and that is the word neural. We could not have a course on deep learning without at least acknowledging the presence of that word in our models, so we have to talk about it a little. I am not a neuroscientist by any stretch of the imagination, so I may say something slightly wrong with respect to neuroscience; I will do my best, but please don't ask me too many hard neuroscience questions. The basic idea is that our brains are amazing biological organisms, and the basic building block of the brain is a little cell called the neuron.
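For reference, here is a sketch of the kind of twenty-line NumPy training loop described above. This is not the lecturer's exact code: the sizes, initialization scale, step count, and learning rate are all made-up values for illustration.

```python
import numpy as np

# random data and random weights (made-up sizes)
N, D_in, H, D_out = 64, 50, 20, 5
rng = np.random.default_rng(0)
x = rng.standard_normal((N, D_in))
y = rng.standard_normal((N, D_out))
w1 = 0.1 * rng.standard_normal((D_in, H))
w2 = 0.1 * rng.standard_normal((H, D_out))

losses = []
for step in range(100):
    # forward pass: sigmoid hidden layer, linear output, L2 loss
    h = 1.0 / (1.0 + np.exp(-x @ w1))
    y_pred = h @ w2
    losses.append(np.square(y_pred - y).sum())
    # backward pass: hand-derived gradients with respect to w1 and w2
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h.T @ grad_y_pred
    grad_h = grad_y_pred @ w2.T
    grad_w1 = x.T @ (grad_h * h * (1.0 - h))  # chain rule through sigmoid
    # gradient descent step
    w1 -= 1e-4 * grad_w1
    w2 -= 1e-4 * grad_w2
```

Running this, the recorded loss should shrink over the iterations, which is the whole point: forward pass, gradients, and update step really do fit in about twenty lines.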
These cells have a couple of important components. There is a cell body in the middle, where all the juice is happening, and a long terminal out to the right called the axon, down which the neuron sends electrical impulses out from the cell body. These neurons are all connected in giant networks, where the axons send electrical signals to other neurons. The signals from axons are received by other little protrusions from the cell body called dendrites, and the gap between the dendrite of one neuron and the axon of another is called the synapse. What basically happens is that a neuron collects the electrical impulses coming down the axons of all its incoming neighbors; those impulses are modulated or modified by the synaptic connections between the dendrites and the axons; and at some point, based on the rate and transformation of the incoming signals, the neuron sends its own electrical signal downstream to the neurons connected to it.

One abstraction we can use to think about these neurons is a firing rate: a neuron fires electrical impulses at some rate, and that rate is some kind of nonlinear function of the rates of all its input connections. That gives us a very simple, very crude mathematical model of a neuron: the cell body collects all the incoming signals, sums up the firing rates of all the incoming neurons, and then applies some kind of nonlinear function (maybe a sigmoid, maybe a ReLU, maybe something else) to compute its own firing rate as a function of the firing rates of all the inputs. This is the firing rate that the neuron sends off to other neurons downstream, and it is basically what we are doing in the artificial neural network systems we use today.

And this is basically where the similarities end. There is some crude approximation to biological neurons here, but there are many, many differences between biological neurons and artificial neural networks, so you should not get too hung up on the similarities. One dissimilarity is that biological neurons tend to be organized into very complex, highly irregular networks; a neuron can even loop back around and send signals to itself over time, so there can be very complex topological structures in real mammalian brains. In contrast, in artificial neural network systems we typically organize our neurons into layers. These layers are something of an artificial construct that allows us to perform all of the multiplications and sums jointly, at the same time, using efficient matrix and vector operations; the notion of a layer is an abstraction standing in for the big soup of neurons that might exist in a real mammalian brain. That said, people are getting creative and starting to explore artificial neural networks with very crazy or even random connectivity patterns, and that can actually sometimes work.
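In code, the crude firing-rate model of a neuron described a moment ago is a one-liner; this sketch (the rates, synaptic weights, and bias are made-up numbers) just makes the pieces explicit.

```python
import numpy as np

def firing_rate(input_rates, synapse_weights, bias):
    """Crude rate model of a neuron: the cell body sums the incoming
    firing rates, each modulated by its synaptic weight, and passes the
    total through a nonlinearity (a ReLU here) to get the output rate."""
    total = np.dot(synapse_weights, input_rates) + bias
    return np.maximum(0.0, total)

# two input neurons: one excitatory synapse, one inhibitory (made-up numbers)
rates = np.array([1.0, 2.0])
weights = np.array([0.5, -1.0])
firing_rate(rates, weights, bias=0.25)   # relu(0.5 - 2.0 + 0.25) = 0.0
firing_rate(rates, weights, bias=2.0)    # relu(0.5 - 2.0 + 2.0) = 0.5
```

A whole layer of such neurons evaluated together is exactly the matrix multiply plus activation we have been writing all along.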
Some of my colleagues at Facebook AI Research had a paper last year where they trained neural networks with connectivity patterns that look totally nuts, and those networks actually train and get near state-of-the-art performance. So I guess random connections, if you are careful, can sometimes work in these artificial systems as well. But the general story is that you should be extremely careful with your brain analogies. Even though the word neural is hanging out in the name of neural networks, I think it is really something of a historical term at this point, and you should not attach too much significance to analogies between our artificial systems and actual biological neurons. A few caveats in particular: in our artificial systems the neurons are all basically the same kind, whereas in a real brain there are different types of neurons with different specialized functions; an individual biological neuron can perform very complex nonlinear computations that are not well modeled by our simple activation functions; and remember that we modeled these neurons with a single scalar firing rate, which is a very, very coarse abstraction of what might actually be going on inside and between neurons. That simple idea of a firing rate might be too coarse to represent what is truly happening. So that is pretty much all I want to say about brains for the semester. Let's go back to math and engineering, which I hopefully actually do know a little bit about.

One question, then: if we are not going to take these brain analogies too seriously, why should we choose neural networks as a powerful image classification system, or as a powerful function approximation system more
broadly? Well, one answer we have already seen, at least coarsely: neural networks can represent multiple templates in their first layer and then recombine them in the second layer. But another interesting way to see why neural networks are such a powerful system is through the notion of space warping.

Recall the geometric viewpoint of a linear classifier: we think of our data points as living in a high-dimensional space, and each row of the linear classifier gives rise to a hyperplane in that input space. The lines in the picture are the points where the dot product with a row of the weight matrix is zero, so each line is a score of zero for that class, and the scores increase linearly as we move perpendicular to the line. Previously we always thought about this in terms of predicting classification scores, but another way to think about what is going on is as warping the input space. We take our input space, with features x1 and x2, and transform it into another feature space with coordinates h1 and h2, the two dimensions of our two-dimensional hidden unit. With a linear transform, all the regions of space deform in a linear way: the two rows of this two-dimensional linear transform give us two lines in the input space, those two lines divide it into four regions, and each of those four regions gets transformed into one of the four quadrants of the transformed output space. The space gets rotated and stretched, and all four quadrants transform in this linear way when we think about a linear transformation acting geometrically on the space.

Now think about what happens when we try to train a linear classifier on top of a linear transform. Say we have a cloud of data points, where blue are images of one category and orange are images of another. Once we apply the linear transform that linearly warps the space, we have transformed our input data into some new representation; but because the feature transform modified the input space only linearly, the points are still not linearly separable in the transformed output space. Applying a linear classifier on top of a linear feature transformation does not increase the representational power of the model: we will still be unable to separate this particular dataset.

But things get very interesting once we apply the ReLU function. We still have the two lines corresponding to the two rows of the weight matrix, and they still carve the input space into four quadrants, but because of the nonlinear ReLU, the way those four quadrants get transformed is now different for each. The first region, A, corresponds to the positive quadrant in the input, since it lies in the direction of increase for both the red and the green lines; it transforms just as in the linear case, so quadrant A in the input space is linearly warped into quadrant A of the transformed feature space. Quadrant B, however, corresponds to a positive value for the red feature but a negative value for the green feature, and if you recall the structure of the ReLU, a positive input is left alone while a negative input is set to zero. Since all the points in quadrant B have a negative green feature, that feature gets set to zero, so geometrically the entire B quadrant of the input space is collapsed onto the positive h2 axis of the transformed feature space. That is dramatically different from what happens in the linear case. A similar thing happens with quadrant D, which is collapsed onto the positive h1 axis, and quadrant C is really tight for space: all of quadrant C gets packed into the origin of the transformed feature space.

Now that we have seen what happens to the quadrants, go back to the example of the data cloud. Transforming the cloud linearly gave us a linearly transformed data space, still not separable; but when we apply the ReLU nonlinearity to this feature representation, something very interesting happens in the transformed feature space: the yellow and blue points become linearly separable. In particular, if we train a linear model on top of this feature space h, we can properly separate the blue and the yellow points. And if we port that decision boundary from the output feature space back into the input space, we see that it corresponds to a nonlinear decision boundary in the original input feature space.
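You can watch the quadrant collapse happen numerically. In this sketch the first-layer weight matrix is just the identity, so h = relu(x); which axis each mixed quadrant lands on depends on how you label the features, but the overall pattern matches the description above.

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

# one sample point from each quadrant of a 2D input space
pts = np.array([[ 1.0,  2.0],   # both features positive
                [ 1.0, -2.0],   # second feature negative
                [-1.0, -2.0],   # both features negative
                [-1.0,  2.0]])  # first feature negative

h = relu(pts)   # identity first layer for clarity: h = relu(I @ x)

# the all-positive quadrant is untouched...
assert np.array_equal(h[0], [1.0, 2.0])
# ...each mixed quadrant is flattened onto one coordinate axis...
assert np.array_equal(h[1], [1.0, 0.0])
assert np.array_equal(h[3], [0.0, 2.0])
# ...and the all-negative quadrant is packed into the origin
assert np.array_equal(h[2], [0.0, 0.0])
```

With a real learned weight matrix the lines being folded along are not the coordinate axes of the input, but the same collapsing behavior applies in the transformed coordinates.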
So we have this interpretation of ReLU-based neural networks as something like multiple linear classifiers folding the space onto itself, with a linear classifier applied on top of that folded, transformed, collapsed version of the space. In this example we had a fully connected network with two dimensions in the hidden layer, which allowed us to fold the space along two lines. If we train neural networks with more and more dimensions in the hidden layer, we are drawing more and more lines in the original input space and folding the space back on itself along each of them, which leads to an ever more complicated collapsed representation in the feature space, such that linear decision boundaries in that complicated feature space correspond to very complex nonlinear decision boundaries in the original input space. In general, using more and more units in the hidden layer gives decision boundaries that are more and more complex.

Remember, we talked last time about regularization as a way of controlling the complexity of your model. When you see an image like this, you might think the model on the right is way too complex, with a much too wiggly decision boundary that is very likely to overfit the training data, and you might be tempted to regularize your neural network by reducing the number of dimensions in the hidden layer. In general that is not a great idea. What you want to do instead is regularize the network using some kind of tunable regularization parameter, rather than using the size of the network itself as a regularizer. In this example, with the same number of hidden units, just by increasing the strength of L2 regularization we have been able to smooth out the decision boundary the network learns between the categories. All of the examples on the last two slides were generated by an online web demo where you can train neural networks in real time in your browser and watch these decision boundaries fly around, so I would really encourage you to check it out and play with it to gain some intuition about this notion of neural networks as transforming feature spaces.

With this idea of neural networks as transforming feature spaces, and having seen the very complex decision boundaries that neural network systems can learn, we might have an intuition that these systems are very powerful and can represent much larger classes of functions than the linear classifiers we considered previously. We can actually formalize this a little. There is a property called universal approximation, which says that a neural network with one hidden layer can approximate any function from R^n to R^m. Of course, when you say "any function," people who have taken real analysis will get on your back: there are a lot of technical caveats to this statement, like restricting to a compact subset of the input space, requiring the function to be continuous, and being precise about what arbitrary precision means. But this isn't a real analysis class, so we will ignore those details and just say that neural networks can learn to approximate any continuous function on a bounded input space. To get some intuition for how a ReLU-based system can learn to approximate any function, we can think about
algebraically how a ReLU-based neural network computes its outputs. Consider a simple example: a ReLU-based, fully connected, two-layer neural network that takes as input a single real number and produces as output a single real number, with three units in the hidden layer. The weight matrix of the first layer is just a vector in this case, because our input is one-dimensional, and the weight matrix of the second layer is also a vector, because our output is one-dimensional. We can write down the functional form of each hidden activation: the first hidden unit is equal to max(0, w1 x - b1). I am putting the bias in because it is actually important for this example, and the structure is similar for all three units in the hidden layer. The output value y is then a linear combination of these hidden values; writing the second weight matrix as u, the final output is y = u1 h1 + u2 h2 + u3 h3.

Now we can reorganize this output a little. The value y decomposes into the sum of three terms, each of the form u_i max(0, w_i x - b_i), and each term looks like a shifted, scaled, possibly flipped version of the ReLU function. The ReLU is flipped to the left or the right depending on the sign of the element w_i in the first weight matrix; the point where the term changes between the flat region and the linear region is determined by the bias term b_i (it sits at x = b_i / w_i); and the slope of the non-flat part is determined by the corresponding entries of the second and first weight matrices together: it is the product u_i w_i.

So the network computes its output as a sum of shifted ReLU functions, and if we are clever with the shifting we can build up approximations to any function you can imagine. Our strategy is to build something called a bump function: it is flat at zero over the input, then once we reach some chosen value s1 it increases linearly up to some chosen height t, remains flat at t until we reach a value s3, then decreases linearly back down to zero at s4, and stays zero forever after. Because the output of our network is a linear combination of shifted and scaled ReLUs, you can imagine building this bump up as a weighted sum of four hidden units. We compute the slopes of the two sloped regimes of the bump with a simple slope calculation; then we approximate the first part of the bump with one ReLU unit, handle the second kink in the bump by adding a second unit, deal with the third kink with a third scaled, shifted, flipped ReLU, and complete the bump with a fourth.
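Collecting that algebra in one place (with the signs as stated above, h_i = max(0, w_i x - b_i)), the network output is

```latex
y(x) \;=\; \sum_{i=1}^{3} u_i\, h_i
      \;=\; \sum_{i=1}^{3} u_i \max\!\bigl(0,\; w_i x - b_i\bigr),
```

so each term is flat on one side of the kink at x = b_i / w_i (which side depends on the sign of w_i) and linear with slope u_i w_i on the other.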
So with a combination of four ReLU functions, that is, a neural network with four hidden units, we can represent exactly this bump function, where the exact location of the bump, the slopes of the lines, and the height of the bump are all controllable by the weights in the first and second layers. Now, if we have a network not with four hidden units but with 8, or 12, or 16, we can use each group of four hidden units to compute a separate bump, and we can set things up so that the overall function computed by the network is a sum of bumps located at arbitrary positions, with arbitrary heights, over the input. Once we have the freedom to position bumps wherever we want, we can position them in such a way that they approximate any continuous function we want over the domain. To increase the fidelity of the representation we make the bumps narrower and reduce the gaps between them, so we get better and better approximations to any underlying function by using wider and wider neural networks. Under this interpretation, a two-layer neural network is actually good enough to approximate any continuous function, with the big caveat that approximating those functions with ever-increasing fidelity requires networks with more and more units in that middle hidden layer.

There are many questions you might ask about this universal approximation setup, because this was not really a formal proof; it was a sketch of a proof. For instance, you might ask what is going on in the gaps between the bumps.
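Here is a small NumPy check of the four-ReLU bump construction. The breakpoints s1 < s2 < s3 < s4 and the height t are arbitrary choices; the lecture describes the rise starting at s1 and the fall ending at s4, and I have added an explicit s2 for where the rise tops out.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def bump(x, s1, s2, s3, s4, t):
    """Sum of four scaled/shifted ReLUs: zero before s1, rises to height t
    at s2, stays flat until s3, falls back to zero at s4, zero after."""
    m_up = t / (s2 - s1)            # slope of the rising edge
    m_dn = t / (s4 - s3)            # slope of the falling edge
    return (m_up * relu(x - s1) - m_up * relu(x - s2)
            - m_dn * relu(x - s3) + m_dn * relu(x - s4))

# spot-check the flat, rising, plateau, and trailing regions (s=1,2,3,4, t=5)
assert bump(0.0, 1, 2, 3, 4, 5) == 0.0    # before the bump
assert bump(1.5, 1, 2, 3, 4, 5) == 2.5    # halfway up the rising edge
assert bump(2.5, 1, 2, 3, 4, 5) == 5.0    # on the plateau
assert bump(10.0, 1, 2, 3, 4, 5) == 0.0   # the four ReLUs cancel out here
```

Each call to relu here corresponds to one hidden unit, so a hidden layer of width 4K can place K of these bumps independently, which is the heart of the approximation argument.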
You might also ask how to deal with nonlinearities other than ReLU, and how to extend this analysis to higher-dimensional functions rather than just one variable in and one variable out. We don't have time to talk about those in this lecture, but if you are interested in these questions, I suggest the chapter in Michael Nielsen's book on deep learning that discusses universal approximation in a bit more detail.

So this is really cool: we have shown that a neural network can learn any kind of function, as long as it is continuous, and so on. This is clearly a much more powerful type of representation than we could get with linear classifiers. But we need a bit of a reality check. This universal approximation construction is really a mathematical construction showing that, in principle, neural networks could have the capacity to represent any function if you happen to set up the weights in exactly the right way. In practice, if we train neural networks on this kind of single-variable regression problem, they don't actually learn these bump representations at all. For example, here I used a neural network with something like 16 hidden units to try to learn a sine function. You can see that it learns to fit the sine function pretty well, but it didn't use bumps at all; it used its linear ReLU pieces, scaled and shifted around in some way that ends up fitting the sine function very well, without the bump construction we used in the sketch of the universal approximation proof. So this result is really cool and really interesting, and it gives us hope that neural networks are indeed a very powerful class of models that can flexibly represent a lot
of different functions. But you should not put too much stock or too much faith in this universal approximation result, because as we have seen, it only tells us that there exists some setting of the weights that lets neural networks compute very complicated functions. It leaves a lot of questions unanswered. It does not tell us how easy it is to actually learn those values of the weights; it tells us nothing about the learning procedure we would need to go through to set them; and it tells us nothing about how much data we would actually need to properly approximate a function. So this result is really interesting, but you should not take it as the end-all proof that neural networks are the best type of model ever, because if you remember back to k-nearest neighbors in lecture two, k-nearest neighbors also had this universal approximation property. Just having universal approximation is not a strong enough property; that is not really where the magic is in neural networks, because even something like kNN is universal.

So now we have talked about a lot of good reasons why and how neural networks can be more powerful and more flexible at representing functions than linear models, but we have not really talked about the optimization process. Universal approximation tells us that there exist values of the weights that can represent lots of functions, but it does not tell us how to find them. So one question you might ask is: how can we know whether neural networks, or other types of machine learning models, will actually converge to solutions that are useful, or to globally optimal solutions? One mathematical tool we often use in optimization to talk about notions of optimality and global convergence of
optimization problems is the notion of a convex function well a function is convex so a convex function is one that's going to take an input vector and redu and return a single scalar you could imagine this is something like the loss function or a neural network based system where the input is going to be the setting of the weight matrix and the output is going to be the scalar loss that tells us how well that weight matrix is doing and now the a function is said to be convex if it satisfies this particular inequality constraint which i think is better understood visually so if we imagine what this inequality constraint is doing is saying or asking about in this exam concrete example of f of x equals x squared what it says is that when we take two points in the input x1 and x2 and we look at linear combinations of those points and then feed linear combinations of those points back to the function itself then that should be less than something on the right hand side well the thing on the right hand side is basically referring to this chunk of the curve which is this chunk of the function f between the two points x1 and x2 and now the thing on the right is saying that is is a is a secant line where if we compute the value of the function at x1 and the value of the function at x2 then the right side is the right hand side of this equation is going to look at all linear combinations of those two values of the functions computed at the end points x1 and x2 so what this um what this co what this property of convexity is saying is that whenever you take two points in the input domain then the secant line between the two points always lies above the function itself and so then with that with that kind of geometric interpretation of convexity you can clearly see that this this quadratic function is indeed convex because if we imagine taking any two points in the input and imagine drawing any two and drawing a secant point between any two of those lines in the input then 
that secant line will always lie above the function itself. You can prove this formally, but I think it's quite intuitive when you look at it visually. In contrast, something like f(x) = cos(x) is not convex, because we can find a secant line where the curve lies above the secant line of the function itself. Your intuition about convexity more generally, for higher dimensional functions, is that a convex function is somehow a high dimensional generalization of a bowl shaped function: if you take a secant line between any two points, it always lies above the function itself, which always gives you some kind of general bowl shape. Now, convex functions have a lot of beautiful, elegant, amazing mathematical properties, so if you want to know more, you can take an entire course about them (IOE or Math 663). Clearly you should not expect to learn everything about convex functions or convex optimization in this one little chunk of a lecture, but for the purposes of this class, the coarse intuition you should know is that convex functions are roughly bowl shaped, and, amazingly, convex functions can actually be optimized efficiently. If you take that class and work through an entire semester of the math, you will be able to write down formal theorems about optimization algorithms that provably converge to the global minimum; you can prove that convex functions have a global minimum, that local minima are global minima, and that things actually converge and actually work. That's amazing, but it takes quite a lot of mathematical machinery to work up to those results. The takeaway is that convex optimization problems are also quite easy to solve in practice.
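That defining inequality is easy to probe numerically. Here is a minimal sketch (my own illustration, not code from the lecture) that samples pairs of input points and reports a function as non-convex when a secant dips below the curve; a grid search like this can refute convexity, but never prove it:

```python
import numpy as np

def is_convex_on_samples(f, xs, num_ts=11):
    """Check f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)
    on all pairs of sample points and a grid of t values."""
    ts = np.linspace(0.0, 1.0, num_ts)
    for x1 in xs:
        for x2 in xs:
            for t in ts:
                lhs = f(t * x1 + (1 - t) * x2)
                rhs = t * f(x1) + (1 - t) * f(x2)
                if lhs > rhs + 1e-9:   # secant fell below the curve
                    return False
    return True

xs = np.linspace(-3.0, 3.0, 25)
print(is_convex_on_samples(lambda x: x ** 2, xs))  # x^2 passes the check
print(is_convex_on_samples(np.cos, xs))            # cos(x) is refuted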
They also tend not to depend on initialization, because these convex functions are nicely bowl shaped, so you can get convergence guarantees of the form: no matter where you start, you will always find the bottom somehow. So the very coarse intuition you should take away is that convex functions are easy and robust to optimize, and we have theoretical guarantees about optimizing them. Now, one reason we've spent so much time talking about linear classifiers and linear models is that the optimization problems we end up solving when we fit linear models to our data, whether we're doing a softmax or an SVM or other things like linear regression, are convex optimization problems. That means that when we train linear models, we can actually write down formal theoretical guarantees about the convergence of those training runs. This is actually a reason why people sometimes prefer linear models over neural network based models: if you actually want some kind of formal guarantee about the convergence of the system. No such guarantees exist for neural network based systems, unfortunately. What we can do empirically is slice through different parts of the loss surface of a neural network based model to get some intuitive picture of what these loss surfaces might look like. So here what I've done is build, it was a five layer, the details are on the slide, but it's some kind of multi-layer ReLU network, and then I've picked a single element of the weight matrix in the first layer of the ReLU network, and I'm plotting on the x-axis different values of that one element of the weight matrix, and on the y-axis the values of the loss function as we change that one element of the weight matrix
inside this deep ReLU based neural network system. The way to think about this is that the loss surfaces we optimize for these high dimensional neural networks are very, very high dimensional surfaces that we can't visualize, because we live in only three dimensions; what this plot shows is a one-dimensional slice through that very high dimensional loss surface. And what we can see here is that sometimes we get slices of the loss surfaces of neural networks that actually do look sort of convex or bowl-like, but other times we get very non-convex slices. This is another slice through a different part of the same ReLU based architecture, and it looks very much non-convex. Or it can get even wilder than that: we can get chunks of the loss surface that are almost adversarial to gradient based learning. If you imagine trying to optimize this type of loss surface with gradient descent, you have to climb up over this bump, and you could get stuck in this very deep valley that you somehow need to climb out of; that's bad news for gradient based optimization. You can get these very wild slices of loss surfaces when we try to train deep neural networks. So the takeaway here is that neural networks rely on non-convex optimization: in general, the optimization problems we're trying to solve when we fit neural network systems to our data are non-convex, and this is terrible news theoretically. We basically have no theoretical guarantees about convergence, no theoretical guarantees about much of anything, but empirically it seems to work anyway, which is somewhat surprising. So this is an extremely active area of research: trying to characterize the theoretical properties of the optimization problems that arise from training neural network systems.
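A slice like the one on the slide is easy to reproduce. The sketch below is my own toy stand-in, assuming a much smaller two layer ReLU network with a squared-error loss in place of the five layer network from the lecture; it fixes everything except one entry of the first layer weight matrix and evaluates the loss along that single direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer ReLU network and fake regression data.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8))
X = rng.standard_normal((16, 4))   # 16 fake inputs
y = rng.standard_normal(16)        # 16 fake targets

def loss(w_elem):
    """Loss as a function of the single entry W1[0, 0], all else fixed."""
    W1_mod = W1.copy()
    W1_mod[0, 0] = w_elem
    h = np.maximum(0.0, X @ W1_mod.T)   # ReLU hidden layer
    preds = (h @ W2.T).ravel()
    return np.mean((preds - y) ** 2)

# A one-dimensional slice through the high-dimensional loss surface:
slice_vals = [loss(w) for w in np.linspace(-3.0, 3.0, 101)]
```

Plotting `slice_vals` against the weight values gives exactly the kind of one-dimensional slice shown on the slides; with deeper networks the slices tend to get much more jagged.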
I think there's maybe some promising progress on this, but the story is far from complete, and I think we as a community still do not fully understand the theoretical properties of these optimization problems. But it seems to work anyway, so I guess we'll do it anyway; that's kind of the takeaway, and hopefully the theory will catch up eventually. So, to summarize what we talked about today: we saw this notion of feature transforms, and how by combining a feature transform with a linear model we could end up with much more complicated decision boundaries; we saw a neural network system as jointly learning a feature transform and a linear model; we talked about these two layer neural network systems and saw how they use distributed representations to reshuffle different template values to represent visual features more powerfully than linear models; we talked a little bit about brains, but I don't want to dwell on that too much; and then we talked about these interesting properties of fully connected networks: the notion of space warping, universal approximation, and the bad property of non-convexity. Then of course there's a big open problem left for us to consider, which is how we actually compute gradients in these big neural network based systems, because just working it out on paper is not going to scale as we move to very big and complicated models. To learn how to do that, we'll cover the backpropagation algorithm in the next lecture.
Deep_Learning_for_Computer_Vision
Lecture_3_Linear_Classifiers.txt
So, welcome back to lecture three. Today we're going to talk about linear classifiers. A quick recap: in the last lecture we talked about the image classification problem, and you'll recall that this is a foundational problem in computer vision, where we take an input image and our system has to predict a category label from one of a fixed set of categories. Remember, last time we talked about the various challenges of this image recognition or image classification problem: we somehow need to build classifiers that can be robust to all the different sorts of variation that can appear in our input data, things like viewpoint changes, illumination changes, deformation, and so on. The challenge in building high performance recognition systems is building systems that are robust to all these different changes in the visual input they need to process. You'll remember also that last time we talked about the data-driven approach to overcoming some of these challenges: rather than trying to write down an explicit function that deals with all of those hairy bits of visual recognition, our approach is to collect a big dataset that hopefully covers all of the types of visual things we want to recognize, and then to use some kind of learning algorithm to learn from the data how to recognize different kinds of images. As a concrete example of this pipeline, in the last lecture we talked about the k-nearest neighbor classifier, which was fairly simple: it memorized the training data, and then at test time it output the label of the image in the training set that was most similar to the test image. We saw how this led to ideas of hyperparameters and cross-validation, and we went through the entire pipeline of an image classification system in the last lecture. But remember, when we left off, we said that the k-nearest neighbor algorithm was actually not very
useful in practice, for a couple of reasons. One was that it inverted our idea of what should be slow and fast: it was very fast to train but very slow to evaluate. The other problem was that it wasn't very perceptually meaningful: L2 Euclidean or L1 distances on raw pixel values are not very perceptually meaningful things to measure. So today we're going to talk about a different sort of classifier that is very different in flavor from the k-nearest neighbors classifier we talked about before: various types of linear classifiers that we can use to solve this image classification problem. Linear classifiers might sound kind of simple, but they're actually very important when you're studying neural networks, because when you build neural networks, it's kind of like you stack together your layers as a set of Lego blocks, and one of the most basic blocks you'll have in your toolbox when you build these large, complicated neural networks is a linear classifier. Speaking coarsely, once we move beyond linear classifiers to these big, complicated neural models, we'll see that the individual components of those models look very similar to the linear classifiers we'll talk about today, and indeed much of the intuition and many of the technical bits we cover today will carry over completely as we start to move to neural network systems in the next couple of lectures. As a quick recap, remember that we've been working with the CIFAR-10 dataset, one of the standard benchmark datasets for image classification. It contains 50,000 training images and 10,000 test images, where each image is tiny, 32 pixels by 32 pixels, and within each pixel we have three scalar values for the red, blue, and green
color channels of the pixels. Now, the idea of a linear classifier is part of a much broader set of approaches to building machine learning models: the idea of a parametric approach. In a parametric approach, we take our input image, much as we've seen in the previous lecture, but now there's a new component in the system, and that's the learnable weights W, shown in red at the bottom of the slide. We're going to write a function f which somehow takes as input the pixels of the image x as well as these learnable weights W, and the functional form will somehow end up spitting out ten numbers giving classification scores for each of the categories we want the system to be able to recognize. This is a fairly general framework, and this idea of a parametric classifier will carry over completely to the neural network systems we'll talk about, but today we're going to talk about possibly the simplest instantiation of this parametric classifier pipeline, and that's the linear classifier, which has the simplest possible functional form: f(x, W) is just a matrix-vector multiply between the learnable weights W and the pixels of the image x. To make this a little more concrete, remember that an input image for something like CIFAR-10 is 32 by 32 by 3, which means that if we count the total number of scalar values inside each of those images and multiply it out, we end up with 3072 individual scalar numbers that make up the pixels of that input image. So we'll take the pixels of the image and stretch them out into a long vector; this will completely destroy all of the spatial structure in the image, and we'll just reorganize all of the data in the
input image into a long vector that has 3072 elements. Of course, we'll need to do this vectorization of our image in a consistent way: every time we take an image, we always need to convert it into a vector in the same way. Once we have chosen some way to flatten our image data into a vector, our learnable weight matrix will be a two dimensional matrix of shape 10 by 3072, where 10, remember, is the number of categories we want to recognize and 3072 is the number of pixels in the image, and when you perform this matrix-vector multiplication, the output will again be a vector of size 10, giving one score for each of the ten categories we want our classifier to recognize. Sometimes you'll also see linear classifiers with a bias term: a matrix-vector multiply plus an additional bias term b, where b is a vector with ten elements giving offsets for each of the ten categories we wish to learn. So this is a fairly straightforward way to think about linear classifiers, but over the next couple of slides I want to dive into what this means in the context of image classification. First, as a concrete example, to make this super concrete, suppose that our input image is a 2 by 2 grayscale image, so it has only 4 pixel values that give the full state of the image. Then we stretch the pixels out into a column vector with four entries; here I've just written out the exact values of each of the pixels in this image. In this simple example we'll consider classifying only three categories rather than ten, maybe cat, dog, and ship, shown with these three corresponding colors. Now, in this simple example, the weight matrix W will have shape 3 by 4, where 3 is the number of categories we want to recognize and 4 is the total number of pixels in the input image.
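In code, this whole setup is just a flatten followed by a matrix-vector multiply. Here is a sketch of the toy 2 by 2, three-class example; the pixel and weight values below are made up for illustration, not the actual numbers from the slide:

```python
import numpy as np

# Toy 2x2 grayscale image, stretched into a 4-vector.
image = np.array([[56.0, 231.0],
                  [24.0,   2.0]])
x = image.reshape(4)

# One row of W (and one entry of b) per class: cat, dog, ship.
W = np.array([[ 0.2, -0.5,  0.1,  2.0],
              [ 1.5,  1.3,  2.1,  0.0],
              [ 0.0,  0.25, 0.2, -0.3]])
b = np.array([1.1, 3.2, -1.2])

scores = W @ x + b          # f(x, W) = Wx + b, one score per class
print(scores.shape)
```

Note that `scores[0]` is exactly the inner product of the cat row `W[0]` with `x` plus `b[0]`, which is the row-per-category structure discussed next; for CIFAR-10 the same code runs with `x` of length 3072, `W` of shape (10, 3072), and `b` of length 10.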
Our bias will again have shape 3, because that's the number of categories we want to recognize. Then we perform this matrix-vector multiplication and output a vector of scores, one score for each category we want to recognize. When you look at the problem this way, you start to recognize a little bit of structure in how we're breaking up this image classification problem: if you remember how matrix-vector multiplication works, you take inner products of the vector with each row of the matrix, so you realize that each row of this matrix corresponds to one of the categories our classifier wants to recognize. I think it's useful to think about linear classifiers in a couple of different, equivalent ways; by using different viewpoints, certain properties of linear classifiers become very obvious or not obvious, so having different ways to think about a linear classifier can help you understand it more intuitively. The first way I like to think about linear classifiers is what I call the algebraic viewpoint, which is exactly this idea of a linear classifier as a matrix-vector multiply plus a vector offset. When you think about the algebraic viewpoint, there are a couple of facts about linear classifiers that immediately become obvious. One is that we can do what is sometimes referred to as the bias trick, which eliminates the bias as a separate learnable parameter and instead incorporates the bias directly into the weight matrix W. The way we do this is to augment the vector representation of our input image with an additional constant one at the end of the vector, and then augment our weight matrix with an additional column containing the bias.
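The bias trick is a one-line check in code. This small sketch (with made-up example values) builds the augmented vector and matrix and verifies that the single multiply reproduces Wx + b:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # 3 classes, 4 pixels
b = rng.standard_normal(3)
x = rng.standard_normal(4)

# Bias trick: append a constant 1 to x, and append b as a column of W.
x_aug = np.concatenate([x, [1.0]])               # shape (5,)
W_aug = np.concatenate([W, b[:, None]], axis=1)  # shape (3, 5)

same = np.allclose(W_aug @ x_aug, W @ x + b)
print(same)
```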
This augmented matrix will now perform the exact same computation as the Wx + b formulation we saw before. That's kind of a nice feature, and this bias trick is pretty common to use when your input data has a native vector form, so it's nice to be aware of as you think about building different types of machine learning systems. But in fact, in computer vision this bias trick is less common to use in practice, because it doesn't carry over so nicely as we move from linear classifiers to convolutions later on, and furthermore it's sometimes nice to keep the weight and the bias as separate parameters so we can treat them differently in how they're initialized or regularized or other things like that. Nevertheless, the bias trick is a fairly nice thing to be aware of for linear classifiers, and it's totally obvious when you think about linear classifiers through this lens of the algebraic viewpoint. Another thing that's very obvious when you think about linear classifiers in this algebraic way is that the predictions are linear. What this means is that, in a simple example where we ignore the bias, if we imagine scaling our whole input image by some constant c, we could just pull that constant out of the linear classifier, which means the predictions of the model will also be scaled by that scalar value c. If you think about images, that means that if we have some original image on the left, with some predicted classifier scores from a linear classifier, and we modify the image by desaturating it, multiplying all the pixels by the constant one half, then all of the predicted category scores from the classifier would be cut in half as well. This is maybe a bug, maybe a feature, but it feels kind of weird for linear classifiers to behave this way on image data.
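This linearity of the predictions is also easy to verify directly; a sketch with random weights (standing in for a trained, bias-free classifier):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3072))   # bias-free linear classifier
x = rng.random(3072)                  # fake flattened image
c = 0.5                               # "desaturate" by halving pixels

# Without a bias, scaling the image scales every class score equally.
scales_linearly = np.allclose(W @ (c * x), c * (W @ x))
print(scales_linearly)
```

With a nonzero bias the equality no longer holds exactly, but the score differences between classes still shift in this same linear way.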
You might think that after scaling down all the pixels by a constant value, we as humans can still recognize this as a cat just as easily, so it's a bit unintuitive that just scaling down all the pixels changed the predicted scores from the classifier. That's a kind of weird feature of linear classifiers that may or may not be important, depending on exactly what loss function you use to train them; we'll talk about that a bit later. So that's the algebraic viewpoint I like to think about for linear classifiers, but we can reformulate this computation in an equivalent but slightly different way that will give us a slightly different way to think about exactly what linear classifiers are doing in the context of image data. Remember, from the algebraic viewpoint of a matrix-vector multiply, we saw that the classification score predicted for each category is the result of an inner product between the vector representation of the image and one of the rows of the matrix. In the algebraic viewpoint, we had taken the pixel values of our input image and stretched them out into a column vector, and the inner product was between the rows of the matrix and that stretched out version of the image. Rather than stretching out the image into a column vector, we can instead think about reshaping the rows of the matrix into the same shape as the input image. Then we get a system that looks something like the diagram on the right: here we've taken each row of the matrix and reshaped it to have the same 2 by 2 shape as the image we're trying to classify, the rows of the matrix are broken up into these four columns in the diagram, and the bias vector is broken up into three separate elements that we split along the columns. So then, when we think about
linear classifiers in this way, it lets us interpret their behavior in a slightly different and perhaps more intuitive way. This is what I like to call the visual viewpoint of linear classifiers, because now that we've reshaped each row of the weight matrix to have the same shape as the image, we can try to visualize each row of the matrix as an image itself. This interpretation of a linear classifier looks kind of like template matching: the classifier is learning one image template per category we want to recognize, and to produce the category score, we simply match up the template for the class with the pixels of the image by computing an inner product between them. You might remember that if you have two vectors, say of unit norm, their inner product achieves its maximum when they're all lined up, which fits with this idea of template matching. And now it's really interesting: by visualizing these learned templates from the classifier as images themselves, you get a bit more intuition about exactly what the linear classifier is looking for when it tries to recognize the different categories. For example, on the bottom left you can see that for the plane category, it's maybe looking for some kind of blob in the middle, and it's generally looking for blue images; any images that have a lot of blue in them are going to receive very high scores for the plane class with this particular weight matrix. Similarly, the deer class is kind of a green blobby background with a brown blob in the middle that's maybe the deer. So this gives us some more intuition about what the linear classifier is looking at.
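Recovering those template images in code is just the flattening in reverse. A sketch, with random weights standing in for a trained classifier; the reshape has to mirror exactly how the images were flattened in the first place:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3072))   # one row per CIFAR-10 class

# Visual viewpoint: undo the flattening so each row of W becomes a
# 32x32x3 "template" image that can be displayed directly.
templates = W.reshape(10, 32, 32, 3)
print(templates.shape)
```

Plotting `templates[k]` (after rescaling values into a displayable range) gives the per-class template images like the ones on the slide.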
One thing that's kind of interesting from this viewpoint is that it becomes clear that even though we told the classifier we wanted to recognize object categories like planes and dogs and deer, it's in fact using a lot more evidence from the input image than just the object itself; it's relying very strongly on context cues from the image. For example, if you put in an image that had a car in a forest, that would be kind of confusing for a linear classifier, because the forest background might be very green and would achieve very high scores according to the deer classifier, while the car in the middle might match up more with the car template. So for an image with objects in unusual contexts, it's very likely that a linear classifier would completely fail to properly recognize those objects, and that becomes very obvious when you think about the visual viewpoint of these linear classifiers. Another potential failure mode of linear classifiers that becomes clear from this visual viewpoint is that of mode splitting. Our linear classifier is only able to learn one template per category, but there's a problem: what happens if we have categories that might appear in different ways? As a concrete example, think about horses. If you go and look at the CIFAR-10 dataset, which maybe you've done if you started working on the first homework assignment, you'll see that horses in CIFAR-10 are sometimes looking to the left, sometimes looking to the right, and sometimes looking dead on. Now, if we have horses looking in different directions, the visual appearance of those images will be very different, but unfortunately the linear classifier has no way to disentangle its representation and no way to separately learn templates for horses that are looking
in different directions. In fact, if you look at this learned template of a horse from one particular linear classifier, you can kind of see that it actually has two heads: the horse has a brown blob in the middle and green on the bottom, which you might expect, but there's a black blob on the left and a black blob on the right. This is the linear classifier trying to do the best that it can to match horses looking in different directions using the only single template it has the ability to learn. This is also somewhat visible in the car example: the car template doesn't actually look anything like a car, it just kind of looks like a red blob and a windshield, and the car template might have this funny shape because it's trying to use a single template to cover all possible appearances of cars in the dataset. This also gives us a sense that maybe CIFAR-10 has a lot of red cars, because the learned car template is red, and maybe if we try to recognize green cars or blue cars, the classifier might fail. All of these types of failure modes become very obvious when you think about the linear classifier from this visual viewpoint. A third way we can think about linear classifiers is what I like to call the geometric viewpoint. Here we can imagine picking out a single pixel in the image and drawing a plot where the x-axis is the value of that pixel and the y-axis is the value of the classifier score as that pixel changes, keeping all the other pixels in the image fixed. Now, because this linear classifier is a linear function, the classifier score must vary linearly as we change any of the individual pixel values in the image. This is not very interesting when you
think about this example with only a single pixel, so we can instead broaden this viewpoint and incorporate multiple pixels simultaneously. We can imagine drawing a plot where the x-axis is one pixel in the image and the y-axis is a second pixel, and now, because I can't really draw three dimensional plots in PowerPoint, you have to live with some kind of contour plot. Here we can draw the line where the car score is equal to one half, and you can see that this level set of the car score forms a line in this pixel space; and because the classifier is linear, there is a direction in this pixel space, orthogonal to that line, along which the car score will increase linearly. Tying this back to the template view, the learned car template will lie somewhere along this direction orthogonal to the level set of the car score. Similarly, for the scores of all the different categories we're trying to recognize, we'll end up with different lines for their level sets, with the learned templates for those categories orthogonal to their level sets in this pixel space. Now, of course, looking at only two-pixel images like we're doing in this example is not very intuitive, but you can imagine that this viewpoint extends to higher dimensions as well. Here the idea is that we imagine the linear classifier as taking the whole space of images as a very, very high dimensional Euclidean space, and now, within that Euclidean space, we have one hyperplane per category that we want to recognize, and each of the hyperplanes for each of the categories is cutting this high dimensional Euclidean space into two half spaces along
this level set. So that's the third viewpoint on linear classifiers: one hyperplane per class cutting up this high dimensional Euclidean space of pixels. Now, this geometric viewpoint is a very useful way to think about linear classifiers, but again, I would caution you that geometry gets really weird in high dimensions. We unfortunately are cursed to live in a low dimensional, three dimensional universe, so all of our physical intuition about how geometry behaves is shaped by this very low number of dimensions, and that's kind of unfortunate, because the way Euclidean geometry behaves in very high dimensions can be very non-intuitive relative to our low dimensional experience. So while I think this geometric viewpoint is kind of useful, it's sometimes easy to be led astray by geometric intuition, because we happen to have all our intuition built on low dimensional spaces. Nevertheless, the geometric viewpoint does let us get some other ideas about what kinds of things a linear classifier can and cannot recognize. Based on this geometric viewpoint, we can write out different types of cases, or different classification settings, that would be difficult or impossible for a linear classifier to properly recognize. Here the idea is that we've colored this two dimensional pixel space with red and blue corresponding to different categories that we want the classifier to recognize, and these are all three cases that are completely impossible for a linear classifier to recognize. On the left we have the case where the first and third quadrants are one category and the second and fourth quadrants are a different category, and if you think about it, there's no way we can draw a single hyperplane that divides the red and the blue here; so that's a case that is just impossible for
linear classifiers to recognize. Another case that's completely impossible for linear classifiers is this very interesting case on the right of three modes. Here the blue category has three distinct patches or regions in pixel space, corresponding to possibly different visual appearances of the category we want to recognize, and if we have these disjoint regions in pixel space corresponding to a single category, again you can see there's no way for a single line to perfectly carve up the red and blue regions. This example of three modes is, I think, similar to what we saw in the visual example of horses looking in different directions: you can imagine that in this high dimensional pixel space there's some region of space corresponding to horses looking right and a completely separate region corresponding to horses looking in a different direction, and with this geometric viewpoint of hyperplanes cutting up high dimensional spaces, it again becomes clear that it's very difficult for a linear classifier to carve up classes that have completely separate modes of appearance. This also ties back to the historical context we saw in the first lecture. If you remember, last week we talked about the history of the different machine learning algorithms people have built over the years, and one of the very first machine learning algorithms that got people very excited was the perceptron: all of a sudden there was this machine that could learn from data, that could learn to recognize digits and characters. But if we were to look back at the exact math of the perceptron, we would now recognize it as a linear classifier, and because the perceptron was a linear classifier, there were a lot of things it was just fundamentally unable to recognize, most famously the XOR function.
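You can convince yourself of this kind of impossibility numerically. The sketch below (my own illustration) brute-forces over a grid of candidate separating lines w·x + b = 0 and finds none that separates the XOR labeling, while an easily separable labeling like AND is found immediately; a grid search like this gives evidence of non-separability, not a proof:

```python
import numpy as np

def linearly_separable(points, labels, num_angles=360, num_offsets=201):
    """Grid search for a line w.x + b = 0 putting all of one class
    strictly on one side and all of the other class on the other."""
    labels = np.asarray(labels)
    for theta in np.linspace(0.0, np.pi, num_angles):
        w = np.array([np.cos(theta), np.sin(theta)])
        for b in np.linspace(-3.0, 3.0, num_offsets):
            side = points @ w + b > 0
            if np.all(side == labels) or np.all(side == ~labels):
                return True
    return False

pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
xor_labels = np.array([False, True, True, False])   # XOR of coordinates
and_labels = np.array([False, False, False, True])  # AND is separable

print(linearly_separable(pts, and_labels))   # a line is found
print(linearly_separable(pts, xor_labels))   # no line works
```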
a lot of things that it was just fundamentally unable to recognize the most famous example was the XOR function which is shown here which where we have the the green as one category and the blue is a different category so because the linear because the perceptron was a linear model there was no way that it could carve up these these input these red and green regions red and sorry green and blue regions with a single line and therefore there was no way that the perceptron could learn the XOR function so that's kind of a nice bit of historical context about why the geometric viewpoint was historically useful for having people think about how machine learning algorithms could operate so then we so now to this point we've talked about linear classifiers as this fairly simple model of a matrix vector multiply and we've seen how even though there this is a fairly simple equation to write now if you unpack it and think about it in different ways some of the shortcomings of its representation abilities become clearer as we think about it from these different viewpoints so is there any questions about these these different viewpoints of linear classifiers so far ok so then basically where we are now is that once we have a linear classifier we're able to predict scores right given any value of the weight matrix W we can perform this matrix vector multiply on an input image to now spit out a vector of scores for the for the classes that we want to carry that we want to recognize so as an example here we've got three images and ten categories for C part n so for any particular value of the weight matrix W we can run the classifier and get these vectors of scores but this has told us nothing about how we actually select the weight matrix W and we've not said anything about the learning process by which this this matrix W is selected or learn from data so that so now in order to actually write down linear and actually in order to actually implement linear classifiers we need to 
talk about two more things. One is the idea of a loss function to quantify how good any particular value of W is; that's what we'll talk about for the rest of this lecture. Then in the next lecture we'll talk about optimization, which is the process by which we use our training data to search over all possible values of W and arrive at one that works well for our data. A bit more formally, a loss function is some way to tell how well our classifier is doing on our data, with the interpretation that a high loss means we're doing badly and a low loss means we're doing well. The whole goal of machine learning is to write down loss functions; well, OK, that's a little bit reductive, but one way we can think about building machine learning and neural network systems is writing down loss functions that try to capture intuitive ideas about when models are working well and when they are not, and then, once we have this quantitative way to evaluate models, trying to find models that do well. As a bit of terminology, a loss function will also sometimes be called an objective function or a cost function elsewhere in the literature, and because people can never agree on names, sometimes people work with the negative of a loss function instead. A loss function is something you want to minimize; sometimes people want to maximize something instead, and then it'll typically be called something like a reward function, profit function, utility function, or fitness function. Each subfield has its own names and bits of terminology, but they're all the same idea: a way to quantify when your model is doing well and when it is not. A bit more formally, the way we'll usually think about this is that we have some dataset of examples, where each input is a vector x_i and each output is a label y_i. In the image classification case, x_i will be an image of fixed size and y_i will be an integer indexing into the categories we care to recognize. The loss for a single example we'll often write as L_i: f(x_i, W) gives the predictions of our model on the data point x_i, and the loss function L_i(f(x_i, W), y_i) assigns a score of badness between the prediction and the ground-truth label y_i. The loss over the entire dataset is simply the average of the losses of the individual examples: L(W) = (1/N) * sum_i L_i(f(x_i, W), y_i). So that's the idea of a loss function in the abstract, and you can imagine that as we tackle different tasks in machine learning, we need to write down different loss functions for each task we want to solve; even for a single task, we can often write down different loss functions that encapsulate different preferences over when models are good and when models are bad. As a first concrete example of a loss function, I want to talk about the multi-class SVM loss for image classification, or really for classification more generally. The idea of the multi-class SVM loss is quite intuitive: it says that the score of the correct class should be a lot higher than the scores assigned to all of the incorrect classes. That's an intuitive statement: if we want to use this classifier to actually recognize images, then at the end of the day we don't care about the predicted scores themselves; we want to assign a single label to each image we classify, and in order to do that it seems reasonable that we want our
classifier to assign high scores to the right category and low scores to all the other categories. The multi-class SVM loss is one particular way to make that intuition concrete. What the multi-class SVM loss computes is the following: we can draw a plot where the x-axis is the score of the correct class for the example we're considering, and the y-axis is the loss for that individual data point. In addition to keeping track of the score of the correct class, we also keep track of the highest score assigned to any of the other categories. So if we were classifying an image whose correct class is cat, the x-axis would be the cat score, and this particular dot would be the highest score assigned to any of the other categories by the classifier. The multi-class SVM loss then looks like the following: once the score of the correct class is more than some margin above the highest score among the incorrect classes, we get zero loss, and low loss means a good classifier. Moving to the left, you can see that as the score of the correct class becomes close to, or even lower than, the highest incorrect score, the loss we assign to that example increases linearly. This general shape of loss function, with a linear region and then a zero region, comes up a lot in different contexts in machine learning, and it's often called a hinge loss because it looks kind of like a door hinge that can open and close. We can write down the same intuition mathematically as follows. Given a single data example, with image x_i and label y_i, and predicted scores s = f(x_i, W), the SVM loss has the form L_i = sum over j != y_i of max(0, s_j - s_{y_i} + 1). You can see the sum goes over all category labels but excludes the correct class, and for each class we take the max of 0 and the score of the class we're looping over, minus the score of the correct class, plus 1. If you think about which scores can be higher and which lower, you can see this corresponds to two cases: if the correct class's score is more than one greater than the incorrect class's score, we get a loss of 0 for that class, whereas otherwise we take some loss for that class/example pair; and we loop over all the other classes we care to recognize. Because that's a little bit hard to wrap your head around, let's look at a more concrete example. Here we imagine a dataset of three images, which hopefully you can recognize as expert human visualizers to be a cat, a car, and a frog, and some particular setting of the weight matrix W that causes our classifier to spit out these scores for these images. Given these scores and images, we compute the SVM loss as follows. To compute the loss for the cat example, we loop over all the incorrect categories, so we skip the cat category. For the car category we compute max(0, 5.1 - 3.2 + 1), where 5.1 is the car score, 3.2 is the cat score, and 1 is the margin, which gives 2.9 for that term. For the frog category, we see that the cat score is more than 1 greater than the frog score, so we get zero loss for the frog category, and the overall loss for this cat image is 2.9. We can do something similar for the car image: here the correct category is car, the score we're currently assigning to it is 4.9, and 4.9 is more than one greater than all of the scores assigned to the incorrect categories, so we get a loss of zero for this example. You can imagine doing the same computation for the frog image; here we get a large loss because we've assigned a very low score to the frog category. Then to compute the loss over the full dataset, we just average the losses over the examples. Now a couple of questions. First, think about what happens to this loss if some of the predicted scores for the car image change a little bit. In this case, because the car image is achieving zero loss overall and the predicted car score is a lot greater than any of the scores assigned to the incorrect classes, you can see that if we change the predicted scores of this example by a little bit, we still get zero loss. That's one interesting property of the multi-class SVM loss: once an example is correctly classified by a comfortable margin, changing its predicted scores just a little bit doesn't affect the loss anymore. Another question: what are the minimum and maximum possible values of this loss on a single example? The minimum loss is zero, achieved when the correct category has a score much higher than all the incorrect categories, and the maximum loss is infinite, which happens when the correct category has a score much lower than all the other predicted scores.
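As a sketch, the worked example above can be reproduced in a few lines of NumPy. The cat-image and car-image numbers match what was read out in the lecture; the remaining entries of the score table are assumed from the standard version of this example, so treat them as illustrative.

```python
import numpy as np

# Rows are images (cat, car, frog); columns are class scores [cat, car, frog].
# The cat/car numbers match the lecture; the rest are assumed for illustration.
scores = np.array([[3.2, 5.1, -1.7],
                   [1.3, 4.9,  2.0],
                   [2.2, 2.5, -3.1]])
labels = np.array([0, 1, 2])  # correct class index for each image

def multiclass_svm_loss(scores, labels, margin=1.0):
    """Per-example L_i = sum_{j != y_i} max(0, s_j - s_{y_i} + margin)."""
    n = len(labels)
    correct = scores[np.arange(n), labels][:, None]       # score of true class
    margins = np.maximum(0.0, scores - correct + margin)  # hinge term per class
    margins[np.arange(n), labels] = 0.0                   # skip the true class
    return margins.sum(axis=1)

per_image = multiclass_svm_loss(scores, labels)
print(per_image)         # per-image losses: 2.9 for cat, 0.0 for car, 12.9 for frog
print(per_image.mean())  # ~5.27, the loss over the full dataset
```

Note how the zero loss for the car image reflects the property discussed above: its correct-class score clears the margin, so small score changes would leave the loss at zero.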
So then another question. Suppose we have a linear classifier whose weight matrix has been randomly initialized and not learned at all; if the values of the weight matrix are all small random values, then at initialization, when we first start the learning process, we'd probably expect all the predicted scores of the linear classifier to be small random values for each of the categories as well. In that case, approximately what loss would we expect to see from the SVM classifier? I heard zero; that's actually not correct. OK, maybe that was my fault for asking an imprecise question. Suppose we draw each of the scores from a Gaussian distribution with a very small standard deviation, say 0.001, so that all the predicted scores are small random values. Then the expected difference between the correct category's score and any incorrect category's score is approximately zero. If you imagine churning through the loss computation, we get "small value minus small value", which is approximately zero, plus 1, so max(0, approximately 1) gives a loss of about 1 per incorrect category; and because the sum loops over all the incorrect categories, we'd expect to see a loss of approximately C - 1, where C is the number of categories we're trying to recognize. This might seem like a stupid question to ask, but it's actually a really useful debugging technique: whenever you're implementing a neural network or another learning-based system, you should think about what loss you'd expect to see if all the scores are approximately random, and then, when you start training your system, if you see a loss very different from what you expect, you probably have a bug somewhere. So this might have seemed like a contrived question, but going through this exercise of predicting the loss under small random scores is a very useful debugging technique whenever you implement a new loss function or start training with one. Another question: we saw that this formulation of the SVM loss sums over the incorrect categories only. What would happen if we summed over all the categories, including the correct one? Would this represent the same preference over classifiers, or some other preference over weight matrices? In this case, we'd just expect all the losses to be inflated by one, because we'd be adding an extra term to the sum, max(0, s_{y_i} - s_{y_i} + 1) = max(0, 0 + 1) = 1, so we add 1 to all the losses. This would express the same preference over classifiers: all the losses are inflated by a constant 1, but our ordering over weight matrices would not change. Yet another question: what would happen if, rather than a sum, we used a mean over categories? Then all the computed losses would just be multiplied by a factor of 1/(C - 1), and again, because that's a monotonic transform, this would express the exact same preference over weight matrices.
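The "expect roughly C - 1" sanity check above can be sketched directly. This is a hypothetical setup I'm adding for illustration (tiny Gaussian scores, 10 categories, random labels), not code from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 10000, 10  # many examples, 10 categories (as in CIFAR-10)

# Tiny random scores, as you'd expect from a freshly initialized classifier.
scores = rng.normal(scale=1e-3, size=(N, C))
labels = rng.integers(0, C, size=N)

correct = scores[np.arange(N), labels][:, None]
margins = np.maximum(0.0, scores - correct + 1.0)  # hinge with margin 1
margins[np.arange(N), labels] = 0.0                # skip the correct class
loss = margins.sum(axis=1).mean()

print(loss)  # very close to C - 1 = 9: each hinge term is max(0, tiny + 1)
```

If the loss you see at the start of training is far from this number, that points at a bug in the implementation.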
The loss values we see during training would change, but the preference over weight matrices would be exactly the same. Another question: what if we used some other formulation, say putting a square over this max value? That would actually be quite different: it changes all of the losses in a nonlinear way, and it changes the preference over weight matrices that our loss function expresses in a non-trivial way. You could no longer call this a multi-class SVM loss, because it would now express a different set of preferences over weight matrices. Now another question: what if we happened to get lucky and find some weight matrix W that made the overall SVM loss zero? Would that W be unique? It would not, because if we take our weight matrix and multiply it by two, we still get an overall loss of zero. You can see this by working through one of the examples: a loss of zero meant the score of the correct category was more than one greater than all the scores of the incorrect categories, and if we multiply the weight matrix by two, all the predicted scores also double, because the classifier is linear. That means the correct category's score is now more than two greater than the incorrect categories' scores, so we're still over the margin and still get zero loss. That leads to an interesting question: now that it's possible for two different weight matrices to achieve the exact same loss, how can we possibly express a preference between them? We found two different weight matrices achieving the same loss on the training data, so to distinguish them we need some additional mechanism beyond the training-set loss to express our preferences over classifiers. That's the idea behind regularization. Regularization is some piece that you add to the objective function, the overall learning objective, that fights against performing well on the training data. So far we've seen the overall loss as the average loss over the examples in the training set; this is usually called the data loss, and it measures how good the model's predictions are on the training data. It's very common to add an additional term to the overall loss function that does something else, something that does not involve the training data at all: this is called a regularization term, and it serves a couple of different purposes. One is to prevent the model from doing too well on the training data, basically to give the model something else to do other than just try to fit the training data. These different types of regularization will often come with some kind of hyperparameter, usually called lambda for regularizers, that controls the trade-off between how well the model is supposed to fit the data and how strongly the regularizer applies, so the overall objective looks like L(W) = (1/N) * sum_i L_i(f(x_i, W), y_i) + lambda * R(W). A couple of very common examples of regularization typically used for linear models are L2 regularization, which is the squared norm of the weight matrix W, and L1 regularization, which is the sum of the absolute
values of all the elements of the weight matrix W. Sometimes you'll see what's called the elastic net in the statistics literature, which is a combination of the L1 and L2 regularizers. All of these types of regularizers are also used with neural networks, but as we move to neural network models we'll also see other types of regularizers, such as dropout, batch normalization, and more recent things like cutout, mixup, and stochastic depth. There are a lot of interesting regularizers that people use for neural networks, but the basic idea of why we might want to use them is somehow threefold in my thinking. One is that adding some additional term to the loss beyond the data loss allows us to express preferences between different types of models when those models are not distinguished by their training accuracy; this can be a way to inject some of our own human prior knowledge into the types of classifiers we'd like to learn. A second is to avoid what we call overfitting. Overfitting is a bad problem in machine learning: it happens when you build a model that works really, really well on your training data but performs very poorly on unseen data. This is a point where machine learning is quite distinct from something like optimization. In optimization we typically have an objective function and our whole goal is just to find its minimum, but in machine learning we often don't really want to do that at all, because at the end of the day we want to build a system that performs well on unseen data; finding the model with the best possible performance on the training data might actually work against us in some ways and result in models that do not work well on unseen data. And then there's another, more technical bit: if we're using gradient-based optimizers, then
adding this extra regularization term can add extra curvature to the overall objective landscape, and that can sometimes help the optimization process. So I said that one idea of regularization is that we can express preferences over the different types of classifiers we want the model to learn. Here's an example: we have an input vector x of all ones, and we consider two different weight vectors w1 and w2. Imagine we're in some kind of linear classification or linear regression setting; then the prediction of a linear model with this input x and either of these two weight vectors is 1, because the inner product of x with either of them is 1. That means that if we were going solely by something like a data loss, the loss would have no way to distinguish these two settings of the weights, and they would be preferred equally. But if we add an L2 regularization term to our loss function, it allows us to express an additional preference, to tell the model which of the two we'd prefer. If you compute the L2 penalty of the w1 vector, it's 1, whereas for the second vector each element contributes 1/4 squared, which is 1/16, and we've got four of those, so the overall penalty is 1/4. So the weight vector w2 would be preferred if we add in this L2 regularization. And this is very interesting, because it's one way to think about what an L2 regularizer is doing: when you have two different options that compute the same value on the input, you could either choose to spread out your weights to use all of the available input features, or concentrate all of your weight on exactly one input feature, and when you're using an L2 regularizer you're giving the model an extra hint that you'd prefer it to use all available features where possible, even if using a single feature would achieve the same result. This could be useful if you believe individual features might be noisy and you have a lot of features that could all be correlated, so you want to tell the model to use all the available features. Something like L1 regularization tends to express the opposite preference: it tells the model to prefer to put all of its weight on a single feature where possible. So it's interesting that these different regularizers allow us to give the model extra hints about what types of classifiers we'd like it to learn, completely separately from its performance on the training data. The second really interesting use of regularization is to prefer simpler models in order to avoid overfitting. Imagine we're building a model that receives a scalar input x and predicts a scalar output y, and suppose we've got some noisy training data specified by these blue points. We could imagine fitting two different models to this training data: maybe model f1 is this blue curve that goes and perfectly fits all of the training points, whereas model f2 is this green curve that does not perfectly fit the training points but is somehow simpler, because it's a line and not a big wiggly polynomial. Given our human intuition about the problem, we might have reason to believe that a line is a more generalizable solution to the task at hand, and indeed, if we imagine collecting a couple more noisy data points that also fall roughly along a line, you can see that the blue curve f1 might make very bad predictions on this unseen data, while the simpler green curve f2 might generalize better.
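The all-ones example above is easy to check directly; a minimal NumPy sketch:

```python
import numpy as np

x  = np.array([1.0, 1.0, 1.0, 1.0])      # input of all ones
w1 = np.array([1.0, 0.0, 0.0, 0.0])      # all weight on one feature
w2 = np.array([0.25, 0.25, 0.25, 0.25])  # weight spread over all features

# Both weight vectors make the same prediction on this input, so a data
# loss alone cannot distinguish them:
print(x @ w1, x @ w2)  # 1.0 1.0

# ...but the L2 penalty can: sum of squares is 1 for w1 vs 1/4 for w2,
# so L2 regularization prefers the spread-out w2.
print(np.sum(w1 ** 2), np.sum(w2 ** 2))  # 1.0 0.25
```

An L1 penalty would not separate these two particular vectors (both have absolute sums of 1), which matches the point that L1 expresses a different preference, toward concentrating weight on few features.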
Of course, I need to point out that we've been talking about linear models, and people always complain that this slide has a model on it that is definitely not linear; it's just a cartoon to express the idea of preferring simpler models with regularization. So the takeaway here is that regularization is really important when you're building machine learning systems, and you should basically always incorporate some form of regularization into whatever machine learning system you're trying to build. So now we've seen the idea of a linear classifier, we've seen the notion of a loss function, we saw a concrete example of a loss function in the multi-class SVM loss, and we've talked about regularization as a way to prefer one type of classifier over another. Well, another way you can give the model your preferences about the types of functions you'd like it to learn is by using different types of loss functions to train it. So far we've seen the multi-class SVM loss, but another very commonly used loss, perhaps the most commonly used loss when training neural networks, is the so-called cross-entropy loss, or multinomial logistic regression; this comes with a lot of names, so you'll see a lot of names for it, but they all mean the same thing. The intuition is the following. Remember, so far we've not really given much interpretation to the scores being spit out by our linear model. We just said we had an input x and a weight matrix W, and the model was somehow spitting out some collection of scores; the multi-class SVM loss did not really give any interpretation to those scores other than saying that the score of the correct class should be higher than the scores of all the other classes. Well now, as we move to the cross-entropy loss, we're motivated by
a different goal: we want to give some interpretation to the scores the model is predicting. With the cross-entropy loss, what we want is some probabilistic interpretation of the scores being predicted by the model: we'd like to find a way to take this arbitrary vector of scores and interpret it as a probability distribution over all the categories we're trying to recognize. The way we do that is with this particular function called softmax, which has the functional form shown here. Basically, we take the raw scores predicted by the classifier, and these raw scores are sometimes called unnormalized log-probabilities, or logits; you'll see these terms thrown around. We take these raw scores and run them through an exponential function: we take e to the power of each individual score, applied elementwise to the score vector. The interpretation here is that we know probability distributions are supposed to be non-negative in all their slots, and the output of an exponential is also non-negative, so this is a way to force our outputs to be non-negative. These are sometimes called unnormalized probabilities, and that name is very suggestive: it should tell you that the next thing we want to do is normalize. Indeed, what we then do is take the sum over all the unnormalized probabilities and divide each of them by that sum. After this operation, we have a vector whose elements are all non-negative and sum to 1, so we can interpret it as a probability distribution over all the classes we're trying to recognize. This combination of taking exponentials and then dividing by the sum of the exponentials is called the softmax function, and this gets
used in a lot of different places in machine learning. As a bit of an aside, the reason it's called softmax is because it's a differentiable approximation to the max function. If you look at this raw score vector, the max is the middle slot, 5.1. You could imagine a version of the max function that output the vector (0, 1, 0), with a 0 in all the non-max slots and a 1 in the slot of the max element, but that would be a non-differentiable function, or rather one with zero derivative almost everywhere, so we would not like to use it when training neural networks. The softmax function is a soft, differentiable approximation to that hard max: you can see that the maximum value of the unnormalized log-probabilities was 5.1, and it ended up as the largest element of the final probability distribution. This softmax function gets used in a lot of places in different types of neural network models, whenever you want to compute the max of something but also want it to be differentiable, so it's a very useful function and a very useful tool to have in your toolbox when building differentiable neural network systems. But with that long aside, basically what we've done is take this raw score vector and convert it into a probability distribution, and given that probability distribution we now need to compute a loss for this example. The way we do that is by taking the negative of the log of the probability assigned to the correct category. In this case the correct category is cat; the probability assigned to the correct category is 0.13, and the minus log of that is 2.04, so the loss assigned to this example when training with a cross-entropy loss would be 2.04.
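As a sketch, the cat-image numbers above can be checked directly in NumPy, using the same score vector as in the SVM example:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([3.2, 5.1, -1.7])  # raw scores (logits) for the cat image
probs = softmax(scores)
print(np.round(probs, 2))  # roughly [0.13, 0.87, 0.00], a valid distribution

loss = -np.log(probs[0])   # cross-entropy loss = -log P(correct class = cat)
print(round(loss, 2))      # 2.04
```

Subtracting the max before exponentiating doesn't change the result (it cancels in the ratio) but avoids overflow for large scores, which matters once scores aren't tiny.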
So then this operation of taking the minus log of the probability of the correct class maybe seems a bit arbitrary, but the reason we take this particular form is that it's an instance of maximum likelihood estimation. I don't want to go into the details here, but if you've taken something like EECS 445 or 545 you would have talked about that in detail, maybe excruciating detail. One basic intuition for why this is a reasonable loss is the following. You can say that our model has predicted some probability distribution over the categories, and there exists some ground-truth or correct probability distribution we would have liked it to predict. The correct distribution would have assigned all of the probability mass to the correct class, so the target distribution in this case would have a 1 in the first slot and 0 in all the others. Now we want some function that compares probability distributions. If you take information theory, there are a lot of nice mathematical reasons why a particular functional form called the Kullback-Leibler divergence is often used to measure differences between probability distributions, and if you imagine using this KL divergence to compute the difference between the predicted distribution in the green box and the target distribution in the purple box, then if you work out the math, it comes out to be the minus log of the probability assigned to the correct class. Information theory has all these nice, interrelated ways to manipulate probabilities: there's another quantity called the cross-entropy, a slightly different way of measuring differences between probability distributions, which is the entropy of one distribution plus the KL divergence between the two. The reason this loss function is often called the cross-entropy loss is that it's monotonically related to the cross-entropy between the two probability distributions. So if we sum this up, what the cross-entropy loss is doing is maximizing the probability of the correct class, using this particular log formulation. Then we can ask a couple of questions about this loss as well, just as we did for the multi-class SVM loss. First, what are the minimum and maximum possible losses for an example under the cross-entropy loss? The minimum loss is 0 and the maximum loss is infinity. But what's interesting here is that with the SVM loss it was actually possible to achieve the minimum, because we could get a loss of 0 just by having the score of the correct class be a lot higher than all the other classes. With the cross-entropy loss, the only way we could actually achieve a loss of 0 would be if our predicted probability distribution were actually one-hot; the only way we'd get exactly 0 is if the predicted and target distributions were actually the same, but because our predicted distribution is produced through the softmax function, there's no practical way we can ever actually achieve zero loss. Another question: remember the same debugging trick we used for SVMs. If all of our scores are small random values, what loss would we expect to see? In this case, if all of our scores are small random values that are about the same, we'd expect to predict a uniform distribution after running the predicted scores through the softmax function, so the predicted probability distribution would be uniform over the C categories, with probability 1/C in each of the C slots, which means that when we take the minus log of
the correct class it would be minus log of 1 over C it would be minus log of 1 over C and that's a typo or log of the number of categories and this is a number you should again be very familiar with so if we're training on the CFR 10 dataset then you should know that natural log of 10 is about 2.3 because that's the loss you should expect to see at the beginning of training so when you implement a linear classifier with a cross-country loss on CFR 10 and you don't see something about near 2.3 at the beginning that means that you've done something very long very wrong and you have a bug this is also a useful number to know because if you're if during the training process you ever see losses that are much much higher than 2.3 with a 10 category problem that means something has gone very very wrong during the optimization because now your classifier is doing worse than random so sort of practically speaking when you're whenever you're training a model with a cross entropy loss it's always useful to have in the back of your mind what is this what is log of the number of categories and then kind of use that as a way to benchmark whether you've implemented things properly or whether the model has totally blown up and is now predicting something work much worse than random okay so then it's now we've talked about two different types of losses one being cross entropy and one being soft matter the the multi-class SVM and it's interesting to think about what happens in what happened how do how to compare these two different losses and how these two different losses would behave on the same data so let's assume that we've got some data set of three examples in three categories like we've been thinking about so far and assume our predicted categories are the predict the ground truth category is category 0 for each of these examples and our classifier has predicted these through these set of scores on the left so then what would be the cross entropy loss in this situation and 
what would be the SVM lost in this situation well in this case the the the SVM loss is easy because we can see that the that the ground truth category scores of 10 are r1 are at least one greater than all the incorrect scores so the SVM loss would get zero here in this situation and the cross-entropy loss would be some value that's greater than zero that I definitely can't compute all those logs in my head but the difference is that this is this is kind of pointing to the same point we saw that before whereas with the SVM loss it's very easy and very possible to actually achieve zero loss whereas for the cross entropy you'll never get zero loss so then what what happens to each loss if I slightly change the scores of the last day of the last data point all right so this last data point has a predicted score of 10 for the crack code category and a predicted score of minus 100 for the two in correct categories so in this case the SVM loss won't care the SVM is already giving zero loss to this example and if we change it just a little bit then it's it doesn't really care but the cross-entropy loss on the other hand is never satisfied for this particular example it's already doing a really good job at classifying an example because the correct score is like way way way higher than all the incorrect scores but the cross entropy loss doesn't care the cross entropy loss always wants to continue pushing these farther and farther apart and continue pushing the the predicted score of the correct class up to positive infinity and keep pushing all the scores of all the incorrect classes down tune that down to negative infinity so with cross entropy you just keep training forever and it'll just continue trying to separate those scores more and more and more and then we get a similar intuition if you think about doubling the score of the correct class from 10 to 20 then again the cross entropy loss will decrease where the SVM loss will still be zero so then kind of to recap what 
we talked about today we introduced this notion of linear classifiers as this matrix multiply and a vector we talked about these three different viewpoints to think about what linear classifiers are doing and saw how these different viewpoints can have different implications what we're thinking about and we saw the idea of a loss function to quantify our unhappiness with the present performance of our classifier but now the next question of course is how will we actually go about finding the best W once we've written down our preferences and for that well you can come back next time and we'll talk about optimization
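The two losses compared in this lecture can be sketched side by side in plain Python. The function names are my own, and the score vectors echo the illustrative numbers from the worked examples (a margin of 1 is assumed for the SVM loss, as in the lecture):

```python
import math

def multiclass_svm_loss(scores, correct, margin=1.0):
    # Sum of margin violations by the incorrect classes (hinge loss)
    return sum(max(0.0, s - scores[correct] + margin)
               for i, s in enumerate(scores) if i != correct)

def cross_entropy_loss(scores, correct):
    # -log softmax(scores)[correct], computed via a stable log-sum-exp
    m = max(scores)
    log_sum_exp = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_sum_exp - scores[correct]

# Worked example from the lecture: scores (3.2, 5.1, -1.7), correct class 0
example_loss = cross_entropy_loss([3.2, 5.1, -1.7], 0)   # about 2.04

# Debugging check: with uniform scores over C = 10 classes the cross-entropy
# loss is log(10), about 2.3, the value to expect at the start of CIFAR-10 training
init_loss = cross_entropy_loss([0.0] * 10, 0)

# The SVM loss is exactly zero once the correct class clears the margin, and it
# stays zero when the correct score doubles; cross-entropy keeps decreasing
x1 = [10.0, -2.0, 3.0]
x2 = [20.0, -2.0, 3.0]
```

Comparing `x1` and `x2` reproduces the key behavioral difference: the hinge loss is satisfied and returns 0 for both, while the cross-entropy loss stays strictly positive and shrinks as the score gap grows.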
Deep_Learning_for_Computer_Vision
Lecture_19_Generative_Models_I.txt
All right, let's get started. Welcome back to lecture 19, and today we're going to talk about generative models, part one, and then coming up after this will be generative models part two next time, as you could probably guess. As a recap, last time we talked about different ways to deal with videos with convolutional neural networks. We saw a bunch of different strategies for extending our neural networks from working with two spatial dimensions to working with two spatial dimensions plus one temporal dimension. You'll recall that first we saw that the super simple method of a single-frame CNN actually works really well, and we should always try it the first time we attempt any kind of video task; then we saw other techniques like late fusion, early fusion, 3D CNNs, two-stream networks, CNN plus RNN, and convolutional RNNs, all these different mechanisms for fusing spatial and temporal information. That was all really good stuff that will hopefully be useful if you ever find yourself confronted with the need to process videos with deep neural networks. But today we're going to take a different approach and talk about a very different sort of problem than we have the rest of the semester: the problem of generative models. What exactly a generative model is will take a little bit of unpacking, but I do feel the need to warn you that there's going to be a bit more math in this lecture than we have seen throughout most of the semester; I think we'll see more equations and fewer pretty pictures, but we'll hopefully get through it together. In this lecture and the next we're going to talk about different approaches to generative models. Last lecture I promised that today would be generative adversarial networks; while I was going over the material I realized that was not a good idea, so I'm going to present things in a different order: today we'll talk about variational autoencoders and autoregressive models, and next time we'll cover generative adversarial networks.

To unpack what a generative model is, I think we need to step back a little bit and talk about two distinctions to keep in mind when training neural network systems, or really machine learning systems more broadly. One big distinction is between supervised learning and unsupervised learning. Supervised learning is what we've been doing the majority of the semester. In supervised learning, the setup is that we're given some dataset, and this dataset consists of tuples: each element in our dataset gives us some input piece of data X, which might be an image, as well as some kind of output Y that we want to predict from that input, which might be a label like "cat". The general goal of supervised learning is to learn some function that maps from the inputs X to the outputs Y. One thing to keep in mind about supervised learning is that most supervised learning datasets require humans to annotate them: we can go on the internet and download lots and lots of images to give us lots of examples of X, but in order to get the labels Y that we want our system to predict, we typically have to have people go out and annotate all of the outputs we want for all of our training images. Supervised learning is really, really effective, as we've seen over and over again this semester: if you have access to a big dataset of X's and Y's that have been labeled by people, you can usually train a good neural network to learn the mapping and predict the Y's given the X's.

We've seen examples of supervised learning over and over again this semester. The canonical example has been something like image classification, where X is an image and Y is a categorical label. It can be something like object detection, where X is an image and Y is a set of boxes in the image with categorical labels. It can be something like semantic segmentation, where X is an image and Y is a semantic category label per pixel in the image. Or it can even be something like image captioning, where X is an image and Y is some natural language description of the input image that has been written by people. The underlying thing to remember is that we can train on big datasets using neural networks, but with supervised learning we need someone to annotate the datasets for us, so this data annotation can be a potential barrier in our ability to scale up models to the extent that we would really like to.

Of course, we need to contrast this with a different approach to machine learning, which is unsupervised learning. Unsupervised learning, I think, is a little bit of a nebulous term; to some extent, unsupervised learning is everything that's not supervised. But the general idea is that we only get raw data, for example a large collection of images X, and we do not get access to any kind of ground-truth labels or outputs Y. The goal in unsupervised learning is to somehow build a model that can process this large set of images and uncover some kind of hidden structure in the data. Now, "hidden structure" is a bit of a fuzzy term, which is why I think unsupervised learning as a whole is a bit fuzzy, but the important part about unsupervised learning is that we don't require human annotation. The dream of unsupervised learning is to build systems where we can just download all the data that's out there on the web, or all the data we can get access to, and then train models to uncover structure in these large quantities of data without having to label them one by one with human annotators. This is kind of a holy grail of machine learning in some ways: we want to discover methods that can slurp in as much unsupervised data as we can get and use that data to make models better and better. I think we're a fair way from achieving that sort of holy grail, but that's what we strive toward when we think about the unsupervised learning problem.

To make this a little more concrete, we can talk about a couple of concrete examples of unsupervised learning tasks that you may have seen in other contexts. One example is clustering: we just get a whole bunch of data samples, and the goal is to break them into clusters. Note that there are no labels here; we're just trying to uncover latent structure in the data, namely the clusters that might naturally emerge from the raw data. Another example of an unsupervised learning problem is dimensionality reduction: if you've taken an introductory machine learning class, you may have seen techniques like principal component analysis, which can be used to project high-dimensional data down into lower-dimensional spaces. The objective in these dimensionality reduction tasks is that we've got a large set of data points that live in some very high-dimensional space, and we'd like to find some low-dimensional subspace within that high-dimensional space that captures most of the structure, or most of the variability, of the raw input data. Somehow discovering this low-dimensional sub-manifold of the data is a way we can try to discover structure in the raw data without requiring labels.

Another example, which we'll actually cover in more detail today, is the idea of an autoencoder. This is a special type of neural network that tries to reconstruct its input, and in doing so it tries to learn a latent representation in the middle that can be useful for other downstream applications; again, this can be done using only raw data X and no labels Y whatsoever. And another example is density estimation, where we're given a large collection of data samples and our goal is to learn a probability distribution that puts high probability mass on all the samples that appear in our dataset and low probability mass on all the other potential points that did not appear in our dataset. So this gives us the big contrast between supervised and unsupervised learning: supervised learning is what we've done, and it actually works very well, but it relies on large quantities of not just big data but big labeled data, and that's a big constraint; where we want to get to with unsupervised learning is to develop techniques that can uncover or learn useful latent structure using very large quantities of unlabeled data. It turns out that our topic today, generative models, is going to be one way we can try to approach these unsupervised learning tasks: by learning large-scale generative models, we can try to learn structure in our data using a lot of data but without requiring human annotation on that data. So that's the first major distinction I wanted to point out. Was this clear to everyone? Any questions on the supervised versus unsupervised distinction? All right.
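The autoencoder idea mentioned above can be sketched minimally in plain Python. This is my own toy construction, not from the lecture: a linear encoder compresses 2-D points to a 1-D code, a linear decoder reconstructs them, and plain SGD minimizes the reconstruction error. Real autoencoders are deep nonlinear networks, but the ingredients, and the fact that no labels are ever used, are the same:

```python
def recon_loss(data, enc, dec):
    # Total squared reconstruction error over the dataset
    total = 0.0
    for x in data:
        z = enc[0] * x[0] + enc[1] * x[1]          # 1-D latent code
        total += (z * dec[0] - x[0]) ** 2 + (z * dec[1] - x[1]) ** 2
    return total

def train_autoencoder(data, lr=0.01, steps=500):
    enc = [0.5, 0.5]   # encoder weights: z = enc . x
    dec = [0.5, 0.5]   # decoder weights: x_hat = z * dec
    for _ in range(steps):
        for x in data:
            z = enc[0] * x[0] + enc[1] * x[1]
            err = [z * dec[0] - x[0], z * dec[1] - x[1]]
            dz = 2 * (err[0] * dec[0] + err[1] * dec[1])
            for i in range(2):
                dec[i] -= lr * 2 * err[i] * z      # gradient step on decoder
                enc[i] -= lr * dz * x[i]           # gradient step on encoder
    return enc, dec

# Unlabeled 2-D data lying on a 1-D subspace; no labels Y appear anywhere
data = [[t, 2 * t] for t in [-2, -1, 0.5, 1, 2]]
before = recon_loss(data, [0.5, 0.5], [0.5, 0.5])
enc, dec = train_autoencoder(data)
after = recon_loss(data, enc, dec)   # much smaller than before
```

Because the data here really does live on a 1-D sub-manifold, the learned 1-D latent code captures essentially all of its structure, which is the point of the bottleneck.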
Then let's talk about our second big distinction in machine learning models that we need to be clear on. Here the distinction is between different probabilistic frameworks we can use to represent different types of machine learning models; in particular, we often like to distinguish discriminative models from generative models, and the distinction here is in the type of underlying probability structure these different models are trying to learn. One lens through which we can view a lot of machine learning models is that we're trying to fit some kind of probability distribution to the data we're training on, and the exact structure of what we're trying to model in that probability distribution gets these different names. In a discriminative model, what we're trying to do is learn a probability distribution that predicts the probability of the label Y conditioned on the input image X. This is something we've seen over and over again, and discriminative models go hand in hand with supervised learning: whenever you build a discriminative model like an image classifier, it inputs the image and outputs a probability distribution over all possible labels for that image. A generative model, on the other hand, tries to learn a probability distribution over images X, and there's a third thing called a conditional generative model, which tries to learn a probability distribution over images X conditioned on labels Y. It seems like these three categories of probabilistic machine learning models aren't that different when you just look at the symbols on the page: we've got an X and a Y and we've kind of swapped the order, so what's the big deal? But it turns out there's actually a huge mathematical distinction between these three categories of machine learning models.

The reason for that is that we need a really quick recap on what a density function is in probability. Any time we're working with probabilistic models, the mathematical objects we often use to model probability distributions are density functions. A density function p(x) inputs some piece of data x and outputs a positive number that tells us how likely that piece of data was under the probability distribution, where higher numbers mean more likely and lower numbers mean less likely. Yeah, question? The question is: do we need to use supervised learning to learn conditional generative models? You typically would need supervised learning, because it needs access to both the data and the labels, so a conditional generative model will typically require human labels. Now, the critical property of probability density functions is that they're normalized: if you take your probability density function and integrate it over all possible inputs to the distribution, it needs to come out to one. That integral might be a finite sum if the distribution is over a finite set, or a continuous integral if you're integrating over an infinite set like a vector space, but the key point is that these density functions need to integrate to one, and that's a critical piece that leads to huge distinctions between these different categories of probabilistic machine learning models.

What this means is that, because a probability density function needs to integrate to one, different elements within the support of the probability distribution must compete with each other for probability mass: in order to assign higher probability to one element, we must by design assign lower probability to a different element. This normalization constraint introduces a kind of competition among the different elements over which we are modeling probabilities, and the question of what is competing under the probability distribution leads to pretty huge differences between these different kinds of probabilistic models.

First, let's think about discriminative models, or supervised classification, through this lens of probability density functions and labels competing with each other. Here we input a piece of data X, like an image, and it outputs a probability distribution over all of the labels that could potentially be assigned to that input image. In this toy example we're modeling two different labels, cat and dog, and just outputting a binary distribution over these two types of labels. Because the density function is normalized, when we assign higher probability mass to cat, then by design we must assign lower probability mass to dog, because it has to integrate to one. If we run a different image, like this image of a dog, through the same classifier, then we would produce a different probability distribution: probability of cat conditioned on the dog image and probability of dog conditioned on the dog image. Under that second probability distribution, again, the labels have to compete with each other.

The critical thing to realize about discriminative models is that images never have to compete with each other: under a discriminative model there's no competition among different types of images, there's only competition among the different types of labels that could potentially be assigned to each image. That has a couple of important implications. One is: what happens if we try to run an image into a discriminative model that just doesn't fit the label set? For example, in the image at the top, we were training a binary classifier that knows how to recognize cats and dogs; if we feed it an image of a monkey, there's nothing the classifier can do to tell us that this was an unreasonable image that did not fit its label space, because the way a discriminative model works, for every possible image it is forced to output a normalized probability distribution over the labels. This gets even more extreme when you imagine feeding totally crazy or totally wild images into a discriminative model: if we feed these really abstract images that clearly have no business being called either a monkey or a cat, then again our model is still forced to output a normalized distribution over these two types of labels. Personally, I think this is one reason why adversarial attacks are possible on discriminative models: when you generate an adversarial attack on a supervised machine learning model, what you're doing is synthesizing an unreasonable image that's sort of not in the support of the types of images that model was trained on, and when you pass this unreasonable image to a discriminative model, it's still forced to output some normalized distribution over the possible label space. So there's this fundamental shortcoming of discriminative models: they just have no capacity to tell us when we've fed them an unreasonable input image.

Then let's think about what happens with a generative model in contrast. A generative model learns a probability distribution over all possible images that could exist in the world, which means that for each possible input image we could imagine feeding to the system, the model needs to assign a number saying how likely that image is to exist. What's really fundamental is that this is a really hard problem, because the likelihoods output by the density function need to tell us, for any pair of images, which one is more likely, and this requires some kind of really deep visual understanding: if you train a generative model on images, it kind of needs to know, say, what is more likely in the world, a three-legged dog or a three-armed monkey, because a generative model needs to take any image we could possibly throw at it and output a likelihood value telling us how probable that image was. So with a generative model, different potential images are competing with each other for probability mass, and in order to assign reasonable probabilities to all possible input images, it's very likely that the model would need a very deep understanding of the visual world. Even in this example (this is not a real generative model, I just put bars in PowerPoint), deciding the ordering is kind of tricky: I decided that maybe the dog bar should be the highest because I think dogs are probably more likely to be outside than cats, or maybe because this is an adult dog and it's more likely to see adult dogs outside than it is to see kittens outside
because I know that this is a kitten and I know that cats are only kittens for a very small period of their life span and then maybe photos of monkeys are maybe less likely than either dogs or kittens just because people tend to take less photos of monkeys and then this may be abstract our image is maybe even less likely because I don't know what this is I think it's maybe very unlikely to see images that look exactly like this yeah questions yes so you got you got to be careful that on a density function over an infinite space there's not actually a sign of probability instead it assigns a likelihood or is it apply of science among some amount of density so once you're operating over an infinite dimensional space it doesn't make sense to talk about the probability of a single data point instead what you can do is integrate the density function over some finite region and then integrating a density function will actually give us a probability of observing some piece of data that lies within the region over which we integrate so you need to be very careful with the word probability when you talk about density functions over infinite infinite dimensional spaces yesterday that's a fantastic question it's very insightful so the question is how can we tell how good is a generative model and that's an extremely challenging question that I think a lot of people struggle with but one sort of mechanism that we use to evaluate generative models is often this idea of perplexity so then if I train a generative model on some training set and then I present the generative model with a unseen test images then if it did a good job of learning the underlying visual structure of the world then it should assign relatively high probability density to unseen images even if those particular images had not been seen during training time so that's kind of the best I think that's the gold standard evaluation metric that we have for generative models so then another interesting fact about 
generative models is that they have the capacity to reject samples that they think are just unreasonable so for example if we maybe put in this abstract art then the generative model can just tell us that this was an unreasonable input that had very little to do with any of the inputs that it saw during training so a generative model has his capacity to tell us when inputs that we present it with are maybe very unlikely under the training data that it was that it was trained on so then we have this third category of model called a conditional generative model so now a conditional generative model is learning for every possible label why it's learning a probably distribution over all possible images acts which means that now every possible label that we could learn is going to induce a separate competition among all possible images so then for example in the top row we're showing that the probability of each image conditioned on the fact that that image is a cat and we see that maybe the cat is a very high density under the under the top distribution and now in the but in the middle we're showing probability of each image given that it's a dog and now again now the dog should have a higher density and all the other images should be lower but now what's interesting is that a conditional generative model sort of has this capacity to tell us when inputs were so you could imagine doing classification with a conditional probability with a conditional generative model you could take your input image X and then evaluate the likelihood of that image and evaluate probability of our input image x over each you over each possible label Y and then you could make a classification decision based on which one of those were higher so you could indeed imagine training a supervised classifier using a conditional generative model but the distinction is that if you were to train a classifier using a conditional generative model then it would actually have the capacity to reject 
unlikely data samples because with a conditional generative model it's possible that an input that some input image could have a low density under all possible labels and we see that for example with the abstract art example in this slide so then this tract art image is given a very low likelihood under each different under both of the probability of distributions which leads us to which so that if you imagine building a classifier that was based on conditional generative models then you can imagine like maybe there's a threshold under which we say this is an unreasonable image and I refuse to classify it so that's kind of a big distinction between these different categories of models although I think it's also important and interesting to realize that these different types of models are actually not fully distinct so if you recall Bayes rule then Bayes rule lets us sort of flip around the conditioning in a properly distribution so Bayes rule tells us that the probability of X given Y is equal to the probability of Y given X divided by the probability of Y times the probability of X and what this means is that using Bayes rule we can actually build a conditional generative model out of existing pieces that we've already seen so this this expression on the Left probability of X given Y is a conditional generative model now the top expression in this the numerator in this fraction is the discriminative model probability of Y given ax the term on the right in purple is an unconditional generative model and the term at the bottom probability of Y is some prior distribution over labels so prior distributions over labels you can sort of just count up the number of labels that occur in the training set so this is kind of a nice beautiful relation among these different types of probabilistic models and it shows that shows us that maybe the really the two most fundamental types are the discriminative model and the generative model and if you can build those two then you can 
So then a big thing we want to do is learn how to build these unconditional generative models, because we already know how to build discriminative models — and if we can build both, we can put them together to build a conditional generative model. It's also interesting to think about what we can do with these different types of probabilistic models once we've learned them. Discriminative models we've seen many times already: they let us assign labels to new data at test time, and they can be used for a kind of supervised feature learning. For example, we can train a big supervised model on images and labels from the ImageNet dataset, strip off the final classification layer, and use the body of the neural network as a feature extractor. So a discriminative model can learn to extract useful, semantically meaningful features from images, assuming it's trained with a lot of labeled data.

With a generative model we can do more interesting things. One, which we've already mentioned, is outlier detection: it can tell us when input images are very unlikely given the data it was trained on. It also potentially lets us do feature learning without labels, because a model that can assign meaningful likelihoods to all the images we pass it has very likely also learned useful feature representations along the way. And another really cool thing we can do with generative models is actually sample from them to synthesize new
data that matches the input data on which it was trained. Because generative models learn a distribution over images, we can sample from a trained model to synthesize new images, which is a really cool application. And a conditional generative model, as we've seen, can assign labels while simultaneously rejecting outliers; it can also synthesize novel images conditioned on a label value. For example, with a conditional generative model you could ask it to generate new cats or new dogs — or if Y were a sentence rather than a label, you could generate images conditioned on natural-language inputs, like "give me an image of a cat with four legs and a purple tail." I don't think that would actually work yet, unfortunately. So this gives us a lens through which we can view different types of machine learning models with a bit of probabilistic formalism.

Now, it turns out this idea of generative models is so important, and such a massive topic, that there's no way we can cover it even in two lectures. To give you a sense of the different types of generative models out there, I want to go through a very brief family tree — a bit of the lay of the land. At the root node we have generative models, and the big split is between models with explicit density functions and models with implicit density functions. I told you that a generative model is all about assigning a density to images; well, for some categories of generative models, after training you can input a new image and
then the model spits out a likelihood value — those are the generative models with explicit density functions. With implicit density functions, on the other hand, there's no way to extract a likelihood value, but we can still sample from the underlying distribution after the model has been trained. Within explicit density functions there are two categories to think about. One is tractable density models: these do what you'd expect — you can input a new image at test time and receive the actual value of the density function on that image. There are also models with approximate density functions, which have no way to efficiently produce the exact value of the density but can instead compute some kind of approximation to it. Within those, there are two main ways to make the approximation: variational methods, which we'll see an example of later, and Markov chain methods, which I don't expect you to know much about. Over on the implicit density side — again, models where you can sample from the underlying density function but cannot evaluate it — examples would be Markov chain Monte Carlo methods, where you sample iteratively, and direct methods, where you can sample from the distribution directly. Within this big taxonomy, we're going to cover three different types of generative models that should give you a flavor of what these different categories mean. Today we'll talk about autoregressive models, which are examples of a generative model with an
explicit and tractable density function, and we'll also talk about variational autoencoders, which are an example of a model with an approximate but explicit density function. Then next lecture we'll talk about generative adversarial networks, which are an example of a generative model with an implicit density function: we can't get out the value of the density, but we can sample from it. To the question of what sampling means: whenever I generate a sample, I'm generating a random value, and I'm more likely to generate things with high values of the density function and less likely to generate things with low values — that's what I mean by sampling. And to the question about the distinction between implicit and approximate densities: it's fine if that's not fully clear at this point. This is meant as a lay of the land, and hopefully you can return to this map once we've seen concrete examples and better understand the distinctions between them.

Okay — with this overview of generative models, let's talk about our first one: autoregressive models. An autoregressive model, as we said, is a generative model with an explicit density function that is tractable, so it's maybe the easiest type of generative model to wrap your brain around. The basic idea is that we want to write down some parametric function that inputs a piece of data x and a learnable weight
matrix W, and spits out the value of the density function for that image. Then we train the model by taking a dataset of samples x and maximizing the value of the density function on the observed data: we want to assign high probability to all of the samples in our dataset, and because of the normalization constraint, that by design forces the model to assign low mass to things that were not in the training set. This is very standard probabilistic formalism. When training any kind of probabilistic model, we assume the samples of the dataset are independent, so the probability of observing the whole dataset is just the product of the probabilities of the individual samples, and we want to learn the value of the weight matrix that maximizes that likelihood. Of course, products are a little ugly to work with, so it's common to apply a log transform that turns the product into a sum: we want to find the weight matrix that maximizes the sum of log-probabilities over the dataset. And the log-probability will be represented by some parametric function f(x, W) that inputs a training sample and the weight matrix and spits out the value of the density on that piece of data — spoiler alert, this f is going to be a neural network for us. This is a general formalism for any kind of explicit density estimation where we can evaluate, compute, and backpropagate through the value of a density function parametrized by a neural network.
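As a small sketch of this maximum-likelihood setup — with a tiny hand-made discrete density standing in for the neural network f, and a single scalar parameter w standing in for the weight matrix — we can check that the log transform leaves the best parameter unchanged:

```python
import math

# Hypothetical toy density over 4 data values, controlled by one parameter w.
def density(x, w):
    scores = {0: 1.0, 1: w, 2: w * w, 3: 1.0}
    z = sum(scores.values())            # normalization constraint
    return scores[x] / z

data = [1, 2, 2, 1, 2]                  # observed training samples

def log_likelihood(w):
    return sum(math.log(density(x, w)) for x in data)

# The log transform doesn't change which parameter is better: the product of
# densities and the sum of log-densities rank parameter values identically.
for w_a, w_b in [(1.0, 2.0), (2.0, 5.0)]:
    prod_a = math.prod(density(x, w_a) for x in data)
    prod_b = math.prod(density(x, w_b) for x in data)
    assert (prod_a < prod_b) == (log_likelihood(w_a) < log_likelihood(w_b))

# Crude grid search for the maximum-likelihood parameter (gradient ascent
# on log_likelihood would play the same role for a real neural network).
best_w = max((0.5 + 0.1 * i for i in range(100)), key=log_likelihood)
```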
So now we need to write down a concrete mathematical form for this density function, and an autoregressive model is a particular way of parametrizing it. Here the assumption is that each piece of data x — say, an image — is composed of multiple subparts: x1, x2, x3, and so on might be the different pixels that make up the image. The probability of the image is the probability of observing all the subparts together, and by the chain rule we can factor this joint distribution: it equals the probability of the first subpart, times the probability of the second conditioned on the first, times the probability of the third conditioned on the first two, and so on. This holds in general — it's just the chain rule for factoring joint probability distributions, so it's always true. Iterating it, the density function of a data sample x is a product where each term is the likelihood of observing the current subpart conditioned on all of the previous subparts. Now, this formula should remind you of something we've seen already — can anyone guess? Bayes' rule? Not quite, and not this lecture; something from a previous lecture. A recurrent neural network — yep, that's it.
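The chain-rule factorization above can be checked numerically on a toy distribution. This is a hedged sketch — the joint over three binary "pixels" is just random numbers, not a learned model:

```python
# Hypothetical joint distribution over three binary "pixels" (x1, x2, x3),
# used only to verify the chain-rule factorization numerically.
import itertools, random

rng = random.Random(0)
outcomes = list(itertools.product([0, 1], repeat=3))
weights = [rng.random() for _ in outcomes]
total = sum(weights)
joint = {o: w / total for o, w in zip(outcomes, weights)}

def marginal(prefix):
    """p(x1..xk = prefix), summing the joint over the remaining pixels."""
    return sum(p for o, p in joint.items() if o[:len(prefix)] == prefix)

for x in outcomes:
    # p(x1, x2, x3) = p(x1) * p(x2 | x1) * p(x3 | x1, x2)
    chain = marginal(x[:1])
    chain *= marginal(x[:2]) / marginal(x[:1])   # p(x2 | x1)
    chain *= marginal(x[:3]) / marginal(x[:2])   # p(x3 | x1, x2)
    assert abs(chain - joint[x]) < 1e-12
```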
Right — so the structure of an autoregressive model, breaking down the probability of a sequence as the probability of the current token conditioned on all previous tokens, is exactly the structure we have in a recurrent neural network. We can build a recurrent neural network that inputs one token at each time step and outputs the probability of the next token, and through the recurrent design of the network, each prediction is implicitly conditioned on the whole sequence of subparts that came before. It turns out you've already trained autoregressive generative models — for generating sequences of words and sequences of letters — so this isn't a new concept; it's an old idea that we're couching in different mathematical formalism. So far we've used autoregressive sequence models to model probabilities of captions and sequences of words, but we can use the exact same mechanism to model probabilities of images: break the image up into pixels, iterate over the pixels in some meaningful order, and use recurrent networks plus an autoregressive model to train an explicit density function — a generative model of images. That gives us PixelRNN, a generative model with an explicit and tractable density function that we can train on input images. The idea is to generate image pixels one at a time, starting from the upper-left corner, computing an RNN hidden state for every pixel in the grid, and
each of those hidden states is conditioned on the hidden state of the pixel directly above it as well as the hidden state of the pixel directly to its left. Within each pixel we output the colors one at a time — first the value of the red channel, then the green channel, then the blue channel — and each color channel is discretized into values from 0 to 255, with the model predicting a softmax distribution over those 256 discrete values. To see how generation proceeds: we start by generating the colors of the upper-left pixel, then effectively run one LSTM along each row of the image and another down each column. After we've generated the RGB value of the upper-left pixel, we can condition the pixel immediately below it and the pixel immediately to its right, compute their hidden states, and predict their pixel values. Once those are computed, we expand outward diagonally across the image: for each pixel whose color we want to generate, we compute a new RNN hidden state conditioned on the hidden state directly above and the hidden state directly to the left, and then generate that pixel's colors. This marches from the upper left down to the lower right over the entire image.
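The sampling loop described above can be sketched in a few lines. This is purely illustrative: the `sample_pixel` function is a stand-in for the learned conditional, not a real PixelRNN, and the image is tiny — the point is only the generation order and the up/left dependency:

```python
import random

rng = random.Random(0)
H, W, LEVELS = 4, 4, 256  # tiny image, 256 discrete values per pixel

def sample_pixel(above, left):
    """Stand-in for the learned conditional p(pixel | above, left).

    A real PixelRNN computes a hidden state from its neighbors and samples
    from a softmax over 256 values; here we just blend the already-sampled
    neighbors with noise so the dependency structure is visible."""
    base = (above if above is not None else 0) + (left if left is not None else 0)
    return min(LEVELS - 1, base // 2 + rng.randrange(32))

img = [[None] * W for _ in range(H)]
for i in range(H):                 # generation order: upper-left to lower-right
    for j in range(W):
        above = img[i - 1][j] if i > 0 else None
        left = img[i][j - 1] if j > 0 else None
        img[i][j] = sample_pixel(above, left)
```

Note that the loop is inherently sequential — each pixel needs its up and left neighbors before it can be sampled — which is exactly why test-time generation is slow.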
Due to the structure of the RNN, the prediction we make for each pixel has an implicit dependency on all of the pixels above it and to its left: it has an explicit dependency on the hidden state immediately above and the hidden state immediately to the left, but because those in turn depend on other hidden states, each pixel implicitly depends on all of the previously generated pixels up and to the left. Training this looks exactly like training the RNNs we used for tasks like image captioning, except that rather than generating the words of a caption one at a time, we're generating the values of an image's pixels one at a time. A big problem with these PixelRNNs, though, is that they are very slow during both training and testing. This is a continual problem we've seen with RNNs: they have a sequential dependency, because each hidden state depends on the value of the hidden state before it. For an N-by-N image we need to take 2N − 1 sequential steps to march from the top-left corner all the way down to the bottom-right, which gets expensive if we want to generate high-resolution images. So PixelRNN models are super slow both for training and for evaluation at test time, when we want to sample new images. There's an alternative formulation called PixelCNN, which uses a very similar mechanism and still generates pixels one at a time starting from the upper-left corner, but rather than using a recurrent neural network to model the dependency, it uses a masked convolution. To generate the
new hidden state and pixel values of the highlighted pixel in red, we run a convolution that only looks at the pixels to its left in the same row and the pixels above it, within some finite receptive field. With this formulation you can compute all of these receptive fields in parallel, which makes PixelCNN much, much faster to train — although it's still quite slow at sampling time. So at training time PixelCNN can parallelize these masked receptive fields over all regions of the training image, but at test time we still have to generate pixels one at a time, so sampling remains slow. Now, if we look at some generated samples from a PixelRNN model: on the left, the authors of the paper trained a PixelRNN on the CIFAR dataset that we've used in the homework, and these are generated samples — new CIFAR-like images the model has invented for itself. On the right they did the same thing with the model trained on downsampled ImageNet images. You can see it's kind of hard to tell what's going on in these images. There's clearly some interesting structure — they're modeling edges, they're modeling colors; if you step back they look like images, but if you zoom in, they're full of garbage. So the models seem to be learning something reasonable about the high-level structure of images, but they're not generating really high-quality images at this point.
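The masked convolution described above can be sketched by constructing the mask itself. This is a hedged sketch of the so-called "type A" mask, ignoring the color-channel bookkeeping of the real PixelCNN: a kernel position is kept only if it comes strictly before the center pixel in raster order (above, or to the left in the same row). Multiplying a k×k kernel elementwise by this mask before convolving is what enforces the autoregressive ordering:

```python
# PixelCNN-style "type A" mask for a k-by-k convolution kernel:
# 1 for positions strictly before the center pixel in raster order
# (any row above, or same row and to the left), 0 elsewhere.
def make_mask(k):
    center = k // 2
    return [[1 if (i < center or (i == center and j < center)) else 0
             for j in range(k)]
            for i in range(k)]

mask = make_mask(5)
assert mask[2][2] == 0                      # the pixel cannot see itself...
assert sum(map(sum, mask)) == (5 * 5) // 2  # ...only the 12 positions before it
```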
One of the pros of autoregressive models is that, because they have an explicit density function, at test time we can feed in a new image and actually compute the value of the density function on it. That makes them among the easiest generative models to evaluate: we train them on some dataset, then evaluate the density function on unseen test images, and a good model should assign high probability mass to those unseen images — that's a fairly reasonable evaluation metric. The samples are also fairly reasonable. I made fun of them a little, but they show a lot of diversity, they model edges, and they capture both local and global structure, so even though they're not generating photorealistic images at this point, there's some really nontrivial modeling of the underlying images going on. To the question of how we generate the very first pixel: that's the same problem we already had when generating language, and the common solution is the same — add a special start token, padding the outside boundary of the image with a special start value that we feed at the very first time step. And yes, this is an unconditional generative model, so we have no control at test time over what is generated — although there are conditional variants of PixelRNN and PixelCNN.
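The evaluation protocol described above — score held-out images under the learned explicit density — can be sketched as follows. The "model" here is just a stand-in dictionary of per-image likelihoods, not a real PixelRNN, and the score is the average negative log-likelihood (lower is better, so a model assigning higher mass to unseen data wins):

```python
import math

def avg_nll(model, test_images):
    """Average negative log-likelihood over held-out data, in nats."""
    return -sum(math.log(model(x)) for x in test_images) / len(test_images)

# Hypothetical trained model: maps each held-out image to its likelihood.
toy_model = {"img_a": 0.5, "img_b": 0.25, "img_c": 0.25}.get
held_out = ["img_a", "img_b", "img_c"]
score = avg_nll(toy_model, held_out)
```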
For example, there are ways to thread label information or other conditioning information into these generative models that we haven't covered here, so you can train conditional versions of these autoregressive models and get some control over what gets generated at test time. There are also a lot of ways to improve on these models — the samples I showed were from the very first PixelRNN paper, and there have been many improvements since then: different architectures, multi-scale generation to make things more efficient, and a lot of training tricks to improve sample quality. Check out the references if you're interested. One big negative of autoregressive models, though, is that they tend to be very slow at test time: because we sample the pixels one by one, generating images is expensive. To the question of whether these models can generalize to image resolutions different from those they were trained on: the vanilla PixelRNN model probably cannot, because the RNN expects rows of a certain length, but some variants — especially the multi-scale autoregressive models — can do better generalization at test time.

Okay — let's move on to our second generative model: variational autoencoders. There's a chance we won't get through all this material, in which case some of it will get booted into next lecture, which
is fine, because we have two lectures for generative models anyway. With PixelRNN and PixelCNN, what we did was write down a parametric model that could compute the value of the density function on arbitrary input images, and then train the model to maximize the density at all the training samples — and that worked pretty well. A variational autoencoder does something a little different: we will not be able to access or compute the true value of the density function in any computationally efficient way, but it turns out we can compute a lower bound on the density. So rather than maximizing the true density, we'll maximize this lower bound, and hope that — if the true density is up here and the lower bound is down here — pushing up the lower bound pushes up the true density as well. That's the vague intuition behind variational autoencoders. To understand them, note there are two loaded words in the name: "variational" and "autoencoder." We need to take them one at a time, so first let's talk about non-variational autoencoders — regular autoencoders — to set up the variational flavor. A regular, non-variational autoencoder is not a probabilistic model; it's an unsupervised learning method that aims to learn feature
representations for images in an unsupervised way. We want to learn useful latent representations even without access to any labels Y. The idea is to build some neural network that inputs the raw images X and outputs some useful features Z that tell us something interesting about each image, with the hope that the features learned by this unsupervised method might be useful for downstream supervised learning tasks. So we'd train the model on a lot of unlabeled data, learn a good feature representation, and then use it in transfer-learning approaches to other tasks — so that maybe, rather than pre-training on ImageNet as a supervised task, we pre-train on a large collection of unlabeled images. The problem is that we want to learn this feature transform from raw data alone, and we can never observe the feature vectors Z — if we had Z in the training data, the problem would already be solved for us. So somehow we need the neural network to learn to output this feature vector without any help from labels. How are we going to do that? Architecturally, the network going from X to Z can be any architecture you want: originally it might have been a fully connected network with sigmoid nonlinearities, but more recent instantiations would be a deep residual network with ReLU and batch norm and all that other good stuff — the architectures here are the CNN architectures we've seen many times already. Now, the idea with an
autoencoder is that we force the model to learn to reconstruct the original training data. One portion of the model, the encoder, inputs the raw data and outputs the feature vector. We also train a second component, the decoder, which inputs that feature vector and tries to reconstruct the raw data that was fed to the encoder. The decoder is again a neural network of a kind we've architecturally seen before — if it's convolutional, the encoder might downsample with convolutions and the decoder might upsample with transposed convolutions. We train the whole thing with a loss saying the output of the decoder should match the input to the encoder: the model should learn to reconstruct whatever we put into it, essentially learning the identity function. For the example on the right, an autoencoder trained on CIFAR-10 inputs raw CIFAR-10 images at the bottom, passes them through an encoder with maybe four convolutional layers and a decoder with maybe four transposed-convolution layers, and tries to reconstruct the input. Now, this seems like a kind of stupid function to learn — we don't need a neural network to compute the identity function; we could just return the input. The point of an autoencoder is not that the identity function is useful; the point is that we force the feature vector Z to be very low-dimensional compared to the raw input data x.
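A minimal sketch can make the bottleneck idea concrete. This is not the lecture's convolutional model — it's a hypothetical 1-D linear autoencoder trained by plain gradient descent, where 2-D points lying near a line are squeezed through a single-number bottleneck z and then reconstructed:

```python
import random

rng = random.Random(0)
# Toy data: 2-D points near the line y = 2x, so one latent number suffices.
data = [(t, 2 * t + rng.gauss(0, 0.05))
        for t in [rng.uniform(-1, 1) for _ in range(200)]]

we = [0.5, 0.5]   # encoder weights: z = we . x       (the bottleneck)
wd = [0.5, 0.5]   # decoder weights: x_hat = wd * z

def loss():
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x in data:
        z = we[0] * x[0] + we[1] * x[1]
        total += sum((wd[i] * z - x[i]) ** 2 for i in range(2))
    return total / len(data)

lr, initial = 0.05, loss()
for _ in range(300):  # plain gradient descent on the reconstruction loss
    ge, gd = [0.0, 0.0], [0.0, 0.0]
    for x in data:
        z = we[0] * x[0] + we[1] * x[1]
        for i in range(2):
            err = wd[i] * z - x[i]
            gd[i] += 2 * err * z / len(data)
            ge[0] += 2 * err * wd[i] * x[0] / len(data)
            ge[1] += 2 * err * wd[i] * x[1] / len(data)
    we = [we[i] - lr * ge[i] for i in range(2)]
    wd = [wd[i] - lr * gd[i] for i in range(2)]

assert loss() < initial  # the bottleneck has learned the direction the data varies in
```

Because the data really does vary along one direction, the single-number bottleneck is enough for good reconstruction — the same reason a low-dimensional code can work for highly structured images.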
What autoencoders are really trying to do is compress the input data. If we have a very high-resolution input image x, but we can compress it down to a very low-dimensional latent code z and then reconstruct the original input image with high fidelity from that code, then we've probably learned something nontrivial about the data. So when you train an autoencoder, it's very important to have some kind of bottleneck in the middle, between the encoder and the decoder. Often this bottleneck is a size constraint: the number of activations in the latent layer should be much, much smaller than the number of raw pixels in the input. In a variational autoencoder we'll instead add a different kind of constraint — a probabilistic constraint — between the encoder and the decoder; but the key in either case is that there's some bottleneck forcing the network to learn to compress the data in a useful way. After training, we throw away the decoder, because we don't actually care about predicting the identity function; instead we use the encoder to initialize part of some other model, which we then train on some other dataset. The whole point of autoencoders is to learn a useful representation for downstream transfer-learning applications — and the exciting part is that we can do this with a lot of unlabeled data. To the question about the structure of the encoder versus the decoder: that's a lot of hyperparameters, but typically the decoder is some kind of flipped version of the encoder. A very common architectural pattern for these things
with convolutional models is the downsample-then-upsample design we saw, for example, in semantic segmentation: recall that segmentation architectures often downsample the input with convolutions and then upsample again with transposed convolutions. The encoder will often use strided convolution or pooling to downsample, and the decoder will use some kind of upsampling, like the second half of a semantic segmentation model. So the encoder and decoder typically mirror each other architecturally, although nothing in the formalism forces that to be true. To the question about one encoder with different decoders: that's another interesting idea — there are variants of autoencoders where you train encoders for different types of data that all go through a shared decoder, so you learn a shared latent space which can be produced by lots of different encoders on different types of data, though you need to train them jointly in a way that encourages them to learn the same space. And yes, the encoder and decoder are both neural networks, with whatever architecture you want — if we're modeling images, these will typically be fairly deep convolutional networks — but the critical component is the bottleneck between them. Okay: so autoencoders are a nice formalism that gives us the capacity to learn representations from a lot of data and transfer them to downstream tasks. But I should point out that even though they seem like an awesome mechanism for learning from unlabeled data, in practice I
think they haven't worked out as much as people would have liked because I I can't really there's really no state-of-the-art systems that rely on training autoencoders on large unsupervised data so it's a really beautiful idea but I think that the current formulations of it that we have now just sort of don't seem to actually live up to the dream of unsupervised learning so I think they're a really useful thing to know about but I think they're actually not right now a very practical approach to unsupervised feature learning um so that's a big caveat that I want to I want to point out about auto-encoders but now a big downside of auto-encoders is that they are not probabilistic right so there are an unsupervised learning model in that they can train they can learn this feature representation without any labeled samples without any labels for our data but there's no way that we can sort of sample new images from a trained auto encoder um the only thing that they could do is Jenner is a predict features for new images a test time there's no way that we can use them to generate new images so then that moves that then that moves us on to this idea of a variational auto encoder so a variational auto encoder is going to be a probabilistic sort of a probabilistic upgrade to these auto to these non variational auto-encoders so with variational encode autoencoders we want to do two different not be able to do two different types of things one is to learn latent features zi from raw data just as we did in the non variational flavor so we want to retain that capacity and but the second thing we want to be able to do is to samp be able to sample from the trained model after training just people to generate new data so now what we're going to do is assume that we have maybe some training data set with a bunch of unlabeled samples and we're going to assume that each each of the each of those samples in the training data set was generated from some latent vector Z that we cannot 
observe. This is kind of similar to what we saw in the non-variational autoencoder. What we want to do at test time, after training our variational autoencoder, is something like this: we write down some prior distribution over the latent variable z, and then at test time we sample a new latent variable from the prior distribution and feed that latent variable to some decoder model, which takes the latent variable z and predicts the image x. That looks kind of like the decoder we saw in the non-variational autoencoder; the difference is that now it's probabilistic. We have a prior probability distribution over the latent variables z, and the output from the decoder is not a single image; instead the output from the model is itself a distribution over images.

To handle the prior, we'll often assume some very simple form. It's very common to assume that the latent variable is a vector of dimension D, and that the prior distribution over z is just a standard unit diagonal Gaussian in D-dimensional space. So we typically assume a very simple prior over the latent variable z, something we can compute with very easily. Now for the second half: the decoder wants to input a latent variable and then output a probability distribution over images, and we'll model this with a neural network. But how the heck are we going to output a probability distribution from a neural network? That's a tricky thing we've never really seen before. The trick is that we're going to assume a parametric form for the probability distribution over images. In particular, we're going to assume that the probability distribution over images is a Gaussian distribution, with a number of dimensions equal to the number of pixels in the image, and we can parameterize that Gaussian using a mean value for each pixel as well as a standard-deviation value for each pixel. So what this neural network does is output a high-dimensional Gaussian distribution: it outputs a mean value for each pixel and a standard-deviation (or variance) value for each pixel, and we combine the predicted per-pixel means with the predicted per-pixel standard deviations to give us a high-dimensional Gaussian distribution, which is a distribution over images conditioned on a latent variable z. Is this construction clear?

Yeah, so for a general Gaussian distribution it would be a full covariance matrix over all the dimensions, but if you want to model 512 by 512 pixels, that means our covariance matrix would be 512 squared by 512 squared, and that's a thing we'd need to output from a neural network. The weight matrix that predicts that thing would be the size of the previous hidden layer times 512 squared times 512 squared, so the weight matrix would be absolutely astronomically large. As a simplifying assumption, we're not going to use a general Gaussian distribution; we're going to assume it's a diagonal Gaussian distribution. That assumes there's no covariance between the pixels, which means there's an underlying independence assumption here: conditioned on the latent variable z, the pixels of the generated image are conditionally independent. That's the independence assumption we're making when we write the distribution in this form.

Yeah, the question is that this seems like a pretty restrictive assumption to put on our images, that the pixels are conditionally independent given the latent variable. And I think you'll find that the images we tend to generate from variational autoencoders tend to be kind of blurry, and I think that's exactly why: it's a very strong assumption that we're putting on the model. The kind of caricature of variational autoencoders is that the math is very beautiful, and they're a very nice way to learn latent representations, but they tend not to generate the most beautiful images, although we'll see that we can actually combine variational autoencoders with other approaches to get high-quality images as well.

Okay, so now the question is how we actually train this variational autoencoder model. Our basic idea is that we want to maximize the likelihood of the training dataset. If, for example, we were able to observe the z for each x during training, then we could train this thing as a conditional generative model, and the training would be fairly straightforward: we could directly maximize the probability of each x conditioned on its observed z, and that would actually work quite well. But the problem is that we cannot observe z; if we could observe z, the problem would kind of be already solved. What we want is to train the model to discover this latent space for itself. So our approach is to try to write down the probability density function of x. One thing we can try is to marginalize out the unobserved z: we write down a joint distribution over x and z, and then, to get the density over just x, we integrate out that latent z. We can factor the joint distribution inside the integral, using the chain rule, into the conditional probability of x given z times the prior distribution over z. Now the terms inside the integral are friendly, because the probability of x given z is exactly what we can compute with our decoder, and the prior is something we've assumed has a nice functional form, like a Gaussian. But the integral is the part that kills us, because z lives in some high-dimensional vector space, and there's no tractable way we can actually compute this integral in any finite amount of time. So we'll need to come up with some other approach.

Another thing we can try is to go back to Bayes' rule, which gives us another way to write down the density over x. If we use Bayes' rule, we again see the same term pop up, the probability of x given z, and this is a friendly term because it's exactly what our decoder network is predicting; that's a term we like to see. We also have another friendly term, the prior distribution over z, which again we can easily compute because we assumed it has a nice functional form. But the term on the bottom is the one that really kills us, because it asks us to compute the probability of z conditioned on x, the probability of the latent variable conditioned on the image. That's kind of like the opposite of what our neural network is predicting, and if we wanted to compute that thing, we'd have to do some kind of intractable integral over all of z; there's just no tractable way we can actually compute that bottom term in Bayes' rule.

So what we're going to do is cheat a little bit, and train another neural network that tries to predict that bottom term for us. We're going to have another neural network q, parameterized by a different set of parameters phi, and this q_phi is going to be a neural network that inputs the image x and outputs a distribution over the latent variables z. This is a completely separate neural network with its own weights, but we want to train it in such a way that its output is approximately equal to this posterior distribution of the decoder, which we cannot tractably compute. This is the way we cheat inside a variational autoencoder: there's a term in Bayes' rule that we just can't compute, so instead we introduce another neural network that tries to compute this incomputable term for us. Once we have that, we can compute an approximate density over x, where we replace the intractable term in the denominator of Bayes' rule with the approximation coming from our auxiliary neural network. And this auxiliary neural network, by the way, inputs the image x and outputs a distribution over the latent variable z, so it is an encoder, because it inputs the image and outputs the latent variables. It has a structure very similar to what we saw already: it's a neural network that inputs the image x and uses the same diagonal-covariance trick to output a distribution over z conditioned on the input image x.

Okay, so what we're going to do is jointly train this encoder network and this decoder network in such a way that the encoder network will approximately equal the posterior of the decoder, which we cannot tractably compute. To make all of this happen, we need to do a little bit of math on this slide. Here we have Bayes' rule for the log probability of our data, breaking it up using Bayes' rule, and what we want to do is multiply the top and bottom by this new term that we introduced: we multiply the top and bottom by q_phi of z given x, where q_phi of z given x is the thing we can compute with the new network we've introduced. This is an equality, because we're multiplying top and bottom by the same thing. Now we use the magic of logarithms to break this equation up into three terms; if you do the math on your own, you can see how the different terms from the top match up and break into this sum of three terms. Okay, that's step one. Then we realize another probabilistic fact, which is that the log probability of x does not depend on z, and whenever you have a random variable that does not depend on another random variable, you can wrap the whole thing in an expectation over that other variable; that means we're taking an expectation of a thing which does not depend on the variable of the expectation. In particular, we're going to look at the expectation of the log probability of x, where the variable over which we take the expectation is z, distributed according to the distribution output from the new network. This seems like kind of a strange thing to do.
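For reference, the chain of manipulations being described can be written out in full. This is a reconstruction of the standard VAE derivation rather than a transcription of the slide, so the notation may differ slightly: q_phi(z|x) is the encoder's distribution, p_theta(x|z) is the decoder's, and p(z) is the prior.

```latex
\begin{aligned}
\log p_\theta(x)
  &= \mathbb{E}_{z \sim q_\phi(z|x)}\big[\log p_\theta(x)\big]
     \qquad \text{(the left side does not depend on } z\text{)} \\
  &= \mathbb{E}_{z}\!\left[\log \frac{p_\theta(x|z)\,p(z)}{p_\theta(z|x)}
     \cdot \frac{q_\phi(z|x)}{q_\phi(z|x)}\right]
     \qquad \text{(Bayes' rule; multiply top and bottom by } q_\phi(z|x)\text{)} \\
  &= \underbrace{\mathbb{E}_{z}\big[\log p_\theta(x|z)\big]}_{\text{reconstruction}}
   \;-\; \underbrace{D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p(z)\big)}_{\text{KL to prior}}
   \;+\; \underbrace{D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p_\theta(z|x)\big)}_{\ge\,0,\ \text{intractable}} \\
  &\ge \mathbb{E}_{z \sim q_\phi(z|x)}\big[\log p_\theta(x|z)\big]
   \;-\; D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p(z)\big).
\end{aligned}
```

Dropping the third term, which is nonnegative but intractable, gives the variational lower bound that the encoder and decoder are jointly trained to maximize.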
But it's mathematically true, because the thing inside the expectation does not depend on the variable outside. Now, we know that this log probability is actually equal to those three terms, so we can apply this expectation over z directly to the three terms, and that's exactly what we've done here: we've applied the expectation over z to these three logarithmic terms. Now we can put some interpretation on them. The first term is a type of data-reconstruction term; you'll have to take my word for that at this point. The second term is a KL divergence between the distribution output from the encoder network and the prior distribution over the latent variable z. So the first term, the data-reconstruction term, is something we can compute, and the second term we can also compute, because it's a sort of distance between the distribution output from the encoder network and the prior distribution. The last term is something we cannot compute, because it involves this really awful p_theta of z given x, the posterior of the decoder that we just cannot compute. But because this last term is a KL divergence between two probability distributions, we know it has to be greater than or equal to zero, since one of the properties of the KL divergence is that it's always greater than or equal to zero. When we combine all these facts together, we get this final equation at the bottom: a lower bound on our density that we can actually compute. On the left we have the actual true density of the data under the probabilistic model we've set up, and on the right is a lower bound to that density that involves a reconstruction term and a KL-divergence term, and both of those terms on the right are things we can actually compute using our encoder network and our decoder network. So the hope is that to train our variational autoencoder, we jointly train both the encoder network and the decoder network in a way that tries to maximize this lower bound.

This lower bound is a very standard probabilistic trick called a variational lower bound, and this trick of introducing an auxiliary network or auxiliary function to approximate an intractable posterior distribution is called variational inference. It's a very standard probabilistic trick that people used a lot in the days of probabilistic graphical models, which was the prevailing machine-learning paradigm before deep learning became popular. One kind of beautiful thing about these variational autoencoders is that they take this trick of variational inference, which was very popular for graphical models, and incorporate that cool mathematical trick into neural networks. So the idea is that we have these two neural networks, one the encoder and one the decoder, and together they can compute this lower bound on the probability. We can't actually compute the true probability of the data, but we can compute this lower bound, and as we modify the parameters, we're going to maximize the lower bound and learn the parameters of these two networks that maximize it. Then hopefully, in maximizing the lower bound, that will also push up the true density of the data that we observe.
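To make the two computable terms of that lower bound concrete, here is a minimal pure-Python sketch of the per-example bound when both the encoder and the decoder output diagonal Gaussians. The function names and the closed-form KL expression are my additions, not something from the lecture:

```python
import math

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ):
    #   0.5 * sum( exp(logvar) + mu^2 - 1 - logvar )
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def gaussian_log_likelihood(x, mu, logvar):
    # log N(x; mu, diag(exp(logvar))) summed over dimensions: the
    # "reconstruction" term when the decoder outputs a diagonal Gaussian.
    return sum(-0.5 * (math.log(2 * math.pi) + lv + (xi - m) ** 2 / math.exp(lv))
               for xi, m, lv in zip(x, mu, logvar))

def elbo(x, enc_mu, enc_logvar, dec_mu, dec_logvar):
    # Variational lower bound for one sample:
    #   E_q[ log p(x|z) ] - KL( q(z|x) || p(z) )
    # (In practice the first term is a Monte Carlo estimate using a z sampled
    # from the encoder; here we just take the decoder's output as given.)
    return (gaussian_log_likelihood(x, dec_mu, dec_logvar)
            - kl_to_standard_normal(enc_mu, enc_logvar))
```

When the encoder's output matches the prior exactly (mu = 0, logvar = 0), the KL term is zero and the bound reduces to the reconstruction term alone.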
I think we are about at time, so let's leave variational autoencoders there for today, and we'll pick up with exactly this mechanism, exactly how to train them, in the next lecture. So come back next time for generative models part two, when we'll go over the rest of variational autoencoders and also talk about generative adversarial networks. Okay, thank you.
Deep Learning for Computer Vision
Lecture 8: CNN Architectures
Okay, welcome back to lecture eight. Today we're going to talk about CNN architectures, and this is really getting into the details of convolutional neural networks; hopefully this will be pretty interesting. In the last lecture we left off talking about convolutional networks. In particular, we talked about the different building blocks that we can use to build up convolutional networks, and we saw that convolutional neural networks are just neural networks that are built up of convolution layers, pooling layers, fully connected layers, some activation function (probably ReLU), and some normalization layer (often batch normalization). But we were left with a big question: how do we actually combine these basic ingredients to hook up and make big, high-performing convolutional neural networks? Even once you've defined these operations, you have a lot of freedom in how you choose to stick them together and what the hyperparameters are going to be, and just knowing these basic ingredients is far from enough to know how to actually get good performance out of convolutional neural networks. So rather than leaving you totally in the dark, today we're going to cover a historical overview of many different types of deep convolutional neural network architectures that people have used over the past few years or so.

A good way to ground this discussion is in the ImageNet classification challenge. Remember, we talked about the ImageNet dataset in the first two lectures: it was a very large-scale dataset for image classification that had about 1.2 million training images, and classification networks had to recognize about 1,000 different categories in this 1.2-million-image training set. ImageNet was a huge benchmark for image classification because they held a yearly challenge from 2010 to 2017, where different teams would enter their best-performing image classification systems, and everyone around the world would compete against each other to try to build the best classification system. The ImageNet classification challenge really drove a lot of research and a lot of intense progress in convolutional neural network design over the past several years, so I thought it would be useful to ground this discussion by stepping through some of the highest-performing winners in the different years of the ImageNet competition.

As we've already seen, in 2010 and 2011, the first two years the competition was run, the winning systems were not neural-network-based at all; they were compositions of multiple layers of hand-designed features together with some linear classifier on top. But 2012, as you should probably remember by now, was the year that convolutional neural networks first became a huge mainstream topic in computer vision research, when the AlexNet architecture just crushed all the other competition on the ImageNet challenge.

So what did AlexNet actually look like? AlexNet was a deep convolutional neural network; by today's standards I think it actually wouldn't be considered that deep, as we'll see as we go on through the lecture. AlexNet accepted 227 by 227 pixel inputs, it had five convolutional layers, it used max pooling throughout, and it had three fully connected layers that followed the convolutional layers. It used ReLU nonlinearities throughout, and in fact AlexNet was one of the first major convolutional neural networks that used ReLU nonlinearities. There are a couple of other quirks and features of the AlexNet architecture that are not so much used anymore. One is that it had this funny layer called local response normalization, which has not really been used much since, so we won't talk about it in detail; it was a different type of normalization, maybe a very early precursor to something like batch norm, but nowadays we prefer to use batch normalization instead.

Another kind of quirky bit about AlexNet is that when Alex Krizhevsky and his collaborators were working on this network back in 2011 and 2012, it was trained on graphics cards (GPUs), and the biggest GPU they had at the time was a GTX 580, which had only three gigabytes of memory. If you look at the GPUs you are using on Colab today, they have something like twelve or sixteen gigabytes of memory, so back in 2011 the GPUs available just didn't have very much memory. In order to make this neural network fit into GPU memory, it was actually distributed across two physical GTX 580 cards, in kind of a complicated scheme where some of the network ran on one card and some of the network ran on the other. This was an implementation detail that was required in order to fit the network onto the GPU hardware available at that time. This idea of splitting neural networks across GPUs is still sometimes used today, but in general it's not a very common thing to see with most of the networks we'll look at in this lecture.

And of course, at the top of the slide here is the very, very famous figure from the AlexNet paper that shows the convolutional neural network design of AlexNet. You can see that it has these five convolutional layers, and it's split into two chunks at the top and the bottom, to fit onto the two GPUs it was distributed across. But one kind of funny thing about this figure is that it's actually clipped at the top, and if you look at the paper itself, even in the AlexNet paper itself, this figure was clipped at the top. So even though this is a very important paper, everyone is now stuck looking at this clipped figure, because that's the version of the figure that actually was published in the paper.

I'd also like to point out, just as a historical note, that it's hard to overstate just how influential this paper has been. If you look at the number of citations this paper has gotten per year since it was published in 2012, it has already gotten something like 46,000 citations, and the citation trend seems to be still growing exponentially. This is certainly one of the most highly cited papers, not just in computer science but, I think, across all disciplines and all areas of science in the last few years. To put this into context, it's interesting to compare these citations with some other famous scientific papers throughout history. For example, Darwin's Origin of Species, back in 1859, has something like a similar number of citations as AlexNet does today. Claude Shannon's "A Mathematical Theory of Communication," which invented the field of information theory and was published in 1948, has something like 69,000 citations. And if we look at contemporary research, there was another extremely important piece of scientific research published in 2012, which was the experimental discovery of the Higgs boson particle at the Large Hadron Collider. This was published the same year as the AlexNet paper, and it's a fundamentally important advance in basic science, observing a new fundamental particle in the universe, and it has only 14,000 citations compared to AlexNet's 46,000. Now, I need to caveat here that looking at citation counts for papers is a very coarse measure of their impact, and it's really unfair to compare citation counts across time and across different disciplines. But that said, I think it's pretty clear to many people that this AlexNet paper and this AlexNet architecture represent an important advance not just
with in computer vision or computers or computer science but really across all of human knowledge as a whole hopefully that's not overstating up too much but with that with that historical context in mind what actually is what does the Alex net architecture actually look like well this so Alex net starts off with an input image of to 27 by 227 pixels and has works on RGB images so it has three input channels the first convolutional layer has so this is by the way should be a bit of recap from the convolution layer that we talked about in the previous lecture but the first convolution layer in alex net has 64 filters a kernel size of 11 by 11 astride of four and a pad for so given those settings for this first convolutional layer what is the number of channels in the output of that first convolutional layer yeah it should be 64 because recall that for a convolution layer the number of channels is always equal to the number of filters now the next question is what is the what is the output size here in this table at clacks collapsed height and width into one column because that work everything is squared out this retexture anyone want anyone take one thing I guess at the output spatial size of this convolution layer yeah 56 so if you remember this formula from the slide on the last lecture we know that the output size is equal to the input size minus the kernel size plus 2 times the padding divided by the stride plus 1 if you plug in those numbers you see that we get 56 for the output of this first layer now another question how much memory would this output feature map to consume in kilobytes well I don't I don't know if it's reasonable to do this multiplication in your head but the number of elements in that output tensor is going to be C by output size by output size so the number of elements in that output tensor is something like 200,000 and we typically store these elements in 32-bit floating-point so each element takes 4 bytes of memory so multiply that by 4 
and divide out and you see that this layer takes about 700 takes about seven hundred eighty four kilobytes of memory to store the output of this layer now the next question how many parameters are how many learn about parameters are in this layer of a network well for this one we remember this what is the shape of a wait for a convolutional layer and we remember that the shape of the wage for a convolutional layer is a four dimensional tensor of size output channels by input channels by kernel size by kernel size so output channels of 64 input channels is three kernel sizes 11 plus there's just learn of a bias which is a vector of the same number of channel width of the number of output channels so the total number of learn about ways here is about 23,000 next how many floating-point operations does it take to compute this this convolution layer well again I think it's maybe tricky to this multiplication in your head but in order to compute this we so by the way this this idea of floating-point operations and counting floating-point operations for layers in a neural network will be in a very important topic throughout today's lecture so first off when we talk about floating-point operations in a neural network we usually count the number of multiply ads where a multiplied together with an ad counts as one floating-point operation for the purpose of counting operations in a neural network this is because many many actual bits of computing hardware can perform a floating point multiplication and an accumulation in a single cycle so we tend to account multiply and an ADD as a single operation now to count the number of operations that it costs to perform this convolution layer we think how many output elements are there in the tensor and that's see out by output size by oversize and how many operations does it take to compute each element of that output tensor well recall that each element of that output tensor is computed by taking the convolutional filter and 
slapping it inside the input dimension some at some time somewhere so each element of the output tensor results is the result of computing an inner product between a convolutional filter which has size CN by K by K and another chunk of the input which has size CN by K by K and and a dot product taking a dot product of two vectors with n elements takes n multiplies and adds once you count bias term so when you multiply all that out you see that this first convolutional layer takes something like 73 mega flops in order to compute the convolution of this first layer so now the second layer in Alec's net is a pooling layer immediately following oh I mean this actually goes there's a visiray Lu so I'm sort of omitting the reimu's from many of the many of the architectures in this lecture because it's always assumed it'll be summer a Lu or some non-linearity immediately following the convolution layer so immediately after the Ray Lou and the first convolution Alec's net has its first pooling layer and the pooling layer here for Alex the first pooling layer has a kernel size of three a stride of two and a pad of one so given those parameters what should the output shape of this first layer in Alec's net be well the number of channel dimensions is the same because recall that pooling layers operate independently on each input channel so cooling layers don't change the number of channels and here this pooling layer has the effect of down sampling the input spatially by a size by a factor of two Alex that's kind of a funny architecture and all the numbers don't actually divide evenly in alex net which is a little bit annoying so here we have to actually after we do have to be divided by the stride we also have to round down to get the output spatial size of 27 by 27 how much memory does the output of the pooling layer take and we see that we have the same procedure of four bytes per element multiplied by the number of elements in the tensor gives us the amount of memory 
Next: how many learnable parameters are in this pooling layer? Zero, because recall that pooling layers have no learnable parameters; they simply take a max over their receptive field. Then, how many floating-point operations does it take to compute this pooling layer? Again it's difficult to do the multiplication in your head, but we have the same way of thinking about it: how many elements are in the output tensor — number of output channels by output size by output size — and how many floating-point operations does it take to compute one element of that output tensor? Recall that each element of the output tensor is the result of taking a max over the receptive field within one channel, so we have to take the max of a 3 by 3 grid of elements. You can imagine that finding the maximum of nine elements takes approximately nine floating-point operations — maybe eight — but for simplicity we'll just say it's equal to the kernel size squared. If you multiply this out, you see that this max pooling layer takes only about 0.4 megaflops, which you should notice is very, very small compared to the convolution layer. This is a fairly general trend in convolutional neural networks: the convolution layers tend to cost a lot of compute — a lot of floating-point operations — whereas max pooling layers, or other types of pooling layers, cost very few floating-point operations. So much so that sometimes, when people write papers and count the operations in a neural network, they won't even count the max pooling layers, because the number of operations there is so small compared to the convolution layers. Now, AlexNet has five more convolution layers, and I'm not going to walk through this exact procedure for each one of them.
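The pooling bookkeeping above can be sketched the same way. One caveat: this sketch assumes pad 0 for pool1, which is what reproduces the 27 by 27 output quoted above (the floor division is the "round down" step the lecture mentions):

```python
def pool2d_stats(c, h_in, w_in, k, stride, pad=0):
    """Output shape, fp32 memory (bytes), and op count for a max pooling layer."""
    h_out = (h_in - k + 2 * pad) // stride + 1   # floor division = round down
    w_out = (w_in - k + 2 * pad) // stride + 1
    elems = c * h_out * w_out
    memory_bytes = 4 * elems                     # fp32 output, no learnable params
    flops = elems * k * k                        # ~K*K comparisons per output max
    return (c, h_out, w_out), memory_bytes, flops

# AlexNet pool1 applied to the 64 x 56 x 56 conv1 output: kernel 3, stride 2
shape, mem, flops = pool2d_stats(64, 56, 56, 3, 2)
print(shape)          # (64, 27, 27)
print(flops / 1e6)    # ~0.42 MFLOPs -- tiny next to conv1's ~73 MFLOPs
```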
But we can similarly compute the output size, the amount of memory, the number of parameters, and the number of flops for each of those five convolution layers in AlexNet. Interspersed with these convolution layers are more pooling layers, and by the time we finish with all the convolution layers and all the pooling layers, AlexNet is left with an output tensor with 256 channels and a 6 by 6 spatial size. After all of these convolution layers terminate, we have a flattening operation that destroys all the spatial structure in that tensor and just flattens it out into a vector. This flatten layer has no parameters and no flops. After the flattening operation, we have our first fully-connected layer, with 4096 hidden units; again we can compute the memory, the parameters, and the flops for this first fully-connected layer. After it, we've got two more fully-connected layers: one more with 4096 hidden units, and the final FC8 layer with 1000 units, to produce the scores for our 1000 categories. So this is the AlexNet architecture, and the question is: how was this designed? The unfortunate reality behind AlexNet, I think, is that the exact configuration of these convolution layers was really a lot of trial and error. The exact settings are somewhat mysterious, but they seem to work well in practice. As we'll see moving forward, people wanted to find principles for designing neural networks that let them scale up and scale down, and that didn't rely so much on extensive trial and error in tuning the filter sizes and strides of every individual layer. But for AlexNet, I think the best answer is that these settings really were trial and error. Now, if we look over at these last three columns — the memory, the parameters, and the flops — marching down through the network, then we
start to see some very interesting trends that hold true not just in AlexNet, but across a lot of different convolutional neural network architectures. We've already pointed out one trend: all of the pooling layers take so few floating-point operations that they all round down to zero, so we can effectively discount the pooling layers when counting the number of operations in a network. We can redraw that exact same data — the amount of memory, the number of parameters, and the number of flops for each layer of this network — as bar charts, and here we see a couple of very interesting trends. If we look at the chart on the left, it shows the amount of memory used by the outputs of the five convolutional layers and the outputs of the three fully-connected layers, and we see this very clear trend: the vast majority of the memory usage in AlexNet actually comes from storing the activations at the early convolutional layers. This happens because at those early convolutional layers, the outputs have a relatively high spatial resolution and a relatively large number of filters, so when you multiply that out, most of the memory usage happens in the first couple of layers of the network. If we look at the middle plot, which shows the number of parameters in each layer, it shows the opposite trend: the convolutional layers have very few parameters, whereas the fully-connected layers take a very large number of parameters, and the layer with the single largest number of parameters is the very first fully-connected layer, which happens right after the flattening operation. Because, if you think about what happens in the FC6 layer, we had this spatial tensor of six by six by — what was it,
six by six by 256 — and that gets fully connected into 4096 hidden dimensions, so the weight matrix is now six times six times 256 by 4096. That one weight matrix of FC6 has something like 37 million — almost 38 million — parameters, in just that one fully-connected layer of the network. In fact, basically all of the learnable parameters in AlexNet come from these fully-connected layers. Whereas if you look at the amount of computation each layer costs, you see yet another trend: the fully-connected layers take very little computation, because they're just multiplying by a single matrix, whereas the vast majority of the computation in this network comes from the convolutional layers — and the layers that take a lot of computation are the ones with convolutions with large numbers of filters at high spatial resolutions. This is quite a general trend across many different neural network designs, not just AlexNet: you'll have most of the memory usage in the early convolutional layers, most of the parameters in the fully-connected layers, and most of the computation in the convolutional layers. These trends are interesting to keep in mind as we move on to later architectures that try to be more efficient by addressing some of them. So that's our brief overview of the AlexNet architecture — that's what happened in 2012. What happened in 2013? Well, in 2013 pretty much all of the entrants to the competition switched over to using neural networks, and the winner was also an eight-layer network, called ZFNet after its authors, Matt Zeiler and Rob Fergus. ZFNet is basically a bigger AlexNet. I told you that AlexNet was essentially produced via trial and error; well, ZFNet is more trial and less error. Basically, ZFNet is the same basic idea as AlexNet, except they tweaked some of the layer configurations.
In particular, in the first convolutional layer, AlexNet had 11 by 11 with stride 4; it turns out it works better if you use 7 by 7 with stride 2 — who'd have thunk? And for the later convolutional layers — layers 3, 4, and 5 — instead of using 384, 384, and 256 filters like AlexNet, they increased the number of filters to 512, 1024, and 512, and — who knew — this also tends to work better. To be a little less facetious, I think the takeaway from ZFNet is that it's just a bigger version of AlexNet. Look at that first convolutional layer: with 11 by 11 stride 4, AlexNet aggressively downsamples the input in space at the very first layer, immediately reducing the spatial size by a factor of 4, whereas ZFNet's first convolutional layer only downsamples by a factor of 2. That means all the other feature maps moving through ZFNet now have a higher spatial resolution, and higher spatial resolution means more receptive fields, which means more compute — so ZFNet actually costs a lot more computation than AlexNet. And for the later convolutional layers, increasing the number of filters also just makes the network bigger: more learnable parameters, more memory, more compute. So I think the takeaway from AlexNet to ZFNet is that bigger networks tend to work better. But at this point in time there was not really a principled mechanism for making networks bigger or smaller at will; instead, they had to reach into individual layers and tune their parameters one at a time to make the network bigger. In doing so, though, they achieved a fairly large increase in performance over AlexNet: the error rate on the ImageNet challenge dropped from 16.4 down to 11.7 with ZFNet. Now, 2014 was when
things started to get very, very interesting, and 2014 brought the so-called VGG architecture from Karen Simonyan and Andrew Zisserman. VGG was really one of the first architectures to have a principled design throughout. We saw that AlexNet and ZFNet were designed in a somewhat ad hoc way: there were some number of convolution layers and some number of pooling layers, but the exact configuration of each layer was set independently, by hand, through trial and error — and that makes it very hard to scale networks up or down. So starting in 2014, people began to move away from these hand-designed, bespoke convolutional architectures and toward architectures with design principles that guide the overall configuration of the network. The VGG networks followed a couple of very clean and simple design rules: all convolution layers are 3 by 3 with stride 1 and pad 1; all pooling layers are 2 by 2 max pooling with stride 2; and after every max pooling layer, we double the number of channels. Then we have some number of convolution layers and, eventually, some fully-connected layers — and the number of hidden units in the fully-connected layers was the same as in AlexNet. With these simple design rules, you no longer have to think so hard about the exact configuration of each layer in your network. This network had five convolutional stages — remember that AlexNet had five convolutional layers; VGG pushed that forward and moved to deeper networks, where rather than five individual convolutional layers, we have five stages, each consisting of a couple of convolution layers followed by a pooling layer. So the VGG architecture is conv-conv-pool, conv-conv-pool, and so on, for however many
stages you're going to have. There were several different VGG architectures tested, but the most popular were the 16-layer and 19-layer variants, which had two convolutional layers in each of the first two stages and either three or four convolutional layers in each of the later stages. That's pretty much all you need to know to build a VGG network, but it's useful to think about why people chose these particular design principles. First, let's think about why it makes sense to have only 3 by 3 convolutions in your network. You saw in AlexNet and ZFNet that the size of the convolutional kernel at each layer was a hyperparameter, and people played around with different kernel sizes at different layers. So let's think about two different options we could choose between. As one alternative, imagine a convolutional layer with 5 by 5 kernels that takes C channels of input and produces C channels of output, operating on an input of spatial size H by W; assume the padding and stride are chosen so the output has the same spatial size as the input. This convolutional layer has 25C² parameters, because we've got C convolutional filters, each with 5 by 5 by C = 25C learnable parameters, ignoring the bias. And the number of floating-point operations to compute this layer is 25C²HW, because the number of outputs from the layer is H by W by C, and the cost of computing each output is 5 by 5 by C. Now let's contrast this with a stack of two convolutional layers, each with kernel size 3 by 3, that also
takes C channels as input and produces C channels as output. As we remember from our discussion of receptive fields in the previous lecture, if we stack two 3 by 3 convolutions, the stack has an effective receptive field of 5 by 5. So in terms of how much of the input it can see, this pair of 3 by 3 convolutional layers is equivalent to a single 5 by 5 convolutional layer. But if we compute the number of parameters, each of these two convolutional layers has 9C² parameters, so the total for the stack of two 3 by 3 convs is 18C². Similarly, the number of floating-point operations for the stack of two layers is only 18C²HW: the output of each layer is C by H by W, the cost of computing each output element is 3 by 3 by C = 9C, and we've got two layers, so the overall cost of the stack is 18C²HW. And now we see something interesting: even though these two options have the same receptive field size, the stack of two 3 by 3 convolutions has fewer learnable parameters and costs less computation. So the insight of the VGG network is that maybe there's no reason to have larger filter sizes at all, because any time you wanted a 5 by 5 filter, you could instead replace it with two 3 by 3 filters; and by a similar argument, rather than a single 7 by 7 filter, you could use a stack of three 3 by 3 convolution layers. With that in mind, we can throw away the kernel size as a hyperparameter, and the only thing left to worry about is how many of these 3 by 3 conv layers to stack within each stage.
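The arithmetic in this comparison is easy to check numerically. Here's a sketch (the helper name and example sizes are mine), ignoring biases:

```python
def stacked_conv_cost(k, c, h, w, layers=1):
    """Params and multiply-adds for `layers` stacked KxK convs with C channels
    in and out, stride/pad chosen so the H x W spatial size is preserved."""
    params = layers * k * k * c * c
    flops = params * h * w          # C*H*W outputs per layer, K*K*C ops each
    return params, flops

C, H, W = 64, 56, 56
p5, f5 = stacked_conv_cost(5, C, H, W, layers=1)  # one 5x5 conv:  25*C^2 params
p3, f3 = stacked_conv_cost(3, C, H, W, layers=2)  # two 3x3 convs: 18*C^2 params
print(p5, p3)    # same 5x5 receptive field, but 25C^2 vs 18C^2 parameters
print(f5 / f3)   # 25/18 ≈ 1.39x more compute for the single 5x5 layer
```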
Oh, right — the other piece about this is that if we stack two 3 by 3 convolutional layers one after another, we can insert ReLUs in between them, which gives us more depth and more nonlinear computation compared to a single 5 by 5 convolution. So the stack of two 3 by 3 convolutions not only has fewer parameters and fewer flops, it also allows more nonlinear computation — it just seems like a clear win over a single 5 by 5 convolutional layer. That's the idea behind the first design rule. Now let's think about the second design rule in VGG: all pooling layers are 2 by 2 max pooling with stride 2 and pad 0, which means every pooling layer halves the spatial resolution of the input feature map, and every time after we pool, we double the number of channels. Let's think about what happens between two stages when we follow these rules. Consider the computation at stage one: one of the layers inside stage one has an input of size C by 2H by 2W, and the layer is a 3 by 3 convolution with C input channels and C output channels. If you multiply this out, the amount of memory consumed by the output tensor is 4HWC — the number of elements in the output tensor after this convolution. The number of learnable parameters is 9C², excluding the bias, and the number of floating-point operations is 4HWC² — actually, that doesn't seem right; I think that's an error, but the same error is propagated to the other column, so the argument still holds. If they're both off by the same constant, it doesn't matter; I'll fix this after the lecture. Now, after we move to the next stage, the number of channels is doubled and the spatial resolution is halved. When this happens, we can see that
the memory is reduced by a factor of two, and the number of parameters increases by a factor of four, but the number of floating-point operations stays the same. (Here's the error: these two floating-point-operation counts are, I think, both off by a factor of nine, but since they're both off by the same factor of nine, it's still true that these two layers in subsequent stages cost the same number of floating-point operations.) This design principle has been followed by many, many convolutional architectures after VGG. The basic idea is that we want each convolutional layer to cost the same number of floating-point operations, and we can achieve that by halving the spatial size and doubling the number of channels at the end of each convolutional stage. Another thing to point out is that we can compare AlexNet and VGG-16 side by side. Remember that AlexNet had five convolutional layers and three fully-connected layers, and the VGG networks also have five convolutional stages and three fully-connected layers. Now we can draw the same plot of memory, parameters, and floating-point operations to compare AlexNet and VGG stage by stage, and the overwhelming result from these graphs is that VGG is just a gigantic network compared to AlexNet. You can't even see the AlexNet bars on these graphs — VGG dwarfs AlexNet along all of these axes of computation. It takes dramatically more memory: if you look at the total memory consumed by storing the activations of all the outputs, VGG is something like 25 times greater. If you look at the total number of learnable parameters, AlexNet had about 61 million; VGG-16 has 138 million, more than twice as many.
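The halve-the-space, double-the-channels rule from a moment ago is also easy to verify numerically. A sketch, with hypothetical sizes for one layer in each of two adjacent stages (and with the factor of nine included, so these are the corrected flop counts):

```python
def stage_layer_cost(c, h, w, k=3):
    """Output elements, params, and multiply-adds for a KxK conv, C -> C
    channels, on a C x H x W input with spatial size preserved (bias ignored)."""
    memory = c * h * w           # elements in the output tensor
    params = k * k * c * c
    flops = params * h * w       # (C*H*W outputs) * (K*K*C ops each)
    return memory, params, flops

# One layer in some stage vs. one in the next: halve space, double channels
m1, p1, f1 = stage_layer_cost(64, 56, 56)
m2, p2, f2 = stage_layer_cost(128, 28, 28)
print(m1 / m2)   # 2.0  -- memory halves
print(p2 / p1)   # 4.0  -- parameters quadruple
print(f1 == f2)  # True -- floating-point cost stays exactly the same
```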
And the real killer is what these two networks cost in compute: if you add up the total number of floating-point operations it takes to compute a single forward pass, VGG-16 is more than 19 times more expensive than AlexNet. So VGG-16 is just an absolutely massive network, and again we get the same story we saw moving from AlexNet to ZFNet: bigger networks tend to achieve better results on the large-scale ImageNet challenge. But now VGG gives us a couple of guiding design principles that let us easily scale the network up or down, and we no longer have to fiddle with the individual hyperparameters of every layer. Now, 2014 was such an interesting year that there are actually two convolutional neural network architectures from that year's challenge that we need to talk about. One was VGG — and I should point out another amazing thing about the VGG architecture: it was done in academia, by one grad student and one faculty member, which was quite a heroic effort on their part. The other network we need to talk about from 2014 was from Google — a very large team with access to a very large amount of computation. So I think it's quite a testament to the VGG team that, even though they didn't win the ImageNet classification challenge that year, they really held their own against this corporate team with many more resources. Oh, sorry — was there a question? Yes: the question was whether VGG was also split across multiple GPUs. Starting around that time — and I think we'll talk about this more later — it was split across multiple GPUs, but using data parallelism: you take the batch of data, split the batch, and compute different
elements of the batch on different GPUs. So you no longer split the model across GPUs; instead you split your mini-batch across GPUs, which is much simpler to implement than the model parallelism that was used in AlexNet. Another question — yes, that's a great question: what does VGG stand for? VGG stands for the Visual Geometry Group, which is the name of the academic lab this network came out of. The name of the lab is now intrinsically linked to this particular convolutional network design; I'm not sure if that's good or bad, but that's what it stands for. So the other architecture we need to talk about from 2014 is GoogLeNet. It's a cute name: remember that one of the earliest convolutional neural network architectures was LeNet, created by Yann LeCun, and in homage to LeNet and Yann LeCun, this Google team decided to name their network GoogLeNet. Very cute. The overriding idea behind GoogLeNet was a focus on efficiency. If you look at the trend from AlexNet to ZFNet to VGG, the trend is that bigger networks perform better — but with GoogLeNet, the team was really focused on designing an efficient convolutional network, because Google actually wants to run these things for real, in data centers and on mobile phones, and they save a lot of money if they can get the same performance from a cheaper network design. So they were really focused on building a network that worked very well while also minimizing the overall complexity of the network. They had a couple of innovations, popularized by the GoogLeNet architecture, that were carried forward into a lot of later network architectures. One is the use of a stem network in the first few convolutional layers, which very aggressively downsamples the input image, in order
to reduce the spatial resolution of the input very quickly. As you'll recall, in VGG or AlexNet the really expensive layers were the big convolutions on feature maps of large spatial size; to avoid those expensive convolutions on large spatial feature maps, GoogLeNet uses a lightweight stem that quickly downsamples the input. I'm not going to walk through the stem design in detail, but you can see that it very quickly downsamples from the input spatial resolution of 224 by 224 all the way down to 28 by 28, using only a couple of layers, so that the bulk of the computation happens at this lower spatial resolution and you no longer have to do expensive convolutions at high spatial resolution. We can compare this to VGG-16: the part of GoogLeNet that downsamples from 224 down to 28 costs about 418 megaflops, while the equivalent spatial downsampling in VGG-16, from 224 down to 28, costs more than 7 gigaflops. So the same amount of spatial downsampling was nearly 18 times as expensive in VGG as in GoogLeNet. The other innovation in GoogLeNet is the so-called inception module — they were very clever, because they got to say "we need to go deeper," hence Inception. The idea is that they had a little module, called the inception module, a local structure that is repeated throughout the entire network. Just as VGG used a simple repeated structure of conv-conv-pool, GoogLeNet used this inception module design, repeated many times throughout the entire network. The inception module is kind of funny: it also
introduced this idea of parallel branches of computation. Remember that in VGG, the convolutional kernel size was a hyperparameter we wanted to eliminate, and VGG took the approach of replacing kernels of any size with a stack of 3 by 3 convolutions. GoogLeNet took a different approach to eliminating the kernel size as a hyperparameter: just do all the kernel sizes, all the time. Inside the inception module there are four parallel branches: one does a 1 by 1 convolution, one does a 3 by 3 convolution, one does a 5 by 5 convolution, and one does max pooling with a stride of 1. So every one of these modules does all the things — there's no need to tune the kernel size as a hyperparameter, because you've got all the kernel sizes in all the places. The other bit of innovation in the inception module is the use of 1 by 1 convolutions before the expensive spatial convolutions, used to reduce the number of channels before doing those expensive spatial convolutions. We'll revisit this idea of 1 by 1 convolutional bottlenecks when we talk about residual networks in a few minutes, so I won't go into detail here. The other innovation in GoogLeNet is the use of global average pooling at the very end of the network. If you remember back to VGG and AlexNet, we saw that the vast majority of the parameters were coming from those giant fully-connected layers at the very end of the network. Since one of the ways to improve efficiency is to reduce the number of parameters in the network, GoogLeNet simply eliminates those large fully-connected layers. Remember, in AlexNet and VGG, at the end of the
convolution layers we had a flatten operation that destroyed spatial information by flattening the convolutional tensor into a giant vector. GoogLeNet uses a different strategy for destroying spatial information: rather than flattening the tensor, it uses average pooling with a kernel size equal to the final spatial size of the last convolutional layer. In particular, at the end of the last inception module in GoogLeNet, the output tensor has a spatial size of 7 by 7 with 1024 feature maps. They then apply average pooling with a kernel size of 7 by 7 — the stride doesn't matter, because the kernel only fits in one place. What that means is that within each of those 1024 channels, they take the average of the values across all spatial positions of the input tensor. This also destroys spatial information, but rather than flattening into a giant vector, it actually reduces the total number of elements, ending up with a compact vector of only 1024 elements. So there's only one fully-connected layer in GoogLeNet, which goes from the 1024-dimensional output of global average pooling to 1000 outputs — where, again, 1000 is the number of categories in the ImageNet dataset. So GoogLeNet is able to eliminate a huge number of learnable parameters by simply removing the fully-connected layers and replacing them with global average pooling, and this is something that got picked up by a lot of convolutional neural networks that followed GoogLeNet. We can compare this side by side with VGG and see how profoundly it affects the number of parameters in the last couple of layers.
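Global average pooling itself is a tiny operation. Here's a plain-Python sketch of the idea (a real implementation would be something like a mean over a tensor's spatial dimensions), plus the parameter arithmetic that makes it attractive:

```python
def global_avg_pool(tensor):
    """Collapse a [C][H][W] nested-list tensor to a length-C vector by
    averaging each channel over all of its spatial positions."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in tensor]

# Toy 2-channel stand-in for GoogLeNet's 1024 x 7 x 7 final feature map
x = [[[1.0] * 7 for _ in range(7)], [[3.0] * 7 for _ in range(7)]]
print(global_avg_pool(x))   # [1.0, 3.0] -- one number per channel

# Why it matters: the final classifier's weight-matrix size with and without it
print(7 * 7 * 1024 * 1000)  # flatten then FC: ~50.2M weights
print(1024 * 1000)          # global average pool then FC: ~1.0M weights
```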
Another piece of awkwardness in GoogLeNet is that they had to rely on this idea of auxiliary classifiers. One thing to point out is that GoogLeNet actually came before batch normalization, and before the discovery of batch normalization, it was very difficult to train networks with more than about ten layers. Whenever people wanted to train networks deeper than that without batch normalization, they had to resort to some ugly hacks, and one of the ugly hacks used in GoogLeNet is the auxiliary classifier. At the end of GoogLeNet there's a global average pooling and a fully-connected layer that produces class scores; but they also attached auxiliary global average pooling and fully-connected layers at several internal points in the network. So this thing actually outputs three different sets of class scores: one from the end of the network, and two from intermediate parts of the network. For those intermediate classifiers they also compute a loss and propagate gradients back through all three classifiers. This has the effect of making gradients propagate more easily through the network: if you think about what happens in the backward pass, they inject gradient at the very top of the network at the final classifier, and they also inject gradient directly at these two auxiliary classifiers. This was a trick they used to get deep networks to converge at that time. Yes — that was very astute. The question: I said that before batch normalization, training networks of more than ten layers was very tricky and people had to hack it, and you're right, VGG also hacked it in some way. What they did was first train a shallow VGG network to convergence — something like the 11-layer variant — and then insert new layers in between the already-trained layers of the 11-layer VGG and
continue training from that point. You can imagine that there are some definite optimization tricks needed to get things to converge after you stick new layers into the middle of an already-trained network, so that was a bit of hairiness that VGG had to use in order to get their networks to converge. You can see that 2014 was kind of a dark time for neural network practitioners: we had to resort to all these crazy hacks to get networks to converge once they got beyond a certain depth. Thankfully, things changed in 2015. One of the important things that happened between 2014 and 2015 was the discovery of batch normalization; once it was discovered, people found they were able to train VGG and GoogLeNet from scratch without any of these tricks, just by using batch normalization instead. But then there was an extremely important innovation in neural network architecture design in the 2015 iteration of the ImageNet challenge: residual networks, or ResNets. Here you can see something amazing happen: the number of layers jumped in one year from 22 all the way up to 152. This was a very important innovation in the history of neural network architecture design. You can also see that the error dropped dramatically again, from 6.7 almost in half, down to 3.6. So ResNets were a very important moment in neural network architecture design. As we mentioned, once batch normalization had been discovered, people realized they could train fairly deep networks, even with dozens of layers. So then the question is: what happens if we just keep stacking layers and try to train very, very deep networks? Here is kind of a cartoon picture of the types of plots people saw at the time: a training curve where the x-axis is the number of training iterations and the y-axis is
the test error, and we're comparing a 56-layer model and a 20-layer model. Something very strange happens: we see that the 56-layer model actually performs worse than the 20-layer model. This is surprising, because the trend up to this point in time had been that bigger neural networks tended to work better, so it was very surprising to suddenly see bigger, deeper networks performing worse once you got past a certain point. The initial guess about what was going on was that maybe these networks had started overfitting: maybe once you got to a 56-layer network with batch normalization, it was just such a large network that it was overfitting the ImageNet training set. In order to test this hypothesis, we can look at the training performance of these same networks, and if you look at the performance on the training set of the same 20-layer and 56-layer networks, you see that the network was not overfitting; in fact, the 56-layer network was somehow underfitting the training set. Somehow there's a problem in optimization: even once we have batch normalization, once you get to a certain depth we were no longer able to efficiently optimize very deep networks. This is a problem, and it is also surprising, because we should expect that a deeper model has the capacity to emulate a shallower model. What do I mean by that? You could imagine that a 56-layer network could emulate a 20-layer network: we could copy all of the 20 layers into the 56-layer network, and have all of the remaining layers learn the identity function. So in principle, if our optimizers were working properly, a deeper network should always have the capacity to represent the same functions as a shallower network. So if we are actually underfitting, then it means
that we have an optimization problem: somehow these deeper networks are not able to efficiently learn these identity functions in order to emulate shallower networks. So the solution is to change the design of the network to make it easier to learn identity functions on unused layers; this should make it easier for deeper networks to emulate shallower networks in case they have more layers than they actually needed. Here is the design change proposed by residual networks. Previously, on the left, we have the plain convolutional block that we had seen in VGG: a stack of two consecutive convolutional layers, maybe with a ReLU in between, and maybe some batch normalization as well. Residual networks propose the residual block design on the right, which still has a stack of two convolutional layers, but now at the end we take our input x and add it to the output of the second convolutional layer. This means the overall block computes the function F(x) + x, where F(x) is the output of the basic block inside, and around it there is an additive residual shortcut. The idea behind this is that the layer can now very easily learn the identity function: if we set the weights of those two convolutional layers to zero, the block computes the identity, and this should make it easier for deep networks to emulate shallower networks. This should also help improve gradient flow in very deep networks. Remember what happens in the backward pass of an add gate: as we backpropagate through an add, it copies the gradient to both of its inputs. So when we backpropagate through this residual block, the gradient is copied along the shortcut, skipping past the convolutions.
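To make the identity-shortcut idea concrete, here is a tiny sketch (my own illustration, not code from the lecture) that uses plain matrix multiplies in place of the block's two convolutions: with the branch weights set to zero, the residual block computes exactly the identity, which is what lets extra layers in a deep ResNet do nothing for free.

```python
# Toy residual block: F(x) = ReLU(x @ w1) @ w2, output = F(x) + x.
# Matrix multiplies stand in for the block's two convolutions.
import numpy as np

def residual_block(x, w1, w2):
    f = np.maximum(x @ w1, 0.0) @ w2  # the residual branch F(x)
    return f + x                      # additive shortcut

x = np.array([1.0, -2.0, 3.0])
w_zero = np.zeros((3, 3))
# With zero weights the branch outputs 0, so the block is the identity.
print(np.allclose(residual_block(x, w_zero, w_zero), x))  # True
```

With nonzero weights the same block learns a residual correction on top of the identity, which is the property that makes very deep stacks of these blocks easy to optimize.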
This again helps gradient information propagate through these very, very deep networks. A residual network, then, is a stack of residual blocks. Residual networks were inspired by the best parts of VGG and GoogLeNet, in my opinion: they combine the simple design principles of VGG with a couple of the innovations from GoogLeNet. Much like VGG, the network is divided into stages; between stages we halve the spatial resolution and double the number of channels, just like VGG, and all of the convolutions inside these stages are 3x3 convolutions, just like VGG, but each block is now one of these residual shortcut blocks. Residual networks also take some innovations from GoogLeNet: like GoogLeNet, they use an aggressive stem in the first couple of layers that aggressively downsamples the input, and they also take the idea of global average pooling from GoogLeNet, which again eliminates the fully connected layers and reduces the total number of parameters in the network. With these simple patterns, the only things you need to choose are the initial width of the network, which was 64 in all their experiments, and the number of blocks per stage. This gives us ResNet-18, which has two residual blocks per stage; that means four convolutions per stage, and four times four is sixteen convolutions, plus the convolution in the stem and the linear layer at the end, so if you add that together, that's 18 layers with learnable weights. Yeah, the question is what we mean by downsampling in this context: we mean any operation that reduces the spatial extent of the image. That could be strided convolution, max pooling, or average pooling. Taking every other pixel? I don't think I've ever seen that used in a neural network context, but it's differentiable, so you could try it; I don't
recommend it. So in this context it means any operation that reduces the spatial size of the input. What's interesting about ResNet is that these networks become very efficient: we're now able to achieve very low errors on ImageNet with a very small number of floating-point operations. There's also a 34-layer version of ResNet, which just adds more blocks to some of the stages but otherwise keeps exactly the same design. Again we can compare this to VGG, which needed something like 13 gigaflops for the whole network and got errors of about 9.6, whereas ResNet-34 needed only 3.6 gigaflops and actually had lower error. A lot of these gains in efficiency were due to the aggressive downsampling at the beginning and the global average pooling at the end. As we go to deeper residual networks, they actually modified the block design. Here on the left is the so-called basic block used in residual networks: it has a 3x3 convolution and another 3x3 convolution, with ReLUs and batch norms in between, and a residual shortcut around the 3x3 convolutions. We can compute the total floating-point operations for this block, again counting only the convolutional layers, and we see that each of these convolutional layers costs 9HWC^2, so the total computational cost of this thing is 18HWC^2. For deeper residual networks they introduced an alternative block design called the bottleneck block. Here at the bottom, the bottleneck block consists of three convolutional layers. It accepts an input tensor with four times as many channels as the basic block. The first layer is a 1x1 convolution that acts to reduce the number of channels contained in the tensor, from 4C down to C. Then, once we have reduced the number of channels,
then we perform the 3x3 convolution, and then another 1x1 convolution expands the number of channels again, from C back up to 4C. Now we can use the same procedure to compute the computational cost of the bottleneck block design. Each of the 1x1 convolutions costs 4HWC^2, and the middle 3x3 convolutional layer has the same 9HWC^2 cost as before, so the overall cost of the bottleneck design is 17HWC^2, which is slightly less than the computational cost of the basic block. This means we get less computation, but more nonlinearity and more sequential computation, and the intuition is that deeper layers with more nonlinearity should be able to perform more complex types of computation. So by switching from the basic block to the bottleneck block, we can build networks that are deeper while not increasing the computational cost. So far we've seen the 18-layer and 34-layer residual networks that use the basic block. If we simply take the 34-layer ResNet and replace all the basic blocks with bottleneck blocks, this increases the total number of layers in the network from 34 to 50, but does not really change the overall computational cost of the network. And by making this change that increases the number of layers without increasing computation, you can see that the error on ImageNet actually decreases: simply by making it deeper without increasing computation, we're able to decrease the error on ImageNet from 7.13 down to 5.85, which is actually a fairly large reduction in error given that the two networks have a similar computational cost.
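The basic-versus-bottleneck cost comparison above can be checked numerically. This is my own sketch, not code from the lecture, counting only convolution multiply-adds, where a KxK convolution from C_in to C_out channels over an HxW map costs K*K*C_in*C_out*H*W:

```python
def conv_flops(k, c_in, c_out, h, w):
    # Multiply-adds for a KxK convolution, ignoring bias and nonlinearity.
    return k * k * c_in * c_out * h * w

def basic_block_flops(c, h, w):
    # Two 3x3 convolutions, C -> C each: 18 * H * W * C^2.
    return 2 * conv_flops(3, c, c, h, w)

def bottleneck_flops(c, h, w):
    # 1x1 (4C -> C), then 3x3 (C -> C), then 1x1 (C -> 4C): 17 * H * W * C^2.
    return (conv_flops(1, 4 * c, c, h, w)
            + conv_flops(3, c, c, h, w)
            + conv_flops(1, c, 4 * c, h, w))

c, h, w = 64, 56, 56
unit = h * w * c * c
print(basic_block_flops(c, h, w) // unit)   # 18
print(bottleneck_flops(c, h, w) // unit)    # 17
```

Note the bottleneck block processes a tensor with 4C channels for slightly fewer multiply-adds than the basic block spends on C channels, which is where the "more depth for the same compute" trade comes from.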
Residual networks can go deeper still: we can define 101-layer and 152-layer versions that have the same basic design, using bottleneck blocks, just with more bottleneck blocks per stage. And you can see the clear trend with residual networks: as you stack these layers and go deeper and deeper, the networks tend to work better and better. This was a big deal in 2015. There are a bunch of computer vision competitions every year, and usually one team will win one competition and another team will win another, but in 2015 ResNets crushed everything: they won every track in the ImageNet competition, that is, the classification challenge, the localization challenge (which we have not talked about), and the detection challenge (which we have not talked about). There was also a concurrent set of challenges run on a different dataset called Microsoft COCO. The same team that built residual networks also swept every challenge on COCO, winning the detection challenge and the semantic segmentation challenge on the COCO dataset as well. The main thing they did was take existing methods for all these different tasks and swap in their 152-layer residual network, and in doing so they crushed everyone that year. So this was a very big deal; it got everyone to wake up and pay attention, and from that time forward residual networks became a baseline that is really widely used, even to this day, for a wide variety of tasks in computer vision. There was a follow-up paper on residual networks that played a little bit with the exact ordering of convolution, batch norm, and ReLU, and they found that by shuffling the order of these things you could get it to work a little bit better, which was interesting to see, and is maybe useful if you're trying to squeeze out that last little bit of performance
out of your residual networks. Now, as a summary of where we've gotten to today, you can see this comparison of computational complexity. On the right is a very nice plot where each dot is a different neural network architecture: the x-axis is the number of floating-point operations it takes to compute a forward pass of that architecture, the y-axis is the accuracy on the ImageNet challenge, and the size of the dot is the number of learnable parameters. This is a really nice visualization that packs a lot of the lessons we've seen into one diagram. You can see that GoogLeNet kept going: they have Inception versions 2, 3, and 4, plus an Inception-v4-plus-ResNet hybrid; we won't talk about the details of those. What you can see from this plot is that VGG has very high memory use, takes a ton of computation, and is just an inefficient network design overall. GoogLeNet is very small and very efficient, but not quite as high-performing as some of the later networks. AlexNet, down here in the corner, is very low in compute, lower than any of the other variants we see here, but it still has quite a lot of parameters due to the large fully connected layers. And we can see that residual networks give us a fairly simple design with moderate efficiency, able to give us high accuracy as we scale to deeper and deeper residual networks. Then, moving forward, what happened in 2016? Nothing much that was very exciting: the winner in 2016 was a model ensemble. I don't know if you've ever done Kaggle challenges or whatever, but basically they took all the winning architectures of the last couple of years, ensembled them together, and did slightly better. That was not very exciting, but there were a couple of attempts to improve residual networks. Recall that we've seen this bottleneck residual block that was the building block of the 152-layer
residual networks. Well, if one bottleneck branch is good, then why not have multiple bottleneck branches in parallel? That was the idea behind ResNeXt, which was supposed to be the next generation of residual networks. Here the idea is that the basic building block of the ResNeXt network will have G parallel pathways, and each parallel pathway will itself be a little bottleneck block, but now the inner channel dimension of these parallel bottleneck blocks will be a new constant, little c. At the end, after we compute the outputs from these parallel bottleneck branches, the outputs are all added together. Now we can compute the total computational cost of this multi-pathway design, and you can see that each individual branch has a computational cost of 8Cc + 9c^2 (times HW), which you can multiply out to see the pattern. What's interesting is that once we set the channel dimension C and the number of parallel pathways G, we can set up a quadratic equation and solve for little c such that this multi-path architecture with G parallel branches has the same computational cost as the original bottleneck design. It turns out that if you solve these things, the integers don't work out exactly, but if you round to the nearest integer, then with, for example, 64 channels and 4 parallel pathways we can set little c to 24 channels on each pathway for the same computational cost, or with 32 parallel pathways we could give each pathway little c = 4 channels. This basically gives us another mechanism that lets us modify the design of our network.
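That little quadratic can be solved directly. A sketch (my own, assuming the per-branch cost (8*C*c + 9*c^2)*H*W stated above): setting the total cost of G branches equal to the 17*C^2*H*W of the original bottleneck gives 9*G*c^2 + 8*G*C*c - 17*C^2 = 0, and rounding the positive root recovers the numbers from the lecture.

```python
import math

def resnext_branch_width(C, G):
    # Positive root of 9*G*c^2 + 8*G*C*c - 17*C^2 = 0,
    # i.e. the per-branch width c that matches the bottleneck's 17*C^2*H*W cost.
    a, b, k = 9 * G, 8 * G * C, -17 * C * C
    return (-b + math.sqrt(b * b - 4 * a * k)) / (2 * a)

print(round(resnext_branch_width(64, 4)))   # 24 (4 branches of width 24)
print(round(resnext_branch_width(64, 32)))  # 4  (32 branches of width 4)
```

The same function lets you trade branch count against branch width for any C while holding the block's compute roughly constant.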
In addition to setting the number of channels, we can also set the number of parallel pathways, and by solving the equation in this way we can choose the number of parallel pathways so as to preserve the amount of computation being done. It turns out there's an option in PyTorch's convolution called grouped convolution that lets us implement this idea of parallel pathways; I don't have time to talk about the specifics here, but the point is that we've modified our design to effectively have these multiple parallel pathways, which gives us another axis on which we can modify our network. In the ResNeXt architecture, you see that if we keep the same computational cost but increase the number of parallel pathways within each block, that actually leads to an increase in performance. We can start with a baseline 50-layer ResNet model and increase the number of parallel pathways, and as we do so the accuracy actually increases; the same trend holds for the 101-layer ResNet, where we can increase the number of pathways and get improved performance while maintaining the same computational complexity for the network. Then what happened in 2017? People built on top of this idea of ResNeXt and added another little bell and whistle called squeeze-and-excitation that made things work a little bit better. And 2017 was the end of the road: after 2017 people decided that the juice had been squeezed out of the ImageNet dataset, and the challenge was shutting down. It lives on as a Kaggle challenge, but it was the end of an era when the ImageNet challenge ended in 2017. Even though the challenge ended, people have still been going nuts trying to design ever bigger and more interesting neural network architectures, a couple of which
you might want to be aware of. And by the way, a lot of these lessons carried on to later neural network architectures: the idea of aggressive downsampling, of trying to build networks that perform well while maintaining or reducing computation, and the idea of repeated block structures throughout the network, so you can design your network without having to tune every layer. All of these design parameters and design ideals were carried forward into later neural network designs. One that you might see floating around in the wild sometimes is the densely connected neural network, DenseNet. The idea here is a different way of doing shortcut or skip connections: as we saw, residual networks propagate gradients better by having an additive shortcut connection, while these densely connected networks instead use a concatenation shortcut. Rather than adding previous features to later features, they concatenate previous features with later features, in order to reuse the same features at different parts of the network, and again they repeat this little dense block at multiple stages throughout the network. Another trend became very important in the last couple of years. So far we've been operating in this high-parameter regime, with people trying to max out the accuracy on ImageNet while also minimizing the flops, but the overarching goal was always high accuracy. Happening in parallel was the idea that maybe it's okay to trade off some accuracy in some contexts: maybe what we want is the absolute tiniest network possible that still performs okay but has very minimal computation, such that you can run it on
mobile devices or run it in embedded applications. So there's been a whole sequence of work on designing very efficient, very compact convolutional neural networks that have very low computational cost but are willing to sacrifice some accuracy and no longer beat the big residual networks. One very famous example is the so-called MobileNet, and again it has this idea of repeated blocks. We can look at the basic 3x3 convolution block on the left, which has convolution, batch norm, and ReLU; as you should know by now, the computational cost of this layer is 9C^2 HW. On the right, we replace this convolution with two different convolutions, one of which is a so-called depthwise convolution; you can look at the details offline, since we don't have time to go through them, and there's been a whole sequence of papers trying to design very efficient neural networks that can run on mobile devices. Now, at this point it seems there's been a lot of activity around designing different neural network architectures, but it still takes a lot of work and a lot of human effort to design a neural network architecture. So another thing that has been very popular in the last few years has been automating this process of designing architectures: actually training one neural network that outputs the architecture of another neural network. We don't have time to go through the full details of this idea in this lecture, but the basic idea is that we'll have one neural network called a controller, and this controller will output the architecture for another neural network. The training process is that we'll take the controller network, sample a bunch of architectures from the controller, and train all of those architectures on our dataset; then, after we train all of those child networks, we'll see how well they did and use that to compute a gradient for the controller.
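As an aside on the MobileNet block from a moment ago, here is a sketch of where its savings come from, assuming the standard depthwise-separable factorization (a depthwise 3x3 followed by a pointwise 1x1). These cost formulas are the standard ones for that factorization, not numbers quoted in the lecture:

```python
def standard_conv_cost(c, h, w):
    # Full 3x3 convolution, C -> C channels: 9 * C^2 * H * W multiply-adds.
    return 9 * c * c * h * w

def depthwise_separable_cost(c, h, w):
    # Depthwise 3x3 (one filter per channel): 9 * C * H * W,
    # plus a pointwise 1x1 that mixes channels: C^2 * H * W.
    return 9 * c * h * w + c * c * h * w

c, h, w = 64, 56, 56
ratio = depthwise_separable_cost(c, h, w) / standard_conv_cost(c, h, w)
print(round(ratio, 3))  # 0.127, i.e. roughly 1/9 + 1/C
```

So for typical channel counts, the factorized block costs about one ninth of the full 3x3 convolution, which is the whole trick behind these mobile-friendly designs.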
The exact mechanism for computing that gradient is a policy gradient approach that we'll talk about a little bit later, but what it means is that after training a batch of child networks, we can make a single gradient step on the controller. Performing gradient descent on that controller is going to be very, very computationally expensive, but over time the controller should learn to produce good neural network architectures. This was a really cool idea, but the initial version was just unbelievably computationally expensive, because each gradient step on the controller takes so long: the initial version of this trained on 800 GPUs for 28 days, and that is just an amount of computational resources that, I'm sorry, I cannot give you for your homework. But if you're at Google and you've got free GPUs lying around, this is the kind of experiment people are trying. This paper has actually become kind of a joke in the community just because of the unbelievable scale of resources used, but follow-up research on architecture search has significantly reduced the search time. People always love to compare to this paper and say, look, we're ten thousand times more efficient than this previous one; that's a good way to get your papers accepted. The takeaway is that if you have the resources to burn, neural architecture search has actually been used to find neural network architectures which are themselves very efficient. Here's another plot that nicely summarizes a lot of the architectures we've seen so far in this lecture: on the x-axis we're showing the computational cost of running the neural network, that is, the number of multiply-adds, and the y-axis is the accuracy on ImageNet. All of these dots correspond to different
neural network architectures that we've talked about in this lecture, and all of these red lines correspond to neural network architectures that were learned using a neural architecture search method. What you can see is that neural architecture search was able to learn a set of architectures that achieved higher accuracy at lower computational cost compared to the other architectures. That's kind of the next frontier in neural network architecture design. The summary of what we've seen today: in the early days of convolutional neural networks, as we moved from AlexNet to ZFNet to VGG, people were focused on training ever bigger neural networks in order to get higher and higher accuracies. GoogLeNet was one of the first to focus on efficiency: they wanted high accuracy while also being aware of computational cost. Residual networks gave us a way to scale networks to become very, very big, and we were able to train networks with hundreds of layers once we had the ingredients of batch normalization and residual connections. After ResNet, people started to focus more and more on efficiency, and somehow that became the guiding principle of a lot of neural network architecture design. We saw a huge proliferation of architectures trying to achieve higher or equal accuracy at lower computational cost, including tiny networks like MobileNets and ShuffleNets, and neural architecture search promises to maybe one day design all of our networks for us. But the final question is: this is all great, but what architecture should I actually use in practice? Well, my advice is: don't be a hero. For most applications, don't try to design your own neural network architecture; you're going to cause yourself sadness, and you don't have 800 GPUs to
burn for a month. What you should probably do in most situations is take an existing neural network architecture and adapt it to your problem; that's what I do in my research, and that's what I recommend you do for any projects you undertake. In particular, despite the number of things that have come after, I think ResNet-50 and ResNet-101 are still really great, solid choices. If you want something to just work without having to fiddle with it too much, those are the choices I usually grab. If you are concerned about computational cost for some reason, then look to some kind of MobileNet or ShuffleNet, but in general you really shouldn't be trying to design your own neural network architectures. That's all the material we wanted to cover today; next time we'll talk about some of the actual software and hardware that we use to train these networks. [Applause]
Deep_Learning_for_Computer_Vision
Lecture_1_Introduction_to_Deep_Learning_for_Computer_Vision.txt
Welcome! I hope y'all are in the right place. Welcome to EECS 498-007 / 598-005, a special topics class taught for the first time here at Michigan on deep learning for computer vision. I wish we had a snappier, easier-to-remember course number, but when you teach a special topics class they give you numbers like this, so I'm sorry about that; hopefully you're all in the right place. The title of this class is Deep Learning for Computer Vision, and I think we need to unpack a little bit what these terms mean before we get started. Computer vision is the study of building artificial systems that can process, perceive, and otherwise reason about visual data. This is quite a broad definition: what does process mean, what does perceive mean, what does reason mean? It's kind of up for interpretation. And what is visual data? That could be images, that could be videos, that could be medical scans, that could be just about any type of continuously valued signal; all of these can sometimes be found in computer vision conferences or publications somewhere, so these terms are really defined quite broadly. So why is computer vision important? I think computer vision is a particularly important and exciting topic to study because it's everywhere. Many of us in this room right now are carrying around one or several cameras, and we're collectively taking millions of photos every day; there are cameras all around us all the time, and people are always creating visual data, sharing visual data, talking about visual data, so it is very important that we build algorithms that can perceive, reason about, and process it. For a couple of concrete statistics, look at Instagram: Instagram is very popular, many of you are familiar with it, and something like 100 million photos and videos are uploaded to Instagram every single day. If we go on YouTube,
it's even worse. On YouTube, as of 2015 (and I'm sure it's grown since then), people upload roughly 300 hours of video every minute. If you do the math and imagine that, as a single individual human being, you wanted to look at all the visual data uploaded to Instagram and YouTube in one day, looking at images for maybe one second each and watching the YouTube videos at double speed, it would take you about 25 years to get through the visual data uploaded to just these two sites in a single day. When you think about these massive statistics and the massive amount of visual data being produced and shared across the internet these days, it becomes clear that we need to build automated systems that can deal with it, because we just don't have the human manpower to look at, process, and perceive all of the data being created. That's why I think computer vision is such an important topic to be studying these days, and it's only going to get more important as the number of visual sensors out in the world keeps increasing with emerging technologies like autonomous vehicles, augmented and virtual reality, and drones. You can imagine that the role of computer vision in our modern society will just continue getting more and more important. So clearly, and I'm biased because this is my research area, I think this is the most important and exciting research topic that we can be studying right now in computer science. So that's computer vision. Computer vision is the problem that we're trying to solve, this problem of understanding visual data, but computer vision doesn't really care how we solve that problem; our goal is just to crunch through all of those images and videos however we can. But the technique that we happen to be using in computer vision across the
field these days is deep learning. Before we get to deep learning, what is machine learning? Machine learning is the process of building artificial systems that can learn from data and experience. Notice that this is somewhat orthogonal to the goals of computer vision: computer vision just says we want to understand visual data, and we don't care how you do it, while learning is this separate problem of trying to build systems that can adapt to the data they see and the experiences they have in the world. From the outside it's not immediately clear why these two go together, but it turns out that in the last 10 to 20 years we've found that building learning-based systems is very important for building many kinds of generalizable computer systems, both in computer vision and across many areas of artificial intelligence and computer science more broadly. Now, deep learning is yet another subset of machine learning. Deep learning is maybe a bit of a buzzword of a name, but my definition is that deep learning consists of hierarchical learning algorithms with many layers (whatever that means in this context) that are very, very loosely inspired by the architecture of the mammalian brain and parts of the mammalian visual system. And I know I say loosely: you'll often see people in deep learning talk about how the brain learns or how the brain works, and I think you should take any of these comparisons with a massive grain of salt. There are some very coarse comparisons between brains and the neural networks we use today, but I think you should not take them too seriously. Stepping back a little bit, these two topics, computer vision and machine learning, both fall within the purview of the larger research field of artificial intelligence. Artificial intelligence is very general and very
broad: broadly speaking, it asks how we can build computer systems that do things that normally people do. That's my definition; people will argue about what is and is not artificial intelligence, but I think we just want to build smart machines, whatever that means to any of us. There are clearly many different subdisciplines of artificial intelligence, but two of the most important, again in my biased opinion, are computer vision, teaching machines to see, and machine learning, teaching machines to learn, and these are the topics that we'll study in this class. So where does deep learning fall in this picture? Deep learning would be a subset of machine learning that intersects computer vision and falls within the larger AI umbrella, and this class is going to focus on that section right in the middle: the intersection of computer vision, machine learning, and deep learning. I think it's important at the outset to show this slide, because it's really easy to get caught up in the hype these days and think that computer vision is the only type of AI, that deep learning is the only type of AI, or that deep learning is the only type of computer vision. None of these are true. There are types of AI that have nothing to do with learning, nothing to do with deep learning: there are classical results about symbolic systems and other approaches to AI that are very different technically. And there are areas of computer vision that do not use very much machine learning or very much deep learning. So even though the focus of this class will be the intersection of these research areas, I just want you to keep in mind that there is a much broader realm of AI research being done around the world by different groups that falls into different pieces of this pie chart. And of course there are many other areas within AI that we won't talk about too much, so there's
natural language processing, things like speech recognition, things like robotics, and I kind of ran out of space on the chart for many more subareas. Suffice to say, artificial intelligence is a massively successful and massively popular area of research and study these days, and again, with the broad goal of making machines do things that people normally do, you can imagine that there's a whole lot of things we do out in the world that fall under this umbrella of artificial intelligence. So that's kind of the big-picture roadmap, and for the rest of the semester we're going to focus on this little red area in here. But again, don't forget that there's a lot more to the world than what we're talking about in this class. Today's agenda is a little bit different from most of the lectures in this class, because it is the first week. Before we can really dive into that red piece of the pie chart and talk about machine learning and deep learning and computer vision and all that really good stuff, I think it's important to get a little bit of historical context about how we got here as a field. This has been a hugely successful research area in the last five to ten years, but deep learning, machine learning, and computer vision are areas with decades and decades of research built upon them, and all of the successes we've seen in the last few years have been the result of building upon decades of prior work in these areas. So today I want to give a brief history and overview, to put in some historical context what led up to the successes of today, and following that we'll talk about some of the boring stuff: course overview, logistics, all the other things you expect to see in the first lecture of a class. We're going to do this in two parallel streams: first we're going to talk about the history of computer vision,
and then we're going to switch a little bit and cover the history of deep learning. So before we dive into the material, are there any questions before we launch into this historical escapade? No? OK, perfectly clear. Whenever you talk about a research area, it's always difficult to pinpoint the start, because everything builds on everything else; there's always prior work, and everyone was inspired by something else that came before. But with a finite amount of time to talk about a finite number of things, you've got to cut the line somewhere, and one place where I like to draw the line and point to as maybe the start of computer vision is actually not with computer scientists at all. It's the seminal study of Hubel and Wiesel back in 1959, who were not interested in computers at all; they wanted to understand how mammalian brains work. What they did is they got a cat and an electrode, and they put the electrode into the brain of the cat, into the visual cortex, the part in the back of your head that processes visual data. With this electrode they were able to record the activity of some of the individual neurons in the cat's visual cortex. Then, with this somewhat grotesque experimental setup, they were able to have the cat watch TV. Well, not really TV, because it was 1959, but they were able to show different sorts of slides to the cat. They had this general hypothesis that maybe there are certain neurons in the brain that respond to different types of visual stimuli, and that by showing the cat different types of visual stimuli and recording the activity of individual neurons, maybe we can start to puzzle out how this thing called vision works at all. So that's exactly what they did: they got these cats, they put electrodes in their brains, and they started showing a bunch of different images on a slideshow to try to see what kinds of images would activate the neurons in the cats'
brains. They tried different things: you can show them mice and fish and other kinds of things that cats like to eat or play with, but it was really hard to get any solid signal about what these neurons were responding to. Then one really interesting discovery happened. Today we use PowerPoint, but back in the day they used mechanical slide projectors, and when you changed the slide there was kind of a bar that would move across the screen. What they surprisingly found is that some of the neurons in the cat's brain would consistently respond at the moment when they changed the slides, even though they couldn't recognize any pattern in how the cat was responding to the things on the slides themselves. They eventually discovered that it was in fact this moving bar that was causing some of the neuronal activity in the cat's brain. With this hint they were able to puzzle out that there are different types of cells in the brain responding to different types of visual stimuli. Many of them are very hard to interpret, but some of the easiest are the so-called simple cells that they discovered. A simple cell responds to an edge, maybe light on one side and dark on the other, at a particular orientation and a particular position in the cat's visual field; if there happens to be an edge at the right position, at the right angle, in the right place, then that particular neuron might fire. That was very exciting, because they had some concrete evidence of what it is that cats are actually responding to in their brains. With a bit more exploration they were able to find other types of cells in the brain that responded to even more complex patterns, like the complex cells that would respond to bits of motion, or would respond to oriented edges anywhere in the visual field, giving some sense of translation invariance in the visual
representations that they perceive. And by the way, of course I have to mention that this was very seminal research, and these guys won the Nobel Prize for it in 1981, so this was very important work in the history of science and psychology and vision overall. But I like to point to it as the beginning of computer vision for a couple of reasons. One is this emphasis on oriented edges; we'll see this come up over and over again in the different architectures that we study throughout the semester. The other is this hierarchical representation of the visual system: building from simple cells that represent one thing, combining into complex cells, and more and more complex cells that respond to more and more complex types of visual stimuli. This broad idea was hugely influential on the way that people thought about visual processing, and on neural representations more generally. Then if we move forward a couple of years, to 1963, Larry Roberts graduated from MIT with his PhD and did perhaps the first PhD thesis on computer vision. Of course, it was 1963: doing anything with computers was very cumbersome, and doing anything with digital cameras was very cumbersome, so large portions of his thesis just talk about how you actually get photographic information into the computer, because this was not something you could take for granted at the time. But even working through those constraints, he built a system that was able to take a raw picture, detect some of the edges in the picture, inspired by Hubel and Wiesel's discovery that edges were fundamental to visual processing, then detect feature points, and from there start to understand the 3D geometry of objects in images. Now, what's really interesting is that if you go and look at Larry Roberts's Wikipedia page, it actually doesn't mention any of this at all, because after he finished
his PhD, he went on to become one of the founding fathers of the internet, and a major player in the development of the World Wide Web and all the networking technologies of that era. So doing the first PhD thesis in computer vision was kind of a low point in his career; I think all of us can aspire to being that successful. Then, moving forward a couple more years, people were getting really excited. There was this very famous study in 1966 from MIT, where Seymour Papert proposed the Summer Vision Project. Basically, what he wanted to do is say: OK, we've got digital cameras now, they can detect edges, Hubel and Wiesel told us how the brain works; what we're going to do is take a couple of undergrads, put them to work over the summer, and after the summer we should be able to construct a significant portion of the visual system. These guys were really ambitious back in the day, because clearly computer vision is not solved; they did not achieve this, and nearly 50 years later we're still plugging away, trying to achieve what they thought they could do in one summer with undergrads. Moving forward into the 1970s, one hugely influential figure in this era was David Marr, who proposed this idea of stages of visual representation, which again harkens back to Hubel and Wiesel. Here, maybe we have the input image, and then at another stage of visual processing we extract edges; from the edges we extract some kind of depth information; then maybe we can segment objects and say which parts of the image belong to which objects, and think about the relative depths of those objects; and then eventually start to reason about whole 3D models of the world and of the scene. Then toward the end of the seventies, people started to become interested in recognizing
objects, and in thinking about ways to build computer systems that could recognize not just edges and simple geometric shapes but more complex objects like people. There was work on things like generalized cylinders and pictorial structures that tried to recognize people as deformable configurations of rigid parts with some kind of known topology. This was very influential work at the time, but the problem is that in the 1970s processing power was very limited and digital cameras were very limited, so a lot of this work was somewhat toy in a sense. As we move into the 80s, people had much more access to better digital cameras and more computational power, and they began to work on slightly more realistic images. One theme in the 80s was trying to recognize objects in images via edge detection; I told you that edges were going to be influential throughout the history of computer vision. There was a very famous paper from John Canny in 1986 that proposed a very robust algorithm for detecting edges in images, and then David Lowe, the next year in 1987, proposed a mechanism for recognizing objects in images by matching their edges. In this example, imagine we've got this cluttered bin of razors, and we detect the edges. Maybe we also have some template picture of a razor that we know about; we can detect the edges of our template razor and try to match it into this cluttered image of the bin of razors, and by matching edges in this way, we might be able to recognize that there are many razors in this image, and what their relative configurations are, just based on matching with our template image. Moving on into the 1990s, people again wanted to scale to more and more complex images, more and more complex scenes. Here a big theme was trying to recognize objects via grouping. Here, rather than
just matching the edges, what we want to do is take the input image and segment it into semantically meaningful chunks; maybe we know that the person is composed of one meaningful chunk, and the different umbrellas are composed of different meaningful chunks, with the idea that if we can first do some sort of grouping, then the later task of recognizing or giving a label to those groups might be an easier problem. Then in the 2000s, a big theme was recognition via matching. There was a hugely famous paper called SIFT, by David Lowe again, in 1999, that proposed a different way of doing recognition via matching. Here the idea is that we take our input image and detect little recognizable keypoints at different 2D positions in the image, and at each of those keypoints we represent the local appearance using some kind of feature vector: a real-valued vector that somehow encodes the appearance of the image at that little point in space. By very careful design of exactly how that feature vector is computed, you can encode different types of invariance into it, such that if we were to take the same image and rotate it a little bit, or brighten or darken the lighting conditions in the scene a little bit, we would hopefully compute the same value for that feature vector even though the underlying image had changed a little. Once we can extract these sets of robust and invariant feature vectors, we can again perform some kind of recognition via matching. On the left, if we have some template image of a stop sign, we can detect all these distinctive, invariant feature keypoints; then on the right, if we have another image of a stop sign, maybe taken from a different angle with different lighting conditions, then by careful, clever design of these
invariant, robust features, we can match and correspond points in the one image to points in the other image, and thereby recognize that the right image is indeed also a stop sign. Another hugely influential work in the 2000s was the Viola-Jones algorithm, published in 2001; this was a very, very powerful algorithm for detecting faces in images. Here, you have an image and you want to draw a box around all the people's faces. This piece of work was notable for a couple of reasons. One, it was one of the first major uses of machine learning in computer vision: Viola and Jones used an algorithm based on boosted decision trees that was able to learn the right combination of features to use in order to recognize faces. Two, what was particularly notable was the very fast commercialization of this algorithm. It went very quickly from an academic piece of research published in 2001 to, within a few years, actually being shipped in the digital cameras of the time. If you remember, cameras then had an autofocus feature where you would hold the shutter halfway down, and the camera would beep a little bit, draw boxes around the faces, and then focus on the people in the scene; that was most likely using this Viola-Jones algorithm. So this is a particularly notable piece of work for those two reasons. Now, after they unlocked this box of using data and machine learning to augment our visual representations, moving on through the 2000s we began to see more and more uses of machine learning, and of data, to improve our visual recognition systems. One hugely influential effort here was the PASCAL Visual Object Classes challenge. Here they could build datasets by downloading a bunch of images, because by now it's the 2000s, and Larry Roberts had exited computer vision and invented the internet, so
we could then download images from the internet to help build these datasets, get graduate students to go and label those images, and then use machine learning algorithms to mimic the labels that the graduate students had written down for the images. If you do that, you can see the nice graph on the right: performance on this recognition challenge increased steadily over time, from about 2005 to 2011. This brings us to the ImageNet Large Scale Visual Recognition Challenge. This was a very large-scale dataset in computer vision that has become hugely influential, and it remains one of the main benchmarks in computer vision even up to this day. The ImageNet classification challenge was a fairly large dataset of more than 1.4 million images, and each of those 1.4 million images was labeled with one of a thousand different category labels. The big new piece of innovation here was that, if you do the math, you would need a lot of graduate students to label all this stuff; so the innovation was to not label the data using graduate students, and instead make use of crowdsourcing. You could go on services like Amazon Mechanical Turk, partition the labeling into little pieces of work, and blast those out over the internet, and then people anywhere in the world could label a couple of images and get paid a couple of cents for each image they labeled. This was beneficial in two ways: one, researchers could get people to label their data without being constrained by the number of graduate students they had, and two, it became a nice source of income for some people who were bored at work. But anyway,
this became a hugely influential dataset in computer vision, and more than a dataset, it became a benchmark challenge. Every year they ran a competition in which different researchers would compete and try to build their own algorithms to recognize objects in this classification challenge, and this became somewhat jokingly known as the Olympics of computer vision. There was a period of time, from maybe the late 2000s to the mid-2010s, when people would really excitedly look at the results of the ImageNet competition every year and see what kind of advances the field had made that year. Given that I told you this is a competition, you can look at how the error rate on this challenge moved over time. The first times the competition was run, in 2010 and 2011, we were seeing error rates of around 28% and around 25%. Then something big happened in 2012: at the 2012 ImageNet competition, the error rate suddenly dropped in a single year from 25% all the way down to 16%, and after 2012 errors just kept on diminishing, very fast, such that by about 2017 we were building systems that could compete on this ImageNet challenge and perform even better than humans trying to recognize the images in this dataset. So the question is: what happened in 2012? What happened in 2012 is that deep learning came onto the scene. This was really the breakthrough moment for deep learning, and computer vision researchers suddenly woke up and saw that there was this crazy new thing sweeping our field. In 2012 there was an absolutely seminal paper from Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton, where they proposed the deep convolutional neural network AlexNet, which just crushed everyone else at the ImageNet competition that year. For people working in computer vision at that time, this was
shocking. It felt like there was this brand-new thing that just came out of nowhere and suddenly crushed all these algorithms that we had been working on. As we walked through this history of computer vision from the 1950s up to the 2000s, you'll notice that neural networks were not a mainstream part of that history for most of it. So when this suddenly appeared, it felt to a lot of computer vision researchers like a brand-new thing appearing all at once, this brand-new, exciting technology. And that was a little bit flawed, because this was not a brand-new technology: there had been a parallel stream of researchers, going back a similar amount of time, who had been developing and honing these techniques for decades, and 2012 was the sudden breakthrough moment when all of that hard work paid off and became mainstream. So let's talk a little bit about the history of deep learning, going back in time yet again. Around the same time that Hubel and Wiesel were doing their seminal work on visual recognition in cats, there was another very influential system, which actually wasn't even an algorithm, called the perceptron. The perceptron, from 1958, was one of the earliest computer systems that could learn. What's interesting is that in 1958, the idea of an algorithm, the idea of programming a computer, these were still quite novel research topics on their own. So the perceptron was actually implemented as a piece of hardware; there's a picture of it on the right. It was this giant, cabinet-sized thing with wires going all over the place, and it had weights that were stored in potentiometers, which I don't even know what those are because I'm a computer scientist, and the
values of these weights were changed mechanically during learning by a set of electric motors, and again, I'm not a mechanical engineer, so I definitely could not build this thing. But even though this was a mechanical device bigger than a person, it could actually learn from data somehow, and it was able to learn to recognize letters of the alphabet on tiny 20-by-20 images, which were super state-of-the-art in 1958. I don't want to talk about any of the math of the perceptron here, but if you were to look at it with modern eyes, we would probably call it a linear classifier, which we'll talk about next week in Wednesday's lecture. The perceptron got a lot of people really excited; it got people thinking, wow, here's a mechanism that allows machines to learn novel stuff from data without people having to explicitly program how it's going to work. And all of that came to a crashing halt in 1969, when Marvin Minsky and Seymour Papert published this infamous book called Perceptrons. What Minsky and Papert pointed out in their book, basically, was that perceptrons are not magical devices: the perceptron is a particular learning algorithm, and there are certain types of functions it can learn to represent and other types of functions it cannot learn to represent. In particular, they pointed out that the XOR function is not learnable by the linear perceptron learning algorithm, which again we'll talk about a little more next week. The normal story that gets told is that this sudden realization, that these learning algorithms are not magical and there are things they can't learn, made people lose interest, and work on learning and on perceptrons dried up for a period of time following the release of this book.
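Viewed with modern eyes, the perceptron update rule is simple enough to sketch in a few lines. This is a toy illustration, not Rosenblatt's hardware: the AND task, learning rate, and epoch count are my own choices for the example. The key idea is that the weights are nudged only when the linear classifier makes a mistake.

```python
# A minimal perceptron learner in the spirit of Rosenblatt's 1958 machine:
# a linear classifier whose weights are nudged whenever it misclassifies.
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):      # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                      # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# AND is linearly separable, so the perceptron learns it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y_and = [-1, -1, -1, 1]
w, b = train_perceptron(X, y_and)
print([predict(w, b, x) for x in X])   # → [-1, -1, -1, 1]
```

Running the same loop with XOR labels never converges, no matter how long you train, because no line separates the two classes; that is exactly the limitation Minsky and Papert highlighted.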
But what's interesting is that I think nobody actually read the book, because if you actually read it, there are sections where they say: yes, the original perceptron learning algorithm is quite limited and can only represent certain functions, but there's another version of the algorithm, called a multi-layer perceptron, that actually can learn many, many different types of functions and is very flexible in its representations. That somehow got lost in the headlines at the time; nobody realized it, and people just heard that perceptrons didn't work and were dead. So you should definitely read the assigned reading. Going forward quite some amount of time, we skip ahead to 1980, when there was a very influential system called the neocognitron, developed by Fukushima, a Japanese computer scientist. He was directly inspired by Hubel and Wiesel's idea of hierarchical processing of neurons: remember, Hubel and Wiesel talked about simple cells and complex cells, hierarchies of neurons that gradually come to represent more and more complex visual stimuli in the image. So Fukushima proposed a computational realization of Hubel and Wiesel's formulation, which he called the neocognitron. The neocognitron interleaved two types of operations: one was these computational simple cells which, if we were to look at them with modern terminology, would look very much like convolution, and the other was a computational realization of complex cells which, again in modern terminology, looks very much like the pooling operations that we use in modern convolutional networks. What's striking is that this neocognitron from 1980 had an overall architecture and method of processing that looks very similar to the famous AlexNet system that swept the field in 2012; even the figures they have in the papers look pretty similar.
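Those two interleaved operations can be sketched very compactly in modern terms. This is a loose, toy illustration of that idea, not Fukushima's actual system: the edge kernel and the tiny image are made-up values, with the "simple cell" written as a 2D convolution and the "complex cell" as 2-by-2 max pooling.

```python
# A minimal sketch of the two operations the neocognitron interleaved:
# "simple cells" as 2D convolution, "complex cells" as max pooling.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A "simple cell" for a dark-to-light vertical edge at one orientation.
edge_kernel = [[-1, 1],
               [-1, 1]]
image = [[0, 0, 1, 1],       # dark on the left, light on the right
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
fmap = conv2d(image, edge_kernel)   # responds strongly right at the edge
pooled = max_pool2x2(fmap)          # pooling adds translation tolerance
print(fmap[0], pooled)              # → [0, 2, 0] [[2]]
```

The convolution fires only where the edge sits, and pooling keeps that strong response even if the edge shifts a pixel, which is the translation tolerance the complex cells provided.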
So they've got to be the same thing, right? But what was striking about the neocognitron is that, although they defined this computational model and had the right ideas of convolution and pooling and hierarchy, they did not have any practical method to train it. Remember, there are a lot of learnable weights in this system, a lot of connections between all the neurons inside, that need to be set somehow, and Fukushima did not have an efficient algorithm for learning to properly set all those free weight parameters based on data. Then, a couple of years later, there was an again massively influential paper by Rumelhart, Hinton, and Williams in 1986 that introduced the backpropagation algorithm for training these multi-layer perceptrons. Remember that in the Perceptrons book there was this thing called a multi-layer perceptron that was thought to be very powerful in its ability to represent and learn complex functions; well, this paper that introduced the backpropagation algorithm was one of the first times that people were able to successfully and efficiently train these deeper models with multiple layers of computation. And this looks very much like a modern neural network that we use today: if you look at this paper and flip through it, you'll see they talk about gradients, they talk about Jacobians, all this kind of mathematical terminology that we think about today when we're building and training neural networks. These look very much like the modern fully connected networks that we still use today, which are sometimes called multi-layer perceptrons in homage to this long history. That got a small but dedicated group of people really excited about neural networks, and they set out to try different types and structures of networks.
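To make the connection concrete, here is a toy multi-layer perceptron trained with backpropagation, not the 1986 paper's exact setup: the XOR task, the three hidden units, the learning rate, and the epoch count are all assumptions chosen for illustration. The payoff is that this two-layer network can learn XOR, the very function the single-layer perceptron could not.

```python
import math, random

# A tiny multi-layer perceptron trained with backpropagation (the
# combination Rumelhart, Hinton, and Williams made practical), fit to XOR.
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

H = 3                                   # hidden units (an arbitrary choice)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    out = sigmoid(sum(W2[j] * h[j] for j in range(H)) + b2)
    return h, out

def loss():                              # total squared error over the data
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y))

lr = 0.5
start = loss()
for _ in range(5000):
    for x, t in zip(X, y):
        h, out = forward(x)
        # Backward pass: chain rule through the sigmoid output and hidden layer.
        d_out = 2 * (out - t) * out * (1 - out)
        for j in range(H):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])   # uses pre-update W2[j]
            W2[j] -= lr * d_out * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out
print(start, loss())   # training loss after vs. before
```

Each weight update is just the chain rule applied layer by layer, which is all "backpropagation" names; the same bookkeeping, automated, is what modern frameworks do at scale.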
People explored many kinds of neural networks that could be built and trained, powered by this new backpropagation algorithm, and one of the most influential works of that time was Yann LeCun et al.'s 1998 paper that introduced the idea of a convolutional neural network. This looks again very much like the Fukushima system that we just saw: what they did here is they took Fukushima's idea of convolution and pooling and multiple layers, inspired by the visual system, and combined it with the backpropagation algorithm from the Rumelhart paper of 1986. With that combination, they were able to train these, very large at the time, convolutional neural networks that could learn to recognize different types of things in images. This was a hugely successful system, and it was actually very successful commercially: in addition to being a piece of very influential academic research, it was also deployed in a commercial system by NEC Labs, and for a period of time this convolutional neural net system developed by that group was actually being used to process the handwriting on a lot of the checks written in the United States. One figure I found stated that up to 10% of all checks in the United States were having the numbers on the check read automatically by these convolutional neural net systems in the late 90s and early 2000s. And again, if you look at exactly what this algorithm, called LeNet after Yann LeCun, is doing, its structure looks very similar, almost identical, to the algorithm that was used in AlexNet nearly 30 years later. Then, emboldened by this success, there was a small group of people throughout the late 90s and early 2000s who were really interested in trying to push neural net
systems and figure out ways to train neural net systems that were ever bigger, ever deeper, ever wider, and could be used on an increasing variety of tasks. It was around this period, in the 2000s, that the term deep learning first emerged, where "deep" was meant to refer to the multiple layers in these neural-network-type algorithms. This was really not a super mainstream research area at the time; there was a relatively small number of research groups and a relatively small number of people studying these ideas. But I think a lot of the fundamentals whose rewards we're reaping now were really developed during this period in the 2000s, when people started figuring out all the modern tricks to train different types of neural network systems. That finally brings us back to AlexNet. In 2012 we had this great confluence: we had this computer vision task called ImageNet that people in computer vision were super excited about, and we had these new techniques, convolutional neural networks and efficient ways to train them, that had been developed by this parallel research community, and everything just seemed to come together in 2012. From 2012 to the present day, we've seen an absolute explosion in the usage of convolutional and other types of neural networks, both across computer vision and across other related areas in AI and computer science. Here on the left we have the Google Trends chart for deep learning, which shows this massive, sort of exponential growth that really took off starting in 2012. And on the right is a photo I took at the Computer Vision and Pattern Recognition conference in the summer of 2019; this is one of the premier venues for academic publications in computer vision, and here is a graph they showed at the keynote, with the year of the
conference and on the y-axis of the number of submitted and accepted papers in this and this conference so you can see that even though the last five to ten years have resulted in this massive explosion both of machine learning systems both maybe in popular perception especially from Google Trends as well as the massive increase in academic interest into both machine learning systems and computer vision systems as evidenced by this fine spot on the right and if you look around the field today we see the convolutional networks other types of deep neural network you see are being used for just about every possible application of computer vision that you can imagine so from 2012 these convolutional networks are really everywhere they're getting use from such a wide diversity of tasks like image classification we want to put labels on images or image retrieval or want to retrieve images from collections things like object detection we want to recognize the positions of objects and images while simultaneously label like that the record should be image segmentation I'm threatening to fix the slide where we want to label the what where this is going back to this idea of semantic grouping you saw for computer vision in the 90s where we want to label the pics of pixels as being part of a cohesive whole comments are going to use for things like video classification on activity recognition things they're gonna use for things like pose estimation where you want to say how are the exact geometric poses of people arranged in images even for things that don't really feel like classical computer vision like playing Atari games with a process the visual input of the Atari game with a convolutional neural network and combine that with other sorts of learning techniques in order to both jointly learn a visual representation of the video game world as well as how to play in that normal um convolutional neural networks are also getting use for visual tasks that are about visual data 
that humans don't real aren't very good at so convolutional networks are getting used in things like medical imaging to diagnose different types of tumors diagnose different types of skin lesions other medical conditions they're going to use in galaxies classification they're getting used in tons of scientific applications like classifying whales or elephants or other types of people or because there's this problem where scientists want to go out into the world and collect a lot of data and then be able to use images and visual recognition as a kind of universal sensor to make use of all this data that they collect and gain insights into their their particular field of expertise that they're interested in and we've seen computer vision and convolutional networks branch out into all these other areas of science and just open up and unlock lots of new applications just across the board they're big the comments are going to use for all kinds of fun tasks like image captioning that we can write systems we can build systems they can write natural language descriptions but images these are using convolutional net words we can use convolutional networks for generating arts so we can know we can make all these kind of psychedelic portraits again using a convolutional neural networks so then we might ask what was it that happened in 2012 that made all of this take off well I think the jury's out and we'll have to see what the story ins right 50 years from now but my personal interpretation is that it was a combination of three big components that came together all at once one was the set of algorithms that we saw that there was a stream of people working on deep learning at common neural networks and machine learning who had developed these powerful set of tools for representing learning functions and for learning that was the fact propagation algorithm we saw this the second stream of data that with the rise of digital cameras later Robertson running the internet and 
people to develop a crowdsourcing we were able to collect unprecedented to label data that could be used to train these systems and the third piece that we haven't really talked about was the massive rise in computational resources that has been continually happening throughout the history of computer science so one graph that I put together that I find particularly striking is looking at the gigaflops of computation per dollar as a function of time so here on the blue you can see these are different types of CPUs are in a CPU central processing units the thing that's remained on your cloud with all of your laptops truck and they get faster but not that much faster over time but starting in 2008 there was some really interesting developments with GPUs graphics processing units and so these were these special-purpose pieces of hardware that were originally developed to pump up pixels in computer graphics applications but around 2008 people started developing techniques to run generalized programs on these graphics processing units and starting and and then over time these techniques became more and more easy to write general-purpose scientific code and mathematical code to run on these massively parallel or graphics pasta units and then if you look at the timeline from 2006 to 2007 teen and look at the gigaflops per dollar on these graphics processing units you can see that although this exponential moore's law may not have held up for CPUs it actually has been we actually haven't seen exponential increases in GPU computing power over time over the last 10 years and if you look at maybe the Alec and this has been striking even in the last couple of years so if you look at the Alex next system in 2012 he was using this GTX 580 GPU that was very very exciting at the time if you're a gamer and if you push it on into more recent cards like the gtx 980ti or more recently a 2080 ti about update the slides then you can see that the cards we have even five years later are 
are literally exponentially more powerful than the cards that they were going to keep it in 2000 as well so I think that it was really this this confluence of algorithms of data and of massive increase in computation fueled by advances in GPUs that led to them all this magic happening in 2012 that led to all these these new applications of convolutional networks on different types of computer vision and in recognition of all of this in recognition of the impact of computer vision and deep learning across the field the 2018 Turing award was awarded to Yahoo of NGO Jack Hinton and Yama kun for their work on pioneering many of the deep learning ideas that we'll learn throughout this class and for those of you who don't know the Turing award is basically that considered the Nobel Prize equivalent in the field of computer science so this just happened last year and this was just a recognition that this has been a massively influential piece of research that's been changing all of our lives over the last over the last several years but I think it's important to stay humble and realize that despite all of the successes that we've seen in convolutional networks in deep learning and computer vision I think we're really still a long way away from building systems that can perceive and understand visual data to the same fidelity and and power and strength as humans one image that I like to use to exemplify this is this this example so what's if we were to send this to a convolutional network he would probably say person or scale or locker-room maybe but if we were to look at this you see quite a different story you see a guy standing on a scale you know how scales were which require some into some idea of physics you know that he's looking at the scale you know that he's trying to measure his own weight you know that people tend to be self-conscious about their weight you know that the person behind him is stepping on the scale and pushing down because of your knowledge of 
physics you know that that's going to make you feel for a bigger number because of your knowledge of that guys psychology you'll know that that might make him feel embarrassed or uncomfortable because he thinks he ate too much and then you also know who that person pushing down on the scale is and because of your knowledge of who he is it makes it may be surprising that he's acting in this way you understand you can see the people behind him watching this scene and laughing and understand that you need to know how it is the people look at each other you understand that maybe they're surprised that this guy is doing this thing that's causing this guy to be embarrassed so there's a lot going on in this image that we as human as visually intelligent humans could understand and perceive and I think we're a long way away from building computer vision systems that can match that level of visual fidelity but I'm hoping that as we move forward and continue to advance the field maybe one day we'll get there but in the meantime I think that computer vision technology really has massive and massive potential to improve all of our lives it'll make our lives more fun through sort of new videoman applications applications in VR AR it'll make our transportation safer with advances in autonomous vehicles it'll lead to improvements in medical imaging and diagnosis and overall I think computer vision as a whole has a massive Trek massive ability and potential to continue leading to massive improvements in all of our day-to-day lives so that's why I think we should be studying computer vision that's why I make it excited to be teaching this class this year so that basically covers our brief history of computer vision of deep learning except there's one little spot on the timeline that we didn't fill out and that's this class so with that it's time to if there was any if there's any questions about historical stuff then we're going to move on to course logistics you're out no okay 
great so for staff who are wheaton I'm Justin Johnson I'm a new assistant professor here in the computer science and engineering engineering department this is the first class I'm teaching here at Michigan this is the first time I've been in this room so glad I found it about the laptop identity but I'm excited to be here excited to be teaching you guys this class there we have an amazing team of graduate student instructors that are going to be helping us out this semester how is your guy the standard will be docile so these guys are all experts in computer vision they're all PhD students here moonscape works and video understanding and generative models keep on course and robustness and generalization gluant works a lot in visual vision plus language so if you have questions both those particular research areas you should go talk to them so how to contact us so this is an important slide right taking pictures with this comes a good idea I probably could feel some kindness to something we'll extract abuse formation out from them automatic but we kept on a course website that's up in this URL the course website has followed the information that you'll need for others throughout the quarter you can find the syllabus this head so the schedule you'll find links to assignments they're your clients links the lecture videos assuming a set of lecture capture properly really important we're going to use the Gaza or most communication with you guys so we really encourage you if you have questions for the course material we're going to use the Gala's F is our main mechanism to communicate back and forth with you guys so if you have questions about course content questions about material post post on Piazza and similarly if we when we need to make announcements back to the class to announce thing is like of the homeworks about changes to logistics will be announcing all of those through Piazza so it's really important that you guys get signed up as quickly as possible and one 
piece of the note is that please don't post any code on P about public questions on Piazza if you do need to ask particular questions about code we ask that you make private questions that are visible only to you start only to the instructor so we will have canvas I I think I need to suck that up still this week but we're really use canvas primarily for just turning in assignments mostly will be using Piazza and the course website for this class we will have office hours both me and the GSIS started next week and you'll be able to find the times and locations of the office hours on the Google Calendar that I set up here finally we really want most the vast majority of communication you do with us should be should be through Piazza but if you have some kind of very sensitive topic that you would prefer to discuss directly with me then you can email me directly but for the vast majority of circumstances you should be going through Piazza for four course communication that will ensure that everyone all will all open teaching staff is able to help you in a timely manner and if you're able to make and if you're making public questions and lets you all help each other and learn collectively and learn from each other's mistakes I think Piazza is really great learning tool for that so we're going to have an optional test in text level there's no required textbook for this class on this schedule you'll find on the website there will be recommended readings for all of the lectures this this text this textbook is totally optional and it's completely available for free online you don't have to visit being purchased a copy if you don't watch it so on course content and grading we're gonna the main bulk of the course is gonna be six programming assignments oh is that a problem is that's I think these are gonna be really exciting assignments we're going to use Python high court and Google collab and we're gonna walk you through the implement eight the detailed implementations a 
lot of the ideas that we talked about in lecture we will have a midterm exam and we will have a final exam but there will not be an important project the majority of the stuff will be learning us through the programming assignments so we have a leak policy so don't turn it in late but more seriously you all get three free big days of use on your homework assignments you don't have to tell us beforehand just randomly and then I can automatically once you've exhausted your late days I'm gonna take 25% earth-protecting something like that's reasonable are there any questions about content of policies late days anything like that yeah over here yeah so the question is will the course materials be available for people not on the waitlist and they will be as really available as I can possibly make not so even if you can't get an open the class you can definitely feel free to follow along with the lecture slides with lecture videos what the course yeah question yeah it's up to you you're all grown up so you can use the funny lengthiest first time as you want but we you take too many and we were zero so I don't recommend that but three ladies you can use as many as you want and expert ladies you somebody as you want for sign that but we'll just take this tape off lines and sizes here yeah so let's talk about collaboration policy so we really encourage you guys to work together in groups I think it's great to discuss which course material it with your classmates and to learn together but we have a couple of ground rules about that for a collaboration policy one is that everything you submit should be your own work it's fine to talk about ideas with other other students that's great and encourage but you shouldn't be sharing or looking at each other's code word necessarily you can talk about things conceptually but you shouldn't be turning on the same code as as people you work with secondly on the flip side don't share your solution code to other people this means don't 
post on Piazza don't give it to your to your roommates don't throw down big puzzles because that will make it easier and more tempting for other people to violate collaboration policy and review charities and third when you turn in assignments we have to indicate who you work with what would in turn in your assignment and will be more equal in instructions on that later and in general training in something late or incomplete is much better than potentially violating collaboration policies and not just in this course but more generally any questions about that okay so then of course philosophy what are we here for right all right yeah this class is not learn pipework too many games this class is dogs learn deep learning in ten lines of Python you can find tutorials on the internet that tell you how to do that that's what you want but I think that learning about deep learning in that way does yourself a huge disservice I think that you want to focus on fundamental concepts we want us we want you to understand not just the latest and raised API level API is the raft answer club we want you to understand the fundamentals of how those these guys might have been implemented why I didn't let it the way they work such that when faced with the next piece the next bit of technological tools you you understand the fundamentals and you do them yourself what that means is that you'll be writing a lot of backdrop code yourself in this course I think that it's very important for people to learn how to compute gradients and how and how the computation of gradients affects the overall flow of learning pressure system so for the first several assignments you'll be using no autographs you'll be using Java tensorflow still be deriving and in fluent in your own background your own gradient computations and you will be a better computer scientist born so given that we prefer to move afraid to write inventory for the purpose of pedagogy we encourage you we're going to encourage you to 
write and stretch rather than relying on existing open source implementations again this will make you a better keep learning practitioners that said we're also practical we will well we're going to give you lots of tools and techniques for debugging and training big neural networks because it's tricky when you can't rely on that find that ten lines of code wrapping around lots of stuff so we're going to talk a lot about how you can practically get these things to work what tips and tricks should you be using when developing and debugging and training their networks we will use state of the art software tools like I taught you tensor flow but only after you've earned your wings by writing a lot of Badcock code yourself we're going to focus on state of the art a lot of the material that we cover in this course despite the long historical context we talked about in this lecture but the majority of the actual concrete implementations and concrete results we've discussed that we'll talk about in this course have been discovered in the last five to ten years so this has a couple of interesting applications for implications for teaching a course that means that there aren't good textbooks for this stuff that means that no you there might not be great resources outside of original piece of original research papers for learning about this stuff so that's going to be maybe a bit of us a bit of a struggle and a bit challenging but on the upside I think it's really exciting to be learning about such deep in our material in a classroom setting so also in philosophy we want you could also have a little bit of fun so we'll be Petzl we'll be covering some some sort of fun topics like image captioning that you've got a good laugh when I put it up here couple slides ago and some B's on a deep dream and our artistic style transfer that lets you use neural networks to generate new pieces of art not just improving our lives so in terms of the course structure the first the bird the 
take down the first half the course will focus on fundamentals we'll talk about the details of how to implement different types of neural networks we'll cover building a fully connected commotion all recurrent net neural networks will talk about pug debug them how to implement them how to train them and they'll be very detailed and by the end of this module is goal basically implementing your own convolutional neural network system from scratch now in the second half of the course we're going to shift a little bit in flavor and here we're going to focus more on applications and more emerging sort of research topics so around that point in the course you'll notice the bit of shift in tone in the lectures so they will become a little bit less detail will sometimes skip over some of the low-level details and perhaps refer to put your papers if you need to know those details and instead the lectures will more focus on giving you an overview of how people are how these different things are used across different applications in computer vision and beyond well in the second half we'll talk about things like 100 detection image segmentation 3d vision videos we'll talk about attention formers vision language generative models I think it's gonna be a lot of fun but because there's a lot to get through first homework assignment will be out over the weekend that will cover basically an intro and a warm-up to the collab and height or the environment that we'll be using for our quarter so this should not this is not intended to be difficult or a long assignment this should be your home over the weekend and whenever we get it out it'll be Zoo will be after that and everything you need to do this first homework assignment will get through the con based lecture so with that said welcome to the class come back on Monday when we'll talk about English concertina [Applause] you
Deep Learning for Computer Vision — Lecture 13: Attention
Okay, it seems like the microphone is not working today, so I'll just have to shout — can anyone hear me in the back? Okay, good. So today we're back at Lecture 13. Today we're going to talk about attention, and I'll try to avoid making bad jokes about paying attention.

Last time, before the midterm, we left off by talking about recurrent neural networks. Remember that we saw recurrent neural networks as this really powerful new type of structure that we could insert into our neural network models to let us process sequences of vectors for different types of tasks, and we saw that by moving from feed-forward neural networks to recurrent neural networks that could process sequences, we could build neural networks to solve all kinds of new tasks. We saw examples like machine translation, where you want to transform one sequence into another sequence, or image captioning, where you want to predict a sequence of words to write a natural-language caption for an input image. Today we're going to pick up pretty much where we left off at the end of the recurrent neural networks lecture, because that leads very naturally into the discussion of attention.

To recap, let's think back to the problem of sequence-to-sequence prediction with recurrent neural networks. The task is that we receive some input sequence x1 through xT and we want to produce some output sequence y1 through yT'. Here maybe the x's are the words of a sentence in English and the y's are the words of the corresponding sentence in Spanish, and we want to do translation that converts a sentence in one language into a sentence in another language. Of course, because different languages use different words for different concepts, the two sequences might be of different lengths — we represent that here by saying the input sequence x has length capital T and the output sequence y has length capital T prime.

Last lecture we talked about a sequence-to-sequence recurrent neural network architecture for this kind of translation problem. Recall how it works: we use one recurrent neural network, called the encoder, which receives the x vectors one at a time and produces this sequence of hidden states h1 through hT. At every time step we apply the recurrent neural network update f_W, which receives the previous hidden state and the current input vector x_t and produces the next hidden state, so we can apply this one recurrent neural network to process the whole sequence of input vectors.

Going a little more into detail: once we've processed the input sequence, we want to somehow summarize the entire content of the input sentence with two vectors — because remember, we're going to use another recurrent neural network as a decoder to generate the output sentence one word at a time. One is the initial hidden state of the decoder network, shown here as s0, and the other is some context vector c, shown here in purple, that will be passed to every time step of the decoder. One common implementation is to set the context vector equal to the final encoder hidden state, and another common choice is to predict the initial hidden state s0 with some kind of feed-forward layer — a fully-connected layer or two.

Now in our decoder, we start by feeding in some start token to kick off the generation of the output sequence. At the first time step, the decoder receives the initial hidden state s0, the context vector c, and the start token y0, and it generates the first word of the output sentence. This repeats over time: at the second time step, the decoder again inputs the hidden state from the previous time step, s1, the first word of the output sentence, and again this same context vector c. We can run this for multiple time steps, and you can see we've translated the input sentence "we are eating bread" into its corresponding Spanish translation — which I'm hoping I'm getting right, because I haven't taken Spanish in a while.

One thing to point out about this architecture is that the context vector, shown in purple, serves the very special purpose of transferring information between the encoded sequence and the decoded sequence. The context vector is supposed to somehow summarize all of the information that the decoder needs to generate its sentence, and it's fed in at every time step of the decoder. This is a fairly reasonable architecture, and we can imagine it working when the sequences are fairly short — any example that fits on a slide is only a couple of elements long — but in practice we might want to use this type of sequence-to-sequence architecture to process very long sentences or very long documents. For example, what if we were translating not a short simple sentence but an entire paragraph, or an entire book of text? Then this architecture becomes a problem, because all of the information about the input sequence is being bottlenecked through this single context vector c. While a single vector might reasonably represent a single short sentence, it feels unreasonable to expect the model to pack an entire paragraph's or book's worth of content into one context vector c.

To overcome this shortcoming, we want some mechanism that doesn't force the model to bottleneck all of its information into a single vector c. There's a simple workaround we might imagine: what if, rather than using a single context vector c, we compute a new context vector at every time step of the decoder network? Then we can allow the decoder to construct a new context vector that focuses on different parts of the input sequence at every time step. That seems like a pretty cool idea, and the way we formalize this intuition is with a mechanism called attention.

Here we still use a sequence-to-sequence recurrent neural network: a recurrent neural network as the encoder that encodes the input sequence as a sequence of hidden states, and a decoder network — again a recurrent neural network — that produces the output one word at a time. But now we add an additional mechanism into the network, called an attention mechanism, which allows it to recompute a new context vector at every time step of the decoder. The encoder looks exactly the same — we still produce the sequence of hidden states of the input sequence and predict the initial hidden state of the decoder — but here's where things diverge from the previous sequence-to-sequence architecture.

What we're going to do is write down some alignment function, shown in the upper right-hand corner as f_att, which we might parameterize as a fully-connected neural network. You should think of this alignment function as a little tiny fully-connected network that inputs two vectors — the current hidden state of the decoder and one of the hidden states of the encoder — and outputs a score saying how much we should attend to that hidden state of the encoder, given the current hidden state of the decoder. We'll use this information to construct a new context vector at every time step of the decoder.

Concretely, at the very first time step of the decoder, we've got our initial decoder hidden state s0, and we use the f_att function to compare s0 with h1. That gives us the alignment score e11, which is basically how much the model thinks the hidden state h1 will be needed for predicting the word that comes after decoder state s0. The alignment function outputs scalars: a scalar e11, a scalar e12 — which says how much we want to use the second hidden state of the encoder when producing the word at the first time step of the decoder — and so on. We run this function for every hidden state of the encoder, passing in the current hidden state of the decoder, so now we've got an alignment score for each hidden state in the encoded sequence.
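To make the alignment scoring concrete, here is a minimal NumPy sketch of computing one score per encoder hidden state. The specific form of f_att (a one-hidden-layer MLP over the concatenated decoder and encoder states) and all the names and dimensions here are illustrative assumptions, not the lecture's exact parameterization:

```python
import numpy as np

def f_att(s, h, W1, W2):
    """Hypothetical tiny fully-connected alignment network: scores how
    relevant encoder state h is, given the current decoder state s."""
    z = np.tanh(W1 @ np.concatenate([s, h]))  # hidden layer of the MLP
    return W2 @ z                             # scalar alignment score

rng = np.random.default_rng(0)
D = 4                                          # hidden-state dimension (assumed)
H = [rng.standard_normal(D) for _ in range(3)] # encoder hidden states h1..h3
s0 = rng.standard_normal(D)                    # initial decoder hidden state
W1 = rng.standard_normal((8, 2 * D))           # MLP weights (illustrative sizes)
W2 = rng.standard_normal(8)

# One alignment score e_{1,i} per encoder hidden state, given s0
e = np.array([f_att(s0, h, W1, W2) for h in H])
print(e.shape)  # one arbitrary real-valued score per input position
```

Note that the scores `e` are unnormalized real numbers at this point; turning them into usable weights is the next step in the lecture.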
but these alignment scores are just arbitrary real numbers because they're they're getting spit out from this feed forward neural network f sub att so then the next step is that we'll apply a soft max operation to convert that um those set of alignment scores into a probability distribution um right so remember the softmap this is the exact same softmax function that we've seen in the image classification setup um that's going to in input a vector of arbitrary scores and then output a probability distribution with the intuition so that means that higher scores will give higher probabilities um each of the outputs will be real numbers between zero and one and all of the output probabilities will sum to one so basically now we've converted we predicted this um this probability distribution that says for the first hidden state of the decoder how much do we want to use how much weight do we want to put on each hidden state of the encoder so now that we've got these attention and these uh these this color distribution is called attention weights because this is how much we want to weight each of these hidden states so now the next step looks a little bit scary but it's basically we're going to take a weighted sum of each of the hidden states of the encoded sequence and we'll sum them up according to these predicted probability scores and this will produce our context vector c1 that we're going to use um for the first time stuff of the decoder network um and what this basically means is that we've used this the network has sort of predicted for itself how much weight do we want to put on each of the each of the hidden states on the input sequence and we can dynamically shift that weight around for every time step of the decoder network that will allow us to predict a different context vector for each decoder time step and now we're and now um our d now we can finally run the first time step of our decoder or current neural network so this decoder network will input the 
context vector that we just computed, along with the first word, which is the start token that I forgot to put on the slide, and then it will output the first predicted word of the sentence. The intuition here is that when we're trying to generate the words of the output sentence, each word of the output probably corresponds to one or multiple words of the input sentence, so we dynamically generate a context vector that allows the decoder network to focus on different parts of the encoder's output at each time step. Maybe the concrete intuition is that when we're translating this particular sentence, "we are eating bread," the first word we need to generate is "estamos," which means something like "we are doing something" in Spanish. So the intuition is that maybe we want to place relatively high weights on the first two words of the English sentence and relatively low weights on the latter two words, and that allows the decoder network to focus on the parts of the input sequence that are needed for producing this particular word of the output sequence.

The other point is that these are all differentiable operations. We are not telling the neural network which things it is supposed to pay attention to at each time step of the decoder; instead, we're letting the network decide for itself which parts it wants to look at. Because all of the operations involved in this computational graph are differentiable, we don't need to supervise the network's attention. We can just write down this whole big mess of operations as one big computational graph and then backpropagate through each of these operations, which allows all parts of the network to be jointly optimized, and the network will decide for itself which parts of the sequence it wants to focus on.

At the next time step we repeat a very similar procedure. Now we've got a new hidden state of the decoder network, s_1. We again take this decoder hidden state s_1 and compare it to each hidden state of the input sequence, producing a new set of alignment scores e_{2,1}, e_{2,2}, and so on. Then we again run the softmax to get a new probability distribution over the input sequence, which now tells us, while generating the second word of the output sequence, which words of the input sequence we want to focus on. We again use these predicted probabilities to produce a new context vector, which is again a weighted sum of the hidden states of the input sequence, weighted by this new probability distribution that the model has predicted at the second time step of the decoder. Then we run the next time step of the decoder RNN, which receives the second context vector and the previously generated word, and produces the second output word. The intuition here is that "comiendo" means something like "eating" or "are eating," so when generating the second word, maybe the model prefers to put more attention weight on the time steps for "are" and "eating," and chooses to ignore "we" and "bread," which are not relevant for producing this particular word of the output sequence. Again, this is all trained end to end and differentiably: we're not telling the model which parts to focus on, but this gives the model the capacity to choose to focus on different parts of the input when generating the output sequence.

We can unroll this for multiple time steps, and then we've got this example of sequence-to-sequence learning with attention. This basically overcomes the bottlenecking problem we saw with the vanilla sequence-to-sequence model, because now, rather than trying to stuff all of the information about the input sequence into one single vector and then using that one vector at every time step of the decoder, we're giving the decoder the flexibility to generate its own new sequence of context vectors that vary with each time step of the decoder network. Again, the intuition is that if we're working on very, very long sequences, this lets the model shift its attention around and focus on different parts of the input for each different part of the output.

One very cool thing you can do is the following (ah, my slide is completely messed up; this is the image that PowerPoint refused to display for me). We've trained a sequence-to-sequence recurrent neural network that inputs a sequence of words in English and outputs a sequence of words in French, or maybe it was the reverse, I can't actually remember because the figure is not clear, but we're doing a translation task between words in English and words in French. This model has been trained with sequence-to-sequence attention, so at each time step of the decoder network it generates a probability distribution over all the words of the input sequence, which allows it to focus its attention on different words of the input while generating the output. We can actually use these attention weights to gain some interpretability into what the model is doing, or how the model is making its decisions, when performing this translation task. You can see we've got this English sentence at the
top: "The agreement on the European Economic Area was signed in August 1992." Down at the bottom we've got the corresponding sentence in French, which I will not attempt to pronounce because I would horribly butcher it. What's very interesting is that this diagram shows us, for every time step of the output, the attention weights the model chooses to produce, and there's a lot of interpretable structure in this figure. The first thing is that up in the upper left corner we see a diagonal pattern in the attention weights. That means that when generating the word "the," the model put most of its attention on the French token "l'" (the one with the apostrophe), and when generating the English word "agreement," it put most of its attention on the French word "accord," which I guess means agreement. What this means is that while generating the first four words of the output sequence, those correspond in a one-to-one manner to the first four tokens of the French sequence, and this correspondence was discovered by the model for itself; we're not telling the model which parts of the sentences align to which other parts. But the really interesting thing happens with "zone économique européenne": the English is "European Economic Area," but the corresponding part of the French sentence has the same three words in a different order. I'm guessing "zone" in French corresponds to "area" in English, and you can see that "économique" corresponds to "economic," and "européenne" has the corresponding structure as well. So the model has figured out that these three words correspond with their order flipped in the sentence, and this sort of trend continues throughout the entire sentence. What's really cool about these attention mechanisms is that now we've got some interpretability, some insight into how the model is choosing to make its decisions, which is something we haven't really been able to get before with other types of neural network architectures.

So now we've got this pretty cool setup: an attention mechanism that lets the model generate sequences and, at each time step of generation, choose to look at different parts of the input sequence. But there's something we can notice about the mathematical structure of this model, which is that the attention mechanism we built does not actually care about the fact that the input is a sequence. It just so happens that in this task of machine translation our input is a sequence and our output is a sequence, but for the purposes of the attention mechanism we never used the fact that the input vectors form a sequence. So in principle we can use the exact same flavor of attention mechanism to build models that attend to other types of data which are not sequences. Here on the left we're supposed to input this image of a bird, and then we run it through a convolutional neural network. Remember that if we take the outputs of the final convolutional layer, we can interpret them as a grid of feature vectors; so the second image you should imagine as a grid of feature vectors, with a different feature vector corresponding to each spatial position of this invisible image of a bird. Now we're going to use this exact same attention mechanism that we used with the sequence model: we use that grid of vectors to predict the initial hidden state for a decoder RNN, and this decoder RNN will use that initial hidden state s_0
to compute, using our comparison function f_att, a pairwise alignment score between every position in the grid of features and the hidden state itself. Again, each of those outputs is a scalar that is high when we want to put high weight on that part of the grid and low when we want to give low weight to it, and that gives us a grid of alignment scores. So above the grid of features you should imagine a second grid of alignment scores, where each element again contains a scalar produced by this f_att mechanism. The next image we have to imagine is that this grid of attention scores passes through a softmax operation that normalizes all of those scores into a probability distribution summing to one; this is basically the model predicting a probability distribution over all positions in the input image that it will choose to attend to when generating the first word of the output caption. Then again we take a weighted combination of that grid of vectors, a linear combination weighted by these predicted attention probabilities, and this produces the first context vector c_1 that we use when generating the first token of the output sequence. We pass that through the first time step of the RNN decoder and generate the first word, "seagull" (so you should imagine a seagull here). Then we repeat the process: we compare the next hidden state s_1 with every position in the grid of features to produce another grid of alignment scores, again normalized through a softmax to give a new distribution over all positions in the image, which gives a new context vector for the second time step of the decoder that we use to generate the word "over." So now, clarifying this picture in your mind, we continue forward and eventually generate the caption "seagull over water <STOP>." You can see that the structure of this model is very similar to what we did in the sequence-to-sequence translation case: we generate this output sequence, which is a caption, one word at a time, and at every time step we allow the model to generate its own new context vector through a weighted recombination of the grid of features from the input image. So this was the bird image you were supposed to be imagining. What you can see here is that the model is receiving this input bird image, which you can now finally see, and generating this output sequence of words one word at a time: "A bird is flying over a body of water." At every time step of generating this output sequence, the model predicts attention weights that give a probability distribution over the positions of the grid of features produced by the convolutional network processing the input image. You can see that when predicting the words "bird," "flying," and "over," the model tends to put attention on the positions in the input image that correspond to the bird, and when it predicts the word "water," the predicted attention weights are now ignoring the bird and instead looking at the parts of the image that have water in them. Yeah, question? So, the second row of the picture is something we were not supposed to talk about today, and it was actually not on the PowerPoint slide, but since you asked: in the version of attention that we've seen so far, the model is using this weighted recombination of features at every point
in the input image. But what if instead, rather than having a weighted recombination, we wanted the model to select exactly one position in the input grid of features, and just use the features from exactly one position in the input? It turns out that's what the second row is doing, but training the model to do such a thing requires some additional techniques that we'll cover in a later lecture, so that part was actually cropped out of the image on the slide.

The next slide was supposed to be this figure, which gives a couple more qualitative examples of this model using its attention mechanism to focus on different parts of the input image when generating the text of the output. For example, when the model is looking at this image of people playing frisbee and generates the caption "A woman is throwing a frisbee in the park," you can see that when generating the word "frisbee," the model chooses to put its attention on the portion of the image that actually corresponds to the frisbee. Was there some question over here? So here, as you would have seen if your imagination were strong enough, we actually have a grid of features being predicted by the convolutional network, where h_{i,j} is the (i, j) feature in the grid of features predicted by the network; it is one position in the feature map. This was actually supposed to be a three-by-three grid: h_{1,1}, h_{1,2}, h_{1,3}, h_{2,1}, and so on up through h_{3,3}. The idea is that we predict a probability distribution over all the positions in that grid. Where the probability is high, we want to put a lot of attention or emphasis on the features from that position, and where the attention weights are low, we put very little emphasis on the features at that position. So c_t is a vector, h_{i,j} is a vector, and e_{t,i,j} is a scalar telling us how much we want to emphasize the vector h_{i,j} at time step t. Then a_{t,i,j} is a normalized scalar telling us how much we want to emphasize the vector h_{i,j} at time step t, and c_t is a vector where we sum over all positions (i, j) in the image: a_{t,i,j} is a scalar that multiplies the vector h_{i,j}, so this is a sum of vectors, and c_t will be a vector.

One intuition for why we might want to do image captioning this way is that it actually has a biological motivation as well. Imagine on the left here a diagram of the human eye. The human eye is a sphere with a lens at one side; light comes in through the lens and is projected onto a region at the back of the eye called the retina. The retina contains photosensitive cells that actually detect the light, and the signals from those cells get sent back to your brain and interpreted as the stuff that we see. Now, it turns out that not all parts of the retina are created equal: there's one particular region in the very center of the retina, called the fovea, which is much more sensitive than all other parts of the retina. The graph on the right is supposed to show visual acuity on the y-axis against position on the retina on the x-axis. At the far parts of the retina you have very low visual acuity; as you move toward the fovea, the visual acuity goes up and up and up; right at the fovea you've got very, very high visual acuity; and moving away from the fovea it drops down again toward the edge of the retina on the other side, where you again have very low visual acuity. What this means is that there's only a very small region of your eye that can actually see the world in high definition. You have an intuitive sense of this: if you put something right in front of you, you can see it in good detail, but as you move your hand off to the side, you can tell there's something there without really seeing it; maybe it's hard to count how many fingers are way over here. That's because different parts of your retina have different amounts of sensitivity. Now, to account for this design of the human retina, your eyes are actually constantly moving around very rapidly, on time scales of something like tens of milliseconds; even when you feel like you're looking at something stationary, your eyes are constantly moving around, taking in different parts of the visual scene. Those very rapid eye motions are called saccades, and they are not something you have conscious control over, but these saccade mechanisms are a way the human body tries to overcome this design problem of having very high sensitivity in only a very small portion of the retina. And image captioning with RNNs and attention is actually very loosely inspired by these saccadic motions that the human eye makes: when you look at something, your eye constantly looks around different parts of the scene at every moment in time, and similarly, when we're using an image captioning model with attention, at every time step the model is also looking around at different positions in space very rapidly, sort of very
loosely like the saccadic motions that your eyes make. The first paper that introduced this idea was called "Show, Attend and Tell": you show the model some input image, it attends to different parts of the image, and then it tells you what it saw by generating words one at a time. This was such a catchy title that a lot of other people started using titles like it when building models that use attention in different flavors. So we had "Ask, Attend and Answer" and also "Show, Ask, Attend, and Answer," and you can kind of guess what these are doing: these are models that look at an image, are presented with the text of a question about that image, attend to different parts of the image or different parts of the question, and answer the question. We've got "Listen, Attend and Spell," which processes the raw audio waveform of some piece of sound and then generates letters to spell out the words that were spoken, and as it generates the output one piece at a time, it attends to different temporal positions in the input audio file. We had "Listen, Attend, and Walk," which processes text and then outputs decisions about where a robot is supposed to walk in some interactive environment. We have "Show, Attend and Interact," which also outputs robot commands, and "Show, Attend and Read." These paper titles got to be very trendy after a while, but basically this is to say that the mechanism of attention we've seen so far is actually very, very general. We've seen it used for the machine translation problem, where we want to convert one sequence into another sequence, and it can also be used for all these other tasks. Basically, any time you want to convert one type of data into another type of data, and you want to do it one time step at a time, you can often use some kind of attention mechanism to cause the model to focus on different chunks of the input while generating each part of the output. That gives us this very general mechanism of attention, and the models all work exactly the same way, so you can kind of guess how they work just by reading the titles of these papers. Oh man, this image finally showed up, and you can see the bird; it's great.

Okay, so now we've seen this mechanism of attention used for a bunch of different tasks. But what do you do in computer science once you've got something that seems useful for a lot of different tasks? You try to abstract it away, generalize it, and apply the idea to even more types of tasks. So I want to step through a couple of steps by which we can start from this mechanism of attention that we've now used for image captioning and machine translation and, by generalizing it just a little bit, end up with a very powerful new general-purpose layer that we can insert into our neural networks. One way to reframe the type of attention mechanism we've seen so far is that it inputs a query vector, which we'll call q; in the previous attention mechanisms, that took the place of the hidden state vectors we had at each time step of the output. We also have a collection of input vectors x, which correspond to the set of hidden vectors that we want to attend over, and we have some similarity function f_att that we use to compare the query vector to each vector in our collection of input vectors. The computation of this attention mechanism, which we've now seen several times, is that we produce a vector of similarities
by running this uh attention uh this f att attention uh function um on the query vector and on each element of the the input vectors and then this will give us these these unnormalized similarity scores that we will run through a softmax function to give now a normalized probability distribution over each of the input vectors x and now the output now we will output a single vector y um that is a weighted combination of the vectors x in the input okay so now the first generalization is that we want to change the similarity function so previously we had written down the similarity function is this kind of a general f sub att function but that's indeed what early papers on attention had done but it turns out it's much more efficient and works just as well if we use the simple dot product between vectors as our similarity function um and that's going to simplify things a bit um so now uh rather than running a neural network to compute these similarities we can compute all these similarities all at once with some kind of matrix multiplication and that will be much more efficient but then there's actually a little detail that people use rather than using the dot product instead people often use what's called a scaled dot product for computing these similarity scores so now when computing the similarity score between the query vector q and one of our input vectors x i um the the similarity score will be uh q dot product with x i divide by square root dq where dq is the dimensionality of those two vectors and now why would you want to do that well the intuition is that we're going to take we're about to take those similarity scores and we're about to run them through a slot max and we know that if elements of the soft max are really large then we'll have a vanishing gradient problem right if there's one element of those uh ei of in that attention weights e that is much much higher than all the others then we will end up with a very very highly peaked soft max distribution 
that will give us gradients very close to zero almost everywhere and that might make that might make learning challenging so what we want to do is um and another problem is that as we consider vectors of very very high dimension then their dot products are likely to also be very high in magnitude so as a concrete example consider computing the dot product of two vectors a and b both of dimension d and suppose that these are constant vectors now the dot product of those two now remember the dot product of two vectors is the product of their magnitudes multiplied by the cosine of the angle between the vectors right um so then if suppose that we have these two constant vectors um then the the the magnitude of one of these vectors now is going to scale with the square root of the dimension of the vector which means that if we are going to work with neural networks that have very very large dimensions then then naturally we would expect these dot products of very high dimensional vectors to give rise to very high values so to counteract that effect we're going to divide the dot product by the square root of the dimension to counteract this of this effect by which the dot product tends to scale as we scale up the dimension and that will give us nicely more nicely behaved gradients as we flow through the softmax function okay so then the next generalization is that we want to allow for multiple query vectors right so previously we always had a single query vector at a time right at each time step of the decoder we had one query and then we use that query to generate one probability distribution over all of our input vectors well now we'd like to generalize this notion of attention and have a tension that has a set of query vectors so now our inputs now we input a set of query vectors q and a set of input vectors x and now for each query vector we want to generate a probability distribution over each of the input vectors so then we can compute all of these similarities and 
then we need to we need to compute a similarity between each query vector and each input vector and because we're using the scale dot product as our similarity function we can compute all of these similarity scores all simultaneously using a single matrix multiplication operation um then remember we want to compute for each query vector we want to compute a distribution over the input vectors so now we can achieve this by by doing a softmax over these output attention scores where we take the softmax over only one of the dimensions and then we want to generate our output vectors as now now we want to generate previously we were generating one output vector now because we have a set of query vectors we want to generate one output vector for each query vector right where the output vector for query q i will be a weighted combination of all of the input vectors and they will be weighted by the distribution that we predicted for that query vector um and again we can act if you're kind of careful with your matrix shapes you can actually compute all of these linear combinations all simultaneously using again a single matrix multiplication operation between these predicted attention weights a and the the input vectors x is this are we clear to this point okay so then the next generalization is the way that we use the input vectors right so if you look at this formulation we're actually using the input vectors in two different ways right now right first we're using the input vectors to compute the attention weights by comparing each input vector with uh each query vector and then we're using the input vectors again to produce the output right and this these actually are two different functions that we might want to serve so what we can do is separate this input vector into a key vector and a value vector right so what we're going to do is we're still going to input a set of query vectors q and a set of input vectors x but now rather than using the input vectors directly 
for these two different functions inside the layer, we instead introduce a learnable key matrix W_K and a learnable value matrix W_V, and we use these learnable matrices to transform the input vectors into two new sets of vectors, the keys and the values, which serve the two different purposes in the computation of the layer. To compute the similarity scores, we compare each query vector with each key vector; and when computing the outputs, each output is a weighted combination of the value vectors, weighted by the predicted similarity scores. The intuition here is that this gives the model more flexibility in how it uses its input data, because the query vector is kind of the model saying "I want to search for this thing," and then it needs to get back information which is different from the thing it already knew. It's like when you type into Google "how tall is the Empire State Building": that's your query, and Google compares your query to a bunch of web pages and returns results, but you don't actually care about text that merely matches your query, because you already know the query. You want to know something else about the data that is related to the query in some way. In a web search application you retrieve according to the query "how tall is the Empire State Building," but the data you want back is a separate piece of data, the actual height in meters, which maybe occurs in the text next to the matched text. Is that distinction clear, why we might want to separate the key vector and the value vector? It gives the model the flexibility to use its inputs in two different ways.
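To make these pieces concrete, here is a minimal sketch of the generalized attention layer in plain Python. Everything is illustrative: the numbers are made up, and the key and value matrices Wk and Wv are fixed stand-ins for what would be learned parameters. The sketch shows the scaled dot product, the per-query softmax, and the separate key/value roles just described:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [v / s for v in exps]

def matvec(W, x):
    # multiply matrix W by vector x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def attention(queries, inputs, Wk, Wv):
    # Project each input vector into a key and a value
    keys = [matvec(Wk, x) for x in inputs]
    values = [matvec(Wv, x) for x in inputs]
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Scaled dot-product similarity between this query and every key
        e = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        a = softmax(e)  # one probability distribution per query
        # Output: attention-weighted sum of the value vectors
        y = [sum(a[j] * values[j][dim] for j in range(len(values)))
             for dim in range(len(values[0]))]
        outputs.append(y)
    return outputs

# Tiny made-up example; identity projections keep the numbers easy to follow
Wk = [[1.0, 0.0], [0.0, 1.0]]
Wv = [[1.0, 0.0], [0.0, 1.0]]
xs = [[2.0, 0.0], [0.0, 2.0]]
qs = [[2.0, 0.0], [0.0, 2.0]]
ys = attention(qs, xs, Wk, Wv)
print(ys)  # each output leans toward the value whose key matches its query
```

Note that there is one output vector per query, and that each output sums up the value vectors rather than the raw inputs; in a real layer, Wk and Wv would be trained so that the key projection is good for matching against queries while the value projection carries the information to be retrieved.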
okay so then this is kind of a complicated operation so we have a picture and this picture actually shows up um so we can kind of visualize this operation or we've got this set of query vectors coming in here at the bottom of q1 and q4 and then we've got the set of input vectors x1 to x3 coming in on the left and now the first thing we do is for each input vector we apply the p matrix to compute a key vector for each input and now we compare each key vector to each query vector and this gives us this matrix of unnormalized similarity scores where now for each out where each element in this similarity matrix is this a scale dot product between one of the key vectors and one of the query vectors and now the next thing we do is these uh these attention scores are unnormalized so the next thing we want to do is for each query vector we want to generate a probability distribution over each of the key vectors or each of the inputs so the way that we do this is we perform a softmax operation over the vertical dimension of this of this alignment matrix e and this gives us our alignment scores uh a so and now because we've done the soft max over the vertical direction now each column of this alignment matrix gives us a probability distribution over all of the inputs x1 x2 x3 so now the next thing we do um is we've got our alignments now we need to compute the output so then we um again transform the input vectors um into another into the we transform each input vector into a value vector that gives us these value vectors v1 v2 v3 in purple and then we're going to perform a weighted combination of the value vectors according to these computed alignment scores so then what we do is that um for example when computing v1 um we'll will uh take a product going going this way and then take a sum going this way um and what that means is that actually that's not quite right right so what we want to do is we want to take a v1 multiplied by a1 v2 multiplied by a12 b3 multiplied by a13 
and then sum going up the column. so it's kind of like we take the value vectors, match them up with each column, and then take a sum going up, and what you can see is that this produces one output vector y for each query vector q. these output vectors y will be linear combinations of the value vectors, where the weights are determined by the dot products between the key vectors and the query vectors. okay, so this is an attention layer, and it's a very general layer that you can imagine inserting into your neural networks. anytime you have two sets of data, one that you want to think of as the queries and one that you want to think of as the inputs, you can imagine inserting this attention layer that computes this all-pairs combination where each query gets combined with each input. now one special case of this is the self-attention layer, where we have as input only one set of vectors, and what we want to do is compare each vector in our input set with each other vector in our input set. the way that we do this is we add another learnable weight matrix, so now rather than taking the query vectors as input, we're going to predict the query vectors by transforming the input vectors one at a time, and then everything else is the same as what we've already seen. if you look at how this works pictorially on the right, we receive as input this set of input vectors x1, x2, x3, and for each input vector we convert it into a query vector by multiplying with this query matrix, and similarly we also convert every input vector into a key vector using the separate key matrix. now we compute the alignment scores that give us the similarity between each key vector and each query vector, which are then the pairwise similarities between each pair of inputs in our input sequence x. now we do the
exact same thing: we do a softmax to compute a distribution over each column, and then we take our input vectors, convert them again into value vectors, and then perform this weighted combination to produce this set of output vectors y. now this is a very general mechanism. this is basically a whole new type of neural network layer, because it's inputting a set of vectors and it's outputting a set of vectors, and internally what it's doing is comparing each vector with each other vector in the input, in a nonlinear way that is decided by the network for itself. actually, one really interesting thing about the self-attention layer is to think about what happens if we change the order of the input vectors. so what if we have the same vectors, but rather than presenting them in the order one, two, three, we present them in the order three, one, two? what's going to happen? we're going to compute the same key vectors and the same query vectors, because the computation of the key vectors and the query vectors is independent for each input, so we'll end up computing the exact same key vectors and the exact same query vectors, just permuted in the same way that the input vectors were permuted. then similarly, when we go to compute our similarity scores between these permuted vectors, we will end up computing all the same similarity scores, but now this matrix will be permuted, because all the rows and columns are permuted, but the values in the matrix are the same. and similarly the attention weights that we compute will all be the same but permuted, the value vectors will all be the same but permuted, and the output vectors will all be the same but permuted. so what this means, one technical way to talk about this, is that this self-attention layer or
self-attention operation is permutation equivariant. that means that if we take our input vectors x and then apply some permutation s to them, the output is going to be the same as applying the layer to the unpermuted inputs and then permuting the outputs. does that make sense? another way to think about that is that this self-attention layer doesn't care about the order of its inputs; it is somehow a new type of neural network layer that doesn't care about order at all, it just operates on sets of vectors. so one way to think about what this self-attention layer is doing is that it gets a set of vectors, compares them all with each other, and then gives you another set of vectors, and this means that the layer doesn't know in what order the vectors appear when it processes them. but in some cases you actually might want to know the order of the vectors. for example, if you were imagining some kind of translation or captioning task, then maybe the further you get along in a sequence, the more likely it becomes that you should generate a period or generate an end token. so for some types of tasks it might actually be a useful signal to the model to let it know which vectors appear in which positions, but because this self-attention layer is permutation equivariant by default, it has no way of telling which vector is the first one and which vector is the last one. so then, as kind of a hack, we can recover some sensitivity to position by appending to each input vector some encoding of its position. there are different ways this can be implemented; one way is that you just learn a lookup table, adding learnable weights to the network: we're going to learn a vector for position one, learn a vector for position two, learn a vector for position three, and so on. then, when you perform the forward pass of your
network, you're going to append the learned position-one vector onto the first input vector, append the learned position-two vector onto the second input, and so on and so forth. this gives the model the ability to distinguish which parts of the sequence are at the beginning and which parts of the sequence are at the end. these are sometimes called positional encodings, and you'll sometimes see them used in these self-attention models. now another variant on this self-attention layer that we'll sometimes use is called a masked self-attention layer. here the intuition is that when doing some kinds of tasks, we want to force the model to only use information from the past. if you remember, in a recurrent neural network this sort of happened by design, by the way that we have this progression of hidden states. for some kind of task like language modeling, we might want to ask the network to predict the next token given all of the previous tokens, and using this default self-attention layer, the model is allowed to attend to every vector when producing every output vector, so that won't work for this kind of language modeling task. but we can fix that by adding some structure to this attention matrix. if we want, for example, to force the model to only use information from the past, then we can manually intervene in this similarity matrix e: what we do is just put in a minus infinity at every position where we want to force the model not to pay attention. so in this example, when producing the output vector for the first query q1, we want the first output to only depend on the first input, so we block those parts of the matrix by writing in negative infinities. then, when we compute the softmax going up the column, the exponential of minus
infinity will give us a zero attention weight at those positions in the attention weight matrix above. so this is the sort of structure you'll see when you want this behavior; this is called a masked self-attention layer, because we're masking out which parts of the input the model is allowed to look at when producing different parts of the output, and this is used very commonly for these language modeling tasks where you want to force the model to always predict the next word given all the previous words. another variant of this self-attention layer you'll sometimes see is multi-headed self-attention. what this means is that we'll choose a number of heads h, and run h self-attention layers independently in parallel. given our set of input vectors x, if each vector has dimension d, then we'll split each of our input vectors into h chunks of equal size and feed the chunks into these separate parallel self-attention layers. each parallel self-attention layer will produce some set of outputs, one output for each input, and then we'll concatenate those outputs to get the final output from this multi-headed self-attention layer. this multi-headed self-attention layer is actually used in practice quite commonly, and here there are basically two hyperparameters that we need to set when setting up one of these multi-headed self-attention layers. the input and output dimensions are kind of fixed: the dimension of the input vectors x is the input dimension of our data, and the final dimension of y is the output dimension that we want to predict. but internally in the model there are two hyperparameters we need to set: one is the dimension of the internal query and key vectors, dq, which is a hyperparameter we can set, and the other hyperparameter is the number of
heads that we want to use, h. when you see self-attention layers used in practice, you'll see people report the overall width or size of each layer, which means the query dimension dq, and also report the number of self-attention heads h. so then, as an example, this self-attention layer is a brand new primitive that we can imagine inserting into our neural networks; it's basically a whole new type of layer that we can slot into our networks. as an example, we can see how to build a convolutional neural network that incorporates one of these self-attention layers. here we can imagine some convolutional network taking a cute cat as input and producing some grid of feature vectors of size c cross h cross w as the output of some stack of convolutional layers. now what we do is use three different one-by-one convolutions to convert our grid of features into a grid of queries, a grid of keys, and a grid of values, and these will be computed with three separate one-by-one convolutions with their own weights and biases that are learned. then we compute the inner products between the queries and the keys, which give us attention scores, and then we compute a softmax, which tells us, for every position in the input image, how much it wants to attend to every other position in the input image. this generates a very, very large matrix of attention weights of size h cross w by h cross w. now we use these attention weights to form a weighted linear combination of the value vectors, and we end up producing one value vector for each position in the input. what this means is that we're producing a new grid of feature vectors, but now every position in the output grid
depends on every position in the input grid, and that's qualitatively a very different type of computation than we have with something like convolution. in practice, when people do these things, they'll often insert another one-by-one convolution after the end of this attention operation, and it's also very common to add a residual connection around this entire self-attention operation. once we put all these pieces together, this gives us a self-contained self-attention module, a new neural network module you can imagine sticking inside of your neural networks, and you can imagine building networks that have maybe some convolution, some self-attention, some more convolutions, more self-attention. this gives us basically a whole new type of layer that we can use to build neural networks. now it's interesting to think that at this point we've got three different primitives that we can use to process sequences of vectors with neural networks. the most obvious is the recurrent neural networks that we talked about in the previous class: given a sequence of input vectors x, an rnn produces a sequence of output vectors y, or hidden states h. what's nice about recurrent neural networks is that they're very good at handling long sequences; when we use recurrent networks like an lstm, they're very good at carrying information over relatively long sequences, and in particular, after a single rnn layer, the final output or final hidden state yt actually depends on the entire input sequence, so a single rnn layer is able to summarize an entire input sequence. but there's a problem with recurrent neural networks, and that's that they're not very parallelizable, because if you think about the way that a recurrent neural network is computed, we need to compute hidden state one, then hidden state two, then
hidden state three, then hidden state four, and this is a sequential dependency in the data that we just cannot get around. if you recall back to the lecture on gpus, the way that we build really big neural network models is by taking advantage of massive parallelism inside of our graphics processing units or tensor processing units, and a recurrent neural network is not able to do a very good job of taking advantage of this massive parallelism that we have in our hardware. that's a problem for the scalability of building very, very large recurrent neural network models. now we actually know another way of processing sequences: we could use one-dimensional convolution. you could imagine having a one-dimensional convolutional kernel that we slide over the input sequence, so maybe each position in the output sequence depends on a local neighborhood of three adjacent elements in the input sequence, and this is also something that we could use to process sequences. unlike recurrent neural networks, using convolution to process sequences is highly parallelizable, because each output element in the sequence can be computed independently of all other output elements, so using convolution breaks the sequential dependency that we had with recurrent neural networks. but the problem with convolution is that it's not very good at very long sequences, because if we want to have some output depend on the entire sequence, we can't do that with a single convolution layer; we would have to stack many, many convolution layers on top of each other in order to have each point in the sequence be able to see, or talk to, or depend on each other point in the input sequence. so that's also a problem for using convolution to process sequences, but the
benefit is that it's very parallelizable, unlike recurrent networks. now what you should think about is that self-attention is a new mechanism that we can use to process sequences or sets of vectors that overcomes both of these shortcomings. one is that self-attention is good at long sequences: given a set of vectors, it compares every vector with every other vector, so, similar to an rnn, after a single self-attention layer each output depends on each input. okay, so that's a good thing. but also, like convolution, self-attention is highly parallelizable, because if you recall the implementation of self-attention a couple of slides ago, self-attention is computed with just a couple of matrix multiplies and one softmax operation, so this self-attention operation is highly parallelizable and very well suited to running on gpus. so what you should think about is that self-attention is an alternative mechanism we can use to process sequences or sets that overcomes both of these shortcomings of convolution and recurrent neural networks. the downside with self-attention is that it takes a lot of memory, but gpus are getting more and more memory all the time, so maybe we can ignore this point. so then the question is, if you're faced with a problem where you want to process sequences with neural networks, how should you combine these things? should you use rnns, should you use convolution, should you use self-attention, or some combination of them? well, it turns out there was a very famous paper a couple of years ago called attention is all you need, and it turns out that if we want to build neural networks that process sequences, we can do it using only self-attention. the way that works is that we build a new primitive block type called the transformer block, and this transformer block is going to depend on self-attention as the only mechanism to compare input vectors
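to make the self-attention computation concrete, here is a minimal single-head sketch in numpy. the matrix names wq, wk, wv and the sizes are made-up placeholders (a real implementation would learn these matrices, handle batches, and use multiple heads), but it follows the key/query/value recipe and the softmax-over-columns convention described above, and it demonstrates the permutation equivariance property we just discussed:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    # x: (n, d) set of input vectors; wq/wk/wv: learnable projection matrices
    q = x @ wq                             # one query vector per input
    k = x @ wk                             # one key vector per input
    v = x @ wv                             # one value vector per input
    e = (k @ q.T) / np.sqrt(q.shape[1])    # scaled dot products: e[i, j] = k_i . q_j
    a = np.exp(e) / np.exp(e).sum(axis=0)  # softmax over the vertical axis: each column is a distribution over inputs
    return a.T @ v                         # each output is a weighted sum of the value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                        # three input vectors of dimension four
wq, wk, wv = [rng.normal(size=(4, 5)) for _ in range(3)]

y = self_attention(x, wq, wk, wv)
perm = [2, 0, 1]
y_perm = self_attention(x[perm], wq, wk, wv)
print(np.allclose(y[perm], y_perm))                # prints True: permuting inputs just permutes outputs
```

the masked variant from the lecture would simply set entries of e to minus infinity before the softmax, which zeros out the corresponding attention weights.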
so the way that it works is that we receive this input sequence of vectors x1 to x4, and we run all of our vectors through a self-attention layer that might have multiple heads. after the self-attention layer, each output from the self-attention layer will depend on each input, so that gives us our interaction between all elements of the sequence or set. after self-attention, we'll add a residual connection around the self-attention, which will improve the gradient flow through the model. after the residual connection, we'll add layer normalization; recall that having some kind of normalization in our deep neural networks is going to aid optimization. in convolutional networks we'll often use batch normalization, but it turns out that for these sequence models, layer normalization is a very useful thing to do. what's interesting about the layer normalization is that the output of self-attention gives us a set of vectors, and layer normalization does not involve any interaction or communication among those vectors: it normalizes each of the output vectors from self-attention independently. so the layer normalization does not involve any communication between our vectors. after layer normalization, we'll run a feed-forward multi-layer perceptron, which is a fully connected neural network: the output of layer normalization will be a set of vectors, and we'll take each of those vectors and run it through a little fully connected network, and again this little fully connected network operates independently on each vector in the set which was output from the previous layer normalization. we'll add another residual connection around these multi-layer perceptrons, and then we'll add another layer normalization after the output of this
residual connection. then we'll put all of these things together into a block called the transformer block, and this will be the basic building block for large-scale models that process sequences of vectors. the input of a transformer block is a set of vectors x, and the output is also a set of vectors y, where the number of output vectors is the same as the number of input vectors, but they might have a different dimensionality: you could imagine changing the dimensionality of these vectors inside the model. what's interesting about this transformer block is that the only interaction between the vectors occurs inside the self-attention layer, because the layer normalization and the multi-layer perceptrons all operate independently on each vector. and because of the nice properties of self-attention that we saw, this transformer block will be highly parallelizable and very amenable to gpu hardware, highly scalable, so this is going to be a very good fit for our hardware. now what we can do is build a transformer model, which is just a sequence of these transformer blocks, and in order to build a transformer model we need to choose a couple of hyperparameters: one is the overall depth of the model, the number of blocks that we use. in the original attention is all you need paper, they used a sequence of 12 blocks, each of those blocks had a query dimension of 512, and they used six attention heads inside each self-attention operation, and that's basically it. it turns out that this transformer architecture has been called the imagenet moment for natural language processing, and it turns out that these
transformer models have been very, very useful for natural language processing tasks. in computer vision it's very common to pre-train our models on imagenet and then fine-tune them for some other downstream task, and it turns out that we can use these transformer models to achieve a very similar effect for a lot of sequence prediction or language prediction tasks. so a common paradigm has emerged in natural language processing, and this is all really recent stuff, basically research in the last year: there's been a whole bunch of papers in the last year that basically show that we can pre-train a very large transformer model by downloading a lot of text from the internet and training a giant transformer model on it, trying to predict the next word or do other types of language modeling tasks on a whole bunch of internet text. then we can fine-tune this big transformer model on whatever downstream language processing task you want to do, whether it's machine translation or language modeling or language generation or question answering or whatever other type of natural language processing task you might want to do. and remember a couple of lectures ago we talked about how imagenet in one hour was this really trendy thing that all the companies were getting behind and trying to beat each other at? well, a very similar thing has happened with these transformer models in the last year: basically, over the past year, every big ai lab has been competing to try to build bigger and bigger transformer models. the original ones were the so-called transformer-base and transformer-large models from the 2017 paper attention is all you need; their large model had 213 million learnable parameters, and they trained it for three and a half days on eight gpus. so that's like a
lot of training, a lot of gpus, a lot of parameters, but kind of reasonable for a lot of research groups. but things got really crazy really fast. the original transformer paper was from folks at google, so they had a lot of gpus. there was a follow-up paper called bert that really introduced this pre-training and fine-tuning paradigm; they had much bigger models, with up to 340 million learnable parameters, and they trained them on 13 gigabytes of text downloaded from the internet, and you know, text is small, so 13 gigabytes of text is actually a lot of data. i couldn't actually find the training time or the hardware requirements of this model in the paper. but then another group from google came out, and a team from facebook kind of jumped on this as well, and they had two new models, called xlnet-large and roberta, that were trained on more than 100 gigabytes of data, and each of these was trained for a fairly ridiculous amount of time: the google model was trained on 512 tpu devices for two and a half days, and the facebook model was trained on 1024 gpus for one day, and you know, that's how long it takes to chunk through 160 gigabytes of text. but not to be one-upped, openai came out and really decided to push the envelope: they generated their own dataset of 40 gigabytes of text, and they trained transformer models with up to 1.5 billion learnable parameters. and i should point out that all these models are fundamentally the same; the only thing they're doing is using transformer models that have more layers, bigger query dimensions inside each self-attention layer, and more self-attention heads. this openai model, called gpt-2, had up to 1.5 billion parameters. and the latest result, i think, is from nvidia; they came out just a month ago, in august this
year, with a transformer called megatron. if you guys watched the transformers movie, you know that megatron is the leader of the transformers in the movie, so nvidia wanted to build the biggest, baddest transformer model of them all, and they built a transformer model called megatron. their biggest megatron model has more than eight billion learnable parameters, and they trained it for nine days on 512 gpus. so i went ahead and took the liberty of computing how much it would cost to train this model on amazon web services. does anyone have a guess? 500,000? anyone else want to guess? okay, 500,000 was actually a good guess: if you were to train this model on amazon web services, it would cost you about 430,000 dollars using the pricing today. so my research group will not be training this model anytime soon. but really, all these companies have been jumping all over each other to try to train bigger and bigger transformer models, and what's really amazing about these transformer models is that as they get bigger, they seem to get better. we're not bottlenecked by data, because we can download as much text as we want from the web; the only constraint on these models at the moment seems to be how big a model you can train, how many gpus you can wrangle together, and how long you can patiently wait for them to train. so i think this is a really exciting area of ai and machine learning research right now, and i'm excited to see where this goes in the next couple of years. now, another really cool thing that we can do with these transformer models is actually generate text from them. openai was very famous for this: they trained their model to generate text from the internet, using language generation with language models that's very
very similar to the recurrent neural network models that we've seen before. basically, with these transformer models, we can give the model some input text, use that as seed text, and then have the model generate new text that it thinks would be probable to come after this seed text. so for example, we can write some human prompt; this next part is written by a human: in a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the andes mountains. even more surprising to the researchers was the fact that the unicorns spoke perfect english. this is a totally crazy sentence, but it was written by a human. now if you feed this sentence as seed text into a transformer model and you sample from the model to generate more language, this next part is all written by the transformer: it says, the scientists named the population, after their distinctive horn, ovid's unicorn. these four-horned, silver-white unicorns were previously unknown to science. now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. and it goes on and on, and it talks about dr. jorge perez, an evolutionary biologist who has ideas about where these unicorns originated. somehow these transformer models are able to learn a very amazing representation of the text on which they train, and their ability to generate very coherent long-form text just blows way past all the recurrent neural network models that people were using before. so this idea of training bigger and bigger transformers on different types of tasks has got a lot of people really excited right now, and i think this is a super open area for people to explore in research these days. and if you're interested to write your own seed text and see what the transformer has to say, there's a website, talktotransformer.com; i can't
take credit for making this, but someone put up this website where you can write your own seed text and then see what type of text the transformer model will generate after it. so then, in summary, what we've seen today: first off we saw, well, i don't know how to use powerpoint, there's supposed to be a picture here, but we saw that we can add attention to our rnn models, and that allows them to dynamically attend to different parts of their input. then we generalized that attention mechanism into this new type of operation called self-attention, a new general mechanism for building neural networks that operate on sets of vectors. then we saw that we could use this as the building block of these transformer models, which are basically the thing everyone's excited about. that's our summary for today. now, next week i will actually be out of town at a conference, so we're going to have two guest lectures: on monday, our gsi will be giving a guest lecture on some of his research in vision and language, and on wednesday we'll have a guest lecture from professor atul prakash, who's also a professor here at michigan, and he'll be telling you about adversarial machine learning and adversarial attacks on these models. i hope you enjoy those guest lectures next week
3D_Computer_Vision_National_University_of_Singapore
3D_Computer_Vision_Lecture_9_Part_1_Threeview_geometry_from_points_andor_lines.txt
hello everyone, welcome to the lecture on 3d computer vision. today we are going to talk about three-view geometry from points and/or lines, and hopefully by the end of today's lecture you'll be able to derive the trifocal tensor constraint from point/line image correspondences of three views, and you will be able to describe the homography relations between two views; in particular, we'll look at how to derive these homography relations from the trifocal tensor. next we will look at how to extract the three-view epipoles and epipolar lines from the trifocal tensor, and we'll also look at how to decompose the trifocal tensor into the camera matrices as well as the fundamental matrices that relate the three views. finally, we'll compute the trifocal tensor from image point/line correspondences of three views. of course, as usual, i didn't invent any of today's material; i took most of the content from chapters 15 and 16 of richard hartley and andrew zisserman's textbook multiple view geometry in computer vision, and i strongly encourage every one of you to read these two chapters after today's lecture. if you are interested, please also read chapter 8 of yi ma's textbook, an invitation to 3-d vision. in the previous lecture we learnt the fundamental matrix and its role in describing the geometric relations between two views, so today we'll look at the trifocal tensor, which is analogous to the fundamental matrix but is meant to relate the geometry of three different views; in particular, it encapsulates all the geometric or projective relations between these three views that are independent of the scene structure. what this means is that, given these three views, as long as we know the image correspondences, which could be point-point-point or a mixture of points and lines, for example, we will be able to compute the trifocal tensor that relates these three views, and then we will be able to decompose this trifocal
tensor into the geometric formation of camera projection matrices fundamental matrices receptor now many ways to derive this trifocal tensor relation in particular there are several different combinations of a point line image correspondences so we will look at one particular combination here which is the line line and line correspondences in the three views to derive the trifocal tensor but do take note that this doesn't mean that this is the only way to derive the trifocal tensor you could also start from other combinations but it will all end up to be the same relation of the trifocal tensor and so now suppose that we are given three views where we observe three line correspondences here which we denote as l l prime and l prime prime in the respective view from the previous lecture we all know that each one of this line in its respective view is going to be back projected to a plane that is represented by this plane over here and since there are three different views uh which see this uh the line respectively what it means is that these three different planes that are back projected from the respective view they are all going to intersect at the unique 3d line in the 3d space which is shown by this line over here and we'll make use of this particular relation to derive what we call the trifocal tensor so what's interesting here is that if we look at two views we'll generally notice that there is no constraint that is being provided by pair of line correspondences in the respective view and we can see this pretty clearly because any single line that is observed on the image is going to back project onto a as a plane in the 3d world so if we were to take the pair of images and denote the image correspondences as l and l prime we can see that basically it each one of these feedback projects to a plane and we know that in 3d space any any two planes is always going to intersect at the line hence there is no relation that can be derived or there is no unique relation 
that can be derived that relates to any two views but things change when we have a third view because we know that given a third view where we know that the correspondence to l and l prime let's denote it as l prime prime over here it's also going to back project to a plane and we know that in in general uh three planes they don't intersect at a line and when they do which is the case here because we know that l l prime and l premises are line correspondences which means that these three lines they back project to intersect at the unique line of l in the 3d space and in this particular case since there is only a unique intersection in the 3d space this means that there is a certain constraint that constrains geometrically this three particular view we will see that this particular constraint is what we call the trifocal tensor for each image in the in the free view we're going to denote the camera matrices in the canonical denotation as we have seen earlier on in the previous lectures we will denote the camera matrix of the first view as p i identity and 0. 
this is the canonical representation what it means is that we are assigning the camera frame to be aligned with the world frame for the first image over here and then we are going to denote the camera matrix of the second view as p prime which is obviously a three by four matrix that we write in this particular way we write a a matrix over here capital a matrix which is a three by three matrix over here and then we are going to write the last column which is a three by one vector over here as uh lowercase a subscript 4 over here this denote that this is the fourth column of the of the camera matrix p prime and similarly we are going to denote the third view the camera matrix of the third view as p prime prime and uh we are going to use a three by four matrix to uh denote this p prime prime and the first three by three matrix over here we are going to use the capital b to denote this particular matrix and the last column which is a three by one vector we are going to use a lowercase b uh with a subscript four that denotes that this is the fourth column in the third camera matrix and we we know that uh after this denotation p p prime and p prime prime we can also show uh lowercase a subscript four that means the fourth column of p prime the second view and the fourth column of p prime prime from the third view they represent the happy pose of the second and third view respectively that arise from the first camera so let's denote this as p p prime and p prime prime so there's a camera center over here which we denote as c uh what this means is that a4 over here represents the projection of c the first camera center onto the second image and this projection over here is what we call the epipol that we are going to denote this as e prime we will show that it's actually given by the fourth column of the second camera projection matrix and uh similarly b4 over here represents the projection of the camera's first camera center onto the third image over here and this 
particular guy over here we are going to denote it as e prime prime and this is going to be our uh epipol that is on the third image that arrives the camera center of the first camera image over here and so e prime prime is actually equals to b4 and e prime is actually equals to a this this relation can be easily shown by the following step over here we know that the first camera center here it's going to be given by zero zero zero one because uh as what we have mentioned earlier on we denote the first camera projection matrix in its canonical reference frame which is identity and zero this simply means that the the camera center is given by zero zero zero one and if we were to take the projection of this uh camera center into the second image which is uh going to be p prime that's given by a a three by three matrix and small a four uh this is a three by one uh vector multiplied by zero zero zero one we can see that this simply becomes a four which is equivalent to p prime of c and that's equivalent to the epipole in the second image that arose from the first image similarly we can also show that this happens for the in the in the third view where we take the camera projection matrix of b uh and small b four over here and then multiply it by zero zero zero one uh we'll see that this gives rise to d4 which is the epipole in the third view now we all know that each image line back projects to become a play we already seen in the previous lecture that this back projected plane that arose from the image line it's given by this particular equation over here where we denote the plane as pi and that would be equals to the transpose of the projection matrix multiplied by the image line in homogeneous coordinates and in this particular case over here we know the first plane which is given by this guy over here which is pi this uh it's the back projection of l over here and this is going to be pi equals to p transpose of l which is the first line over here and since we know 
Since P = [I | 0], the transpose P^T is a 4×3 matrix, and multiplying it by the 3×1 vector l gives the plane π = (l1, l2, l3, 0)^T: the first three entries are the image line coordinates and the last entry is zero. We can do the same for the planes back-projected from the line correspondences in the second and third views. In the figure, l' in the second view back-projects to π' = P'^T l', and similarly l'' in the third view (the darker-shaded plane) back-projects to π'' = P''^T l''. Substituting the representations P' = [A | a4] and P'' = [B | b4] into these equations gives π' = (A^T l'; a4^T l') and π'' = (B^T l''; b4^T l''). What's interesting is that these three planes are not independent: they arose from corresponding lines, so π, π' and π'' must all contain the same 3D line.
They must meet at a single line of intersection — the 3D line L that projects to the line correspondences l, l' and l'' in the three respective images. We also learnt in earlier lectures that this intersection constraint can be represented algebraically by a 4×3 matrix M = [π, π', π''], whose columns are the three plane vectors (each 4×1) concatenated together. Because the three back-projected planes are not independent of one another, M has to be rank deficient; in fact we will now show that it has rank exactly two.

The three back-projected planes π, π' and π'' intersect in the unique 3D line L, precisely because L projects to the corresponding image lines l, l' and l''. Any point X on L can be parameterised by two distinct points X1 and X2 on the line, since two points determine the line passing through them: X = α X1 + β X2, with scalar parameters α and β. Every such point X lies on all three planes, so there is an incidence relation between X and each plane: π^T X = 0, π'^T X = 0 and π''^T X = 0.

These relations can be rewritten as M^T X = 0: stacking the three plane equations and taking the dot product of each with any point X on the line yields the zero vector (0, 0, 0)^T, one scalar zero per plane. Consequently, because any point on the line is parameterised by the two basis points X1 and X2, the matrix M has a two-dimensional null space — both M^T X1 = 0 and M^T X2 = 0 hold — and therefore M has rank two. This intersection constraint induces a relation among the image line correspondences l, l' and l''.
The null-space equation must be satisfied whenever the three planes intersect in a unique line, and vice versa: if l, l' and l'' are genuine correspondences, they must all represent a single unique 3D line in space, so the relation must hold. Since rank(M) = 2, there is a linear dependence among the columns of the 4×3 matrix M: any one of the three columns is a linear combination of the other two. Denoting the columns m_i, where i is the column index, we can write m1 = α m2 + β m3, parameterised by two scalars α and β.

We saw earlier that the plane equations can be written explicitly in terms of the line correspondences and the camera matrix entries, so M takes the form: top three rows [l, A^T l', B^T l''] and bottom row [0, a4^T l', b4^T l''].

One useful observation about this 4×3 matrix: the last entry of the first column is always zero, due to the canonical representation P = [I | 0] of our first camera. Applying the relation m1 = α m2 + β m3 to that bottom row gives 0 = α (a4^T l') + β (b4^T l''). An obvious choice that solves this is α = k (b4^T l'') and β = −k (a4^T l') for some scalar k: substituting back, the two terms on the right become identical up to sign and cancel, so the right-hand side is zero, consistent with the zero on the left. (The scalar k cancels anyway, so we may simply set k = 1.)

Now apply this α and β to the top 3-vector of each column: m1 gives l, m2 gives A^T l', and m3 gives B^T l''. With k = 1 we get

l = (b4^T l'') A^T l' − (a4^T l') B^T l''.

We can evaluate this entry by entry. Writing l = (l1, l2, l3)^T, each coordinate l_i is a scalar: the scalar factors b4^T l'' and a4^T l' can be rearranged (a dot product can be transposed freely), and picking out the i-th column a_i of A and b_i of B, we get

l_i = (l'^T a_i)(b4^T l'') − (l'^T a4)(b_i^T l'').

Notice that l'^T always ends up on the left and l'' on the right, so we can factor them out:

l_i = l'^T (a_i b4^T − a4 b_i^T) l''.

Let us collectively call the 3×3 matrix in brackets T_i = a_i b4^T − a4 b_i^T. Pre-multiplying T_i by l'^T and post-multiplying by l'' gives the i-th coordinate of the line l in the first image: l_i = l'^T T_i l''. This is what we call the incidence relation, and it was derived from the rank deficiency of M — which in turn tells us that the back-projected planes of the three line correspondences intersect at a single 3D line L.

We now call the set of three matrices {T1, T2, T3} — each of them 3×3, each used to compute the respective coordinate of the first image line — the trifocal tensor. If we stack them together, we get a 3×3×3 cube: T1 is the first slice, T2 the second behind it, and T3 the third. By way of comparison: an n×1 array is a vector, a two-dimensional array is a matrix, and a three-dimensional array is a tensor; hence the formal term. "Trifocal" because it relates the incidence geometry of three image views.
We can now rewrite the incidence relation compactly as l^T = l'^T [T1, T2, T3] l'', where the square bracket denotes the 3×3×3 trifocal tensor: pre-multiplying by l'^T and post-multiplying by l'', slice by slice, produces the three coordinates of l. This is the same relation as before, just written for all entries of the first image line at once.

Note that we defined the canonical camera notation using the first view as the reference frame, which is why the relation reads l_i = l'^T T_i l''. If we instead make the second view the reference frame, the lines swap order accordingly, and likewise if we use the third view. Moreover, changing the reference view does not merely permute the lines: we end up with a genuinely different trifocal tensor in each case. What's interesting is that, since all three tensors arise from the same three views and the same line correspondences — the same 3D line, via the intersection of the back-projected planes — there is a relation between the three different trifocal tensors. Unfortunately that relation is not trivial, so I will not derive it here; for the rest of the lecture we simply choose the first view as the reference frame and focus on the trifocal tensor we derived above.

Although the trifocal tensor has 27 entries (it is 3×3×3: three 3×3 slices of nine entries each), it has only 18 degrees of freedom. We can count them as follows. Each view has a camera matrix, P, P' or P'', and from previous lectures each camera matrix has 11 degrees of freedom: the 12 entries of a 3×4 matrix minus one for overall scale. If the three views were chosen independently, that would give 33 degrees of freedom. But the three cameras are only defined up to a projective transformation of the world frame: a 4×4 homography H can be applied without changing any image quantities, and H has 16 entries minus one for scale, i.e. 15 degrees of freedom. Taking away this projective ambiguity, we get 33 − 15 = 18 degrees of freedom for the trifocal tensor.
[3D Computer Vision, National University of Singapore — Lecture 9, Part 4: Three-view geometry from points and/or lines]
Now consider the fundamental matrices between the three views, which we denote F21, F31 and F32: F21 relates the first and second views, F31 the first and third, and F32 the second and third. We want to ask: given a correspondence x and x' between the first two images, can we make use of the known fundamental matrices to transfer it into the third view — that is, find the point x''? The answer is yes, and it follows from intersecting epipolar lines. The point x in the first image transfers into the third view as the epipolar line l13 = F31 x, as we know from the earlier lecture on the fundamental matrix, and similarly the point x' in the second image transfers as the epipolar line l23 = F32 x'. The correspondence x'' must lie on both of these epipolar lines, so it is their intersection:

x'' = (F31 x) × (F32 x'),

where the first term is the epipolar line of x in the third image, the second is the epipolar line of x', and the cross product gives their intersection point.

Unfortunately this quickly runs into a degenerate case: when the epipolar line transferred from x and the epipolar line transferred from x' are collinear in the third image, their intersection — and hence x'' — remains undefined, since it could be anywhere along that common line. With the trifocal tensor this degeneracy no longer exists: we saw from the point-line-point incidence relation that the transfer of a point x from the first view to the third view is not degenerate. There is, however, one special case in which the trifocal transfer also becomes undefined, and we will see that it is easily avoided by a suitable choice of the line l'. It is worth mentioning that this point-line-point transfer of x into x'' works through something similar to the homographies we saw earlier: a homography between the first and third views, defined by the back-projected plane of a line l' in the second image together with the trifocal tensor.
homography is the epipolar line of x in the second view — meaning l' lies on the epipolar plane formed by the ray back-projected from x together with the epipolar line in the second image — then the transfer relation becomes zero, and hence x'' cannot be found. The reason is that this l', the epipolar line, lies exactly in the null space of Σᵢ xᵢ Tᵢ, the relation we saw earlier, so regardless of the point x the product is always zero. An easy way out is to select, as the line through x', the line perpendicular to the epipolar line formed by transferring x into the second view. So if le' = F21 x = (l1, l2, l3)ᵀ is the epipolar line of x in the second view, and x' = (x1, x2, 1)ᵀ is the corresponding point in the second image, then the best choice of l' is the line passing through x' and perpendicular to the epipolar line, given by l' = (l2, −l1, −x1 l2 + x2 l1)ᵀ. So what this
means is that, since selecting the epipolar line for this homography causes a degeneracy, and since l' can be any line passing through x' in the second image, the best choice of l' is the line perpendicular to the degenerate epipolar line le — I will leave it to you to prove that this line is perpendicular to the epipolar line and intersects it at the corresponding point x'. Hence the transferred point is given by the same transfer equation, where this choice of l' guarantees that the expression never degenerates to zero. Now, finally, let's look at the computation of the trifocal tensor from image correspondences. We have seen that incidence relations can be defined across three views using points and/or lines. Given several such point or line correspondences between the three images, we can rewrite the incidence relations in the homogeneous linear form A t = 0, where t is a 27 × 1 vector made up of all the entries of the trifocal tensor: each Tᵢ is a 3 × 3 matrix, and there are three of them, so altogether the tensor has 27 entries. Expressing the problem in this linear form means we need 26 or more equations to find the least-squares solution, computed by minimizing ||A t|| subject to the constraint ||t|| = 1.
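The two transfer constructions described above — intersecting epipolar lines from two fundamental matrices, and picking the non-degenerate line l' through x' — can be sketched in a few lines of numpy. This is a minimal illustration; the function names are mine, not from the lecture.

```python
import numpy as np

def transfer_via_fundamental(F31, F32, x, x_prime):
    """Transfer a correspondence (x, x') from views 1 and 2 into view 3
    by intersecting the epipolar lines l13 = F31 x and l23 = F32 x'.
    Returns a homogeneous point; it degenerates (goes to ~0) when the
    two epipolar lines are collinear."""
    l13 = F31 @ x          # epipolar line of x in view 3
    l23 = F32 @ x_prime    # epipolar line of x' in view 3
    return np.cross(l13, l23)

def perpendicular_line_through(le, x_prime):
    """Non-degenerate choice of l' for the point-line-point transfer:
    the line through x' = (x1, x2, 1) perpendicular to the epipolar
    line le = (l1, l2, l3)."""
    l1, l2, _ = le
    x1, x2, _ = x_prime
    return np.array([l2, -l1, -x1 * l2 + x2 * l1])
```

By construction the cross product lies on both input lines, which is exactly the incidence property the transfer relies on.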
We saw many times in earlier lectures that this minimization can be solved using the singular value decomposition. Here is a table of the linear equations arising from the incidence relations we defined earlier: a three-point correspondence gives a total of four linearly independent equations in tensor notation, two points and one line give two, and so on. Notice that the indices s, t and w here no longer range over 1, 2 and 3 but only over 1 and 2; for example, s, t ∈ {1, 2} gives four equations set to zero. The reason they do not range over 1, 2, 3 is the rank deficiency of the trifocal tensor. For the first row of the table, the original incidence relation with s, t chosen from 1, 2 or 3 would give rise to nine equations, but because the trifocal tensor is rank deficient — each of its three 3 × 3 matrices has only rank two — there are only four linearly independent equations in that incidence relation. Hence we drop the index 3 and evaluate only the four combinations in which s and t each take the values 1 and 2. As I mentioned, this is due to the rank deficiency, but the proof is rather involved and we will not cover it in this course. Once we obtain the linear system A t = 0, where t is a 27 × 1 vector, the solution can be obtained from the SVD of A. But we also saw in the earlier lectures, when computing the homography and the fundamental matrix,
that the A matrix — the coefficient matrix — is formed from the image correspondences, which might not be distributed evenly in the image. As a result, the magnitudes of the entries of A might differ significantly from each other, which makes the SVD result very unstable. We saw in the lectures on the homography and the fundamental matrix that a normalization step is compulsory to prevent this ill-conditioning, and we apply the same normalization here: first translate the image correspondences so that the centroid of all the points is shifted to the origin of the reference frame, then scale the translated points so that the root-mean-square distance from the points to the centroid equals √2. This ensures that the magnitudes of the entries of A lie within a narrow range, so that A becomes a better-conditioned matrix to put into the SVD. For line correspondences we simply apply the transform via the lines' two endpoints: if I have lines defined by endpoints, say x1, x2 for one line, x3, x4 for another and x5, x6 for a third, I compute the similarity transform from the translation and scaling just described, apply it to every endpoint, and the transformed endpoints x̃1, x̃2 define the new line. Here is a summary of the normalized linear algorithm for computing the trifocal tensor. Since there are 27 unknowns in the tensor in the minimization of ||A t||, we need a minimum of 26 equations to solve for the 27 unknowns, and for the
case of point correspondences across the three images, each point correspondence gives four linearly independent equations. This simply means that if only point correspondences are available, we need seven of them, giving 7 × 4 = 28 linear equations, which is enough constraint to solve for the 27 unknowns. If only line correspondences across the three images are available, we use 13 of them, since each line correspondence gives two equations: 13 × 2 = 26 linearly independent equations, sufficient for the 27 unknowns. For any mixture of point and line correspondences, the table of independent constraints tells us the minimum number of correspondences needed to compute the trifocal tensor. We then follow the same steps as before to solve the linear system: apply the normalization in each respective image, compute the tensor linearly via the SVD, and, once we get t, denormalize it to recover the trifocal tensor in the original point or line coordinates. Now, similar to the fundamental matrix — where, after solving min ||A f|| subject to ||f|| = 1, the matrix recovered by reshaping the vector f into a 3 × 3 matrix is not a valid fundamental matrix in
most cases, because its rank is not equal to 2 due to the noisy measurements in A — the same issue arises here. For the fundamental matrix we enforced rank 2 by a Frobenius-norm minimization, forcing the epipolar lines to intersect at a common point; when the rank is not 2, the epipole does not have a unique definition. For the trifocal tensor we do the same thing: after minimizing ||A t|| subject to ||t|| = 1 and reshaping t into the trifocal tensor, the result will in general not satisfy the rank constraints we saw earlier, and, just as for the fundamental matrix, the epipoles extracted from the tensor will then not be uniquely defined. We can therefore enforce the epipolar constraints on the tentative trifocal tensor T1, T2, T3 computed by minimizing ||A t|| subject to ||t|| = 1.
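Both minimization steps just described — the constrained least squares min ||A t|| subject to ||t|| = 1, and the Frobenius-norm rank-2 enforcement used for the fundamental matrix — have standard SVD implementations. A minimal sketch (function names mine):

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A t = 0 subject to ||t|| = 1:
    the right singular vector of A for the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

def enforce_rank2(M):
    """Closest rank-2 matrix to M in Frobenius norm: zero out the
    smallest singular value. This is the step that makes a noisy F a
    valid fundamental matrix; the trifocal case constrains each T_i
    analogously."""
    U, s, Vt = np.linalg.svd(M)
    s[-1] = 0.0
    return U @ np.diag(s) @ Vt
```

For the trifocal tensor the full constraint enforcement goes through the epipoles, as described next, rather than truncating singular values of each slice independently.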
So we use this as a tentative trifocal tensor T1, T2, T3, and recall from earlier that, given a trifocal tensor, we can compute the epipolar lines in the third view — now denoted v1, v2, v3 — corresponding to the three special points (1,0,0)ᵀ, (0,1,0)ᵀ and (0,0,1)ᵀ: each vᵢ is simply the right null vector of the respective 3 × 3 matrix Tᵢ obtained from the tensor computed by the normalized linear algorithm. Solving for these right null spaces gives the three epipolar lines. The next step uses the fact that these three epipolar lines v1, v2, v3 must intersect at a certain point, the epipole e'', which can be obtained by solving for the null vector of the matrix formed by stacking the three epipolar lines. In general, however, because the image correspondences obtained from the image observations are corrupted with noise, the A matrix used to compute the trifocal tensor is also corrupted with noise; hence the tensor will not fulfill the rank-2 constraint, and the epipolar lines constructed from it might not intersect at a single point. The correct way to proceed in the presence of noise is to minimize ||Tᵢ vᵢ||: with an inaccurate trifocal tensor obtained from the noisy A, the product Tᵢ vᵢ will never be exactly zero, so the obvious approach is to minimize its norm so that it becomes as close to zero as possible, minimizing over each vᵢ, which is the
epipolar line we wish to solve for. We do the same thing when solving for the epipole: since the lines vᵢ are computed from a trifocal tensor that in turn comes from a noise-corrupted A, the relation vᵢᵀ e'' = 0 will likewise never hold exactly, so the obvious choice is to minimize its norm over e, and this can again be solved using the SVD technique we have seen earlier. Now, once the epipoles in the second and third views, e' and e'', are solved for, the last columns of P' and P'' are known, because we saw that P' = [A | a4] with a4 = e', and P'' = [B | b4] with b4 = e''. Having solved for e' and e'' from the trifocal tensor, we can rewrite the trifocal tensor relation derived earlier — Tᵢ = aᵢ e''ᵀ − e' bᵢᵀ, a function of all the entries of the two camera matrices — linearly in terms of the entries of the two camera projection matrices. Since this is a linear equation in the respective entries of P' and P'', we can factorize it as t = E a, where the vector a consists of the remaining entries A and B of P' and P''. Because in this case we have already solved for e' and e'', which correspond to the last columns of
P' and P'', the remaining unknowns are the entries of the 3 × 3 matrices A and B of the respective camera projection matrices. Since this relation represents the respective entries of the trifocal tensor linearly, we can rewrite it as t = E a, where E consists of all the known values (from the epipoles) and a is a vector containing the remaining unknown entries of the two camera projection matrices. Having formalized this — t written as the product of a known matrix E and an unknown vector a — we simply minimize the algebraic error ||A E a|| subject to the constraint that E a has norm 1; because E a represents t, this is equivalent to minimizing ||A t|| subject to ||t|| = 1.
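The noisy-case extraction of the epipolar lines vᵢ and the epipole e'' described above — minimize ||Tᵢ vᵢ|| for each slice, then minimize ||V e|| over the stacked lines — can be sketched with two SVDs. A minimal illustration; the function name is mine:

```python
import numpy as np

def epipole_from_tensor(T1, T2, T3):
    """From the three 3x3 slices of a (possibly noisy) trifocal tensor,
    recover the epipolar lines v_i as the minimizers of ||T_i v_i||
    (smallest-singular-value right vectors) and the epipole e'' as the
    minimizer of ||V e|| where the rows of V are the v_i."""
    V = []
    for Ti in (T1, T2, T3):
        _, _, Vt = np.linalg.svd(Ti)
        V.append(Vt[-1])          # v_i: argmin ||T_i v||, ||v|| = 1
    V = np.stack(V)               # rows are the three epipolar lines
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1], V              # e'': argmin ||V e||, ||e|| = 1
```

In the noise-free case each vᵢ is an exact null vector of Tᵢ and e'' is the exact common intersection of the three lines.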
Once we have solved this equation, since E comes from the known epipoles — the condition we enforced earlier — solving for the remaining entries of t via the unknown camera-matrix entries in the vector a means that the resulting trifocal tensor inherently enforces the epipoles we solved for. Finally, the trifocal tensor obtained from this algebraic error serves as the initialization for an optimization over geometric distance, as in the homography and fundamental-matrix cases. We use the linear algorithm seen earlier to provide an initial estimate of the trifocal tensor; from it we can pretty much retrieve the camera matrices — P is the canonical camera matrix, and P' and P'' are recovered from the tensor as we saw earlier. We then use the correspondences in the three views and the respective camera matrices to determine the 3D points via the linear triangulation method from the chapter on the fundamental matrix: given the camera matrices P, P', P'' and the correspondences x, x', x'', linear triangulation gives the 3D point X. But since there is uncertainty — noise in the observed image correspondences — P, P' and P'' will never be exact, so the X from the linear triangulation algorithm is only the optimal point for the intersection of the three back-projected rays, and if I
were to reproject this X back onto the images, the reprojected point — call it x̂ — is never going to land at the same location as the observed point x. So there is a geometric error, computed as the Euclidean distance between the reprojected point x̂ and the observed point x, with one such term per image, each representing the reprojection of the triangulated point into that image. We want to minimize these reprojection errors over 3n + 24 variables: 3n for the n 3D points and 24 for the elements of P' and P'' (P is simply [I | 0], so there is no need to optimize over it). We will look at this in more detail in the next lecture, because it is equivalent to a three-view bundle adjustment. As a summary, in today's lecture we derived the trifocal tensor constraints for three views using point and/or line correspondences; we described the homography relations obtained from the trifocal tensor; we looked at how to extract the epipoles and epipolar lines of the second and third views from the trifocal tensor; we looked at how to decompose the trifocal tensor into the camera matrices and the fundamental matrices of the three views; and finally we looked at how to compute the trifocal tensor from point and line correspondences in both the algebraic and the geometric way. Thank you.
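The linear triangulation step and the reprojection error from this last part can be sketched as follows. This is a minimal DLT sketch, not the lecture's exact implementation; the function names are mine:

```python
import numpy as np

def triangulate_linear(Ps, xs):
    """DLT triangulation of one 3D point from observations xs[i] = (u, v)
    in cameras Ps[i] (3x4): two equations per view, u*P[2]-P[0] and
    v*P[2]-P[1], solved as the smallest-singular-value right vector."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]
    return X / X[3]               # homogeneous point, last entry 1

def reprojection_error(Ps, xs, X):
    """Sum of squared Euclidean distances between the observed points
    and the reprojections x_hat = P X."""
    err = 0.0
    for P, x in zip(Ps, xs):
        p = P @ X
        err += np.sum((p[:2] / p[2] - np.asarray(x)) ** 2)
    return err
```

The bundle-adjustment step mentioned above would minimize this reprojection error jointly over all n points and the 24 camera entries.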
3D Computer Vision, National University of Singapore
Lecture 8 Part 2: Absolute pose estimation from points or lines
So after looking at the camera pose estimation algorithm for an uncalibrated camera, where the intrinsics of the camera are unknown, let's move on to the calibrated case, where the camera intrinsic matrix K is known. Since K is made up of the focal length and the principal point, this simply means that the focal length and the principal point are known in this case. I am going to talk about three different approaches that were invented over a span of roughly the past 180 years, and since I am taking them from three different papers — in contrast with the uncalibrated approach, which was taken from Richard Hartley's textbook — please bear with me that there will be a slight overload of notation in what follows, because I am simply following the notation of each original paper. In the calibrated case, the approaches usually follow a two-stage procedure to solve the perspective-n-point pose estimation problem. In the first step we solve for the unknown depths. Suppose we have three 3D points, denoted p1, p2, p3 — note that this notation differs from the earlier capital X1, X2, X3 because I am following the paper that documents this approach; the approach itself was proposed back in 1841, more than 180 years ago, by a photogrammetrist. So, following that paper's notation, the 3D points are p1, p2, p3, and the camera center is here; let's denote it P. The first depth s1 is the magnitude of the vector from the camera center P to p1, s2 is similarly from P to p2, and s3 from P to p3. These depths are unknown; what we are given are these three points with respect to the
world frame, as well as the observed image points, which I will define later in this formulation. The first step, then, is to compute these unknown depths. Suppose we further denote the given points p1, p2, p3 in the world frame with a superscript w. Once we have computed the unknown depths, we obtain the corresponding 3D points defined in the camera frame, which I will denote p1ᶜ, p2ᶜ, p3ᶜ. Then, with these two sets of 3D points — the first given, of course, and the second computed from the unknown depths s1, s2, s3 — we can simply align the two frames, because the two sets refer to the same three points in space: the first set is defined with respect to the world frame and the second with respect to the camera frame. So, given these two sets after solving for the unknown depths, we run what we call an absolute orientation algorithm to solve for the rotation and translation that aligns the two reference frames. We know that the unknown rigid transformation between the two frames is parameterized by a rotation, which has three degrees of freedom — in the minimal representation, the Euler angles roll, pitch and yaw — and a translation, which also has three degrees of freedom: tx, ty and tz, with absolute scale. In total there are six degrees of freedom to solve for in the unknown rigid transformation, and we also know from the uncalibrated case that each point correspondence gives two independent constraints. As a result,
because we are directly solving for the rotation and translation in this case, where the intrinsics are known, we need three point correspondences as the minimal set to solve for the six unknown degrees of freedom. Hence we are going to look at the three-point solution invented by Grunert, a German scientist, in 1841. This is otherwise known as the P3P formulation — the perspective-three-point algorithm — which uses the minimal set of three point correspondences to solve the camera pose estimation problem. Let's denote the three points p1, p2, p3, each with inhomogeneous coordinates (xi, yi, zi). Given this set of three points we can compute the distance between any pair of points: three choose two gives three pairs, so altogether there are three side lengths to compute, denoted a, b and c, where a is the distance between p2 and p3, b is the distance between p1 and p3, and c is the distance between p1 and p2. Each can simply be taken as the norm of the difference between the two point vectors. We further denote the corresponding observed perspective projections: the 3D points p1, p2, p3 project onto the image as three image points q1, q2, q3, where each qi is represented by the inhomogeneous coordinates (ui, vi), the image pixel location. And since the camera intrinsic matrix is known, the focal length is known as well. So, if we treat the
principal point (cx, cy) as zero — meaning the center of the reference frame is in the middle of the image — then applying the projection to a 3D point (multiplying K with p to get q) gives, in inhomogeneous terms, ui = f·xi/zi and vi = f·yi/zi. From here we can also get the unit vector from the camera center to each observed point qi: we form the vector of homogeneous coordinates (ui, vi, f), where f is the focal length, and divide by its norm; this gives the unit vector ji pointing in the direction of the image point qi. Having found the unit vectors j1, j2, j3 that represent the directions of the light rays, we can move on to find the angle between any pair of rays. For example, we denote the angle between j2 and j3 as α, and cos α is found from the dot product of the two unit vectors j2 and j3 computed in the previous slide. We can do the same for the other two angles, formed by the other two pairs of unit vectors, and denote them β and γ. Now let us further denote the unknown distances of the points from the center of projection: s1 is the magnitude of the vector from the camera center to p1, and s2 is the magnitude of the
distance from the camera center to p2, and s3 the magnitude of the distance from the camera center to p3. We can then write pi = si·ji: the unknown depth times the unit vector in the direction of that point. Here ji is a known entity, because we know the 2D correspondence (ui, vi) and the camera intrinsics, which means the focal length f is also known; the depths si are the unknowns, and what we do know about the points themselves are their pairwise distances, since the 3D points are given in the world frame. Putting everything together, the law of cosines gives three equations. Take two of the rays from the camera center, say to p2 and p3; they form a triangle, and we denoted the angle between these two rays as α, with cos α = j2 · j3 as defined earlier. By the cosine rule for a triangle, cos α can be expressed in terms of the lengths of the three sides: s2² + s3² − 2·s2·s3·cos α = a². Since the two points are known, the distance a between them is known, and cos α is known, while the two sides s2 and s3 are the unknown depths. We can do this for the other two triangles formed by the other pairs of points: for p1 and p3 the angle is β, and the distance
between p1 and p3 is known, giving s1² + s3² − 2·s1·s3·cos β = b² with unknowns s1 and s3. Similarly, for the third pair, p1 and p2, the distance c and the angle γ are known, giving s1² + s2² − 2·s1·s2·cos γ = c² with unknowns s1 and s2. As a result we get three equations in three unknowns, which we can then solve. In the original solution proposed by Grunert in 1841, he suggested introducing two auxiliary variables u and v, and representing s2 = u·s1 and s3 = v·s1. Substituting these back into the three equations to replace s2 and s3, we get three equations in terms of s1, u and v. Each of these can be reorganized to make s1² the subject, and consequently we can eliminate s1 by equating the first two equations and then the next two, forming two equations in the two unknowns u and v. With two equations and two unknowns the problem becomes much easier to solve: we eliminate u by making u the subject of equation (6) and substituting into equation (7), which yields a fourth-degree univariate polynomial in the one remaining unknown, the auxiliary variable v. The
coefficients here i'm not going to go through the derivation of the coefficient but you can if you are interested you can work this out and verify that this coefficients here are all correct it's it looks uh pretty complicated over here so once we have the four degree polynomial that we have seen earlier on here in this in this slide now the objective is to solve this four degree polynomial for the unknown variable v which will give us up to four real solutions because it's a four degree polynomial so one way to solve this is to use what we call the companion metric where the companion matrix of a 40 degree polynomial is given by this particular matrix over here so this is the companion matrix of a 4 degree polynomial which is uh which is in this form over here uh where the coefficients are simply uh a 4 v to the power 4 plus a 3 v to the power 3 all the way until a 0 equals to 0. so we will just simply substitute this coefficients into the last column of the companion matrix so i'm not going to show the derivation of the companion matrix here but we'll see this merely as a tool to for us to solve a four degree polynomial equation and once this is formed what we can do here is that we take the eigenvalues of this companion matrix and interestingly the eigenvalues of the convenient matrix gives the roots to the four degree polynomial since this is a four by four matrix there will be altogether four eigen values which denotes the four solutions and this simply implies the four solutions to our fourth degree polynomial and once we get this fourth solution we will substitute each one of these four solutions will retain the real solutions and then substitute these solutions for v back into equation 7 here since we know v already there's only one unknown which is u over here we can substitute every one of the roots of v back into this equation to solve for u we'll get a set of four solutions for the pairs of u and v's here then finally once we are done with this we can 
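As a small check of the companion-matrix tool described above — a generic sketch with a made-up test polynomial, not Grunert's actual coefficients — the eigenvalues of the companion matrix do recover the roots of a quartic:

```python
import numpy as np

def quartic_roots_companion(a):
    """Roots of a4*v^4 + a3*v^3 + a2*v^2 + a1*v + a0 = 0, with a = [a4, a3, a2, a1, a0],
    computed as the eigenvalues of the 4x4 companion matrix."""
    a4, a3, a2, a1, a0 = a
    C = np.zeros((4, 4))
    C[1:, :3] = np.eye(3)                      # sub-diagonal of ones
    # Normalized coefficients go into the last column of the companion matrix.
    C[:, 3] = [-a0 / a4, -a1 / a4, -a2 / a4, -a3 / a4]
    return np.linalg.eigvals(C)                # eigenvalues = polynomial roots

# Quartic with known roots 1, 2, 3, 4: (v-1)(v-2)(v-3)(v-4) = 0
coeffs = np.poly([1.0, 2.0, 3.0, 4.0])         # [1, -10, 35, -50, 24]
roots = np.sort(quartic_roots_companion(coeffs).real)
print(roots)                                   # ≈ [1. 2. 3. 4.]
```

In the actual algorithm the coefficients a4…a0 would be the complicated expressions mentioned above, and only the real eigenvalues would be kept as candidate values of v.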
Then, substituting each (u, v) pair of roots back into equation (5), we solve for s1, and once s1 is known, together with u and v we can solve for the remaining unknowns s2 and s3. By this step we have obtained the unknown depths s1, s2 and s3 with respect to the camera center Fc: each s_i is the magnitude of the vector from the camera center to the respective point, and the direction vectors j1, j2, j3 can be computed from the image correspondences. With both known, we can now compute the 3D points in the camera coordinate frame as P_i' = s_i · j_i. The next step is this: given the set of 3D points defined in the camera frame and the set of 3D points defined in the world frame, we want to find the relative transformation between the camera frame and the world frame that aligns the two frames, given by a rotation matrix R and a translation vector t. This can be formulated as an optimization problem with the cost function min over (R, t) of Σ_i ‖(R·P_i' + t) − P_i‖²: we transform each P_i' from Fc into the world frame and want it to match up with the original 3D point P_i given in the world frame, minimizing the total alignment error over the unknown rigid transformation. We achieve this using the absolute orientation algorithm, and note that although Grunert's algorithm is developed for three point correspondences, the absolute orientation algorithm works for three or more. The first step of absolute orientation is to remove the translation between the two sets of points defined in the two reference frames, P' and P. To do this, we first compute the centroid of the world-frame points, P̄ = (1/n) Σ_i P_i — sum every point and divide by the number of points — and likewise the centroid P̄' of the camera-frame points. Next, we subtract the respective centroid from every point, translating each frame to its centroid: r_i = P_i − P̄ and r_i' = P_i' − P̄'. After removing the translation, the only thing relating the two point sets is the rotation, and the next step is to compute the rotation matrix. We first form the 3×3 matrix M = Σ_i r_i' r_i^T: each term is the product of a 3×1 vector (camera frame) with a 1×3 vector (world frame), giving a 3×3 matrix, and we sum the corresponding elements over all points. From M we compute the 3×3 orthogonal rotation matrix as R = M·(√Q)⁻¹, where Q = MᵀM. Finally, once the rotation matrix is computed, the translation vector is recovered by aligning the two point sets after compensating for the rotation: t = P̄ − R·P̄'. More specifically, the inverse square root of Q is computed via its eigendecomposition: since Q = MᵀM is a symmetric 3×3 matrix, its SVD has identical left and right orthogonal matrices, Q = V Σ Vᵀ, where Σ = diag(λ1, λ2, λ3) contains the three eigenvalues. To compute Q^(−1/2) we simply take the inverse square root of each eigenvalue in Σ and pre- and post-multiply by V and Vᵀ: Q^(−1/2) = V Σ^(−1/2) Vᵀ. Having solved for the unknown camera pose using the Grunert formulation — the three-point algorithm we have just seen — the next thing we look at is the degenerate configurations of this camera pose estimation problem under known intrinsics. Recall that we took a two-stage approach: in the first stage we solved for the unknown depths s1, s2, s3 using the three law-of-cosines equations. These three equations can be rewritten as three polynomial equations f1(x, y, z) = 0, f2(x, y, z) = 0, f3(x, y, z) = 0, where (x, y, z) denotes the camera center: s1, s2, s3 are the magnitudes of the vectors from the camera center to the respective points p1, p2, p3. Previously we treated s1, s2, s3 as the unknowns, but we can also look at it from another perspective, keeping s1, s2, s3 fixed and taking (x, y, z) as the unknowns — which is in fact what we solve in the second stage, since the rotation and translation give us the camera center. We therefore re-parameterize the unknowns from s1, s2, s3 to the camera center (x, y, z), and I will skip the full derivation here.
It is quite involved, so we will simply take it that the system can be re-parameterized into a system of three polynomial equations in the inhomogeneous coordinates of the camera center. Without loss of generality, suppose we wish to find a set of n unknowns x1, …, xn from a system of n polynomial equations f_i(x1, …, xn) = 0, for i = 1, …, n; in the context of our three-point algorithm this is simply three equations in the three unknowns (x, y, z). We say the solution of such a system is non-degenerate, or stable, if a small perturbation of the functions does not give rise to any change in the solution — in other words, the induced change dx_j in each unknown should remain zero. This can be written as a system of homogeneous linear equations, where the first part is the Jacobian — the first derivatives of each equation with respect to each unknown — and the second part is the small change in the respective equation. It is derived from the total derivative: starting from f_i(x1, …, xn) = 0 and differentiating with respect to an auxiliary variable t using the chain rule, df_i/dt = (∂f_i/∂x1)(dx1/dt) + … + (∂f_i/∂xn)(dxn/dt). Since dt appears in every term on both sides, we can cancel it, leaving df_i = (∂f_i/∂x1)·dx1 + … + (∂f_i/∂xn)·dxn, and stacking these equations for i = 1, …, n gives the matrix form J·dx = df. Now, since every f_i is identically zero, any small change df_i must also be zero, so we obtain the homogeneous linear system J·dx = 0. Perturbing the equations should not change the final solution, which means the only solution for the vector dx should be the trivial zero vector; if there is any other solution, a perturbation of the functions would lead to a whole family of solutions — an unstable solution in the unknowns. We can use this to check the degenerate configurations of the three-point algorithm developed by Grunert in 1841. The three equations (1)–(3) can be rewritten as a system of three polynomial equations in the inhomogeneous camera-center coordinates (x, y, z), and taking the total derivative of each and rearranging — I am not going to derive this because it is quite involved; the result is taken directly from the paper — we obtain M·(dx, dy, dz)ᵀ = 0, where M is a 3×3 matrix. Since f1 = f2 = f3 = 0 always holds, their small changes must also be zero, which is why we can form this homogeneous linear equation, and M factors as M = A·B, where A is built from the differences between the respective coordinates of the camera center and of the individual 3D points (x − x1, and so forth), and B is a function of s1, s2, s3 and of cos(alpha), cos(beta), cos(gamma), which we saw earlier. Degeneracy happens exactly when M is rank-deficient, because then a small perturbation leads to a non-trivial solution (dx, dy, dz). We will show that in the Grunert formulation there are two cases of degeneracy. In the first case, the three points p1, p2, p3 are collinear — they sit on a straight line — and the configuration is degenerate. In the second case, the camera center and the three points lie on a common plane: since any three points always form a plane, this means the camera center also lies on that particular plane, and this too is degenerate. Consider the first case, where p1, p2, p3 lie on a straight line and the camera center P can lie anywhere. Because the three points form a line, no matter where P lies, it always forms a plane together with them. We can therefore define a world reference frame whose xy-plane coincides with the plane formed by these four points. Now look at the A matrix: since A is simply the difference between the respective coordinates of the camera center and of the 3D points, its last column vanishes — when the xy-plane of the world frame is aligned with that plane, all four points have the same z coordinate. So A becomes rank-deficient, and this is bad: putting A back into M = A·B, the product of a rank-deficient matrix with even a full-rank matrix is still rank-deficient. M is then rank-deficient, which leads to a non-trivial solution (dx, dy, dz), meaning that if we perturb the equations we end up with a family of solutions. We observe the same problem in the second case, when the camera center lies on the plane formed by the three given 3D points. Here all four points — p1, p2, p3 and the camera center P — form a plane, and as in the first case we can assign a world frame whose xy-plane is aligned with this plane. The z coordinates of all the points then become equal, the last column of the A matrix becomes zero, and A becomes rank-deficient, which again makes the M matrix rank-deficient. As a result we get a non-trivial set of solutions for (dx, dy, dz), and the configuration becomes degenerate.
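The coplanarity argument above can be checked numerically. This is an illustrative sketch (the specific points are made up, and only the A factor is examined, not the full M = A·B from the paper): when the world frame's xy-plane contains the camera center and the three points, the rows c − p_i all have zero z component, so A loses rank.

```python
import numpy as np

# A has rows (c - p_i)^T: differences between the camera center c
# and the three 3D points p_i, as described above.
def A_matrix(c, pts):
    return np.array([c - p for p in pts])

# Three points on the z = 0 plane of the world frame.
pts = [np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 2.0, 0.0]),
       np.array([3.0, 1.0, 0.0])]

# Degenerate case: camera center also on the z = 0 plane.
c_on_plane = np.array([0.5, 0.5, 0.0])
A_bad = A_matrix(c_on_plane, pts)
print(np.linalg.matrix_rank(A_bad))   # 2: last column is zero, A is rank-deficient

# Generic case: camera center off the plane of the three points.
c_off_plane = np.array([0.5, 0.5, 1.0])
A_good = A_matrix(c_off_plane, pts)
print(np.linalg.matrix_rank(A_good))  # 3: full rank
```

Since M = A·B inherits the rank deficiency of A, the degenerate case yields a non-trivial null vector (dx, dy, dz), exactly as argued above.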
3D_Computer_Vision_National_University_of_Singapore
3D_Computer_Vision_Lecture_12_Part_1_Generalized_cameras.txt
So hello everyone, welcome to the lecture on 3D computer vision. Today we are going to talk about generalized cameras. Hopefully, by the end of today's lecture, you will be able to use the Plücker line representation that we learned earlier in the lectures to derive the generalized epipolar geometry relating two views of a generalized camera. Next, we will look at how to apply the linear 17-point algorithm to compute the relative pose between two views of a generalized camera, and then at how to explain the degeneracy cases of the generalized epipolar geometry — in particular three cases: locally central projection, axial cameras, and the locally central-and-axial camera configuration. Finally, we will learn how to compute the absolute pose of a generalized camera using 2D-3D point or line correspondences. Today I am not going to say that I did not invent any of the material, because I did in fact contribute two of the works we will discuss. I took mainly four references for today's lecture. The first is the paper by Robert Pless, "Using Many Cameras as One"; from this paper I took the derivation of the generalized epipolar geometry. I took the three degeneracy cases from the paper by Hongdong Li, "A Linear Approach to Motion Estimation Using Generalized Camera Models". I took the generalized pose estimation from point correspondences from a paper I wrote in 2015, "Minimal Solutions for the Multi-Camera Pose Estimation Problem", and the non-perspective pose estimation from line correspondences from a paper I wrote in 2016 at ECCV. I strongly encourage all of you to look at these papers after today's lecture. So far in all our lectures we have looked at the pinhole camera model, where all the light rays that come into a standard pinhole camera converge at a particular point — the camera center: the light rays come in and project onto the image, but they all converge at the single center of projection. One illustration is the two-view geometry of a pinhole camera, where light reflected from 3D points into the pinhole camera converges at a single center of projection, and this is true for both views shown. In contrast, light rays do not meet at a single center of projection in a generalized camera. We can think of a generalized camera as having a lens of arbitrary shape — something like a funhouse mirror — so that the light rays projected into the camera do not meet at a single center of projection. In the illustration of a two-view generalized camera, the light rays reflected from the 3D points in the environment do not converge at a single center of projection; they sit at arbitrary locations. Of course, a funhouse-mirror lens that projects light rays into arbitrary configurations does not exist in reality, but a generalized camera can be realized with a multi-camera system with minimal or no overlapping field of view. Here are two examples of multi-camera systems that I have worked with in the past. The first is a drone on which we mounted four cameras looking in four different directions; there is minimal overlap between the forward-looking and backward-looking cameras. The second is a self-driving car on which we mounted four cameras — one looking forward, one looking backward, and two at the side mirrors — again with minimal overlapping field of view between the four cameras. This kind of setup is generally known as a generalized camera, for the following reason. Each camera projects light rays that converge at its own single center of projection, and this is true for every camera rigidly mounted on the vehicle; however, the points of convergence of the different cameras do not coincide. So given an arbitrary configuration of cameras rigidly fixed to the vehicle, we see numerous centers of projection that in general do not meet at a single point — this is how a generalized camera can be realized in a physical system. The reason we chose a multi-camera system is that cameras are generally low-cost and easy to maintain compared with other sensor setups, such as the lidar sensor. A lidar is a laser sensor and is generally much more expensive than a camera, and because it has many rotating parts — and more sophisticated mechanical systems are harder to maintain and more susceptible to mechanical failure — it is also less robust. Cameras, in contrast, have no moving parts at all, so they are generally easier to maintain and less susceptible to failure, and of course they are low-cost: the lidar sensors we see on some self-driving cars have a unit price on the order of 60,000 USD, whereas even a very high-end industrial camera will probably only cost up to a thousand dollars. This means we can easily gather many of these cameras and put them together in one system. Another advantage of the multi-camera system is that we can easily choose the configuration to maximize the field of view. For example, on the car we chose four cameras looking in four different directions to achieve an omnidirectional view — 360-degree coverage, a surround view, just by mounting four cameras. We will also see in today's lecture, in the mathematical derivation of the generalized epipolar geometry, that absolute scale can be obtained directly from the epipolar geometry. As we learned earlier when looking at fundamental matrices: for a single camera with two views, the views are related by an essential matrix (in the calibrated case) or a fundamental matrix, and the essential matrix can be decomposed into a rotation and a translation vector. But in the single central-camera configuration we can only obtain five degrees of freedom from the decomposition, because the relative translation is known only up to scale. In today's lecture we will see that, given two generalized images, we can relate them by an epipolar geometry with what is known as the generalized essential matrix, which can likewise be decomposed into a rotation and a translation vector — but in this case the translation vector carries absolute, metric scale, so there are altogether six degrees of freedom obtainable from the generalized essential matrix. However, there are challenges to the generalized epipolar geometry. The first is that when we have a
multiple camera system to realize the generalized camera that would be no or minimal overlapping field of view as what we have mentioned earlier on and this simply means that the stereo setup or the the mathematics the physics behind stereo cannot be used directly here and the next challenge we will see is that we can mitigate this particular problem of no or overlap minimal overlapping fuel view by processing each camera independently this means that i will just treat the multiple camera system for example if i have a car here and i mount four cameras here so one argument could be that after i have moved this multiple camera system from one location to another i could simply treat this camera the same camera as one camera epipolar geometry which we have learned in the earlier lecture that means that i can compute the essential matrix of each one of these cameras independently from each other and then we can think of a way to aggregate this essential matrices that we have computed from the multiple camera system together to get the relative translation and rotation however the problem with this is that it's computationally inefficient and we will not be able to identify or will not be able to get the absolute skill if we do it this way the reason is because we're treating every single camera independently we're not making use of the relative locations or the relative extrinsic calibration between this camera as we will see later in the lecture that this actually helps us to get the absolute scale so the solution here that we are going to introduce in this particular lecture on how to use a multiple camera system efficiently would be to use what we call the generalized camera formulation in particular we'll look at the generalized epipola geometry so in contrast with the single pinhole camera model where there is a single center of projection in that case all the light rays here are going to be projected onto a single center of projection so in that particular case 
here we can see that we can conveniently assign a reference frame to this single center of projection, where every pixel in the image can be conveniently represented in homogeneous coordinates. The pixels are coherent in the sense that there is a single center of projection, and a single reference frame attached to it, that binds all of them together. For example, if this coordinate here is (x, y), we can simply call it (x, y, 1), and all the pixels are related via the reference frame attached to the single center of projection. In contrast, in a generalized camera the light rays project at arbitrary locations, so we cannot conveniently choose a single reference frame, and it becomes inconvenient to stick to the standard homogeneous representation of points that we saw earlier. Hence we introduce Plücker vectors. This means that a pixel on the generalized image, say (x, y), will not be represented with the homogeneous coordinates of the standard pinhole camera, because there is no single point of convergence. Instead, we arbitrarily assign a reference frame to the generalized camera setup and represent every point on the generalized image as a light ray expressed with respect to that reference frame. This is convenient because, as we saw in the first two lectures, a Plücker line is a six-dimensional vector whose first three entries represent the direction vector, which we denote q, and whose last three entries represent the moment vector, which we denote q′. So for any light ray that projects onto our generalized image, q is a unit vector in the direction of the ray, expressed with respect to the reference frame attached to the generalized camera, and the moment vector is the cross product of any point p on the line, taken as a vector from the reference frame origin, with the direction, q′ = p × q. As a result, the line is l = (qᵀ, q′ᵀ)ᵀ, a 6D vector whose first three entries are the unit direction of the light ray and whose last three entries are the moment vector computed from a point on the line and the direction. Since q′ is the cross product of a point on the line with q, the two parts are orthogonal to each other, so the dot product of these two entities in the Plücker vector is always zero. The remaining five parameters are also homogeneous, meaning their overall scale does not affect the line they describe, and it is convenient to fix that scale by forcing the direction vector q in (qᵀ, q′ᵀ)ᵀ to be a unit vector. Once the Plücker line is defined this way with respect to the arbitrary reference frame of the generalized camera, the set of points lying on it is given by q × q′ + αq. The first term, q × q′, is the base point: the point on the line closest to the origin of the reference frame. Any other point on the line is reached by starting from that base point and moving along the direction of the line, scaled by the value α. For example, a particular point is taken from the base point q × q′ a certain number of units α along this q,
which is the direction of the Plücker line. We refer to α as the signed distance from the base point: when α is positive we move along the unit direction of the Plücker line, and when it is negative we move in the opposite direction. Since we are looking at a multi-camera setup, our generalized camera is made up of multiple cameras fixed rigidly together; for example, the car system we saw earlier has four cameras looking in four different directions. The reference frame for this generalized camera can be chosen arbitrarily anywhere on the rigid body that carries the cameras. Consider first the case where we choose the reference frame at the center of projection of one particular camera, call it cᵢ. Then all the light rays of that camera converge at a single center of projection which coincides with the reference frame, and a pixel sampled along a light ray is (x, y). We saw in earlier lectures that, given a pixel (x, y) on an image, we can compute the direction vector of the corresponding ray from the normalized camera coordinates, K꜀ᵢ⁻¹ (x, y, 1)ᵀ, where K꜀ᵢ is the intrinsic calibration matrix of this pinhole camera in the multi-camera system. Normalizing this gives the unit direction along the light ray, which forms the first three entries q of our Plücker coordinates. Since in this case the reference frame of the multi-camera setup coincides with camera cᵢ, a point on the ray can be conveniently chosen as the origin of the reference frame: all the light rays pass through that point, and it is simply (0, 0, 0). So q′ = p × q with p = 0, and the moment vector is conveniently (0, 0, 0)ᵀ. Now consider the case where a camera center is not at the origin. Because we have a multi-camera system, for example the four cameras of the car setup, choosing the reference frame to coincide with one camera center means it cannot coincide with all the remaining cameras, so for those cameras the moment vector, the last three entries of the Plücker coordinates, cannot be zero. We therefore have to look at how to compute the Plücker coordinates of the light rays passing through these cameras in general. So let cᵢ be a camera whose center is not at the origin, with the reference frame fixed at the origin or, in general, anywhere on the rigid body to which the cameras are attached. We will further assume that
the respective cameras are calibrated: we have the intrinsic calibration matrix K꜀ᵢ of the i-th camera in our multi-camera system, and we also have the extrinsic calibration of the setup. This means that, for the reference frame we have fixed, we know for camera cᵢ the relative transformation, denoted R꜀ᵢ and t꜀ᵢ, that brings the camera frame into the reference frame, and this can be found with the calibration methods described earlier. In this case, where the reference frame is not at the center of projection of any camera, the direction vector is computed as follows. Let (x, y) be a pixel on the image of the camera we are looking at, and denote the reference frame F_w. The direction computed from the normalized camera image coordinates, which we can call q̃, is the direction of the light ray with respect to the local frame of the camera containing that pixel, so we need an additional step: transforming q̃ into the reference frame of the generalized camera. Because q̃ is just a direction vector along the ray, this is done by pre-multiplying it with the rotation of the camera extrinsics for that camera, q = R꜀ᵢ q̃, which gives the direction part of the Plücker line with respect to the reference frame of the generalized camera. The next thing to compute is the last three elements of the Plücker line, q′. Since we already have the direction q with respect to the reference frame, we need any point on this light ray expressed in the generalized camera reference frame F_w, and a convenient point to choose is the translation vector t꜀ᵢ from the extrinsic calibration of that camera: it is exactly where the center of projection of the camera lies with respect to the reference frame. The cross product of this point with the direction, q′ = t꜀ᵢ × q, gives the moment vector, and as a result we obtain the Plücker line l = (qᵀ, q′ᵀ)ᵀ from the camera intrinsics, the pixel location, and the extrinsics of that camera. Here is the example of the multi-camera system on the car shown earlier in the lecture, with an arbitrary camera cᵢ. In a self-driving car system we normally assign a common reference frame, because the car contains many other sensors as well. One of these is the inertial measurement unit, or IMU, which we normally use to measure the acceleration and the change of angles of the car. Because the IMU has the highest sampling rate, normally around 200 Hz, it is a very good sensor to take as the reference frame for the suite of sensors on a robotic system like this, and it is normally placed somewhere in the middle of the car. A good reference frame for the generalized camera is therefore the IMU, which we denote the body frame. All the cameras of the multi-camera setup are then expressed with respect to this reference frame, and, as defined in the previous slide, the Plücker line of any pixel on any camera is given by l = (qᵀ, q′ᵀ)ᵀ, where q is the inverse-intrinsics normalized camera coordinate pre-multiplied by the extrinsic rotation with respect to the reference frame, and the moment vector q′ is the cross product of a point on the ray, conveniently the extrinsic translation of the camera, with the unit direction. After looking at the definition of Plücker line coordinates for representing the light rays of a generalized camera, we next look at the two-view geometry of a generalized camera. Suppose we are given two generalized images, view 1 and view 2, with a correspondence: a 3D point X is simultaneously projected onto generalized camera 1 and generalized camera 2, at pixel (x1, y1) in the first view and the corresponding pixel (x2, y2) in the second view, and we
know, from the Plücker coordinates in the previous slides, that the light ray in the first view can be represented as a Plücker line (q1ᵀ, q1′ᵀ)ᵀ, where q1 is the unit direction of this light ray with respect to the reference frame of the first view of the generalized camera, and the moment vector q1′ is the cross product of a point on the ray, such as the extrinsic translation of the camera that observes it, with q1. We do the same for the light ray in the second view. Note that if it is the same generalized camera moved from one view to another, the extrinsic values of the cameras remain the same over the two views, but this is not strictly necessary; we just need to know where the reference frame is in each view. So in the second view we also compute the unit direction q2 and take the cross product of a point on the ray, usually the extrinsic translation of the camera that gives rise to it, with q2, obtaining the Plücker line of the second light ray. Since the two pixels are corresponding points, these two light rays must intersect at the 3D point X. Similar to the epipolar geometry of the pinhole camera, there is a relative rotation and translation relating the two reference frames. Note that this relative transformation, R and t, relates the reference frame of the generalized camera between the two views; it must be distinguished from the extrinsic values R꜀ᵢ and t꜀ᵢ, which relate each individual camera of the multi-camera setup to the reference frame. After this rigid transformation we can express the Plücker vector of the first line in the second coordinate system. That is, we have two light rays, l1 and l2, which intersect at a common point, but by the definition of the Plücker vector l1 is expressed with respect to the reference frame of the first view and l2 with respect to the second. To express both light rays coherently we must choose one reference frame, so we transform the light ray of the first frame into the second frame. The transformation is given by a 6-by-6 matrix built from R and t, the relative transformation between the two frames, which brings any Plücker line represented in the first view into the second view. After the transformation we call this line l̃1: it remains the same line in the 3D world, but instead of being represented in the frame of the first generalized camera it is represented in the frame of the second. This 6-by-6 matrix plays the role of the transformation matrix we have seen many times in the lectures, which was a 4-by-4 matrix for points; because here it acts on lines, it is 6 by 6. Suppose that after the transformation we have two Plücker lines a and b already expressed in the same reference frame. Since they come from a pair of corresponding points, the two light rays must intersect at some point X in 3D space, and two light rays intersect if and only if q_b · q_a′ + q_a · q_b′ = 0, which gives us a constraint on the intersection of the two Plücker lines. Here q_b is the direction of the second light ray, equivalent to q2, and since a and b are both expressed in the second camera coordinate frame, q_a′ will be
simply the last three entries of the first line after transformation, and similarly q_b′ is simply q2′, the last three elements of the Plücker vector of the light ray in the second view. And q_a, since a is the first light ray expressed with respect to the reference frame of the second view, is given by R q1. Substituting these into the intersection constraint, we get, in matrix form, l2ᵀ multiplied by a 6-by-6 matrix multiplied by l1, where l1 is the Plücker line in the first view and l2 the Plücker line in the second view. So corresponding light rays l1 and l2 are related by the 6-by-6 matrix [[E, R], [R, 0]]. Interestingly, E here arises from the [t]× R component of the intersection constraint and is equivalent to the regular essential matrix of two-view pinhole geometry that we learned in our previous lecture, and R is simply the rotation matrix that relates the two views. This relation is analogous to the epipolar geometry x′ᵀ E x = 0 that relates two pinhole camera views, so we call it the generalized epipolar geometry: instead of 3-by-1 image correspondences x and x′ we now have 6-by-1 Plücker line correspondences, and instead of a 3-by-3 essential matrix we have a 6-by-6 generalized essential matrix that relates a corresponding pair of Plücker lines in the two-view generalized camera. Since there are nine entries in the essential matrix and nine entries in the rotation matrix, there are altogether 18 unknown entries in the generalized essential matrix given in this form. The fortunate thing is that the constraint l2ᵀ G l1 = 0 is linear with respect to these 18 entries: expanding the matrix expression into a scalar equation gives a dot product aᵀ g = 0, where g is an 18-by-1 vector containing the 18 unknown entries of the essential matrix and the rotation matrix, and the coefficient vector a is made up of the known correspondences, namely the respective entries of l1 and l2, the Plücker light rays in the two views. So each correspondence gives one homogeneous linear equation with a known 1-by-18 coefficient vector and the 18 unknowns, which means we need at least 17 point correspondences to solve for g: the first correspondence gives a1ᵀ g = 0, the second gives a2ᵀ g = 0, and so on until aₙᵀ g = 0.
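As a concrete check of the generalized epipolar constraint above, here is a small numerical sketch; the relative pose, camera centres, and 3D point are made-up values, and the moment convention q′ = p × q from earlier in the lecture is used throughout. It builds a pair of corresponding Plücker rays and verifies that l2ᵀ G l1 vanishes for G = [[E, R], [R, 0]] with E = [t]× R:

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plucker_ray(c, X):
    """Pluecker line through camera centre c toward 3D point X,
    with moment q' = c x q (point on the ray crossed with the direction)."""
    q = (X - c) / np.linalg.norm(X - c)
    return np.concatenate([q, np.cross(c, q)])

# Made-up relative pose taking frame-1 points into frame 2: p2 = R @ p1 + t
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])        # 90-degree rotation about z
t = np.array([0.3, -0.1, 0.5])

X1 = np.array([2.0, 1.0, 4.0])          # 3D point, expressed in frame 1
l1 = plucker_ray(np.array([0.1, 0.0, 0.0]), X1)           # ray in frame 1
l2 = plucker_ray(np.array([-0.2, 0.1, 0.0]), R @ X1 + t)  # ray in frame 2

E = skew(t) @ R                          # ordinary essential matrix
G = np.block([[E, R], [R, np.zeros((3, 3))]])
residual = l2 @ G @ l1                   # ~0 for a true correspondence
```

The residual vanishes because the transformed first ray and the second ray meet at the 3D point; for noisy correspondences it would be small but nonzero.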
So we can stack the coefficient vectors into an n-by-18 matrix, which we call capital A, with g remaining the 18-by-1 unknown vector. This becomes the familiar homogeneous linear equation A g = 0 that we have seen many times over the past lectures, and it can be solved with the SVD method: taking the SVD of A as U Σ Vᵀ, the solution for g is the right singular vector in V corresponding to the least singular value. Since A g = 0 is homogeneous, that last vector of V gives the basis of the null space, and the solution of g is expressed as a one-parameter family λv. In the fundamental matrix or homography case there is one degree of freedom, the overall scale, that cannot be determined, so we leave the solution as a family. But in this particular case we can actually determine λ uniquely, because the last nine entries of g must form a rotation matrix, which is orthonormal and whose determinant must equal 1.
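The SVD recipe can be sketched as follows. The 17-row system here is synthetic (its rows are constructed to be orthogonal to a made-up ground-truth vector g0), just to show that the last right singular vector recovers the null-space direction up to sign:

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of A g = 0 with ||g|| = 1:
    the right singular vector of A for the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

rng = np.random.default_rng(1)
g0 = rng.standard_normal(18)
g0 /= np.linalg.norm(g0)             # made-up ground-truth null direction

B = rng.standard_normal((17, 18))
A = B - np.outer(B @ g0, g0)         # make every row orthogonal to g0

g = solve_homogeneous(A)             # equals g0 up to sign
```

With 17 generic rows in 18 unknowns, the null space is one-dimensional, so the recovered g matches ±g0; the sign and scale ambiguity is exactly the λ discussed above.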
So we need to enforce this constraint, and we can use it to solve for the unknown λ in the family of solutions. Since g has 18 elements, we can write it as g1 through g9 followed by g10 through g18, where the last nine entries represent the rotation matrix and are given by λv10, λv11, and so on up to λv18. We simply rearrange these last nine entries into a 3-by-3 matrix, and since this matrix from the scaled family of solutions must be a rotation matrix, we can equate its determinant, as a function of the unknown λ, to 1.
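A sketch of this scale fix, assuming the last nine entries of g hold the rotation in row-major order: since det(λM) = λ³ det(M) for a 3-by-3 matrix, the constraint det(λV_r) = 1 gives λ = (1/det(V_r))^(1/3), with the real cube root preserving the sign:

```python
import numpy as np

def fix_scale(g):
    """Resolve lambda in the family g = lambda * [vec(E); vec(R)]
    using det(R) = 1 (row-major ordering of the entries assumed)."""
    Vr = g[9:].reshape(3, 3)
    lam = np.cbrt(1.0 / np.linalg.det(Vr))  # det(lam*Vr) = lam**3 * det(Vr)
    g = lam * g
    return g[:9].reshape(3, 3), g[9:].reshape(3, 3)   # E, R

# demo: scramble a known solution by an arbitrary scale and recover it
R0 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
E0 = np.array([[0.1, 0.5, -0.2],
               [0.4, 0.0,  0.3],
               [-0.6, 0.2, 0.1]])
g = -0.37 * np.concatenate([E0.ravel(), R0.ravel()])
E, R = fix_scale(g)
```

Note that the real cube root also resolves the sign ambiguity of the SVD solution, since a negative λ flips the determinant back to +1.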
Hence we get one equation in one unknown, which we can solve for λ. Once λ is solved, we have obtained a unique solution for E and R in our two-view generalized epipolar geometry, and finally we can solve for the translation vector t using the decomposition we defined in earlier lectures for the essential matrix. Since we know a unique solution for g, we obtain the absolute scale: there is no ambiguity in the scale of the translation vector. Intuitively, this is true because the Plücker ray correspondences l1 and l2 are computed from known extrinsics R꜀ᵢ and t꜀ᵢ, where t꜀ᵢ is known at absolute metric scale. Since the correspondences are embedded in the coefficients a of the generalized epipolar constraint, those coefficients already carry information about the absolute scale, and we exploit it, together with the additional constraint that the determinant of the rotation matrix must equal 1.
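One simple way to carry out that last step, as a sketch: since E = [t]× R and R Rᵀ = I, the product E Rᵀ is exactly the skew matrix [t]×, from which t can be read off entry by entry (the SVD-based decomposition mentioned in the lecture is the more general route):

```python
import numpy as np

def translation_from_E_R(E, R):
    """Recover t from E = [t]x R: since E @ R.T == [t]x,
    read the three components off the skew-symmetric matrix."""
    Tx = E @ R.T
    return np.array([Tx[2, 1], Tx[0, 2], Tx[1, 0]])

# demo with made-up pose values
t0 = np.array([0.3, -0.1, 0.5])
R0 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
Tx0 = np.array([[0.0, -t0[2], t0[1]],
                [t0[2], 0.0, -t0[0]],
                [-t0[1], t0[0], 0.0]])
t = translation_from_E_R(Tx0 @ R0, R0)   # recovers t0 at metric scale
```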
Hence this information is directly transferred to the relative transformation between the two views, and the absolute scale of the translation can also be found: there is no scale ambiguity in the relative translation for a generalized epipolar geometry. Once we have obtained the relative transformation between the two views, given by the rotation matrix and the translation vector, we go on to triangulation. Since this is a generalized camera setup, the linear triangulation algorithm from our previous lectures cannot be used here, but we can use the Plücker line equation defined at the very beginning of this lecture. A point on the Plücker line of the second view is given by q2 × q2′ + α2 q2: the base point q2 × q2′, the point closest to the reference frame, plus α2 units along the direction q2. Similarly, a point on the line l1 of the first view is q1 × q1′ + α1 q1, but to express everything coherently with respect to one reference frame, which we choose to be the second view, the base point must be rotated into the second frame by the rotation matrix computed in our generalized epipolar geometry, and the translation applied, while the unit direction, being a 3-by-1 direction vector, only needs to be rotated. Since the two lines intersect at a certain point, they define the same point in the reference frame of the second view, so we equate the two points, the first parameterized by α1 and the second by α2, and solve for the two unknowns such that the two expressions refer to the single 3D point where the light rays intersect: R(q1 × q1′) + t + α1 R q1 = q2 × q2′ + α2 q2. Interestingly, the only unknowns in this equation are α1 and α2, so we can rearrange it into an overdetermined inhomogeneous linear equation in the 2-by-1 vector (α1, α2), which we can solve easily and without any ambiguity. Once α1 and α2 are obtained, we simply plug them into the line equations to get the point with respect to either reference frame. The point in the second frame is q2 × q2′ + α2 q2, if we decide that the reference frame of the second view is the reference frame for our 3D reconstruction, or we can choose the first frame as the reference for structure from motion with a generalized camera.
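The triangulation step above can be sketched as a tiny least-squares solve for (α1, α2); the pose, ray origins, and 3D point below are made-up values, with the moment convention q′ = p × q used throughout this lecture:

```python
import numpy as np

def plucker_ray(c, X):
    """Pluecker line through point c toward 3D point X, moment q' = c x q."""
    q = (X - c) / np.linalg.norm(X - c)
    return np.concatenate([q, np.cross(c, q)])

def triangulate(l1, l2, R, t):
    """Intersect corresponding rays l1 (frame 1) and l2 (frame 2), where
    p2 = R @ p1 + t. Solves the 3-equation, 2-unknown system
    R(q1 x q1') + t + a1*R q1 = q2 x q2' + a2*q2 in least squares."""
    q1, q1p = l1[:3], l1[3:]
    q2, q2p = l2[:3], l2[3:]
    A = np.column_stack([R @ q1, -q2])              # 3x2 coefficients
    b = np.cross(q2, q2p) - R @ np.cross(q1, q1p) - t
    (a1, a2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.cross(q2, q2p) + a2 * q2              # point in second frame

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.5, 0.0, 0.0])
X1 = np.array([1.0, 1.0, 5.0])                      # ground truth, frame 1
l1 = plucker_ray(np.array([0.0, 0.0, 0.0]), X1)
l2 = plucker_ray(np.array([0.1, -0.2, 0.0]), R @ X1 + t)
X2 = triangulate(l1, l2, R, t)                      # ~ R @ X1 + t
```

For exactly intersecting rays the least-squares solution is exact; with noise it returns the closest consistent pair of points along the two rays.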
3D Computer Vision, National University of Singapore
Lecture 1, Part 2: 2D and 1D projective geometry
so after looking at the homogeneous representation of points and lines we'll now look at the representation of konigs in the projective 2d space a coding is actually a curve described by a second degree polynomial equation in the plane on the cartesian coordinates and in euclidean geometry conics are of three main types namely the hyperbola the ellipse as well as the parabola and we'll see in the next slide that how this three different types of conics are generated from geometry and these three types of conics arises as conic sections generated by planes differing from or of differing orientation we'll see in the next slide that this is in fact the intersection of planes of different orientation that intersects two inverted cones and note that besides these three different types of conics there are also degenerate conics which we will define later so here's the illustration of how the three main types of conics arises from geometry the first type which is parabola it occurs when a plane intersects the two inverted conics uh at the location or at an orientation such that the tangent to this plane is aligned with the axis with the side axis or tilted axis of the two inverted conics over here so in this case here the intersection of the plane with the two inverted conics which is given by this curve over here would define a parabola conic and in the case where the tangent of the plane is aligned with the base of the or parallel with a base of the inverted cones then this will give rise the intersection will give rise to a sucker and if it is tilted at a certain angle but it's still close to the the tangent of the plane is too close to the the alignment with the base of the circle then this will give rise to the ellipse conics and in the case where the tangent of the plane is aligned with the vertical axis parallel with the vertical axis of the inverted cones then the intersection of this plane with the inverted cones would give rise to the hyperbola so now having a 
look at the geometric intuition behind how the three different types of conics are formed, let's go into the mathematical formulation of conics. The equation of a conic in inhomogeneous (Cartesian) coordinates is a second degree polynomial, a x^2 + b x y + c y^2 + d x + e y + f = 0. Let us follow the concept behind the homogeneous representation of a point, where a point is written in homogeneous coordinates (x1, x2, x3) with x = x1/x3 and y = x2/x3: we simply divide the first two elements of the homogeneous coordinates by the third to get back the Cartesian coordinates. Substituting x = x1/x3 and y = x2/x3 into the conic equation and clearing denominators gives the homogeneous representation a x1^2 + b x1 x2 + c x2^2 + d x1 x3 + e x2 x3 + f x3^2 = 0, which is parameterized by x1, x2, x3 with the coefficients a to f intact. We can rewrite this homogeneous form as a matrix equation, x^T C x = 0, where x = (x1, x2, x3)^T and C is the symmetric 3x3 matrix [[a, b/2, d/2], [b/2, c, e/2], [d/2, e/2, f]]. So C is the homogeneous representation of a conic, and only the ratios of the matrix elements are important: multiplying C by a non-zero scalar has no effect.
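The matrix form of the conic equation can be checked numerically. This is a minimal sketch using NumPy; the helper name `conic_matrix` is illustrative, not from the lecture, and the unit circle is used as the example conic:

```python
import numpy as np

def conic_matrix(a, b, c, d, e, f):
    """Symmetric 3x3 matrix C of the conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    return np.array([[a,   b/2, d/2],
                     [b/2, c,   e/2],
                     [d/2, e/2, f  ]])

# unit circle: x^2 + y^2 - 1 = 0
C = conic_matrix(1, 0, 1, 0, 0, -1)
for x, y in [(1.0, 0.0), (0.6, 0.8)]:     # points on the circle
    xh = np.array([x, y, 1.0])            # homogeneous coordinates (x3 = 1)
    print(np.isclose(xh @ C @ xh, 0.0))   # incidence relation x^T C x = 0
```

Note that scaling C by any non-zero constant leaves the zero set unchanged, which is the homogeneity property mentioned above.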
This is similar to the homogeneous representation of a line and of a point: because the equation equates to zero, multiplying by any scalar k cancels out and hence has no effect on the conic equation. The conic can therefore be seen to have five degrees of freedom, which can be thought of as the ratios of a, b, c, d, e and f; equivalently, the six elements of the symmetric matrix, less one for the scale, give five degrees of freedom. Each point x_i on the conic, with coordinates (x_i, y_i), places one constraint on the conic coefficients through the second degree polynomial equation. Geometrically, given a conic such as an ellipse, every point that lies on it gives one such constraint, and the constraint can be rewritten as a homogeneous linear equation, the dot product (x_i^2, x_i y_i, y_i^2, x_i, y_i, 1) c = 0, where c = (a, b, c, d, e, f)^T is the vector of conic coefficients written as a 6-vector. Since there are six unknowns in this 6-vector, less one for the scale, there are, as we saw earlier, only five degrees of freedom defining the conic C; what this means is that we
only need a total of five constraints, a minimum of five, in order to solve for the unknown conic. Geometrically, this means that we need to know five points on the conic in order to solve for the conic equation represented by C. Stacking the constraints together, we get a homogeneous linear system A c = 0, where A is a 5x6 matrix and c is a 6x1 vector; in linear algebra terms, the solution c lies in the null space of the matrix A. Once the five points are known, the matrix A is fully known, and we can easily solve for the null space of A to get the solution for c, which represents the conic equation. We'll see in a later lecture that this can be easily solved by taking the singular value decomposition (SVD) of A. One interesting thing to note is the tangent line to a conic: the intersection of a line with a conic gives rise to a single point x when the line is tangent to the conic, and this tangent line is given by l = C x. (We'll also see later, in the degenerate cases, that a line might intersect the conic at two points, or not intersect it at all; here we consider the tangent case.)
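The five-point, null-space fit described above can be sketched as follows, again with NumPy. The helper name `fit_conic` is hypothetical, and the SVD's last right singular vector plays the role of the null-space computation mentioned in the lecture:

```python
import numpy as np

def fit_conic(points):
    """Conic coefficients (a,b,c,d,e,f), up to scale, as the null space of the 5x6 A."""
    A = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in points])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]          # right singular vector of the smallest singular value

pts = [(1, 0), (-1, 0), (0, 1), (0, -1), (0.6, 0.8)]  # five points on the unit circle
a, b, c, d, e, f = fit_conic(pts)
# the recovered conic should also vanish on another point of the same circle
x, y = -0.6, 0.8
print(np.isclose(a*x*x + b*x*y + c*y*y + d*x + e*y + f, 0.0))
```

Since the coefficients are only defined up to scale, the residual test is done on the homogeneous equation itself rather than on the raw coefficient values.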
Here is the proof, in the algebraic sense, that the tangent line is l = C x. The line l = C x passes through x, since l^T x = x^T C x = 0: here we use the incidence relation l^T x = 0, which we learned holds when the point x lies on the line l, together with the fact that every point on the conic satisfies x^T C x = 0. If the line has only one contact point with the conic, then it is a tangent, and we are done: with a unique point of contact we can rewrite l^T = x^T C, which implies l = C x. Hence we have proven that the tangent line at x is l = C x. Now, the conic we have defined so far should properly be termed a point conic, as it is uniquely defined by an equation on points: the second degree polynomial defines the points on the conic, and we saw earlier that five such points uniquely define it. There is also a dual to the conic, just as there is a dual to a line. In the previous slide we saw the point-line duality given by x^T l = l^T x = 0, and this duality means we can define a line by points and interchange the point with the line: many points can be used to represent a line, and at the same time a line can be used to represent the many points lying on it. In the same way, there is a duality between line conics and point conics.
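The tangent relation l = C x from the proof above can be verified numerically on the unit circle. This is a sketch assuming NumPy; the check that a second point of the tangent line lies strictly outside the circle illustrates the unique point of contact:

```python
import numpy as np

C = np.diag([1.0, 1.0, -1.0])        # unit circle x^2 + y^2 = 1
x = np.array([0.6, 0.8, 1.0])        # a point on the circle
l = C @ x                            # tangent line l = C x at that point
print(np.isclose(l @ x, 0.0))        # the tangent passes through x
# move along the line direction: (0.8, -0.6, 0) satisfies l . dir = 0
y = x + np.array([0.8, -0.6, 0.0])   # another point of the tangent line
print(np.isclose(l @ y, 0.0))        # y is still on the line ...
print(y @ C @ y > 0)                 # ... but strictly outside the circle
```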
A point conic is what we defined earlier: it is uniquely determined by a set of five or more points, so any five or more points on the conic define it uniquely. We can do the same with lines: a conic can also be uniquely defined by lines tangent to it, and this relation is given by the equation l^T C* l = 0. We define C* to be the dual conic, the one defined by lines; in other words, it is the line conic. The analogy is this: for the point conic the equation we saw earlier is x^T C x = 0, meaning the points x lie on the conic C, while here it means the lines l lie on the dual conic C*. A dual conic also has five degrees of freedom, which means the conic can also be computed from five unique lines that define it. For a non-singular symmetric matrix C, the dual conic C* is given by the inverse of the point conic: we can write C* = C^{-1}, up to scale. Here is the proof. Given a point x on the conic C, the tangent there is, as we saw in the previous slide, l = C x; if C is invertible, this implies x = C^{-1} l, and rewriting C^{-1} as C* gives the relation x = C* l,
which is the dual of the tangent relation l = C x: here a point is defined by the line tangent to the dual conic C*, hence this relation. Furthermore, since x satisfies the second degree polynomial equation we saw earlier, we obtain the dual relation by substituting x = C^{-1} l into both sides of the quadratic equation x^T C x = 0. Since the transpose of C^{-1} l is l^T C^{-T}, and since C C^{-1} equals the identity while C^{-T} = C^{-1} because C is symmetric, this simplifies to l^T C^{-1} l = 0. As a result, the relation can be rewritten as l^T C* l = 0 simply by setting the dual conic equal to the inverse of the conic. Geometrically, dual conics are also known as conic envelopes, because as I said earlier the dual or line conic is defined by all the tangent lines, just as the point conic is defined by points on the conic: five or more points uniquely define the point conic C, and in the same way five or more tangent lines define the line conic C*. Now suppose that, in addition to x, there is another point of intersection of the tangent line with the conic, which we denote y. It then follows that y^T C y = 0, by the same equation x^T C x = 0 that we saw earlier.
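The envelope relation l^T C* l = 0 with C* = C^{-1} can be checked on the unit circle, whose conic matrix happens to be its own inverse. A NumPy sketch, sampling a few tangent lines:

```python
import numpy as np

C = np.diag([1.0, 1.0, -1.0])            # unit circle (full-rank, non-degenerate conic)
C_star = np.linalg.inv(C)                # dual conic C* = C^{-1} (up to scale)
for t in [0.0, 1.0, 2.5]:                # a few points around the circle
    x = np.array([np.cos(t), np.sin(t), 1.0])   # point on the conic
    l = C @ x                                   # its tangent line
    print(np.isclose(l @ C_star @ l, 0.0))      # tangents satisfy l^T C* l = 0
```

Algebraically this is just l^T C^{-1} l = x^T C C^{-1} C x = x^T C x = 0, mirroring the substitution in the proof above.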
Since y must lie on the line l, the incidence equation l^T y = 0 must also hold. We have also seen that x is a point of intersection of the tangent line and the conic, so the equation l = C x is true; replacing l with C x in the incidence equation gives another relation, x^T C y = l^T y = 0. Interestingly, if all of these relations hold, that is, if the tangent line has two points of intersection with the conic, then it follows that (x + alpha y)^T C (x + alpha y) = 0 for all alpha, since every cross term vanishes. Here x + alpha y is simply another point on the line: given the two points x and y, adding any multiple alpha of y to x denotes a point of the line through them. So if there is another point y, in addition to x, lying on the intersection of the tangent line with the conic, then the whole line joining x and y must lie on the conic. This means the conic can no longer have the shape of an ellipse, hyperbola, parabola or circle: the conic must contain a straight line, and the tangent line, itself a straight line, has all of its points lying on the conic. We say this kind of conic is a degenerate conic, and we will look at several cases of degenerate conics. So we have seen algebraically that the conic degenerates when a line tangent to it intersects it at every single point. What this means is that, given a line spanned by
x and y, the conic is degenerate when it meets this tangent line at every point of the line spanned by x and y. Geometrically, there are three cases where this can happen. In the first case, x and y collapse to a single point, meaning x = y: geometrically, the plane intersects the two inverted cones exactly at their common apex, and this conic satisfies the equation for whatever x and y. In the second case, the plane intersects the cones exactly along the slant axis, and the intersection forms a single line: this line is the conic C, and we can see very clearly that any point on this line still lies on the conic, because the conic itself is a line. In the third case, the plane intersects the two inverted cones in such a way that the intersection forms two straight lines, and any point lying on either of the two straight lines also satisfies the conic equation; all such points satisfy both equations, since we are saying that x and y lie on the tangent line as well as on the conic itself. We'll see that in order for this to happen, the rank of C must be less than 3.
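The statement above, that for a degenerate conic the whole line x + alpha y lies on the conic, can be illustrated on the two-line conic x^2 - y^2 = 0 (the pair of lines x = ±y). A NumPy sketch with arbitrarily chosen example points:

```python
import numpy as np

C = np.diag([1.0, -1.0, 0.0])       # degenerate conic x^2 - y^2 = 0 (two lines x = ±y)
x = np.array([0.0, 0.0, 1.0])       # two distinct points of the conic,
y = np.array([1.0, 1.0, 1.0])       # both lying on the line x = y
for alpha in [0.0, 0.5, 2.0, -3.0]:
    p = x + alpha * y               # every point of the line joining x and y
    print(np.isclose(p @ C @ p, 0.0))
print(np.linalg.matrix_rank(C))     # rank 2 < 3, hence degenerate
```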
As I mentioned in the previous slide, if the matrix is not full rank then the conic is termed degenerate. What this means is that the rank of C must be less than three: C is a 3x3 square matrix, and any rank less than three means the matrix is not full rank. The degenerate conics include two lines, of rank two, and a repeated line, of rank one; we will look at these two cases in turn. In the first case, the conic consists of two lines, and we'll see that it forms a rank-two conic. Such a conic can be formed from two lines, which we denote l and m: taking the outer product l m^T plus its transpose m l^T, we get the rank-two conic C = l m^T + m l^T. Indeed, putting this into the conic equation we saw earlier gives x^T (l m^T + m l^T) x = 2 (x^T l)(m^T x) = 0, which holds exactly when x lies on one of the two lines: the first factor, x^T l, is the incidence equation of the line l with a point lying on it, and the second factor, m^T x, is that of the second line m. So this formulation reveals two line equations, and hence the degenerate conic of rank two is actually two lines, as represented geometrically in the figure. And in the second case, if
the rank of the conic matrix C equals 1, then the conic is actually a repeated line, the same line overlapping itself: C = l l^T gives a repeated line, with rank(C) = 1. Putting this into the conic equation, we get x^T l l^T x = (l^T x)(l^T x) = 0, and it is not difficult to see that this consists of two factors which are duals of each other, highlighting the point-line relationship; hence l is the same repeated line, of rank one, forming the conic. There also exist degenerate dual conics, or dual line conics, which include two points, of rank two, and a repeated point, of rank one, as we saw in the first of the geometric illustrations of degenerate conics earlier. An example is the line conic C* = x y^T + y x^T, which is of rank two and consists of two points. The reason it has to be formed in this way is that a conic matrix is symmetric and square, so this construction ensures the resulting C* is a valid conic and has rank two. This dual conic consists of all the lines passing through either of the two points x or y: geometrically, each point is made up of all the lines intersecting at it. A similar formulation can also be done for the rank-one line conic of a repeated point.
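The rank-two construction C = l m^T + m l^T from the two-line case above can be sketched directly. NumPy assumed; the two lines chosen here are arbitrary examples, not from the lecture:

```python
import numpy as np

l = np.array([1.0, 0.0, 0.0])        # the line x = 0
m = np.array([0.0, 1.0, -1.0])       # the line y = 1
C = np.outer(l, m) + np.outer(m, l)  # rank-2 degenerate conic made of the two lines
print(np.linalg.matrix_rank(C))      # 2

# any point on either line satisfies x^T C x = 2 (l^T x)(m^T x) = 0
p_on_l = np.array([0.0, 5.0, 1.0])   # satisfies x = 0
p_on_m = np.array([7.0, 1.0, 1.0])   # satisfies y = 1
print(np.isclose(p_on_l @ C @ p_on_l, 0.0), np.isclose(p_on_m @ C @ p_on_m, 0.0))
```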
I won't go into the proof, or show that this is indeed the set of lines through the two points, because as I mentioned the duality principle means that once we have shown a relation for point conics, the same relation also holds for the dual line conics. Note that in this case the dual conic C* is not the inverse of C: as mentioned earlier, when C is full rank, of rank 3, we have C* = C^{-1}, but since here the rank of C is less than 3, that relation no longer holds. Now, after looking at the definitions of points, lines and conics in homogeneous coordinates and in the 2D projective space, let's look at how these entities can be transformed in the 2D projective space. 2D projective geometry is, as I mentioned, the study of the properties of the projective plane that are invariant under a group of transformations known as projectivities. As illustrated earlier, 2D projective geometry can be seen as rays projecting onto planes to form points, lines, conics and so on, where the planes sit at k = 1, k = 2, or any real k; in studying projective transformations, we are studying how transforming these intersecting planes leaves certain properties invariant. Specifically, a projectivity must be an invertible mapping h from P2 to P2 itself: it maps any input entity in the 2D projective space, whether a point, line or conic, into the 2D projective space again. This linear mapping simply means that if
we have a point x and denote the mapping by h, we can rewrite it in a form where it becomes a linear mapping, and it should also be invertible: this implies we can recover x from the inverse mapping, x = h^{-1}(x'), so h takes us from one projective space to another and its inverse brings us back to the original space. A projectivity is also called a collineation, a projective transformation, or simply a homography. The theorem of projective transformations is that a mapping h from the 2D projective space to the 2D projective space is a projectivity if and only if there exists a non-singular 3x3 matrix H such that, for any point of the 2D projective space represented by a vector x, the mapping satisfies x' = H x, with both x and x' in the 2D projective space and H invertible. We will skip the full proof; the partial proof begins as follows. Let x1, x2 and x3 lie on a line, so the line-point incidence relation holds for each of them: the dot product of the line with each point is zero, l^T x_i = 0 for i = 1, 2, 3. Let H be a non-singular 3x3 matrix. We can then verify that, as I mentioned at the very beginning of the lecture, although the geometric properties of many shapes change when they undergo a projective transformation, straightness does not: if we map these three points
using this homography H, they are transformed from one P2 space to another, and after the transformation the three points should still appear collinear. We can easily verify this. The projective transformation of the line is l' = H^{-T} l, and the projective transformation of each point is x_i' = H x_i, mapping the line independently from the points. Putting them together in the incidence relation, l'^T x_i' = (H^{-T} l)^T (H x_i) = l^T H^{-1} H x_i = l^T x_i = 0, because H^{-1} and H cancel out. So after the homography mapping, l'^T x_i' is still zero: all three points are still collinear, still lying on the line, and hence collinearity is preserved by the transformation. This is what we want H to satisfy: a mapping of P2 to P2 that preserves collinearity and is a linear, invertible mapping, following the rules we laid out for the projective transformation. We now define the planar projective transformation in matrix form: denoting the final point in P2 as x' = (x1', x2', x3')^T, H is a 3x3 matrix performing a linear transformation of the original point x in P2 and mapping it into another P2 point x'; or simply, we write this linear mapping as x' = H x.
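The cancellation l'^T x' = l^T H^{-1} H x = l^T x shown above can be checked numerically. The homography H below is an arbitrary non-singular example, not from the lecture, and the three points are chosen to lie on the example line:

```python
import numpy as np

H = np.array([[1.0,  0.2,  3.0],    # an arbitrary non-singular 3x3 homography
              [0.1,  0.9, -1.0],
              [0.01, 0.0,  1.0]])
l = np.array([1.0, -1.0, 2.0])      # a line
xs = [np.array([0.0, 2.0, 1.0]),    # three points on l (each satisfies l . x = 0)
      np.array([1.0, 3.0, 1.0]),
      np.array([-2.0, 0.0, 1.0])]

l_prime = np.linalg.inv(H).T @ l            # lines transform as l' = H^-T l
for x in xs:
    x_prime = H @ x                         # points transform as x' = H x
    print(np.isclose(l_prime @ x_prime, 0.0))   # incidence is preserved
```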
Here are some properties of H. It has to be a non-singular 3x3 matrix, because as mentioned before H needs to be invertible, and hence non-singular, for the relation to hold. It is also a homogeneous matrix, since only the ratios of the matrix elements are significant, and it has 8 degrees of freedom among its 9 elements; we will see why when we learn about the robust estimation of homographies in a later lecture. Geometrically, what the homography or projective transformation means is that we are trying to find a linear mapping between planes. A point is represented by the intersection of a ray with a plane; for planes parallel to each other, with homogeneous coordinates (kx, ky, k) at the different intervals k, the ray intersects all of them at what is the same point in homogeneous coordinates. The homogeneous linear mapping means that after transforming one plane into another plane that is not parallel to it, and considering the same ray intersecting both planes, the relation between the two intersection points, which we call x and x', is given by a homography, written x' = H x. The projective transformation x' = H x can arise in perspective images; some examples are given here. These are the specific cases where this transformation is valid; it does not hold in all cases, but it has to be a transformation from planes to planes, because
as we defined in the earlier slide, what we are interested in is the intersection of a ray from the origin with different planes, and the relation between the two points where the planes intersect that ray is given by the homography equation; hence the points have to lie on a plane for this to be valid. The other condition is that the images have to be perspective images, which means all the light rays have to converge at a single point of convergence; we will look at this in more detail in a future lecture. Here we study the effect of the homography on different geometric structures. We saw earlier that if a point x_i lies on a line, then the transformed point under the projective transformation is x_i' = H x_i, and all the points x_i that originally lie on the line l are, after the transformation, still lying on a line, which we denote l'. The transformation of each point is given by x_i' = H x_i, and the projective transformation of the line is chosen so that the incidence of the points on the line is preserved: plugging both into the incidence equation, the H cancels out and the result is still zero, giving the incidence relation, which means collinearity is preserved by the mapping. Hence we can write: if the
point transformation is given by x' = H x, then the line transformation is given by l' = H^{-T} l, or equivalently its transpose, l'^T = l^T H^{-1}. Now let's look at how the projective transformation affects a conic. Under the same point transformation x' = H x, the conic transforms as C' = H^{-T} C H^{-1}, and a dual conic C* transforms as C*' = H C* H^T. It is important to note here that C* = C^{-1} only when C is full rank, which means we are looking at a conic that is not degenerate. Here is the proof of the conic transformation rule under the point transformation x' = H x. Following the conic equation, we have x = H^{-1} x', so substituting into x^T C x = 0 gives (H^{-1} x')^T C (H^{-1} x') = x'^T H^{-T} C H^{-1} x' = 0, using the fact that the transpose of H^{-1} x' is x'^T H^{-T}. Further evaluating this expression, we arrive at the quadratic form x'^T C' x'.
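The rule C' = H^{-T} C H^{-1} can be verified by mapping points of a conic and checking that their images lie on the transformed conic. A NumPy sketch with an arbitrary example H, not from the lecture:

```python
import numpy as np

C = np.diag([1.0, 1.0, -1.0])                 # unit circle
H = np.array([[2.0, 0.0, 1.0],                # an arbitrary non-singular homography
              [0.5, 1.5, 0.0],
              [0.0, 0.1, 1.0]])
Hi = np.linalg.inv(H)
C_prime = Hi.T @ C @ Hi                       # conics transform as C' = H^-T C H^-1

for t in [0.3, 1.2, 4.0]:
    x = np.array([np.cos(t), np.sin(t), 1.0]) # point on C
    x_prime = H @ x                           # its image under H
    print(np.isclose(x_prime @ C_prime @ x_prime, 0.0))
```

The dual rule C*' = H C* H^T follows the same pattern, with tangent lines in place of points.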
From this relation we can easily see that C', the conic after the transformation, equals H^{-T} C H^{-1}, which is the linear mapping rule for conics. Now, after looking at the homography, which defines the projective transformation of points, lines and conics in the projective space, let's look at the hierarchy of transformations. The first level in the hierarchy is what we call the isometries: isometries are transformations of the plane, in the R2 space, that preserve Euclidean distance. Again the geometric picture is a ray intersected by planes, and the projective transformation H geometrically transforms one plane into another orientation and location; when the planes are parallel and evenly spaced, the homogeneous coordinates mean the same point, and H relates x and x' when the two planes are not parallel to each other. There are special cases of H, defining particular transformations between the two planes, that preserve Euclidean distance. Since H is a 3x3 matrix with 8 degrees of freedom when its nine entries are free, with no constraint on them except the scale, there is in general no constraint between the two planes; but in the case where H is the linear mapping defined by the 3x3 matrix whose upper-left 2x2 block is a rotation and whose last column is given by (t_x, t_y, 1),
This is equivalent to the isometry transformation, where the upper-left block is a 2D rotation and (tx, ty) is a 2D translation. Suppose I have a point (x, y) and I rotate the axes by an angle theta into new axes x' and y'; the rotation matrix, which carries the sign element eps (we will see in the next slide what eps means, but assume for now that eps = 1), maps the point from the original x-y axes into the new x'-y' axes, with the rotation anchored at the origin. In the case where the origin also moves, that is, the axes are rotated by theta and also translated by (tx, ty) with respect to the original x-y axes, what we are interested in is the coordinate of a point, originally (x, y), in the new coordinate frame, and this is given by the isometry transformation. We can see that the Euclidean distance between any two points is preserved: if two points are a distance d apart when both are expressed in the original x-y axes, then after the transformation, expressed with respect to x' and y', the distance between them still remains d. Hence the Euclidean distance is preserved.

If eps = 1, the isometry is orientation-preserving and is a Euclidean transformation; we will cover the rotation matrix and the translation vector in more detail, with derivations in 3D space, when we talk about 3D projective geometry in the next lecture. If eps = -1, the isometry reverses orientation and becomes a reflection: instead of just rotating and translating the axes into x' and y', we additionally reflect an axis, so the point is now expressed with respect to the reflected axes x'' and y''. Regardless of which isometry any two points undergo, the distance between them is always preserved; so lengths are preserved, the angle formed by any three points is preserved, and areas are preserved as well.

The second level in the hierarchy of transformations is the similarity transformation: an isometry composed with an isotropic scaling. In addition to the rotation matrix and the translation vector we now have a scale, meaning that we scale all the points, lines and conics in the space up or down, and we represent this scaling by a scalar s.
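The distance-preservation property of isometries, for both eps = +1 and eps = -1, can be checked directly. This is a small numpy sketch with arbitrary example values (the function name and the sample points are mine, not from the lecture):

```python
import numpy as np

def isometry(theta, tx, ty, eps=1.0):
    """Planar isometry H_E in homogeneous coordinates.

    eps = +1: orientation-preserving Euclidean motion (rotation + translation);
    eps = -1: orientation-reversing (composed with a reflection)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[eps * c, -s, tx],
                     [eps * s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Two arbitrary points in homogeneous coordinates
p = np.array([1.0, 2.0, 1.0])
q = np.array([-0.5, 3.0, 1.0])
d_before = np.linalg.norm((p - q)[:2])

for eps in (1.0, -1.0):
    H = isometry(theta=0.8, tx=2.0, ty=-1.0, eps=eps)
    p2, q2 = H @ p, H @ q            # last coordinate stays 1 for an isometry
    d_after = np.linalg.norm((p2 - q2)[:2])
    print(eps, np.isclose(d_before, d_after))  # distance preserved in both cases
```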
The scalar s here is an isotropic scaling: isotropic means that it scales equally in both the x and y directions. A similarity transform is also known as an equiform transformation because it preserves shape. In other words, suppose I have a square formed by four points; after a transformation with, say, rotation equal to the identity, a translation of tx = 2 in the x direction, and s = 2, the square is moved two steps along the x-axis and scaled to twice its size, and this is exactly what the similarity (equiform) transformation gives. We can see that after this transformation the mapping of straight lines is preserved and the shape is similarly preserved. H_S has four degrees of freedom, three from the isometry plus one for the scale, compared with the three degrees of freedom of the isometry we saw previously. In addition to shape, angles, the ratio of two lengths and the ratio of areas are also preserved under a similarity transform.

The third level of transformation is what we know as affinity. An affine transformation is a non-singular linear transformation followed by a translation, and it is represented by a 3-by-3 matrix. In this case there is no longer a notion of a rotation matrix, that is, of an orientation about the z-axis: the four elements of the upper-left 2-by-2 block A can take any values, provided A is non-singular, because at the end of the day H_A has to be invertible. Altogether H_A has six degrees of freedom and can be computed from three point correspondences: as we will see when we talk about homography, each point correspondence gives two constraints, hence three points are sufficient to compute the six degrees of freedom; we defer this to lecture 4, where we look at homography estimation. The invariants in this case are parallel lines, the ratio of lengths of parallel line segments, and the ratio of areas.

We can also see that the affine matrix A can always be decomposed into four factors, A = R(theta) R(-phi) D R(phi), where R(theta) and R(phi) are rotations by theta and phi respectively and D is a diagonal matrix. This decomposition follows directly from the singular value decomposition A = U D V^T: any square matrix can be factorized into an orthogonal matrix U, a diagonal matrix D containing the singular values (the square roots of the eigenvalues of A^T A), and an orthogonal matrix V. Since V is orthogonal, its transpose equals its inverse, so V^T V is the identity, and inserting V^T V between U and D
will not affect the end result of the multiplication. Hence we can group U and V^T together and call the product R(theta): since U and V are orthogonal matrices, the product of two orthogonal matrices is still orthogonal, and it plays the role of the rotation matrix here. We are left with V D V^T in the middle, and since V^T equals V^-1 we can write V as a rotation R(-phi), and V^T as its inverse R(phi), which is simply the rotation by the negative of the angle. So we have seen that the affine matrix A decomposes, via the singular value decomposition, into two outer rotations and a diagonal matrix of singular values in between. What this means geometrically is that if I take an original set of axes, then after applying A they get skewed in two directions and rotated by some angle. Reading the decomposition from right to left, we first apply the rotation R(phi), which rotates the original axes by phi into a new set of axes; then we multiply by the diagonal matrix D, which scales, or stretches, the two axes individually by the diagonal values.
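The grouping argument above can be reproduced with numpy's SVD. One caveat: numpy may return U or V with determinant -1 (a reflection rather than a pure rotation), but the identity A = (U V^T)(V D V^T) holds regardless, and the outer factor is always orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary non-singular 2x2 affine block A
A = rng.normal(size=(2, 2))
assert abs(np.linalg.det(A)) > 1e-6

# SVD: A = U D V^T with U, V orthogonal and D diagonal (singular values)
U, S, Vt = np.linalg.svd(A)
D = np.diag(S)

# Inserting V^T V = I between U and D regroups the factors:
# A = U D V^T = (U V^T) (V D V^T) = R(theta) * [R(-phi) D R(phi)]
R_theta = U @ Vt       # product of orthogonal matrices, itself orthogonal
R_minus_phi = Vt.T     # plays the role of R(-phi)
R_phi = Vt             # plays the role of R(phi) = R(-phi)^-1

A_rebuilt = R_theta @ R_minus_phi @ D @ R_phi
print(np.allclose(A_rebuilt, A))                    # True: decomposition is exact
print(np.allclose(R_theta.T @ R_theta, np.eye(2)))  # True: outer factor orthogonal
```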
With D equal to diag(lambda_1, lambda_2), multiplying by D simply means multiplying the first component of x by lambda_1 and the second component by lambda_2, hence stretching along the two directions by lambda_1 and lambda_2. Finally we multiply by the two rotation matrices: after the scaling we apply R(-phi), which rotates the axes back upright, undoing the earlier rotation by phi, and then we apply a final rotation by theta, which takes the upright axes to the final configuration after applying A.

Now, the highest form in the hierarchy of transformations is the projective transformation itself, also known as a homography. It is a non-singular linear transformation of homogeneous coordinates, as we mentioned earlier: a 3-by-3 matrix H with eight degrees of freedom, where the scale does not matter but otherwise all nine entries are free. In particular the entries v of the last row can also be zero; they can be any numbers. H_P has nine elements but only their ratios are significant, hence eight degrees of freedom. Note that it is not always possible to scale the matrix so that the bottom-right entry is unity, since that entry could be zero; so it is not in general possible to scale the homography H_P into a form with last row (0, 0, 1) like the similarity or affine transformations.
A projective transformation between two planes can be computed from four point correspondences, with no three points collinear on either plane; we will look at this in more detail when we talk about homography estimation in lecture 4. The intuition is that H_P has eight degrees of freedom and, as mentioned, each point correspondence gives a constraint of two degrees of freedom, so altogether we need four correspondences to find the general projective transformation. Another point is that it is not possible to distinguish between orientation-preserving and orientation-reversing projectivities in the 2D projective space, because all eight degrees of freedom are free. The invariants here are the order of contact, tangency, and the cross ratio, which we will talk about later.

Here is a summary of the hierarchy of transformations so far: the Euclidean transformation, the similarity transformation, the affine transformation and the projective transformation, with the invariant properties of each. It can also be shown that the projective transformation can be decomposed into a chain H = H_S H_A H_P, where H_S is a similarity transform, H_A is an affine transform, and H_P is a projective transformation of a special form with an identity block and zeros; the combination of the three gives the general projective transformation H, the highest level of the hierarchy that we saw in the previous slide. Here A is a non-singular matrix which can be further written as A = s R K + t v^T, where s is the scale, R is the rotation matrix, K is an upper-triangular matrix, t is a translation vector, and v is a 2-by-1 vector. This decomposition is valid provided that v is not equal to 0, and it is unique if s is chosen to be positive. We will see how this decomposition preserves the geometric properties of the line at infinity and the circular points when we talk about circular points and the absolute conic in lecture 3.

Now let's move on to projective geometry in 1D space. We saw earlier that in 2D a point is a ray intersecting a plane, with the homogeneous representation coming from the fact that the intersections with all parallel planes represent the same point, while a line corresponds to a plane intersecting the image plane in a line. This gives the intuition that in homogeneous coordinates a point becomes a ray and a line becomes a plane. We can think of the 1D case in the same way: in 2D we looked at all the entities living on a plane, so in the projective 1D space we look at all the entities living on a line, and on a line only points (and the line itself) can live; hence projective geometry in 1D is about points on a line. Instead of a 3-vector we now have a 2-vector (x1, x2)^T as the homogeneous coordinates of a point, and when x2 = 0 this is the ideal point on the line.
It is the point at infinity of the 1D space. Similar to the 2D case, a projective transformation of a point is represented by a 2-by-2 homogeneous matrix: x_bar' = H_2x2 x_bar. What this means is that we are looking at how a point living on one line transforms onto another line, where it becomes x_bar', under the projective transformation H_2x2. The matrix H_2x2 has four elements, and since the scale does not matter there are a total of three degrees of freedom; hence this H can be computed from three points. In 2D projective geometry we said that straightness has to be preserved under a projective transformation; in the 1D space, the property that must be preserved after a projective transformation is the cross ratio. The cross ratio is the basic projective invariant of the 1D projective space, and it is defined as follows: given four points x_bar_i in the 1D projective space, the cross ratio is

cross(x_bar_1, x_bar_2; x_bar_3, x_bar_4) = (|x_bar_1 x_bar_2| |x_bar_3 x_bar_4|) / (|x_bar_1 x_bar_3| |x_bar_2 x_bar_4|),

where |x_bar_i x_bar_j| is the determinant of the 2-by-2 matrix with columns x_bar_i and x_bar_j, computed as the cross multiplication of the first element of one point with the second element of the other minus the reverse. Each determinant is a number, so the whole expression ends up being a scalar: this is the cross ratio.
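A quick numerical check of the cross-ratio invariance (the four sample points and the random H are arbitrary choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def det2(a, b):
    """|x_i x_j|: determinant of the 2x2 matrix with columns a and b."""
    return a[0] * b[1] - b[0] * a[1]

def cross_ratio(x1, x2, x3, x4):
    return (det2(x1, x2) * det2(x3, x4)) / (det2(x1, x3) * det2(x2, x4))

# Four distinct points on the projective line, in homogeneous coordinates (x, 1)
pts = [np.array([v, 1.0]) for v in (0.0, 1.0, 3.0, 7.0)]

# A random non-singular 1D projective transformation (2x2 homogeneous, 3 dof)
H = rng.normal(size=(2, 2))
assert abs(np.linalg.det(H)) > 1e-6

before = cross_ratio(*pts)
after = cross_ratio(*[H @ p for p in pts])
# Each determinant picks up a factor det(H), which cancels in the ratio
print(np.isclose(before, after))  # True
```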
The cross ratio is preserved under projective transformation: if the four points undergo a projective transformation H_2x2, the cross ratio computed from x_bar_1', x_bar_2', x_bar_3', x_bar_4' will be the same number. Geometrically, the illustration is this: in the 2D space a point is a ray intersecting a plane, but in the 1D projective space it is a ray intersecting a line, with points represented as 2-by-1 vectors. When the points undergo a projective transformation H_2x2, we are transforming one line into another line, and the cross ratio after this transformation should be equivalent. I leave it to you as an exercise to prove that this is true; basically, substitute x_bar_i' = H x_bar_i back into the cross-ratio equation and show that the contributions of H_2x2 cancel each other out, hence the numbers become the same.

There is also a duality of points and lines in the projective 1D space, and we call the dual configuration concurrent lines: the configuration of concurrent lines is dual to that of collinear points, and it fulfills the cross-ratio property. What this means is that if I have a set of collinear points lying on a line, then the lines passing through the individual points and intersecting at a common point of convergence, together with the four collinear points where they intersect any other line, define the same cross ratio; we call this set of lines concurrent lines. A further property is that any coplanar points x_i, for example four points x_1 to x_4 living on the plane of this slide, once joined by lines converging at a projection center c, will intersect any line at four collinear points x_bar_1 to x_bar_4 which also fulfill the cross-ratio property. This can be thought of as the projection of the 2D space onto a 1D space, and in particular the line here represents the one-dimensional image of the projection; the cross ratio of the four collinear points is satisfied as well.

In summary, we have looked at how to explain the difference between Euclidean and projective geometry; we used homogeneous coordinates to represent points, lines and conics in the projective space; we described the duality relations between lines and points and between conics and dual conics; we looked at how the hierarchy of transformations applies to points, lines and conics; and we also looked at the properties of 1D projective geometry. Thank you.
3D Computer Vision (National University of Singapore)
Lecture 4, Part 1: Robust Homography Estimation
Hello everyone, welcome to this lecture on 3D computer vision. Today we are going to talk about homography and robust estimation. By the end of today's lecture you should be able to show the existence of the homography in two particular cases, explain the difference between the algebraic, geometric and Sampson errors and apply them to homography estimation, and finally we will look at the RANSAC algorithm for robust estimation. Of course I did not invent any of today's material: most of the slides and content come from the textbook by Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, in particular chapters 3 and 4, and some material comes from the textbook by Ma et al., An Invitation to 3-D Vision, in particular chapter 5.3. I strongly encourage every one of you to take a look at these two textbooks after today's lecture.

We saw in lecture 1 that for central projection there is a series of transformations that maps a point from one plane to another plane; essentially, we looked at transformations that map a point in the P^2 space to another point in the P^2 space, and we denoted this transformation as H. It could be a projective transformation, an affine transformation, a similarity transformation, or a rigid transformation in the Euclidean space, and it is represented by a linear mapping of homogeneous coordinates, x' = Hx, where H is the linear mapping that takes a point x in P^2 to another point x' in P^2. Today we will look at this transformation in its most general form, the projective transformation, which we will also call the homography, and in particular at how to estimate it using a four-point RANSAC algorithm.

But before we look at how to estimate the homography from four point correspondences in a pair of images, let's first look at the existence of the projective homography. There are two cases in particular in which the projective homography exists, given that it must be a transformation from a point in P^2 to another point in P^2. The first case is a planar scene. Suppose I have a set of 3D points in the world, denoted by circles, and call one of these points X. The projective homography exists only when all the points that we see in the 3D world lie on a plane, which we call pi. When all the points lie on the plane pi, their projections into two different views, image 1 from camera 1 and image 2 from camera 2, give corresponding points x1 and x2, the reprojections of the 3D point X into image 1 and image 2, and there exists a homography H that directly links the two corresponding points: x2 = H x1. So when the 3D points lie on a plane, there is a one-to-one correspondence between their reprojections in the two images, directly related by H. This is the first case in which the projective homography relating correspondences between two images exists: the points must all lie on a plane in the real 3D world. Now let's derive this relationship.
Suppose that X1 and X2 denote the same 3D point X expressed in the first and second camera coordinate frames: X1 is the coordinate of X with respect to the frame attached to camera 1, and X2 is the same point expressed in the frame of camera 2. From the rigid transformation we know that X2 = R X1 + t, where R is the rotation from frame 1 to frame 2 and t is the translation from frame 1 to frame 2. We further denote by n the 3-by-1 unit normal vector of the plane pi with respect to camera frame C1 (a unit vector, so the norm of n equals 1), and by d the perpendicular distance from the plane to C1. Hence the equation of the plane can be written in Cartesian form as n^T X1 = d, that is, n1 X + n2 Y + n3 Z = d, and moving d to the left-hand side gives n^T X1 / d = 1.

What is interesting is that we now have two equations: (1) X2 = R X1 + t and (2) n^T X1 / d = 1. Since equation (2) equals 1, I can multiply it into any term of equation (1) and the relation will still hold; so I multiply t by n^T X1 / d, giving X2 = R X1 + t n^T X1 / d, and factorizing out X1 we get X2 = (R + t n^T / d) X1. We also know that lambda_1 x1 = X1: the image point x1 and the 3D point X lie on the same ray, so scaling the vector x1 by some scalar lambda_1 reaches the 3D point. Similarly, in the second view there is a scale lambda_2 such that lambda_2 x2 = X2 with respect to the frame at C2. Substituting these two relations into the combined equation gives lambda_2 x2 = (R + t n^T / d) lambda_1 x1. Combining the two lambdas into lambda = lambda_2 / lambda_1 and ignoring the scale, we finally obtain x2 = H x1 with H = R + t n^T / d.
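The derivation can be verified numerically: construct H = R + t n^T / d from a pose and a plane, and check that the normalized image points of any point on the plane satisfy x2 ~ H x1 up to scale. The pose, plane and sample points below are made-up example values, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up relative pose from camera frame 1 to camera frame 2
a = 0.3
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([0.5, -0.2, 0.1])

# Plane n^T X = d in frame 1: unit normal n, perpendicular distance d
n = np.array([0.0, 0.0, 1.0])
d = 4.0

# The planar homography derived above
H = R + np.outer(t, n) / d

ok = True
for _ in range(5):
    # A 3D point on the plane (here simply Z = d in frame 1)
    X1 = np.array([rng.uniform(-1, 1), rng.uniform(-1, 1), d])
    X2 = R @ X1 + t                  # same point expressed in frame 2
    x1 = X1 / X1[2]                  # normalized image point in view 1
    x2 = X2 / X2[2]                  # normalized image point in view 2
    x2_pred = H @ x1
    x2_pred = x2_pred / x2_pred[2]   # remove the free scale lambda
    ok = ok and np.allclose(x2, x2_pred)
print(ok)  # True: x2 ~ H x1 for every point on the plane
```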
Up to scale, then, x2 = H x1: this is the projective transformation equation we saw earlier, mapping a point in the P^2 space to another point in the P^2 space, and we can see that H = R + t n^T / d is expressed in terms of the rotation R between the two camera frames C1 and C2, the translation t between them, the normal vector n of the 3D plane, and the perpendicular distance d between the plane and camera frame C1.

The second case in which a homography exists relating points x1 and x2 between two images is when there is a plane at infinity: the actual 3D point producing x1 and x2 lies at infinity, on pi_infinity. What this means in practice is that the point projected onto the image lies in a very far away scene; for example, in a satellite image of the Earth, all the objects on the ground are so far away that the ground effectively lies at the plane at infinity. Taking the homography we derived, H = R + t n^T / d, which in the finite case required the scene to lie on a plane, the plane at infinity simply means that the distance d between camera frame C1 and the plane approaches infinity, so the plane becomes the infinite plane. Substituting this limit into the equation, the term t n^T / d tends to zero and we are left with the rotation: H = R. A very far away scene therefore gives points related by a pure-rotation homography, and likewise if my camera undergoes a pure rotation about its center, the points seen by the two frames are also related by a homography; the two situations have the same effect, as we can see from the limit.

The next question is: given two images with a set of correspondences for which we know the homography exists, how do we compute the homography from these point correspondences? For example, suppose we have four point correspondences, where we know that x1 corresponds to x1', that is, they are the same 3D point seen from two viewpoints related by some rotation and translation. If we can identify these correspondences, how can we compute the homography that relates x' and x in the two views, and what is the minimum number of point correspondences needed to solve for it? The answer depends on the number of degrees of freedom in the homography matrix H and on the
number of constraints each point correspondences give and we know in lecture one that a homography uh consists of nine entries with one less two for the scale this means that a homography has eight degrees of freedom altogether and we will see that each point correspondences of x i and x i prime in the two image over here each point corresponds at x i as well as x i prime these are correspondence they give us two constraints and as a result all together we will need uh four point correspondences because each point give us two constraints and we have all together eight unknowns in our homography matrix to solve so uh four correspondences will give eight constraints and eight constraints eight unknowns over here we will be able to solve for the full homography matrix up to scale and it will be seen that if exactly four point correspondences are given then an exact solution for the metric of h is possible and this is what we known as the minimal solution which is important for ransack loops we'll see that the more number of points that we use increasing number of points that we use or correspondences that is needed in order to compute the a certain entity such as the homography matrix then the number of ransack loops that is needed for robust estimation will increase exponentially and this is not good and since the points are measured exactly what this means is that the points here an image is actually a sensor imaging device which is subjected to noise this means that every pixel the projection of the real 3d point or 3d structure onto the image onto the photo sensor it's usually corrupted by noise so it's not that precise there might be some noise error here and there and uh so these point correspondences that we obtain and it might not also be the exact correspondence it might be corrupted with some noise as well so the in terms of the location and under this kind of situation we'll see that we'll use the least square estimates or the least square solution to 
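As a quick numerical sketch of the two homography cases above, here is a hypothetical example. Note that the sign convention H = R + t nᵀ / d is an assumption for illustration; the lecture only states that H depends on R, t, n, and d, and some derivations use a minus sign on the translation term.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis, standing in for the rotation R between C1 and C2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def planar_homography(R, t, n, d):
    # Homography induced by a plane with unit normal n at perpendicular
    # distance d from camera frame C1 (assumed convention: H = R + t n^T / d).
    return R + np.outer(t, n) / d

R = rot_z(0.1)                    # rotation between the two camera frames
t = np.array([0.5, 0.0, 0.2])     # translation between the two camera frames
n = np.array([0.0, 0.0, 1.0])     # plane normal expressed in C1

H_near = planar_homography(R, t, n, d=2.0)     # finite planar scene
H_far  = planar_homography(R, t, n, d=1e12)    # plane effectively at infinity
# As d -> infinity the t n^T / d term tends to zero, so H_far -> R:
# a very distant scene induces the same homography as a pure rotation.
```

With `d=1e12` the difference between `H_far` and `R` is negligibly small, which mirrors the limit argument in the lecture.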
Now let us begin by looking at a simple linear algorithm for determining the projective homography H given a set of four point correspondences x_i and x_i', for i = 1, 2, 3, 4. We start from the familiar projective transformation equation that relates corresponding points in two views, x_i' = H x_i, a mapping from the P2 space to another P2 space. The unknowns are the nine entries of H, of which only eight matter because the scale cannot be recovered; x_i and x_i' are the point correspondences that we can obtain from the images, for example with a feature matcher such as the SIFT or SURF algorithm, which I will not cover in detail in this module.

Our objective is to reformulate H x_i = x_i', with its unknowns in H and its known correspondences x_i for i = 1, ..., 4, into a homogeneous linear equation A h = 0, where h is a 9 x 1 vector consisting of all the unknown elements of H, and A contains coefficients built from the correspondences observed in the images. We do this by taking the cross product of both sides with x_i' itself: x_i' x (H x_i) = x_i' x x_i' = 0, since the cross product of any vector with itself is zero. Writing H in terms of its rows h1^T, h2^T, h3^T, the product H x_i becomes the stack of h1^T x_i, h2^T x_i, h3^T x_i, and x_i' is the 3 x 1 homogeneous coordinate observed in the second image. Substituting these two terms into the cross product gives an explicit set of equations, and factoring the unknown entries of H out of the cross-product matrix puts everything in linear form: a known coefficient matrix multiplied by the 9 x 1 vector of unknowns.

What is interesting is that this 3 x 9 coefficient matrix has only two linearly independent rows: taking x_i' times the first row plus y_i' times the second row reproduces the third row up to scale. Hence each point correspondence gives us exactly two constraints, producing a 2 x 9 matrix A_i with A_i h = 0, where h = (h1, h2, h3, h4, h5, h6, h7, h8, h9)^T stacks the entries of the 3 x 3 homography matrix. Since h has eight degrees of freedom that matter (the scale cannot be found) and each correspondence gives two constraints, altogether we need four point correspondences: four times two gives eight constraints for the eight unknowns, which lets us solve for the 9 x 1 vector up to scale. Note that in the homogeneous coordinate x_i = (x_i, y_i, w_i)^T, w_i is normally chosen as 1: we divide through by w_i to get (x_i / w_i, y_i / w_i, 1)^T. With four correspondences we can stack A_1, A_2, A_3, A_4 together, and the stacked system still satisfies A h = 0; in general, stacking i correspondences gives a 2i x 9 matrix, which we collectively call A. Four correspondences is only the minimal requirement for solving this null-space equation; we will see that stacking more than four correspondences yields a least-squares solution that is robust to noise.
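As a sketch of this construction, the 2 x 9 block A_i for a single correspondence can be assembled as follows, using the standard expansion of x' x (H x) = 0 with the third, linearly dependent row dropped:

```python
import numpy as np

def correspondence_rows(x, x_prime):
    """Build the 2x9 block A_i for one correspondence x <-> x'.

    x, x_prime: homogeneous 3-vectors (x, y, w).
    The two rows come from the cross product x' x (H x) = 0; the third
    row of the full 3x9 matrix is a linear combination of these two,
    so only these two constraints are kept.
    """
    X = np.asarray(x, dtype=float)
    xp, yp, wp = np.asarray(x_prime, dtype=float)
    zeros = np.zeros(3)
    return np.vstack([
        np.concatenate([zeros, -wp * X, yp * X]),
        np.concatenate([wp * X, zeros, -xp * X]),
    ])
```

For a correct H and an exact correspondence, `correspondence_rows(x, xp) @ H.flatten()` gives the zero vector, which is exactly the constraint A_i h = 0.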
In real-life images, as I mentioned earlier, the measured point correspondences are usually corrupted with noise: the points in the two images are captured by real photosensors, which are subject to noise, and the algorithms used to extract the correspondences introduce model noise of their own, so each pair of corresponding homogeneous coordinates may be slightly off. What this means is that if we put these correspondences x_i and x_i' into our equation A h = 0 and solve with only the minimal four correspondences, we get a less accurate estimate, and in fact an exact solution of A h = 0 does not exist: even the best h we can find, plugged back into the equation with the four correspondences that form A, will not give exactly zero, because all the correspondences are corrupted by some sort of noise.

Instead we seek to minimize ||A h|| over h, subject to the constraint ||h|| = 1. Since the correspondences are noisy, we can never find an h that makes A h exactly zero; we can only get it as close to zero as possible, so we minimize the norm. The constraint ||h|| = 1 is needed for two reasons: without it, the minimizer would simply be the zero vector, a trivial solution that we do not want for our homography matrix; and since we do not know the scale of the homography anyway, we may as well normalize the scale to 1. This is the least-squares solution for h, and it can be obtained from the right null space of A, since A h = 0 is the familiar homogeneous linear equation from linear algebra: computing the right null space of the 2i x 9 matrix A gives us the null-space vector h that fulfills this optimization cost. We can make use of the singular value decomposition to compute it, and the solution that minimizes the cost is given by the right singular vector corresponding to the smallest singular value in the singular value decomposition of the 2i x 9 matrix
of A. Taking the SVD of the 2i x 9 matrix A, we get the left orthogonal matrix, which we denote U, a square matrix of size 2i x 2i whose columns are the left singular vectors; the singular value matrix of size 2i x 9, since A is in general non-square; and the right orthogonal matrix, denoted V, of size 9 x 9. In the singular value matrix, only the diagonal of the first nine rows contains non-zero entries, namely the nine singular values sigma_1 to sigma_9 of the matrix A; the subsequent 2i - 9 rows, as well as the off-diagonal entries of the first nine rows, are all zero. Multiplying these three matrices together, U Sigma V^T, reproduces the 2i x 9 matrix A. In general, for an m x n matrix with m > n — a tall, thin matrix — the SVD gives an m x m matrix of left singular vectors, an m x n singular value matrix whose first n diagonal entries are the singular values of the matrix with the remaining m - n rows padded with zeros, and an n x n matrix of right singular vectors, so that A = U Sigma V^T.

Now consider the case where A is not corrupted with noise. A is formed from the observed correspondences x_i and x_i', and if these are all perfect correspondences, an exact solution of A h = 0 exists. For that, the rank of A must be exactly 8: in our homography estimation case, A is 2i x 9 and h is a 9 x 1 vector with only eight degrees of freedom, so an exact solution requires a one-dimensional null space, which means rank 8, with the smallest singular value equal to zero. In general, however, this does not hold, because the correspondences are corrupted by noisy measurements: the smallest singular values are then no longer equal to zero, and the rank of A is not exactly 8.

Since U and V are orthogonal square matrices and Sigma is a diagonal matrix, we have A = U Sigma V^T from the SVD of A. We can bring V over to the left-hand side by taking the transpose, because for an orthogonal matrix the inverse equals the transpose of the matrix itself, which gives the relation A V = U Sigma. Writing V = (v_1, v_2, ..., v_n) and U = (u_1, u_2, ..., u_m) column by column, this becomes A v_i = sigma_i u_i: taking one column of V and multiplying it by A equals the corresponding column of U scaled by the singular value sigma_i, and this holds for i = 1 all the way to i = n. What is interesting is that ||A v_i|| = sigma_i, since u_i has unit norm, so ||A v_i|| is at its minimum when the singular value sigma_i is at its minimum, which is the smallest singular value sigma_n. When we compute the SVD, any numerical package you are using — Python or MATLAB, for example — always arranges the singular values from the largest to the smallest, so the column of V corresponding to the smallest singular value minimizes ||A v||. Hence choosing h = v_n gives the solution to the least-squares cost we are after: it minimizes ||A h|| subject to the constraint that ||h|| = 1.
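The argument above — A v_i = sigma_i u_i, so the right singular vector of the smallest singular value minimizes ||A v|| among unit vectors — can be verified numerically on a random stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 9))   # stands in for a stacked 2i x 9 DLT matrix

# numpy returns V transposed: the rows of Vt are the right singular vectors,
# and the singular values in S are sorted from largest to smallest.
U, S, Vt = np.linalg.svd(A)

h = Vt[-1]                                             # candidate solution v_n
norms = [np.linalg.norm(A @ Vt[i]) for i in range(9)]  # ||A v_i|| for every v_i
```

Here `Vt[-1]` spans the exact null space of the 8 x 9 matrix, so ||A h|| is essentially zero — the noise-free, rank-8 case; with noisy data the smallest norm is merely small rather than zero.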
The constraint ||h|| = 1 is also guaranteed automatically, because v_n comes from a column of the orthogonal matrix V, and every column and row of an orthogonal matrix has unit norm. Hence the solution to this least-squares problem is simply given by setting h = v_n, the right singular vector corresponding to the smallest singular value.

Here is a summary of the direct linear transformation (DLT) algorithm for finding the homography from four or more point correspondences. The objective: given n >= 4 2D-to-2D point correspondences denoted x_i and x_i' — that is, two images I and I' with matched points x_1 and x_1', x_2 and x_2', and so on — determine the 2D homography H such that x_i' = H x_i. The algorithm is very simple. For each point correspondence, compute A_i from the equation A_i h = 0, which we derived using the cross product, taking only two rows, since only two rows of the cross-product matrix are linearly independent. Then assemble the n 2 x 9 matrices A_i into a 2n x 9 matrix A by stacking A_1, A_2, all the way to A_n, together forming A h = 0. Finally, take the SVD of A: because the point correspondences are corrupted with noise, the equation cannot be satisfied exactly, so instead we minimize ||A h|| over h subject to the constraint ||h|| = 1, which is easily done by computing the SVD — the left orthogonal matrix, the singular value matrix, and the right orthogonal matrix — and taking as the solution the right singular vector corresponding to the smallest singular value, which I denote v_n. Since h is the 9 x 1 vector consisting of all the elements of the homography matrix, we then rearrange this 9 x 1 vector to recover our 3 x 3 homography matrix.

There is a degeneracy case in which the formulation we have seen fails. Starting from A h = 0, a right null space of A — and hence a solution for h — exists only if the rank of A equals 8, because h is a 9 x 1 vector with eight degrees of freedom. The rank of A drops below 8 if three of the minimal four point correspondences are collinear: if three of the points lie on a straight line and one lies outside it, those three points become linearly dependent on each other, because they all lie on a straight line.
Given any two points on a line we can compute a third point on it, so the three collinear points are not linearly independent, and hence the overall rank of A drops: the total number of constraints, given by the number of linearly independent rows of A, is no longer eight, and we cannot solve the null space of A for h. We call this a critical configuration, or degeneracy case, in which the solution for H simply does not exist, and it is therefore important to check that the selected correspondence points are not in a critical configuration. The collinearity of three points is easily checked: given any two points x1 and x2, we can compute the line through them as the cross product l = x1 x x2, and a third point x3 lies on that line — meaning the three points are collinear — exactly when l^T x3 = 0; if l^T x3 is not zero, the three points are not collinear.

We have now seen the direct linear transformation algorithm, which formulates the estimation of the homography as the homogeneous linear equation A h = 0, using the known correspondences to form A and solving for the unknown 9 x 1 vector h. But there is a problem in the raw observations: they produce an ill-conditioned matrix A, which will lead us to a wrong solution.
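Putting the DLT summary and the degeneracy check together, here is a compact sketch. This is the unnormalized DLT on exact data; on real pixel measurements the data normalization described later in the lecture should be applied first.

```python
import numpy as np

def collinear(x1, x2, x3, tol=1e-9):
    """True if three homogeneous points lie on one line: l = x1 x x2, l^T x3 = 0."""
    l = np.cross(np.asarray(x1, float), np.asarray(x2, float))
    return abs(l @ np.asarray(x3, float)) < tol

def dlt_homography(pts, pts_prime):
    """Unnormalized DLT from n >= 4 correspondences ((n, 2) arrays).

    Raises if any three points in the first image are collinear,
    i.e. the critical configuration where rank(A) drops below 8.
    """
    X = np.column_stack([pts, np.ones(len(pts))])   # homogeneous points, w = 1
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if collinear(X[i], X[j], X[k]):
                    raise ValueError("degenerate configuration: collinear points")
    rows = []
    for Xi, (xp, yp) in zip(X, pts_prime):
        # Two linearly independent rows of x' x (H x) = 0, with w' = 1.
        rows.append(np.concatenate([np.zeros(3), -Xi, yp * Xi]))
        rows.append(np.concatenate([Xi, np.zeros(3), -xp * Xi]))
    _, _, Vt = np.linalg.svd(np.array(rows))        # A is the stacked 2n x 9 matrix
    return Vt[-1].reshape(3, 3)                     # v_n rearranged into H
```

With exact data and a non-degenerate point set this recovers H up to scale; the result can be normalized afterwards, for example by dividing by its bottom-right entry.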
Here is an illustration of the problem. Suppose a point in the first image has homogeneous coordinates (x, y, w) = (100, 100, 1), and the corresponding point (x', y', w') is likewise on the order of (100, 100, 1), with the scale w set to 1. Forming the matrix A according to what we derived in the previous slides, the resulting 2 x 9 block A_i has some columns on the order of hundreds, some on the order of ones, and some — products of two coordinates — on the order of 10^4. This difference in orders of magnitude across the columns causes bad behavior in the SVD; in linear algebra we call such a matrix ill-conditioned, and it leads to a very bad SVD solution. I will not prove this, as it is outside the scope of what we are covering this semester, but I will illustrate it with an example. Suppose five points are selected in one image, and each point is corrupted with Gaussian noise of 0.1 pixel standard deviation, drawn independently in each of 100 trials. Each noisy configuration is used to compute an H that transfers one of the points into the second image, so over the 100 trials, with 100 different randomly generated noise samples, the transferred point ends up in 100 different locations. If we use unnormalized data to compute the homography — so that the matrix A formed from it has columns of different orders of magnitude, as in the example above — the transfer of each point over the 100 trials shows a very large error distribution, because A is ill-conditioned. Compared with that, a well-conditioned A gives transfers with a very small error over the 100 trials. Therefore, to mitigate this problem, we have to turn A from an ill-conditioned matrix into a well-conditioned matrix, and this can be done by normalization. Let us look at how to do the normalization to achieve this.

Data normalization is carried out by a simple transformation of the points, as follows. First, all the points are translated so that their centroid is at the origin. We have seen that the magnitude of the points matters: a point at (100, 100, 1) versus another at (1, 1, 1) leads to coefficients of very different scale, and hence to the different orders of magnitude in our formulation of the A matrix. So we first center all the points with respect to their centroid as the reference point. Second, we prevent the points from being spread too far from each other by applying an overall scale
such that the average distance of all the points from the centroid is equal to the square root of 2, and this transformation is applied to each of the two images independently. After normalization the average point is roughly (1, 1, 1)^T — which is why the target distance is sqrt(2) — and as a result there are no large magnitude differences across the columns of the linear equation A h = 0.

Here is a summary of the normalized direct linear transform algorithm. It should be noted that data normalization is an essential step of the DLT algorithm and must be done for us to get the correct result: this step is compulsory, not optional. The objective, as usual: given n >= 4 2D-to-2D point correspondences, determine the 2D homography H such that x_i' = H x_i, so that we can transfer a point from one image to the other. We start by normalizing the points: we apply a transformation T_norm to every point x_i in the first image, and another transformation T'_norm to every point x_i' in the second image. Notice that T_norm and T'_norm are not the same: we compute the transformation of each image individually, because the centroid and scale of the point distributions are not the same in the two images, the points having already undergone a homography transformation between the views. The transformation is a 3 x 3 matrix built from the centroid and a scale s = sqrt(2) / d_bar, where d_bar is the mean distance of all points from the centroid. Concretely, given all the points in an image, the first thing to do is to compute the centroid, which I denote (c_x, c_y); then compute all the distances with respect to this centroid, take the average to get d_bar, and compute the scale as s = sqrt(2) / d_bar. Applying this transformation ensures that all the points are centered on the centroid, with their mean distance from it equal to sqrt(2). This gives the new set of correspondences x~_i and x~_i', which we use to compute a homography H~ with the DLT steps from before. Finally, we have to denormalize the solution: we undo the two transformations applied to H~, which was computed from the normalized points, to obtain the final solution H = T'_norm^{-1} H~ T_norm. And please remember that this step is not optional — it is compulsory; otherwise we end up with an ill-conditioned matrix A, and that will lead us to a bad solution for the homography.
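The full normalized DLT pipeline can be sketched as follows. The denormalization step H = T'^{-1} H~ T follows from substituting x~ = T x and x~' = T' x' into x~' = H~ x~.

```python
import numpy as np

def normalization_transform(pts):
    """Similarity transform T: centroid -> origin, mean distance -> sqrt(2).

    pts: (n, 2) array of image points.
    """
    c = pts.mean(axis=0)                            # centroid (cx, cy)
    d_bar = np.linalg.norm(pts - c, axis=1).mean()  # mean distance to centroid
    s = np.sqrt(2) / d_bar                          # overall scale
    return np.array([[s, 0.0, -s * c[0]],
                     [0.0, s, -s * c[1]],
                     [0.0, 0.0, 1.0]])

def normalized_dlt(pts, pts_prime):
    """Normalized DLT: estimate H from n >= 4 correspondences ((n, 2) arrays)."""
    T, Tp = normalization_transform(pts), normalization_transform(pts_prime)
    homog = lambda p: np.column_stack([p, np.ones(len(p))])
    x_n = homog(pts) @ T.T                  # normalized points x~_i
    xp_n = homog(pts_prime) @ Tp.T          # normalized points x~_i'
    rows = []
    for X, (u, v, w) in zip(x_n, xp_n):
        # Two independent rows of the cross-product constraint per correspondence.
        rows.append(np.concatenate([np.zeros(3), -w * X, v * X]))
        rows.append(np.concatenate([w * X, np.zeros(3), -u * X]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H_tilde = Vt[-1].reshape(3, 3)          # DLT solution on normalized data
    return np.linalg.inv(Tp) @ H_tilde @ T  # denormalize: H = T'^{-1} H~ T
```

On exact data this recovers H up to scale, like the unnormalized version; its advantage shows up on noisy pixel-scale coordinates, where the normalized coefficient matrix is well conditioned.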