MIT 18.100A Real Analysis, Fall 2020
Lecture 24: Uniform Convergence, the Weierstrass M-Test, and Interchanging Limits
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So let's continue our discussion of sequences of functions. So we had two different notions of convergence of sequences of functions. So the first notion was pointwise convergence. So we have a sequence of functions, f n from some set S to R, another function, fixed function f from S to R. And then we say f n converges to f pointwise if for every x in S, the sequence of real numbers converges to f of x. So if for all x in S, the limit as n goes to infinity of f n of x equals f of x. So I take an x out of S, stick it into f n. So now I get a sequence of real numbers. And I should get f of x as n goes to infinity. And then we introduced the stronger notion of convergence of functions, which was the following-- so we have a sequence going from S to R, a function from S to R, then we said f n converges to f uniformly on S if somehow, across the entire set, f n gets close to f. So here, this statement says if for each fixed x, eventually, f n of x-- so f n evaluated at that point-- is close to f of x, f evaluated at that point. But uniform convergence says f n is close to f across the entire set. So if for all epsilon positive, there exists a natural number M so that for all n bigger than or equal to M, for all x in S, f n of x minus f of x is less than epsilon. Now, if you write out what this means in terms of epsilons and M's-- remember, this is a limit, so this is a limit of sequences, so this means something in terms of epsilons and M's-- this would say for all x in S, for all epsilon positive, there exists an M, and so on. So the x appears at the front of this definition, while here for uniform convergence, it appears at the end. That is not just a kind of meaningless difference in the way you write the definition, meaning this is a stronger statement than if the x is appearing here. So what am I going on about? So first off, we proved last time that if I have a sequence of functions, again from some subset S to R, converging to f uniformly, then this implies f n converges to f pointwise. But now, what I'm going to prove is that in fact, uniform convergence is something stronger than pointwise convergence. In other words, this is a one-way street. Pointwise convergence does not imply uniform convergence. And we're just going to look at a very specific example, which I'm going to state as a theorem, which is the following-- let f n of x be x to the n. And now we're looking on the unit interval, [0, 1]. And let f be the function that is 0 if x is in [0, 1) and 1 if x equals 1. So first off, let me recall from last time-- or you can even just check by looking at the form of f n-- that f n converges to f pointwise. If I take x in this interval here, so not equal to 1, then x is strictly less than 1. And if I raise it to a high enough power over and over again, that's converging to 0. So it converges to f of x, 0. Now at 1, I just get 1, and that clearly converges to 1. So f n converges to f pointwise. And so the claim is that, first, for all b between 0 and 1, f n converges to f uniformly on [0, b]. And I guess I could include 0-- that would just be looking at one point and not very interesting-- but f n converges to f uniformly there. And the second is f n does not converge to f uniformly on the whole interval, however. So maybe it's best to again draw this picture that you're supposed to think about when it comes to uniform convergence. So we have the limiting function, f. And then we draw a little epsilon-sized collar around the function f.
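(Here is a minimal numerical sketch, not from the lecture, of the gap between these two modes of convergence for f n of x = x to the n; NumPy and the grid sizes are my own arbitrary choices.)

```python
# Sketch: sup over [0, b] of |f_n(x) - f(x)| for f_n(x) = x**n, where f = 0 on [0, 1).
import numpy as np

def sup_dist(n, b, num=10_000):
    """Approximate the sup over [0, b] of |x**n - 0|."""
    x = np.linspace(0.0, b, num)
    return np.max(np.abs(x ** n))

for n in (5, 50, 500):
    print(n, sup_dist(n, b=0.9), sup_dist(n, b=0.999999))
# The sup over [0, 0.9] is 0.9**n -> 0, matching uniform convergence on [0, b];
# pushing b toward 1 keeps the sup near 1, hinting that convergence
# fails to be uniform on all of [0, 1).
```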
And then uniform convergence-- so this is f-- uniform convergence says that as long as I go far enough out, the graph of f n should be within this little epsilon collar, so that f n is getting close to f uniformly across the entire set. So number 2 will give us a chance to negate this definition of uniform convergence, which like I said, you should always do. So we're actually doing two things here. We're giving an example of uniform convergence, and also a sequence of functions which does not converge uniformly to this function. So we're doing both an example and a non-example, which is the best thing to do for a new definition. So for the proof of 1, let's prove uniform convergence. So let b be in (0, 1). Then the limit as n goes to infinity of b to the n is 0, which implies-- so now I'm getting ahead of myself. Don't write that yet. So now we want to prove uniform convergence of f n to f on-- I didn't finish this statement-- f n to f uniformly on [0, b]. So I have uniform convergence on any smaller interval [0, b] rather than all of [0, 1]. Sorry about that if that looked a little weird. So then b to the n converges to 0 as n goes to infinity. And now we want to prove that f n converges to f uniformly on this interval [0, b]. So let epsilon be positive. We now have to find an M so that f n is close to 0 because f is 0 on such an interval. Now, since b to the n converges to 0, there exists a natural number M such that for all n bigger than or equal to M, b to the n is less than epsilon. Then for all n bigger than or equal to M, and also for all x in [0, b], we get that f n of x minus f of x-- x is less than 1, so when I stick it into f, I just get 0. This is equal to x to the n minus 0, which in absolute value is just x to the n because we're looking at non-negative x. And now x is in this interval [0, b], so it's less than or equal to b. So x to the n is going to be less than or equal to b to the n, and this is less than epsilon. So just as when we looked at uniform continuity of a function, it was a statement like, for all epsilon there exists a delta which depends on, basically, just the epsilon and the function. Now for uniform convergence, it looks kind of like pointwise convergence, except now for every epsilon you can find an M which does not depend on the point x. So for every epsilon, you can find an M depending only on epsilon, and maybe the function little f. But that doesn't depend on x. This M here that I chose depended only on epsilon and b, not the point x, which I have to stick into this here. So now, let's prove number 2. So first off, let's negate the definition so that we know what we're talking about. So f n does not converge to f uniformly on a set S if there exists a bad-- so every "for all" becomes a "there exists." So if there exists some bad epsilon 0 positive so that for all M, a natural number, there exists n bigger than or equal to M, and there exists an x in S so that f n of x minus f of x is bigger than or equal to epsilon 0. So this is the negation. But why should we not be surprised that f n does not converge to f uniformly on [0, 1], if you believe this picture? So let's look at what's going on here. So let me draw, now, the graph of f. And let's say I take epsilon to be 1/4, say. Now this is what my little epsilon neighborhood of f looks like, or epsilon tube, for epsilon equals 1/4, say. It's a tube around 0 up to x equals 1, and then it's a little area around 1. And now if I were to have uniform convergence, then as long as n is very large, f n has to be within this area that I have here.
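For reference, the negation of uniform convergence that this part of the argument uses, written out symbolically:

```latex
\exists\, \varepsilon_0 > 0 \ \forall M \in \mathbb{N} \ \exists\, n \ge M \ \exists\, x \in S :
\quad |f_n(x) - f(x)| \ge \varepsilon_0 .
```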
So in fact, let me shade it in. So for n large, f n has to be within this region that I'm coloring in. But now, what do we know about x to the n? Well, it starts at 0, and ends at 1, and looks something like that, which means here, always, it leaves this epsilon tube around f, this epsilon collar, I guess. f n leaves the shaded area, which is where it's supposed to stay. So I hope this intuitive explanation is clear, and why it shouldn't be too big of a surprise that f n does not converge to f uniformly on [0, 1]. We'll see another reason in a little bit why it's impossible for f n to converge to f uniformly when we talk about the interchange of limits. But just using the definition, we can prove that f n does not converge to f uniformly. So the negation is that there exists a bad epsilon 0 so that we have all of this. So the point is to choose epsilon 0 so that the curve never stays inside this epsilon neighborhood of the function down here, this epsilon neighborhood of just the point x equals 1, y equals 1. So let's choose epsilon 0 to be 1/4. So now we have to prove for all M, there exists an n. So let M be a natural number. Choose n to be M. And choose x to be, let's say, 1/4 raised to the power 1 over M. Now, 1/4 is a number less than 1. So its M-th root is less than 1 and also positive. Then f of x equals 0, and f sub M of this x, which is just 1/4 to the power 1 over M, now raised to the M-th power, is 1/4, which implies that f sub M of x minus f of x is equal to 1/4, which equals epsilon 0. Or I guess if you like, you can write bigger than or equal to epsilon 0. So basically-- there was nothing special about epsilon 0 being 1/4. If you choose anything less than 1, that would do. You can check that. If I chose 1/2 here, I could then choose this point x where f M is far away from f of x to be 1/2 raised to the power 1 over M, just as long as I don't choose epsilon 0 to be equal to 1. So we have that example. And we had another sequence of functions we had looked at last time, which were these functions that look like tents. I don't know why I'm making this axis bigger when it should be the other one. So there's 1, 1 over n, 1 over 2n, then way up here 2n, so at this point. And then it's piecewise linear. So then it's just a straight line down to here. I'm not going to write down exactly the equation for each piece, and then it's 0 from 1 over n to 1. So this is f n. And last time, we proved that f n converges to the function 0 pointwise. But it does not converge uniformly to 0. Again, what's the point? The point here is that if I were to draw a little epsilon collar around the function 0, it would look something like this. And f n would have to be within this little strip for all n sufficiently large. But f n is getting taller and taller, so it always leaves any strip that I put around the function 0. So f n does not converge to 0 uniformly. We can make a proof out of that, though. So f n does not converge to 0 uniformly on-- I should always tell you where I'm talking about-- on [0, 1]. So why not? Well, we can take any epsilon 0, really. So let's choose epsilon 0 to be 1. Let M be a natural number. So we should find an n and an x so that this inequality is satisfied. So let's choose n to be M, and x to be 1 over 2M, this point where f M peaks. Then f sub M of x minus f of x-- f here is just 0. The limiting function is 0, so let me not even put f, let me put this 0. This equals f M of 1 over 2M, which equals 2M.
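A quick numerical check, not from the lecture, of the two witnesses used in these negation arguments:

```python
# For f_n(x) = x**n with epsilon_0 = 1/4: at x = (1/4)**(1/M),
# f_M(x) is exactly 1/4 no matter how large M is.
for M in (1, 10, 100, 1000):
    x = 0.25 ** (1.0 / M)
    print(M, x, x ** M)            # x**M is 0.25 every time

# For the tent functions with epsilon_0 = 1: at the peak x = 1/(2M),
# f_M(x) = 2M, which only grows with M.
for M in (1, 10, 100):
    print(M, 2 * M)                # the peak height 2M is always >= 1
```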
And 2M is certainly bigger than or equal to 1, which is epsilon 0. Now in a little bit, I'll give you a whole host of examples-- it seems like I've only given you maybe one example up to this point-- of functions which converge uniformly to something. But in a minute, I'll give you a very useful test to decide when a series involving functions converges uniformly. But before we get to that, let's revisit these three questions that we asked in the last lecture in the context of power series, but now in this more general setting of convergence of functions. So now what I'm talking about-- although what I'm about to say may sound a bit alarming-- is that we're now in essentially the last week of the class and we're really getting to the first real part of analysis. So I was talking with a professor from Duke one time, and he made this funny observation that somehow math-- and I think this is not just math, but a lot of science-based classes-- is taught so that, if you were to translate it into studying a book, you spend a year studying introductions to books, and then you spend a year studying middle parts of books, and then you spend another year studying the last parts of books. So now here, we're starting to get into the middle part of the book of analysis-- not the textbook, we're at the end of that, but at least in the grand scheme of things-- which is the interchange of limits. You have two limiting processes that you want to interchange. In analysis, first God created the limit, and then man asked, can we interchange limits? And so what do I mean by that? This is not always a thing you can do. And this is at the very heart of analysis: when can we interchange limits. So let me give you the simplest example. Let's say we have the following limit-- so I take the limit as k goes to infinity of the limit as n goes to infinity of n over k, over n over k plus 1. So that's just a sequence depending on n and k. Now, as n goes to infinity, what do I get? This is just equal to-- I can multiply the top and bottom by k to get rid of this one. So then I just get n over n plus k. And remember, k is fixed. And then I take the limit as n goes to infinity. So I get 1. So I'm taking the limit as k goes to infinity of this expression. Now for each k, when I take the limit as n goes to infinity, I get 1. So I get 1 there. Now, what happens if I interchange the limits, and now look at the limit as n goes to infinity of the limit as k goes to infinity of n over k, over n over k plus 1? Well, now as k goes to infinity for each n-- so remember, I'm taking the limit as n goes to infinity of this expression, which is formally this expression with the limits interchanged-- this is equal to 0 over 0 plus 1, which equals 0. And these two do not equal each other. So it's not always the case that you can interchange limits. That's the simple fact of life. And to be able to make certain statements, do certain computations, you need to be able to interchange limits. And what kind of limits? Maybe taking an infinite sum and integrating, or like we stated in the beginning, power series are, in a sense, a certain limit. They're a limit of partial sums-- they're a limit of polynomials. And then let's say differentiation-- that's a limit. So a natural question, like we said last time is, is the derivative of this infinite sum the infinite sum of the derivatives? Those are two limiting processes which we're asking can we interchange.
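A numerical illustration, not from the lecture, of this example; the cutoff 10**6 stands in for "infinity" in each inner limit.

```python
# a(n, k) = (n/k) / (n/k + 1) = n / (n + k)
a = lambda n, k: n / (n + k)

BIG = 10 ** 6
print([round(a(BIG, k), 6) for k in (1, 10, 100)])  # n -> infinity first: all near 1
print([round(a(n, BIG), 6) for n in (1, 10, 100)])  # k -> infinity first: all near 0
```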
So the three questions, again, that I asked in terms of power series, I'm going to now frame again in this general setting. So suppose f n is from S to R, f is from S to R, f n converges to f-- maybe pointwise or uniformly, so let's leave this open ended for now because these are the only two notions we have of convergence-- and f n is continuous for all n. Then is f continuous? So suppose we have a sequence of continuous functions converging to another function f, either pointwise or uniformly. Is the limiting function continuous? And I'll explain in a minute why this is, kind of, asking can we interchange two limits. The second question is: suppose f n from [a, b] to R is differentiable for all n, f is from [a, b] to R, and we have that f n converges to f-- either pointwise or uniformly, we're leaving this open ended for now-- and the derivatives also converge, say to some function g. Then is f differentiable? And is the limit of the derivatives equal to the derivative of the limit? And then the last question-- so this is the third main limiting process we've seen in this class, which is integration-- suppose f n is a sequence of continuous functions, f is a continuous function, and f n converges to f-- again, maybe either pointwise or uniformly, we're leaving this open ended for now. Does the limit of the integrals equal the integral of the limit? Now again, I want you to-- it's really, I guess, more clear here, the fact that we're asking about interchanging two limits. So here, this integration is just a symbol for taking a limiting process, where you take a sequence of partitions of [a, b] with norm converging to 0; the integral from a to b of f n is defined to be this limit of Riemann sums. So this is a limiting process here, although I'm writing it with this simple notation. So I'm asking, can I take this limit as n goes to infinity? And can I interchange it with this limiting process? Now, for continuity, it's maybe a little more hidden what's the interchange of limits that you're really looking at. So it looks more like this. So remember, for 1, we're asking: suppose x is in the set S, x n is a sequence converging to x-- then basically, can we do this? I would like to show that f of x is equal to the limit of f along the sequence. So let me use a different index for the sequence-- let's use a k. So if I compute the limit as k goes to infinity of f of x sub k, I would like to show that equals f of x, if I'm trying to show that the limit is continuous, assuming the functions that are converging to f are continuous. If I look at the limit as k goes to infinity, then I'm tempted to do the following-- that this is equal to the limit as k goes to infinity of the limit as n goes to infinity of f n of x sub k, assuming either pointwise convergence or uniform convergence. And now, if I'm just being a little bit careless, I interchange limits and write this as the limit as n goes to infinity of the limit as k goes to infinity. So here is where I'm asking, can I interchange limits? Again, this is not a proof, this is a discussion of what is the interchange of limits that I'm looking at for question number 1. Question number 3 is more clear. And once I've done this, differentiability will be a little bit clearer on what is the interchange of limits I'm looking at.
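Question 1 as an interchange of limits, in the notation just set up (the middle equality is the questionable step):

```latex
\lim_{k\to\infty} f(x_k)
 = \lim_{k\to\infty}\,\lim_{n\to\infty} f_n(x_k)
 \overset{?}{=} \lim_{n\to\infty}\,\lim_{k\to\infty} f_n(x_k)
 = \lim_{n\to\infty} f_n(x)
 = f(x).
```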
So let's say I was just doing whatever I like and going through this. Then I would interchange these limits. And since each of these f n's is continuous, the limit as k goes to infinity of f n of x sub k, with x sub k converging to x, just gives me f n of x. And since f n, again, converges to f in some sense, this should give me f of x. So what I'm asking is, was all this OK? Because at some point, at this point in particular, I had to interchange a limit. And we've just seen that we can't always do that. We can't always interchange limits and get the same thing. Here, we got 1. When we interchanged the limit, we got 0. So it's not always the case that I can interchange limits. This equality between this limit and the interchanged limit is the big question mark. So that's the whole basis for question number 1. So I hope that discussion was clear enough. Now, the answer to these three questions is, in fact, yes, but only for uniform convergence. If the convergence is uniform-- and this is the mode of convergence we must have-- then the answer to all of these three questions, which we'll state and prove as theorems, is yes. Now the natural question is, what if we have a weaker hypothesis? Namely, what if we only assume pointwise convergence? Is the answer to any of these questions yes? And, well, no. So the answer to all three of these questions is no if we only assume pointwise convergence. So let's go through examples showing that the answer to each of these three questions is no if we only assume pointwise convergence. So let's look at an example showing the answer to 1 is no if we only assume pointwise convergence. And we basically already have it on the board. Take f n of x to be x to the n on [0, 1]. Then for all n, f n is a continuous function on [0, 1], and f n converges to f pointwise. But f itself is not a continuous function. So this provides an example of a sequence of continuous functions which converges pointwise to a function which is not continuous. Again, what I was saying there in the answer is that if I have a sequence of continuous functions which converges uniformly to a function f, then that function is continuous. Here, if we only assume pointwise convergence, we may not get a continuous function in the end. And this is what this example shows-- f n of x equals x to the n. These are all continuous functions. They converge pointwise to a function which is not continuous. I'll say something else in a minute. So let's look at example 2 now. Basically, we take the previous example and kind of integrate it to get an example of a sequence of functions that converges, and whose derivatives converge pointwise. For 2, take f n of x to be x to the n over n on [0, 1]. Then a couple of things-- f n converges to 0, and f n prime converges to the function which is the same function from up here-- so let me actually call it g. And these are pointwise, so I have a sequence of differentiable functions that converge pointwise to something, and the derivatives also converge pointwise to something. Let's call the limit f, which equals 0. But g is not equal to f prime: f is the constant function 0, whose derivative is 0, while g is equal to this function, which is 0 if x is in [0, 1) and 1 if x is equal to 1. So here, we see that the derivative of the limit is not the limit of the derivatives, if we only assume pointwise convergence of the functions. Now, the last example, showing that 3 does not hold if we only assume pointwise convergence, is also on the board. It's the whole point of this one up there.
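A numerical sketch, not from the lecture, of examples 1 and 2 (example 3, the tent functions, is taken up next):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)       # includes the troublesome endpoint x = 1

# Example 1: each f_n(x) = x**n is continuous, but the pointwise limit jumps at 1.
for n in (10, 1000):
    print(n, x ** n)               # interior values -> 0, the last entry stays 1

# Example 2: f_n(x) = x**n / n -> 0, yet f_n'(x) = x**(n-1) -> g with g(1) = 1,
# so the limit of the derivatives is not the derivative of the limit (which is 0).
for n in (10, 1000):
    print(n, np.max(x ** n / n), (x ** (n - 1))[-1])
```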
So let's go on the next board. So here, f n from [0, 1] to R is this tent function. It's just piecewise linear: it goes up from the origin to the point (1 over 2n, 2n), then back down, and it's 0 from 1 over n to 1. And so we know that f n converges to 0 pointwise. Now, the integral of 0 is just 0. Let's look at the integral of f n from 0 to 1. The integral from 0 to 1 of f n-- now if we were all together, this would be the point where I stop and ask if anybody can remember the area of a triangle, even though we're in this advanced analysis class. But I don't get to ask you that. I get to just ask myself that, and I know the answer because I prepared, and only because of that. This is 1/2 base times the height. So remember, the integral is an area. And I could write down the formula of what this function is and actually integrate it out using the fundamental theorem of calculus, but just go with me on this-- the integral of f n is just the area of this triangle that has base starting at 0 and going to 1 over n, and it peaks at 2n. So the base is 1 over n. The height is 2n. So this equals 1 for all n. See, this is why I was making it peak. So the integral from 0 to 1, which is just 1 for all n, does not converge to the integral of the limit, which is 0. So in this case, the limit of the integrals is not the integral of the limit. And what's the reason? Because we only have pointwise convergence. Like I said a minute ago, the answer is yes to all three of these questions if the convergence is uniform. And what these three examples are supposed to show you is the answer is no if I only assume pointwise convergence, the weaker notion of convergence. So let's now prove some theorems. So this first theorem addresses question 1. Suppose f n from S to R, f from S to R, f n is continuous, meaning it's continuous at every point in S, for all n, and f n converges to f uniformly on S. Then the conclusion is that f is continuous. So the proof-- we've done several proofs like this before, but for some reason, at least I see in textbooks for this proof, they always refer to it as an epsilon over 3 argument. And then it's the last time they call it an epsilon over 3 argument. So we have to show f is continuous at every point in S. So let c be a point in S. Let epsilon be positive. We have to find delta so that for all x with x minus c less than delta in absolute value, f of x minus f of c is less than epsilon. And what we're going to do is, we're basically going to replace f by some f n for n large enough. And the fact that we have uniform convergence is what allows us to do that. So let epsilon be positive. Since the f n's converge to f uniformly, there exists a natural number M so that for all n bigger than or equal to M, for all y in S, f n of y minus f of y is less than epsilon over 3. Now, this holds for all n bigger than or equal to capital M, but I really just need one, so let's look at f sub capital M. That's a continuous function. Since f sub capital M is continuous, there exists a delta positive so that if x is within distance delta of c, then I get that f sub M of x minus f sub M of c is less than epsilon over 3. So then this delta here-- so I've stopped doing something which I was doing in all previous lectures, when I would say there exists delta 0, choose delta to be this delta 0; there exists M0, choose M to be this M0.
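The two ingredients just collected, recorded side by side:

```latex
\forall n \ge M,\ \forall y \in S:\ \ |f_n(y) - f(y)| < \tfrac{\varepsilon}{3},
\qquad\qquad
|x - c| < \delta \ \Rightarrow\ |f_M(x) - f_M(c)| < \tfrac{\varepsilon}{3}.
```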
Now I've dropped that delta 0 bookkeeping because it should be clear from the context what I'm choosing delta to be. So then for all x with x minus c less than delta-- I'm now choosing this delta that came from here, from the continuity of f sub capital M, and capital M came from the uniform convergence, what I needed here-- if I look at f of x minus f of c, which I want to show is less than epsilon, and I add and subtract f sub M of x and f sub M of c and use the triangle inequality, this is less than or equal to f of x minus f sub M of x, plus f sub M of x minus f sub M of c, plus f sub M of c minus f of c. So I just added and subtracted f sub M of x and f sub M of c, and then used the triangle inequality. Now, by this estimate here, since I'm looking at a particular n equal to M, I have that the first term is less than epsilon over 3 no matter what y is in this set. So then certainly for x, I will have this is less than epsilon over 3. How I chose delta, remember, was to guarantee that the middle term would be less than epsilon over 3 as long as x minus c is less than delta. And then of course, the last one is less than epsilon over 3, again because of this uniform closeness of f M to f. And therefore, for all absolute value of x minus c less than delta, I have f of x minus f of c is less than epsilon. And that finishes the proof. So if we have uniform convergence, then-- a good theorem should be able to be stated in one sentence, or at least in some simple and easy way to remember it. What this says is the uniform limit of continuous functions is continuous. So the uniform limit of continuous functions is continuous. Next, we'll show, in a sense, that the uniform limit of differentiable functions is differentiable. We're going to do the simplest statement of that, although one can make stronger statements. But in practice, this one suffices, really, at least where it pops up later in life. But before I get to that one, let's do 3, which is-- let's see, what would be the short way of saying that? The integral of the uniform limit is the limit of the integrals, something like that. That would be the short and sweet way of stating the following theorem, which is an answer to 3. So we skipped over 2 for a minute. Suppose f n is a sequence of continuous functions on [a, b]-- because we're going to be talking about integrals, and we've only spoken about integrals for continuous functions-- f is from [a, b] to R, and f n converges to f uniformly. Note that by the previous theorem we just proved, this automatically guarantees that f is a continuous function. So we can ask about the relationship between the integral of f and the limit of the integrals of f n. Then the limit as n goes to infinity of the integral from a to b of f n equals the integral from a to b of f. So the integral of the uniform limit is the limit of the integrals. So this is just a sequence of numbers, and we want to show it converges to this number here. So let's do this the old-fashioned way. Let epsilon be positive. Since f n converges to f uniformly, there exists a natural number M such that for all n bigger than or equal to M, for all x in [a, b], f n of x minus f of x is less than epsilon. Now, basically what we're going to do is integrate this inequality. And remember, unlike differentiation, integration respects inequalities.
And for all n bigger than or equal to M-- meaning I'm choosing my M as this guy for proving this limit-- if I look at the integral from a to b of f n minus the integral from a to b of f, this is, by linearity of the integral, the integral of f n minus f, in absolute value. By the triangle inequality for integrals, which we proved, this is less than or equal to the integral from a to b of the absolute value of f n minus f. And what do we know for all n bigger than or equal to M? This function here, f n of x minus f of x in absolute value, is bounded by epsilon. And therefore, the integral of this side is going to be less than the integral of the right side. So I did this wrong-- let's put a b minus a under the epsilon, so that for all n bigger than or equal to M and all x, f n of x minus f of x is less than epsilon over b minus a. So this is less than the integral from a to b of epsilon over b minus a. And so then I just pick up this number times the length of the interval, which equals epsilon. So for all n bigger than or equal to M, the absolute value of the integral of f sub n minus the integral of f is less than epsilon. So the integral of the uniform limit is the limit of the integrals. So now we'll use this to do the last interchange of limits theorem that I had in mind, number 2. And again, this is kind of the simplest statement, and maybe easiest to prove, that one can make. One can make stronger statements and prove them, but in most cases, this suffices. So suppose f n from [a, b] to R is continuously differentiable for all n, f and g are two functions from [a, b] to R, f n converges to f on [a, b], and the derivatives converge uniformly to this function g. And in fact, I don't even need uniform convergence of the f n's themselves-- we can just say pointwise. I only need uniform convergence of the derivatives, if I'm assuming the sequence is continuously differentiable. Then f is differentiable-- in fact, continuously differentiable, meaning the derivative is continuous-- and the derivative of f is equal to g, meaning that the limit of the derivatives is the derivative of f. So again, what this says-- at least in essence and spirit; it's not exactly what it says-- is that the uniform limit of continuously differentiable functions is differentiable, and the derivative of the limit is the uniform limit of the derivatives. So to prove this, we use the fundamental theorem of calculus. So let x be a point in [a, b]. Then by the fundamental theorem of calculus, if I take f n of x minus f n of a, this is equal to the integral from a to x of f n prime. The integral of the derivative gives me back the function evaluated at the endpoints. And since I have pointwise convergence, these two numbers converge to f of x and f of a respectively. So f of x minus f of a equals the limit as n goes to infinity of f n of x minus f n of a. And this is equal to, just by the previous expression, the limit as n goes to infinity of the integral from a to x of f n prime. Now, these f n primes are converging to this function g uniformly. And therefore, the integrals converge-- that's what we just proved. So this equals the integral of the limit. And therefore-- so we started off with f of x minus f of a, and we showed it's equal to the integral from a to x of g. So f of x equals f of a plus the integral from a to x of g. But now, again by the fundamental theorem of calculus, this implies that f is differentiable.
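The whole argument for question 2, compressed into one chain (the passage to the limit uses pointwise convergence on the left and the integral theorem just proved on the right):

```latex
f_n(x) - f_n(a) = \int_a^x f_n'
\ \xrightarrow{\ n\to\infty\ }\
f(x) - f(a) = \int_a^x g
\ \Rightarrow\ f' = g \ \text{by the fundamental theorem of calculus.}
```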
And the derivative of f, the derivative of f of x, is equal to-- remember, f of a is just a constant-- the derivative of this integral, which is g. I think we have just enough time to prove one more pretty good theorem. Pretty good-- it's very good. And then we'll use it in our next lecture to discuss, or at least conclude, some answers to questions that we originally asked about power series. So this theorem is due to the godfather, Weierstrass. I don't know, did I refer to him as the godfather or Riemann as the godfather? Anyway, I think it was Weierstrass that I referred to as the godfather. He proved the following theorem-- when can you guarantee uniform convergence for at least a series of functions, that is, for the limit of its partial sums? And he proved this basically so he could come up with a whole host of examples of continuous functions which are nowhere differentiable. We kind of gave a proof of this theorem when we looked at that example, when we were talking about differentiability, but we didn't state it as a theorem there. So let's take a sequence of functions f j from S to R, and suppose there exists a sequence M j of positive numbers such that two things hold. I don't know why I just erased that, but-- so we have a sequence of functions from S to R and a sequence of positive numbers so that, a, these numbers dominate the f j's, meaning for all x in S, f j of x in absolute value is less than or equal to M j; and b, they're summable, so the sum from j equals 1 to infinity of M j is convergent. Then the conclusion-- and since I've been using numbers for conclusions and letters for hypotheses, let's go back to that-- then we have two conclusions. The first one is pretty obvious. For all x in S, the series where I just take x and stick it into the f sub j's converges absolutely. And the second is, if I define, now, the function f to be the sum of this series, then the partial sums converge to f uniformly on S as n goes to infinity. So if I have a sequence of functions, each one bounded by some number M sub j, a positive number-- or at least a non-negative number-- and the series involving the M sub j's is convergent, then the series of the f sub j's converges uniformly. That's the conclusion. So in our example of that continuous, nowhere differentiable function, each of these f sub j's was cosine of 160 to the j x, over 4 to the j. So before we prove this theorem, we can combine all of what we've done so far to state the following-- the function f of x equals the sum from j equals 1 to infinity of-- and I could make it something else, but let's say sine of 4 to the j x, over 2 to the j-- is continuous on, let's say, [0, 1] for now. I could say on R. Why is that? Each of these functions-- I'm really butchering this trying to go fast-- each of these functions is bounded by 1 over 2 to the j, and 1 over 2 to the j is summable. So then by this theorem, this function here is the uniform limit of the partial sums. The partial sums are just finite sums involving sines, so they're continuous functions. So f is the uniform limit of continuous functions, and is, therefore, continuous. So we could have used this theorem to prove that the function we looked at back in differentiability was continuous, but we didn't have uniform convergence and all that back then. So we just proved it by hand. But using this, this theorem gives you a big class of sequences of functions which converge uniformly to some function. So let's prove this quickly.
So conclusion 1 follows from the assumptions a, b, and the comparison test. If I take an x in S, f sub j of x is bounded by M sub j, and the M sub j's are summable, so this series converges. So by the comparison test, the sum, with absolute values here, for each fixed x, converges. And that's absolute convergence. Now, let's show that the partial sums converge to the limit uniformly. So let epsilon be positive. Since the sum of the M sub j's converges, there exists a natural number such that the tail is small-- remember, we have this Cauchy criterion, though we don't really need it here-- and I chose a poor letter with M, since M sub j is taken, so let's call this natural number capital N. So the sum from j equals capital N plus 1 to infinity of M sub j is less than epsilon. So the point is that the tail is small. And for all n bigger than or equal to this capital N, for all x in S, if I look at the limit, which is f of x, minus the sum from j equals 1 to n of f sub j of x-- now, f of x is equal to the whole sum, so minus this first part, this is equal to the sum from j equals n plus 1 to infinity of f sub j of x. And by the triangle inequality, which does hold for a convergent series, this is less than or equal to the sum from j equals n plus 1 to infinity of the absolute value of f sub j of x. And by assumption a, this is less than or equal to the sum from j equals n plus 1 to infinity of M sub j. And since n is bigger than or equal to capital N, this is less than or equal to the sum from j equals capital N plus 1 to infinity of M sub j. And this thing-- we chose capital N so that that's less than epsilon. And that's the end. So again, this capital N was chosen depending only on the series. It didn't depend on the point x in S. So I had to go through that kind of quickly because I'm up against the time crunch. But the point of this theorem is that if I have a sequence of functions that are bounded by these numbers M sub j-- I was going so fast I didn't even label it correctly; this is called the Weierstrass M-test, M because you see M here-- if I have a sequence of functions bounded by some numbers M sub j, and those M sub j's are summable, then what it says is that the series of the functions converges uniformly. So we'll stop there.
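A numerical sketch, not from the lecture, of the M-test on the example above, the sum of sin(4^j x)/2^j: the uniform gap between partial sums is controlled by the tail of the sum of 2^(-j), independently of x.

```python
import numpy as np

def partial_sum(x, n):
    """n-th partial sum of sum_{j>=1} sin(4**j * x) / 2**j."""
    return sum(np.sin(4.0 ** j * x) / 2.0 ** j for j in range(1, n + 1))

x = np.linspace(0.0, 1.0, 2001)
deep = partial_sum(x, 25)                   # stand-in for the full sum
for n in (2, 5, 10):
    gap = np.max(np.abs(deep - partial_sum(x, n)))
    tail = sum(2.0 ** -j for j in range(n + 1, 26))
    print(n, gap, tail)                     # gap <= tail, uniformly in x
```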
MIT 18.100A Real Analysis, Fall 2020
Lecture 22: Fundamental Theorem of Calculus, Integration by Parts, and Change of Variable Formula
CASEY RODRIGUEZ: We'll continue our discussion of the Riemann integral. So let me just recall a few bits of notation from last time, and also the main result that we proved at the end of last class. So we had partitions, which are just finite sets. So here, a is less than b, and we're looking at an interval [a, b]. Partitions are just finite subsets of [a, b], containing a and b, written x0 equals a, less than x1, up to xn equals b. And then we also had tags, which were finite sets as well, with xi 1 between x0 and x1, xi 2 between x1 and x2, and so on. And we call these tagged partitions. And we also had the norm of a partition. We think of a partition as breaking the interval [a, b] up into smaller subintervals, x0 up to x1, and so on. I've got to get a different piece of chalk. This one is going psychedelic on me. And the xi's are just points chosen in these smaller subintervals. The norm of a partition is the length of the largest subinterval. And then we had the Riemann sum associated to a tagged partition x, xi, which is the sum from j equals 1 to n of f of xi j, times xj minus xj minus 1. OK? This is just some notation from last time. And then what did we prove? We proved the following theorem, which is the existence of the Riemann integral, which is the following: for every continuous function f on [a, b], there exists a unique number, which we denote the integral from a to b of f, with the following property. If I take any sequence of tagged partitions with norm going to 0-- so the partitions are getting finer and finer-- then the associated Riemann sums, which form a sequence of real numbers, converge to this number, the integral from a to b of f. No matter what sequence of tagged partitions I take with norms converging to 0, the Riemann sums converge to this number, which we call the Riemann integral of f. Next, I think the very last thing we proved was that the Riemann integral is linear. The integral of the sum is the sum of the integrals. And if I multiply a continuous function by a scalar, then the scalar pulls out. OK? So now we're going to discuss some properties of the Riemann integral. So the next is that, in some sense, the area of the union is the sum of the areas, if we think of the integral as being the area underneath the curve. So this next property is the additivity of the Riemann integral, which states the following-- if f is in C of [a, b], and I take a point c between a and b, then the integral from a to b of f is equal to the integral from a to c of f, plus the integral from c to b of f. So to prove it-- again, all we know about these numbers is that they satisfy this property. So what we'll do is, we'll take a sequence of tagged partitions of [a, c] whose Riemann sums converge to the first integral, and a sequence of tagged partitions of [c, b] whose Riemann sums converge to the second integral. But then when I take the union of those tagged partitions, I get a partition of [a, b], whose Riemann sums will approach the integral over [a, b]. That's the basic intuition. And of course, what was I saying a minute ago about the area of the union being the sum of the areas? Well again, like I said, if we think of the Riemann integral as being the area underneath the curve, then this says that the area underneath the curve from a to b is equal to the area underneath the curve from a to c, plus the area underneath the curve from c to b. OK, so let's set up some notation and take two sequences of partitions.
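Before the proof, a quick numerical sketch, not from the lecture, of the existence theorem just recalled; the integrand, the random partitions, and the random tags are my own arbitrary choices.

```python
import numpy as np
rng = np.random.default_rng(0)

def riemann_sum(f, points, tags):
    """Sum of f(tag) * (subinterval length) over a tagged partition."""
    return sum(f(t) * (right - left)
               for left, right, t in zip(points[:-1], points[1:], tags))

f = lambda x: x ** 2                         # integral over [0, 1] is 1/3
for n in (10, 100, 1000):
    pts = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0, 1, n))))
    tags = rng.uniform(pts[:-1], pts[1:])    # one random tag per subinterval
    print(n, riemann_sum(f, pts, tags))      # -> 1/3 as the norm shrinks
```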
So let (y r, eta r) be a sequence of tagged partitions of [a, c] such that the norm of y r goes to 0. So this is a partition of [a, c]. And then we'll take another sequence of tagged partitions (z r, zeta r) of [c, b]. For those watching at home wondering what these letters are, that's the Greek eta, and that's the Greek zeta. So we have a sequence of tagged partitions of [a, c]-- these are the y's and eta's-- and we also have a sequence of tagged partitions of [c, b] such that the norm of these guys goes to 0 as well as r goes to infinity. So now if I have a partition of [a, c] and a partition of [c, b], and I take their union, then I get a partition of [a, b]. So let x r be the union of these guys. It's easy to see that this is now a sequence of tagged partitions of [a, b]. And moreover, what is the norm of this new tagged partition? What is the maximum length of the subintervals? Well, it's just going to be the maximum of the norms of y r and z r. So note that the norm of x sub r is equal to the maximum of the norm of y r and the norm of z r. And since both of these converge to zero, the maximum of them also converges to zero. So now we just put this all together. So by this theorem, what do we know? We know that the Riemann sums associated to the x r's converge to the integral from a to b of f. The Riemann sums associated to the sequence of tagged partitions of [a, c] converge to the integral from a to c of f. And the sequence of Riemann sums associated to the tagged partitions of [c, b] converges to the integral from c to b of f. OK? And now we just use the fact that taking a Riemann sum is additive: the Riemann sum associated to this partition, which is just a union of these partitions, is equal to the sum of the two Riemann sums. This implies, taking the limit as r goes to infinity-- the left hand side, as I wrote down here, converges to the integral from a to b of f. And on the right hand side, the limit of the sum is the sum of the limits, so the first thing converges to the integral from a to c of f, and the second thing converges to the integral from c to b of f. And that's it. So really, the fact that the integral is additive follows from the simple fact that Riemann sums are additive, and the fact that limits respect algebraic operations-- the limit of the sum is the sum of the limits. All right, so the Riemann integral is additive and also linear. So we see that the Riemann integral is something that respects algebraic operations with respect to the real numbers, namely adding and scalar multiplication. And it respects, I guess, what you could say are topological considerations-- or not really topological, but this additive property is a very natural property. What am I getting at here? So now one could ask, how does the integral interact with inequalities? So this is something we ask about all limits, right? Or at least limits of sequences that we saw. If one sequence lies below another one, taking limits respects that. Of course, that's not true for the derivative: I can have two functions, one bigger than the other, and the derivative of the smaller one be bigger than the derivative of the bigger one. But so one can ask the question: suppose I have two functions, one smaller than the other. What's the relationship between the integrals? OK. So suppose f and g are now two continuous functions on [a, b]. So then the first part of this theorem is that if for all x in [a, b], f of x is less than or equal to g of x, then the integral from a to b of f is less than or equal to the integral from a to b of g. All right? And this is kind of understandable.
I mean again, we think of the integral as being a theory of the area underneath the curve. If I have one function sitting above another function, then its area should be bigger than the area of the smaller one, which sits below the first one. And now the Riemann integral is, in some sense, a limit of certain sums involving f. And we do have this relationship involving sums and absolute values, the triangle inequality, which says the absolute value of the sum is less than or equal to the sum of the absolute values. And if you think of the integral as being essentially a limit of sums, or if you like, a continuous sum, then you should expect that the absolute value of the integral is less than or equal to the integral of the absolute value. And that's indeed what we have. The second part is: the absolute value of the integral from a to b of f is less than or equal to the integral from a to b of the absolute value of f. This is the triangle inequality for integrals. All right, so we'll prove part one, using again the main property we have about Riemann integrals, that they satisfy this limiting property. And then we'll deduce part two directly from part one. So let's take a sequence of tagged partitions with norms converging to zero. Then for all r, if we take the Riemann sum with respect to f of this tagged partition, this is equal to the sum from j equals 1 to n of r-- this is just the definition; r here is just indexing the sequence of tagged partitions. And I should say these partitions don't all have to have a single common number of points-- in fact, they can't. So here n of r is just the number of partition points that we have in our partition x superscript r. And since we're assuming f of x is less than or equal to g of x for all x in [a, b], this is less than or equal to the sum from j equals 1 to n of r of g of xi j of r, times the lengths of the subintervals. OK, and what is this last thing? This last thing is just equal to the Riemann sum of g with respect to this tagged partition x and xi. So we started off with the Riemann sum associated to f, and we showed-- just kind of summarizing here-- that this Riemann sum is less than or equal to the Riemann sum for g. So now if I take the limit as r goes to infinity, the left hand side approaches the integral of f from a to b. And since limits respect inequalities, I get the inequality between the integrals, which is what we wanted to prove. All right, and we get number two from number one essentially quite quickly. Since plus or minus f is less than or equal to the absolute value of f, we get that the integral of plus or minus f is less than or equal to the integral from a to b of the absolute value. Now scalar multiplication, we proved last time, pulls out. So that means plus or minus the integral of f is less than or equal to the integral of the absolute value of f. So for the minus sign, this tells me one inequality, and for the plus sign, I get the other inequality. But that's equivalent to saying that the absolute value of the integral of f is less than or equal to the integral of the absolute value. And that's the proof of the theorem. Let me make a small remark here-- and this will be an exercise on the homework to prove this. We know that taking limits respects inequalities but not necessarily strict inequalities. So I can have two sequences, one strictly less than the other for all n.
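The little derivation of part two, recorded in symbols:

```latex
\pm f \le |f|
\ \Rightarrow\
\pm \int_a^b f = \int_a^b (\pm f) \ \le\ \int_a^b |f|
\ \Rightarrow\
\Bigl|\int_a^b f\Bigr| \ \le\ \int_a^b |f| .
```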
And then as n goes to infinity, their limits could in fact equal each other. So for example, if I take the first sequence to be 0 and the second sequence to be 1 over n, 1 over n is always bigger than zero. Yet, they both converge to 0. So taking limits does not respect strict inequality. However, in this setting, which is very nice-- I mean, it's actually an extremely important property of the integral-- this limiting process that we do to take integrals is not just any old limiting process. It actually does respect strict inequality. So what am I saying? We can prove something a little bit stronger, which I'll state in its kind of reduced form here. If I take a continuous function and it's positive, then we already know that the integral of f is going to be non-negative, by what we just proved. But in fact, the integral of f is positive. And again, this is also kind of important from a psychological standpoint, if we're thinking of the integral as being a theory of the area underneath the curve. So what this states-- and again, I'm not going to prove this; this will be on the assignment-- coincides with our intuition of what a theory of the area underneath the curve should be. So this says if a continuous function is positive-- like the picture I drew you-- then the integral, which we interpret, again, as the area underneath the curve, is positive. So that matches our desire for this to be a theory of the area underneath the curve. All right? And we could prove this from basic considerations of what the definition of the integral is. I'm going a little off script here, but it's OK. So we actually haven't computed a single integral yet. Really, it's tough to with this definition. In a minute, we'll prove the fundamental theorem of calculus and be able to do it very easily. But let me first prove at least the most basic integral: the integral from a to b of the function 1 equals b minus a. So what's the proof? Take a sequence of partitions with the norms converging to 0, and we just look at the Riemann sums. And for the function 1, it will be quite easy to compute what the Riemann sums are. So this is a sequence of partitions with norm converging to 0. Then if I look at the associated Riemann sums, this is equal to the sum from j equals 1 to n of r of-- no matter what xi is, I just get 1-- times the length of the subinterval. Now this is a telescoping sum, meaning this is equal to x1 minus x0, plus x2 minus x1, plus up until xn minus xn minus 1. So all I pick up is x0 and xn-- not x1. So this is equal to xn of r minus x0 of r. And this is the last point in the partition, and this is the first point in the partition. And as always for partitions, the last point is b, and the first point is a. So this is equal to b minus a. Now this thing on the left hand side converges to the integral, so this tells me that the integral, which is equal to this limit of Riemann sums, is equal to b minus a. All right, and then the assignment for this week-- or I guess, when you see this, last week-- was to essentially compute the integral from a to b of x dx. And so we get from this theorem and the previous theorem the following bound for the area. So let me draw a picture so that this is not so surprising. Let's say I have a function on [a, b].
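The telescoping computation just carried out, in one line:

```latex
S\bigl(1, X^r, \xi^r\bigr)
 = \sum_{j=1}^{n(r)} 1 \cdot \bigl(x_j^r - x_{j-1}^r\bigr)
 = x_{n(r)}^r - x_0^r
 = b - a \quad\text{for every } r .
```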
There's the function f. So that was supposed to go through this point. So let's make that a little bit-- I suppose I don't need these parts. So a continuous function always achieves a minimum and maximum. And at least for this picture, the minimum occurs here, and the maximum occurs here. Now, what's the comparison between areas from this picture? Well, the area underneath the curve of f will be bigger than or equal to the area underneath the graph of the constant function equal to little m sub f. And that's just b minus a times m sub f, by intuition. But this is also seen from the theorem we just proved, along with the linearity of the Riemann integral, which we've already proved. And the integral of f, the area underneath this curve, is also less than or equal to the area underneath the line where f achieves its maximum. It's the 12th week of class, and I still haven't brought colored chalk, so hopefully that was clear enough without me having to color stuff in. So the theorem is the following. If f is continuous-- again, we're working with continuous functions because that's all we can integrate, or are going to integrate, in this section-- then I have these two numbers: the inf of f of x over [a, b], which we know is in fact a min, meaning there exists a point where it's achieved-- so in fact, I'm going to write "min"; we know by the min-max theorem that every continuous function achieves its minimum and max-- and likewise the max, capital M sub f. Then, as I was saying a minute ago, little m sub f times b minus a, the area underneath the minimum, is less than or equal to the area underneath the curve of f, the integral, which is less than or equal to the area underneath the taller line, capital M sub f times b minus a. How do we prove this? Well, we just use these previous two theorems, right? Since little m sub f is less than or equal to f of x, which is less than or equal to capital M sub f, for all x in [a, b], we can apply that theorem up there to get that the integral of little m sub f-- again, this is just a fixed number-- is less than or equal to the integral from a to b of f, which is less than or equal to the integral of capital M sub f. And now again, these are just fixed numbers. So by what we proved here, that the integral from a to b of 1 is b minus a, and the fact that scalars pull out-- so this number here can come out, and this is just m sub f times the integral from a to b of 1-- this is little m sub f times b minus a. Over here, this is capital M sub f times b minus a. And that's the proof. So now let me make a few comments about some conventions, really. So we've been talking about-- or at least, I've been talking about; you're not here, so we haven't been talking at all-- the integral from a to b of f when a is less than b. But I'm going to set down some conventions, or you could say this is really notation. When I write down this symbol, the integral from a to a of f, this is really just another way of writing 0.
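Before moving on, the bound just proved, in one chain:

```latex
m_f \le f(x) \le M_f \ \text{for all } x \in [a,b]
\ \Rightarrow\
m_f\,(b-a) = \int_a^b m_f \ \le\ \int_a^b f \ \le\ \int_a^b M_f = M_f\,(b-a).
```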
And the other is that if, in fact, b is less than a, then the integral from a to b of f-- so remember, we've been talking about integrals over intervals where the number on the bottom is less than the number on the top-- whenever I write this symbol with the bigger number on the bottom and the smaller number on the top, you should read this as minus the integral with the limits in the right place, meaning the number on the bottom is less than the number on the top. OK, and so again, these are really just notations: when I write the first, it's a fancy way of writing 0; when I write the second, it means minus the integral with the right numbers on top and bottom, meaning the smaller number on the bottom, the bigger number on the top. So maybe you ask, why do I define that first one to be 0? Well, if you like, 1 is consistent with the fact that for all continuous functions, if I take the integral from a to b with b bigger than a, and I take the limit as b goes to a, this equals 0. And 2 is then consistent with number 1 and additivity. Because additivity would tell me that the integral from a to a of f is equal to the integral from a to b of f, plus the integral from b to a of f-- here assuming b is less than a. So if by 1 this is 0, then to be consistent with additivity, this would force the integral from a to b of f to be minus the integral from b to a of f. So both of these conventions, if you like, are consistent with the properties of the Riemann integral-- namely, that the limit as b goes to a of the area underneath this curve, as the base gets smaller and smaller, is 0. And then, if I assume this, additivity tells me that the integral from a to b has to be minus the integral from b to a of f. All right, so now we've computed successfully one integral. Of course, when you took calculus, that was not all you could do. Calculus is what it is because of the hero of the story, which is the Fundamental Theorem of Calculus, which is what we'll prove now. The Fundamental Theorem of Calculus states the following. So first off, let little f be a continuous function. The first statement is basically about how to compute integrals. If capital F from [a, b] to R is differentiable everywhere on [a, b] and capital F prime equals little f, then the integral from a to b of little f equals capital F of b minus capital F of a. Another way of saying this is that the integral from a to b of capital F prime is equal to capital F of b minus capital F of a. And then the second part-- so the first part is about computing integrals; the second part is about solving differential equations, basically. The function capital G of x equals the integral from a to x of little f-- so for each x in [a, b], I stick it in as the upper limit on this integral-- this function is differentiable on [a, b]. And it satisfies the simplest differential equation, which is G prime equals f, and G of a equals 0. So this is the Fundamental Theorem of Calculus, the first part being about how you compute integrals, the second, how you compute solutions to a differential equation. Namely, how do I find the solution to the problem g prime equals little f? So little f is the given. I want to find a function g that satisfies g prime equals little f with initial condition capital G of a equals 0. That's given by this function here, where I take little f and I integrate it from a to x.
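The two parts of the theorem, side by side:

```latex
\text{(1)}\quad F' = f \ \text{on } [a,b] \ \Rightarrow\ \int_a^b f = F(b) - F(a);
\qquad
\text{(2)}\quad G(x) := \int_a^x f \ \Rightarrow\ G' = f,\ \ G(a) = 0 .
```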
So that's how I interpret the Fundamental Theorem of Calculus: the first part is about computing integrals, the second about solving differential equations. This is the hero of calculus, the Batman of the story, and I said at one point that the Mean Value Theorem is the Alfred of this story; therefore it should play a decisive role in proving this theorem. All right, so we're going to connect these two things. In order to even get at the integral, we take a sequence of partitions P_r of [a, b] with norms converging to 0 (I keep forgetting to say the word "tagged", but no tags yet; I'm going to come up with the tags in a minute), and then we choose a special tag for each r so that in the limit we get the equality between ∫_a^b f and F(b) - F(a). So here comes Alfred. On each subinterval [x_{j-1}, x_j], by the Mean Value Theorem applied to F (remember, F is the function that is differentiable and whose derivative gives me little f), for each j there exists a point ξ_j in between such that F(x_j) - F(x_{j-1}) = F'(ξ_j) (x_j - x_{j-1}), and since F' = f by assumption, I can replace F' by little f: F(x_j) - F(x_{j-1}) = f(ξ_j) (x_j - x_{j-1}). Put a star by this relation, and take these ξ_j as my tags. What do we conclude? The Riemann sum for f associated to this sequence of tagged partitions is, as before, the sum from j = 1 to n_r of f(ξ_j) (x_j - x_{j-1}), which by the starred relation equals the sum from j = 1 to n_r of F(x_j) - F(x_{j-1}). And again, this is a telescoping sum, so all I pick up is the last point and the first point: F(x_{n_r}) - F(x_0). For a partition of [a, b], the last point is always b and the first is always a, so this is F(b) - F(a). So every one of these Riemann sums, for every r, gives me F(b) - F(a). Then I get to take the limit: ∫_a^b f, which is the limit as r goes to infinity of the Riemann sums, equals F(b) - F(a). Just to recap: we took a sequence of partitions with norms converging to 0, and we chose special tags using the Mean Value Theorem, so that on each subinterval F evaluated at the right endpoint minus F evaluated at the left endpoint equals f, the derivative of F, evaluated at some point in between, times the length of the subinterval; the Riemann sums then telescope to F(b) - F(a), and the limit gives the integral. That proves number one.
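The tag choice in this proof can be made completely concrete. For F(x) = x², f(x) = 2x, the Mean Value Theorem point of each subinterval is exactly its midpoint, since x_j² - x_{j-1}² = 2 · ((x_{j-1} + x_j)/2) · (x_j - x_{j-1}); so the Riemann sum with midpoint tags telescopes exactly, for any partition whatsoever. A sketch, with the interval [1, 3] and the random partition as assumed choices:

```python
import random

# F(x) = x^2, f(x) = 2x: the MVT tag on [x_{j-1}, x_j] is the midpoint,
# so the tagged Riemann sum equals F(b) - F(a) exactly, for ANY partition.
a, b = 1.0, 3.0
pts = sorted([a, b] + [random.uniform(a, b) for _ in range(20)])

riemann = sum(2 * (pts[j - 1] + pts[j]) / 2 * (pts[j] - pts[j - 1])
              for j in range(1, len(pts)))
print(riemann, "vs F(b) - F(a) =", b**2 - a**2)   # equal up to rounding
```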
So now for number two. What do we want to show? First off, by convention G(a) = ∫_a^a f = 0, so I don't have to check the second condition; I just need to check that G is differentiable and that its derivative is little f. Let c be in [a, b]. We want to show the limit as x goes to c of (∫_a^x f - ∫_a^c f) / (x - c), which is just (G(x) - G(c)) / (x - c), equals f(c). We're going to do this by the book (when I say "by the books" that's kind of a bad pun, I guess), meaning an epsilon-delta proof of this limit. Let ε > 0. The delta that we pick will use crucially the fact that f is continuous at c: since f is continuous at c, there exists δ₀ > 0 such that |t - c| < δ₀ implies |f(t) - f(c)| < ε/2 (let's give ourselves a little room with the 2). Choose δ = δ₀. Remember, we have to show that for every ε there exists a δ so that 0 < |x - c| < δ implies that the difference quotient minus f(c), in absolute value, is less than ε. Because of sign conventions there are two cases: x between c and c + δ, and x between c - δ and c; together they cover 0 < |x - c| < δ. I'm just going to do the first case. So suppose c < x < c + δ. First I'd like to note that if t is in the interval [c, x], then |t - c| = t - c ≤ x - c < δ = δ₀. Let me draw the picture so this is clear: this length from c to c + δ is δ, and if I take any t in between c and x, the distance from t to c is also less than δ. So as long as t is in [c, x], the continuity estimate applies. Thus, we now compute the difference quotient minus the proposed limit: (1/(x - c)) (∫_a^x f - ∫_a^c f) - f(c). By additivity, ∫_a^x f = ∫_a^c f + ∫_c^x f, so this equals (1/(x - c)) ∫_c^x f - f(c). Now, f(c) is just a fixed number, and so I'm going to do a little trick.
In fact, let me throw in some integration variables so this becomes clearer: we have (1/(x - c)) ∫_c^x f(t) dt - f(c); I've been using the notation where I don't write the integration variable, but let me do so here. Since ∫_c^x 1 dt = x - c, we can write f(c) = (f(c)/(x - c)) ∫_c^x 1 dt; that x - c cancels. And f(c) is just a number, so I can bring it inside the integral and use linearity to rewrite the whole expression as (1/(x - c)) ∫_c^x (f(t) - f(c)) dt. Taking absolute values, 1/(x - c) is positive because x > c, so it pulls out as is. Now apply the triangle inequality for integrals: the expression is at most (1/(x - c)) ∫_c^x |f(t) - f(c)| dt. And what do we know? For t between c and x, |t - c| < δ₀, so by the continuity estimate |f(t) - f(c)| < ε/2 on all of [c, x]. By what we know about integrals respecting inequalities, this is at most (1/(x - c)) ∫_c^x (ε/2) dt = (1/(x - c)) (ε/2) (x - c) = ε/2 < ε. And this was for x between c and c + δ; the argument for x between c - δ and c is essentially the same, just with minor sign changes: similarly, c - δ < x < c implies |(∫_a^x f - ∫_a^c f)/(x - c) - f(c)| < ε. So we've done both cases, which is what we wanted to prove, and we've concluded the proof of the Batman of the story. And this gives us not only a way of computing integrals, but a useful way of, if you like, shifting the burden, or shifting the blame, whenever we compute integrals of certain products. What I'm talking about is integration by parts. I can't remember if I ranked it first or second; I think the most useful thing from analysis is the triangle inequality, the second is integration by parts, and a close third is the Cauchy-Schwarz inequality, which in conjunction with integration by parts is the basis of many current research papers, used in very clever ways. So, integration by parts: suppose f and g are continuously differentiable on [a, b], which means the derivatives exist on [a, b] and are also continuous (if you hear me say "continuously differentiable" in the future, it means the function has a derivative that is continuous). Then ∫_a^b f' g = f(b) g(b) - f(a) g(a) - ∫_a^b f g'. The factor carrying the derivative carries the burden (remember, differentiability is kind of a miracle, and miracles always come with some sort of burden), and the formula shifts that burden from f over to g.
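A quick numerical check of the integration by parts formula; the choices f(x) = x², g(x) = sin(x) on [0, 1], and the midpoint rule, are assumptions made for illustration.

```python
import math

# Check integration by parts on [a, b] = [0, 1]:
#   integral f' g  =  f(b)g(b) - f(a)g(a) - integral f g'
# with the illustrative choices f(x) = x^2, g(x) = sin(x).
f, fp = lambda x: x**2, lambda x: 2 * x    # f and its derivative f'
g, gp = math.sin, math.cos                 # g and its derivative g'
a, b, N = 0.0, 1.0, 200_000

def integral(h):
    # midpoint Riemann sum on [a, b]
    return sum(h(a + (b - a) * (k + 0.5) / N) for k in range(N)) * (b - a) / N

lhs = integral(lambda x: fp(x) * g(x))
rhs = f(b) * g(b) - f(a) * g(a) - integral(lambda x: f(x) * gp(x))
print(lhs, "vs", rhs)   # both about 0.602 = 2(sin 1 - cos 1)
```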
And what's the proof? The proof is just the Fundamental Theorem of Calculus and the product rule. Since (f g)' = f' g + f g', the Fundamental Theorem of Calculus gives ∫_a^b (f' g + f g') = ∫_a^b (f g)' = f(b) g(b) - f(a) g(a); the integral of the derivative is the function evaluated at the endpoints. Rearranging, ∫_a^b f' g = f(b) g(b) - f(a) g(a) - ∫_a^b f g'. So that's integration by parts, a consequence of the product rule. The quotient rule is really just the product rule in disguise, so we don't get anything new from it; but we also have the chain rule, which results in the change of variables formula. Theorem: let φ : [a, b] → [c, d] be continuously differentiable (so the function and its derivative are both continuous) with φ' > 0 on [a, b], φ(a) = c, and φ(b) = d; you can think of φ as a change of variables between [a, b] and [c, d]. Then ∫_c^d f(u) du = ∫_a^b f(φ(x)) φ'(x) dx. So maybe instead of "change of variables" I should say this is change of variables, a.k.a. u-substitution, where you set u = φ(x), and then du = φ'(x) dx. And again, the proof just follows from the Fundamental Theorem of Calculus and the chain rule that we know. Let F : [c, d] → R be such that F' = f; we can always find one by the second part of the Fundamental Theorem of Calculus, which solves this differential equation (up to a constant). Then if I look at the function F(φ(x)) and take its derivative with respect to x, by the chain rule this is F'(φ(x)) φ'(x) = f(φ(x)) φ'(x). So when I integrate, ∫_a^b f(φ(x)) φ'(x) dx = ∫_a^b (F ∘ φ)'(x) dx, and by part one of the Fundamental Theorem of Calculus the integral of the derivative is the function evaluated at the endpoints: F(φ(b)) - F(φ(a)) = F(d) - F(c), because φ(b) = d and φ(a) = c. But again by the Fundamental Theorem of Calculus, since differentiating F gives me little f, F(d) - F(c) = ∫_c^d F'(u) du = ∫_c^d f(u) du. So we applied the Fundamental Theorem of Calculus three times, really: first to find a function F whose derivative is little f; then, after the chain rule told us that the derivative of F ∘ φ is exactly the thing we're integrating on the right-hand side, to integrate that derivative and pick up the endpoints, F(d) - F(c); and finally, because F is an antiderivative of little f, to recognize that number as ∫_c^d f. And so we get the change of variables formula. So I think we'll stop there.
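A small numerical check of the change of variables formula; the choices φ(x) = eˣ on [0, 1] (so c = 1, d = e, and φ' = eˣ > 0) and f(u) = 1/u are assumptions made so both sides come out to 1.

```python
import math

# Change of variables, numerically:
#   integral_c^d f(u) du = integral_a^b f(phi(x)) phi'(x) dx
# with phi(x) = e^x on [0, 1], so c = 1, d = e, and f(u) = 1/u.
a, b, N = 0.0, 1.0, 200_000
c, d = math.exp(a), math.exp(b)

def midpoint_integral(h, lo, hi):
    return sum(h(lo + (hi - lo) * (k + 0.5) / N) for k in range(N)) * (hi - lo) / N

lhs = midpoint_integral(lambda u: 1.0 / u, c, d)
rhs = midpoint_integral(lambda x: (1.0 / math.exp(x)) * math.exp(x), a, b)
print(lhs, "vs", rhs)   # both about 1.0, since the integral of du/u from 1 to e is 1
```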
And next time, we'll do a quick application of the second most useful thing on Earth, which is integration by parts. And then we'll move on to sequences of functions.
MIT_18100A_Real_Analysis_Fall_2020
Lecture_12_The_Ratio_Root_and_Alternating_Series_Tests.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So let's continue our discussion of series. At the end of last time we proved the comparison test, which is a statement about series with non-negative terms, one smaller than the other: if 0 ≤ x_n ≤ y_n for all n, then two things are true. One: if the bigger series Σ y_n converges, then the smaller series Σ x_n converges. Two: if the smaller series diverges, then the larger series diverges. We also proved, for p-series, that Σ_{n=1}^∞ 1/nᵖ converges if and only if p > 1; the "only if" direction did use the comparison test and what we know about the harmonic series Σ 1/n. Typically, in applications, you use these two theorems together to say something about series that don't look so simple. For example, look at the series Σ_{n=1}^∞ 1/(n² + 2020n): does it converge or diverge? Well, this is a series with non-negative terms, and the 2020n is just making the denominator bigger and therefore the terms smaller overall: 1/(n² + 2020n) ≤ 1/n². Since Σ 1/n² converges, the comparison test implies that Σ_{n=1}^∞ 1/(n² + 2020n) also converges. Now, a mistake I'm sure we've all made at some point is mixing up the inequality and not getting quite the right answer. For example, it is also true that 1/(n² + 2020n) ≤ 1/(2020n) ≤ 1/n, and so you're tempted to say: since Σ 1/n diverges, the other series diverges. But this is not right, because the inequality points the wrong way. Remember, to apply the comparison test you either need a bigger series which converges or a smaller series which diverges. Here we came up with a bigger series which diverges, which gives us no information at all; only a bigger series that converges, like Σ 1/n², tells us something about the original series. So let's do another well-known test, or at least a test you should remember from calculus, the so-called ratio test. What is the statement? Suppose x_n ≠ 0 for all n, and the limit L = lim_{n→∞} |x_{n+1}/x_n| exists. Then: if L < 1, the series Σ x_n converges absolutely; and if L > 1, the series diverges. Now, what about L = 1? There's no information when L = 1: you could have a series with L = 1 that diverges, and you could also have a series with L = 1 that converges. For example, take x_n = 1 for all n. Then L = 1, but we know the series Σ 1 diverges.
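To see the comparison test at work numerically, one can watch the partial sums of the smaller series stay trapped below those of the dominating series; the cutoff of one million terms is an arbitrary assumption for the demo.

```python
# Partial sums of sum 1/(n^2 + 2020 n) are increasing and bounded above by
# the partial sums of sum 1/n^2, so they converge (comparison test in action).
M = 1_000_000
s_small = s_big = 0.0
for n in range(1, M + 1):
    s_small += 1.0 / (n * n + 2020 * n)
    s_big += 1.0 / (n * n)
print(s_small, "<=", s_big)   # the dominating sums stay below pi^2/6 ~ 1.6449
```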
It diverges because the individual terms do not converge to 0; they're just 1 the whole time. But for x_n = 1/n², the one we just saw a minute ago, L = lim_{n→∞} n²/(n + 1)² = lim_{n→∞} 1/(1 + 1/n)² = 1/(1 + 0)² = 1, and that series converges. So again, in the case L = 1 this theorem gives us no information; we can't say anything. We're going to prove the theorem by comparing a series satisfying one of these two assumptions to a series we know, namely a geometric series; it's not exactly the comparison test, but it's the same spirit. So first, let's get number two out of the way. Suppose L > 1. Since |x_{n+1}/x_n| converges to L, take ε = L - 1 in the definition of convergence: there exists a natural number M₀ so that for all n ≥ M₀, |x_{n+1}/x_n| has to be in the interval of radius L - 1 around L, so in particular |x_{n+1}/x_n| ≥ L - (L - 1) = 1. This implies that for all n ≥ M₀, |x_{n+1}| ≥ |x_n|, and so I can write |x_{M₀}| ≤ |x_{M₀+1}| ≤ |x_{M₀+2}| ≤ and so on. But this implies the x_n cannot converge to 0 as n goes to infinity: from M₀ on, the absolute values are increasing, and the only way for an increasing non-negative sequence to converge to 0 is for all its terms to be 0, which we are assuming they are not. So that proves two. Now let's prove one. Suppose L < 1, and let α be a number strictly between L and 1; I just want to give myself a little bit of room to work, as you'll see in a second. By the same reasoning as before, think of α - L as an epsilon: since |x_{n+1}/x_n| converges to L, which is less than α, there exists a natural number M₀ so that for all n ≥ M₀, |x_{n+1}/x_n| ≤ α; in other words, |x_{n+1}| ≤ α |x_n| for all n ≥ M₀. Now let's see what this means by iterating the inequality.
Well, let's not be confusing with the indices; let me use l as the running index, playing the role of n + 1. For any l ≥ M₀ + 1 I can step down one index at a time: |x_l| ≤ α |x_{l-1}| ≤ α² |x_{l-2}| ≤ and so on, and I can keep doing this as long as the index stays at least M₀, so in the end |x_l| ≤ α^{l - M₀} |x_{M₀}|. So now we're going to use this to bound the partial sums of Σ |x_n|. Let m be a natural number, say m > M₀, and split the partial sum into two parts: Σ_{n=1}^m |x_n| = Σ_{n=1}^{M₀} |x_n| + Σ_{l=M₀+1}^m |x_l|. The first part, the stuff up to M₀, we don't care about; that's just a fixed number. The second part is the part we care about, and by the iterated inequality it is at most Σ_{l=M₀+1}^m α^{l - M₀} |x_{M₀}|. The thing to remember is that little m is what's changing, and we're trying to bound everything independently of little m; capital M₀ is fixed, it could be 1,000. This inequality somehow tells you the series is not far from being a geometric series once l is big enough. Now change the summation variable: with n = l - M₀, n runs from 1 to m - M₀, and we get Σ_{n=1}^m |x_n| ≤ Σ_{n=1}^{M₀} |x_n| + |x_{M₀}| Σ_{n=1}^{m-M₀} αⁿ. But now we're in good shape, because this looks like a geometric series, and remember, α < 1. So instead of summing just up to m - M₀, why not throw all of it in there: the right side is at most Σ_{n=1}^{M₀} |x_n| + |x_{M₀}| Σ_{n=0}^∞ αⁿ = Σ_{n=1}^{M₀} |x_n| + |x_{M₀}| · 1/(1 - α). If you had tried to run this argument without fixing α a little bit to the left of 1, doing everything with 1 itself, you would have wound up with 1ⁿ here, and that wouldn't have closed the proof; that's why we gave ourselves a little bit of room and fixed this α. Now, this bound is independent of little m; that's the whole point. So what have we proven?
That for every natural number little m, the m-th partial sum of the absolute values is bounded by a fixed number. Therefore this sequence of partial sums is bounded (and increasing), and therefore converges: the series converges absolutely. Kind of the simplest application of this is a series that should look familiar; call it a theorem slash example. For every x in R, the series Σ_{n=0}^∞ xⁿ/n! converges absolutely (0! is, of course, 1). You just use the ratio test: the ratio of the (n+1)-st term to the n-th term is |x^{n+1}/(n+1)!| / |xⁿ/n!|, and since (n+1)! = (n+1) · n!, the n! cancels and this equals |x|/(n+1). The limit as n goes to infinity of this fixed number over n + 1 is 0, which is certainly less than 1, and therefore by the ratio test the series converges absolutely. So I got a little hung up before on matching the indices precisely, but the important thing to take home from that proof is: when the ratio limit is less than 1, the series behaves very much like a geometric series once you go far enough out in the terms. And this idea of relating your series to a simple series you know a lot about, basically the only series you know everything about, even how to sum it, is how you get this test. It's even simpler to see for the next test, the root test. Root test: take a series Σ x_n and suppose the limit L = lim_{n→∞} |x_n|^{1/n} exists. Then, just as in the ratio test: if L < 1, the series converges absolutely; and if L > 1, the series diverges. And again, just like the ratio test, no information for L = 1. Take the same series we looked at before: for x_n = 1 for all n, this limit L exists and equals 1, and the series diverges; for x_n = 1/n², the limit is again 1, but that series converges. So for L = 1 we get no information. In my notes I prove two first both times, so perhaps I should have numbered them the other way; I'll know for the future. So suppose L > 1. We'll show the series diverges, again by showing that the terms do not converge to 0. It's the same idea as before: here's 1, here's L, and since |x_n|^{1/n} converges to L > 1, for all n sufficiently large |x_n|^{1/n} has to land in an interval to the right of 1. So there exists a natural number M₀ such that for all n ≥ M₀, |x_n|^{1/n} > 1, which implies, taking n-th powers of both sides, that |x_n| > 1 for all n ≥ M₀. Since all these absolute values are bigger than 1, x_n cannot converge to 0. Why?
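A short numerical illustration of this example; x = 3 is an arbitrary test point, and the recursion term *= x/(n+1) is just the ratio |x|/(n+1) from the proof, applied term by term.

```python
import math

# sum x^n / n! converges absolutely for every x: the ratio of consecutive
# terms is |x|/(n+1) -> 0 < 1.  Partial sums for x = 3 approach e^3.
x = 3.0
term, partial = 1.0, 0.0      # term starts at x^0 / 0! = 1
for n in range(0, 40):
    partial += term
    term *= x / (n + 1)       # turns x^n/n! into x^{n+1}/(n+1)!
print(partial, "vs exp(3) =", math.exp(3.0))   # both about 20.0855
```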
I mean, we could go back to the basic definition of what it means for a sequence not to converge to 0: x_n does not converge to 0 if there exists a bad epsilon, here ε = 1, such that x_n stays outside the interval (-1, 1) no matter how far out you go, and we certainly have that here. Or you could use the lim sup: |x_n| > 1 for all large n in fact implies lim sup |x_n| ≥ 1, because if one sequence dominates another, the lim sup of the bigger one is at least the lim sup of the other. And you're proving on this week's assignment that x_n converges to 0 if and only if lim sup |x_n| = 0, which fails here since the lim sup is at least 1. So we'll leave that there. Now for the other case. Suppose L < 1, and let α be a number between L and 1. Again, since |x_n|^{1/n} converges to L, which is less than α, for all n sufficiently large this has to be less than α: there exists a natural number M₀ such that for all n ≥ M₀, |x_n|^{1/n} < α, which implies |x_n| < αⁿ for all n ≥ M₀. So here, even more clearly than before, we're seeing that a series satisfying these hypotheses acts very much like a geometric series; remember, α < 1. Then for every natural number little m, split the partial sum of absolute values into a part we don't really care about plus the interesting part (little m is the thing that's changing, compared to the fixed capital M₀): Σ_{n=1}^m |x_n| ≤ Σ_{n=1}^{M₀} |x_n| + Σ_{n=M₀+1}^m αⁿ. The first term is just a fixed number. For the second, I'm going to go kind of fast, but here is why the bound holds: this finite sum only gets bigger if I make the lower limit smaller and the upper limit larger, and those are partial sums of a geometric series with non-negative terms, so Σ_{n=M₀+1}^m αⁿ ≤ Σ_{n=0}^∞ αⁿ = 1/(1 - α). And therefore the partial sums corresponding to the series of absolute values are bounded, that series converges, and we have absolute convergence. So now let me state a theorem about alternating series. I prefer not to call it the alternating series test, because there's nothing really to test. At least with the ratio and root tests you have to compute a limit, which might require some work; that, to me, is a real test, where you do a little work to check whether a series converges. For alternating series, the test is: you look at it. That's it; you don't compute anything. So I prefer to call it the theorem about alternating series.
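As a quick illustration of the root test, here is a hedged numeric sketch; the choice x_n = n²/2ⁿ is an assumed example (not from the lecture), and logarithms are used only to avoid overflow for large n.

```python
import math

# Root test in action: x_n = n^2 / 2^n has |x_n|^(1/n) = n^(2/n)/2 -> 1/2 < 1,
# so the series converges absolutely.  Compute the n-th root via logs.
for n in [10, 100, 1_000, 10_000]:
    root = math.exp((2 * math.log(n) - n * math.log(2)) / n)
    print(n, root)                                   # tends to 0.5

print(sum(n * n / 2.0**n for n in range(1, 60)))     # the sum is 6
```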
And the statement is the following. Let x_n be a monotone decreasing sequence converging to 0. Let me put in parentheses: therefore x_n ≥ 0 for all n; I cannot have a monotone decreasing sequence converge to 0 if one of the x_n is less than 0, because they keep getting smaller. And this is it; these are all of the hypotheses. You don't have to compute anything. Then the series Σ_{n=1}^∞ (-1)ⁿ x_n converges. (Of course we don't have to start at n = 1 in particular, but let's make the statement precise this way.) And we can only say convergence, not necessarily absolute convergence: for x_n = 1/n, which is a monotone decreasing sequence converging to 0, Σ (-1)ⁿ/n converges, but not absolutely; and for x_n = 1/n², again monotone decreasing to 0, the series converges absolutely. So we just have a statement about convergence. How we're going to prove it is kind of like how we proved convergence for p-series: we're going to show that a certain subsequence of partial sums converges, and then use that to show that the full sequence of partial sums converges. Claim 1: the subsequence of partial sums S_{2k} converges, where, just to be complete, S_m denotes the m-th partial sum, Σ_{n=1}^m (-1)ⁿ x_n. How we're going to do this is to show these partial sums are monotone decreasing and bounded from below, basically both because the x_n are monotone decreasing. So let's show it. For k a natural number, S_{2k} = Σ_{n=1}^{2k} (-1)ⁿ x_n = -x₁ + x₂ - x₃ + ... + x_{2k}, where the n = 2k term comes with a plus sign since 2k is even. Grouping in pairs, this equals (x₂ - x₁) + (x₄ - x₃) + ... + (x_{2k} - x_{2k-1}). Now keep everything and add the next pair: S_{2(k+1)} = S_{2k} + (x_{2k+2} - x_{2k+1}). Since the sequence is monotone decreasing, x_{2k+2} comes right after x_{2k+1} and so is smaller; the added term is ≤ 0, and therefore S_{2(k+1)} ≤ S_{2k}. Thus the sequence S_{2k} is monotone decreasing, so to show it converges we just have to show that it is bounded, or rather bounded below.
If it's monotone decreasing, it's already bounded above, so we just need to show it's bounded from below. Now, for every natural number k, let's look at S_{2k} and group the terms slightly differently: S_{2k} = -x₁ + (x₂ - x₃) + (x₄ - x₅) + ... + (x_{2k-2} - x_{2k-1}) + x_{2k}. Again, the x_n are monotone decreasing, so x₃ is smaller than x₂ and this first parenthesis is non-negative; x₅ is smaller than x₄, so the next is also non-negative, and so on. And remember, the x_n are monotone decreasing and converging to 0, therefore all non-negative, so the final term x_{2k} is ≥ 0 as well. So S_{2k} ≥ -x₁: it's -x₁ plus non-negative terms. Thus, what do we get? For all natural numbers k, -x₁ ≤ S_{2k} ≤ S₂ = -x₁ + x₂, the upper bound because the S_{2k} are monotone decreasing starting from S₂. So this sequence of numbers is bounded between those two real numbers, and since it's monotone decreasing it has to have a limit: S_{2k} converges. Let's call the limit S. And now we'll show that the full sequence of partial sums converges to S; we've just shown it along a subsequence. This is, if you like, claim 2: the full sequence of partial sums converges to S. And we're going to do this by brute force using an epsilon-M argument. Remember, to show S_m converges to S means: for all ε > 0 there exists a capital M so that for all m ≥ M, |S_m - S| < ε. So let ε > 0. Since S_{2k} converges to S, there exists a natural number M₀ so that for all k ≥ M₀, |S_{2k} - S| < ε/2. Now, we haven't used at all in this proof that the x_n really converge to 0; we did use that they were non-negative at one point. This is where we'll use the convergence to 0, and here's the intuition. The even partial sums are converging to S, so now I just have to look at the odd ones; if I can show the odd partial sums also converge to S, then essentially I'm done. And what's the difference between an odd partial sum and the even one before it? It's just a single term, ±x_m, which is converging to 0. So they don't differ by much, and that's essentially the whole argument right there. So, since the x_n converge to 0, there exists a natural number M₁ so that for all n ≥ M₁, x_n < ε/2. Choose M to be the maximum of the two numbers 2M₀ + 1 and M₁; you'll see why I made these choices in just a minute. So suppose m ≥ M; we want to show |S_m - S| < ε, and there are two cases. If m is even, then since m ≥ M ≥ 2M₀ + 1 and m is even, in fact m ≥ 2M₀ + 2.
If I divide by 2, m/2 ≥ M₀ + 1 ≥ M₀, and m/2 is an integer, so I can use the first inequality: |S_m - S| = |S_{2·(m/2)} - S| < ε/2 < ε. And now we do odd. So again m ≥ M, and m is odd; let k be the integer (m - 1)/2, which makes sense since m odd means m - 1 is even. So m = 2k + 1, and since m ≥ M ≥ 2M₀ + 1, we get 2k + 1 ≥ 2M₀ + 1, and therefore k ≥ M₀. Also, m ≥ M ≥ M₁ as well, since M is the max of those two things. Then look at S_m - S: the m-th partial sum equals the (m-1)-st partial sum plus the next term, so S_m = S_{m-1} + (-1)^m x_m = S_{2k} - x_m, because m - 1 = 2k and m is odd. Grouping S with S_{2k} and applying the triangle inequality, |S_m - S| ≤ |S_{2k} - S| + x_m. Since k ≥ M₀, the first term is less than ε/2 by the first inequality; and since m ≥ M ≥ M₁, the second term is less than ε/2 by the second inequality. So |S_m - S| < ε/2 + ε/2 = ε. We've done the cases m even and m odd, so in summary: for m ≥ M, |S_m - S| < ε, and therefore the S_m converge to S. And I don't think I have enough time to do the next theorem, so we'll stop there.
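The two-subsequence structure of this proof is easy to watch numerically: the even partial sums decrease to the limit and the odd ones increase to it. A sketch with x_n = 1/n (so the sum is -ln 2 with this sign convention); the cutoff of 2000 terms is arbitrary.

```python
import math

# Alternating series: x_n = 1/n is monotone decreasing to 0, so
# sum (-1)^n / n converges (to -ln 2 with this sign convention).
S, evens, odds = 0.0, [], []
for n in range(1, 2001):
    S += (-1)**n / n
    (evens if n % 2 == 0 else odds).append(S)

print("S_2000 =", evens[-1], " S_1999 =", odds[-1], " limit =", -math.log(2))
print("even partial sums decreasing:", all(a >= b for a, b in zip(evens, evens[1:])))
print("odd  partial sums increasing:", all(a <= b for a, b in zip(odds, odds[1:])))
```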
MIT_18100A_Real_Analysis_Fall_2020
Lecture_25_Power_Series_and_the_Weierstrass_Approximation_Theorem.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So at the end of last time we proved the Weierstrass M-test, which I'll restate in briefer form now as the following. Suppose f_j is a sequence of functions from some subset S of R to R, such that for each j there exists a non-negative number M_j with two properties. First, the M_j dominate the f_j: |f_j(x)| ≤ M_j for all x in S. Second, the M_j are summable: Σ M_j converges. Then the conclusion is that the sequence of partial sums converges uniformly: there exists a function f : S → R so that the partial sums Σ_{j=1}^n f_j, the functions built up from the f_j by summing the first n of them, converge to f uniformly. For example, and I kind of went through this last time: take f_j(x) = cos(160ʲ x)/4ʲ, our old friend, with S = R. A couple of things. First, |f_j(x)| ≤ 4⁻ʲ for all x, because cosine of whatever you put into it is always bounded by 1. And since Σ 4⁻ʲ converges, this implies that the series Σ_{j=1}^∞ cos(160ʲ x)/4ʲ, which you should think of as a limit of functions, the partial sums, converges uniformly on R. Not only that: the function defined by sticking x into this series is continuous, because last time we proved that the uniform limit of continuous functions is continuous, and each partial sum is just finitely many cosines, hence continuous. So this gives another proof of one of the things we did by hand when we spoke about differentiability, namely that this function is continuous. So let's go back to power series, which were our original motivation. Let me make the terminology perfectly clear; if you like, make this a definition, since I was already using it in that theorem and example. When I say a series of functions converges uniformly, I mean the partial sums converge uniformly: there exists a function f : S → R, just so there's no ambiguity, such that the sequence of partial sums, which are functions, converges to f uniformly. That's what power series are: expressions involving a free variable x, and therefore you can think of them as series of functions. So let me state the following theorem about when we have uniform convergence. Suppose we have a power series Σ a_j (x - x₀)ʲ with radius of convergence ρ, which I recall is defined by ρ = (lim_{j→∞} |a_j|^{1/j})⁻¹; if that limit is 0, this is shorthand for saying the radius of convergence is infinity. Then, as long as I stay strictly inside the radius of convergence, I have uniform convergence. That's the statement: for all R in (0, ρ), the power series, now thought of as a series of functions of x, converges uniformly on [x₀ - R, x₀ + R].
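The M-test gives a concrete, uniform-in-x bound on how close any two partial sums are; here is a hedged numeric sketch. Reading the example as f_j(x) = cos(160ʲ x)/4ʲ is an assumption about the transcribed frequency, but the bound only uses |cos| ≤ 1, so the exact frequency is irrelevant.

```python
import math

# Weierstrass M-test, numerically: with |f_j(x)| <= 4^(-j) =: M_j, the tail
# bound gives, uniformly in x,
#   |S_m(x) - S_n(x)| <= sum_{j=n+1}^{m} 4^(-j) < 4^(-n) / 3.
def S(n, x):
    return sum(math.cos(160.0**j * x) / 4.0**j for j in range(1, n + 1))

n, m = 5, 12
grid = [k / 500.0 for k in range(501)]            # sample points in [0, 1]
worst = max(abs(S(m, x) - S(n, x)) for x in grid)
print(worst, "<=", 4.0**(-n) / 3)                  # uniform closeness holds
```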
So the picture is: we have x₀, with x₀ - ρ and x₀ + ρ marking the radius of convergence; if we take any closed interval strictly inside, then we have uniform convergence of the power series there. We'll prove this using the Weierstrass M-test. Let R be in (0, ρ). For all j ≥ 0 (the M-test was stated with j starting from 1, but just as for series, we don't have to start at 1; it can start at 0) and for all x in the interval [x₀ - R, x₀ + R] where we want to show uniform convergence, what do we have? The function a_j (x - x₀)ʲ, in absolute value, is bounded by |a_j| Rʲ, because x being in this interval means its distance to x₀ is at most R. So this is the M_j I'll use for the Weierstrass M-test. These individual functions, just monomials, are bounded by this number for each j; now I want to see whether the series of these numbers converges. So apply the root test: lim_{j→∞} (|a_j| Rʲ)^{1/j} = R · lim_{j→∞} |a_j|^{1/j} = R/ρ, where the R^{1/j} factor just becomes R and comes out of the limit (this is for ρ < ∞ and nonzero; if ρ = ∞ the limit is simply 0). And what do we notice? R is coming from (0, ρ), so R < ρ and this limit is always less than 1, which implies that Σ_{j=0}^∞ |a_j| Rʲ converges. So now we have the sequence of functions, the monomials we used to build up our power series, each bounded by the numbers |a_j| Rʲ, and these numbers are summable. By the Weierstrass M-test, the power series converges uniformly on this interval. So as long as we stay strictly inside the radius of convergence, we have uniform convergence of the power series, and therefore, using the theorems we proved before, we can differentiate and integrate the power series term by term. This follows immediately from the previous theorem and what we proved last time, which I'll state now. Take the power series Σ_{j=0}^∞ a_j (x - x₀)ʲ with radius of convergence ρ > 0. First: for all c in (x₀ - ρ, x₀ + ρ), the function given by the power series (which I'm not going to call f or anything; I'll just refer to the power series directly) is differentiable at c, and to compute the derivative you can just take it term by term, meaning you can move d/dx inside the sum. Second: for all a < b with x₀ - ρ < a and b < x₀ + ρ, so integrating over an interval strictly inside the radius of convergence, I can integrate term by term.
Explicitly, for part two: ∫_a^b Σ_{j=0}^∞ a_j (x - x₀)ʲ dx = Σ_{j=0}^∞ ∫_a^b a_j (x - x₀)ʲ dx. And for part one, to really emphasize that we're interchanging limits, let me write it as: d/dx of Σ_{j=0}^∞ a_j (x - x₀)ʲ, evaluated at x = c, equals Σ_{j=0}^∞ d/dx [a_j (x - x₀)ʲ] evaluated at x = c. So for power series I can interchange the limits: the derivative can be interchanged with the sum, and the integral can be interchanged with the sum, as long as I stay strictly within the radius of convergence; and the differentiation statement is pointwise at each c. So let's see why one is the case; two follows immediately from what we've proven, namely the previous theorem together with the theorem about integration and uniform convergence. What about one? First off, we know we have uniform convergence of the series strictly inside the radius of convergence; what we need to check is that the formally differentiated series has the same radius of convergence. Differentiating term by term gives Σ_{j=1}^∞ j a_j (x - x₀)^{j-1}, which, just shifting indices, equals Σ_{j=0}^∞ (j + 1) a_{j+1} (x - x₀)ʲ. So the claim is that this power series has radius of convergence equal to ρ, the radius of convergence of the original power series. Because then this series, the derivative of that one, converges uniformly on the same closed intervals; so we have uniform convergence of the power series and uniform convergence of its term-by-term derivative, and by the theorem we proved in the last lecture, the derivative of the power series equals the power series of the derivatives. So I'm just going to focus on this claim.
And this again just follows from what we know about limits. We compute, for the coefficients of the new power series: lim_{j→∞} ((j + 1)|a_{j+1}|)^{1/j} = lim_{j→∞} [ (j + 1)^{1/(j+1)} |a_{j+1}|^{1/(j+1)} ]^{(j+1)/j}. Now, (j + 1)^{1/(j+1)} → 1; that was a special limit we looked at way back when we were studying sequences. And |a_{j+1}|^{1/(j+1)} → ρ⁻¹, since by definition ρ⁻¹ = lim_{k→∞} |a_k|^{1/k}. Finally, the exponent (j + 1)/j → 1. Putting these together, the whole limit is (1 · ρ⁻¹)¹ = 1/ρ. So 1 over the radius of convergence of the differentiated power series is 1/ρ, which means the differentiated power series has radius of convergence ρ, the same as the original. All this to say: the formally differentiated power series has the same radius of convergence as the original, and therefore you have uniform convergence of the derivatives wherever you have uniform convergence of the original power series. By the theorem we proved last time, since both the power series and its term-by-term derivative converge uniformly on the same set, the infinite sum is actually differentiable, and the derivative of the power series is the power series of the derivatives. But you can iterate this, because the differentiated series is itself a power series with radius of convergence equal to that of the original; so take a derivative of that, and so on. Let me just leave this as a remark rather than a theorem: iterating, one can prove that for all x strictly inside the radius of convergence and all k = 1, 2, and so on, the k-th derivative of the power series equals the series obtained by differentiating term by term k times. In particular, evaluating at x₀, every term except j = k vanishes, and this tells us that k! a_k equals the k-th derivative at x₀ of the function the power series defines. You can interpret this, although we never called them Taylor series, as the statement that every power series is the Taylor series of a function, at least in the setting we're looking at. So we've answered pretty definitively, at least for the scope of this class, when we can interchange limits: we can do it as long as we have uniform convergence of the objects we're interested in, be it the continuous functions themselves, or a function and its derivative. But there are more powerful statements out there, especially when it comes to integration; this is essentially one reason a different theory of integration was created. Hopefully I'll see some of you in 18.102, which is what I'm teaching next semester, where we'll get into this further when we discuss Lebesgue integration. Lebesgue integration was thought of because, somehow, Riemann integration is not complete, the same way the rational numbers are not complete, while Lebesgue integration is complete in a certain sense; I'm being very vague here for a reason, just telling you where you can go with this. The question of when you can interchange two limits is really fundamental in the study of Lebesgue integration, and the theorems there are much more powerful than what we have here, which allows you to prove interesting results, especially about Fourier series. In fact, if I had more time, we could apply some of the things we've done here to the study of Fourier series; but maybe we'll do that in 18.102. So I want now to prove the last theorem of the class, which is also due to the godfather, Weierstrass.
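Term-by-term differentiation is easy to watch on a concrete series; here is a small sketch using the exponential series (so the differentiated series is again the exponential series), with the point c = 0.7 and the truncation at 30 terms as arbitrary assumptions.

```python
import math

# Term-by-term differentiation inside the radius of convergence:
# for exp(x) = sum x^j / j!, differentiating each term gives
# sum j x^(j-1) / j! = sum x^(j-1) / (j-1)!, the same series again.
c, N = 0.7, 30
deriv = sum(j * c**(j - 1) / math.factorial(j) for j in range(1, N + 1))
print(deriv, "vs exp(0.7) =", math.exp(c))   # both about 2.01375
```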
So power series define very special types of functions, what are called analytic functions. They are, by definition, essentially limits of polynomials: limits of the finite sums of a_j (x - x₀)ʲ, each of which is a polynomial. But analytic functions are a pretty small class. Weierstrass proved the very interesting result that something like this is true for all continuous functions: roughly speaking, every continuous function is, in some sense, almost a polynomial. Just as analytic functions are, within their radius of convergence, very close to polynomials, the same is true of every continuous function, not necessarily one equal to a power series. In what sense do I mean that? This is Weierstrass's approximation theorem, which states the following: if f is a continuous function on the unit interval [0, 1] (you can make it [a, b] just by rescaling the variable, but I'll state it for the unit interval), then there exists a sequence of polynomials p_n(x) such that p_n converges to f uniformly on [0, 1]. So in this sense, every continuous function is well approximated by polynomials; every continuous function is close to being a polynomial. Before the proof, let me make some remarks; we're putting the proof on hold for now. I don't need to prove this statement for every f, just for a certain class of f, and the general case will follow from the special one. We'll only consider the case f(0) = 0 and f(1) = 0, that is, f vanishing at the endpoints. Why we do this is so that we can extend f to a continuous function outside the unit interval by just setting it equal to 0. So suppose we've proven this special case, and look at the general case. Take any continuous f on [0, 1] and consider the small modification f̃(x) = f(x) - f(0) - x(f(1) - f(0)). This is a continuous function on [0, 1], because I'm just subtracting constants and a constant times x, and it vanishes at both endpoints: at 0 I get f(0) - f(0) - 0 = 0, and at 1 I get f(1) - f(0) - (f(1) - f(0)) = 0. So by the special case there exist polynomials p_n converging to f̃ uniformly, and therefore the polynomials p_n(x) + f(0) + x(f(1) - f(0)), which are still polynomials since adding a degree-one polynomial keeps them polynomials, converge to f uniformly. So the whole point of this remark is that we only need to consider the case f(0) = 0 and f(1) = 0, and we do that just so we can extend f continuously by zero outside the interval.
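One classical, fully explicit construction realizing this theorem is via Bernstein polynomials; note this is a different construction from the convolution kernels used in the proof below, included only as a numerical illustration, with f(x) = |x - 1/2| an assumed test function.

```python
import math

# Bernstein polynomials: B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)
# converge uniformly to f on [0, 1] for continuous f.
def bernstein(f, n, x):
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)          # continuous but not analytic at 1/2
for n in [10, 100, 400]:
    err = max(abs(bernstein(f, n, k / 200) - f(k / 200)) for k in range(201))
    print(n, err)                    # the uniform error shrinks as n grows
```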
Now, the way we're going to build these polynomials that converge to f is by what's called an approximation to the identity -- really, you could think of it as an approximation to the Dirac delta function, and I'll explain that in just a minute. For each natural number n, define c sub n to be 1 over the integral from minus 1 to 1 of 1 minus x squared raised to the n, dx, and then define q sub n of x to be c sub n times 1 minus x squared raised to the n. Note first that c sub n makes sense and is positive: the integrand is non-negative, and it's positive at plenty of places between minus 1 and 1, and you proved in the homework that this means the integral has to be positive. A few simple consequences follow. One: for all natural numbers n and for all x in minus 1, 1, q sub n of x is greater than or equal to 0 -- it's the positive number c sub n times 1 minus x squared to the n, and since x is between minus 1 and 1, 1 minus x squared is non-negative. Two -- this is also pretty clear -- the integral from minus 1 to 1 of q sub n of x, dx, equals 1. Why? Because that integral is c sub n times the integral of 1 minus x squared raised to the n, and c sub n is the inverse of that integral, so we just pick up 1. The third and less trivial property, which is the important one: for all delta in 0, 1, the polynomial q sub n converges to 0 uniformly on the set of x such that delta is less than or equal to the absolute value of x, which is less than or equal to 1. In other words -- here's minus 1, 1, 0, delta, minus delta -- on the union of the two intervals away from 0, q sub n converges to 0 uniformly as n goes to infinity. So what is the picture of what's going on? What do these q sub n's look like? The first one looks like some constant times 1 minus x squared. As n keeps getting bigger, q sub n vanishes to higher and higher order at 1 and minus 1 and gets pretty small near the endpoints, while concentrating near 0. And according to property three, if I take any interval around 0 and look outside of it, q sub n goes to 0 uniformly there as n goes to infinity. So you should think of the q sub n's, as n goes to infinity, as something like a Dirac delta function -- which is not exactly a function. Let me re-emphasize, and this is in quotes: q sub n "acts like a delta function centered at x equals 0." That's in quotes because, as stated, it is meaningless. But those of you who've taken physics know the properties of a delta function: its integral is 1, it's somehow 0 everywhere away from the origin, and infinite there, yet with integral 1. These q sub n's form an approximation to the identity in a certain sense. Let's prove the only non-trivial property, number three; one and two are clear from the definitions. So let's first estimate how big c sub n is.
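If you want to see these kernels concretely, here is a minimal numerical sketch (my own illustration, not from the lecture; the function and variable names are mine, and the constants c sub n are computed by quadrature rather than estimated):

```python
import numpy as np

# A minimal numerical sketch of the kernels q_n(x) = c_n (1 - x^2)^n.
def q(n, x):
    grid = np.linspace(-1.0, 1.0, 200001)
    c_n = 1.0 / np.trapz((1.0 - grid**2) ** n, grid)  # normalizing constant
    return c_n * (1.0 - x**2) ** n

for n in [1, 10, 100, 1000]:
    x = np.linspace(-1.0, 1.0, 200001)
    total = np.trapz(q(n, x), x)   # property 2: the integral is 1
    tail = q(n, 0.5)               # property 3: small once |x| >= delta
    print(f"n={n:5d}  integral={total:.4f}  q_n(0.5)={tail:.2e}")
```

Running this, the integral stays at 1 while q_n(0.5) collapses to 0 as n grows, which is exactly properties two and three.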
So this constant c sub n -- we don't really know what it is explicitly, but let's compute at least a rough bound on its size, and then we'll use this to prove the third property. For all natural numbers n and all x in minus 1, 1, we have the inequality: 1 minus x squared raised to the n is greater than or equal to 1 minus n x squared. Now, it shouldn't hit you in the face why this is true, but one way to prove it is to look at the function g of x equals 1 minus x squared to the n, minus the quantity 1 minus n x squared. First off, the inequality is even in x -- it doesn't matter if x is negative or positive -- so it suffices to look on 0, 1. What's the point? g of 0 is 0. And if I compute g prime of x, it equals 2nx times the quantity 1 minus, 1 minus x squared to the n minus 1. The thing being subtracted in parentheses is always less than or equal to 1, so g prime is greater than or equal to 0 on 0, 1. So on 0, 1 this function is increasing, and its value at 0 is 0, so g is non-negative there -- which is exactly what we wanted to prove. So now we estimate the size of c sub n. If I want an upper bound on c sub n, I need to prove a lower bound on 1 over c sub n. So take 1 over c sub n: this is the integral from minus 1 to 1 of 1 minus x squared raised to the n, dx. Now, I can't remember if I made this a homework problem, but for even functions, integrating over an interval symmetric about the origin gives twice the integral from 0 -- I'm sure you remember this from calculus, and it's not hard to prove with what we know of the change of variables formula -- so this equals 2 times the integral from 0 to 1 of 1 minus x squared raised to the n. This is bigger than or equal to 2 times the integral only up to a certain point, with that point chosen so that the result in the end comes out pretty: the integral from 0 to 1 over root n. And here is where I replace the integrand by the smaller thing, which is easy to integrate: this is greater than or equal to 2 times the integral from 0 to 1 over root n of 1 minus n x squared, dx. I leave it to you to verify that with this choice of endpoint, what you get is 4 over 3 root n, which is bigger than 1 over root n. So I started with 1 over c sub n and showed it is bigger than 1 over root n, and therefore c sub n is less than the square root of n. Really, all we need is that c sub n is bounded by some power of n, but this will suffice. So now fix delta in 0, 1; we want to show that q sub n converges uniformly to 0 on the set where delta is less than or equal to the absolute value of x, which is less than or equal to 1. Note that the following sequence converges to 0: the square root of n times 1 minus delta squared raised to the n. Intuitively, why is this? Because this is some number less than 1 raised to the n-th power, and an exponential always beats a polynomial or a power of n. If you want to see exactly why, compute the limit as n goes to infinity of this sequence raised to the 1 over n power: that's the limit of n to the 1 over 2n, times 1 minus delta squared, and n to the 1 over n converges to 1 -- we proved that.
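Carrying out the integral with that choice of endpoint:

\[
\frac{1}{c_n} \;=\; \int_{-1}^{1} (1 - x^2)^n \, dx \;\ge\; 2 \int_0^{1/\sqrt{n}} (1 - n x^2)\, dx \;=\; 2 \Big[\, x - \tfrac{n x^3}{3} \,\Big]_0^{1/\sqrt{n}} \;=\; \frac{4}{3\sqrt{n}} \;>\; \frac{1}{\sqrt{n}},
\]

hence \(c_n < \sqrt{n}\).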
And so the limit of the n-th roots equals 1 minus delta squared, which is less than 1. We have a theorem from our section on sequences and series that says if this limit is less than 1, then the terms converge to 0 -- or you could interpret it as saying that the series with these terms converges, and therefore the individual terms have to converge to 0. So the limit as n goes to infinity of square root of n times 1 minus delta squared to the n equals 0. Once we have this, we have what we want. We want to prove uniform convergence on that set, so let epsilon be positive. Since this sequence converges to 0, there exists a natural number M so that for all n bigger than or equal to M, the square root of n times 1 minus delta squared raised to the n is less than epsilon. Then for all n bigger than or equal to M, and for all x with delta less than or equal to the absolute value of x less than or equal to 1, the absolute value of q sub n of x minus 0 is just q sub n of x, since q sub n is non-negative, and that is c sub n times 1 minus x squared to the n. This is less than or equal to the square root of n -- that's the bound on c sub n -- times 1 minus delta squared to the n, since 1 minus x squared gets smaller as x gets closer to 1, so it's biggest when the absolute value of x equals delta. And by how we chose M, this is less than epsilon. So now we're ready for the proof of the Weierstrass approximation theorem. Suppose f is a continuous function on 0, 1 with f of 0 equals 0 and f of 1 equals 0. You can check that since it's 0 at the endpoints, if I extend it to be 0 outside of 0, 1, it is still a continuous function on the real number line -- I'm doing this because I want to write certain integrals in a little bit without specifying exactly where the bounds of integration are. So we extend f by 0 outside of 0, 1. Now we define the sequence of polynomials: p sub n of x equals the integral from 0 to 1 of f of t times q sub n of t minus x, dt. Just to remind you, this equals the integral from 0 to 1 of f of t times c sub n times 1 minus the quantity t minus x, squared, raised to the n, dt. Why is this a polynomial in x? If you expand everything out using the binomial theorem, 1 minus t minus x squared to the n becomes a finite sum of terms: numbers a sub j k times x to the j times t to the k, and this gets integrated against f of t dt. Let me write it out precisely, just so you're convinced. Expanding, p sub n of x equals the integral from 0 to 1 of f of t times c sub n times the sum over j from 0 to n of n choose j, times minus 1 to the j, times t minus x raised to the 2j, dt. And then, expanding t minus x to the 2j with the binomial theorem again, we get the sum over k from 0 to 2j of 2j choose k, times minus t to the k, times x to the 2j minus k. So I have all this junk integrated dt, and each factor x to the 2j minus k just pops out of the integral. So this is a polynomial.
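Here is a minimal numerical sketch of the construction (my own illustration; the test function f(x) = x(1 - x) is a hypothetical choice, not from the lecture, picked because it vanishes at the endpoints as the reduction requires):

```python
import numpy as np

# Sketch of the Weierstrass construction
#   p_n(x) = int_0^1 f(t) q_n(t - x) dt,  q_n(s) = c_n (1 - s^2)^n.
f = lambda x: x * (1.0 - x)

def p(n, x):
    s = np.linspace(-1.0, 1.0, 20001)
    c_n = 1.0 / np.trapz((1.0 - s**2) ** n, s)
    t = np.linspace(0.0, 1.0, 20001)
    # for each x, integrate f(t) q_n(t - x) over t in [0, 1]
    vals = f(t)[None, :] * c_n * (1.0 - (t[None, :] - x[:, None]) ** 2) ** n
    return np.trapz(vals, t, axis=1)

x = np.linspace(0.0, 1.0, 101)
for n in [10, 100, 1000]:
    print(f"n={n:5d}  sup|p_n - f| = {np.max(np.abs(p(n, x) - f(x))):.4f}")
```

The printed sup norm of p_n minus f shrinks as n grows, which is the uniform convergence the theorem asserts -- though slowly; nothing in the proof promises a good rate.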
But then, when this integrates against f of t, each x to the 2j minus k comes out of the integral, and p sub n becomes a finite sum of terms: x to the 2j minus k, times the integral of f of t against the remaining powers of t. I said more than I needed to there, but the point is: p sub n of x is a polynomial. Now, we can write it slightly differently. Doing the u-substitution u equals t minus x -- and then just calling the new variable t again -- p sub n of x equals the integral from minus x to 1 minus x of f of x plus t times q sub n of t, dt. But f of x plus t is 0 for t not in minus x to 1 minus x -- again, with the understanding that we've extended f by 0 outside of 0, 1 -- so I can extend the integration to run from minus 1 to 1: p sub n of x equals the integral from minus 1 to 1 of f of x plus t times q sub n of t, dt. That was a lot of explaining, so let me get to the point: what is p sub n really, and why should you expect it to converge to f? This is a little discussion, for those who understood my comment about delta functions; if you've never heard of a delta function, forget the rest of this remark. q sub n is very concentrated at t equals 0, so you should think of it as being approximately a delta function in t. Therefore the integral from minus 1 to 1 of f of x plus t times q sub n of t dt should look more and more like the integral of f of x plus t against delta of t, dt. And what we know about Dirac delta functions is that when you integrate them against a function, you just pick up the function evaluated at t equals 0 -- here, f of x. If you look back at the picture of what the q sub n's look like, they are concentrating more and more at 0, so all the contribution to the integral defining these polynomials happens near t equals 0, and at t equals 0 I just pick up f of x. That's why you should expect these polynomials to converge back to the function. And why f of x exactly, not some constant times f of x? Because the integral of the q sub n's is 1. So I think I can finish on this board. This is the picture you should have seared in your mind: the q sub n's concentrate all of their mass right at the origin. So now let's prove that the p sub n's converge to f uniformly. Let epsilon be positive. Since f is a continuous function on a closed and bounded interval, we know it is uniformly continuous, and therefore there exists delta positive such that for all z and y with the absolute value of z minus y less than delta, the absolute value of f of z minus f of y is less than epsilon over 2.
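In symbols, the heuristic is the following (the middle expression is purely formal, which is why it's in quotes in the lecture):

\[
p_n(x) \;=\; \int_{-1}^{1} f(x+t)\, q_n(t)\, dt \;\;\text{``}\approx\text{''}\;\; \int_{-1}^{1} f(x+t)\, \delta(t)\, dt \;=\; f(x).
\]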
So we know that f is uniformly continuous, and there exists a delta so that whenever z and y are within delta of each other, f of z and f of y are within epsilon over 2 of each other -- no matter what z and y are. Also, since f is a continuous function on a closed and bounded interval, it has both a max and a min there, so there exists a number c such that the absolute value of f of x is bounded by c for all x in 0, 1 -- and hence for all real x, since f is 0 outside. So I have this delta coming from the uniform continuity of f, and this c coming from the fact that f is bounded, and now I choose my M for the uniform convergence of the polynomials depending on these pieces of input. As we showed, the square root of n times 1 minus delta squared to the n converges to 0, which implies there exists a natural number M so that for all n bigger than or equal to M, the square root of n times 1 minus delta squared raised to the n is less than epsilon over 8c. I claim this M works: for all n bigger than or equal to M and all x in 0, 1, the absolute value of p sub n of x minus f of x is less than epsilon. So take n bigger than or equal to M and x in 0, 1, and look at p sub n of x minus f of x; we're going to use that q sub n is an approximation to the identity, meaning it satisfies the three properties I wrote a minute ago. This equals the integral from minus 1 to 1 of f of x plus t times q sub n of t dt, minus f of x. Now, what's the integral from minus 1 to 1 of q sub n of t? It's 1. All of analysis is writing 1 or 0 in a certain way -- not all, but a lot of it -- so I can write f of x as the integral of f of x times q sub n of t, dt, and the difference becomes the integral from minus 1 to 1 of the quantity f of x plus t minus f of x, times q sub n of t, dt. By the triangle inequality for integrals, the absolute value of this is less than or equal to the integral from minus 1 to 1 of the absolute value of f of x plus t minus f of x, times the absolute value of q sub n of t -- and since q sub n is non-negative, that's just q sub n of t. Now I split this integral into two parts. The first part is where t is between minus delta and delta: the integral over minus delta, delta of absolute value of f of x plus t minus f of x, times q sub n of t, dt. The second part is over the union of the two intervals from delta out to 1, which I'll write as a single integral over the set delta less than or equal to absolute value of t less than or equal to 1. For the first part: if t is in that interval, then x plus t and x differ by the absolute value of t, which is less than delta, so by the uniform continuity part, the absolute value of f of x plus t minus f of x is less than epsilon over 2, and the first part is at most the integral of epsilon over 2 times q sub n of t, dt. For the second part: by the triangle inequality, the absolute value of the difference is less than or equal to the sum of the absolute values, each of which is bounded by c, so it's bounded by 2c.
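Collecting what we have so far, the split reads:

\[
|p_n(x) - f(x)| \;\le\; \int_{-\delta}^{\delta} |f(x+t) - f(x)|\, q_n(t)\, dt \;+\; \int_{\delta \le |t| \le 1} |f(x+t) - f(x)|\, q_n(t)\, dt \;\le\; \frac{\varepsilon}{2}\int_{-\delta}^{\delta} q_n(t)\, dt \;+\; 2c \int_{\delta \le |t| \le 1} q_n(t)\, dt.
\]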
Now, the first part is less than epsilon over 2 times the integral from minus 1 to 1 of q sub n -- enlarging the region of integration only helps -- and that integral is just 1. For the second part, on the set delta less than or equal to absolute value of t less than or equal to 1, q sub n of t equals c sub n times 1 minus t squared to the n, which is at most c sub n times 1 minus delta squared raised to the n -- a constant, with no t left in it. So the second part is at most 2c times c sub n times 1 minus delta squared to the n, times the integral over that region of dt, which I can make bigger by integrating from minus 1 to 1, giving 2. Remembering that c sub n is less than the square root of n, the whole thing is at most epsilon over 2, plus 4c times the square root of n times 1 minus delta squared to the n. And by the choice of M, this is less than epsilon over 2 plus epsilon over 2, which equals epsilon. I think I have one minute to spare. So this was quite an experience, teaching to an empty room. I hope you did get something out of this class. Unfortunately, I wasn't able to meet a lot of you, and that's one of the best parts about teaching -- being able to see you grasp in real time what I'm talking about. So hopefully this nightmare will end soon, and we'll get to see each other in the future.
MIT_18100A_Real_Analysis_Fall_2020
Lecture_4_The_Characterization_of_the_Real_Numbers.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so let's finish the proof that we started at the end of the lecture last time. I usually don't end lectures in the middle of a proof, but since this lecture and the lecture before will come to you at the same time, I didn't feel so bad about it. We were trying to prove the theorem that if there's a rational number x equal to the supremum of the set E, then x is bigger than or equal to 1 and x squared equals 2. So far, we've shown that if a rational number x is the supremum of the set, then x is bigger than or equal to 1 and x squared has to be bigger than or equal to 2. Now we'd like to show that, in fact, x squared equals 2. Since x squared is bigger than or equal to 2, it's either equal to 2 or bigger than 2, and we want to show the second possibility cannot happen -- by contradiction. So let's assume the thing we say cannot happen does happen, and derive a false statement. Define h to be x squared minus 2, over 2x. Note a couple of things: since x squared is bigger than 2, h is bigger than 0, which implies x minus h is less than x. So what is the contradiction going to be? It's going to be that this element x minus h is, in fact, an upper bound for the set E, which contradicts the fact that x is the supremum -- the least upper bound -- of E: no upper bound for E can be less than x. So we now prove that x minus h is an upper bound for E. Take a look at x minus h, squared: this equals x squared minus 2xh plus h squared. Plugging in what h is, 2xh equals x squared minus 2, so this equals x squared minus the quantity x squared minus 2, plus h squared, which equals 2 plus h squared. Now remember, h is a positive number, so 2 plus a positive number is bigger than 2. So x minus h, squared, is bigger than 2. Now I use this to show that x minus h is an upper bound for E. Let q be in E, so q squared is less than 2. Then q squared is less than 2, which is less than x minus h squared, by what we just proved. So 0 is less than x minus h squared minus q squared. This is a difference of two squares, so 0 is less than x minus h plus q, times x minus h minus q. Now, writing out what x minus h is using the definition of h, x minus h equals x squared plus 2, over 2x, so the first factor is x squared plus 2 over 2x, plus q. This is a positive number, since x and q are positive. So I have something positive times the second factor, and I can divide by the positive thing and keep the inequality: 0 is less than x minus h minus q, which implies q is less than x minus h. So we started off with an arbitrary element q of E, and we proved that q is less than x minus h.
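Written out, the key computation is:

\[
h = \frac{x^2 - 2}{2x} > 0, \qquad (x - h)^2 = x^2 - 2xh + h^2 = x^2 - (x^2 - 2) + h^2 = 2 + h^2 > 2.
\]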
Thus, for all q in E, q is less than x minus h, which implies that x minus h is an upper bound for E. Let me keep that board while I write the last sentence or so of this proof. We've shown that x minus h is an upper bound for E. Since x is the supremum of E, it must be less than or equal to all upper bounds, so x is less than or equal to x minus h, which implies h is less than or equal to 0. But this is a direct contradiction to h being a positive number -- that's the false statement we ultimately arrive at. The idea is that if x squared is bigger than 2, you can find an upper bound which is strictly smaller than x, contradicting the fact that x is the least upper bound for E. And remember what our original assumption was in this line of reasoning: that x squared is bigger than 2. That concludes the proof. So this was a statement saying: if some rational number equals the supremum of this set, then its square has to be 2. Now I'm going to prove that no such element exists, and therefore that the rational numbers do not have the least upper bound property, which was the discussion circled on the board up there. The theorem we're going to prove: the set E as before -- the set of rational q such that q is bigger than 0 and q squared is less than 2 -- is bounded above and has no supremum in Q. Note the distinction: this set E sits inside my ordered set, my universe Q. I'm not saying it has no supremum in E; I'm saying it has no supremum in the universe where it sits. And therefore the rational numbers do not have the least upper bound property, because E is a set which is bounded above and has no supremum in Q. First I want to show that E is bounded above. Let q be in E. Then q squared is less than 2, which is less than 4, so 4 minus q squared is positive, which tells me 2 minus q times 2 plus q is positive. Since 2 is positive and q is positive, 2 plus q is positive, so I can divide both sides of the inequality by 2 plus q without changing it, which gives q less than 2. Thus for all q in E, q is less than 2, so 2 is an upper bound for the set E. So now I'm going to prove that no supremum exists, using the previous theorem, which says that if there is a supremum, its square must be 2. And we do this by contradiction. Now, I know it seems like a lot of the proofs we're doing are by contradiction. This should not give you the impression that all proofs should be done by contradiction -- many proofs you'll do in the homework can be done directly. For example, the first little proof that E is bounded above is a direct proof: I did not assume it was unbounded and arrive at a contradiction. Proof by contradiction is very tempting because it at least gets you going somewhere: if you assume not, you get to a next step, the assumption implies something else, and hopefully you keep going until you arrive at something false. But many theorems can be proved directly. Not this one, though. Now we show the supremum of the set does not exist.
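The boundedness step, in one line:

\[
q \in E \implies q^2 < 2 < 4 \implies 0 < 4 - q^2 = (2 - q)(2 + q) \implies 2 - q > 0,
\]

since \(2 + q > 0\).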
So, by contradiction: we will assume that such a supremum does exist. Suppose the supremum exists and call it x. By the previous theorem, since x is a rational number which is the supremum of the set, x is bigger than or equal to 1 and x squared equals 2. In fact, x has to be bigger than 1: it cannot equal 1, because then its square would be 1, not 2. So x is, in fact, bigger than 1, not just bigger than or equal to. Thus, since x is rational and bigger than 1, there exist natural numbers m and n with m bigger than n such that x equals m over n. Thus there exists an n such that n times x is a natural number -- just multiplying through by n, n times x equals m, a natural number. So let S be the set of natural numbers k such that k times x is a natural number. What we've shown, based on our assumption, is that S is non-empty, because n is in S. Now, S is a subset of the natural numbers, so by the well-ordering property of N, S has a least element, which we'll call k0. What I'm going to show is that k0 is not, in fact, the least element of S, which will contradict exactly how it's defined. Define k1 to be k0 times x, minus k0. Note this is an integer: k0 is in S, so k0 times x is a natural number; k0 is a natural number; and the difference of two integers is again an integer. But in fact it's a natural number, since x is bigger than 1, so k0 times x is bigger than k0 and the difference is positive. So far, we have that k1 is a natural number; now I'll prove that k1 is less than k0. Since x squared equals 2, 4 minus x squared equals 2, which is positive, so 2 minus x times 2 plus x is positive. 2 and x are positive, so I can divide through by 2 plus x and maintain the inequality, which gives 2 minus x positive, and therefore x is less than 2. Then k1, which remember is k0 times the quantity x minus 1, is less than k0 times the quantity 2 minus 1, which equals k0. So what we've shown so far is that this number k1 is a natural number and it's less than k0. Now, remember what k0 was supposed to be: the smallest natural number so that when I multiply it by x, I get another natural number. And from this k0 I've constructed a new natural number k1 which is less than k0. But here comes the fun part. If we compute x times k1: by definition, this is x times the quantity x k0 minus k0, which equals x squared k0 minus x times k0, which equals 2 k0 minus x times k0, because x squared equals 2. Now, 2 k0 is a natural number, and x times k0 is a natural number.
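In compact form, the descent step is:

\[
k_1 = k_0 x - k_0 = k_0(x - 1) \in \mathbb{N}, \qquad 1 < x < 2 \implies 0 < k_1 < k_0, \qquad x\,k_1 = x^2 k_0 - x\,k_0 = 2k_0 - x\,k_0 \in \mathbb{N},
\]

so \(k_1 \in S\), contradicting the minimality of \(k_0\).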
That's a natural number too, and 2 k0 is bigger than x k0 -- since x is less than 2 -- so their difference is again a natural number. So x times k1 is a natural number. Thus k1 is in S, and k1 is less than k0, which implies k0 is not the least element of S -- a contradiction to the fact that it is the least element of S. So what we've shown is: assuming this set E has a supremum in the rational numbers, we arrive at a contradiction, so our original assumption must be false. Thus sup E does not exist. Now, this proof is maybe a little different: if you've seen a proof that the square root of 2 is not a rational number, there's a different proof that's maybe a little simpler. But this one uses ordering instead, which I thought was pretty cool, and it's originally due to Dedekind. OK, so we've discussed one aspect of the real numbers from the theorem I stated earlier, which I'll restate in a minute -- namely that R is a set that has the least upper bound property. I stated that as a theorem; I'm not going to prove it. We just want to understand exactly what sets the real numbers apart, and there were two things: one, R is an ordered field, and two, it has the least upper bound property. We've now discussed what the least upper bound property means, so we just need to fill in the other part about the real numbers, the fact that R is an ordered field. As I've said before, Q is also an ordered field; R is not special in that respect. But as the theorem stated in the middle of the last lecture says, R is, in fact, the unique ordered field with the least upper bound property. Remember, we've just proven that Q does not have the least upper bound property -- in some sense, Q is missing stuff. It's missing the square root of 2, which is kind of an algebraic thing: I can't solve the equation x squared minus 2 equals 0 in the rational numbers. And this algebraic defect manifests itself in the fact that Q is also missing things with respect to its order. Q, in short, has holes, and R does not. That is probably the most basic statement one can make about R -- perhaps you heard it in high school calculus and you're hearing it repeated now -- but it means something very specific: R has the least upper bound property and Q does not. All right, so let me talk about what ordered fields are. First, I need to define what a field is. A set F is a field if it has two operations, plus and times -- addition and multiplication -- such that a list of properties holds with respect to these operations. For addition: A1, closure -- the set is closed with respect to taking sums: if x, y are in F, then x plus y is in F. A2, commutativity: for all x, y in F, x plus y equals y plus x. A3, associativity: for all x, y, z in F, adding x and y and then adding z is equal to adding x to the sum of y and z. And A4: we have what's called an identity element, an additive identity element.
There exists an element, which I'll label 0, in the set F, such that for all x in F, 0 plus x equals x. And A5: we also have additive inverses, namely for all x in F there exists an element, which I label minus x, in F, such that x plus minus x equals 0. So these are the conditions on addition that should be satisfied for a field. The conditions for multiplication are similar. M1: F is closed with respect to multiplication -- if x, y are in F, then x times y is in F. M2: multiplication is commutative -- for all x, y in F, x times y equals y times x. M3: associativity -- for all x, y, z in F, multiplying x times y and then by z is the same as multiplying x by the product y times z. M4: the existence of a multiplicative identity -- there exists an element, which I label 1, in F, such that for all x in F, 1 times x equals x. And M5: multiplicative inverses for non-zero elements -- for all x in F with x not equal to 0, there exists an element, which I call x to the minus 1, in F, such that x times x to the minus 1 equals 1. Those are statements about the two operations separately; there's one last assumption in the definition that connects the two, the distributive law: for all x, y, z in F, x plus y, times z, equals x times z plus y times z. So a field is a set with two operations, plus and times, and these two operations need to satisfy all of these conditions for the set to be called a field. The clearest example is, of course, the rational numbers, with plus and multiplication defined as you learned as a child. Now, what's a non-example? The integers. The integers come with plus and multiplication; however, they don't satisfy M5, the existence of multiplicative inverses, though they do satisfy everything else. Typically, one calls Z a commutative ring -- commutative because its multiplication is also commutative. A ring in general does not have to have commutative multiplication; for example, the set of 2 by 2 matrices forms a ring. What's another example of a field? Maybe I should have given this one first: Z2, the simplest field there is. It's just a set of two elements, 0 and 1, where I need to define what 1 plus 1 is: 1 plus 1 is defined to be 0. And that's it -- 0 times 0 is 0, 0 times 1 is 0, 0 plus 0 is 0 -- and that gives you all the rules you need to define multiplication and addition on this set of two elements. This is a field, because what is the multiplicative inverse of 1? It's just 1. And you can check that defining plus and multiplication this way satisfies all the conditions of being a field. A more non-trivial example would be Z3, the set 0, 1, 2, only now the arithmetic is done mod 3 -- here it was mod 2 -- meaning if I want to add two elements, I add them and then take the remainder of that sum after dividing by 3. So addition is defined mod 3.
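As a quick illustration of finite-field arithmetic, here is a minimal sketch of Z_p in code (my own illustration, not from the lecture; the class and method names are mine, the inverse uses Fermat's little theorem, which the lecture doesn't mention, and no check that p is prime is performed):

```python
# A minimal sketch of the finite field Z_p, p prime.
class Zp:
    def __init__(self, value, p):
        self.p = p
        self.value = value % p

    def __add__(self, other):
        return Zp(self.value + other.value, self.p)   # addition mod p

    def __mul__(self, other):
        return Zp(self.value * other.value, self.p)   # multiplication mod p

    def inverse(self):
        # multiplicative inverse of a nonzero element:
        # for p prime, x^(p-2) = x^(-1) mod p by Fermat's little theorem
        assert self.value != 0
        return Zp(pow(self.value, self.p - 2, self.p), self.p)

    def __repr__(self):
        return f"{self.value} (mod {self.p})"

two = Zp(2, 3)
print(two + Zp(1, 3))   # 1 + 2 = 0 in Z_3
print(two * two)        # 2 * 2 = 1 in Z_3, so 2 is its own inverse
print(two.inverse())    # 2 (mod 3)
```

With p equal to 3, this reproduces the computations we're about to do by hand: 1 plus 2 equals 0, and 2 times 2 equals 1.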
So if something is a multiple of 3, I equate it to 0. For example, 1 plus 2 gives me 3, a multiple of 3, so it's defined to be 0. 2 times 2 equals 4, which equals 3 plus 1, and multiples of 3 are 0, so that gives me 1. In particular, this tells me that in this field, 2 times 2 equals 1, so the multiplicative inverse of 2 is 2 itself. And in general, a plus b and a times b are computed mod 3. In fact, if you take the integers mod p where p is a prime, you get another field, a finite field. These are examples of what are called finite fields, because, exactly as the name says, they are fields and they have a finite number of elements. Now, based on just these assumptions that a field satisfies, you can prove all of the simple algebra statements you've ever known, simply from the axioms. Let me give you the silliest example of a statement you can prove just using these axioms: for all x in F -- F a field throughout -- 0 times x is 0. All we know about 0 is that when you add it to x, you get x back. By the way, when I first wrote that axiom on the board, I wrote that 0 plus x equals 0 -- that should have been x. Let me make sure there are no other errors; I don't think there are. That's another danger about not being able to lecture in person: errors on the board persist. OK, so let's prove this blockbuster theorem just using the axioms. If x is in F, then 0 is equal to 0 times x, plus the additive inverse of 0 times x -- that's just the definition of the additive inverse; every element has one. Next, since 0 equals 0 plus 0, we have 0 times x equals 0 plus 0, times x, which by the distributive law is 0 times x plus 0 times x. Substituting, 0 equals 0 times x, plus 0 times x, plus the additive inverse of 0 times x -- and that last pair cancels, leaving 0 times x. So I started off with 0 and arrived at 0 times x. And you can prove other simple algebraic statements using these axioms as well; I think I'll put some in the assignment. I'll just state one: minus x, the additive inverse of x, is equal to minus 1 times x. I mean, it's not terribly interesting at the start -- there are some very deep theorems in algebra that you learn at another point in your life, but that won't be in this class. Actually, today is probably all we're going to talk about fields, which are algebraic things. Algebraic things are, to me, nice because you always deal with equality -- and how hard could it be to prove two things are equal to each other? Yet analysis deals a lot with inequality, which is somehow much more subtle. But that's just a little bias. OK, so that's what a field is: a set that has these two operations. What is an ordered field? It's a field, first off, which is also an ordered set -- but you can't just have two different structures on your field that don't interact, for that to be interesting -- such that the algebraic structure and the order are cohabitating nicely, meaning you have two conditions for all x, y, z in F, which I'll state now.
Condition one: if x is less than y, then x plus z is less than y plus z. Condition two: if x is bigger than 0 and y is bigger than 0, then x times y is bigger than 0. Let me fix the terminology I've been using: for an ordered field F, if an element is bigger than 0, we call it positive; if it's bigger than or equal to 0, we call it non-negative; and likewise with negative and non-positive. The most basic example, again, is Q: Q is an ordered field with the usual order and the usual algebraic structure. What is not an example is either of the two fields I wrote down just a minute ago. A non-example is this field here, with elements 0 and 1. If I put an order on it, either 0 is less than 1, or 1 is less than 0 -- remember, the order does not have to correspond to the order you associate with the symbols 0 and 1 in the integers; these are just two elements of a set, and an order says either that element is less than this one, or vice versa. So consider both cases, and suppose the order turns this set into an ordered field. First case: 0 is less than 1. Then what happens if I add 1 to each side? By the definition of addition on this set, 1 plus 0 gives me 1, and 1 plus 1 gives me 0 -- so condition one would force 1 less than 0, which contradicts our assumption that 0 is less than 1. So condition one does not hold for this order. And by the same logic, neither does it hold if we choose 1 less than 0. So this field cannot be turned into an ordered field. Essentially the same argument shows that you cannot have any finite ordered field: in general, there are no finite ordered fields. Now, just like we proved the blockbuster statement that 0 times anything in the field equals 0, we can also prove all of the manipulations of inequalities that you use without fear, simply from the axioms -- an ordered field being a field that also satisfies these two conditions. For example, if F is an ordered field and x is an element of F with x positive, then its additive inverse is negative, and vice versa: if x is less than 0, then minus x is positive. I know it's tempting to think, well yeah, just multiply the inequality by minus 1 to get the other one -- but remember, this is really a statement about the additive inverse of x. The proof is not hard. If x is positive, I can write it as 0 is less than x. Then by condition one, I can add anything to both sides and preserve the inequality: minus x plus 0 is less than minus x plus x. By the fact that 0 is the additive identity, A4, the left side is minus x; and by the definition of the additive inverse, A5, the right side is 0. And that's it.
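For reference, here are those two little computations in symbols -- the 0 times x proof and the additive inverse one:

\[
0 = 0\cdot x + \big({-(0\cdot x)}\big) = (0+0)\cdot x + \big({-(0\cdot x)}\big) = 0\cdot x + \Big(0\cdot x + \big({-(0\cdot x)}\big)\Big) = 0\cdot x + 0 = 0\cdot x,
\]
\[
0 < x \;\implies\; -x = -x + 0 < -x + x = 0.
\]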
So what's going on in those steps: adding to both sides is by condition one in the definition of an ordered field, and the identifications are by A4 and A5. I'm not going to prove the converse statement; it's essentially the same proof -- add minus x to both sides, which I can do by condition one, and then use A4 and A5 to conclude that minus x is positive. Let me just say: see Proposition 1.1.8 in the textbook for the proofs of the other standard inequality manipulations. So Q is an ordered field, and as I stated before about R, R is also an ordered field, but it has the least upper bound property. So maybe you're asking yourself: how about some sort of greatest lower bound property? The least upper bound property is about sups; can we make a similar property about infs? Are there sets with the least upper bound property that don't have a greatest lower bound property -- where something has the greatest lower bound property if every non-empty subset which is bounded below has an infimum? That's what I'm referring to, even though I haven't written it down. What is this leading up to? In the setting of ordered fields, there really is no difference between a least upper bound property and a greatest lower bound property: if I have an ordered field which satisfies the least upper bound property, then it also satisfies the greatest lower bound property, which I'll state as a theorem and then prove. Let F be an ordered field with the least upper bound property. Then if A is a subset of F which is non-empty and bounded below, inf A exists in F, meaning A has an infimum in the set F. The proof is, in some sense, similar to when we proved that the set of all minus 1, minus 2, minus 3, and so on has a supremum: we took its negation and used, in a sense, the greatest lower bound property of the natural numbers -- the well-ordering principle -- to conclude the least upper bound property for that set. That's what we're going to do here: take a set which is bounded below, take its negation, which is now bounded above, take the sup of that set, which we can do, and show that its negation is the infimum of the original set. Let me write over here -- this is not part of the proof, but some intuition. I'll draw F like a real number line, which it is, because part of the statement about the real numbers is that R is the unique ordered field with the least upper bound property -- but don't worry about that for now. Imagine we have a set A, drawn for now as an interval which is bounded below: it stops after some point and there's nothing below it, with a lower bound b. If I look at minus A -- the set of additive inverses of elements of A, with 0 marked here -- I now have a set which is bounded above: if b is a lower bound for A, then minus b is an upper bound for minus A. Therefore minus A has a least upper bound, which in the picture I'm drawing is x. My goal is then to show that minus x is the infimum of A. So that's the basic intuition for why this holds: we're using the ordered field structure to be able to take minuses.
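The statement about to be proved, in one line:

\[
\inf A \;=\; -\sup(-A), \qquad -A := \{\, -a : a \in A \,\}.
\]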
That's where the field property is coming in: taking additive inverses flips inequalities, which is one of those facts we essentially proved -- if I have something positive and multiply through by minus 1, the inequality reverses. Again, unless I'm telling you to prove a particular simple inequality statement of this type, just freely use the inequality facts you remember from high school, and know that they persist for ordered fields in general. OK, so let's turn this intuition into a proof. F is an ordered field with the least upper bound property. Let A be a subset of F, A not equal to the empty set, A bounded below. That means there exists an element b in F such that, for all a in capital A, b is less than or equal to a. The proof is essentially there on the board; you just need to turn it into English -- because we're in Massachusetts; if you were in a different country, you'd turn it into the language spoken there. So: there exists a lower bound b for A. Define the set minus A to be the set of all elements of F of the form minus a, as a ranges over capital A -- the additive inverses of all elements of A. Then the fact that b is less than or equal to a for all a in A implies that minus a is less than or equal to minus b for all a in A, because multiplying through by minus 1 flips the inequality. So minus b is bigger than or equal to every element of minus A, which implies minus b is an upper bound for the set minus A. So minus A is a non-empty subset of F which is bounded above; therefore it has a supremum: there exists x in F such that x equals sup of minus A. This is because we're assuming the least upper bound property -- every non-empty set which is bounded above has a supremum. I'm going to show that minus x is, in fact, the infimum of A. The fact that x is the supremum of minus A implies that for all a in A, minus a is less than or equal to x, because x, the least upper bound, is in particular an upper bound. That means -- again, just flipping the inequalities -- for all a in A, minus x is less than or equal to a, which implies minus x is a lower bound for capital A. We now have to show that minus x is the greatest lower bound of A: if I take any other lower bound of A, then minus x is bigger than or equal to that lower bound. So we show that if y is a lower bound for A, then y is less than or equal to minus x -- and that concludes the proof that minus x is the infimum of A, and therefore that A has an infimum. We've even identified what the infimum is: it's minus the sup of minus A. OK, so let y be a lower bound for A. Then, exactly as I showed for the single lower bound b -- just go through the argument and replace b with y, or look at the picture -- minus y is an upper bound for minus A. Since x is the supremum of minus A, it's the least upper bound,
so it has to be less than or equal to minus y. Flipping the inequality again, y is less than or equal to minus x, which is what we wanted to prove. Thus inf A exists, and we've in fact shown it equals minus the sup of minus A. So in an ordered field with the least upper bound property, not only does every non-empty set which is bounded above have a supremum; every non-empty set which is bounded below has an infimum. So now we move on from generalities and focus on R, the set of real numbers. I'm going to state the theorem about the existence of R and its properties, just to bring this all back to our goal of describing exactly what R is and what separates it from Q: there exists a unique ordered field with the least upper bound property containing Q, and this field we denote by R. To bring this back: we started off in ancient times with the natural numbers. We moved to the integers so we could take additive inverses -- although they didn't call them that -- and have 0. Then we moved to the rational numbers because we had no way to solve the equation 2x plus 1 equals 0. And we moved from Q to R essentially because we can't solve the equation x squared minus 2 equals 0. This inability to solve x squared minus 2 equals 0, although an algebraic fact, means that Q is incomplete as an ordered set: it does not have the least upper bound property. And what characterizes the real numbers is that R is an ordered field containing Q, and it's the unique one with the least upper bound property. "Unique" should be in quotes here -- unique up to what's called isomorphism, and isomorphism is a fancy way of saying what you call apples I call manzanas, essentially. So this is what R is. I'm not going to prove this theorem. The way you usually prove it is to construct R, either via what are called Dedekind cuts or as equivalence classes of Cauchy sequences -- we'll talk about Cauchy sequences soon-ish. But I'm more interested in proving properties of R, and then going on to functions on R and limits -- analysis is the study of limits -- rather than getting tied up in the really non-analytic, algebraic facts needed to construct the actual field. So we're just going to take this as given, and now we go from here and start proving facts about real numbers. Where Q failed, R succeeds. The first fact: there exists a unique element r in R such that r is positive and r squared equals 2. We saw before that if I replace R here with Q, a rational number, the statement is false: there does not exist a rational number whose square is 2. But in the real numbers there does. And right now, maybe you're tempted to just say: set r equal to the square root of 2. Well, what is the square root of 2? How do you come up with that guy? We have to come up with some element of R whose square is 2 -- and we basically did that a minute ago in the rational numbers, and essentially the same proof works here. Let E be the set of all x in R such that x is positive and x squared is less than 2. Earlier in the lecture, and at the end of the last lecture, we had rational q here; the same proof as before shows E is bounded above by 2. So I have this set, non-empty and bounded above by 2.
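In symbols, the construction about to be completed is:

\[
E = \{\, x \in \mathbb{R} : x > 0,\; x^2 < 2 \,\}, \qquad r := \sup E \;\implies\; r > 1 \text{ and } r^2 = 2.
\]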
We did the proof earlier, so by the least upper bound property of R, the supremum exists; call this element little r. Then the same proof -- I'm not going to do it again, because we've already done it; all you need to do is replace q's with r's -- shows r is bigger than or equal to 1, in fact bigger than 1, and r squared equals 2. So there does exist an element of R which is positive and whose square is 2. Now I'll prove that it's unique. That means if some other element of R satisfies these two conditions, it must equal my original r. Suppose r tilde is in R, r tilde is bigger than 0, and r tilde squared equals 2. Then, since their squares are the same number, namely 2, 0 equals r tilde squared minus r squared, which factors as r tilde plus r, times r tilde minus r. Now, both r and r tilde are positive, so r tilde plus r is positive -- in particular, non-zero. Since we're in a field, I can therefore multiply both sides by its multiplicative inverse and arrive at 0 equals r tilde minus r, i.e., r equals r tilde. So there exists only one element of the real numbers which is positive and whose square is 2. And I'll put on the assignment that this extends: this shows the square root of 2 exists, but you can then show the cube root of 2 exists in R, and -- we'll not prove this -- in general, if x is in R and x is positive, then x to the 1 over n exists in R for all natural numbers n. So where Q failed, R succeeds, and the fact that it succeeds is not coming from an algebraic property of R; it's coming from this property of its order. All right, we'll stop there.
MIT_18100A_Real_Analysis_Fall_2020
Lecture_6_The_Uncountabality_of_the_Real_Numbers.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So we ended last time with those elementary properties you know about the absolute value. Let me prove one more theorem about the absolute value, which maybe you didn't cover in calculus, but which is of fundamental importance in real analysis: the triangle inequality. It states that for all x, y in R, the absolute value of x plus y is less than or equal to the absolute value of x plus the absolute value of y. Why is it called the triangle inequality? Well, instead of x and y being real numbers, think of them as vectors: there's x, there's y, and the vector x plus y is the third side. Then this states that the length of one side of the triangle is less than or equal to the sum of the lengths of the other two sides. The proof is not very difficult, although this inequality is probably one of the three most useful things you'll learn in this class if you go on to study more analysis -- I'm viewing this through the lens of PDEs, which is what I'm most familiar with: the triangle inequality, integration by parts, and change of variables power that analysis machine. So, how to prove the triangle inequality. If x and y are in R, then clearly, by property number 6, x is less than or equal to the absolute value of x, and y is less than or equal to the absolute value of y, so their sum x plus y is less than or equal to the absolute value of x plus the absolute value of y. Not only that: I can replace x by minus x and y by minus y, and I get that minus x plus minus y is less than or equal to the absolute value of minus x plus the absolute value of minus y -- but that's just the absolute value of x plus the absolute value of y. So minus the quantity x plus y is also bounded by the absolute value of x plus the absolute value of y. These two inequalities together, by number 5 in the properties we proved for the absolute value, imply that the absolute value of x plus y is less than or equal to the absolute value of x plus the absolute value of y. Let me also make a remark about the reverse triangle inequality, which will make an appearance in your assignment. It involves the absolute value of x minus y and the difference of the absolute values. Why is it "reverse"? Because the inequality reverses: the absolute value of, the absolute value of x minus the absolute value of y, is less than or equal to the absolute value of x minus y. You'll prove this in the assignment -- it follows from the triangle inequality -- and these two inequalities get used quite a bit throughout the course. All right, so let me take a minute to reconnect R with what you know and love from calculus, namely decimals, and use this to answer a natural question that maybe you're asking yourself. The first part is not the question -- we've already addressed in the first assignment that the rationals are countable; in fact, you did it for the positive rationals, but from that you can prove that all rationals are countable. The natural question: is R countable? In the end -- I'm not going to leave you in suspense -- the answer is no.
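In symbols, the proof just given is:

\[
x \le |x|,\; y \le |y| \;\implies\; x + y \le |x| + |y|, \qquad -x \le |x|,\; -y \le |y| \;\implies\; -(x+y) \le |x| + |y|,
\]

and the two together give \(|x+y| \le |x| + |y|\) by property 5.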
And we're going to prove this. But let me first make a few remarks about what this answer says. Since R is uncountable and Q is countable, this implies that R take away Q, the set of irrational numbers, is in fact uncountable. Why? Let me just say in words why this is true. Suppose otherwise, that both the rationals and the irrationals were countable. Then you could map, in a bijective fashion, the rationals to the natural numbers and the irrationals also to the natural numbers. But then you could map the second copy of the natural numbers to 0, -1, -2, -3, and so on. So you would have a bijection from Q to the natural numbers and a bijection from the irrationals to the integers less than or equal to 0, and therefore a bijection from R to the integers. But the integers are countable, which would contradict the fact that R is uncountable. Therefore, the set of irrationals is uncountable. In short, if you didn't follow that: the set of irrationals is uncountable because otherwise R would be forced to be countable. Now, I haven't proved yet that R is uncountable — we'll do that in just a minute — and we'll use a theorem about decimal representations of real numbers. So now, connecting rationals and irrationals to what you've seen since you were small, namely decimals: we typically think of rational numbers in terms of decimal representations. What does that mean? It means that if x is in Q, we write x as a finite digit sum, x = d_k 10^k + d_{k-1} 10^{k-1} + ... + d_0 + d_{-1} 10^{-1} + ... down to some last decimal place, with all of these digits in {0, 1, ..., 9}. So this is discussion — I'm not stating any theorems right now. All I'm saying is that we typically think of x this way, and then we write x as the string of digits d_k d_{k-1} ... d_0 . d_{-1} d_{-2} and so on. I'm using this notation because it's the notation I need to state the theorem about the real numbers; it's quite silly that I'm writing this down in this setting — welcome to MIT. So for example, 1 times 10^1 plus 1 times 10^0 plus 1 times 10^{-1} represents the number 11.1, which is a rational number. It's quite funny that I had to think about that for a minute to go from the decimal expansion to the rational number, but don't give me too hard of a time — in a lot of pure math we write things as rational numbers, and I haven't had to write a decimal expansion in quite some time. Anyway, of course, not every rational number can be written as a finite decimal expansion. For example, 1/3 you have to write as 0.3333 and so on. So that's not covered under the discussion I have right here, but it will be covered in what I'm about to say for the real numbers: we can cover 1/3, and in fact every real number, if we allow infinite decimal representations. I'll use the words decimal expansion and decimal representation interchangeably. So let me make a definition, and what I'm going to state is about real numbers between 0 and 1. Of course, if you want to talk about real numbers bigger than 1, you just tack on digits in the d_0 place and to the left of the decimal point.
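For the finite expansion just written, here is a quick exact-arithmetic check — a sketch using Python's Fraction type — that the digit sum d_1·10^1 + d_0·10^0 + d_{-1}·10^{-1} with digits 1, 1, 1 really is 11.1:

```python
from fractions import Fraction

# d_1*10^1 + d_0*10^0 + d_(-1)*10^(-1) with digits 1, 1, 1,
# computed in exact rational arithmetic.
digits = {1: 1, 0: 1, -1: 1}
x = sum(Fraction(d) * Fraction(10) ** k for k, d in digits.items())
print(x)          # 111/10
print(float(x))   # 11.1
```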
So the infinite part has to do with what comes after the decimal point. So let me make the following rigorous definition. Let x be in (0, 1], and let d_{-j} be in {0, 1, ..., 9} for each natural number j. What does it mean to say that x is given by an infinite decimal expansion, or an infinite decimal representation? We say x is represented by these digits, and we write x = 0.d_{-1} d_{-2} d_{-3}..., if the following precise condition holds: x equals the supremum of the set of finite truncations. That is, for each natural number n I form the finite decimal sum d_{-1} 10^{-1} + d_{-2} 10^{-2} + ... + d_{-n} 10^{-n} — so the first one is just d_{-1} 10^{-1}, then for n = 2 I tack on the hundredths place, then the thousandths place, and so on — and I get a set of real numbers. If I take its supremum and I get x, then I say that x is represented by these digits, or that this gives a decimal representation of x. So that you see this lines up with what you know, look at 0.25000.... Remember, this is now the sup of a certain set. The first truncation is 2/10. Then I add the hundredths place: 2/10 + 5/100. Then 2/10 + 5/100 + 0/1000, and each subsequent element just tacks on another 0. So as a set, this is just a set containing two elements, namely 2/10 and 25/100, and the second is bigger than the first, so the supremum is 25/100, which equals 1/4. OK, so all of that to tell you that, yes, this definition does ring true with what you were taught: 1/4 is equal to 0.25. OK, so now that we're done with the triangle inequality, I'm going to erase it — that was the last bit of discussion about the elementary properties of the real numbers; next we'll be talking about sequences of real numbers. All right, so I have this definition. The fundamental theorem about the real numbers is that every real number, at least in this setting between 0 and 1, can be represented by a set of digits — it has a decimal representation — and conversely, every set of digits gives a decimal representation of a real number. Let me state the second statement first: for every set of digits d_{-j}, with each d_{-j} in {0, ..., 9}, there exists a unique x in the closed interval [0, 1] such that x = 0.d_{-1} d_{-2}..., that is, with this decimal representation. So you give me some digits, and I can find a unique real number that has that decimal representation. Now, before I state the second part of the theorem: this says that if you give me digits, I get a unique real number. You might ask about the other direction — if I have a real number, do I get a unique set of digits? And of course that is not necessarily true, because, for example, 1/2 equals 0.500... but also equals 0.4999.... So yes, every set of digits gives me a unique real number — if you gave me the digits 5, 0, 0, ..., they would spit out 1/2 and only 1/2, not 1/4 — but if you give me a real number, there does not necessarily exist a unique set of digits giving a decimal expansion for that number. I can have two different decimal expansions: 0.5 and 0.4999.... I will always have at least one, but it's not necessarily unique.
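Before tackling uniqueness, here is the supremum definition in miniature — a sketch, with the digit list and the number of places chosen arbitrarily — computing the finite truncations for the digits 3, 3, 3, ... and checking that they increase toward 1/3 while staying strictly below it:

```python
from fractions import Fraction

# Finite truncations sum_{j=1}^{n} d_{-j} * 10^{-j} for the digits 3, 3, 3, ...
digits = [3] * 10                                      # ten places, arbitrarily
truncations = []
s = Fraction(0)
for j, d in enumerate(digits, start=1):
    s += Fraction(d, 10 ** j)
    truncations.append(s)

print(truncations[0], truncations[1])                  # 3/10 33/100
print(all(t < Fraction(1, 3) for t in truncations))    # True: 1/3 is the sup
print(float(Fraction(1, 3) - truncations[-1]))         # tiny gap up to the sup
```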
Now, back to uniqueness: we can single out one unique choice by requiring that, in some sense, every truncation of the decimal expansion is strictly less than the number we're expanding. That's the second part of the theorem: for every x in (0, 1], there exist unique digits d_{-j} — as before, each in {0, ..., 9} — giving a decimal representation of x and satisfying, for every n, d_{-1} 10^{-1} + ... + d_{-n} 10^{-n} < x <= d_{-1} 10^{-1} + ... + d_{-n} 10^{-n} + 10^{-n}. Now, whenever x is given by a decimal representation, both of these hold with less-than-or-equal-to on each side. What singles out a unique choice, when we come up against two competing representations, is requiring the first inequality to be strict — that forces us to choose one candidate over the other. So for part 2 of the theorem, the unique representation satisfying this for 1/2 is 0.4999..., and for 1 it would be 0.9999..., because if I take the representation 0.500... and truncate it at the first digit, I get exactly 1/2, not something strictly less than 1/2. OK, so that was a lot of discussion about the theorem, but that was for a reason, because I'm not actually going to prove it. It's not terribly difficult — it just uses the least upper bound property of the real numbers — but it's a little clunky to write down, and what I'd rather do is use this theorem to prove the answer to my question up there, on whether R is countable or not. In the textbook there is a proof of this theorem, which you can read. OK, so we're going to use this theorem to prove the following theorem due to Cantor. We just went from absolute values, which maybe were not too much shock and awe, to this theorem, which to me is shock and awe: the interval [0, 1] is uncountable. All right, so maybe you're asking yourself: that shows this interval is uncountable, but we claimed R is uncountable — how do we get from one to the other? I'll let you think about that, but let me say why out loud. So this is a slightly informal proof, but you can make it completely rigorous. Why does [0, 1] being uncountable imply R is uncountable? The identity map from [0, 1] into R is clearly injective, so the cardinality of [0, 1] is less than or equal to the cardinality of R. In fact — maybe you can think about it, maybe I'll put it on the assignment — one can prove they have the same cardinality. But [0, 1] is uncountable, so its cardinality is bigger than that of the natural numbers, and this implies the cardinality of the natural numbers is less than the cardinality of R. In other words, R is uncountable.
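Returning to the representation theorem for a moment: the strict-truncation condition can be turned into a digit-picking recipe — at each place, take the largest digit that keeps the truncation strictly below x. The sketch below (the function name strict_digits is just a label for this illustration) reproduces 0.4999... for 1/2 and 0.9999... for 1:

```python
from fractions import Fraction

def strict_digits(x, places):
    # At each place pick the largest digit whose truncation stays
    # strictly below x, per part 2 of the theorem.
    s, out = Fraction(0), []
    for j in range(1, places + 1):
        d = max(d for d in range(10) if s + Fraction(d, 10 ** j) < x)
        s += Fraction(d, 10 ** j)
        out.append(d)
    return out

print(strict_digits(Fraction(1, 2), 6))   # [4, 9, 9, 9, 9, 9]
print(strict_digits(Fraction(1), 6))      # [9, 9, 9, 9, 9, 9]
```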
All right, so let's prove this theorem due to Cantor. Now, that other theorem due to Cantor — that the cardinality of the power set of a set is strictly bigger than the cardinality of the set — the first time I saw the proof, that was some crazy stuff. And this is also going to be some pretty crazy and clever stuff. This is what's referred to as Cantor's diagonalization argument, or diagonal argument. We're going to prove it by contradiction. So assume [0, 1] is countable. It can't be finite — there are infinitely many elements in there — so suppose it has the cardinality of the natural numbers, and we'll arrive at a contradiction. Then there exists a function, which I'm going to call x, from the natural numbers onto [0, 1], which is bijective. For each n, x(n) is an element of [0, 1], and since x is bijective, every element of [0, 1] gets mapped onto. We write each x(n) in its decimal representation satisfying the strict-truncation inequality from the previous theorem — label that inequality star — so each x(n) is written uniquely in the decimal representation satisfying star. I have two indices now: one for the digit place, and one for which element of the list I'm looking at. (Excuse me — at least I sheltered you from that sneeze by putting my hand over the microphone. Is the microphone still on? Yeah.) What we're going to do is come up with a real number in the interval [0, 1] which x does not map to, which contradicts the fact that x is surjective — bijective means injective and surjective; surjective, the onto part, means everything gets mapped onto, and injective means different things go to different things. So what's the idea for finding an element that doesn't get mapped onto? Let me write down just a few of these decimal expansions — and this is discussion, in brackets, not part of the proof. The list x(1), x(2), x(3), ... goes down the page, and each number's decimal expansion is marked off going to the right. Somehow this list, if it kept going, would have to contain every real number between 0 and 1. The idea for producing a number between 0 and 1 that is not in the list is to go down the diagonal of the decimal expansions — the n-th digit of the n-th number — and change each digit to something other than what it is. Then I take y to be the real number I get from these changed digits. You can imagine that if y somehow popped up in the list — say as x(4) — then the fourth diagonal digit would have to be the fourth digit of y, but I changed it, so it isn't. That's it in a nutshell; if you didn't get that, that's fine — let me write out the details. It's a little simpler to imagine if all the digits were 0s and 1s — base 2 instead of base 10 — and I form y by flipping the digits: if a diagonal digit is 0, I flip it to 1, and vice versa. So if the diagonal read 0, 0, 1, 1, then y would start off 1, 1, 0, 0.
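Here is that flipping idea in miniature — a sketch with an arbitrary made-up list of binary digit rows — producing a y that differs from the j-th row in its j-th digit:

```python
# An arbitrary made-up list of binary digit rows.
rows = [
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
]

# Flip the diagonal: y's j-th digit differs from row j's j-th digit.
y = [1 - rows[j][j] for j in range(len(rows))]
print(y)                                                   # [1, 1, 0, 1]
print(all(y[j] != rows[j][j] for j in range(len(rows))))   # True
```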
And therefore, if y appeared, say, fourth in line, its fourth digit would have to be both what it is and its flip in the same spot, just by how I've constructed y — so y can't be in the list, and the map cannot be surjective, which is our contradiction. So let me make this precise. Let e_{-j} be 1 if the j-th digit of x(j) does not equal 1, and 2 if the j-th digit of x(j) does equal 1. By part 1 of the previous theorem, there exists a unique y in (0, 1] — and y is nonzero because these digits are all nonzero, always either 1 or 2 — such that y = 0.e_{-1} e_{-2}.... Moreover, since all of these digits are either 1 or 2, they are certainly nonzero. So for every natural number n, if I truncate the expansion at e_{-n}, then since e_{-(n+1)} is either 1 or 2, what I'm cutting off is positive, and the truncation is strictly less than y; and the other inequality, that y is less than or equal to the truncation plus 10^{-n}, always holds. Because all these digits are positive, every truncation is strictly less than y — I'm missing positive stuff out in the millionths place or wherever, for each n — and therefore this expansion is exactly the unique decimal representation from part 2 of the theorem. Every element of (0, 1] has a unique representation satisfying that strict inequality, and this is it for y. Now, since x is surjective, there exists a natural number — and this has nothing to do with the digit index n, so let me use a different letter, m — such that y = x(m). Then look at the m-th digit e_{-m}. By construction, e_{-m} equals 1 if the m-th digit d_{-m} of x(m) does not equal 1, and 2 if d_{-m} equals 1. In particular, e_{-m} does not equal d_{-m}. But y = x(m), and both expansions are the unique representation satisfying star, so their m-th digits must agree — I have proven that this digit does not equal itself. I can never have this digit equal to that one, because I'm always changing it: if that digit is 1, this one is 2; if that digit is not 1, this one is 1. And this is a contradiction. Thus [0, 1] is uncountable. So I hope I explained that well enough; you can also look in the textbook for an explanation, and of course you can ask me in office hours for more. Let me fuel up real quick. OK, now we're moving on to a new chapter — now we get to the analysis part. Analysis, as I've said before, is the study of limits, and real analysis is the study of limits having to do with real numbers. So let's get to our first notion of a limit: the limit of a sequence of real numbers. So now we're moving on to sequences and series. What is the precise definition of a sequence of real numbers? A sequence of real numbers is precisely a function x from the natural numbers into R — just a function. It doesn't have to be bijective, surjective, injective, anything. Now, we typically don't think of a sequence as a function.
We denote x(n) by x_n, and the associated sequence with curly brackets, {x_n} from n = 1 to infinity — or we might not even write the n = 1 to infinity part, or we might just start listing the entries. So let me quickly say: although a sequence is unambiguously defined as a function from the natural numbers to R, don't confuse it with a set. A sequence is not a set. Think of a sequence, even though it's defined as a function, as an infinite listing of elements of R — a first element, a second element, a third element, and so on. And they don't have to be different. For example, 1, 1, 1, 1, ... is a perfectly good sequence; in terms of the function, it's x(n) = 1 for all n. But again, I will not really think of a sequence as a function — just as we never thought of functions as certain subsets of the Cartesian product of two sets, you should really think of these as just a list of real numbers. And when we refer to them, we'll either write them like that or with curly brackets around an expression for x_n. So for example, {1/n} means the sequence 1, 1/2, 1/3, 1/4, 1/5, and so on. OK. So here's another definition. A sequence is bounded if, in some sense, it doesn't run off to arbitrarily large values: the sequence {x_n} is bounded if there exists a real number B >= 0 such that for all natural numbers n, |x_n| <= B. So again, for example, looking back at those two sequences — 1, 1, 1, ... and 1/n — both are bounded. Why? Every entry of the first sequence equals 1, so it's bounded by 1 in absolute value; and every entry of the second is positive and less than or equal to 1, so it's also bounded by 1 in absolute value. Another example is the sequence (-1)^n — I'll write "equals," although really I'm just listing the entries: -1, 1, -1, 1, and so on. This is also bounded, because |(-1)^n| = 1 for all n. So what's a non-example? The sequence x_n = n — in other words, 1, 2, 3, 4, 5, and so on. If a sequence is not bounded, I'll refer to it as unbounded. So why is this one unbounded? Because the entries are getting larger and larger in size. But if I were to ask you to prove this, you would have to work out — and this is also a good first exercise — what it means to be unbounded. I said at one point that when you come across a reasonably interesting definition, you should look up or try to come up with examples, and typically when you do, you should also come up with non-examples, and what that requires you to do is negate the definition: what does it mean for something to not be that? So even though I'm saying that if it's not bounded I call it unbounded, I haven't actually written a mathematical statement to that effect. So let me make that right here as a remark. A sequence is unbounded if it doesn't satisfy the definition of bounded, so we need to negate that definition. The definition says the sequence is bounded if there exists a B >= 0 such that the bound holds for all n.
Now, when I negate a "there exists," it becomes a "for all," and when I negate a "for all," it becomes a "there exists," and then I negate the inner condition. So the sequence is unbounded if for all B >= 0 there exists a natural number n such that |x_n| > B. If you like, you can take that as a second definition — though it's not really a new definition, since it's just the negation of the first. So why is the sequence x_n = n unbounded? Because of the Archimedean property. Here's a little short proof. Let B >= 0. By the Archimedean property, there exists a natural number n such that n > B — which is exactly what we wanted to prove, because in this case |x_n| = n. So the sequence is unbounded. OK, now, what does it mean for a sequence to have a limit? What does it mean for a real number to be the limit of a sequence, for a sequence to converge to something? The sequence {x_n} converges to a real number x in R if the following condition is satisfied: for every epsilon > 0, there exists a natural number M such that for all n >= M, |x_n - x| < epsilon. Now, remember the absolute value is meant to be something like a distance. So what does this statement say? It says that if you give me a little bit of tolerance, epsilon, and you go far enough out in the sequence, then every entry from that point on is within distance epsilon of x. Let me finish stating the rest of this definition and then we'll do a little more discussion. If a sequence converges, we say it's convergent; otherwise, we say it's divergent. So all right, let me draw a little picture of what this is meant to be. The real numbers — R is this unique ordered field with the least upper bound property, but when you think about the real numbers, think of them as you always have: as the real number line. So what does the definition mean? It means a sequence converges to some x if for every epsilon I can do the following: if you go an epsilon amount on either side of x, then you should be able to find an M so that the entries x_n with n >= M all land in that window — x_M is in there, x_{M+1}, x_{M+2}, and in general x_n is in there as long as n >= M. And you should be able to do this for every epsilon. So for any amount of tolerance epsilon, you should be able to go far enough out in the sequence that all entries from there on are within that tolerance of x. You've probably gotten plenty of experience with sequences in calculus, but what does this mean loosely? It means that if I look close to x, at some little interval containing x, then all of the elements of the sequence are eventually in that little interval. Or another way to think about it: as I go on in the sequence, the entries are getting closer and closer and closer to x. The way one makes these last two intuitive statements precise is via this definition. The closer-and-closer part is encapsulated in the "for all epsilon" part.
And the "eventually" part is the "for all n >= M": as long as I go far enough out in the sequence, the entries are close to x. So I hope that's pretty clear. All right, so again we have a definition, and it's reasonably interesting, so we should do examples and also negate it. But before I do either of those, I want to prove a very simple fact about convergent sequences: if a sequence converges to x, then x is the only thing that sequence can converge to. I cannot have a convergent sequence which converges to two different things. Like I said, if you go by the intuition that the x_n's are eventually getting closer and closer to x, there's no way they can also be getting closer to something other than x. So I want to prove that real quick, and then we'll do examples and a negation of the definition. First, let me state a theorem that has nothing to do with sequences, but is a nice way to show two things are equal: you show their difference is smaller than anything. The theorem: if x and y are in R and for all epsilon > 0 we have |x - y| < epsilon, then x = y. This is not a surprising statement — if I have x and y and the distance between them is arbitrarily small, then they have to be the same thing. So what's the proof? Suppose x, y are in R and for all epsilon > 0, |x - y| < epsilon. I want to prove x = y, so let's assume this does not hold and arrive at a contradiction — a short proof by contradiction. Suppose x does not equal y. Remember, we proved that the absolute value of something is 0 if and only if that thing is 0, and that the absolute value is always non-negative, so x not equal to y means |x - y| > 0. Then, applying the assumption with epsilon = |x - y|/2, I get |x - y| < |x - y|/2, which implies, if I subtract |x - y|/2 over, that |x - y|/2 < 0 — and that's a very false statement, since we've already proven the absolute value is always non-negative. So if I have two real numbers that are arbitrarily close to each other, they have to be the same. I'm going to use this theorem to prove why a convergent sequence can only converge to one thing — and I have not yet said that's what we call x, the limit. Here is the statement of the second theorem: if {x_n} is a sequence and it converges to x and it also converges to y, then x = y. A convergent sequence can only converge to one thing. And again, like I said, this should be clear: if x and y were not equal and x_n converges to x, then as long as I go far enough out in the sequence, the entries are supposed to be in a small interval around x; but if they're also converging to y, then far enough out they're also supposed to be over in a small interval around y. And you can't be in two places at once — although I suppose that now that all the classes are online and some are recorded, you can be in two places at once, but at least not here. OK, so we'll use the previous theorem to prove this statement. Suppose {x_n} converges to x and to y. I want to verify that for all epsilon > 0, |x - y| < epsilon.
Now, remember, if I'm proving something for all epsilon, that means I do it for an arbitrary epsilon — you get points just for writing "let epsilon be positive." So let epsilon > 0; I want to show |x - y| < epsilon, and we're basically going to do the argument I just erased. Since x_n converges to x, there exists a natural number M_1 such that for all n >= M_1, |x_n - x| < epsilon/2. Why the epsilon over 2? You'll see — magic happens. And since x_n converges to y, there exists a natural number M_2 such that for all n >= M_2, |x_n - y| < epsilon/2. So again, the first statement says that as long as I go far enough out, x_n is close to x; the second says that as long as I go far enough out, x_n is close to y. Now, M_1 + M_2 is a natural number bigger than or equal to both M_1 and M_2, so I'm going to use these two inequalities and the triangle inequality. Look at |x - y|: this equals |x - x_{M_1 + M_2} + x_{M_1 + M_2} - y|, which, by the triangle inequality, is less than or equal to |x - x_{M_1 + M_2}| + |x_{M_1 + M_2} - y|. The first term is less than epsilon/2 because M_1 + M_2 >= M_1, and for the same reason, since M_1 + M_2 >= M_2, the second term is less than epsilon/2. Ah — now you see why I divided both by 2: when I add them up, I get epsilon. So I've shown that for every epsilon > 0, |x - y| < epsilon, and therefore, by the theorem I proved a minute ago, x = y. Why did I state that theorem? Because I want to use notation, and I want it to already be apparent that the notation is consistent. And I'm going to use terminology — this is not really a new definition, just some terminology. We call x the limit of the sequence. The fact that I've shown there's only one thing a sequence can converge to is why I get to use the word "the" — not "a limit" of {x_n}, but the limit, if one exists — and we write x = lim as n goes to infinity of x_n. OK, so let's pause on negating the definition of convergence, and let's do a couple of examples of convergent sequences, proving that they are convergent and that they converge to the limit I'm telling you. For example, take the constant sequence x_n = 1 for all n — that's the sequence 1, 1, 1, and so on. Then, just writing 1 instead of x_n, the claim is that the limit as n goes to infinity of 1 equals 1. The proof is not very enlightening, but I'll show you how it goes. What are we supposed to show? For every epsilon > 0, there exists a natural number M such that for all n >= M, the entry minus the limit is less than epsilon in absolute value. So let epsilon > 0. I now have to come up with a capital M so that |x_n - x| < epsilon whenever n >= M. I'm going to choose M = 1 — I can do that for this sequence.
Then if n >= M and I look at the n-th entry of the sequence, |1 - 1| = 0, and that's less than epsilon. So, not very enlightening, because M = 1 is good enough. Let's do something a little more interesting: the limit as n goes to infinity of 1/n equals 0. So let's do a proof of this. Again, I'm supposed to prove the epsilon statement in the definition — a for-every-epsilon statement — so I start the proof with: let epsilon > 0. Now I have to tell you how to choose M to ensure that n >= M implies |x_n - 0| < epsilon. So I will say: choose a natural number M such that — I was going to write M > 1/epsilon, but let me write it this way — 1/M < epsilon. And why can I do this? Why can I find a natural number satisfying this? You can't just insert statements into your proof — "choose capital M to satisfy the unicorn property" — without clarifying why a natural number satisfying the unicorn property exists. I can do this by the Archimedean property: the condition is equivalent to finding a natural number M with M > 1/epsilon, which is what I had written originally. So let me write it this way: such an M exists by the Archimedean property. I'll say that a few times if I have to, to clarify why certain natural numbers satisfying certain properties exist, and at some point I'll stop, because by then it should be clear how and why such things exist. OK, so I choose such a natural number M; now I need to show you that this number works. Let n >= M. Then if I look at |x_n - 0| — that is, |1/n - 0| — this equals 1/n. Since n >= M, this is less than or equal to 1/M, which is less than epsilon. And therefore I've proven what I wanted to prove. So — and this is some discussion — if I just give you a sequence and ask you to prove it converges, how do you come up with these proofs? How should the proof typically look? If I want to prove lim x_n = L, or x, what should the proof look like? It should always start: let epsilon be positive. Then maybe you need to do some explaining, and then you'll say: choose M so that something — and why such an M exists will probably be explained there, in the same sentence. The next part of the proof should show that this capital M works: if n >= M, you look at |x_n - x|, do a calculation or some inequalities, and arrive at this being less than epsilon, which is what you want to prove. In the end, you're verifying the definition, which says that for every epsilon you can find a capital M so that the inequality holds; in the proof, you are carrying that out. You're saying: for epsilon positive, I choose M so that this happens. Now, how do you come up with such a capital M? So again, this is just discussion, and maybe we won't get to any more examples — I'll just finish this discussion and then we'll call it a day. How do you come up with such a capital M? How did I come up with M?
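Here is that recipe in code — a sketch, with choose_M a made-up helper name — picking M by the Archimedean property and spot-checking |1/n - 0| < epsilon on a stretch of the tail (a finite check, of course, not a proof):

```python
import math

def choose_M(epsilon):
    # Archimedean recipe: a natural number M with 1/M < epsilon.
    return math.floor(1 / epsilon) + 1

for epsilon in [0.5, 0.1, 0.003]:
    M = choose_M(epsilon)
    # Finite spot-check of the tail: |1/n - 0| < epsilon for n >= M.
    assert all(abs(1 / n - 0) < epsilon for n in range(M, M + 1000))
    print(f"epsilon = {epsilon}: M = {M} works on the sampled tail")
```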
Why did I choose M in this way? In the end, you can see it right there in the verification step: I chose it because when I fiddled around with the thing I want to bound, I ended up with 1/n, which I can control and make smaller than epsilon. So typically, how do you find capital M? This is a discussion, I guess, within a discussion. Typically, you'll take |x_n - x|, with x your proposed limit — maybe I give you the sequence explicitly, or maybe it's an expression involving some other sequences — and you fiddle around with it. Maybe you write it differently, or add and subtract something, or multiply by 1 in a fancy way, and you'll get some expression involving n. As long as that expression is simple enough, you choose capital M so that the expression is less than epsilon. Simple enough means, for example, 1/n: I can choose M so that 1/M is less than epsilon by the Archimedean property. And 1/n, or 5/n, or something like that exhausts the simplest ones for now. But if, for example, you ended up with 1/(n^2 - 3n + 100), I would not say that's a simple enough expression to be able to say "choose capital M so that this is less than epsilon" without more explanation than what I just gave here. We'll do some more examples next time to flesh this out a little bit more. All right, so we'll stop there.
MIT_18100A_Real_Analysis_Fall_2020
Lecture_11_Absolute_Convergence_and_the_Comparison_Test_for_Series.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: All right, so last time we proved the following theorem: if the series sum from n = 1 to infinity of x_n converges, then the limit as n goes to infinity of x_n equals 0. So a natural question — as befits a beginning advanced math class — is: does the converse hold? Is this a two-way street or a one-way street? If the individual terms of the series converge to 0, does that imply the series converges? I'm sure you answered this question in some form in a previous calculus class, and the answer is no. So what's the counterexample? It's the so-called harmonic series, which corresponds to our favorite sequence converging to 0. We'll state this as a theorem: the series sum from n = 1 to infinity of 1/n does not converge. How are we going to prove this theorem? By showing that some subsequence of the partial sums does not converge. If the series were to converge, then the sequence of partial sums converges, and therefore every subsequence of partial sums converges. So what's the strategy? We're going to show that there exists a subsequence of partial sums s_{m_k} — remember, the partial sum s_m simply adds up the first m terms 1/n — which diverges. And this is enough to show the full series doesn't converge, again because if it did converge, every subsequence of partial sums would converge. In fact, we're going to do something a little stronger: we'll show there exists a subsequence of partial sums which not only diverges but is unbounded, and therefore the entire sequence of partial sums is unbounded, so it can't converge — remember, convergent sequences are bounded. So we're going to look at dyadic indices, m_k = 2^k — and for some reason I switched indices from k to l in my notes, so we'll use l instead. Let l be a natural number, and consider the partial sum obtained by adding up the first 2^l terms. Now, you may ask: why 2^l? Why not 3^l? You could do 3^l, you could do 5^l, but 2^l is sufficient for our purposes. What we're going to do is take this partial sum and bound it from below by something quite large — and note first that all of these partial sums are bounded below by 0, being sums of non-negative terms. We write s_{2^l} = 1 + (1/2) + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ... + (1/(2^{l-1} + 1) + ... + 1/2^l). How I'm grouping the terms is according to whether the denominator falls between a power of 2 and the next power of 2. What is this in precise symbols? It is s_{2^l} = 1 + sum from lambda = 1 to l of [sum from n = 2^{lambda-1} + 1 to 2^lambda of 1/n]. These lambdas are now parameterizing what power of 2 I'm at.
So when lambda equals 1, I'm at the first block; when lambda equals 2, I'm at the second block; and when lambda equals l, I'm at the last block. Within each block I'm summing the terms whose denominator lies between that power of 2 and the next. So I have this, and now I bound the sum from below — because again, I'm trying to show this subsequence of partial sums is unbounded. For n between 2^{lambda-1} + 1 and 2^lambda, 1/n is always bigger than or equal to what I get by plugging in the biggest denominator in the block: 1/n >= 1/2^lambda. So s_{2^l} >= 1 + sum from lambda = 1 to l of [sum from n = 2^{lambda-1} + 1 to 2^lambda of 1/2^lambda]. Now the inner summand has no n in it, so the inner sum is just 1/2^lambda times the number of terms in the block, which is 2^lambda - (2^{lambda-1} + 1) + 1 = 2^{lambda-1} — this endpoint minus that endpoint plus 1. So I get 1 + sum from lambda = 1 to l of 2^{lambda-1}/2^lambda, and the 2^{lambda-1} cancels against the 2^lambda, leaving just 1/2. So this equals 1 + sum from lambda = 1 to l of 1/2 — and there's no lambda left in the summand, so this is just 1 + l/2. So what did we do? We basically showed that each of these blocks is bounded from below by 1/2 — that's the 1/2 we get right there at the end. You can see this in the first few blocks, which I had written out: 1/2 is clearly at least 1/2; 1/3 + 1/4 >= 1/4 + 1/4 = 1/2, because 1/3 is bigger than 1/4; and 1/5, 1/6, 1/7 are each at least 1/8, so 1/5 + 1/6 + 1/7 + 1/8 >= 1/8 + 1/8 + 1/8 + 1/8 = 1/2; and so on. Maybe I should have said that before I went into the actual computation. So let me just summarize: this subsequence of partial sums satisfies s_{2^l} >= 1 + l/2 = (l + 2)/2, and as l gets very large, this gets very large. This implies the subsequence {s_{2^l}}, l = 1 to infinity, is unbounded, which implies the full sequence of partial sums is unbounded, and therefore the sequence of partial sums does not converge — and therefore the series does not converge. So we see that the converse of that theorem does not hold. I'll make a very passing mention of the fact that there are fields for which the converse does hold — not ordered fields, because again, an ordered field with the least upper bound property has to be R, and we've just shown the converse fails there. But if you look at the so-called p-adic numbers, they do have this property: if the sequence of terms converges to 0, then the series converges. We will never see p-adic numbers again; I just wanted to pay a little lip service to the fact that there are at least fields of numbers that do have this property.
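Before moving on: the dyadic-block bound is easy to watch numerically — a sketch in exact rational arithmetic, checking s_{2^l} >= 1 + l/2 for a few values of l:

```python
from fractions import Fraction

def dyadic_partial_sum(l):
    # s_{2^l} = sum of 1/n for n = 1, ..., 2^l, in exact arithmetic.
    return sum(Fraction(1, n) for n in range(1, 2 ** l + 1))

for l in range(1, 8):
    s = dyadic_partial_sum(l)
    assert s >= 1 + Fraction(l, 2)            # the block-by-block lower bound
    print(f"l = {l}: s_(2^l) = {float(s):.4f} >= {1 + l / 2}")
```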
So we had a theorem about how limits of sequences interact with algebraic operations; this naturally implies a theorem about series. Let alpha be in R, and suppose the series sum of x_n and sum of y_n both converge. Then the series sum of (alpha x_n + y_n) — so the terms of my new series are alpha times x_n plus y_n — is a convergent series, and its sum is what you expect: the sum of the series of alpha x_n + y_n equals alpha times the sum of the x_n's plus the sum of the y_n's. This theorem follows essentially immediately from what we did for sequences. The partial sums satisfy: sum from n = 1 to m of (alpha x_n + y_n) — now just by the linearity of adding up finitely many terms — equals alpha times the m-th partial sum of the x_n's plus the m-th partial sum of the y_n's. We're assuming both of those sequences of partial sums converge, so the right-hand side converges, which implies the left side converges, by the linearity properties of limits — namely, the limit of a sum is the sum of the limits, and multiplication by a fixed real number commutes with taking limits. So the limit as m goes to infinity of the partial sums of the new series equals alpha times the sum of the x_n's plus the sum of the y_n's. And that's the end. Now, remember we had certain sequences for which we could decide convergence a little more easily than for an arbitrary sequence. At least one example is a monotone increasing sequence: we showed that a monotone increasing sequence converges if and only if it's bounded. We're going to use this to say something about series — not sequences, but series — whose terms are non-negative, because the partial sums of a series with non-negative terms form a monotone increasing sequence, and that's not too hard to show. So here's the theorem about series with non-negative terms: if for all natural numbers n, x_n >= 0, then the series sum of x_n converges if and only if the sequence of partial sums {s_m} is bounded. Again, the way we see this is that when the terms are non-negative, the sequence of partial sums is monotone. So here's the proof; it's quite easy. For all natural numbers m, s_{m+1} = sum from n = 1 to m+1 of x_n = (sum from n = 1 to m of x_n) + x_{m+1}, and the x_{m+1} term is non-negative — we're assuming all the terms are non-negative — so the right-hand side is at least sum from n = 1 to m of x_n, which equals s_m. Just summarizing: for all natural numbers m, s_{m+1} >= s_m. If I just keep adding non-negative things, the partial sums get bigger, so the sequence of partial sums is monotone increasing. Maybe I should have stated this slightly differently, just so that you don't think the non-negativity is part of one of the if-and-only-ifs.
I mean, the non-negativity is the assumption we have for the whole statement; the conclusion is that the series converges if and only if the sequence of partial sums is bounded. So, based on the assumption that all the terms are non-negative, we see that the sequence of partial sums is monotone increasing, and then what we proved for sequences — every monotone increasing sequence converges if and only if it's bounded — finishes it. And that's it. Now, not every series we look at has non-negative terms, but we can always form from those terms a new series with non-negative terms, whose convergence properties give us information about the original series. What am I going on about? We have the following definition: a series sum of x_n converges absolutely — or we say we have absolute convergence — if the series formed by taking the absolute values of the terms, sum of |x_n|, converges. What I was trying to get at before stating this definition is that absolute convergence implies usual convergence: if the series converges absolutely, then the original series converges. Now, before I prove this theorem, let me prove a little, small theorem — I can't remember if I gave it as an assignment or not — which is essentially the triangle inequality for however many terms you like. So first, the following theorem: if m >= 2 and x_1, ..., x_m are in R, then |sum from n = 1 to m of x_n| <= sum from n = 1 to m of |x_n|. When m = 2, this is just the usual form of the triangle inequality: |x_1 + x_2| <= |x_1| + |x_2|. But, as life typically works, at least in analysis, if you can do it for two things, then you can do it for m things by induction, and that's how we're going to prove this. Now, in the induction proofs we've done so far, little n was the variable we inducted on; in this statement, m is the thing we do induction on. So let's look at the base case, m = 2: this is just the triangle inequality for two real numbers that we've already proved before. So the base case is fine. Now the inductive step. Usually I use m, but now I'll go to the next letter, l, going in reverse alphabetical order. Label the statement star; suppose star holds for m = l, and now we want to show star holds for m = l + 1. Let x_1, ..., x_{l+1} be in R. Then |sum from n = 1 to l+1 of x_n| = |(sum from n = 1 to l of x_n) + x_{l+1}|, and by the usual triangle inequality for two terms, this is less than or equal to |sum from n = 1 to l of x_n| + |x_{l+1}|.
And now for the first term, since I'm assuming the m = l case holds, the inductive hypothesis says |sum from n = 1 to l of x_n| <= sum from n = 1 to l of |x_n|, so the whole thing is at most (sum from n = 1 to l of |x_n|) + |x_{l+1}|, which is just sum from n = 1 to l+1 of |x_n|. So we've proven the case m = l + 1, and that concludes the proof of this generalized triangle inequality with an arbitrary number of terms. So let's get back to proving the theorem that absolute convergence implies convergence. We'll do that by proving that, assuming absolute convergence, the series is Cauchy — and from last time, following from the statement for sequences, we have that a series is Cauchy if and only if it converges. So we have to prove the series is Cauchy. Remember, this means: for all epsilon > 0, there exists a natural number M such that for all l > m >= M, |sum from n = m+1 to l of x_n| < epsilon. So let epsilon > 0. Since we're assuming the series is absolutely convergent, the series of absolute values converges, and hence is also Cauchy. That means there exists a natural number M_0 such that for all l > m >= M_0, sum from n = m+1 to l of |x_n| < epsilon. Now, this should have an absolute value on the outside, but it's a sum of non-negative terms, so the absolute value can be removed. You can essentially see where we're going based on what's written on the board — what we want to prove, what we know, and this triangle inequality. Choose M = M_0. Then if l > m >= M, the absolute value of the sum from n = m+1 to l of x_n is less than or equal to the sum from n = m+1 to l of |x_n|, by the theorem we proved just a minute ago, and this is less than epsilon by our choice of M — M equals M_0, and for M_0 we have that inequality right there. Thus the series is Cauchy, which implies it converges. Basically, the tests you know for determining when a series converges fall into one of two camps: either the series has a very simple, special form — for instance, alternating signs, handled by the alternating series test, which we'll discuss in a little bit, possibly next lecture — or the series converges absolutely, and for absolute convergence we have a lot of tests. And we'll see that series which converge absolutely are somehow not fickle, meaning I can rearrange the terms and the rearranged series will still converge absolutely, and converge to the same thing the original series converged to. So let me just make a brief comment after this theorem that absolute convergence implies usual convergence, and tie in a little to what I just said. We'll show that the series sum from n = 1 to infinity of (-1)^n (1/n) converges. But note that this series does not converge absolutely, because when I take absolute values I just get the sum of 1/n — the harmonic series — which we showed a few minutes ago is divergent.
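To see the contrast numerically — a sketch, not a proof of either fact — here are partial sums of the alternating series next to partial sums of its absolute values; the first settles down (toward -ln 2, a fact we haven't proved here), while the second keeps growing:

```python
import math

signed, absolute = 0.0, 0.0
for n in range(1, 100_001):
    signed += (-1) ** n / n     # partial sums of sum (-1)^n / n
    absolute += 1 / n           # partial sums of the harmonic series

print(signed, -math.log(2))     # the signed sums settle near -ln 2
print(absolute)                 # the absolute sums keep growing (about 12.09 here)
```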
So now we're going to move on to some convergence tests. Now, when it comes to convergence tests, what these all follow from is basically what we know about geometric series and the following comparison test — although when I do the proofs of the other convergence tests, I won't always state that I'm using the comparison test, that's what's really getting used. So the first test we have is the comparison test. The statement is the following: suppose that for all natural numbers n, 0 <= x_n <= y_n — non-negative terms, one smaller than the other. Then the conclusion is: (1) if the bigger series sum of y_n converges, then the smaller series sum of x_n converges; and (2) if the smaller series sum of x_n diverges, then the bigger series sum of y_n diverges. How we're going to prove this: we're dealing with non-negative terms, so we'll use the theorem we proved about series of non-negative terms — such a series converges if and only if its sequence of partial sums is bounded. For (1): if the series sum of y_n converges, the sequence of its partial sums is bounded, meaning there exists a non-negative number B such that for all natural numbers m, sum from n = 1 to m of y_n <= B. But this immediately implies, since all the x_n's are less than or equal to the y_n's, that the m-th partial sum of the x_n's, which is less than or equal to the m-th partial sum of the y_n's, is also less than or equal to B, for all m. So the sequence of partial sums of the x_n's is bounded, and therefore, by the theorem we proved — which I think I have erased already — the series sum of x_n converges. Now, proving (2) is essentially the same thing, except the inequalities go the other way: the partial sums of the x_n's getting bigger forces the partial sums of the y_n's to get bigger too. So for (2): if the series sum of x_n diverges, then its sequence of partial sums is unbounded — a bounded monotone sequence would converge. We'll now prove this implies the partial sums of the y_n's are unbounded. Remember, bounded means there exists a non-negative number B such that the bound holds for all m; so unbounded means that for every B the inequality is violated somewhere. Let me put in a box what this actually means: for all B >= 0, there exists a natural number m such that sum from n = 1 to m of y_n >= B. That's what we must show. This is a for-all statement, so I have to prove it for every B: let B >= 0. (To be careful, unboundedness is really about the absolute value of the partial sums, but all of these terms are non-negative, so I can remove the absolute values — and the same goes for the x_n's.) Now, since we know the partial sums of the x_n's are unbounded, there exists a natural number m_0 such that sum from n = 1 to m_0 of x_n >= B. We put a little m_0 there because we have to somehow produce a little m. Choose m to be this m_0.
So now, if we look at the partial sum for the y n's — with m equal to m sub 0 — this is bigger than or equal to the corresponding partial sum for the x n's, which is bigger than or equal to B. Thus the partial sums corresponding to the y n's are unbounded, and therefore this series diverges. So let's use the comparison test to consider series like the sum of 1 over n to the p — p-series — and prove when they converge. So, theorem: for p a real number, the sum from n equals 1 to infinity of 1 over n to the p converges if and only if p is bigger than 1. So for the proof: why does the series converging imply p has to be bigger than 1? I'll do this by contradiction. So suppose the sum from n equals 1 to infinity of 1 over n to the p converges, and suppose p is less than or equal to 1. Then 1 over n to the p, where p is less than or equal to 1, is bigger than or equal to 1 over n. And since the series corresponding to 1 over n diverges, this implies that the series corresponding to 1 over n to the p diverges, by the comparison test — which is a direct contradiction to what we're assuming, that the series converges. So this must be false: p must be bigger than 1. So we've shown that if this series converges, then p has to be bigger than 1. So now let's prove the other direction: suppose p is bigger than 1, and prove that the p-series, the sum of 1 over n to the p, converges. The way we're going to do this is like how we showed that the harmonic series is divergent. Remember, this series converges if and only if the sequence of partial sums is bounded, and what we're going to do first, towards that, is prove that there is a subsequence of partial sums which is bounded. So we make a first claim: the partial sums s sub 2 to the k — this is the sum from n equals 1 to 2 to the k of 1 over n to the p, for k a natural number — are bounded by a fixed number depending on p, namely 1 plus 1 over the quantity 1 minus 2 to the minus (p minus 1). In other words, this subsequence of partial sums, s sub 2 to the k, is bounded. So again, we do this by grouping the terms according to which powers of 2 the denominator is between, and then estimating from above now, rather than from below like we did for the harmonic series. So let's write this out one more time: s sub 2 to the k is equal to 1, plus 1 over 2 to the p, plus the block 1 over 3 to the p plus 1 over 4 to the p, plus the block 1 over 5 to the p up through 1 over 8 to the p, and so on, up until the last block. And now I can write this as 1 plus the sum from l equals 1 to k — that's the number of blocks I have here — of the sum over the terms that come in each block, n from 2 to the l minus 1, plus 1, up to 2 to the l, of 1 over n to the p. And now I estimate 1 over n to the p not from below, but from above, by putting in the smallest n in the block. So this is less than or equal to 1 plus the sum from l equals 1 to k of the sum from n equals 2 to the l minus 1, plus 1, to 2 to the l, of 1 over the quantity 2 to the l minus 1 plus 1, raised to the p-th power. Now this plus 1 is just making things bigger on the bottom, so if I remove it, I've made things bigger overall for this fraction. So this is less than or equal to 1 plus the sum from l equals 1 to k of the sum from n equals 2 to the l minus 1, plus 1, to 2 to the l, of 1 over 2 to the p times (l minus 1).
And now this thing here, if we do the same algebra we did a minute ago, is equal to 1 plus the sum from l equals 1 to k of this term, 1 over 2 to the p times (l minus 1), coming out, times the number of terms in each block. Just like for the harmonic series, the number of terms is 2 to the l, minus the quantity 2 to the l minus 1 plus 1, plus 1 — which is 2 to the l minus 1. So this is equal to 1 plus the sum from l equals 1 to k of 2 to the l minus 1 times 2 to the minus p times (l minus 1), and this whole thing is 2 to the minus (p minus 1), raised to the power l minus 1. Now I can shift this index: l starts at 1 and goes to k, and here I have l minus 1 in the exponent, so I can shift the index to go from l prime equals 0 to k minus 1 of 2 to the minus (p minus 1) raised to the l prime. This is like making a change of variables, l prime equals l minus 1. And p is bigger than 1, so this corresponds to a geometric series now. So let me actually rewrite this as 1 over 2 to the p minus 1, raised to the l prime. When p is bigger than 1, 1 over 2 to the p minus 1 is less than 1. So this thing is the (k minus 1)-st partial sum for the geometric series with this as r, and it's always bounded above by what I get if I add up all the terms — which equals that thing I have up there, 1 over the quantity 1 minus 2 to the minus (p minus 1). So that proves that along this subsequence, the partial sums are bounded by this fixed number, 1 plus 1 over 1 minus 2 to the minus (p minus 1). And now I claim that this proves that the whole sequence of partial sums is bounded — in fact, by the same number: for all m a natural number, s sub m is less than or equal to this number again, 1 plus 1 over 1 minus 2 to the minus (p minus 1). So let m be a natural number; we're trying to prove this bound. What do we do? We find a dyadic number, a number of the form 2 to the k, bigger than m. And since 2 to the m is bigger than m — I think that's maybe one of the first things we proved by induction — we get that s sub m, which is a partial sum of non-negative terms and therefore monotone increasing in m, is less than or equal to s sub 2 to the m, which is less than or equal to 1 plus 1 over 1 minus 2 to the minus (p minus 1). Thus the sequence of partial sums is bounded, which implies this series converges. And that's the end of the proof, and I think we'll stop there.
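The dyadic-block estimate is easier to read written out. The display below is just the chain of inequalities above, and after it is a quick numerical sanity check — a sketch with my own function names, not part of the lecture:

\[
s_{2^k} \;=\; 1+\sum_{l=1}^{k}\ \sum_{n=2^{l-1}+1}^{2^{l}} \frac{1}{n^{p}}
\;\le\; 1+\sum_{l=1}^{k} \frac{2^{l-1}}{\big(2^{l-1}\big)^{p}}
\;=\; 1+\sum_{l'=0}^{k-1} \big(2^{-(p-1)}\big)^{l'}
\;\le\; 1+\frac{1}{1-2^{-(p-1)}}.
\]

```python
# Numerical sanity check (not a proof): for p > 1, partial sums of the
# p-series stay below the bound 1 + 1/(1 - 2**(-(p - 1))) derived above.
def p_series_partial_sum(p, m):
    return sum(1.0 / n**p for n in range(1, m + 1))

for p in (1.5, 2.0, 3.0):
    bound = 1.0 + 1.0 / (1.0 - 2.0 ** (-(p - 1)))
    s = p_series_partial_sum(p, 100_000)
    print(f"p = {p}: s_100000 = {s:.6f} <= bound = {bound:.6f}")
```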
MIT_18100A_Real_Analysis_Fall_2020
Lecture_19_Differentiation_Rules_Rolles_Theorem_and_the_Mean_Value_Theorem.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So let's continue with our discussion of the derivative. Let me recall that the derivative of a function at c, if it exists, which we denote by f prime of c, is the limit as x goes to c of f of x minus f of c, over x minus c. Last time, we looked at the relationship between continuity and differentiability and showed that differentiability implies continuity, but that having a derivative is quite a miraculous thing, because there are continuous functions — which we actually constructed; we constructed a continuous function, and you can generalize that example — that are differentiable nowhere. So in fact, differentiability is a much stronger condition than continuity, and it's something of a miracle. Just as a side note: if you go on to study complex analysis — functions of a complex variable rather than of a real variable — you also have the notion of a derivative there, and there the derivative is much more miraculous than it is here in the setting of R. But the whole point is, the derivative is something of a miracle. So today, the goal is to prove some properties of the derivative. Most of these you know; maybe you didn't cover the proofs of them in calculus. The ultimate goal is proving the mean value theorem, which is, to me, probably the most underappreciated but most important result in calculus. Now, you could argue that the hero of the story of calculus is the fundamental theorem of calculus, which connects integration — which we'll cover next, after differentiation — and differentiation. And that really is the hero of calculus; it gets used more than anything else. But to prove it, you use the mean value theorem. So if the fundamental theorem of calculus were Batman, I would say the mean value theorem is something like Alfred: he's really the reason why Batman gets to be who he is. Enough analogies for now. Let's get back to proving some properties of the derivative. So first, let's do the basic linearity and arithmetic rules for the derivative. So I have two functions, f and g, from I to R — I is always some interval — and c is a point in I. And the conclusion is: if f and g are differentiable at c, then we have several rules that we can apply. The first is basically the linearity of the derivative: for all alpha in R, the function alpha f plus g, which goes from I to R, is differentiable at c, and the derivative, alpha f plus g prime of c, is alpha times f prime of c, plus g prime of c. I think I'm starting to write a little slanted here. Second is the product rule: the function f times g is differentiable at c, and the derivative of the product is not the product of the derivatives, but f prime of c times g of c, plus f of c times g prime of c. And then we also have the quotient rule. And just like whenever you need to divide by something, you need to assume that you're never dividing by 0: if g of x does not equal 0 for all x in I, then the function f over g is differentiable at c, and the derivative is the derivative of the top times the bottom, minus the top times the derivative of the bottom, over the bottom squared — f prime of c times g of c, minus f of c times g prime of c, over g of c squared. So I'll prove one and two. Three we'll leave as an exercise.
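For reference, the three rules from the board in display form — nothing new here, just the statements:

\[
(\alpha f + g)'(c) = \alpha f'(c) + g'(c), \qquad
(fg)'(c) = f'(c)\,g(c) + f(c)\,g'(c), \qquad
\Big(\frac{f}{g}\Big)'(c) = \frac{f'(c)\,g(c) - f(c)\,g'(c)}{g(c)^{2}}.
\]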
So for one, we just compute: the limit as x goes to c of alpha f plus g of x, minus alpha f plus g of c, over x minus c — this is just, by definition, alpha times f of x, plus g of x, minus alpha times f of c, minus g of c, all over x minus c. And so, collecting terms, this equals the limit as x goes to c of alpha times the quantity f of x minus f of c over x minus c, plus g of x minus g of c over x minus c. And all of these limits exist: alpha is just a fixed number, the limit as x goes to c of this exists, and this exists. So by what we know about limits — namely that the limit of a sum is the sum of the limits, and scalars just pull out of the limit — this is equal to alpha times f prime of c, plus g prime of c. So that's the proof of the first one. For the proof of the second one, we'll use the fact that a function which is differentiable at a point is also continuous at that point. So since g is differentiable at c, it is continuous at c — i.e., the limit as x goes to c of g of x equals g of c. Now we compute the limit as x goes to c of f of x times g of x, minus f of c times g of c, over x minus c. What I'm going to do is add and subtract f of c times g of x. So this I can write as the limit as x goes to c of f of x minus f of c, over x minus c, times g of x, plus f of c times g of x minus g of c, over x minus c. And again, all these limits exist: this first limit is just the derivative of f at c; and we noted that since g is differentiable at c, g is also continuous at c, meaning the limit as x goes to c of g of x equals g of c; this is just the constant f of c; and this goes to g prime of c. And therefore we get that this is equal to f prime of c times g of c, plus f of c times g prime of c. And three I'll leave as an exercise. Again, you'll add and subtract something in f of x over g of x minus f of c over g of c, use the fact that differentiability at a point implies continuity, and just evaluate the limits. So I'm going to stop with these two rules — I don't know why I called it the multiplication rule; it's called the product rule. Well, multiplication and product are the same thing. But we also have the chain rule, which requires a little more care to prove than what we've done so far for these rules. So suppose I have two intervals, and I have a function g going from I1 to I2, and f going from I2 to R. And suppose g is differentiable at c, and f is differentiable at g of c. Then the composition f of g, a function from I1 to R, is differentiable at c, and the derivative equals f prime of g of c times g prime of c. So what's the basic idea? The basic idea is that we write the difference quotient, f of g of x minus f of g of c, over x minus c, as: f of g of x minus f of g of c, over g of x minus g of c, times g of x minus g of c, over x minus c. And we let x go to c. Then this second factor picks up the derivative of g at c. And right here, the first factor is f of something converging to g of c, minus f of g of c, over that something minus g of c — so this should be like f prime of g of c. But the only problem with really writing this down is that there could be points where g of x equals g of c, and then I'm dividing by 0. That's not allowed, so writing this expression is meaningless. But we're going to do something like this in spirit, where at least when g of x does not equal g of c, this thing kind of equals this thing. That's the basic idea. So to implement this strategy, let's introduce some notation and a few auxiliary functions.
So first off, the function we're interested in taking the derivative of, f composed with g — let me call that h of x, because I don't want to have to keep writing f of g of x. And let's call d the point g of c. So we want to show that h prime of c exists and is equal to f prime of d times g prime of c, in this notation I've set up. So now let me define some auxiliary functions. u of y — this is going to be essentially the difference quotient of f, except at a certain point. So this is f of y minus f of d, over y minus d, when y does not equal d, so that this is meaningful; and f prime of d when y equals d. And then v of x — this is going to play the role of that second factor I wrote up there, the difference quotient of g. So v of x will be g of x minus g of c, over x minus c, when x does not equal c, and g prime of c when x equals c. Now, essentially what do we have here? If I look at f of y minus f of d, this is equal to u of y times y minus d, always. Because if y does not equal d, then I divide over, and I get f of y minus f of d on the left-hand side over y minus d, and that's equal to u of y by definition. And when y equals d, I get 0 here and 0 over here. So that's clear. And the same here: g of x minus g of c equals v of x times x minus c. So we have these two expressions. And one more thing I want to note is that u and v are continuous at y equals d and x equals c, respectively. So note that u of y is continuous at d, and v of x is continuous at c. So let's look at the proof of that. We just need to show that the limit as y goes to d of u of y equals u of d. So compute the limit as y goes to d of u of y. Now, when we look at limits, remember, we're not allowed to put y equals d into the expression for this limit; we're always looking at points y close to d but not equal to d. And therefore, when y is near d and not equal to d, u of y is given by this expression. So this is the limit as y goes to d of f of y minus f of d over y minus d. And this is just the definition of the derivative of f at d — remember, d is g of c, so f is differentiable there. And that is by definition u of d. So this function u is continuous at y equals d. And similarly, the limit as x goes to c of v of x equals v of c, which is g prime of c. So now we're going to combine what I've written here to finish the proof. h of x minus h of c is equal to f of g of x minus f of g of c. And this is equal to u of g of x times g of x minus g of c — this is from the first relation; we'll call that one star, applied with y equals g of x. And now I use the second relation, double star, to write g of x minus g of c equals v of x times x minus c. And therefore, the limit as x goes to c of h of x minus h of c over x minus c is equal to the limit as x goes to c of u of g of x times v of x. Now, u is continuous at g of c, g is continuous at c, and we proved that the composition of two continuous functions is continuous. So that first factor converges to u of g of c. And for v of x, we've proven already that the limit is v of c. And u of g of c — g of c is d, remember — equals u of d, which is f prime of d; that is, f prime of g of c. Times v of c, which is g prime of c.
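In display form, the computation just performed reads — same steps, nothing new:

\[
u(y) = \begin{cases} \dfrac{f(y)-f(d)}{y-d}, & y \ne d,\\[4pt] f'(d), & y = d,\end{cases}
\qquad
v(x) = \begin{cases} \dfrac{g(x)-g(c)}{x-c}, & x \ne c,\\[4pt] g'(c), & x = c,\end{cases}
\]

so that \(h(x)-h(c) = u(g(x))\,v(x)\,(x-c)\) for every \(x\), and therefore

\[
h'(c) = \lim_{x\to c} u(g(x))\,v(x) = u(g(c))\,v(c) = f'(g(c))\,g'(c).
\]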
So to go from this point to this point, we use the fact that u is continuous at g of c, which we proved over there, and that g is continuous at c. Therefore the composition of two continuous functions is continuous at that point, and we arrive at this guy. But again, the whole basic idea was to be able to write the difference quotient as a product of what looks like the difference quotient of f times the difference quotient of g. But you have to be careful, because g of x could equal g of c, in which case you're dividing by 0 in that basic idea we had up there, which is not allowed. You get around it by using these functions, which equal what you want as long as you're not dividing by 0, basically. So that wraps it up for basic properties of the derivative. And now we're going to move on to trying to prove the mean value theorem — the Alfred of this Batman tale. But first, in order to prove this, we'll need a result about the derivative in relation to relative minima and relative maxima. So let me first define what this means. Let S be a subset of R, and f be a function from S to R. We say f has a relative max at c in S if there exists a delta positive such that for all x in S with the absolute value of x minus c less than delta, f of x is less than or equal to f of c. And there's an analogous definition of a relative min at c: you just take this entire definition and flip the inequality. So what does this mean pictorially? Remember, we had the notion of absolute min and absolute max. An absolute max at a point c means at every other point in S, f of x lies below f of c. Here we're just saying: as long as you're close to the point c, you lie below f of c. So let's say our function looks something like that. Then, let's say we're at this point here. This would be an absolute max, because the graph of the function sits below the value of the function at this point for all x in this interval a, b. And then here, at this point, we would have an absolute min, because the graph of the function sits above that point for all x in this interval. But now, if I go to this point, say, this would be a relative max. Why? If I look at a small interval around it — there are too many dashed lines going on, so I'm going to erase this; yeah, I'm just going to do away with the dash — if I zoom in on just this one piece of the graph, then at this point, the graph sits below this point inside this little vertical strip. So this guy would be a relative max. And then similarly, at this point here — let me draw it so you can really see what the graph is; I should have brought different colors — at this point, we have a relative min, because nearby points sit above the value of the function evaluated at this point. So relative max means that nearby, f sits below f evaluated at that point, and relative min is the other way around. I don't think we need to belabor the point of what relative mins and relative maxes are; I think that's pretty clear. So the theorem is the following. If f from a, b to R has a relative min or max at a point c strictly inside this closed interval — in the open interval a, b — and f is differentiable at c, then f prime of c equals 0. And this is kind of clear from this picture — although mine might look more like a sawtooth, it shouldn't — the tangent at this point is horizontal.
And the tangent at this point is horizontal, which is just expressing that f prime of c equals 0, because f prime of c is supposed to be the slope of the tangent. So the idea of the proof — and we'll do it for a relative max; the relative min case you get by taking minus f. So suppose f has a relative max at c in a, b. I'm just going to draw a picture to illustrate why what I'm about to say is the case. So we have a, b, and we have c. Then there exists delta positive such that two things are true. One: the interval c minus delta, c plus delta is contained in a, b. Two: for all x in c minus delta, c plus delta, f of x is less than or equal to f of c. How is delta chosen? First off, the open interval a, b is an open set, which we've encountered in the assignments, so you can find a delta 0, say, so that the interval of radius delta 0 centered at c is contained in a, b. Now, since f has a relative maximum at this point, there exists another delta 1, say, so that if x is inside that interval, then f of x is less than or equal to f of c. And I choose delta to be the minimum of these two deltas. Then c minus delta, c plus delta is contained in a, b — that's from delta 0 — and f of x is less than or equal to f of c on it — that's from delta 1. So this is the delta I take for this statement. Now, the way we get that the derivative is 0 is: we're going to approach c from above and from below, and use the fact that we have a relative max to show that the derivative is both bigger than or equal to 0 and less than or equal to 0. So let x sub n be c minus delta over 2n, which is in the interval c minus delta, c for all n. Then what do we know? The limit as n goes to infinity of c minus delta over 2n is c, so x sub n converges to c. So the derivative at c is equal to the limit as n goes to infinity of f of x sub n minus f of c, over x sub n minus c. Now, what do we note? x sub n is in this interval here, which is contained in this interval here, and therefore f of x sub n is always less than or equal to f of c. So what's on top is always less than or equal to 0. Now, x sub n is always less than c, so the thing on the bottom is less than 0. And therefore, something that's less than or equal to 0 on top, over something that's less than 0 on the bottom, must be bigger than or equal to 0. So f prime of c is bigger than or equal to 0. That's approaching c from the left. If we approach c from the right — let y sub n be c plus delta over 2n, which is contained in c, c plus delta for all n — then y sub n converges to c. And if we look at f prime of c, again, this is equal to the limit as n goes to infinity of f of y sub n minus f of c, over y sub n minus c. Again, f of c is a relative maximum, so the thing on top is still less than or equal to 0. But the thing on the bottom, since y sub n is bigger than c, is now positive. So something less than or equal to 0 on top, over something positive on the bottom — the limit must be less than or equal to 0. So we've just shown f prime of c is both bigger than or equal to 0 and less than or equal to 0, and therefore the derivative is 0 at this point. And again, the proof for a relative minimum is very similar, except the inequality flips signs, and therefore you would get that this quotient is less than or equal to 0 and that one is bigger than or equal to 0.
But you still get f prime of c is bounded between 0 and 0. So at a point where we have a relative min or max strictly inside the interval, the derivative must be 0. And the fact that it happens strictly inside the interval is important: this is not necessarily true if the relative max or relative min happens at an endpoint. Think of f of x equals x on 0, 1. It has an absolute minimum at 0 and an absolute maximum at 1, and the derivative is 1 at both points, not 0. So this is only for relative mins and maxes that occur strictly inside the interval the function is defined on, and where it's differentiable. So now we have Rolle's theorem, which is essentially the mean value theorem rotated — we'll get the mean value theorem from Rolle's theorem. It states the following: let f from a, b to R be continuous — so it's continuous at every point in the closed interval a, b — and differentiable at every point inside of a, b. It could be differentiable at the endpoints, that's fine, but it has to be differentiable on the open interval a, b. If f of a equals f of b equals 0, then there exists a point c in the open interval a, b — and this is important — such that f prime of c equals 0. And what's the picture that goes with this? I'm sure you've seen it before. We draw a function that looks kind of like sine or cosine — anyway, a function which is 0 here and 0 here. Then there has to exist a point where the tangent is horizontal. In fact, here it can occur at two different points. But something that's already kind of giving the game away, as far as how we'll prove this: let's look at where the function takes a maximum and a minimum. That is the heart of the proof. And why does it even have an absolute maximum or absolute minimum? So I guess I didn't say this over there when I discussed relative min and relative max, so let me make this kind of late remark: an absolute max is a relative max, and an absolute min is a relative min. So this picture has kind of given the game away already: let's look at where the function takes a max and a min. And why does f take a max and a min? It's because f is continuous on this closed and bounded interval — this is something we proved when we were discussing continuity, the min-max theorem. So let's give the proof. Since f is continuous on a, b, f achieves an absolute max — in particular a relative max — at some point c1 in a, b, and an absolute min — in particular a relative min — at some c2 in a, b. If f of c1 is positive, then what can you say? Well, c1 cannot be one of the endpoints, because at the endpoints f of a equals f of b equals 0. This implies c1 is in the open interval a, b, which implies, by the previous theorem, f prime of c1 equals 0. So here we take c equals c1. If the minimum, f of c2, is less than 0, then again this implies that c2 is in the open interval a, b, which implies, since f has a minimum at c2, f prime of c2 equals 0 by the previous theorem. And therefore we could take c equals c2. So if either f of c1 is positive or f of c2 is less than 0, then we have the result. That leaves the case where f of c1 is less than or equal to 0 and f of c2 is bigger than or equal to 0. Now remember, f achieves a max at c1 and a min at c2, so f of c2 should always be less than or equal to f of c1 — let me make sure I'm getting my logic correct here. So rather than state it this way, let's do it like this.
So in terms of the picture here, this would be c1, c2, where we achieve a max and a min. So, going back to what I was going to say a minute ago: in the last case, f of c1 is less than or equal to 0, and f of c2 is bigger than or equal to 0. Now remember, f of c1 is the max and f of c2 is the min of f, so f of c1 should always sit above f of c2. So that implies that f of c1 equals f of c2 — the max and the min equal each other, and both are 0. Since f of c2 is less than or equal to f of x, which is less than or equal to f of c1, for every x — I started off with something here, there, and in between — this implies for all x in a, b, f of x equals f of c2. So that means f is constant. And we know what the derivative of a constant function is: it's just 0. So we could take c to be, say, the midpoint. And that's the end. Sorry that I kind of fumbled through that for a minute. So for any function which is 0 at the endpoints, there has to be a point in between where the derivative of the function equals 0. Now, again, we should come back to this: when we see a theorem, we should pick it apart a little bit to see what's necessary and what's not. We had these two hypotheses coming in — that the function is continuous on the closed interval and differentiable on the open interval a, b. Are these necessary? So for example, let's say I look at the function 1 minus the absolute value of x on minus 1, 1. This is a continuous function on minus 1, 1, and f of minus 1 equals f of 1 equals 0. But there is no point where f prime of c equals 0. What hypothesis did I leave out for this function? The hypothesis that it's differentiable at every point in between — this function is not differentiable at 0. And there's no point where the derivative equals 0. So this example tells you that the hypothesis that the function is differentiable on the open interval minus 1, 1 — or a, b — is necessary for the theorem to be true. That's what I'm getting at. Now, the other hypothesis, that the function is continuous on the closed interval, is also necessary, and it's pretty easy to come up with a counterexample if I don't assume that. Let's take the function that's x minus 1 for x not equal to minus 1, and 0 for x equal to minus 1. So what does this function look like? This function is differentiable on minus 1, 1, and f of minus 1 equals f of 1 equals 0. But again, there's no point c in minus 1, 1 so that f prime of c equals 0, because the derivative in between minus 1 and 1 is just 1. So both of these hypotheses — that f is continuous on the closed interval and differentiable on the open interval — are necessary for this theorem to be true. If you drop either of those hypotheses, then the theorem is false. So let's rotate this picture here and arrive at the mean value theorem. Let f from a, b to R be continuous — so it's continuous on the closed interval a, b — and differentiable on the open interval a, b. Then there exists a c in the open interval a, b such that f prime of c times b minus a equals f of b minus f of a — let me write it this way. So why do I refer to it as a rotated and shifted version of Rolle's theorem? So there's a, b. Here's f of a, f of b. And let's say that's how the function looks. Now, the slope of the line connecting f of a to f of b is exactly f of b minus f of a, over b minus a. So the slope of that line is f of b minus f of a, over b minus a.
And what we're stating is that there's a point c so that the tangent at that point is parallel to that line — it has the same slope as the line connecting f of a to f of b. So this theorem really does reduce to Rolle's theorem. Let me define a function g from a, b to R which satisfies the hypotheses of Rolle's theorem and will give us what we want, basically. So g of x equals f of x, minus f of b, plus f of b minus f of a, over b minus a, times b minus x. Basically, what this function g does is take the function and rotate it and shift it down, so that these two endpoint values coincide and give you 0. Then g is continuous on the closed interval a, b. Why? Because it's a sum of continuous functions on a, b: f of x is continuous on a, b, this is just a constant, and this is a polynomial. So g is continuous on a, b, and differentiable on the open interval a, b — again, because f is. f is differentiable on the open interval a, b, this other part is differentiable everywhere, and the sum of two differentiable functions is differentiable. So g is differentiable on a, b. And since I said we're going to use Rolle's theorem, let's compute g of a. This is equal to f of a, minus f of b, plus f of b minus f of a, over b minus a, times b minus a. That b minus a cancels with that one; minus f of b cancels with f of b; f of a cancels with minus f of a; and we get 0. And g of b — this is even easier to see. If I just stick b in, this is f of b minus f of b, plus f of b minus f of a, over b minus a, times b minus b. And this also equals 0. So g of a and g of b are 0, the function is continuous on the closed interval, and it's differentiable on the open interval. And therefore, by Rolle's theorem — I think there's supposed to be some sort of accent over the O, but I can't remember, and I've forgotten it already — there exists a point c in a, b such that 0 is equal to g prime of c. And if we take g and actually compute what the derivative is at c: the f of x part gives f prime of c; the f of b term is a constant, so its derivative is 0; and this constant times the derivative of b minus x, evaluated at c, just gives me minus f of b minus f of a, over b minus a. And that's it: 0 is equal to f prime of c minus what we want. We move that to the other side, multiply through by b minus a, and we're done. So some first very nice applications of the mean value theorem, which I think you learned in calculus, are the following. Let f from I to R be differentiable, meaning it's differentiable at every point in the interval I. Then we have two conclusions. One: f is increasing — recall what that means: x less than y implies f of x is less than or equal to f of y — is equivalent to: for all x in I, f prime of x is bigger than or equal to 0. Remember, f prime is the rate of change; if the rate of change is always non-negative, then f has to be growing. And two: f is decreasing — meaning x less than y implies f of x is bigger than or equal to f of y — is equivalent to: for all x in I, f prime of x is less than or equal to 0. Since a function is decreasing if and only if minus f is increasing, this inequality just flips. So I'm going to do increasing — decreasing follows from number one by taking minus f. How much time do I have? I still have time. So suppose first that f prime of x is always non-negative. Let a, b be in I with a less than b. [SNEEZES] Excuse me. We now want to show f of a is less than or equal to f of b. Then f is continuous on this smaller interval a, b. Why?
Because f is differentiable at every point in I, and therefore it is continuous at every point in I — so in particular on this smaller interval — and differentiable on the open interval a, b. So by the mean value theorem, there exists a c in a, b such that f of b minus f of a equals f prime of c times b minus a. And now, since the derivative is always non-negative and b minus a is positive, this is bigger than or equal to 0 — i.e., f of b minus f of a is bigger than or equal to 0. So we've proven one direction: the derivative being non-negative implies that the function is increasing. So now let's prove the opposite direction. Suppose f is increasing and c is in I. Let x sub n be a sequence in I such that x sub n converges to c, and one of two things holds: either, a, for all n, x sub n is less than c, or, b, for all n, x sub n is bigger than c. And we can always find such a sequence, given a point c in the interval I. Say c is in the interior or at the right endpoint — then we can find a sequence x sub n converging to c, approaching c from the left. If c is the left endpoint, then we can find a sequence from I approaching c from the right — that's case b. So there always exists such a sequence, and this is because I is an interval. Now, if the sequence satisfies case a, then, since f is increasing, we get that f of x sub n is less than or equal to f of c for all n. Let me write this a little differently: for all n, look at f of x sub n minus f of c, over x sub n minus c. The thing on top is less than or equal to 0; and x sub n is less than c, so the thing on the bottom is negative. Therefore this quotient is non-negative, and therefore the limit must also be non-negative. But that limit is just the derivative evaluated at c. And case b is kind of similar — I mean, not kind of, it is. In case b, since x sub n sits to the right of c and f is increasing, we get that for all n, f of x sub n minus f of c is bigger than or equal to 0. This implies that f prime of c, which is the limit as n goes to infinity of f of x sub n minus f of c over x sub n minus c, has what's on top non-negative and what's on the bottom positive, because x sub n is bigger than c. So the quotients are bigger than or equal to 0. Thus, in either case, we get that f prime of c is bigger than or equal to 0, and therefore we've proven that the derivative is always non-negative. So that's one. And two: f is decreasing if and only if the function minus f is increasing, which, by what we've done in part one, is if and only if minus f prime of x is bigger than or equal to 0 for all x in I. And then multiplying through by minus 1 gives me number two. And so, let me make one last remark — I know it's taboo to write on the back part of the board, but I'm going to do it anyway. We have the very simple theorem which now follows. Let f from I to R be differentiable. Then f is constant — so f of x equals f of y for all x and y in I — if and only if the derivative is identically 0. Why? f is constant if and only if f is both increasing and decreasing, because it satisfies the inequality with equality in both of those. And by what we've done now, this means for all x in I, f prime of x is bigger than or equal to 0 — that's for the increasing part — and f prime of x is less than or equal to 0.
And therefore, this is equivalent to f prime of x equals 0. So we'll stop there.
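For reference, the main statements from this part of the lecture, gathered in display form — nothing new, just the results above:

\[
\textbf{Rolle:}\quad f \in C([a,b]),\ f \text{ differentiable on } (a,b),\ f(a)=f(b)=0
\ \Longrightarrow\ \exists\, c \in (a,b):\ f'(c) = 0.
\]

\[
\textbf{Mean value theorem:}\quad \exists\, c \in (a,b):\ f'(c) = \frac{f(b)-f(a)}{b-a}.
\]

\[
\textbf{Monotonicity:}\quad f \text{ increasing on } I \iff f' \ge 0 \text{ on } I;
\qquad f \text{ constant on } I \iff f' \equiv 0 \text{ on } I.
\]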
MIT_15S50_Poker_Theory_and_Analysis_IAP_2015
Decision_Making.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Everyone, welcome back to Poker Theory and Analytics. We're very lucky today to have a guest speaker, Matt Hawrilenko. Matt's background is really interesting. He was a Princeton grad who went to work for a Cisco International group for a couple of years, and then he left to become a professional poker player full time. He was considered one of the best limit hold'em players in the world, in addition to winning a World Series of Poker bracelet. More recently he retired from poker and is focusing on clinical psychology, which he studies in Boston at both Clark University and Harvard. So with that, I'll pass it along to Matt Hawrilenko. [APPLAUSE] MATT HAWRILENKO: OK. Hey. So I'm super excited to be here today. And as I understand it, I've been told that I'm your second game theory guy. Bill — Bill Chen, one of my best friends in poker, who I've done a lot of work with — came in and talked to you guys last week. I'm going to take a slightly different angle than Bill did last week. So first I kind of just want to get a sense of who's in the room. My presumption is that there might be sort of widely varying levels of experience with game theory. So if you just don't mind kind of helping me out: who here has seen a game theory game before, like prisoner's dilemma, something like that? OK. So most of us. Who here has actually solved a couple of game theory games, like with pen and paper, matrix algebra, whatever? OK. About half. Has anyone actually taken a game theory class? All right. Great. So I'm kind of hoping this talk is going to be equally useful for everyone, but we'll find out. So Bill talked to you guys about Cepheus last week. I'm going to talk about something a little different than Bill usually talks about: I'm going to try to talk about how to play good poker. So think of this problem of playing good poker as this big game space. There are two ways that people generally approach it. They'll take a read-based approach, which is pretty much what everybody does, right? You try to figure out what the other guy has, what he might have, and go from there. Or you can take a game theoretic approach. So I'm going to try today to discriminate a little bit between these two approaches, and I'm going to talk about why, whatever level of poker you're at, game theory is something that can complement your game or just be your game. For me, it is my game. And broadly speaking, the reason that I think game theory is particularly useful for poker goes something like this. One of the things that I love to do off the table is jujitsu. If you guys don't know what that is, it's kind of like wrestling where you're trying to choke the guy or hyperextend their arm or something like that. And when you start out doing jujitsu, you start out as a white belt, and you're competing against, rolling with, other white belts. And you can learn this whole repertoire of moves. And tons of stuff works against white belts, right? Like, whatever you do, it's probably going to work against white belts.
As you get better, or as you start going against better competition — blue belts, purple, brown, black belts — a lot of the moves that worked against white belts don't work anymore. Not only do they not work, they can tend to get you in quite a lot of trouble. So I think that the best way to approach this game is: from day one, you're not training to be white belts. You're training to be black belts, right? You want to learn the moves that work all the way along, against the best competition. And I think it's sort of the same way with poker, right? So if you think about it, if you play in a home game or whatever with a bunch of buddies, you might have one strategy that works fairly well, right? Maybe you play really tight, you bet a lot with your best hands, and you don't bluff that much. And this works really well, because most people don't really have a sense of what's what, of the strength of hands. They overvalue their hands. But you take that strategy to the Bellagio and you play even mid-limit games with reasonable players, and all of a sudden you're starting to get eaten alive. So I'm going to talk about how game theory can sort of help you avoid this and help you be strong and grow stronger all the way along. So now, if we think about game theory, really there are two audiences that we could be speaking to, right? So Bill came in last week and he talked about Cepheus, this algorithm that solved my best game. And, well, that is sad for me. I'm not going to spend a lot of time dwelling on that here. Rather, I'm going to talk about what humans can do — how humans can apply game theory at the table, to their game. And this is for a couple of reasons, but maybe the bigger one is: I've had a fair bit of math, but I can't solve the kinds of problems that Bill can solve. My work has sort of been on taking game theory, taking these concepts, and really using them in a practical sense. So that's what I want to talk about today. So if we're thinking about this, there are, again, a couple of angles we could take. We could talk about it from a theoretic perspective — we could solve a lot of games, we could solve dynamical systems, and see how that works. Or we could talk about it in practice: what does game theory tell me? How does it tell me I should think about this situation? Of course, I think the theory-practice dichotomy is a false dichotomy. So what I want to do is spend a little bit of time solving some really, really simple toy games — two toy games — and I want to use that to bridge the theory and practice gap. So I want to take a couple of toy games and then apply them to a real hand of high stakes poker I played. I'm going to use that hand to motivate how we might apply some of these ideas. So then, finally, at the end of the day, we can come up with a list, right? We can think of a rule-based strategy. That's something that learning from Cepheus might give you: it might tell you what to do in every specific situation. But that's really hard. Poker's really big. You can't really remember all of that. So the thing that I really hope you guys take away from my talk today is some training principles, some ways to think about the game which, as you finish this class in a couple of days and move on throughout your poker careers, whatever those might be, are tools that can help you continue working on your game. So let's just get going. So sometimes in poker we get put in really tough spots, and it can start to feel like we're trying to guess our way out of them.
Varying degrees of guess our way out of them. Sometimes we might be sort of making educated guesses, sometimes they might be less educated. But what do we do? What do you do in that really tough spot where you just don't know what's going on? So this happened to me a couple of years ago in-- this happens to me all the time. But this happened to me a couple of years ago in a World Series tournament that I played. We were deep. We were in the money. And the one thing I want you guys to keep in mind here is I'm playing this hand against a player who is just much better than me. So that sucks. On the other hand, I have aces. So what I'm going to do is I'm going to take you really briefly through the hand and let you know how I was thinking about it, and we'll kind of look at it from a couple perspectives. So we're both really, really deep. The blinds are 12 and 24,000 with a $3,000 ante, and we're six handed. So I raise two off the button with aces. I raise a little bit, he calls. Flop comes, king, jack, eight. He checks, I bet, he calls. It turns a five. He checks, I bet about 2/3 of the pot. He calls. And then the river's a king. And then he reaches for his chips, and I feel good. I'm like, oh, money. And then he bets just about $1.1 million to a $700,000 pot. And now I'm like, I don't know. I have no idea. So he's betting I'm happy. I see the amount. I'm confused. So here's how the hand looks. So I'm sitting there and I'm just trying to get a sense of his range, and I'm thinking, like, bet of 1.5 times the pot. What does that mean? What kind of hands is he betting so much with? Why isn't he putting me all in? Why is it a little below all in? Is he trying to entice me? Or maybe he's trying to save $300,000 if I fold. All these thoughts are going through my head. So before we talk about what I did, I'm actually curious. Are you guys happy with the information I gave you? Is there more information that you want? What do you want to know? AUDIENCE: How big is the gap between getting out now and getting out next? MATT HAWRILENKO: Ooh, good question. So let's just say it's pretty linear at this point. AUDIENCE: OK. MATT HAWRILENKO: Anything else you want to know? Oh, you want to know lots. What do you want to know? AUDIENCE: Has he shown down big bluffs? MATT HAWRILENKO: Has he shown down big bluffs? Good question, we've played with him a little bit, and I don't actually recall. But he's certainly capable of showing down big bluffs. I fancied myself a pretty good player at the time, and he is someone that I would consider one of the top players in the game. Anything else? Anyone else? Any questions? Right. And so it sounds like what you're trying to do is you're trying to get a sense of, what is his range? Does he have a lot of bluffs in this spot? Does he have any bluffs in this spot? And I think that's how pretty much most of us tend to approach the game. It's most natural way to approach the game. Is he bluffing, isn't he? Does he have too many bluffs here? Out of curiosity, does anyone want to stare him down for tells? Nobody wants to stare him down. You might want to stare him down. This guy wants to stare him down. [LAUGHTER] This guy definitely wants to stare him down. This guy wants to stare him down for sure. By the way, if you get this look, good work. It means you've done your job. It's uncomfortable. But I want to tell you exactly what he's thinking right now. What he's thinking goes something kind of like this. [MUSIC PLAYING] But it can feel like this, right? 
Tough players can put us in spots where we're just thinking about monkeys clashing cymbals. And when we don't have a repertoire, I think we start reading really heavily into small signals, right? We're trying to figure something out. So generally speaking, in my view, there are some tells that are OK. There are some tells that kind of work against weak players, and less and less against better players. But these are small signals in a whole lot of noise — a very small signal in a very noisy environment. Certainly not something to build a career on, right? In the big games, you rarely see tells. You just don't see them enough for that to be profitable. But actually, some people do build careers on them. FBI interrogators build careers on these kinds of tells. So how do they do? Well, we actually have some data on that. There have been a whole series of studies where basically the paradigm is: you bring in some FBI interrogators, and then you bring in some random people from the street, and you have them watch someone interview a person. And then at the end of the day, you have to figure out: is this person telling the truth, or is this person lying? How do you think they do? AUDIENCE: Same. MATT HAWRILENKO: Same? Yeah. Same in every category. So everyone is basically at chance, or maybe like 53%. So they do exactly the same as the random people off the street, these body language experts, except for one difference. The one difference is, they are way more confident that they are right, OK? You ask them their level of confidence, and most people are like, eh, I don't know, like 50%. The interrogators are, like, 90% sure that they're right. So they're not alone, right? I don't know if you guys have heard of self-assessment bias. It's one of my favorite biases. So one more study. It was sort of motivated by this old study of GE engineers — it's like 30 years ago. And what they do is they ask: OK, of all the engineers at GE, where do you rank? What is your percentile rank among everyone? And they ask basically everyone. So you would think, these being engineers, engineers being math guys, that they have a good sense of percentiles. You'd think that they might get this right. The average engineer ranks themself right about at the 80th percentile. Of all the people they asked — they asked, like, maybe a hundred of them — two ranked themselves as below average. That is my favorite data point from this study. Two ranked themselves as below average. Should be about half. And they're not alone too, right? Poker players are also famous for their self-assessment bias. And one of the things that I think sort of feeds into this — at the beginning of Rounders... I haven't seen this movie for, like, five years. I don't know if you guys have seen Rounders. But at the beginning of Rounders, Matt Damon's character quotes this poker book. It's one of my all-time favorite quotes from a poker movie, which, I guess, isn't saying much. He gives this quote: "Few players can remember the big pots they've won, but every player can remember with remarkable accuracy the outstanding tough beats of their career." And I think it's these sort of memory biases that feed into our self-assessment bias, right? So you walk into the Bellagio, you see a table of all pros — there has to be self-assessment bias here, right? There has to be. So don't be that person. Let me circle back to my point. I'm going to tell you about my favorite poker hand of all time.
Of every hand I've ever played, this is my favorite, for a number of reasons. So I'm playing a tournament in the World Series. This is a big tournament, and we're on the exact money bubble. What that means is that the next person to bust out gets nothing; everyone else is guaranteed something — a couple dozen bucks, I don't know. So I'm sitting there, and there's this guy at my table who, as we approach the money bubble, is so excited to be there. It's his first World Series tournament, and he's about to make the money. He's calling his girlfriend and his buddies every hour telling them, yeah, I think we're almost in the money. He's so excited. And then it gets to the point where, like, 40 minutes before the bubble, he just leaves. He doesn't want to play a hand. He just leaves the table so he doesn't have to bust out. So he comes back, and it's now the actual bubble. He's to my direct left. I'm in the small blind, he's in the big blind, and it folds around to me. And at this point, I feel like there are just dollar signs in my eyes, right? I'm so excited. So I look down and I see four deuce off suit. We both have medium stacks, but I have this hell of a read on this guy, so I shove all in. He says, I knew you were going to do that. I call blind. He calls without looking at his cards, because he knew I was going to shove in. It's like the strongest [INAUDIBLE] I've ever had. He flips over his hand. He has nine deuce off suit. He has me dominated with a terrible hand. Let's think about this. How terrible does my read have to be? How far off does it have to be? What has to happen here for him to call with nine deuce off suit? Not only does he have to not care about busting out on the bubble — he has to not care so much that he won't even look at his cards. If he looks at his hand, he has to fold. So he proceeds to win the hand. And they count us down, and he has me literally covered by one or two chips. And I walk away thinking, I have to go home. We rent a math house every year out in Vegas — me and Bill Chen, Jerrod Ankenman, Mike, who's sitting right over there, and a bunch of others. I have to go home and I have to tell these guys what just happened — Kenny, who's right here — I have to tell these guys what just happened, and I still don't hear the end of it. So what does this mean? I realize the beginning of this talk sounds a little bit like a commercial for: just be humble, don't be an idiot. But I think it's a little more than that. Here's the takeaway. Any time you try to divine your opponent's strategy, she can do the same thing back to you. And this happens in really obvious ways, like when my super tight guy all of a sudden doesn't care. But it also happens in subtle ways, right? So a lot of times we think about pot odds. You might be sitting there thinking, OK, I'm getting two to one. Do I have the best hand here at least one in three times? Very natural way to think, right? But now you're kind of trying to do magic. That's one of my favorite quotes of all time, by the way, from Harry Potter. And the trouble is, the other side can do magic too. I'm trying to figure out what their distribution is and respond to it. They're trying to figure out what I think their distribution is and shape it around that. And all of a sudden, we're playing this leveling game, and we're both kind of trying to do magic, right?
So in the hand that I showed you guys in the beginning, I could be sitting there and trying to figure out exactly what my opponent's range is when he bets one and a half times the pot. I can try to figure out why he's not putting me all in. I can try to figure out a million other little things that are going to help me get a little more sense of his range. But I'm trying to do magic. And I think that's kind of our go-to move when we don't really have a repertoire, right? And this is going to happen more and more — well, it happens with really bad players too, but that's OK — it's going to happen more and more with good players. They'll take you to a place where they have this paved road. They've been there before. They take people to these weird places, and you haven't been there. And what do you do? So what I would say is: forget their hand. Forget their range. Don't think about it. So one of my favorite people in the world, Jerrod Ankenman — he's the one who wrote Bill's book and let Bill put his name on it, by the way — has this quote: "If I truly played optimally, I could write down my entire strategy on a piece of paper, what I would do in every single situation, and I could give it to you and you couldn't beat me." That's what we're trying to do here. So how do we get there? Well, heads-up limit hold'em turns out to be a pretty big game. It has about a quintillion game states, and that's the smallest poker game that we really play for money. So we can do a couple of things, right? We can try to do some sweet programming like the team out in Alberta did. And that's great. You know, for example, humans play chess, right? And computers now crush them, but human learning is really strongly aided by computers. But you can't memorize every position. You can't memorize every line. You have to know what's going on underneath the hood. And these algorithms, these programs like Cepheus — they will give you the strategy. They will not tell you why. And as humans, we need to start to understand why if we have any hope of carrying these strategies with us and actually playing them in real life. So they're a black box, right? So we can solve the games, but solving the games does not get us out of the woods. So if we want to wrap our puny human minds around it, we have to be a little more clever. So we're going to look at a couple of toy games here, really simple ones, that can start to help us wrap our minds around how to behave in these situations. So the first one is a clairvoyance game. And a clairvoyance game is basically one where one or both players have complete game state information. So what would you do if you lived in a world where you always knew your opponent's hand, and he knew that you knew? That's the idea of a clairvoyance game. So we're going to look at a game called Coin Flip Clairvoyance. And the game goes like this — you can and should play it for money, by the way. There are two players, and each player antes a dollar. Then you flip a coin. If it's heads, you win. If it's tails, your opponent wins. However, there's a round of betting. Only you see the coin after the flip. Then you can bet a dollar or check; your opponent can't bet. If you check, the hands just get flipped over. If you bet, they either have to call a dollar or fold, OK? So you know if you win or not. Your opponent doesn't know if they'll win a showdown. That's the idea of this game. Make sense? Yeah, OK, easy game. So how do we play? So scenario one. Flip the coin.
It's heads. What do we do? AUDIENCE: Bet. MATT HAWRILENKO: Bet. OK. Scenario two. It's tails. What do we do? So there are two main questions. How often should your opponent call, and how often should you bluff? So how do we solve this? What we want to do is we want to call enough to make your opponent indifferent to bluffing or giving up. That's the choice your opponent is facing. So to do that, we're going to set the expectation of bluffing equal to the expectation of giving up. So the expectation of bluffing is just this, right? It's the pot times the amount they fold, which is one minus the proportion of the time they call, right? That's how much you win when you bluff. So now how much do you lose when you bluff and get caught, right? It's the amount you bluff, which is going to be one unit, times the proportion of the time you get called. So we can just sort of do a little simple algebra, and it'll reduce to-- you should be calling p over p plus 1 of the time, where p is the pot, right? So if the pot is two units and you're bluffing one unit, we should be calling with a proportion of two over three. 2/3 of the time with our kings. That is how we make the opponent indifferent to bluffing or giving up. OK? So a couple of things to note about this before we push on. So what happens here as the pot gets bigger? Do you call more or less? AUDIENCE: More. MATT HAWRILENKO: More. Yeah. So as the pot gets bigger, this asymptotes to 1, which makes sense, right? Most of the value's already in the pot. So there's more money in there. You have to protect against being bluffed at more. Totally intuitive with poker, right? So how often do you bluff? Well, you want to set the expected value of calling equal to the expected value of folding. So the expected value of calling here is just-- we're going to use the ratio of bluffs to value bets rather than a percentage. It just works out nicer. So the ratio of bluffs to value bets, so how frequently you're bluffing, times the pot plus one, because that is what they win when they call and you're bluffing. So bluffs to value bets, times the pot plus the unit that you bluff. And then what do they lose when they call and they're wrong? They lose the amount you bet, so that's going to be one unit, the value bet. So you're going to bluff with a ratio of 1 over p plus 1 bluffs to value bets. So now what happens here? So as the pot gets bigger, what are you doing? Bluffing more or less? AUDIENCE: Less. MATT HAWRILENKO: Less. Yeah. Is that counterintuitive? AUDIENCE: No. MATT HAWRILENKO: Is it? I don't know. It was counterintuitive to me because I'm like, oh, there's more money in the pot. But what it means is there's more money in the pot, so I don't really need to bluff very frequently to make sure I get value, because the value's in there, and my opponent is calling more of the time. So the bigger the pot is, the more my opponent calls, right? And what I'm doing essentially is I'm bluffing so I can get value from the time that I'm winning. Yeah? OK. So we can actually generalize this too, right? So we can generalize it to no limit games pretty simply. So we've sort of flipped things around here. So here the pot is 1 and s is the proportion of the pot that you bet, so you're going to be calling 1 over 1 plus s, right? So if the pot is two units and the bet is one unit, s would be 0.5, right? And 1 over 1.5 is 2/3. And you'd be bluffing s over 1 plus s, right? So you're going to be calling 1 minus the bluff ratio. Sometimes we call the bluff ratio alpha. 1 minus alpha.
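(To make that algebra concrete, here's a minimal Python sketch of the two indifference calculations. The function names and the little EV check are mine, and the sizes follow the setup above: two antes of one unit, so a pot of 2, and a bet of one unit.)

```python
# Indifference frequencies in the Coin Flip Clairvoyance game.

def call_frequency(p):
    """Calling frequency that makes a 1-unit bluff into a pot of p zero EV:
    p*(1 - c) - 1*c = 0  =>  c = p / (p + 1)."""
    return p / (p + 1)

def bluff_ratio(p):
    """Bluffs per value bet that make calling zero EV:
    r*(p + 1) - 1 = 0  =>  r = 1 / (p + 1)."""
    return 1 / (p + 1)

print(call_frequency(2))   # pot of 2 bets -> call 2/3 of the time
print(bluff_ratio(2))      # 1/3 -> one bluff for every three value bets

# Sanity check: the bettor's net EV per hand as a function of
# bluffing frequency b (when holding tails) and calling frequency c.
def bettor_ev(b, c):
    heads = 0.5 * (c * 2 + (1 - c) * 1)        # value bet: +2 called, +1 fold
    tails = 0.5 * (b * (c * -2 + (1 - c) * 1)  # bluff: -2 called, +1 fold
                   + (1 - b) * -1)             # give up: lose the ante
    return heads + tails

for c in (0.0, 1/3, 2/3, 1.0):
    print(round(bettor_ev(1/3, c), 3), round(bettor_ev(1.0, c), 3))
# The balanced b = 1/3 column prints 0.333 against every caller; always
# bluffing (b = 1) swings from 1.0 against an over-folder to 0.0
# against someone who always calls.
```

The EV check previews the point coming up next: the balanced ratio wins the same amount no matter how the opponent calls, while always bluffing is at the mercy of whoever you happen to run into.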
But we can actually calculate a value for this game and for all toy games. So Bowling and their team with Cepheus they calculated the value of having the button in limit hold'em. We always kind of knew it was good, but they calculated precisely just how good it was. And if you're looking for a job at a place like [? Sig ?] or somewhere else in finance, you actually probably should just calculate the value of this, because you're going to be getting interview questions like this. So again, what we're taking here is the larger amount your opponent gets, the less frequently you have to call. Right? The more frequently your opponent bluffs, the more frequently she has to value that, right? So as a bluffing region gets larger, the value betting region has to get larger with it. So in a coin flip game it doesn't make sense to think of regions, but in poker it's going to. So a question I get here a lot is, what if it's not a repeated game? What if you're playing it just this one time? Or what if you're at a table with a player that you'll never play with again? How do you play that? And the answer is that it's a repeated game. So I have a feeling this will be intuitive for you guys, so I'll do it quickly. But I want you to imagine a scenario where you're going to play the Coin Flip Clairvoyance game a thousand times against different players. So against a thousand different players, right? And suppose you take the position, since my opponents can't learn from my past, I'm going to bluff 100% of the time. So what's actually happening here? You can sort of think of each opponent as a random sampling from the distribution of possible strategies that are out there for the Coin Flip Clairvoyance game. So some of them will fold too much, and you will own them. And then some of them will call too much, and they will own you if you're bluffing all of the time, right? So even if you haven't seen this person before, even if it's your first hand of poker against them, it is a repeated game, OK? So get that, it's not a repeated game notion out of your heads. It's bad for business. So taking like a half a step back, what have we learned about this so far? OK. So the Coin Flip Clairvoyance game, it's not about just value betting or just bluffing. It's about the combination of the two. We're also trying to maximize the value of our entire set of hands, right? Because what happens? So suppose our strategy is, we're going to bet every time we have heads, we're never going to bet when we have tails? What's our opponent going to do? AUDIENCE: Fold to every bet. MATT HAWRILENKO: They're going to fold to every bet. Yeah, exactly. So we sort of can calculate this ratio where now they don't do so well if they fold to every bet. And so what's cool about this is the math in this game is very simple, right? It's not hard. But it buys you a lot. It buys you a lot of intuition about poker. Some really useful concepts. So I want to move on. This is probably my favorite of all the toy games, and there are actually a million versions of it. We're going to do the simplest one because I think it sort of gets everything that you need to know, more or less. So this is an ace, king, and queen game. It's an incomplete information game. So each player antes $1 and is dealt one card. So if I get delta ace, my opponent only has the king or the queen. They can't also have an ace. So then you can check of bet and your opponent, just like the Coin Flip Clairvoyance game, can only check or call or fold. They can't bet if you check. 
So this is going to be our first mapping, our only mapping, of a toy game that resembles real poker, right? Now we have a real range. Again, I think that you probably can and should play this game for money. I think there's a real difference between what we're about to do, which is solve it and sort of get it intuitively, and actually get it experientially. So go forth and gamble. But what do we do? So case one. So you get the ace. Are you going to check or bet? AUDIENCE: Bet. MATT HAWRILENKO: You're going to bet. Yeah. Now your opponent bets and you have an ace. What are you going to do? AUDIENCE: Call. MATT HAWRILENKO: You're going to call. All right, good. And now your opponent bets, and you have a queen. What are you going to do? AUDIENCE: Fold. MATT HAWRILENKO: Yeah. Hey, yeah. OK. So it seems trivial, right? These first three cases seem really trivial. But an important thing to note is that they are dominant strategies or dominated strategies. So a dominated strategy in game theory-- for example, calling with a queen here would be dominated. A dominated strategy is one where the decision has at best equal, and sometimes strictly lesser, value than another decision. If I call with a queen, I cannot win. I just lose money. That has strictly lesser value than folding, OK? So dominated strategies, important concept. So how about this one. You have a king. Check or bet? AUDIENCE: Check. AUDIENCE: Check. MATT HAWRILENKO: Check. AUDIENCE: Split. MATT HAWRILENKO: Split? OK. All right, well, I actually want to see where everyone's at. So we're going to have three options. Who wants to check? All right, who wants to bet all the time? Who wants to bet sometimes? OK. See, I tricked you. This is also a dominated strategy. So what happens if you have a king and you bet? What is your opponent going to do with an ace? AUDIENCE: He's either going to call with an ace or fold with a queen. MATT HAWRILENKO: Exactly. Your opponent-- because the ace and the queen, dominant strategies, right? Your opponent's always calling with an ace, always folding a queen. So betting with a king here would be a dominated strategy. Strictly dominated by checking. How about here? Now we have a queen. What do we want to do? Check or bet? AUDIENCE: Bet. MATT HAWRILENKO: Check, bet, exactly. We want to bet some of the time. Right? And we'll go through [INAUDIBLE]. And now we have a king and our opponent bets. What do we want to do? AUDIENCE: Mix. MATT HAWRILENKO: Mix. Good. You got the idea. Does anyone have any guess as to how the mix might break down? AUDIENCE: Depends on his proportion of bluffing with the queen. MATT HAWRILENKO: OK. But suppose we're trying to solve it. So it depends on his proportion of bluffing with the queen. Exactly. And let's get tighter. How do we solve it, right? So if we have a king, our opponent has an ace half the time and has a queen half the time. They're going to bet all the time with the ace and sometimes with the queen. Turns out in this game that it's the same formula, or about the same. You should be calling 1 over 1 plus s of the time, s being the bet as a fraction of the pot. So you should be calling with 2/3 of hands that beat a bluff. So the hands that beat a bluff are aces and kings, not queens, right? So if we're thinking about it this way, before-- eh, no. I'll show this first. So aces are going to represent 50% of the hands that beat a bluff, right? Because you're going to have aces 50% of the time and kings 50% of the time. So calling with aces seems better than calling with kings.
So we're going to call with all of our aces. So now we're up to half, but we need to get to 2/3, right? We want to be calling 2/3 of the time, kind of per our formula. So we're calling with all of our aces, and then a third of our kings times having a king half the time, that's another sixth, right? So all of our aces and a third of our kings. So how is thinking about it this way different from thinking about it using pot odds? So for pot odds we're trying to figure out, what does this person have in this situation? So I'm sitting here with a king with pot odds, and I'm thinking, am I ahead at least a third of the time here? I don't know exactly. But I know that I can try to make my opponent indifferent to bluffing or giving up. So I'm thinking about what I'm doing with my whole range of hands. So yeah, OK. So these are the two observations we had from the Coin Flip Clairvoyance game. So adding on, one thing that we're noting here with the ace, king, and queen game, what are we doing? We're sort of implicitly mapping three different types of hands. Value hands, bluff catchers, and bluffs. And the big thing here is your strategy for what you do with one hand determines your strategy for other hands, all right? I'm definitely calling with the aces, so I need to call with some kings, right? I'm definitely betting all my aces, so I need to bluff with the lowest-- like, the worst part of my distribution, right? That's the part that's going to gain the most. So a more subtle thing that I think is super important and is going to play into sort of the last half of this talk is if I am playing a hand differently from you, I should do different things with other hands than you should. Say for whatever reason I'm only betting half of my aces but I'm still betting a third of my queens. Whoops. Now I'm out of whack, right? Now I'm out of balance, and I'm going to lose more in this game by being out of balance. So your strategy for one hand determines your strategy for other hands. That's the whole key here, OK? So to sort of summarize what we've done so far is the Temple of Apollo. This is where I'd like you to go to see the oracle. Like, you want to go and get a prediction, this is where you would go. And walking into the Temple of Apollo back in the day in ancient Greece, I'm wondering if they knew a thing or two about game theory. So there are three inscriptions above the temple. The first one is know thyself, right? Know your own hand. Know your own distribution. The second is nothing in excess. Play with balance. So know thyself, nothing in excess. And the last one is make a pledge and mischief is nigh. Yeah, it's a real stretch to make that one work, so we'll just leave it at mischief. Mischief can sometimes be good. So let me be very clear on how important I think this concept of knowing your own hand, knowing where you are in your own distribution is. I think you should not think about anything else in poker until you have bought and paid for a house by knowing where you are in your own distribution and shaping it to be balanced. Don't think about anything else. Everything else is just window dressing compared to this concept. So here we are again. So we're going to go through this hand, the one that I told you about at the beginning of the talk, and we're going to try to read our own hand. So again, we're playing against this player who's better than us, some stuff happens, what do we do on the river? How do we think about that from a game theoretic perspective?
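(A quick aside before the hand: here's the king-calling arithmetic from the ace, king, and queen game as a sketch. The sizes are the ones implied above-- two antes make a pot of 2 and the bet is 1, so s = 0.5.)

```python
# AKQ game: defend 1/(1+s) of the hands that beat a bluff (aces and kings).
s = 0.5                      # bet of 1 into a pot of 2
target = 1 / (1 + s)         # defend 2/3 of your bluff-catching hands

p_ace, p_king = 0.5, 0.5     # half your bluff-catchers are aces, half kings

# Call with every ace, then make up the rest of the target with kings:
king_call_freq = (target - p_ace) / p_king
print(king_call_freq)        # 1/3 -> call with a third of your kings
```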
So there are kind of three ways again that I alluded to in the beginning that people might think about it, right? So the first one is my hand versus your hand, OK? Well, I have aces. What do I think he has? King queen? King 10? Maybe he has queen 10? Maybe he's bluffing? Maybe he has queen nine? What is his most likely hand? And probably as most of you have already realized-- so Kevin told me that you'd all played at least 100 tournaments so far, I'm guessing some a lot more-- it's really hard to put someone on a particular hand, not particularly useful. So the next thing you might try to do is my hand versus your distribution. How are my aces doing against all the hands that you might have given the actions you've taken? I can't really put you on one hand, but I can look at your actions and see what sort of distribution they might suggest. And then the last one is my distribution versus your distribution. And this is I can look at your actions and I can look at my actions, and I can try to shape my actions such that they maximally exploit your actions. So it's not so much about what I'm doing with my aces. It's about what I'm doing with all the hands that I'd have, where I just happen to have aces here. And this is the style that's most complementary to game theory. So again, we're on this river and with pot odds with my aces I might be thinking, am I good at least a third of the time here? But if I'm doing that I'm trying to do magic, because I'm trying to figure out exactly what he has. And I don't think we need to do that here. Right? What I want to be thinking is, well, how much of the time do I need to call to make my opponent indifferent to bluffing? Any guesses? Yeah? AUDIENCE: Like 40% of the time? MATT HAWRILENKO: Yeah. So something like 1 over 1 plus s. So how do we get there? So this is the slide-- guys, if you pay attention to one slide this whole talk, this is the slide to pay attention to. This is the slide where we map the ace, king, queen game to actual poker. So the whole idea of poker from a game theory perspective is we're going to try to make bluffing zero EV for our opponents. So we're going to call with a proportion of our hands that makes bluffing zero EV. So in real poker, games that allow raising, that means we could potentially be raising. So we want to be continuing 1 over 1 plus s of the time-- at least calling, potentially raising. So what do we see? So if this is the ace, king, queen game, we're going to map it to real poker this way. This is like our 99th percentile hand, the very best hand we can have in this spot, right? So on that board there were two kings on the board, so a 99th percentile hand would be like quad kings, right? Would be four of a kind. This is our worst hand, the very worst hand we could have in this spot, I don't know. A four deuce. And so what we're going to do-- so again, these are the hands that are the very best at showdown, very high EV. Very worst at showdown, very low EV. So we're going to be calling or raising 1 over 1 plus s of the time. So the question is going to be, what is our distribution here? What is 1 over 1 plus s? Our bluff-to-value ratio is going to be s over 1 plus s, taken from our worst hands, right? These are the hands that gain the most value by bluffing, right? The ones that are going to do the worst at showdown. Well, not for us in this spot. I'm going to talk about that later. Let's leave that for a second. But those are the hands that my opponent should be bluffing, right?
So if we think about our value betting range, I might have a different value betting range than you. And that has implications for how we play differently than each other. So the wider my value betting range is, the more hands I need to be bluffing. As this region expands, this region expands. So if my opponent is value betting more hands, they should also be bluffing more. So when you see some of the very best no-limit players play-- these guys who are just complete animals-- you're like, how are they bluffing that there? How are they calling that there? How are they making this value bet so thin? This is how they do it, right? If they're value betting a lot, they're also bluffing a lot. So you have to call them more. So the more they bluff, the more they have to value bet. This is one of the places where a lot of beginners just get way, way out of whack, because bluffing is-- it starts scary and then it gets sexy, and then it gets something in between. So the whole idea, don't let this get out of whack. And again, the larger amount that my opponent bets, the less frequently I have to call. The less they bet, the more frequently I have to call. So we're about to read our own hand. Let's keep these couple of things in mind. So again, I think reading your own hand is the most important skill in poker, and it's because what you do with part of your distribution shapes what you do with the rest of it. And so what we're about to do is we're going to go through and we're going to sort of make some frequency updates on each street. So we're going to do two updates. We're going to sort of update what our hands might be given the cards that have come out. So there's a card removal effect, right? If an ace comes out, I'm a lot less likely to have a pair of aces because there are fewer combinations. And then we're also going to account for the actions that we take. So let's just do it. It'll be clear. So, OK. So I've opened two off the button. So here's some kind of reasonable range for opening two off the button. So we've gotten rid of all the hands that aren't here. This is what I might have right now, OK? So you don't really need to pay attention to the specifics here, but the flop comes king, jack, eight. So now I'm going to update for card removal. So all the hands in orange are the frequencies that have changed, right? So eights, I could have had six of them before the flop came, but now that there's an eight, I only have three of them, et cetera. So our total combination count down at the bottom here has gone down, right? So now what happens? Well, they check, I bet. So what hands am I betting here when my opponent checks? Well, I'm getting rid of some of them. I don't know if they're the right ones or not, but I'm getting rid of some pairs, some bottom pairs, some gut shots. I might not be betting those here, right? So the hands in white are all the hands I'm still betting. And again, our frequency is coming down. Our distribution is narrowing. So the turn comes. It's a five. We get rid of some hands with fives in them. I bet 2/3 of the pot. So we get rid of all the hands in this distribution where I'm not betting 2/3 of the pot. And really quickly, when you're thinking about reading your own hand-- and I'm going to say that the most important time to be doing it is probably off the table-- when you're thinking about it, you should be really thinking about every street, OK? So OK. So these are the hands that I'm value betting. Turns out I'm value betting about 94 combinations here. How's my proportion of bluffs?
So I'm value betting 94 combinations. I have a total of 120. So that leaves what? 26 hands. So how often should I be bluffing? Or-- hm. 96. I'm value betting 96, leaving 24 bluff hands. So how often should I be bluffing? OK, so 96 times-- what's that? It's s over 1 plus s. So times 0.4-- that's the amount I'm betting, I'm betting 40% of the pot-- over 1 plus 0.4. So 96 times 0.4. Yeah. 96 times 0.4 over 1.4. All right. So I should be bluffing about 27 hands. What do I have? 24? Huh. Did pretty well. I'm happy with that. Pretty good shape. I don't know. I don't know about all the other actions. We can argue about what I'm checking behind, but we don't want to overthink it. So generally speaking, I want to be checking in on each street. Like, ooh. Am I balanced here in the way that I should be balanced? So the river comes a king, and now we have fewer combinations. So what do I do with my aces? So again, s, the bet size, he bets $1,080,000 into a $720,000 chip pot. So s is 1.5. I should be calling 1 over 2.5, 40%. That's 40% of hands that beat a bluff. So what does that look like? So if we're thinking about our calling region, what I've done here is I've just sort of taken all my hands and I've ranked them, and so kings represent my top 1%; king jack, now we're at 9%. This is like the cumulative sort of frequency distribution with my hands ranked. And I'm saying here in this distribution, queen jack is the worst hand that still beats a bluff. We can argue about that, but rough guess. So first, nothing in excess, calling 1 over 1 plus s of the time. Where are we? Ooh. This is surprising to me. This felt like a tough decision, right? So we're actually at the 62nd percentile here. So one question is, so from this, what should I be doing? Should I be calling or folding? Folding. Yeah. So it looks like I'm calling with my king queen. I might be folding king 10 maybe. We can talk about that a little bit. But aces seem like a pretty clear fold here. Suppose my opponent thinks that I'll fold good hands like aces, and so he's like, mm. I'm going to bluff 90% of the time here. Is he exploiting me? Seeing head shakes. He's not exploiting me, right? So why not? Yeah? AUDIENCE: Because you win on his bluffs such a big proportion of the time, it makes up for the pots that you're losing. MATT HAWRILENKO: Yeah. My distribution is really strong here, right? Of the hands that beat a bluff, like half of them have trips. That's crazy to me. So no. So he's not exploiting me, but we will talk about exploitation in a minute. So we solved it, right? So we fold aces. We even fold king 10. So one question is, do we actually want to have a distribution in this spot where we have to fold trips? And if that feels kind of bad, well, two possibilities. One, it just feels bad, or two, it might mean we sort of screwed up on the way here. It's not the worst. It's not a four flush board. There are no straights on it, right? So folding trips here, that feels kind of bad. So what might that mean? A couple of things, right? So one thing we could do is I probably want a distribution where I don't have to fold it. And again, it depends a little bit on bet size, but this is a pretty reasonable rule of thumb when you're thinking about shaping your play off the table. So one thing that we can do, we can add some hands in from earlier. Ooh, maybe I should have played the turn a little bit differently. Maybe I should have played preflop differently, maybe I should have played some more hands, right?
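(Pulling the bookkeeping from this walkthrough into one place, here's a small sketch: the card-removal combo count, the bluff-combo target, and the river defending frequency. The specific counts-- 96 value combos, the 0.4-pot turn bet, the 1.5-pot river bet-- are the ones from the talk; the helper names are mine.)

```python
from itertools import combinations

# Card removal: combos of a pocket pair left once cards are dead.
def pair_combos(rank, dead=()):
    live = [rank + s for s in "cdhs" if rank + s not in dead]
    return len(list(combinations(live, 2)))

print(pair_combos("8"))                # 6 combos of 88 preflop
print(pair_combos("8", dead=["8c"]))   # 3 combos once an eight flops

# Bluff combos to pair with a value-betting range at bet size s (in pots):
def target_bluffs(value_combos, s):
    return value_combos * s / (1 + s)

print(target_bluffs(96, 0.4))          # ~27.4 bluffs for 96 value combos

# Fraction of bluff-catchers to defend when facing a bet of s pots:
def defend_fraction(s):
    return 1 / (1 + s)

print(defend_fraction(1.5))            # 1,080,000 into 720,000 -> 0.4
```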
So we can start to expand our range. Right? Now we're calling with most all of our kings. We can actually construct a distribution where we have to call with aces, right? If we're playing way more hands and if we're value betting way more hands throughout, all of a sudden our distribution is wider, right? And now I should be in the same spot with the same hand and do something differently because my strategy to get here was different. And again, that's the same idea. You see these really good players making these crazily thin value bets, and this is why. They're bluffing a lot so they value bet a lot. The main idea that I want you to take away, if we sort of think of a principle, is if I find myself on the river and I have more medium strength hands, I have to call more with medium strength hands, otherwise I can get exploited. Suppose we were deeper and our opponent bets into us, and we have a bunch more chips. Which hands should we be bluff raising from this distribution? So most of the time, if I'm just straight bluffing, I want to be bluffing with the very bottom of my distribution because that's the part that gains the most when my opponent folds. If I'm bluff raising, it's a little different. If I'm bluff raising, I want to think about, OK, so what is the set of hands I would fold? What are the very best hands in that set that I would fold? And I may as well choose those, right? Because it seems like those hands have more value, so if I'm going to fold them anyway, it's a dominated strategy to bluff raise with hands that are weaker than that, right? So if I'm bluff raising, it should be really with the very best hands that I would otherwise fold, not with the very bottom of my distribution. So what do we see here? So first off, we want to check for balance on all streets. That's the big thing we take away. We can argue about little bits of distributions, but really, we're not solving to the second decimal point here, right? Second, we can look at this board all day. We can look at this king jack eight five king board all day. Unfortunately, that's not going to be the board that comes every time, and if we just look at this board all day, we're going to start to overfit our strategy a little bit to this board. So what I suggest you focus on is fixing the glaring errors, and there will be glaring errors, places where your distribution is way, way imbalanced. I still find glaring errors when I play poker, and I've been trying to do this for a little while now. Another thought which is not quite so obvious from what I've said so far is, you don't want to needlessly bifurcate your distribution. How do you bifurcate your distribution? Well, suppose preflop, I raise some amount with some set of hands and a different amount with a different set of hands. Ooh, right? All of a sudden I started off here with 310 combinations, and I got down to, I don't know, 60 or something. But now all of a sudden, I'm starting off with half that. I'm starting off with 155, and the game tree gets smaller really, really quickly. So I'm not saying don't do it. But I'm saying if you do it, A, have a really good reason for doing it. B, be really careful. Be really careful of betting different amounts with different hand types. I basically don't. I will bet different amounts based on the texture of the board. So if boards are more [? drawy ?] I might tend to make my bet sizes larger earlier.
But I won't bet different amounts with different hand types, because I think the possible gain is so small and the possible loss is so big. And the last thing that I found pretty cool about this was when I started really, really spending time trying to read my own hand, I'd start to find these consistent situations where I would get really imbalanced. And then when you want to start to think about moving on to exploitive play after you have paid off a house with your poker winnings, you want to maybe identify those spots in your opponents, right? If you're getting imbalanced there, other people probably are too, so be ready for that. So what do I want to say here? So I guess we've ignored a few assumptions of this model, right? The biggest assumption in a toy game like the ace king queen game or the Coin Flip Clairvoyance game is that distributions are symmetric. If I'm applying the ace king queen game to my river situation here, I'm implicitly assuming that my opponents and I essentially have an equivalent distribution of aces, kings, and queens. That's not always true in poker. So generally I'd say that this sort of mapping is actually fairly robust, but at the same time, you have to be aware of situations where your distributions aren't symmetric, right? Like if you raise under the gun at a ten handed table and the big blind calls, your distribution is way, way stronger. It's not symmetric. So you're going to have a different set of actions. However, as you progress throughout the hand, distributions sort of tend to become more and more symmetric. So generally speaking, it's a model with some assumptions which are meh. But in my experience, it holds pretty well. So another thing I haven't talked about today is I haven't told you which hands to value bet, right? So I've told you the calling proportion, bluffing proportion. I haven't said anything about, how do you choose what to bet? And there are game theory games that can kind of give you insight into that. You can get insight into that from Bill and Jerrod's book. But I think that what's most important here is reading your own hand is the thing that I think can integrate into everybody's game. Whatever hands you currently value bet, it can work for you. Right? And it can sort of work equally well for the tight player as it can for the very loose player. So the reason I've decided to talk about the reading your own hand approach is this is a very flexible approach that can work for a lot of people in a lot of situations. And, you know, as you get better and you start to play more and more hands-- or fewer and fewer, depending where you fall on the spectrum-- but generally as you start to play more and more hands, what you're doing here, thinking from sort of like this game theoretic perspective, is you're moving towards hands that are closer to threshold hands, closer to zero EV. So adding in the nth hand has a much smaller effect than adding in that, like, fifth hand, right? Or if you think about all the good hands in poker, right? So you're always going to play your jacks, you're always going to play your 10s, right? Adding in that nine six suited has a much smaller impact on your EV than adding in the 10s, the nines, the eights. Yeah, that's all I want to say there. So in the last few minutes I'm going to talk really briefly about exploitive play. So exploitive play. So if we're looking at this, at the top are our best hands at showdown, at the bottom our worst hands at showdown. And this is about where our value betting threshold is.
The way to exploit from a game theoretic perspective isn't to say, ah, I know he's bluffing! I call blind! Although apparently that can work pretty well sometimes. The way you do it is you expand to the margins. So if normally I'm value betting here and my opponent-- or, normally I'm calling here and my opponent bluffs way too much, I might expand to the marginal calls. That's one thing I might do. If they don't value bet enough, I might contract the marginal calls, right? But again, the ones that are very close to zero EV-- I'm never making the big fold here. If they fold too much, I might expand to the marginal bluffs a little bit and contract the marginal bets a little bit. If they call too much, I'm going to expand the marginal bets and contract the marginal bluffs. So what that means is you get away from, I think he's bluffing 80% of the time in this situation-- in this exact situation, and I'm going to catch him-- to, hm. This person seems a little bluffy, so I'm going to shape my distribution just a little bit around that. OK. So the other idea about exploitive play is as your read grows stronger, as you have more confidence, you can expand to the margins, right? So a little read, you move the margins a little. Big read, you move the margins more. For example, well, OK. So with our example hand, we have a margin right about here, but we can start to move it wider as our read grows stronger that our opponent might be bluffing. But again, we don't want to move it wider with just the hand that I happen to be holding right now. We need to be thinking, how strong is it, and how wide am I comfortable moving it? Yeah. So if we're thinking about exploiting with my sort of, like, shamefully exploitive hand, what did I do? Well, it wasn't quite a 1 over 1 plus s situation. We're in this situation where I'm going to be shoving in the top hands and folding the worst. So if my margin was here, if my margin was, I don't know, eight six suited or something, maybe I'd go, like, eight five suited. Maybe I'd go king deuce off suit, something like that. Instead what I did was I went all the way to the very bottom of my distribution and got suitably punished. Yeah. So however confident you think you are in your read, you are probably overestimating it. So, yeah. That's where I was. One other thought here before I wrap up, and that's this idea of advanced exploitive play, and this is fun. So we think about expanding the margins. We think, oh, in this spot he's bluffing. I should call more. But now you can start to make slightly more subtle reads: this is an opponent who bluffs a little too much on the river, so I could punish him on the river by calling, or this is an opponent who folds too much on the river, so I could punish her by betting the river. But they're going to get that feedback pretty soon if you're just starting to hammer every river. Another thing you can do is you can make the pot a little bit bigger, right? So don't forget about this. This is the pot. If you start making this bigger on earlier streets-- maybe against this particular player, I raise a little bit more preflop. I bet a little bit more on the flop. Right? The pot's bigger on the turn, so now I'm betting on the turn. And now on the river I take all my normal actions, except I know that I'm winning a few too many pots because they're folding too much, and now the few too many pots that I'm winning are proportionally larger. So this is this idea of we can exploit downstream.
We can anticipate where they're weak. We don't change our play in that spot and tip them off, but we can change it earlier. So to wrap up, so you want to know yourself. You want to know your own hand, right? That's the first key. The second key is to keep it balanced, and you want to exploit the margins. And so as we think about Cepheus, as we think about these algorithms coming and exploiting players-- so even in 20 years, computing power-- oh, what's that? Like 1,000 times greater or something? Is that right? Wrong? AUDIENCE: Sounds all right. MATT HAWRILENKO: Depending. Depending if you believe Moore's law, I guess. OK. So even then, even when more poker games are tractable, you're going to need toy games to draw insights on what's going on. As these strategies come out of the black boxes, if you hope to grasp them, if you hope to hold onto them, you need these kinds of insights to be able to have some scaffolding to start to put them on. So I think I'm done, but if I were to summarize this talk one way, don't be this guy. Be this guy. OK, we're done. Any questions? Ooh, wait. AUDIENCE: So if you're doing, like, Nash equilibrium, well, that guarantees you a positive EV. But [INAUDIBLE] that's always enough. Like, if you're in a tournament having a positive EV, your chips will grow a little but [? maybe blinds ?] are growing faster. Is that an [INAUDIBLE] Nash equilibrium strategy, or do you have to deviate if you want a higher variance [INAUDIBLE]? MATT HAWRILENKO: Yeah, so good question. So the question-- and tell me-- let me make sure we answer it-- the question is, OK, well, you can try to play an equilibrium strategy and that's fine. But if you're exploiting, you're winning more. And so if you're playing in tournaments or these situations, you really need to exploit to win more money. Is that kind of-- yeah? So yeah. That idea has been around for a long time, and I think that that idea is driven by people not really knowing just how strong game theory strategies might be, and just how strong sort of, like-- OK. So what is a Nash equilibrium here? When you're playing a Nash equilibrium, the idea is that your opponents are going to impale themselves on their own mistakes, and you're trying to make as few mistakes as possible. And how big is that, right? It's kind of an empirical question. In my experience, it can be a lot bigger than most people think. Because when you start exploiting, you start making mistakes. And you start sort of getting impaled a little bit on your own mistakes too. And my guess is for most of us, for 99% of us, we're going to make a lot more mistakes than we think we do. So we think we're exploiting. At the same time we're getting exploited. So actually, thinking about that is an empirical question, right? Like, how do human players do right now against Cepheus? My guess is not very well. In fact, my guess is that they're losing more than the top players, the top human players, are winning from other human players. So that was certainly kind of my experience a few years ago. So it's one of these things where you'd see it, right? Like I think-- I don't know. I feel like I saw some stat where Cepheus was beating really good professionals for four bets per hundred hands, which is a lot. Which is a lot. Like, most top players are winning less than half of that. So Nash equilibrium, yeah. Pretty darn good. Maybe one day when human players are better, you need to start exploiting more. I don't think we're anywhere close. Other questions? Anything else?
All right, I guess we're done. Thanks guys. I'll stick around for a little bit if anyone wants to chat. [APPLAUSE]
PROFESSOR: Today, we're going to be talking about something a little bit away from the game of poker, per se, and more about looking at poker as an investment. The conceit here is that, to some extent, you're either a winning poker player now, or eventually you'll get there. So this is looking at now that you have these alpha streams, these streams of positive EV that you can pick which ones you want to embrace, how do you balance out? How do you look at your future, given that this is now an option you hold? Cash game here just means anything that's not a tournament, where a tournament is a poker structure where you buy in, and with that money, you receive tournament chips, and you only get money to the extent that you survive longer than someone else, where the person with all the chips in the end wins some set prize, not necessarily the value of all their chips. Whereas a cash game is one such that each chip has a specific value, like you're actually betting real dollars, and you can enter or leave whenever you like. So in cash games, chips are effectively literal money, so the more chips you win, the more money you win. Whereas in tournaments, all that matters is the position you finish. When we talk about the Independent Chip Model, we'll talk about when those diverge. The second point is about that, where Chip EV, like literally having more chips, in a cash game is directly related to the amount of money you make. If you're making a decision that results in you having more chips, you make more money in the long run. Whereas in a tournament, it's a little bit more complicated. They're congruent, and they generally trend in the same direction. But in some particular cases, something that's positive Chip EV may be either positive or negative in terms of Dollar EV, and we'll talk about that in a little bit. Cash games-- you can come and go whenever you want. That makes them very liquid. If you're a big cash game player, you can generally grind out your hourly win rate whenever you want, especially online. There's a little bit of overhead when you play live, but cash games are much easier to just come and go with regard to kind of executing your skill. With tournaments, you get all your EV from winning, from lasting long, so you're generally stuck. The better you are, the longer these tournaments take, and you have to kind of be there for the long run and generally plan to last a long time. In cash games, you can buy in as many times as you want. Like if you're in a cash game with a lot of bad players and you get knocked out, you can just buy back in. It's like it never happened. With a tournament, if you're at a tournament with a lot of bad players, and you get knocked out the first hand because you get unlucky, you never get to redo that for most tournaments. You always get kind of a new field of players, and you don't have the opportunity to reenter a tournament. Your only choice is to enter at the first time or not. Cash games have fixed blinds. The situation that you're in when you start is generally the situation you're always going to be in, so you don't need to factor in having a different size stack. Presumably, you're always going to have at least the maximum buy-in or more.
Whereas in tournament situations, you're going to be short-stacked. People at your table are going to have a widely different stack size. In cash games, you have single table selection, which is you get to see the nine people that you're going to play against. And you can watch them, and you can see if eight people call pre-flop in a cash game, you know that probably seven of those people are bad, and you can just enter that game and play against those eight or nine people-- totally fine. Whereas for tournaments, because you're only playing against nine other people in the tournament of a field of say, like 100, you only really get to target groups of people. You can identify an audience of people who would enter that tournament and then get an idea of what the average person you'll play against will be, but you can enter a tournament with 90 fish and 10 pros and be at a table of 10 pros to start. So that adds a little bit of variance. It also adds some complexity in trying to get a read at your table and kind of make that jive with your read of what the average player should be for that tournament. Like the World Series of Poker is typically a tournament that has like 60% fish, 40% pros nowadays. But a lot of tables end up like eight pros and two fish, and that creates a much different type of situation for you than if you were at a table that was primarily new players. As I said before, cash games have higher liquidity. The best kind of career you can make in poker is being really good at a cash game, because you can go anywhere and play those. You can play as long as you want. Like if you only have two hours and you're in Vegas, you can grind out pretty confidently like $150 in EV, no problem. Whereas in a tournament, it's much more uncertain how long it'll take you to capture that EV and how long you might be obligated to stick around. For what's considered normal win rates, cash games are considered lower variance. Tournaments are considered higher variance. If you're winning at cash games, you're crushing it, but it's much, much harder to win at cash games, at least at the medium high stakes. It's for this reason-- it's because anyone who's a loser at a cash game knows it immediately and drops out really quickly. Whereas in tournaments, some of the big name pros are probably losers at this point, they just don't know it yet. Because you can go years before you find out whether you're actually a winner or a loser at big tournaments. Let's talk about the tournament life cycle. The early game is going to be everything before the bubble. So the first 90% of eliminations in the tournament will be in the early game, although because of how tournaments work, that ends up being like half the tournament in terms of time. But you get the idea anyway. Play during the early game is very similar to cash games. There's no ICM. There's no difference in the value of chips early on, because when you have 1,000 chips, and you need 100,000 to win the tournament, a difference of 50 chips isn't that big of a deal. The difference in value of chips doesn't really matter that much, which is why I'm saying that Tournament Chip EV is approximately equal to real Dollar EV, so make decisions as if it were a normal cash game. In addition, your playing style is going to be entirely influenced by your stack size. Having the same proportional stack size later in the early game, you'll make the same types of decisions. I'm defining different zones of play based on your stack size.
If your M is less than 2, I'm calling that the dead zone-- Harrington called it that. 2 to 8 is the steal period. 8 to 12 is the steal, re-steal period, and we'll explain what that is later. 12 to 30 is the value-betting zone, and then 30 or more is a set-mining zone. Let me talk about tempo before we get into what exactly those things are. The most important thing about tournaments with regard to how it's different than a cash game is getting the speed right, like getting your aggression at the right level. Doyle Brunson used to say "never get caught speeding in a tournament." What that means is don't get caught being way too aggressive way too early on, and by early on, I mean when you have a big chip stack. So you need to win coin flips, like you will have to win several hands when you're behind to win a tournament. So what you should do is make sure that when you're not in coin flips, you slowly grow your stack, and you avoid flipping when you don't have to so that when you actually do flip, you end up with more chips. Like those small differences are the things that are multiplied all throughout the tournament to make you-- instead of like 1% chance of winning a tournament, like 3% or 4%. That's where your edge really materializes. The dead zone-- being in the dead zone is terrible. It's much worse than half as good as having 4M. You should only ever be in the dead zone because you lost the last hand, and you had slightly more chips than that guy. Because this is a really bad situation to be in, because you have no fold equity. So fold equity is where your value is going to come from, because you're not going to win a lot of showdowns in a tournament, and a lot of your EV from the tournament is going to come from fold equity. So if you have less than 2M, you're going to bet like pre-flop, and the big blind is going to call you without even looking at his hand, which means you have no fold equity-- like you basically have to win a showdown to get back into it, and that's a really bad situation. You want to get out of the dead zone before you have to pay a big blind, and then you're going to be stuck on whatever hand you get there. The reason is you want to be able to get your fold equity back. Never be under 1M under any circumstance-- never, never do that. Call with any two cards before you get to 1M. Why? Because if you have 1M, and then you double up, you still are in the dead zone, like you're basically still at least one coin flip away from getting out of a terrible situation. So enter any sort of coin flip to avoid getting below 1M. So a lot of your value is going to come from this steal period, and by steal, I mean stealing blinds. When you have between 2 and 8M, your only decision is to go all in or fold. Because every time you steal the blinds, and the blinds are really valuable to you, you increase your stack by like 20% or 30%-- like that's a big deal. That's probably more than your equity from actually going to showdown. Unless you literally have aces, that's probably better than any sort of edge you're going to have by getting called. So find out who doesn't protect their blinds and steal from them. You should've been reading people early in the tournament to find out who's a pushover-- like they're going to be your best friend, because you're going to steal blinds from them. On the converse, pretend like you're someone who defends their blind, because if you get walked-- if everyone folds to your big blind-- you just get one additional M for free.
But don't actually do it, because you don't want to see a showdown. It's a bit of a game of chicken, and I will actively pretend, like I will tell a person I will actively defend my blinds-- because it's not binding, and you can do that-- and then just think about whatever hand I have and then fold. You should still be calling with a very tight range, but to the extent that you can convince them not to push into you, it's almost as good as the stealing yourself. So avoid showdowns if at all possible. A lot of your hands are going to be played in this period, and the most important thing you can do is find opportunities to steal. Don't get into showdowns. Don't call unless you think you're like a 70% favorite. And it's related to the Gap Concept, which is by Sklansky, which says to call a hand-- to call an all in-- you need a much stronger hand than to push. Why? Because when you push, when you bet, you have fold equity, and when you call, you don't. And that's the difference. Like you can push with any two cards in a lot of situations, but you need to have a good hand to call. So your goal here is to keep your head above water. Don't fall below 2, and preferably get into like the 10M period, either by doubling up or by stealing. So if you steal three hands in a row, you have 11M, if you're at 8. That's the idea, here. And then once you're at 11M, you're in a much more interesting situation. So this steal, re-steal period is when everyone is trying to steal blinds, except now you have enough chips that if you get re-raised, you can fold. What will happen is people bet into you, and you have to re-pop them sometimes. And then like you have to identify who's going to re-pop you and avoid them. Here's the idea. If you make a steal bet of like 2M, and he re-raises you with 6M, you're like marginally priced in to call with any two cards. Then if you have an M of 5, for example, you fall below that. If you only need like 27% equity, it's a good call, meaning any two cards, because you're at worst like 30%-70% basically all the time. Whereas if you have an M of 12 and he pushes, you have to have a 40% chance of winning to call, like you can actually fold this, and that's good, it gives you more optionality. The Value Betting Zone is where you might actually get to see a flop, and you have to plan for that. Your hand, pre-flop, is actually valuable to the extent it hits the flop, not like it is when you have lower M, where your hand's going to be valuable to the extent that you're likely to be winning pre-flop. So this play in 15M to 30M is very similar to what cash game play is like. So you're probably going to see a bunch of flops, and we're going to plan accordingly. You still don't want to flat call pre-flop. When you play a hand, you want to be the aggressor. You want to raise when you have a good hand. You want to fold when you have a bad hand. I'm going to call this Flop, Turn, River Play, but I really mean it's play when you have enough chips to actually see the flop. What you should do is not play that many hands, but when you play hands, play them aggressively. So the standard bet here is going to be 3 big blinds, plus one big blind for every caller before you. So if everyone folded before you, you bet 3. If two people call before you, you bet 5, 5 times a big blind. All your bets should be big. They should be big, big portions of the pot. 2/3 is an OK number. If you think the person is particularly weak, you can bet the pot. First, we're going to talk about pre-flop.
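(Before the pre-flop specifics, here's a quick recap in code of the stack-zone and all-in arithmetic from the last few sections. The zone cutoffs are the lecture's; the blind and ante numbers in the example, and the roughly 1M of blinds assumed to be in the middle of the re-steal spot, are my own illustrative assumptions. The last line anticipates the 2/3-pot continuation bet number quoted in the next section.)

```python
# Harrington's M: your stack divided by the cost of one orbit
# (small blind + big blind + all antes).
def m_ratio(stack, sb, bb, ante=0, players=9):
    return stack / (sb + bb + ante * players)

def zone(m):
    if m < 2:  return "dead zone"
    if m < 8:  return "steal"
    if m < 12: return "steal, re-steal"
    if m < 30: return "value betting"
    return "set mining"

m = m_ratio(stack=12000, sb=200, bb=400, ante=50)  # hypothetical table
print(round(m, 1), zone(m))              # 11.4 -> steal, re-steal

# Required equity to call an all-in: call amount over the final pot.
def required_equity(call, pot_before_call):
    return call / (pot_before_call + call)

# Re-steal spot: you open for 2M, he shoves for 6M, ~1M of blinds in the
# middle; you call 4M more into 2 + 6 + 1 = 9M.
print(round(required_equity(4, 9), 3))   # 0.308 -> any two cards is close

# Break-even fold frequency for a pure bluff of s pots (the same alpha
# as in the first half of the lecture):
def breakeven_folds(s):
    return s / (1 + s)

print(breakeven_folds(2/3))              # 0.4 -> a 2/3-pot bet needs 40% folds
```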
When your M is high, your value comes from having a good hand on the flop. I still don't care about the river. If you do this right, you're probably not even going to reach the river by the time someone is all in. We're only worried about the extent that your hand is valuable on the flop. Depending on your position, this is what I'm recommending your opening range is, where I'm saying you're opening by raising to three big blinds. So I'm saying that if you're in early position, you should really only do this with like the top 5% of hands, which is 10s or better, Ace-Queen suited, or Ace-King. The difference in suitedness actually matters there. If you're first to act, any other hand, you should fold, which is obviously the majority of hands, because you have the least amount of information now-- like you might be up against someone with another strong hand, now. And then on like every card hereafter, you're going to be in the worst position. So to win in that position, to be profitable in that position, you need to already have a really good hand. To the extent that you have the option to just not play a hand, it seems to make sense that you would prefer not playing hands when you're in a bad position. So that's why that's a really tight range. When you get to middle position-- so maybe like four people to act after you-- you could widen up a little bit, which a lot of you might think is still a very tight range-- and it is. This might be close to like 15%, where you have 8s or better, Ace-Jack, and maybe King-Queen. Every other hand, you should be folding. Like that's your range for raising out of any sort of middle position. And then it makes it easy, because you can imagine what type of flop you either hit or don't hit, here. So if you're facing a raise, what I recommend is you just move everything up one. So if you're facing a raise in early or middle position, then you can play 10s, Ace-Queen, Ace-King, and if you're late, then you can start playing like Ace-Jack. So when it comes to a flop, by this time, you will have raised pre-flop, and you are now the aggressor in the hand. So if you were the aggressor, you should bet 2/3 of the pot. That works a lot more often than you would think, even among people who know what a c-bet is. Then the break-even, based on our formula, is going to be 2/3 divided by 5/3, or 40%. This is what I'm recommending in terms of tiers. This Tier 1 is a King-high flush, like I'm giving you a little bit of leeway, there, where you don't need to have literally the Ace-high flush. But if you have the King-high flush, you can go broke for 30M. In addition, the literal top straight. So if the board is 4, 5, 6, and you have 7, 8, you can consider that Tier 1. And this is only going to be relevant on an unpaired board. Why? Like, what does a paired board mean? STUDENT: Somebody's got a flush. STUDENT: [INAUDIBLE] PROFESSOR: It means a full house is possible. So if a full house is possible, it basically makes your hand worse than six more hands that are possible. Like a flush or a straight on a paired board would be considered Tier 2. Like if they're betting aggressively into you, and you have an Ace-high flush, you might be ahead. But then if they're raising you, then I would be very worried about you being up against a full house. So I would bet those much less aggressively, and then I would only bet the Ace-high flush, here. Like if you have a King-high flush on that board, there are all sorts of hands that can beat you.
In addition, I'm saying if you have what-- like the fourth best flush, here, you can bet it, but you can't raise it, really. You can't raise it if they raise into you, because that just gives them too many opportunities to have a hand that would actually beat you. So a 10-high flush isn't bad. You shouldn't fold it. And you're not really drawing, but you should understand that it doesn't have four bets of value. It only really has two bets of value, either a bet and their raise, which you'll call. Or if they bet, you can raise them, and that's generally it. And that's obviously on an unpaired board. This also includes a second straight-- so a straight where you don't have the top two cards, but you have two cards that give you a straight that are slightly lower. In addition, bottom set-- I would put here in addition to any two pair. So two pair is crushed by a set, and then bottom set is also crushed by any set. It has the same problem. So I'm calling this Tier 2, and I'm saying that's good for like two bets. Tier 3 is an Overpair. So if you have a pair of Jacks, and the flop comes 2, 3, 4, or something that is a little bit less correlated, like 2, 3, 10. Like an Overpair is good. It's slightly better than Top-Pair Top-Kicker, because you beat Top-Pair Top-Kicker. And then Top-Pair Good-Kicker, I'm saying is also Tier 3, where you might be able to take it down. Like if you bet, and they call, you might still be ahead if they're drawing. But if you bet, and they raise, you're probably behind, and should treat it like you're drawing thereafter. Then all these hands, which you guys might have previously thought were good hands, are not. They're just going to be called drawing hands-- so Top-Pair Bad-Kicker, Mid or Bottom Pair, or a Pocket Pair that's not an Overpair. So if you have 5s, when the board is 2, 6, 7. So by the Turn and the River, these are already going to be big pots. You don't need to worry about extracting additional bets on the Turn. So in general, try to figure out, based on his action, what are the possible groups of hands that he could have here. And in general, like that's it. By the time you're on the Turn, there aren't going to be that many more bets. So usually on the Turn, it's either like go all in or fold. Then hopefully you're going to be in a situation where you played it right previously, where you're going to have a better idea of whether to do that. Bubble play is going to be around 20% to 10% of the field left. If we're saying that 10% of the field makes money, we're saying that around 10% more is when like everyone at every table thinks that oh, like they have a pretty good chance of making money at some point. So play changes around a little bit. This is when ICM starts mattering. This is when your decisions to win more chips may not necessarily be the maximal decision with regard to winning money. And it's when players who are probably not very comfortable with the amount of stakes that they're playing for start to make really big mistakes. Typically, the bad players around the bubble will be a little bit too tight, like if you're playing in the World Series, you'll see that like there are 250 people left, and 240 people get $12,000, and everyone else gets zero. And they're going to say like OK, maybe I don't need to worry about increasing my chances of getting first place by half a percent if it means that I'm putting myself at risk of getting zero now.
Like they weigh that little jump in money quite a bit, and as a result, they play a little bit too tight. At least as of a couple years ago, the consensus was that how you do during this bubble period basically determines how deep you're going to go in the tournament, because this is when the amateurs make the biggest mistakes. So if you're a player who likes to exploit those mistakes, this is when a good player can just crush it. He can be much better than 50% to double up during this period, just by identifying weaker players who don't want to get knocked out for any reason and bullying them around. So there are two types of meta-game going on now, just so you know. The conventional belief was that the average amateur player is way too tight on the bubble. But then once everyone realized that, it became the opposite, where the average amateur player was way too aggressive-- once the first idea gets priced in, everyone starts to be too aggressive, because everyone wants to do what good players do on the bubble. You can probably identify your table pretty quickly around this time, by whether you see weaker players making really bad calls or really bad folds. That will give you an idea of what situation you're in. To the extent you can, bully people around; but to the extent that other people are bullying you, you don't necessarily need to push back. Let's talk about ICM. The Independent Chip Model highlights when chip value diverges from your dollar value in the tournament-- how Chip-EV, the C here, is different from Dollar-EV in tournaments. It's related to your chances of ending up in the various payout spots, and that's why it's particularly nonlinear. When first place gets everything, Chip-EV equals Dollar-EV, because to win the money you just have to eventually win all the chips, and anything that brings you closer to winning all the chips raises your expectation for the tournament. But when the payouts are spread over several places, it gets more complicated, because in some situations surviving a little bit longer has a material dollar value, and winning more chips in that situation may not be as important as surviving. And we'll go through examples. The Dollar-EV is not symmetric. There's curvature-- this idea of convexity, where winning chips gains you less than losing the same chips would cost you: when you win a lot of chips, each additional chip goes down in value, and when you lose chips, the chips you have left become really valuable. So it makes your threshold for taking a risky decision much higher. In addition, when you factor in the actual utility of money, it's even worse, where big upsides are clearly less good than protecting against big downsides. That's the idea there: losing hurts more than winning. Let's just go through an example of this, and hopefully this will make it intuitive. Say we're playing in this situation: a 100-person tournament with a $20 buy-in, and we're down to the final four people, we have 2,500 chips each, and these are the payouts, where first gets a grand, second gets $600, and third gets $400. If we look at player Adam, what's his equity? What's his expected dollar amount in this tournament, assuming that everyone is of approximately equal skill, which is the underlying assumption here? STUDENT: [INAUDIBLE] PROFESSOR: Yeah, it should be just 25% of the whole prize pool, because he has 25% of the chips.
The whole pot is $2,000, so I expect his EV is $500. And it is-- everyone has a $500 EV. But say that Adam and David go into a coin flip, and Adam beats David. Intuitively, what do you think Adam's equity should change to? I would imagine you'd think he just gets David's equity, because those 2,500 chips were worth $500 of equity in the tournament. So when this happens, how does the equity actually change? You could run it through a poker tracker, and I'll go through the rudimentary math for it. You might think that Adam now has $1,000 in EV, and these guys still have $500 in EV-- and really, you'd think those guys shouldn't gain from this happening, because the prize pool doesn't change, and they don't even change in chip count. But in actuality, Adam only gains about $266 of equity for doubling up here, and the other two actually gain about $100 each from it happening. Let me show you why that is. Let's look at the deltas of these payouts: third place is worth $400, getting to second is worth an additional $200, and getting to first is worth an additional $400 on top of that. So we originally had $500 of value each, but now that one person is out, everyone's guaranteed what? When someone has zero chips, what's everyone else's floor in this tournament? It should be $400-- they're already guaranteed at least third-place money. OK, so now that everyone has $400 locked up, we're just allocating the remainder: deciding who gets the extra money above that floor. You can see these guys each have a 25% chance of getting the extra $600 that first place pays above third, which puts them about $150 above that $400 floor-- and they also have chances at the extra $200 for second, which is how they get to roughly $617. So that's where it comes from. The reason for this is that the winner does not eventually get all of the equity-- the winner doesn't end up with $2,000. He ends up with $1,000, which is less than adding up the equities from every player. Really, he's giving value out to the second and third place finishers. So that first spot is probably the worst value when it comes to your equity per chip, because if it were winner take all, the equity would be linear in chips, and who would care? But the fact that he's giving money to players who don't have all the chips at the end makes piling up a huge stack relatively bad. And since Adam is really close to first place, he's the one getting hurt the most. You would think fair value would be $1,000, but he's well short of it. Whereas these guys just gain from it, because their most likely finishes are second and third, and they're capturing some of that value.
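Here's a small Python sketch of the standard ICM calculation just described (my own implementation and function name, not code from the course), which reproduces the numbers above. The chance of any finishing order is the product of each player's share of the live chips as you draw the finishers from the top down.

from itertools import permutations

def icm_equity(stacks, payouts):
    """Independent Chip Model equities by brute force over finishing orders.
    Fine for 3- or 4-handed examples; real trackers use smarter recursions."""
    n = len(stacks)
    pay = list(payouts) + [0.0] * (n - len(payouts))  # unpaid places get $0
    equity = [0.0] * n
    for order in permutations(range(n)):
        prob = 1.0
        remaining = dict(enumerate(stacks))
        for player in order:
            prob *= remaining[player] / sum(remaining.values())
            del remaining[player]
        for place, player in enumerate(order):
            equity[player] += prob * pay[place]
    return equity

# Four even stacks: everyone is worth $500.
print(icm_equity([2500] * 4, [1000, 600, 400]))
# After Adam doubles through David: Adam ~$766.67, the bystanders ~$616.67 each.
print(icm_equity([5000, 2500, 2500], [1000, 600, 400]))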
So a satellite is a tournament where some number x of people win a ticket to a bigger event. The World Series of Poker runs a lot of satellites, and I think these tournaments are great, because this is a really difficult situation for people to figure out, and it causes a lot of people to make huge mistakes. Say this is a 100-person tournament where they all buy in for $900 or something, such that first through ninth place each get $10,000, tenth place gets zero, and there are 10 people left. So their equity is just their percentage chance of winning one of those flat payouts. Since they all have the same stack, they're splitting that $90,000 pool evenly-- about $9,000 each-- because with a flat payout and even stacks, no one gets hurt by the curvature for having more chips. Say they're in this situation: blinds are $200-$400, and Irene, in the small blind, moves all in for $2,500 into Jessica, who has Kings. This is one person down until everyone makes money. Jessica has Kings here. So intuitively, what should we do? Yeah, calling seems not that bad. What do I give you here-- what is Irene's range? Irene is pushing anything here, because Irene has an M of like 3, and she's in the small blind, so she's appropriately pushing any two cards-- as we showed earlier, you're definitely supposed to do that, at least when it comes to Chip-EV. So Jessica is 82% with the Kings, and if we look at chip equity, she is crushing it. She is 82% to win $5,000, doubling up, and 18% to lose. So her chip expectation after this call is 4,100, meaning her delta is 1,600: she expects to win 1,600 chips on this call. But what about when we look at Dollar-EV? For Dollar-EV, she has an 82% chance of winning $10,000 and an 18% chance of getting zero. So her expected value after this call is actually $8,200, which is worse than the roughly $9,000 of equity she already has in this tournament. She actually loses money by making this call. In fact, if you put the other person on any two cards, you should even fold Aces here-- you should only pretend to look at your cards before you fold this hand. And this holds for every person in this situation: unless you're more than a 90% favorite to win, which you're not-- unless you can specifically put them on a hand that your Pocket Pair dominates-- you should fold. So how does this play out? Not like it should. In Vegas, there's virtually a 100% chance Jessica calls here, to Irene and Jessica's mutual dismay, but to the benefit of everyone else at the table. But what should happen? If you're playing in this situation, and everyone's playing rationally, how do you think this plays out at $200-$400 blinds? STUDENT: So Irene would just push it, [INAUDIBLE] PROFESSOR: Yeah. So if you're in the big blind, and you're pushed into, what's your calling range? It's 0%. There are no cards you should call with, and every single person at that table has that 0% calling range. However, knowing that your fold equity is virtually 100%, every single person, at the first opportunity, should just go all in. Under the gun will open-push every single hand, and everyone else will fold-- every hand, until the blinds eventually put someone all in against their will. So this is a situation where ICM comes into huge play. People really screw this up, especially live. That's why satellites are one of my favorite types of tournaments to play: every idiot will tell you that calling with Kings here is clearly the right move, when even folding Aces is way, way better than calling with anything. You should definitely do your best to identify opportunities like this, because to the extent that you find a tournament that ends up in these types of situations, it's really hard for live players to get it right.
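A sketch of that Dollar-EV comparison in Python, using the illustrative numbers from the example above (it assumes the winner of the all-in locks up a seat, since nine players remain and nine get paid, and that folding leaves your equity roughly unchanged):

def satellite_call_ev(win_pct, seat_value=10_000):
    """Dollar EV of calling off your stack on the satellite bubble:
    win and you're guaranteed a seat, lose and you get the $0 payout."""
    return win_pct * seat_value

fold_ev = 0.9 * 10_000                       # ~$9,000: just fold and wait
print(satellite_call_ev(0.82) - fold_ev)     # Kings vs. any two: about -$800
print(satellite_call_ev(0.85) - fold_ev)     # even Aces (~85% vs. random) lose value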
Let's jump to late game. I don't have that much to say about late game: you just have to keep stealing and re-stealing. You're probably going to have an M of less than 10, unless you literally just doubled up. Yeah? STUDENT: What is re-stealing? PROFESSOR: Stealing is when you try to steal the blinds, because they're so valuable. Re-stealing is when someone you've identified as a stealer raises your blind, and you re-pop them some percentage of the time. It's like protecting your blind, or in some cases, when you see someone steal before you and you're not necessarily in a blind, just raising against them. This is all going to be pre-flop stuff. Just be conscious of ICM-- maybe it's not a good time to take a coin flip for your tournament life. Don't call with Ace-King here just because you have like a 2% edge against a certain range. OK, so that's it for late game. I really don't find it to be hugely different. Late game just plays like a 10-handed sit-and-go. The biggest difference is that it's as if all the players are playing way above their heads: say you're bankrolled for a $100 tournament with 1,000 people, and you get down to the last 10. It's now as if you're all playing in a $10,000 tournament. So to some extent that changes things, but it should generally be treated as a sit-and-go that just pays out a little flatter. That's it for tournament play. Any questions on that? Otherwise, we can move on to bankroll management. This is really the stuff about poker as an investment. So what's a bankroll? Bankroll has a couple different definitions. It's usually defined as the amount of money you can devote to playing poker, to making poker investments. However, I think it should be defined as the amount of money that you would have to lose to never play poker again-- not because you're necessarily ashamed of how much you lost, but because you lost so much that the stakes you'd be forced down to with your remaining money would be so low that you'd never make that money back. If you can conceivably put $20,000 into poker, and you lose $19,900, you're not going to grind out one-cent/two-cent games until you have that $20,000 back. You would just stop playing poker-- you'd get a normal job and rebuild your bankroll through something else. So that's how I'm defining bankroll: the amount of money beyond which poker is no longer a realistic way for you to make money. This only matters if you're a winning player, because if you're a losing player, you shouldn't be playing at all-- you should figure out how to win, or more accurately, move down to stakes where you're actually a winner. The formula for the right bankroll doesn't work if you have negative expectation, because then you're eventually going to go broke no matter what. Some examples of what this is: for a new player, just think of what's a lot of money to you. If you're new to poker and you lose $10,000, you're probably never playing poker again-- you're a huge underdog to ever make that up, so that's probably the end of your poker career. If you're an amateur, I would realistically consider your bankroll to be your liquid investments: not your mortgaged house, but a portion of the money you have in the bank that you could lose without being homeless. The money you're not investing for the long term-- money that, if you lost it and as a result had to stop playing poker, you'd find that a reasonable outcome. The threshold here is that the bankroll management rules are set up so that you have about a 2% chance of ever losing your bankroll. So that's the ballpark for someone who's an amateur.
And for pros, it's all of that money, plus as much money as they could possibly borrow before they get cut off. Look, if you're Phil Hellmuth-- he's probably worth like $5 million-- and he loses $5 million, he's not quitting poker. He's going to raise another million dollars and start playing again. So his bankroll is probably more like $8 million before every single person says, OK, I'm not loaning you any money, and he has to get a job at McDonald's. So pros are in a different situation, especially when we're talking about staking: you can have no money and have a bankroll of a couple hundred grand, if you have a track record good enough that people will just loan you money. Let's go through some rules of thumb for bankroll management. This is the idea, this was the motivation. Someone did the math on this-- I remember checking it before, and it seemed about right-- assuming a 2% chance to go broke based on your average buy-in, and assuming you don't change stakes. People kind of forget that assumption. If you do change stakes, you're never going to go fully broke. Why? Because when you lose half your money, you drop down to half the stakes. You asymptote toward zero, but you never actually lose all of it-- you just lose a lot, until eventually your hourly isn't worth it. And when you go up in stakes, it's the opposite. Say one of the rules is 100 buy-ins: if you have 100 buy-ins for the main event of the World Series-- $1 million-- and then, as a result of winning it, you start playing $50,000 buy-in tournaments, your risk of ruin isn't still the original 2%; you've taken on a fresh 2% at the new stakes. So there's a multiplying factor if, when you win, you take on more risk, and in the same way, if you lose and take on less risk, there's a dampening factor. So that 2% only holds if you don't change stakes-- and you really should be changing stakes, especially because there's a high correlation between losing and not being a good player at whatever stakes you're playing, and moving down gives you a chance to identify the right place for you to play. So that's what the numbers are. I personally play with more cushion than these guidelines, because I find 2% a bit too high-- I wouldn't be happy with a 2% chance of losing all my money, but some people are. I usually double these numbers. If you're nervous when you play poker, it's either because you're new and you've never done it before, in which case you'll get over it, or because losses actually hurt you to the point where they're always on your mind-- which probably means you're not really bankrolled appropriately. If losing five of those buy-ins would make you stop playing poker, then you didn't really have 20 buy-ins worth of bankroll. You only had 5. So the theory here is based on-- do you have a question? STUDENT: Yeah, when you play a cash game, and say you double or triple your money, should you just walk away, or, because you're playing well and have a bigger stack, should you keep playing? What's the [INAUDIBLE] PROFESSOR: So this is based on average buy-in. Basically, if you double your stack, and a lot of players at the table also doubled, you're now playing a game that's twice as big. So it's not your normal game anymore, especially if they're bad.
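To see where a number like 2% comes from, here's a rough Monte Carlo sketch in Python. The win rate and standard deviation (in buy-ins per session) are illustrative guesses of mine, not figures from the lecture, and it assumes you stay at fixed stakes:

import random

def ruin_probability(buyins, win_rate, std_dev, sessions=1500, trials=3000):
    """Chance of ever hitting zero at fixed stakes, modeling each session's
    result as a (roughly) normal draw in buy-in units."""
    ruined = 0
    for _ in range(trials):
        roll = buyins
        for _ in range(sessions):
            roll += random.gauss(win_rate, std_dev)
            if roll <= 0:
                ruined += 1
                break
    return ruined / trials

# A 20-buy-in roll, winning ~0.1 buy-ins/session with ~1 buy-in swings,
# comes out around 2%; doubling the roll makes ruin vanishingly unlikely.
print(ruin_probability(20, 0.1, 1.0))
print(ruin_probability(40, 0.1, 1.0))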
These numbers go way down if your ROI or win rate goes up materially. If you're beating a game by a lot more than average, you can get away with fewer buy-ins. So if you're in a situation where the other players are bad, I would just stay there. If you're in a situation where they're good, then as soon as I doubled up, I would change tables. STUDENT: So it depends on the table. PROFESSOR: You can't take money off a table in a casino, but in general, I wouldn't be too happy about having more at risk than my normal risk metrics dictate. The theory here is based on the Kelly Criterion. Kelly was a big figure in information theory back in the 1950s. His idea is that, if your utility curve is logarithmic, you maximize your utility by betting in proportion to your edge. For example, in an even-money bet where you're 60% to win, Kelly says to bet 2 x 0.60 - 1 = 20% of your bankroll. And he proves it out-- you can look up his paper; it's pretty famous. He's been getting some flak, because logarithmic utility is probably not a great assumption. This is used in blackjack, in particular, where the crux of counting cards is that when the count is in your favor, you bet more, such that eventually your wins outpace your losses. And in investment management it's the same idea, where you put more money into what you consider your higher alpha-generating ideas.
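A minimal sketch of the Kelly fraction in Python (this is the standard formula; the function name is mine), covering both even-money bets and bets at odds:

def kelly_fraction(p, b=1.0):
    """Kelly bet as a fraction of bankroll: f* = (p * (b + 1) - 1) / b,
    where p is your win probability and b is the net odds you're paid
    (b = 1.0 for an even-money bet)."""
    return (p * (b + 1) - 1) / b

print(kelly_fraction(0.60))        # even money, 60% to win -> 0.20 of bankroll
print(kelly_fraction(0.55, 2.0))   # paid 2-to-1, 55% to win -> 0.325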
The World Series of Poker main event is an example I find really striking, because it's the biggest live tournament in the world-- 6,000 entries at $10,000 each. So the "right" bankroll is something like $1 million for this one tournament, and given the variance of a $10,000, 6,000-person tournament, you're not getting good returns on that $1 million. I always found this to be a paradox: for someone who actually plays poker for money, no matter how soft the World Series is, you're not getting good risk-adjusted returns, especially when you count overhead. However, the average player at the World Series is still really bad, and you have good upside in the outside benefits of making big live scores. Chris Moneymaker has made way more money from the sponsorships that came with winning the World Series than from the prize itself, I'm sure-- and he won, I think, probably more than $1 million; it wasn't huge then. But there are pretty big upsides there. For the summer tournaments, it's very common for pros to use some risk management techniques. One is staking. If I have an investor with more than $1 million, he can allocate it across, say, 10 players and have a somewhat diversified investment. The common deal, just so you know, is the backer getting 50% of the upside: they get the first $10,000 back, plus 50% of the remainder. And to the extent that it's a long-term staking deal, you don't get money until they get their money back. So if we play two of these, and I lose the first one, they get $20,000 of the second one, plus 50% of the remainder. That's a pretty common staking deal, and it generally works out: setting aside frictions like trustworthiness, there's a lot of overhead, but the player is grinding out a high dollar amount, and the investor is getting good diversification in his portfolio. And it's an equity investment-- you don't owe this money back if you lose it. They're just partnering with you in the tournament. Something that's more common recently is selling shares and trading percentages. Players create some sort of syndicate-- I think it's all handshakes-- but it makes them pretty diversified. In addition, players will just sell shares of themselves to outside investors: they'll cut their next 10 tournaments into pieces and try to raise, say, $15,000 by selling it off in $1,500 chunks. So it lets a relatively small investor diversify across a lot of different players. This makes a lot more sense from a finance standpoint, in that you keep the same expected dollar amount but reduce the variance by quite a bit. Counter-party risk is important, certainly for staking. If you stake someone and they don't play the tournament, or you have to worry about them not paying you if they win, that's a huge friction. In addition, if you play in underground card rooms or play online, you may not get your money back, and you should keep that in mind: if you think the club you play in has a 1 in 10 chance of getting raided on any night you play, you should probably reduce your expected gross winnings by 10% and factor that in. The current poker environment: online poker basically took off back in 2003, when the World Series of Poker started investing a lot of money to build up the publicity. They invested in hole-card cams and really built it up. And then Chris Moneymaker-- arguably the worst poker player in the field, yet the most charismatic-- won the tournament, and he was a great ambassador. The following year, someone else who won his way in through a satellite also won-- Greg Raymer, who was a pretty good player-- so poker blew up, and then online poker blew up. And it was great, because your average Joe college student could just load $50 onto PokerStars and lose it to me. That was really good for about five years, and then the natural course of the game is that players get better. Then Black Friday happened a couple years back, where Full Tilt, one of the poker sites, turned out to be running what amounted to a Ponzi scheme with player deposits, so anyone who had money on it had it confiscated, and they possibly got a percentage back, although I'm not sure exactly how that worked. Then all of the poker sites got banned from the US. So online is kind of gone, and that was a bit of a nail in the coffin. Poker has contracted quite a bit. At the World Series, you used to be able to sit down at a table with six amateurs, one pro, and two guys reading Poker For Dummies. Now it's like 50-50. For a $10,000 tournament, it's still pretty soft, but I played a cash game there a couple years ago where 10 people at my table said they were professional poker players. Probably a couple were lying, but it's much different than it was before. However, if you use good game selection-- the side games at the World Series are soft, and low stakes games are always soft-- I find poker can add a lot of value to an otherwise good, separate career. It diversifies your own investments, especially if you handle the bankroll stuff properly. And that's it. So thanks a lot, everyone. [APPLAUSE]
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, so welcome back, everyone. This is going to be the first class where we actually talk about poker strategy, so this should be pretty exciting. The first thing we're going to learn about is position, and this is only three slides long. So positions have different names, and they're put into different groups, and a lot of how we describe what's going on in a particular hand is going to be relative to where people are sitting. Why? Because people in late position get to act after people in early position. In general, the positions are broken into four groups. There are the blinds, who pay the blinds and are first to act on every street after the pre-flop; early position; middle position; and late position-- where all these positions have names, except middle position. Starting to the left of the big blind, we call the seats under the gun, under the gun plus 1, and under the gun plus 2. You could also describe these as seat 1, 2, 3, 4, all the way to 9 or 10, although I don't really like that, because they all have pretty unique names, and those are descriptive enough to just use. Middle position is labeled one, two, and three, and then around the button you describe seats by their relation to the button: either you are the button, or you're the cutoff, right before the button. Some people get a little crazy by also naming the hijack, but I tend not to do that. And as we eliminate people, we get rid of the least interesting position labels and keep only the ones with real names. The reason I'm telling you this now is that we're going to be going through hands where I talk about players by their position. The individual person doesn't matter, but it's much easier to understand what's going on when I refer to them as the cutoff, or the button, or whatever. In general, later position is better, because you get more information: you get to see people act before you. And as a result, the money flows, in general, to the late positions. The hands where you're on the button are the hands where you make the most money, and you can see that in a poker tracker. If you're losing money on the button, you should seriously reevaluate how you're playing it, because that's where you should make the most. The big blind is an interesting situation, because you get to see the flop at a discount-- you're compelled to pay some sort of bet anyway. So you might think the blinds are in a good position because they get a cheap flop, but they're actually in a terrible position, because post-flop, when position matters, they are first to act in every single situation. So even though you're getting a discounted entry into the hand, it's a hand where you're almost certainly going to be at a major informational disadvantage. However, interestingly, in short stack situations, early position is actually better, because you have the opportunity to go all in before the other person does, and you keep the equity that comes from aggression-- the fold equity, which we'll talk about later.
It's sort of like a game of chicken. Chicken is a game where two people drive at each other until one person swerves: there's an infinitely bad payoff if neither swerves, and one player wins if one swerves and the other doesn't. So the proper strategy in chicken is to throw your steering wheel out the window, so the other person knows he has only one option if he doesn't want the infinitely big loss. Position works very similarly to that: in a tournament where neither person wants to see a showdown-- neither person wants a coin flip for their tournament life-- if you're in early position, you have the opportunity to be the aggressor and go all in first, so you get to discourage the other person from entering the pot at all. So let's move on to some basic concepts. A lot of these things are based on odds. Poker is a statistical game, and we're going to be talking about applications of math to poker. So why does drawing matter? Drawing means you're trying to make a hand: cards still to come could give you a really good hand, even though you don't necessarily have a good hand right now. In a really common situation, one guy has an OK hand, and one guy has nothing but the potential to make a really good hand. Most of the decision points come down to whether the guy with nothing has the equity-- the right price-- to try to make his real hand. Really common examples: one person has a pair or two pair, and the other has a straight or flush draw; or pre-flop, someone has a pocket pair, and someone else has literally anything else and is trying to make something better than that pair. So what the drawer-- the guy without a real hand-- has to do is decide whether the bet he's facing, whatever he has to pay to see more cards and find out if he makes his hand, is worth the cost. And the person who already has a hand wants to make it so that the drawer cannot see those cards at a positive expectation. He wants to bet so much that a call is bad, because that's where his equity comes from. So he can either bet enough that the other guy folds, or bet enough that the other guy calls and makes a huge mistake. Both are equally good-- actually, the second is probably better. AUDIENCE: You're saying [INAUDIBLE]. PROFESSOR: So I'm saying a drawer is someone who has a flush draw or a straight draw-- basically no real hand at showdown, but a reasonable chance, as more cards come out, of making a monster hand that will almost certainly win that showdown. OK, so let's go through a scenario. This seems pretty straightforward. There's some sort of bet pre-flop. I called, and it was heads up. The flop gave me four to a flush-- four hearts between my hand and the board-- and this guy bet into me. The question is, what do we do? That's a big question. I'm going to be using this format a lot, because it's easier-- at least for me-- to see, and hopefully it's something you guys will pick up on. I'm only going to include relevant information, and the cases are going to be written in this format, where we have the relevant stacks up here. Here are the blinds: this means the small blind is $20, the big blind is $40, and there's a $10 ante. This is the pot before anyone does anything, and this is the pot as of the flop. These are my cards. The hero is whoever we care about; the villain is the other guy.
And this just shows the order of what happens. So here he raised to $120 pre-flop-- three big blinds. I call. The flop comes eight of hearts, three of hearts, and something that doesn't matter. He bets $370, all in. So my decision is, what can I do here? This is a really common scenario, and we can develop the tools we need to figure out what to do. And rather than just "should we call," we can come up with a much more resilient answer: what's the biggest bet that we can call? We're going to end up with a solution set-- this area here. That's what we want to figure out. But first we need to develop something called expected value. Expected value is the same in poker as it is in math: it's just a probability-weighted average of all possible results. It's win percentage times win amount minus lose percentage times lose amount. So in our scenario, we're going to add some variables. We're facing a bet into a pot of $380. Our EV is going to be whatever chance we have to win, times the pot of $380 plus whatever the bet is, x, minus our lose percentage-- which is 1 minus the win percentage-- times that same variable x. And our threshold for calling is where EV equals 0. So pot odds is generally what we call the relationship between the size of the bet you're facing and the pot you'd win if you call that bet and then win the hand. So this is the equation: it's plus EV-- positive expectation-- if your chance of winning is greater than the call amount divided by the size of the pot after the call. Say that we're facing a bet of $100. We were actually facing a bet a little bigger than that, but for example purposes, we'll use $100. So your pot odds would be $100 divided by $580, where $580 is whatever was in the pot before, plus his bet, plus your call: you'll win $580 if you win this hand. So your call is contributing 17% of the pot. And just so you guys know, people use pot odds in a different way-- they talk about 4 to 1 and use a different notation for your chance of winning-- but I always thought this version was more intuitive, so it's what I'm going to teach you: the percentage of the pot that you're contributing. So if your win percentage is more than 17%, this is a plus-EV call, and that should be fairly easy to wrap your head around. And your win percentage can just be calculated from the cards that will make you win, divided by the cards left in the deck. Those are called outs: cards that, by your best estimate, result in a win for you. So when you're going for a flush: there are 13 hearts, and you already know about four of them-- they're either in your hand or on the board-- so there are nine hearts left that you could hit to make your flush and presumably win. So your win percentage-- calculating it out exactly-- is 1 minus your chance of missing the flush on both of those cards. So it's 1 minus 40/49 times 39/48, which is about equal to 34%. Since this 34%-- our chance of winning-- is more than the 17% of the pot we're contributing, this makes it a good call. And the fact that 34% is much bigger than 17% makes it a really good call.
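Here's a small Python sketch of both calculations (my function names; the "biggest callable bet" comes from setting the EV expression above to zero and solving for x):

def pot_share(call, pot):
    """Fraction of the final pot your call contributes: call / (pot + 2*call)."""
    return call / (pot + 2 * call)

def max_callable_bet(win_pct, pot):
    """Biggest bet x we can call: solve win% * (pot + x) - (1 - win%) * x = 0.
    With 50%+ equity, every size is callable."""
    if win_pct >= 0.5:
        return float("inf")
    return win_pct * pot / (1 - 2 * win_pct)

print(pot_share(100, 380))          # ~0.17, so we need >17% equity to call $100
print(max_callable_bet(0.34, 380))  # ~$404: at 34%, even the $370 all-in is a call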
So this is how I think of it in terms of visualizing it. This whole pie is the $580 pot that it would be if you called. This chunk is your 34% in pot odds, and that chunk is made up of the size of the bet you're calling plus your expected value from calling. Here, the size of the chunk is $197, which is 34% of $580. We can contribute up to that amount, and if we get to contribute less than that, the additional piece is EV: we are making $97 by making this call. Similarly, if we make a call that's too big, we end up with a negative chunk of that pie. Now I'm going to teach you a quick rule for estimating your chance of winning any hand, and the quick rule I'm going to use is by Phil Gordon. So let's talk about Phil Gordon. He seems like an OK guy: he got fourth place in the main event, he won a World Poker Tour event, he won two British championships, he's the head referee of the World Series of rock, paper, scissors-- these guys get into really interesting things when they're not playing poker-- and he's the author of Phil Gordon's Little Green Book. Phil Gordon popularized this thing which caught on, called Gordon's rule of two and four, which basically says each of your outs is worth 2% for each additional card you get to see on that bet. And it should be fairly obvious where 2% comes from: it's just 1 divided by 50, a rough estimate of what each out is worth over the 47, or 48, or however many unseen cards are left. If you get to see both the turn and the river, you use 4% per out, and that's the whole rule. I'm sure someone figured it out before him, but he was nice enough to coin it and write it in his book, which is why I'm giving him credit for it. Some examples: if you have a low pair and you're trying to make three of a kind by the turn or the river, you have two outs. If you're trying to figure out your chance of making that three of a kind on the turn alone, you do two outs times 2%, for a total of 4%. Simple enough. Other common examples: a flush draw is nine outs, giving you odds of 9 divided by 47, or about 18%, on one card; an inside straight draw is four outs, giving you 4 out of 47, or about 8%. And you can see the exact calculation is really very close to just multiplying by 2. So back to pot odds. Your break-even is when EV is 0-- that's a common theme we'll keep coming back to. So the bet is x into a pot of $380. Your chance of hitting the flush is 9 outs times 4%, or 36%-ish. We're assuming we get to see both cards. Why do I think we get to see both cards? Because he's all in, and he can't bet anymore. So our win percentage is 36%. Our exact win rate is 34%, showing that this is pretty close: we didn't actually need to do any heavy math to get a good ballpark number. So the question here is: we're facing a bet of $370, the pot before that bet is $380, and should we call? You can solve the threshold conceptually to get a resilient solution set, especially when you're analyzing things before or after the fact, but in real time, we're going to want a simple rule for figuring this out. So let's talk through this one, and then we'll go through the solution on the next slide. We have to figure out whether to call this. So what are we drawing to? A flush. So how many cards will result in a flush here? Nine, right: there are nine remaining hearts in the deck. And then do we get to see one or two cards? AUDIENCE: Two cards. PROFESSOR: Yep, I agree. We get to see two cards, because he's all in. So our chance of winning is 4% times 9: 9%, 18%, 36%. So we can call up to 36%-- we can contribute up to 36% of the final pot.
So we would contribute $370 into a final pot of 2 times the bet plus the $380 pot. And just offhand, you can figure out that's around 1/3, because the pot is about equal to the size of his bet. So we're contributing a little bit less than 33%, and since we're 36% to win, we know this is going to be a good call. And that's how you do this in real time: you say you're 36% to win, you're contributing 33% of the pot, so you decide to call. That's how you make this decision. So let's do a couple more examples. These are all different situations where this type of thing might come up. Here's a situation where we have asymmetrical stacks, although the blinds are the same. We have six, seven of diamonds-- I'm using the four color deck just to make it easier to see. Something happens pre-flop that doesn't really matter. On the flop, there's $320 in the pot, and he bets $150. So what do we do here? What are we drawing to? We're drawing to a straight. So how many outs do we have-- how many cards will hit that straight? AUDIENCE: Eight. PROFESSOR: Eight, yeah. Four nines and four fours will make our straight, so eight outs total. So what's our chance of winning this hand if we call here? AUDIENCE: 32%. PROFESSOR: Yeah-- 8%, 16%, 32%. I agree with that. So based on that, what percentage of the future pot do we have to contribute to stay in this hand? AUDIENCE: Less than 1/3. PROFESSOR: Yeah, something less than 1/3, because if the bet were exactly $320, that would be exactly 1/3. So we know this is less than 1/3, and since we're 32% to win, this is probably going to be a good call. So going through the questions: we have an open-ended straight draw, meaning we have eight outs, because two different card ranks would complete the straight. Our outs are any nine and any four. We have about a 32% chance of hitting it, and what's the correct play? Call, because $150 out of $620-- where $620 is the pot plus the two $150 bets-- is 24%. OK, so that wasn't bad. So those are two common draws: one was a flush draw, and one was a straight draw. Now let's go to something a little bit different. We have five, five on the button. He raises into us, I call, and the flop comes three clubs: five, ace, six. He bets $200. OK, cool. So this is a situation which I'm sure a lot of you may have run into recently. What hand are we drawing to here? Why do we think we might be behind with three fives? AUDIENCE: [INAUDIBLE] two of clubs [INAUDIBLE]. PROFESSOR: Yeah, he might have a flush-- certainly to the point where I'm not super comfortable with the set here, knowing that it's reasonably likely for someone to have a flush. And even if he doesn't have a flush and we bet, he's really only going to call us if he has a flush or a better hand. So if he has a flush, what are we drawing to? What beats a flush here? AUDIENCE: Full house. PROFESSOR: Full house, good. What else? AUDIENCE: Four fives. PROFESSOR: Yep, four of a kind. OK, so what are our outs here? AUDIENCE: Seven. PROFESSOR: Yep, I agree-- seven outs. What are they? [INTERPOSING VOICES] Yep: three aces, three sixes, one five. So we have seven outs total. So for our chance of hitting, do we count one or two cards here? AUDIENCE: One. PROFESSOR: One. Why? Because he has a lot of chips behind, and there's no way, if he's betting this on the flop, that he's giving us a free card on the turn-- unless for some reason he thinks we have the flush, and we certainly can't count on that. OK, so what did we say? Seven outs.
So we use 2% per out for the next card, or 14%, meaning we can call up to 14% of the future pot. The future pot here is going to be around $2,000 or a bit more, and 14% of $2,000 is $280. He's betting materially less than that-- he's way under-betting whatever he has here. If he has a flush, he's not protecting it; if he doesn't have a flush, he's losing. So this is a very common example of a villain not protecting his hand, and it's a spot where I see a lot of newer players screw up. They bet little because they don't want the other guy to fold, but they're actually losing value, because the other guy folding would be preferable. They should bet enough that he either folds or makes a wrong decision by calling. So: we're drawing to a full house or four of a kind, which you guys got right. Our outs are three aces, three sixes, and one five, for seven cards total. Our chance of hitting the draw is 14%, so the correct play is? AUDIENCE: Call. PROFESSOR: Yep, the correct play is call, because he's only asking us to contribute 9% of that pot. Since we're 14% to win, our chunk of the pie is bigger than the 9% chunk we have to contribute, and the result is this $122 of free EV that he's giving us. OK, so I think this is my last example, and this one should be a little bit more fun. So this is it. So why is this a draw we're looking at, when we're the first one to act? Why does this matter? Can anyone tell what's going on here? AUDIENCE: Big blind [INAUDIBLE]. PROFESSOR: Yeah, the villain here is all in blind with that $200, because that's the big blind. So by calling here, or by doing anything, we're putting him all in-- really, it's as if he acted before us, and now we're deciding whether we want to act. So what are we drawing to? What are we facing? What does he have, in terms of a range? AUDIENCE: Anything. PROFESSOR: Any two cards. So what are we drawing to? In general, we're drawing to basically anything-- we're hoping we win some amount of the time. What percentage do we have to win? First, let's start with some hand-versus-hand percentages that you know. What's aces versus anything? AUDIENCE: 80%. PROFESSOR: Yeah, it's like 80% or 85%. And if he doesn't have a pocket pair higher than both of our cards, then even when you're dominated you're generally around 70/30, and the majority of random-versus-random matchups are within 60/40 in either direction. So what percentage of the time do we have to win here for this to be a good call? What's the size of the bet that we're facing if we're the small blind? $100, right? So we're contributing $100 here to win a pot of $400-- one big blind from each of us. So if we're more than 25% to win here, this is a plus-EV call. I see a lot of people screw this up for some reason, but you're virtually always ahead of 25% here. So what we're drawing to here is anything, and we're actually about 40% against his range. Even the worst heads-up hand versus any two cards is 32%, so even calling blind there is fine-- we are always ahead of the price against his range. So the correct play is certainly going to be a call, and the EV is about $60: folding this away costs about $60 in chips.
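As a quick check on that $60 number, here's a sketch in Python (same EV formula as above; the function name is mine):

def call_ev(win_pct, final_pot, call_amount):
    """EV of calling, measured against folding: you put in call_amount
    either way, and take down the final pot win_pct of the time."""
    return win_pct * final_pot - call_amount

print(call_ev(0.40, 400, 100))  # +$60 with ~40% equity vs. a random hand
print(call_ev(0.32, 400, 100))  # +$28 even with the worst heads-up hand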
So let's talk about implied odds. The solution to an implied odds question is the number of chips that we have to win after hitting our draw. I'm using that specific language because, for pot odds, the solution is whether or not you can call, or what the maximum bet is you can call; for implied odds, it's different. It's the number of chips you have to win later to make the call good-- the amount of basically dead money you need added to the pot after the fact. The way we do that is we take our percentage chance of winning-- say it's 18%-- and figure out what size the pot would have to be to make the bet we're currently facing be 18% of that pot. So here's an example, using easier numbers, because we're dividing by percentages. Say we have a flush draw and we're 18% to hit. If the pot is $300 and we have a bet of $180 into us, our call is going to be 27% of the pot. So if we had a 27% chance of winning, that would be a break-even call-- but we don't. We have an 18% chance of winning. So by pot odds alone, it says don't call. But to figure out how big we'd need the pot to be, we just divide that $180 by our 18% odds to get this $1,000 number. If the pot were going to be $1,000, we could make that call. So the solution here is this $340 difference: the gap between the actual pot after we call-- $660-- and the $1,000 pot we need to make this call neutral. That's where the $340 comes from. And it has to be dead money-- money added to the pot after we already hit our flush. To visualize: we need that $180 bet from the example I just gave to be 18% of the pot; that's what makes it a good bet. As of the time we make the decision, our call represents 27% of that pot. However, if we can increase the pot by $340, that bet would be 18% of the new pot, and that gives us the right implied odds to make this call. And what we need to figure out is whether that $340 number is realistic-- the difference between the $1,000 and the $660. So are we following that? Is it clear what we're trying to figure out when we're doing an implied odds question? AUDIENCE: Yes. PROFESSOR: OK, cool. So I have two or three examples here just to walk through that idea. Here's a hand-- here's a decision we're facing, and we need to figure out whether this is a good call. We have plenty of chips behind; we all started with $1000. And we're probably not winning this hand as-is, because we have middle pair, so we're drawing to two pair or three of a kind. Our outs are these-- five outs total-- which gives us what chance of hitting our draw? Do we get to see one or two cards? AUDIENCE: One. PROFESSOR: Right, one card, because presumably he's going to bet again. So we multiply by 2% to get a 10% chance of hitting the draw. Then let's go back to the method: what does the pot have to be to make this $100 bet 10% of the future pot? AUDIENCE: $1000. AUDIENCE: $900. PROFESSOR: Well, it needs to be $1,000, because we're contributing $100 toward some pot that we have 10% equity in. So it needs to be $1,000-- which means how much additional money do we need added after we call? After this call it's going to be $575-- the $475 pot plus our $100 call-- and it's the delta between that and $1,000 that we care about: $1,000 minus $575. We need to win $425 more at the end. So: we have a 10% chance of hitting.
Our pot odds requirement is about 17%, meaning we can't call on pot odds alone. However, if we take that $100 bet divided by the 10% odds we have, it implies a $1,000 target pot, against the $575 actual pot-- a difference of $425. So we need $425 more added after we hit our draw to make this a good call, which in this situation seems reasonable. He bet $100 into a pot of roughly $400, so presumably he'll bet like $200 or $300 on the next street, and then we can re-pop him with anything. Even if it's a min-raise, which he'll presumably be obligated to call-- especially because this is a very hidden draw-- we'll be able to make this a good call. So I think this is reasonably a good call, based on thinking we could get $400 or $500 more at least. So let's do another one of these. I'm going to keep the position and the pre-flop action the same, just to make it simple to see what's going on. OK, so here, let's go through the same steps. What are we drawing to here? AUDIENCE: Straight flush. [INTERPOSING VOICES] PROFESSOR: Yeah, several things. We're drawing to a straight, we're drawing to a flush, and I would agree we're drawing to a royal flush also. And I'm going to say the over pair might not be good-- one pair I wouldn't consider a great hand here. What were the blinds here, $50, $100? So we have an M of like 50, something like that. I think our top pair is not that great here, but I do think the flush is good-- probably even a king-high flush is good-- and then the straight is good, too. So how many outs do we have here? How many outs to the flush? AUDIENCE: Nine. PROFESSOR: Right, so we have nine other clubs in the deck. And then how many outs to the straight? AUDIENCE: Eight. PROFESSOR: Eight, right. So we have 17 outs-- and how many are overlaps? AUDIENCE: Two. PROFESSOR: Two, right. So let me make sure I got that right: 9 plus 8 is 17, minus 2-- yep, 15. So we have 15 outs here. And then how many cards are we going to see? AUDIENCE: One. PROFESSOR: We're going to see one. I really wouldn't assume we're going to see two cards unless someone is specifically all in, so use one card here. We have 15 outs over one card, so what's our percent chance of winning on that next card? AUDIENCE: 30%. PROFESSOR: 30%, good. So what would the pot have to be, eventually, to make this a good call with our 30% chance of winning this hand? AUDIENCE: $1800? PROFESSOR: It would be $600 divided by 30%. So what's that? We're multiplying by 10/3, so it's $6,000 divided by 3, or $2,000. Would you agree with that? So this pot has to be $2,000 by the end. Now, what's it going to be when we call here? AUDIENCE: $1425. PROFESSOR: Yeah, let's see: his $600 plus our $600 is $1,200, plus the $275 pot. Yeah, $1,475. So how many additional dollars do we need in the pot after hitting one of our draws? AUDIENCE: [INAUDIBLE]. PROFESSOR: Good, right. So I think that's right. We're drawing to a straight or flush: any ace, any nine, and seven other clubs that aren't an ace or nine, for 15 outs. We are 30% to hit this. Right now the pot odds are about 40%, because we'd be contributing $600 into a total pot of $1,475. So we need to win an additional $525 afterward to make this a good call. So that's it-- that's how you do implied odds.
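Here's the implied-odds arithmetic from these examples as a Python sketch (the function name is mine; the formula is the one just described):

def extra_needed(bet, win_pct, pot_after_call):
    """Implied-odds shortfall: the dead money you must win after hitting.
    Target pot = bet / win%; subtract what's actually in after your call."""
    return bet / win_pct - pot_after_call

print(extra_needed(180, 0.18, 660))   # $340: the warm-up flush-draw example
print(extra_needed(100, 0.10, 575))   # $425: middle pair drawing to trips/two pair
print(extra_needed(600, 0.30, 1475))  # $525: the 15-out straight/flush combo draw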
Just make sure you understand what the future pot has to be, and then you can use your own judgment about whether that's a realistic amount to win. I think winning another $500-plus here is totally reasonable, because he already bet $600: even if a flush card comes, he's probably pretty obligated to make at least another $500 bet, or at least a $500 call if he checks. So I think that's good. To make it a little simpler for you guys, I've made explicit all of the formulas we went over for drawing, just to help with the cases. In the normal EV formula, x is always going to be what we're solving for. Our EV is just the benefit or cost of the decision we're facing: the combination of our win percentage and loss percentage with the win amount and loss amount. Pot odds is just a decision rule-- yes or no, do you make this call? It's whether your win percentage in the hand, the chance of hitting your draw, is greater than the call amount divided by the pot plus 2 times the call amount (the bet amount and the call amount are the same thing). If it's greater, you make the call; if it's less, you fold. Implied odds, which we just went over, is the bet amount you're facing, divided by your chance of winning the hand, minus whatever the pot is going to be after you make that call. I think that's it-- these are all the formulas you need to make these decisions. You can generally remember them when you're at the table; they're fairly intuitive, and if not, they seem fairly easy to memorize. Anyway, let's do a live example of this. This hand happened at the World Series of Poker last year when it was 10-handed, meaning one hand before the final table bubble. How it works at the World Series is they play down to nine, and then they have a break for three months while they build up the final table, advertise it, and play it out live. So this was a very tense situation for these guys, and an interesting hand happened, which I think is a great example of what we're trying to do. Anyway, let's watch. [VIDEO PLAYBACK] The very first year of the World Series in 1970: no final table; the champion was determined by a vote of all the players. Johnny Moss was the winner. Under the gun, Martin Jacobson, ace, jack of clubs-- a very accomplished tournament player, four World Series final tables. Raise. $650,000. The dealer announces raise, but I don't think Martin has the right denominations out there. Hold on, hold on, hold the action. Just a call. So they're making it just a call for $300,000. By the way, that was World Series dealer of the year, Andy Tillman. Frankly, I think the dealer of the year thing has gone to his head-- he's dealing with a lot more attitude now. One of these players will join the likes of John Hewitt, Jordan Smith, and Don Barton as main event 10th place finishers. Action on to William Tonking: jack, nine in the small blind. He wants to play; he limps in. In the big blind, Dan Sindelar checks his option. Three for a bargain. And here is our flop: 7, 8, 10, two clubs. Tonking with a jack-high straight. He checks it to Sindelar-- middle pair with a gut shot-- and he's reaching for chips; he bets a half million. Jacobson with flush and straight draws. If Jacobson had raised under the gun as he intended to, Tonking likely would have folded.
Instead, they're now on a massive collision course that could define the November Nine. Jacobson obviously loves his hand, with straight and flush draws. Unfortunately, he's run into Tonking, who flopped a straight-- but there is a raise, to $1,750,000. So the 2% hand bets, and the second-worst hand raises. Lon, this is a game I need to be in. A dream scenario for the short stack that could still turn into a nightmare for William Tonking. All in. And Tonking announces all in. Sindelar folds. [END PLAYBACK] PROFESSOR: So let's figure out what's going through his head right now. Here are all our players; that's our hero with ace, jack of clubs. It's a little hard to see on the broadcast, but he was under the gun. He called, it got called around, Sindelar bet, Jacobson raised, and Tonking check-raised all in-- and now Jacobson is facing a decision here. So clearly, what is he drawing to? A flush-- and if he hits that flush, is he going to win? Probably. And what else is he drawing to? AUDIENCE: Straight. PROFESSOR: A straight, right. And if he hits a nine, he's probably going to win with a straight, although not all the time, because he doesn't have the best straight: if a nine comes and the other guy has jack-queen, he's actually going to lose. So the question is, what does he do here? This is what it looks like. Our hero raises to $1,750 (these numbers are in thousands), and the villain re-raises $4,525 more, to be all in for $6,275. So he's drawing to a flush and possibly a straight. How many outs do we have? Well, you can count partial outs-- you can say, I'm going to win half the time if I hit this, just to be conservative. So you can say all nine clubs are good, because you'd have the best possible flush, and maybe the nines will work, so let's count each of those as half an out. So we have 10 and 1/2 outs. So, our chance of hitting the draw-- how many cards do we get to see? AUDIENCE: Why 10 and 1/2? [INAUDIBLE]. PROFESSOR: Because you can just say, if we hit a nine, we'll win half the time. We're probably going to win more often than that, but it's a situation where if he has a jack, we split, and if he has jack-queen, we lose. So I'm not really comfortable counting those as complete outs-- and in the end, you can see it doesn't really matter. You can count them as half outs, or 2/3 outs, or something like that. Anyway, we get to see both cards, because he's all in. You have a question? AUDIENCE: This would actually be a [INAUDIBLE] this will always-- when it's half [INAUDIBLE] that the other person has a jack. So under that condition, [INAUDIBLE] under all other conditions of this [INAUDIBLE]. PROFESSOR: No-- we lose if he has jack-queen. AUDIENCE: Right. If he has a jack-- he can have jack-queen. That's fine, but if he has a jack, then it's the 1/2, and if he does not have a jack, then any nine wins. PROFESSOR: Yeah, that's right. This is a conservative count. AUDIENCE: This is the worst case scenario. PROFESSOR: Yeah, I would agree. If this says call, then we're definitely calling. It's a real pain to use aggressive estimates, have them say call, and then have to wonder whether your estimates were wrong. So this gives us a clearer answer. Anyway, the correct play is going to be to call here.
It's a little bit difficult to see, but we're going to say that what's in the pot are all the bets that happened before we were re-raised. So that's the original pot of $1,400, the one guy that bet $500 for some reason, and then this, which would be our all-in call-- the $6,275 times 2, because he bet that, and we called that. This was the small blind-- I don't know if you saw. The small blind here just called $500 and then folded when he bet into him, so that's dead money in the pot. So the total amount is $14,450, and we're facing a bet of $4,525, so we're contributing 31% of the pot. We're 42% to hit our draws, meaning that this is a pretty clear call. And when we do the EV, even with this conservative estimate, it says we're making about $1.5 million in chips for making this call. So this should be pretty easy. Let's see what happens, and let's see if this works. [VIDEO PLAYBACK] Boo. [END PLAYBACK] PROFESSOR: Anyway, OK. So he won that. The guy, Jacobson, I'm pretty sure, ended up winning the World Series that year. OK, so we have a bunch of be-carefuls. Do not draw to a hand that may not actually win when you hit it. If you're drawing to a flush that's not even that good and may be dominated by another flush, you probably shouldn't count all of those as full outs, and the lower end of a straight is really, really bad. It's really common for people to draw to that and then just go broke, because they think they made their hand, but as it turns out, they made the second best hand. In addition, don't draw to a hand that's worse than a made hand that's already possible. So people refer to something called a paired board, which means two cards on the board have the same number. That means four of a kind or a full house is possible. So if you're drawing to a straight or a flush, you might be drawing dead, as it's called-- you might be 0% to win that hand-- so be careful about drawing on a paired board. In addition, do not assume you get to see both cards. It's really common for players to think, OK, there are two cards left, he doesn't seem too aggressive, I'll probably get to see both cards for cheap, and then find out that their assumptions when calling the flop ended up being really bad and costing them EV. Very rarely does someone check the turn. Unless the turn is really scary-- like you obviously hit your draw, or it looks like you did-- no one is going to give you that card for free. Another thing to be careful about is don't overestimate how easy it is to extract additional chips. It's really, really obvious when someone hits a flush draw, because there aren't that many reasons people are going to call a bet on the flop when there are two clubs on it and then bet when another club hits on the turn. Flushes are really obvious, and everyone is keeping an eye on that. Straights are less obvious, because a lot of different boards can have a straight possible, so people can't really just assume that you have a straight whenever four cards that are near each other are out by the turn. And sets-- like when you have a pocket pair and you hit a third card of that rank on the turn-- are basically invisible. There's no way they can put you on that. So your implied odds for sets are huge, whereas your implied odds on flush draws are very, very small. In addition, on the other end, if you have a made hand, don't bet so little that you give them the odds to chase their draw.
Basically, most of your flop and turn bets should be like 2/3 of the pot, just to punish them if they want to chase their draw. OK, so that's it for implied odds. So let's move on to fold equity. So here's an example. So were you guys following what was going on in that hand? Basically, I had position pre-flop, so I made this call. Then on the flop, I had an open-ended straight draw. He bet small enough that I should call. Same thing on the turn. I think he checked behind me on the turn, and on the river he checks. Why? Why is he checking the river here? So he's checking because he's worried. He knows I'm drawing to something because I flat called, and look-- I could've been drawing to a flush, and he thinks I just hit it. So this is a perfect bluffing opportunity, because we are basically representing a flush. So the question is, how often does this have to work to be a good bet, versus just checking behind and losing nothing? With bluffing, if it's a bad bet, we're just going to lose money most of the time. So we have to figure out, what proportion of the time does this have to win to make it worth it? And that's what we're going to be looking at here. The concept that will give us the value of making this bet is called fold equity. So fold equity is the value that you're getting in a hand from the likelihood that the other player is going to fold. So with regard to fold equity, I'm saying your showdown value-- which is this acronym here-- is 0. You can't win at showdown, which is our situation there. If he calls us, we have definitely, definitely lost. The formula for this-- at least the EV formula-- is a variant of the normal EV formula that we always see. It's just the pot times your chance of winning-- i.e., his fold percentage-- minus the chance of losing times the bet. You lose that bet if you lose, but you're risking the bet to win the pot. If we have a chance to win after he calls, we can add another variable: instead of just losing this bet the proportion of the time he calls, when he calls we're going to get some amount of EV, which is still presumably going to be negative, but less negative than just losing the entire bet. So that's the basic formula for semi-bluffing here. So I'm defining a bluff as a bet that has positive expectation just because the fold equity is more than 0-- just this term, just the proportion of the pot that you expect to win from him folding, is greater than the weighted chance of you losing that bet. That's going to be called a bluff, an outright bluff. And I differentiate that from semi-bluffing, where this term is actually negative-- where, if you have a 0% chance of winning, it's actually a bad bet, because he calls you more often than makes it valuable. But a semi-bluff actually becomes positive expectation because of your showdown win percentage. Your showdown win percentage is sufficiently high to offset it, and this is where the value comes from: you have the opportunity to steal pots, but you also have the opportunity to draw to a winning hand. And that's why in tournaments this becomes something that you're going to be doing very often, because you're not always going to have made hands, but you're always going to have something that could become a made hand, and that becomes good enough. So how often does this have to work to be profitable? So I'm just going to give you a formula here. So we're betting $150 into a pot of $350, where we have no chance of winning if he calls.
Our EV, which is just taking it from that formula, is $350 times the chance he folds, minus $150, our bet, times the chance he calls. So we can set EV equal to 0 and then solve for the fold percentage to get this formula. We get $150 divided by the pot plus our bet. So this is our bet, because the idea is that we are putting $150 into that pot for a chance of winning that whole pot back. He won't add his $150 to the pot if we win it, so that's the idea there. So it's our bet divided by the pot after we bet, to give us our neutral-EV fold percentage. So that's the chance of him folding that makes this a good bet. So I think this is pretty cool. You can use this to determine what's a good bluff and what's a bad bluff by just asking, is he going to fold this more than roughly 1/3 of the time? And the EV calculation, using this formula to confirm that we reach a neutral EV, is just 30% times this $350, the pot, minus 70%, him calling, times our bet. That equals 0. And that's our quick check-- I like plugging this back into the EV formula just to make sure we manipulated the variables properly. So are we OK with this so far? Because we're going to move on to something a little bit more complicated. In this one, he bet $75 like he did before, but we are raising $150. Why? Why are we raising $150 here rather than just calling? Yeah, because we have an open-ended straight draw, where even if he calls, we still could win, and that fundamentally changes what we need to make this profitable. So here, our chance of winning is 16%-- 8 outs times 2, since we get to see just the river. So it's $150 into a pot of $350 where our win percent is 16%. This term is still our chance of taking the pot down uncontested, and then the 1 minus f, the chance that he calls, is multiplied by our marginal EV: 16% times winning $500-- the $350 pot plus $150 more, which I guess would be his call-- minus 84% times losing our $150 bet. Either way, we have a chance to win $500 or lose $150. One of the reasons fold equity is really hard to teach is that there's no real intuitive way to memorize this formula. So we could solve this-- I just plugged this into Wolfram Alpha, and I got that the neutral fold percent is just 12%. Compare: here we need this bluff to work 30% of the time, and here we need it to work only 12% of the time. That shows the value of the semi-bluff. So to check with EV, you win $350 12% of the time, and 88% of the time you have to deal with this. So that shows, at least intuitively, what the value is, but let's see if we can figure out exactly how important this win percentage is. So we're going to have to use calculus for this. So when we graph this formula, we see a clear trend, and it's intuitive: when your showdown win percentage goes up, the amount you need him to fold goes down. If you win 0% of the time, he needs to fold a lot, but then if you win some amount of the time, he only needs to fold a smaller amount of the time. So that's what this thing is saying. And there are a couple of interesting points on this graph which I want to point out. So what's this point here? It's our break-even fold percentage for having a 0 EV. How you read this is: if we have a 16% chance of winning-- we're drawing to an open-ended straight for one card-- and he folds more than 12% of the time, anywhere in this area, it's positive EV, and anywhere down here, it's not.
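Here is a small sketch, in Python rather than Wolfram Alpha, that reproduces both break-even numbers by solving EV = 0 for the fold percentage; the algebraic rearrangement is mine, but it follows directly from the formula above.

def breakeven_fold_bluff(bet, pot):
    # Pure bluff: 0 = f*pot - (1 - f)*bet  =>  f = bet / (pot + bet)
    return bet / (pot + bet)

def breakeven_fold_semibluff(bet, pot, win_pct, win_amount):
    # Semi-bluff: 0 = f*pot + (1 - f)*ev_called, where ev_called is what
    # the bet is worth when he calls: win the pot plus his call win_pct
    # of the time, lose the bet the rest of the time.
    ev_called = win_pct * win_amount - (1 - win_pct) * bet
    if ev_called >= 0:
        return 0.0  # profitable even if he never folds
    return -ev_called / (pot - ev_called)

print(breakeven_fold_bluff(150, 350))                 # 0.30
print(breakeven_fold_semibluff(150, 350, 0.16, 500))  # ~0.116, the 12%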
So that's how we're reading this graph here. So what about this point here? It's a complete bluff, because we have a 0% chance of winning, and you recognize this 30%. It's from all the way back here-- it's when we had a 0% chance of winning. So that's that point up there, which I think is pretty interesting, but it gets even cooler. OK, so that's what that is. It's our $150 divided by $150 plus $350, which is our formula for determining our break-even fold percentage for a complete bluff. That's our 30% here, but check this out. So what is this number? It's our pot-odds break even-- the win percentage we would need to call a bet of this size and be neutral EV. That's what this 23% is. It's our $150 divided by the pot after our call. What that means is, if he folds 0% of the time, it's as if he just bet and we have the option to call, and that makes it 0 EV. So this graph connects all of those variables for us, and that lets us derive something very interesting with regard to implied odds. We can figure out how the chance of improving impacts our required fold percentage by looking at this secant line and coming up with a good estimate. So let me work through this graph and talk through what we're seeing here, because I think this is really cool. So to be clear, this blue line is our neutral fold percentage, and this slope is the derivative-- how much of a bonus we get to fold percentage for every 1% of win rate. So for each additional out, each additional 2%, he needs to fold 3% less for us to break even there. That's what this is telling you. When you have a 10% chance of winning, you just reduce this amount by 15%-- you multiply it by the one-and-a-half slope. Although it undershoots by a little bit, it gives you a very, very close estimate for using these implied odds in real time. And then I went ahead and figured out, OK, so that's for a specific bet size. How does it work if we look at a much bigger bet or a much smaller bet? I found something really interesting. When the bet goes towards infinity, the partial derivative is 2. You only get as much of a bonus as 2 times your win percentage, so each additional out reduces your break-even fold percentage by something like 8%. And then when your bet approaches 0, you only get a 1% decrease per 1% of win rate. So these are our bounds. For a pot-size bet, your bonus is 1.5% per 1% of win rate, and regardless, you know your bonus is going to be between 1 and 2, at least in terms of the average across win percentages. That's what we discovered here. And what this is letting us do is create a quick rule that implements implied odds. So to go over exactly what these rules are, let's back up to a complete bluff. So our fold needed is just the bet divided by the pot and the bet combined. If you want to bet the exact size of the pot, which isn't that bad for a bluff, you only need it to work half the time. And then you can, if you want, scale linearly down to 0. You could just say, all right, if I bet half the pot, it has to work 25% of the time. It's a little bit off-- it's really like 33% of the time-- but it's not that bad. So this gives you a very easy way to determine when you should bluff or not, and obviously there's a bit of judgment, because you've got to figure out whether this is a reasonable number, but it gives you an idea that you don't need to win that bluff 80% of the time. And then when you actually have a chance to redraw to win, it becomes even more interesting.
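As a sketch of that quick rule-- my own rendering, with the slope left as a parameter since it ranges from 1 for tiny bets to 2 for huge ones:

def quick_breakeven_fold(bet, pot, win_pct, slope=1.5):
    # Start from the pure-bluff break-even, then knock off roughly 1.5
    # points of required fold percentage per point of showdown win
    # percentage (the pot-size-bet slope from the graph).
    return max(0.0, bet / (pot + bet) - slope * win_pct)

# Pure-bluff break-evens for common sizings, with the pot scaled to 1.
# Note the half-pot bet really needs 33%, not the linear guess of 25%.
for frac in (1/3, 1/2, 2/3, 1.0):
    print(round(frac / (1 + frac), 2))   # 0.25, 0.33, 0.40, 0.50

# A pot-size semi-bluff with 16% to get there:
print(quick_breakeven_fold(100, 100, 0.16))   # 0.50 - 0.24 = 0.26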
So in general, when you have a draw, your value is higher, because you still have a chance to win the hand. And in general, you're going to see that very rarely will people actually make complete bluffs, because they prefer having a chance of winning the hand at the end-- it materially makes your value better. So a simple assumption is just: for each 1% your showdown win percentage increases, you decrease your required fold percentage by 1.5%. And your required fold percentages are going to be much, much smaller-- they're going to be like 15% to 20-ish percent, somewhere in that range. So decreasing that by 5% actually makes you quite a bit more likely to have a positive expectation decision. And when we talk about pre-flop, which is going to be nothing but figuring out semi-bluffing opportunities, we're going to be heavily using this type of thing. So let's do some examples. OK, so what is going on here? So just to watch that again-- it looks like the villain raised something pre-flop. I had position, so I called. And then he showed weakness for three streets in a row. I don't know what he has, but it seems to be worth taking a stab at it. So then what proportion of the time does this bet have to work to make it a good bluff? So is this going to be a bluff or a semi-bluff? AUDIENCE: Bluff. PROFESSOR: Bluff. Why? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, I barely beat the board. I think my 10 high plays, but it's very close. So there's no way he can call this with a worse hand. So the question is, how often does this have to work to be valuable? Which is a very common question you might ask yourself. So do you remember how to figure this out? The formula is going to be this: it's just the bet divided by the pot plus the bet. So the difference between this and the pot odds formula is one bet. The pot odds formula is pot plus two bets, ours and his. This formula is just the pot plus our bet only, because he never adds his bet in. So to figure out how often this needs to work, let's just go to this one. We just do what? We take the size of our bet and divide it by this plus this number. So we add those together-- it's what? $625. So we'd just take $250 divided by $625, which is what? It's 40%. So this needs to work 40% of the time to be valuable. So it's actually not as good as I would have guessed-- this needs to work a pretty big amount of the time. And given that he's shown so much weakness, he's probably guessing that we're probably bluffing. But anyway, if he calls 25% of the time, does that make this a good bet or not? Yes, because that means he folds 75% of the time, which is more than our 40%. So that makes that a good bet. And just to plug into the EV formula, what is our value from this bluff if our estimate is right that he calls 25% of the time? It's about $200. So the pot is what? I think the pot is $400 here. So that makes sense to me: 75% of the time we're going to take down that pot, so it's worth about $200 to us. So that's it. Let's do another example. OK, so what's going on? Something happened on the flop, and then what are we doing here? AUDIENCE: [INAUDIBLE]. PROFESSOR: Exactly. So we're betting $450 into a pot of $775. So the question is, is this a good bet? Should we have done this? And we're going to be facing these decisions all throughout the tournament. So this one is going to be kind of complicated, but not really. Let's see what we can piece together for now. So what's our chance of winning this one at showdown? So we have a 16% chance of winning. AUDIENCE: [INAUDIBLE].
PROFESSOR: It's hard to-- I prefer not counting those-- you can apportion partial outs to whatever you think your real chance of winning is if you hit: say, that's worth 1/3 of an out. But in terms of being conservative and making this simple, let's just say we have to hit the straight to win, although you can consider yourself as having a little bit more equity if you say, maybe I'll win if I hit a 10 or something. So we're betting $450 into this pot of $775. So we know we have a 16% chance of winning this hand if we are called. If we had a 0% chance of winning the hand when called, what proportion of the time would we need him to fold to make this good? $450 divided by $1,225-- that $1,225 is $450 plus $775. So 37% is our break even if we have no chance of winning, but then we get a bonus for the 16% chance of us winning. A general estimate is going to be 1.5 times that, because we're making approximately a pot-size bet. We're making it a little bit smaller, so maybe this is overdoing it by a little bit, but it at least gives us an OK estimate. This might be a little low-- it might really be like 18%-- but we can't differentiate between a margin that small. So the 16% is our chance of winning, and we get a bonus proportional to that. I'm saying 1.5 times, which seems to be about in the ballpark, to give me a 13% break-even fold rate. So even if he calls 80% of the time, it's a good bet, and 80% is a huge amount considering-- I don't remember the situation-- he could potentially have nothing here. He definitely showed some sort of weakness, so it's totally reasonable that he won't call more than 80% of the time there. So we calculate our equity just based on the formula from earlier: our chance of taking the pot down uncontested, 20%, and then our 80% chance of being called, winning 16% of the time and losing 84% of the time, where we're winning the pot plus his bet, and we're losing our bet. OK, so let's jump to another live example. [VIDEO PLAYBACK] Junior world champion bowling and horseshoes for [INAUDIBLE], Foosball, and maybe poker. Sorry, what's your name? Mark. [INAUDIBLE] Mark already plays this final table [INAUDIBLE]. I've heard-- sorry. I have no idea. Listen, Billy is a world champion in another sport. PROFESSOR: That guy is pretty cool, too. What sport? [INAUDIBLE] Foosball. Yeah? Yeah. Well, that's pretty awesome, as well, huh? Yeah, that is awesome. PROFESSOR: This is considered very high quality banter by poker standards. [INAUDIBLE] Secret is out. Absolutely. Jacobson, pocket sevens. $650. Confirming with [INAUDIBLE]. Yeah, that's $650, and that's a raise. Politano, 10, trey suited. [INAUDIBLE] to Pappas now with ace, queen. I wonder if there are different surfaces of Foosball, like the French Open of Foosball, the Wimbledon of Foosball. And is there an ace, queen in Foosball? Yeah, right. PROFESSOR: The worst joke ever. Several brands of championship tables. Billy's a Tornado guy, by the way. Tornado. Billy, with ace, queen, re-raises to $1,425,000. The main event is a grind, but Billy Pappas says he doesn't get tired here because he's used to Foosball tournaments, which are 14 hours a day on your feet for several days. [? And ?] [INAUDIBLE] folds. The ace of hearts is exposed. Back to Jacobson.
Jacobson trying to become the first Swede to make the main event final table since Chris Bjorin in 1997. Bjorin is tied for sixth all time in World Series cashes. Bjorin and Jacobson both born in Sweden. Both moved to London. Jacobson made the call. We're heads up. King, jack, trey. Jacobson ahead for now with the sevens. Pappas picks up a Broadway draw. Jacobson checks. [END PLAYBACK] PROFESSOR: So let's take a look at what actually happened before we got to where we paused. So this guy's in position-- he's in the cutoff position. Jacobson raises. He re-pops with ace, queen in position. Newhouse throws out an ace for some reason. So Jacobson checks, and then he's going to make the standard bet. So the question is, is this a good bet? And something we can definitely figure out is what percentage of the time he has to fold to make this a good bet. If our showdown win percentage is 0, it's going to be $1800 divided by the pot plus $1800, his bet-- 33%. But he actually has a chance of winning-- he has an inside straight draw. If he hits a 10, he has the best possible straight. He gets there 8% of the time, reducing his break-even fold percentage by approximately 12%-- it's 8 times 1.5, which is 12. So 33 minus 12 is 21%. And then this is solving it out exactly. I gave him half outs for an ace-- maybe an ace wins half the time. It turns out that 21% is basically dead on. So let's see what happened. [VIDEO PLAYBACK] King, jack, trey. Jacobson ahead for now with the sevens. Pappas picks up a Broadway draw. Jacobson checks. Of course, Bruno Politano trying to become the first Brazilian to make a main event final table. [INAUDIBLE] What happened to the Canadians? Our record ten [INAUDIBLE] since 2013. Non grata Canadian. I think they got too cocky. And now Pappas comes out with a draw for $1.8. Pappas was rather aggressive earlier in the main event, again showing his aggressive side right now. Martin folds. Pappas will drag the pot. Now he sits just shy of $20 million. A world champ in two different games? It just could very well be. Billy Pappas makes good use of that scary board to take down the pot. [END PLAYBACK] PROFESSOR: OK, so that's a very common type of bet, which we'll talk about later. It's called a continuation bet. So he showed aggression pre-flop, and it's checked to him on the flop. It's almost always going to be the right move to bet again on the flop, because you're already indicating that you have a good hand, and then two face cards show up. It's reasonably likely that you're going to have at least top pair there, so it's uncommon for the other guy to try to push back against you, since presumably you have at least a pair of kings or jacks most of the time. Let's do some be-careful-abouts. This is a lot of stuff I've noticed from the more recent tournaments. So don't bet too little on a bluff. That makes it very obvious-- if you bet 1/3 of the pot, which is generally not common for normal players, it kind of screams that you're not too attached to the hand. And a 2/3-pot bet only really needs to work a small percentage of the time to be profitable. I get that no one likes to lose money on a bluff, but a 1/3-pot bet works much less of the time than a 2/3-pot bet, and you actually get much less value out of it. So bet enough. Bet like you had a normal hand. Bet enough that, if someone is drawing to something, they don't have the odds to make that call.
Alternatively, don't bet too much on a bluff-- and I'm giving pretty wide ranges here, so don't think that I'm contradicting myself. One of the biggest tells for a bluff is someone betting more than the pot, just because it means they didn't actually think through the numbers, and they're just thinking, I want to bet a lot so the other guy folds. But in general, don't bet too much, and by too much I mean more than the pot. And in addition, if you're short stacked, don't bluff an amount that, if he raised, you'd have to call anyway-- in which case, you should just bet all in there. So don't be afraid of getting caught bluffing. This is a reason people don't bluff live: they're afraid of showing down nothing. Don't worry about that. One of the best indications to me of someone being a good player is that they'll show down a bad hand and just be like, yep, that's how you play poker, and that'll be the end of it. So don't worry about it-- what you have when you bluff is completely immaterial. Having a really bad losing hand is no different than having a marginal losing hand. So don't be afraid of getting caught bluffing, especially live. People get embarrassed when they get caught bluffing. Don't worry about it. So semi-bluffing is great compared to bluffing, because you have a chance of winning the hand, but if you're in position, sometimes it's better just to take a free card. If he shows weakness and checks into you when you have an open-ended straight draw, in some cases it's just going to be right to check and get your free card. You have to compare your EV of checking to your EV of bluffing. And don't bluff calling stations, because a lot of your value from these guys will come from value betting. The only way you'll possibly lose to them is if you try to bluff them. You might be in a situation where you're ready to run over calling stations, but you don't have good cards, and bluffing is not the way to go. So don't do that. Let's wrap it up there. Thanks, everyone. [APPLAUSE]
PROFESSOR: Welcome back, everyone, to Poker Theory and Analytics. We're lucky today to have a guest speaker, Joel Freed, coming to talk to us about PokerTracker. As you know, we have a great partnership going with PokerTracker in this class. They sent along Joel Freed to teach us analytics. Joel is a VIP support director for Max Value Software, which is the parent company of PokerTracker. He has taught analytical techniques to some of the biggest names in the poker industry. And he's come by to teach us that sort of thing, also. So with that, I'm going to pass it along to Joel. JOEL FREED: Thanks. I hope you all have had a chance to install PokerTracker by now. PokerTracker is the industry-leading analysis and tracking software for online poker players. We've been around since 2001, so we've been able to grow as the poker economy has grown. It started out as software only for people who played limit hold 'em, and it has exploded. We do Omaha now-- obviously, no-limit and pot-limit hold 'em. And we have extensive tournament support, some of which we'll be talking about today. What PokerTracker does is help you identify and analyze similar decision points so you can improve your game. PokerTracker's not going to do it all for you, but it's going to help you find the spots where you can improve and make better decisions. So what do I mean when I say decision point? A decision point is any time in a hand where you can take some action. You can check. You can bet. You can call. You can raise. Or you can fold. Any point in the hand where you can do that, I'm going to call a decision point. And when you play poker-- after you've played for a while, or if you've already played for a while-- you'll realize you naturally remember similar decision points. If you're the short stack in a tournament, and it's folded to you in the small blind, you're going to be able to lump all of those kinds of decisions together, so that when you face that decision the next time around, you already have some kind of history to build on. And using PokerTracker to analyze the interesting decision points is a very effective way to improve your game, because the next time you come to a similar decision, you will already have some memory of what you believed to be right the last time you were doing something like this. So what makes a decision point interesting? It's interesting when you aren't sure what the right answer is. And that may seem kind of obvious, but when the expectations of the outcomes are really close together, you're going to sit at the table and agonize over it. That's where you see people on television sitting there for minutes going, ah, I don't know what to do. Those are the interesting decision points. And as you start out, you will find that you really don't know what you're doing a lot of the time. All decision points are going to be interesting until you start to have some kind of heuristic, some kind of rubric, for when to call and when to fold. So here is a situation. We are in the big blind. It's 100/200 blinds. This is a tournament-- in this case, a single table tournament. Let's say you bought in for $10.
$50 goes to first place, $30 goes to second place, and $20 goes to third place. So there are four players left here. You have a chip stack of 1,430 chips. And the cutoff here, this player, opened all in for 5,700 chips. The player on the button folded. And this player called for 2,980 chips. So these two guys are already all in. You have ace, queen of clubs-- so you have ace, queen suited. And you have 1,430 chips back in this spot. I want you to take a second, and I want you to think about this decision: in this spot, knowing nothing about these two players-- you don't really know too much about them right now; we'll get into that a little bit later-- they're just your average players in the game, would you be calling or folding? I want you to think about that for a second. So I'll point out another feature here, for those of you who are familiar with pot odds. Your odds here are 3.35 to 1 to call. If this was a cash game, and if you had 23% equity against both of those hands, you could call profitably. Obviously, this is a tournament situation, which affects things drastically right here. So the total pot size is 4,810. We have 1,430 to call. So if you believe that this is a very easy decision, raise your hand. One, two-- we've got a few. Awesome. You guys are right. If you think it's a call, I want you to raise your hand. One call. For the people who think it's close: if you think it's a call, raise your hand. One, two. Got a few. It's not even close. And it's a fold. Let me talk a little bit about what's going on in this view. This is from our ICM quiz. PokerTracker has a feature that lets you practice these end-of-tournament situations, which you can access through the Tools menu bar-- Tools, ICM, Quiz. And in this spot, what it's telling you is, if you push, based on an average player model for these two players, your equity in the tournament is 9.49%, based on the expectation of this hand against these two players when they're all in, for average opponents. That means that you could expect to make $9.49 with the prizes we've picked out. If you fold, however, you have an expectation of 18.67%. And the reason for this is that it's reasonably likely that the player with 2,980 chips is going to go out, and once that happens, you are guaranteed at least third place. So this is a clear fold. It's, in fact, even a clear fold if you have kings. And you can go further in the PokerTracker ICM tool. If you click the Results link that's right here, it will bring up the full math. I'm not going to go through the ICM math right now for you-- I know you're going to go through that later in the course. But what you can do is change their ranges, right here, by clicking these buttons. And so I've set them to a 100% range. That means these players are playing any two cards. So even if you knew before the hand that they pushed all in blind-- they didn't even look at their hands; either one of them could have any two cards-- it's still a fold, because your fold equity is still close to 19%, and your push equity is 15%, even though you're going to win half of all hands against two random cards. So before we talk a little bit more about how to use PokerTracker to analyze your game and see where you can improve, I wanted to talk a little bit about how you can use PokerTracker and not get better at poker, because there's some great and interesting stuff in PokerTracker that you can spend lots of time looking at that will not help you at all in making a better decision.
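For the curious, the ICM calculation behind tools like this quiz is usually the standard Malmuth-Harville model; here is a rough sketch of it. The fourth player's stack below is invented for illustration (the example only specifies the other three), so the output shows the mechanics rather than reproducing the quiz's exact 9.49% and 18.67%, which also fold in the all-in hand equities across outcomes.

def icm_equity(stacks, payouts):
    # Malmuth-Harville ICM: each player's chance of finishing first is
    # stack / total chips; conditional on who takes first, the later
    # prizes are distributed recursively among the players left.
    def ev(remaining, place):
        out = {i: 0.0 for i in remaining}
        if place >= len(payouts) or not remaining:
            return out
        total = sum(stacks[i] for i in remaining)
        for i in remaining:
            p = stacks[i] / total
            out[i] += p * payouts[place]
            for j, v in ev(tuple(k for k in remaining if k != i),
                           place + 1).items():
                out[j] += p * v
        return out
    return ev(tuple(range(len(stacks))), 0)

# Hero's 1,430 and the 5,700 / 2,980 stacks are from the example; the
# button's 3,390 is a placeholder. Payouts are the $50/$30/$20 quoted.
print(icm_equity([1430, 5700, 2980, 3390], [50, 30, 20]))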
So, looking at graphs. People love looking at their results graphs. And here's a nice results graph. You started off at hand one. You had a nice run here. You went steady for a little while. You won a few big hands around hand 183,000. And then you end up about plus 860,000 euros. Since poker is a series of decision points, the question is, which decisions would you make differently based on this graph? And the answer is absolutely none of them. Knowing that you did this well is not going to help you make better decisions in the future. It may allow you to buy a house in the Boston area, but it will not help you actually play better poker. Another way you will not get better at poker by using PokerTracker is by looking at hands which we call walks. A walk is when you're in the big blind and everyone folds to you. You win the small blind, which is nice, but you didn't do anything. There's no way in which you could make a better play in that hand. And I know a lot of heads-up sit-and-go players, who play two-player tournaments, like to look at those kinds of spots, and they want to know, how much am I winning there? And the answer is, it doesn't matter, because you can't change your play based on who folds to you. Another thing that people want to do with PokerTracker that won't help them get better at poker is analyzing luck. We have several tools for luck in PokerTracker. This graph is from the cash game side. You'll notice the normal curve-- this is actually normalized. So what you're seeing here is each dot tells you how often you're hitting your draws relative to expectations. So this player is more than one standard deviation above the mean at flopping three of a kind when he holds a pocket pair, which we call flopping a set. And that's fantastic. He's probably making lots of money when he has these pocket pairs, because a set is a very powerful hand. However, he's not going to be able to change the way he plays by knowing that he's been lucky in the past, because he may or may not continue to be lucky in the future. I also have a friend who spent lots of time building lots of reports to see if he was getting dealt aces more regularly than average. And while it's great to be dealt aces more regularly than average-- and you will make lots more money if you do-- knowing what happened last week will not help you the next time you're sitting at the poker table. The last thing I'm going to say about PokerTracker-- and this is a little bit trickier-- is that there are lots of statistics and lots of numbers in PokerTracker. I think that we have easily over 1,000 different statistics you can look at, especially when you consider combinations with position. And if you add stack size, it's in the thousands for sure. And the problem is, some are not relevant to the spot you're looking at, and some will lack a sufficient sample. So I'm going to use this hand to illustrate both of those points. This is a cash game hand from a real poker site from about four years ago. The player, hero, here was on the button and was dealt ace of spades, 10 of hearts. So he had ace, 10 offsuit. Before the flop, villain16 made a raise, and hero called. The flop was two of spades, four of spades, three of diamonds. villain16 made a bet, and hero made a call. I'm not going to go into whether or not that was a good play-- there are reasons for it, there are reasons against it-- but for the purposes of this hand, it's important to note that that happened.
On the turn-- the 10 of diamonds-- villain16 also bet, and hero also called. The river was the ace of hearts. So now we have two pair-- top two pair, in fact. And villain16 makes another bet. The bet was 1,550 British pounds, and the pot was 1,975 pounds. So we're sitting here, getting 2.27 to 1 odds. So if we call and we're ahead 31% of the time or more, it's a good call. We can fold. Our stack is 3,350, so we could also make a raise to like 1,700, 1,800. Now, we have lots of stats here on the table. I'm not going to go through them all. This is our heads-up display-- we're looking, right now, at the PokerTracker replayer. The important one to note for this purpose is this red number here-- this 97. This is the number of hands of data we have on this player. So if you were sitting at a casino, where you get about 30 hands an hour, this would be the equivalent of about three hours of live play against somebody. Online, it's more like an hour and a half, because online hands tend to come a bit faster. So we've got some data. It's not a huge sample. VP is the VPIP number that was talked about last time. That's the percentage of hands he's playing. So he's been playing about 2 out of 3 hands-- he's been in a lot of pots. So we know that about this guy. And PR is preflop raise. That's how often he's coming in for a raise, or raising at some point in the hand, and that's 45%. So 2/3 of the time-- because 45 over 65 is 2/3, roughly-- he's making raises. So he's aggressive, and he's playing lots of hands. And here we're in this spot, facing a big bet on the river. So we've got his river stats. If you click on the HUD, you're going to get this pop-up. There's a bunch of tabs here: Tools, Preflop, Flop, Turn, River. Since this is a river spot, I've just got the River tab open-- I didn't want to overwhelm you guys yet. And we can see his bet stats. So on the river, in our entire sample, he's had eight chances to bet the river, and he bet three of those chances. And I'm going to tell you that that number is completely and totally irrelevant to this situation, because we can also see this number for cbet. Now, cbet is a poker term. It's called a continuation bet. That means that a player has been aggressive the entire hand up till now. So for a cbet on the flop: he was the last raiser preflop, he gets a chance to open the action on the flop, and he does. That's a flop continuation bet. A turn continuation bet: he made a flop cbet, he now has a chance to bet on the turn, and he makes a turn bet. That's a turn cbet. A river cbet: he makes a turn cbet, and now he has a chance to bet the river. And we have never, not once, seen him get a chance to make a river cbet. So if we tried to use this 38% here, and we said, well, he only bets the river 38% of the time, so he must have a really big hand here, we would be basing it on wrong information, because he could have gotten to the river in any way for these bets to count. He could have been calling down in position, and then it was checked to him-- that would count. He could have been raising. He could have raised preflop, checked the flop, checked the turn-- that would count. Any combination. And his hand strength is going to be vastly different those times he has bet the entire way than those times where he's done other things. So since we have a sample of zero cbets here, these river stats are not actually that useful for analyzing this spot.
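A quick sketch confirming the two numbers quoted at the top of this hand (the variable names are mine):

bet, pot = 1550, 1975          # his river bet into the pot before it
print(round((pot + bet) / bet, 2))       # 2.27 -- the odds we're getting
print(round(bet / (pot + 2 * bet), 3))   # 0.305 -- call if we win ~31%+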
You're much better off looking at the board and trying to figure out, based on his preflop numbers and his flop numbers, what kind of hand gets here. I could talk about this hand a lot more, but that's our PowerPoint here. Now that we've talked about ways that you can not get better at poker using PokerTracker, let's get to the interesting stuff. How do you get better at poker using PokerTracker 4? And I'm going to say it's a five-step process. Step one: use PokerTracker's reports and filters to look at very specific kinds of decision points. Find those times you find interesting-- those times where you don't know the answer. Those are the ones you should be looking at, because those are the ones that are going to help you get better at poker. The next step is to create mental models of the players in a specific situation. That means you should have an idea of what you think those players are doing here. Now, even though you're looking at similar kinds of decision points, you're not always going to have similar kinds of players in those hands. If you look at all the times you were facing a river continuation bet, sometimes you'll be against aggressive players and sometimes against passive players. You're going to have times where the river made a draw come in. You're going to have times where the river paired the board. All of these are slightly different-- not necessarily different enough that you wouldn't want to lump them together for step one, but different enough that you definitely need to think about the players in this specific situation and try to get an idea of what this player is doing right here. This is exactly what you would do at a live table when you're sitting across the table from somebody, trying to figure out what is going through his mind right now. Then you adjust the model that you've just built based on any relevant statistics that you do happen to have-- if you have notes on the player, or anything else. If you know that this guy lost a big hand two hands ago, he might be, as it's called in the poker world, steaming: he is really mad, and he is just going to be way more aggressive right now. That's relevant information. If you're sitting in a casino and you know that guy has just finished his fifth beer and he's slurring his words a little bit, that's relevant information. Anything that you can do to adjust the model that you've built to be more relevant, that's good. Then you evaluate your different decision options. So if you're in a spot where, let's say, you can either call or fold, you have to think about what kind of hand he has. Will I win if I make a call here? If I fold, obviously, I'm out of the hand. For tournaments: what chip stack will I have remaining? How does this affect everyone else's standing in the tournament? There are a lot of considerations for tournaments that affect, especially, your preflop decisions. And then, once you've done all four of these things, go to step one and do it again. Continue to do this over and over again, every time you have an interesting kind of decision, and you will find that you are able to make better decisions, because you understand the different decision points that you're facing. So let's talk about how to navigate PokerTracker 4 a little bit. For those of you who have it installed and have your computer here, feel free to open up PokerTracker and follow along with me. So what I'm calling a report is any kind of way in which PokerTracker is showing you data.
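Since stats like VPIP, PFR, and the cbet numbers keep coming up, here is a toy sketch of how a tracker derives them. The record format below is invented for illustration and is not PokerTracker's actual schema; the point carried over from the villain16 hand is that every stat is actions divided by opportunities, so a stat with zero opportunities tells you nothing no matter how many raw hands you have.

hands = [  # one invented record per hand for a single player
    {"vpip": True,  "pfr": True,  "river_cbet_opp": False, "river_cbet": False},
    {"vpip": True,  "pfr": False, "river_cbet_opp": False, "river_cbet": False},
    {"vpip": False, "pfr": False, "river_cbet_opp": False, "river_cbet": False},
]

def stat(action_key, opp_key=None):
    # Every tracker stat is actions / opportunities. For VPIP and PFR,
    # the opportunity is simply being dealt into the hand.
    opps = [h for h in hands if opp_key is None or h[opp_key]]
    if not opps:
        return None   # zero sample, like villain16's river cbet
    return sum(h[action_key] for h in opps) / len(opps)

print(stat("vpip"))                          # 0.67 -- played 2 of 3 hands
print(stat("river_cbet", "river_cbet_opp"))  # None -- no data, ignore it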
On the top left here is this Community button, which launches our community page-- you can look at our forums, you can download custom stats, and all sorts of other fun stuff. Play Poker: this is where you go when you want to actually play. So for those of you who haven't actually done any importing from any of the tournaments yet, when you want to play, you go to Play Poker and you click the Get Hands While Playing button. And View Stats here-- this is where all of your information is going to be displayed. T is for tournament, which is going to be all you guys are interested in. And we have four options: Results, Statistics, My Reports, and Graphs. And I'm not going to talk about Graphs today at all, just because there really isn't enough time. So there's this left-hand sidebar here. Some people like closing it to make things bigger, and you should not forget that it is there-- a lot of the navigation options are in that sidebar. And there is a huge amount of value in being able to change reports. So right now, we're looking at the overview report. It's got a nice graph. You can change to different kinds of graphs. You can show your ROI, your ITM. ROI is return on investment. ITM is in-the-money percentage. You can show those on this graph, too. You can see your results-- how much you've won, how many tournaments you've played. And down here-- so this is a report. Right now, we're looking at the basic By Description report, which is showing you one row per description of tournament type. For you guys, honestly, just looking at By Tournament is probably going to be OK. That will give you one row per individual tournament. You're not going to have so much data that it's going to be hard to group things together, and most of your tournaments are relatively similar. If you choose Advanced rather than Basic, all it does is give you more stats. So you might want to spend some time looking at what these numbers mean in Basic before you flip to Advanced. And it's also important to know that you can change any report from this dropdown-- I'll talk about what the different ones are in a little bit. But the Overview report has one really cool feature that is not obvious, and that is you can double-click to get more detail. So if you want to know more about these 392 tournaments-- because those are the ones you're going to start playing tomorrow, and those are the ones that are really interesting-- then you double-click. And now you see each one of those 392 tournaments, one tournament per row. And you get to see how much you won in that tournament, how long it was, your finish position, all kinds of stuff. And it does say here on the side, double-click a row for more details. And you can go back by choosing Back By Description, or by choosing Remove All Filters and Return to Root. If that's not enough, you can then double-click the individual tournament, and it will show you the hands, one hand at a time. So you're going to get a row for each hand as the tournament went through. It will show you the most recent 100 by default. If the tournament ran more than 100 hands, you can feel free to change that-- we just have that set so that it doesn't choke too hard if you have a 1,500-hand tournament or something. You can also sort by clicking any of the column headers, just like any other reporting software. So you've got one hand per row here. You can also right-click, which has a lot of useful features-- it's a context menu. And you can use that to add or remove statistics from your report.
So when you get into your VPIP and your PFR and those kinds of things, you might want to add or remove a bunch of different custom stats. You can do that with Configure Report here. But you can also replay-- this is how you'd replay hands. If you want to export videos and put them on YouTube directly, you can just right-click, choose Export Video, put it on YouTube, post the YouTube video on Facebook, and everyone can see your awesome play, which is always fun. And if you want to look at multiple hands at a time, you can use Control- or Shift-click. Control-click will highlight one extra individual row, and Shift-click will highlight a range: if you click here, hold Shift, and click here, it'll highlight everything in between. And then you could say Replay Hand, or you could say Replay All Hands in Report, and it will load everything up in the replayer. This would be the way to go if you wanted to replay an entire tournament. You had such a great tournament, you really want to watch it again right now-- you can just replay all the hands in the report, and presto, you can just click Play and sit back, and the whole tournament will play through. So another report on the statistics side that I wanted to highlight for you is Summary. And the reason Summary is interesting and useful is that it allows you to do different kinds of grouping. In particular, for you, I think starting hands hold 'em, as shown here on the right, and position will be your most useful ones. So what happens in starting hands hold 'em here is you have one row per hand type. So you can see here, we had aces 131 times. This is how many big blinds we won, adjusted for luck-- for all-in equity. The VPIP is 100%. Congratulations-- if you did not know it, you will almost certainly voluntarily put money in the pot 100% of the time when you get dealt aces. If this number is not 100%, I recommend going back and rethinking your decision process, because aces is the best hand in Texas hold 'em, for certain, and nobody will say anything else about that. You can see different rows for all the different types. And that is going to be the best way to see-- if you feel like, I'm playing some of these suited connectors a little-- maybe it's too much, maybe I'm not sure. Am I playing jack, ten suited too much? Am I playing king, queen offsuit too much? If you're not sure, come here, and you can start looking at the hands, because whatever row you select here, the hands from that row will display at the bottom. And it works just like the other hand report: you can replay them, you can double-click for more information on one-- they're all right here. And position is in Groups. I don't have an image of it, but it gives you one row per position. And when I say position, I mean if you're in the big blind, if you're in the small blind, if you're on the button. So if you feel like, someone is beating up on my big blind-- man, I sit there, and every time I get my blinds raised, and I have to fold, and I hate it. Can I play back? Or am I doing it right, and it just feels wrong to me? Because sometimes your memory of the situation isn't really the truth. You can come in and load up the position report, and you'll be able to see exactly what's happening. Another really awesome report is the Hold 'em Hand Range Visualizer. This is in Statistics as well. And it looks like a lot, but it's not actually as crazy as it looks. So first off, over here, we have various different statistics.
So when you choose a statistic, you're getting information based on your values in this spot. So let me talk about what three-betting is. Three-betting means someone has made a first raise-- that's the two-bet-- and then you made the second raise. And we're talking about preflop only right now. So you made the three-bet. So someone raised and you reraised. That's all we know about these hands right now, but we're looking at only those spots, right now, in this report. And you can look at all kinds of different spots. And this little wrench here lets you configure-- if you want to put different stats here, you can do that, too. Right now, we're looking at range. So here it says Range and Value. When I say range, I mean these are the hands that you have actually done this with. These numbers here are percentages, and each one tells you what percentage of all the hands you have made a three-bet with this holding comprises. Let's look at pocket 10s. This player made a reraise with pocket 10s-- of the hands he made a reraise with, 2.244% were pocket 10s. So you can think about this as: if I made this raise, and someone else was against me, what could they expect to see? Almost 4% of the time, they'd expect to see ace, queen offsuit. 4.28% of the time, they'd expect to see ace, king. 2 and 1/2% of the time, aces. So this is what your actual range looks like. And in poker, we use range as the term for-- if you think of the domain as all hole cards-- so those are all the hole cards you could be playing-- your heuristics, your mental processes for playing poker, are the function that takes a hand from the domain and puts it in the range. So this is your range. You can change this from Range to Value, and this is where it gets cool. This is the percentage of the time that this player made a three-bet, given that he had a chance to, with each of these hands. And so you're going to notice something very different right away. We have a whole lot of 100s here. This player has made a three-bet 100% of the time with ace, king suited; ace, queen suited; ace, jack suited; ace, 10 suited; and ace, nine suited. Every time he had those hands and he had a chance to make a reraise, he did it. Every single time. But if you look at the actual percentages, they are not the same: 0.84, 0.931, 1.49. So you can also tell that he got dealt ace, jack suited in these kinds of spots a little bit more, because he's still doing it 100% of the time. So this report is fantastic for helping you figure out, first of all, what would an opponent be seeing me do-- what am I looking like? And then, well, what am I actually doing when I get my hands in these spots? If you think, you know what, I should never be making reraises with king, jack offsuit, and you come here and you go, well, I've been doing it 75% of the time, you immediately know something that you can use the next time you have king, jack offsuit in that situation to make a different play. That will let you change your poker. And again, you can pick any of our statistics from here and look at them. Anything that shows percentage-wise will work in this report. Another really important report for tournament play in particular is Facing Preflop Action. So again, we're in the Statistics section, and it's the Facing Preflop Action report. What you're seeing here is one row per kind of situation you could be in for your very first action before the flop. You get dealt your cards, stuff happens before you, and now it's your first decision. Well, the question is, what happened until then?
If it's an unopened pot, that means everybody folded to you. So it's a similar kind of situation. Here, we can see we have 10,434 hands where this player was the first to be able to open the pot. You can see the winning percentage, and you can see their VPIP and PFR. So again, they're playing a little more than 70% of hands in that spot, and raising 60%-- a little more. But you can see one limper. Now, a limper means someone just called the big blind-- it's a technical term. It means there's one person who just called the big blind, and it's on them. We have 1,945 hands for that. You can see how their VPIP changes-- suddenly, they're not putting in nearly as much money. Part of the reason for this is that this sample is based on heads-up play. When you check in the big blind, it's not considered voluntarily putting money in, because you haven't put any more money in. And you can only VPIP before the flop-- after the flop is a totally different animal. So VPIP is preflop only, and that's a good thing to keep in mind. So that's why you see this kind of big drop. But you can already see how, looking at this report, I can tell you something about this player's play: they check a lot when it's limped to them, and they're in the big blind, and it's heads-up play. So you can use these, and, of course, this works like the other reports-- all of your hands are down here. So you can replay these hands. You can do other filtering in addition to this. And it will help you figure out what your play is in different first-action spots. And in tournament play, this is going to be huge, because when you're facing an all in before you get a chance to act, your range is going to be very different-- you're going to want to make different decisions than you will if everyone folds to you, because then you're able to steal the blinds a lot more liberally. And you're going to see, later in the course, how aggressive you can actually be as your chip stack gets small. So being able to look at these different kinds of spots will help you fine-tune your game and look at what you've actually been doing in those situations. So I've mentioned lots of reports. Do you need more than that? I'm glad you do, because you can create custom reports in the My Reports section. In here, you can choose the type of report, how to group the report, and how to show exactly what you want to see. Any of our stats can be added to these reports. There are three kinds. A Player report starts out by looking at hands that you've played-- you're looking at groups of hands. So a stat like VPIP looks at groups of hands, because it's how often you voluntarily put money in the pot over a sample of hands. That's in contrast with a Hand report, which is going to show you one row per hand. So if you wanted to make a report that was going to show you all hands where you faced a river continuation bet, you could do that as a Hand report, and save it, and load it up really quickly next time. An All Players report is like a Player report, except it's not just for you-- it's for everybody in your database. You're going to get one row per player, and you can put whatever stats you want on it. For Player reports, we also have lots of different groupings, some of which you will not see anywhere else by default in PokerTracker.
And I want to specifically highlight preflop stack size for you guys, because, as tournament players, this kind of custom report has a lot of value for you-- preflop stack size is going to be the determining factor in a lot of situations in your tournament play. So let's see what that report looks like. You get one row per different stack. You get different ranges. These are in big blinds. They're not in M. You get a pretty nice range. And you can roughly convert to M by multiplying by 2/3, because you're also going to be dividing by the small blind. So you can see here, this player had a stack of less than two big blinds 59 times. And this is how often they raised when it was folded to them. And you can see it's 47%. So you can see, in this spot, how someone's raising range is going to change based on their stack size. And you can see we have different samples for all the different stack sizes. Based on the tournaments you guys are actually playing, you're going to be seeing numbers much more in this area, and a lot less in this area, because you need to be really, really deep to have 100 big blinds. And in a turbo tournament, it just really doesn't happen that much. But, in addition to being able to just look at this, you can then come in here to this blue Filters link-- it will be blue for you until you've clicked it once; it's gray for me here. And then you can make individual filters, and you can save them with this report. So you could come in here, and you could take this report, and filter for times you're facing an all in. And you would then see times you were facing an all in by stack size, and see what your stats are. And you could save that. And it would show up in this dropdown. And you could look at it later. And you wouldn't need to redo everything. And it will be really helpful for analyzing your own play. So I've talked a lot about filtering-- how do you do that? This button here, More Filters, is the gateway to all of the filtering everywhere. And other than doing specific stuff for custom reports, this is the place where you're going to go if you want to see specific subsets of your data. So when you click it, you're going to get this window. There are five sections over here: Game Details, Hand Details, Hand Values, Board Texture, and Actions and Opportunities. And each one of these has different kinds of filters. And we tried to break it down as intuitively as possible. So Game Details is all the stuff for filtering at the game level. What day did I play it on; what currency was it in; what was the speed; was it a turbo; was it a hyper turbo; was it a super turbo; what was the buy-in; what was its description; what was the table size; were there six players at the table or 10 players at the table. You can also filter for specific blind levels. So if you wanted to look at hands at the 100/200 blind level, that would be there. You navigate this by choosing your section on the left-hand side. Anytime you see the greater-than symbol, you can click. So any of these options here, you can click. And you'll get the individual filters you can turn on and off. And I'll show you a few of those later. When you go down, you can click up here-- where it says Cancel right now-- to go back up to the previous section. And when you've turned on everything you want, you click Add to Filters. And then it adds them all together. Here's Hand Details. Hand Details is filters about the whole hand altogether.
So what was the maximum preflop raise that occurred? Was there a three-bet at all? If you want all hands where someone made a three-bet, you'd come here, which can be great if you want to look at times when you made a raise and someone made a reraise, but you don't really care who. You just want to look at all three-bet hands. You can come in here. If you want to look at hands where there was limping, you can turn that on. If you want to look at hands by pot size or stack depth-- so you want to look at hands where you had a certain M-- you can come in there. How many players were at the table-- if you want to look at bubble hands, you'd look there. What was the position of different players? Who made the first raise? Where was I sitting? All that's in player position. This is mostly for custom reports and notes. If you made tags on a hand-- you can tag hands in the right-click menu-- you can filter on that there. How much you won or lost in the hand; how much you contributed to the pot; did the hand go to showdown. There are tons of options. Hand Values is another way you're going to want to look at hands. And that is your hole cards, your hand strength, and your draw strength. So let's go through those. This is what the hole card filtering looks like. You've got the chart of all the different hole card options. So there are several ways you could do it. If you want, you can just click on individual hole cards. So if you just wanted to pick, say, pocket 10s, you click right there, Add to Filters, you're done. You only see data from pocket 10s. If you wanted to see the top 15% of hands, you can come down to this slider, and you can slide it over to 15%. My top 15% of hands and your top 15% of hands may be different. That's why we have this Model option. So we're using, by default, the Sklansky-Chubukov model, which was invented by them in response to a very specific math problem, which I think you guys will talk about later in the course. We use that as the default model to rank hands. But if you don't like that, you can actually go in, and you can make your own custom model. You can say, you know what, I want three, two suited to be the best hand in the deck because I love it. You can do that. You can put that at the top. We also have hand versus three randoms as a default model. It ranks hands based on their equity against three random sets of hole cards. It's going to look very different. It's going to be a lot more skewed towards high cards. So you're going to see things like king, jack offsuit and queen, jack offsuit long before you see low pocket pairs. So that's going to make things radically different in terms of using the percentages. There's also this Group Select button, which will save you time. If you want to do any ace, you can just click Group Select, Any Ace. Or if you wanted to do any pair, Group Select, Any Pair. If you want to invert it, you can just click Group Select and Invert, and anything that's turned on will be turned off. So you have lots of options there. Once you set whatever you want, you just click Add to Filter. Hand Strength. So once you see the flop, you have some hand strength. So all of these do imply having some sort of hand. Did you have a high card? That means you don't have any pair. You just look at what your high card strength is. One pair, two pair, three of a kind. All the way down. There are multiple, multiple options in each of these. I'm just going to look at straights, just to save a little bit of time. So what you would pick is what street you made it on.
So if you wanted to look at only times you made the straight on the river, you could pick that. If you wanted to see any straight, on all three streets, you can just change that and turn it on. How many hole cards you used-- if you wanted both of your hole cards to be used to make the straight, you can turn that on right here. A nut straight means it's the best possible straight; a non-nut straight means a better straight could be made. So if you have ace, king, and the board is queen, jack, 10, you have the nut straight. No one can have a better straight. You have the absolute best straight. If you have nine, eight on that same board, you still have a straight-- 8, 9, 10, jack, queen-- but ace, king still makes a better straight than yours. So you would have the non-nut straight. A backdoor straight is when you don't have a straight until the river, and you didn't have a straight draw on the flop either. So if you had ace, jack with a king on the board, and it came queen, 10 to bust some opponent on the bubble, you just completed a backdoor straight and made somebody very unhappy. Draw Strength. So this is how you would look at, say, my straight draws or my flush draws. You pick which street you had it on, which can be the flop, the turn, or both-- you never have draws on the river; your hand is made or not. And we have the different kinds of draw options. So for straight draws, you can have either a draw to one card, which gives you four outs, or a draw to two cards, which gives you eight outs. And we let you pick here whether the outs are to the best possible straight or to some straight that's not the best, depending on whether it's open-ended or a double gutshot. Also, you could filter for backdoor straight draws, or hands where you didn't have any straight draw ever. So you can imagine, these can combine for lots of different things. Board Texture is how the cards are working together on the flop. If you wanted to look for a flop with any ace, you'd go into Board Cards, and you could pick ace on the flop. If you wanted to look for times the turn paired the board with the flop, you'd go into the board pairing on the turn section, and that one's there. If you wanted to look for times the flop was all three clubs-- so you want monotone-- you would go into board suits, and you'd pick all three cards of one suit for the flop. Board Connectedness is, like, 10, 9, 8 is considered a connected flop; 10, 9, 7 is a little less connected. So all those kinds of options are in there. You can be really specific about picking the situations you're filtering for. And Actions and Opportunities is the last section. But in some ways, it's almost the biggest, because these are actions you took, or opportunities you had, in hands you played. So this includes bets and sizing, raises, raises faced, raise sizing, calls, folds, opportunities-- you name it. So we're going to look at preflop just quickly so that you get an idea. If you want to look at hands where you voluntarily put money in the pot, turn that on. Posted blinds, raises, calls, folds, opportunities-- all of these are here to let you filter for times you made a three-bet, times you faced a three-bet, times you folded to a three-bet, times you had a chance to face an all in-- that would be in opportunities. Action Sequences: your specific actions. So your first action was a raise, your second action was a call. Let's bring those up. Bet Sizing: I want to look at times I made a three big blind open raise. We can bring those up. Actions and Counter with Sizes.
What someone else did before it came to me. So they made an open raise of three big blinds and it came to me. That would be in there. And we have those for flop, turn, and river, too. If that's not enough and you need more stuff, you can combine these. So you can highlight them. So this is what it looks like after you click Add to Filters. So we've added two. Sizes are calculated in big blinds. Actions Faced, two-bet preflop between two and three: that means that someone made a two to three big blind open before the flop, and we faced that raise. That's that filter. And this one is, we made a first raise preflop of between two and three big blinds. Now it should be obvious, these will never happen together. So if you leave these on with AND, and you click Save and Apply to All Filters, you will see no data anywhere, because that can never happen. What you might want, however, is to see either of these two kinds of spots. So you're looking for hands where someone made that two to three big blind open-- I want to see how that affects things on the turn, let's say, because those are the spots I want to look at now. That's with the OR select. So you'd use OR. This is inclusive OR. So any of the times that match either one of these two spots will come together. And when you click that, it says OR. And you get the two filters together. And you can also click Ungroup, once things are grouped together, and you can split it back up. You can use AND if you want to make nested, deeper logic. And NOT will negate things. So if you wanted to negate times you made a two-bet of that size, you could just click on that one and click NOT. It's not highlighted here because you can't NOT two things at once. And you would see all other times, all other situations. So as you can imagine, making a complicated situation is going to take you some time. And it's going to take you a little work to get used to it. That's why we added this Save As Quick Filter option. So once you've made it, you don't have to make it again, which is fantastic, because if you spend half an hour on one of these things, you don't want to click Clear Filters and never see it again. So you just click Save As Quick Filter, type your Quick Filter name, and click OK. And then you're done. It is now saved for the future. You will see it on a dropdown, which I'll show you in a couple of slides. And you will just be able to load that right up on any of your reports, instantly. On the other side, we have this Edit Quick Filters button. And when you click that, it shows any saved Quick Filters you have. When you click on one, it shows you what it is. And you can rename them, delete them. You can load them, which is awesome when you have multiples. If you want to put them together, you can load one and come over here and load the other one. You can append them together. You can make several pieces and then put them together really fast. We also have saved Quick Filters available for free download on our website. You would import those right here. If you wanted to export them and give them to your friends-- say you decide, let's all make a group of filters together; we'll split it up, and you make these, you make these, I'll make these, and then you give them all to each other-- great. You just export them, give them to each other, and it saves you a whole bunch of time. And this is the dropdown in the sidebar. So you see, once we've made this Quick Filter, you just pick it, click that, and you're done.
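Under the hood, combining filters with AND, OR, and NOT is just boolean logic over predicates. Here's a minimal sketch of how composable hand filters might look-- this is an illustration, not PokerTracker's actual internals, and the hand fields are made up:

```python
# Each filter is a function: hand -> bool. Hands here are plain dicts.
def faced_two_bet(hand):
    size = hand.get("faced_open_size")          # size of the open we faced, in bb
    return size is not None and 2 <= size <= 3

def made_two_bet(hand):
    size = hand.get("open_raise_size")          # size of our own open, in bb
    return size is not None and 2 <= size <= 3

def AND(*fs): return lambda h: all(f(h) for f in fs)
def OR(*fs):  return lambda h: any(f(h) for f in fs)   # inclusive OR
def NOT(f):   return lambda h: not f(h)

hands = [
    {"faced_open_size": 2.5},   # we faced a 2.5bb open
    {"open_raise_size": 3.0},   # we made a 3bb open ourselves
]

# AND of the two can never match -- you can't both face an open and make it:
print(sum(AND(faced_two_bet, made_two_bet)(h) for h in hands))   # 0
# OR keeps hands matching either spot:
print(sum(OR(faced_two_bet, made_two_bet)(h) for h in hands))    # 2
```

This is exactly why the two filters above show no data under AND but work fine under inclusive OR.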
Once you pick a saved Quick Filter like that, it applies to all of the reports on the other side. So I've talked a lot about stats. And I hope you're sitting there wondering, what are these things? How can I find out more about them? I want to know about them, but there's so much going on. So the way these work is, the vast majority of stats in our software will tell you what percentage of the time somebody did something, given that he had a chance to. So it's this very simple mathematical formula: how often did he do it, divided by how often he had a chance to do it, turned into a percentage. Done. How often did he actually three-bet, divided by how often he had a chance to three-bet, multiplied by 100. If it's 1%, then when he three-bets, he has a hand. So let's go through a few examples. VPIP, we've already talked about a bunch. So it's the percentage of the time a player chose to put money in before the flop. This is considered one of the staple stats because it really speaks to how much a player is involved in pots. If you've been sitting at a casino and you see a guy that's just splashing money in every hand-- he's playing every hand; he doesn't care-- his VPIP is very high. And it's easier to think about that player when you start building models in your mind of what players with a given VPIP do. And you can start to categorize players by VPIP numbers. So someone who has a VPIP of 40 is going to have a similar kind of play style to someone else who has a VPIP of 40, even if you don't know a whole lot else, because this stat will converge pretty quickly, since they will VPIP or not pretty much every hand. Raise First In is a little bit more restrictive, but I think it's one that's really useful for you guys in tournament play. It's the percentage of the time a player raises on their first action, when everyone has folded to them. So if everyone folds to you and you raise, you raised first in the pot. So if you see that at 40% for somebody, they are raising a ton. They're raising almost half of all hands. It's a huge number. And these can combine. These work together, right? So if someone has a VPIP of 70 and a Raise First In of two, they are calling all the time, but they are not raising ever. So this starts to give you an idea of what kind of player they are. They're the kind of player who puts money in, but they are not willing to commit to a raise. They don't want to bring the stakes higher. They just want to go along with whatever's happening. And some stats can be super crazy specific. So we've already talked about what three-bets are. We've already talked about what cbets are. There is a stat in our system called Fold To Raise After Flop Cbet in a Three-bet or Higher Pot. That's what the "3bet+" means: three-bet or higher. So let's break this one down. It means that we made a three-bet or higher before the flop. That is, we either three-bet, or we four-bet, or we five-bet, or we six-bet-- anything higher, as long as we made the last raise. So we made some last raise preflop that was at least a reraise. Then we made a cbet on the flop-- we had a chance to bet, and we bet-- they raised us, and we folded to that specific raise. You will not see this happening in your tournaments, because you are going to be too short to really get into these situations. In a cash game, this can be really useful, once you have a few thousand hands on somebody. But as you can imagine, if we have this, we have a lot of other really specific stats.
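That did-it-over-had-the-chance formula is simple enough to write down directly. A minimal sketch, with made-up sample numbers:

```python
def stat_pct(actions, opportunities):
    """Generic PokerTracker-style stat: actions / opportunities * 100."""
    if opportunities == 0:
        return None          # no sample yet -- better to show a blank than 0%
    return 100.0 * actions / opportunities

# A player who three-bet 3 times out of 270 chances:
print(stat_pct(3, 270))      # ~1.1 -- when he three-bets, he has a hand
# A VPIP of 40 over a 500-hand sample:
print(stat_pct(200, 500))    # 40.0
```

The zero-opportunity guard matters in practice: a stat with no opportunities in the sample is unknown, not 0%, which is also why small samples deserve skepticism.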
So if you find something you want to know more about, you can go through our stat list and find if there's something there that already does it. And this is one where you might not want to pay attention to it until you have a really good sample size on somebody. So how do you find the full stat list? If you click Configure from the menu bar and you choose Statistics, it's going to pop open this window, which is going to show you our entire stat list. So you guys are going to want to change to tournament. And there are player stats and hand stats for the different kinds of reports. And you choose Stats here. So we're looking at tournament player stats. You can see we have this list. We're just looking at the call stats here. You can see the scroll bar. So we have a lot. If you want to search for specific ones, you can just type-- if you type three-bet in here, you suddenly see only three-bet stats. So we're looking at a stat called Call Preflop Squeeze. If you were looking through the stats and you saw this, you might say, well, what is that situation? Well, that situation is described here in detail in the detailed description. A squeeze is when someone has made a raise and someone else has made a call, and then somebody three-bets both of them. That projects an image of a lot of strength. This is talked about in Harrington on Hold'em as a great way to make a move, because if someone makes an open raise for three big blinds and someone else calls, and you're sitting there with 10 big blinds and you shove all in, the first player needs to decide, what's the guy behind me who called going to do, and what's this guy doing? He's got to have a big hand if he's able to shove all in over both of us. So calling that raise is calling a squeeze. If you face a squeeze, and you call it, it will count for this stat. If the player who made the first raise folds, you still faced the squeeze if you called. So those are the two spots here. And you can see, we have the detailed description that explains exactly what it was. And the formula gives you the actual numbers we use. We don't say multiply by 100 here, just to keep it a little bit simpler. We have these detailed descriptions for every stat on the list. So if you are so inclined, you can spend several hours reading through our entire stat list. And I think you will probably be the first person who has ever actually done it, besides me. So, all right. We've talked a lot about opponents and what opponent stats can be. How do you find them? How do I look at opponent stats in PokerTracker? This is going to be important, because you need to figure out what these numbers mean for your various opponents. So the first way is in the Results section. We have a Player Summary report. We saw it in that dropdown earlier. When you choose this report, you're going to get one row per player-- and I blocked out the players' names for confidentiality reasons. But you can then see how many tournaments they played in, what their VPIP is, what their PFR is, what their three-bet is, how often their three-bets were successful. That is, how often they three-bet and everyone folded to it, which is really fun, because if you have a really high three-bet success, that means you can really three-bet with impunity, because everyone's folding. And it's great. So you pick up the open raise, you pick up the blinds, you can make lots of money. You can see this guy is good at picking spots. He's got 64%, 65%. This guy's terrible at picking spots. He's only been successful one time in three.
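Going back to the squeeze definition for a second: because it's so crisp-- a raise, at least one caller behind it, then a reraise-- detecting it from a preflop action sequence is straightforward. A minimal sketch (the action encoding here is made up for illustration, not PokerTracker's):

```python
def is_squeeze(preflop_actions):
    """True if some raise in the sequence is a squeeze: an initial
    raise, one or more callers behind it, then a reraise over both."""
    seen_raise = False
    callers_since_raise = 0
    for action in preflop_actions:
        if action == "raise":
            if seen_raise and callers_since_raise >= 1:
                return True              # reraise over raiser plus caller(s)
            seen_raise = True
            callers_since_raise = 0
        elif action == "call" and seen_raise:
            callers_since_raise += 1
    return False

print(is_squeeze(["fold", "raise", "call", "raise"]))   # True
print(is_squeeze(["raise", "raise"]))                   # False -- no caller in between
```

The caller in the middle is what makes it a squeeze rather than an ordinary three-bet, which is why the stat is tracked separately.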
With 17 tournaments, it's probably literally some number like that-- two in six, one in three. So you want to go deeper than that? I thought you did. We've got the Hero Versus Villain report. And what this does is, each row again shows one player, but it specifically targets the hands where you both put money in. So these are hands where you won money from him or he won money from you. It breaks them down by different sizes, so you can see if you won lots of big hands or small hands. So, like, we lost three reasonably sized pots to this guy. We don't have any big losses, which is nice in this report. We don't have any big wins either. We took three pretty good-sized pots from him. So if you highlight one of these rows, you're going to get the hands you were involved in with this person at the bottom. So if one of your friends has been playing in these tournaments and saying he's been beating up on you, you can go look in PokerTracker at the exact hands you've played together, and tell him he's full of it, because here are our hands, and I can see that you did not beat me up. Look, you three-bet me once and I had four, two. I'm clearly not calling your three-bet. That has nothing to do with you. So I'm sure you want to go even deeper than that, right? On the sidebar, we also have this player box. And you can choose a new player, and you can choose any of your opponents. And when you load them in here, what you're going to look at now is all of the data in all of the reports of PokerTracker-- everything we've been talking about-- from his perspective. So you are looking at all of the data through his eyes. And the only thing you need to really keep in mind when looking at data through another player's eyes is that you only get hole card data when he reaches showdown. If he doesn't reach showdown, you don't know what his hole cards are, because the hand history is not going to tell you. But for all hands that reach showdown, you have full information. You have all of our reports, all of our stats, all the filters. And if that is not enough for you, I don't know what will be. So let's look at an example. We're going to look at one example analysis. And hopefully, that will show you how all of this stuff works together. Let's say there's one situation that was really bugging us. We've been playing in these tournaments that are called Fifty50s. And the way these work is that they pay out the top five places. 10 people start, five people get paid. Once it gets down to five players, everyone gets their buy-in back. So if you bought in for $10, once you get in the top five, you definitely get your $10 back. And everybody who is in the top five has some amount of chips left. So the tournament takes the other half of the prize pool and gives it out to all those players, based on how many chips they have. So that's how a Fifty50 tournament works. And we want to look at times we're in the big blind, because we've been playing a lot of these; times we were six-handed, so we're on the bubble-- once one more player goes out, the tournament is over. That is the end of the tournament. And we want to look at times we have king, queen offsuit, because that hand's been bugging us. We don't need any other reason, but we have a real situation here. So you go into the sidebar. You pick your Fifty50 flag. That will limit it to Fifty50 tournaments. You go into Hand Details, Number of Players, players dealt in a hand, six to six. So now we're looking at six-handed play in Fifty50 tournaments.
Actions and Opportunities, Preflop, Post Big Blind-- bang-- add that to the filter. Now we're looking at big blind play in a Fifty50 tournament, six-handed. Hand Values: we just turn on king, queen. That's king, queen offsuit there. King, queen suited is on the other side. And we go into Actions and Opportunities, Preflop, and turn on Opportunities, and turn on Faced All In. We add those. We have all of our filters right here. And so now, we have picked out our situation entirely. And we can look at our spot. We have two hands in our database that match. It may have felt like you had a lot of data, but sometimes, when you go and you look at it, it turns out you don't have as much as you thought. And that's OK. We're still going to look at one of those hands. We've got two hands here. We'll look at the second one. We're going to replay it. We're going to see it in the PokerTracker replayer. So here is our spot. Let's take a moment to look at it. We're in the big blind, just like we said. We have king and a queen. Fold, fold, fold, fold. And this guy pushes all in. So we're facing an all in from him. Our chip stack is 1,955. His chip stack was 725. The blinds were 60 and 120. And there was a 15-chip ante. So there are 90 chips here from the antes. So we have to choose if we want to call his 605-chip bet, or we want to fold to it, with king, queen here in this spot. So I want you to take a second and think about whether you would call here or whether you would fold. Remember, if you call and you bust him, the tournament is over. You definitely win money, because he's out. You are in the top five, and you get more money based on your chip stack, plus your buy-in back. If you fold here, he picks up 210 chips. He now has 935 chips. He's still the short stack, and we keep playing. You have a king and a queen. I want you to just take a second and think about our situation. The first thing you should be thinking about, and I hope you were thinking about, right now, is what hands would he be playing to make this all-in raise. Obviously, he could have aces. Obviously, he could have kings. What other hands? How far down is he going to go when he builds this all-in range? And before we can make any adjustments based on stats, we have to have a baseline, because if we think the baseline player is going all in with any two cards here, it's going to radically change how we adjust based on his stats-- tighter than average relative to 100% is going to be very different from tighter than average relative to 10%. So think about what you think an unknown player would play. Start to build this model in your head. This is what all the big-name pros do. They are able to have a baseline model and then adjust based on individual characteristics. So think about that. Think about what kinds of hands he would play. Now let's add some stats. Now we're going to start refining our model of this player. We've got a baseline idea in our head of what kinds of hands he's going to play. Now let's look. We have 73 hands of data. His VPIP is 11. So in these 73 hands that this guy's been playing-- it's probably about two tournaments' worth, given the length of these tournaments-- he has put money in 11% of the time. One hand in nine, this guy is playing. That's it. Preflop raise, 1%. And I can tell you, because I did double-check: this means, of his 73 hands, he has raised preflop exactly one time. That's it. Only once ever.
In both tournaments we've seen him play, he's only ever raised once before. I hope that changes what hands you think this guy is going to play, at least a little bit. Now we can look a little bit deeper into his stats and try to see if there's any information there. Let's look at his small blind statistics. This is the Tools pop-up here. We're just looking at the top half. There's some stuff in the bottom, but it's all postflop stuff, so it's not relevant right now. So let's just look at small blind here. His VPIP from the small blind is 13%. So one time out of eight, he put money in from the small blind. He has never raised from the small blind-- ever, ever-- not once. So he has made no raises of any kind. He seems to call across all positions. So here are his calling stats. These are from the preflop section. You can see he calls from everywhere. He's called 4 out of 25 hands. He's called cold twice, which means calling without having put money in the pot. You can't call cold from the blinds; that's why these are zeros. He limps behind limpers 1 out of 7. So he's not attacking people who just called the big blind. He's not attacking people who are acting weak. He has folded to a blind steal twice. When he's in a blind, he's perfectly happy to give his blinds up. A blind steal is when you're in a blind and someone in the cutoff or the button-- the last two positions, who haven't put blind money in-- makes a raise. So when you're facing a raise from them, it's called facing a steal. He's faced that twice, and he folded both times. So you can start to piece all of this together and get an idea. But this sample is not huge. So think about what hands this specific player is likely to raise all in with, now that we have all of this information. Remember, his tournament life is on the line. If he is called, and he loses, he is out of the tournament; he wins absolutely nothing. But he's the short stack. He has an M of about 2.7. He could go less than three orbits before he is completely out of chips, just by folding. So he has to make a move at some point. He has to pick a hand and make a stand soon. So the fact that his tournament life is on the line is going to make him tighter, because he doesn't want to lose the tournament. The fact that he is so short, with such a short M, is going to make him looser. So these kind of pull against one another. And as you play more tournaments, you're going to find out how they balance. Do you think this guy is going to play low pocket pairs? Do you think he's going to pick deuces here, and be happy to take a coin flip for his tournament life? I see one person shaking his head. What about weak aces? Do you think ace, four offsuit is OK? Do you think he's willing to take his tournament life into his hands with just a really weak ace? What about king, queen? Could we have the same hand here? Would he make this shove with my hand? King, 10? Could we have him completely dominated? So that's another poker term: if you have the same card and your other card is better than theirs, you're dominating them. You're about a 70-30 favorite in that spot-- something like that. Queen, jack-- same kind of situation. If he's willing to do it with these hands, then we're in a really great spot, right? We have to decide what this player would open raise with. So now we're starting to build that model.
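Before layering ranges on top, it's worth pinning down the raw chip math of this call-- pure chip EV, ignoring the ICM considerations that come next. We already posted the 120 big blind; his 725 shove, our blind, and the 90 in antes are in the middle, and it costs us 605 more to call:

```python
# Chip counts from the hand above.
shove     = 725                 # villain's all-in from the small blind
our_blind = 120
antes     = 90                  # 6 players x 15
to_call   = shove - our_blind   # 605 more for us

pot_now   = shove + our_blind + antes   # 935 in the middle
final_pot = pot_now + to_call           # 1,540 if we call

# Equity we need for the call to break even in chips:
breakeven = to_call / final_pot
print(f"{breakeven:.1%}")   # ~39.3%
```

King, queen offsuit clears ~39% against a wide shoving range; whether it clears it against an 11%-VPIP player's shoves is exactly the question the range analysis below tries to answer.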
Remember, his VPIP is 11%, so let's start there. First, if you want to convert percentages to actual hands, the equity calculator in PokerTracker will help you with this, because it's kind of hard to know what 11% of hands is. Especially if you don't have experience with it, you're like, I don't know. So you can come into the equity calculator. And you can pop that open. And you can choose the Hand Range Selector. And then you can use the slider, and you can see what the model says. Again, we're still on Sklansky-Chubukov. You can play between the models. So this is 15%. So think about the hands that you've just decided this guy's willing to go all in with. Here's a 15% range according to Sklansky-Chubukov. Wider. Looser. So this goes down to ace, four suited; down to king, 10 suited; pairs down to threes, not twos; and down to ace, eight. That's 15% of all hands. Well, we said he'll play 11% of hands, overall, over 73 hands. So let's start by saying, any hand in the top 11%-- he'll play those. It's not going to be perfect. Our sample's not fantastic. We know this, but we're trying to get an idea of what's right to do here in this spot. So this is the top 11%. You don't have the bottom two pairs. You don't have any weak aces at all. Even ace, nine is considered a middling ace. So that's not too bad. He has no weak suited aces. That's an 11% range. That's a pretty strong range. So PokerTracker also has a built-in calculator that does ICM. ICM stands for the Independent Chip Model. This has been out there for a little while. And what the ICM does is, it takes tournament chips and converts them to real-world dollars, because tournament chips and real-world dollars don't have a one-to-one correlation. Your last tournament chip is worth a lot more than your thousandth tournament chip, if you have a 1,000-chip stack, because once you're out, you can win nothing, while that extra chip on top doesn't mean quite as much. So this model takes your chip stack relative to the total chips in play and gives you a result as an equity in the prize pool. It says, with this chip stack, if everything else were even-- ceteris paribus, nothing else to be considered-- this is what your equity in the prize pool should be, according to the model. This has been there in the replayer the entire time. I don't know if you noticed it, but if you click it, it will show you the results in this spot. So we're going to be looking at the time we faced this all in, and we will see whether the average model says we should push or fold in this spot with this hand. So who thinks we should call his all-in raise, if he was an average player? Everyone else, I presume, says fold. The default model says call. Everyone who raised your hand, you guys win. So you can click the blue Results link here, and you can see the full ICM tree. But the important thing to note right now is, you can look and see what your prize pool equity is, right here. If you push, you can see you would have, against the average model, a 15.83% equity in the prize pool. If you fold, 15.28%. Now keep in mind, this is just against the average model. This is not based on anything we've done specifically to analyze this player. But the good news is, we can change that. And we can see what the model's range was. So when we click Results, we're going to see what the model says the average player would raise all in with in this spot. And the default range is-- drum roll, please-- 33.9%.
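As an aside, the Independent Chip Model itself is compact enough to sketch. The standard version (the Malmuth-Harville formulation; I'm assuming PokerTracker's calculator does something equivalent, though its handling of specific tournament types may be more involved) says you take first place with probability proportional to your chips, then recurses over the remaining stacks for the later places. A minimal Python version, with made-up stacks and an illustrative payout ladder-- not the real Fifty50 structure, whose top-five payouts depend on chips:

```python
def icm_equity(stacks, payouts):
    """Malmuth-Harville ICM: expected prize money for each stack.

    stacks  -- chip counts, one per remaining player
    payouts -- prizes for 1st, 2nd, ... (may be shorter than stacks)
    """
    n = len(stacks)
    equity = [0.0] * n

    def assign(active, place, prob):
        if place >= len(payouts) or not active:
            return
        chips_left = sum(stacks[i] for i in active)
        for i in active:
            p = prob * stacks[i] / chips_left   # P(player i takes this place)
            equity[i] += p * payouts[place]
            assign([j for j in active if j != i], place + 1, p)

    assign(list(range(n)), 0, 1.0)
    return equity

# Hypothetical 6-handed bubble in a $10, 10-player tournament:
stacks  = [1955, 725, 2400, 1800, 1600, 1520]
payouts = [30, 25, 20, 15, 10]    # illustrative ladder only
for s, e in zip(stacks, icm_equity(stacks, payouts)):
    print(f"{s:>5} chips -> ${e:.2f}")
```

Comparing each player's ICM dollars before and after a hypothetical call is exactly how the push-equity and fold-equity numbers discussed next get produced.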
The default model says the average player will raise all in with an M of 2.7, when folded to in the small blind, with about 34% of hands. We built this model using some pretty robust data analysis. But as we've seen, different players play very differently. And this player is most certainly not average, because we put him on something much tighter than 33.9%. And you can see, push equity here and fold equity are not that different. We're only talking 0.6% of the prize pool. So if we're playing a $10 tournament, we're talking about a difference in expected result of about $0.60. It's not huge, but each little edge in poker adds up, one piece right on top of the other. And making these decisions in the marginal spots is really how you can expand your advantage and play better than your opponents. So let's change that. So you can click right here, once you've been looking at the model results. You can change his range. So this, by the way, is what 33% looks like. So the average player here is shoving as weak as queen, eight suited; king, two suited; king, six offsuit; any pair; any ace. That's a lot of hands. These are weak, weak hands. So we change it. We'll drag the slider down from 33.9% to 11%. Suddenly, now we're looking at a whole lot tighter range. With the updated range, who thinks it's a call now? Good, I'm glad nobody thinks it's a call, because it is, in fact, a fold. So by looking at this player, by doing analysis of his previous play, we were able to turn a close call into a close fold. And so in this spot, when he has this range, our push equity is 14.85% and our fold equity is 15 and 1/4%. And you can actually play with this. And you can keep changing and updating the range. And you can figure out where break-even is. So if he knew what our cards were-- if we turned them face up-- we can figure out what the game-theory-optimal pushing range for him is, right? When we turn our king, queen up, if he could play perfect poker, he would push a 24% range. That would be king, nine suited; any ace; any pair; queen, 10 suited; king, 10 offsuit; or better. If we think he is pushing less than that, it's definitely a fold for us. If he is pushing more than that, it's a call. So we have now established what a break-even range is for us facing a push from a short stack at the end of a tournament, in a tournament type that we play. So next time we're sitting in the big blind, and we face a shove from a short stack who's in the small blind, we have a better idea of what things to look at and how much to skew things based on his range. If you're starting to see 50%, 60%, and you're sitting with king, queen, you're in a great spot to call, assuming you're in an otherwise similar situation with your chips relative to the rest of the table. But in this spot, against this guy, it's a fold. So of course, in the actual tournament, hero called, because you always make the wrong play the first time you're in a situation. So hero called here with king, queen offsuit. And we get a lovely, wonderful flop. We've got the king, nine, two. So hurray, we hit our pair. This is great. Your heart starts racing. There's the turn. It's an offsuit seven. Fantastical. If he has two diamonds, he doesn't have a flush yet. None of the straights came in. Uh-oh. There's another diamond. So now, if he has two diamonds, he has a flush. One of the hands that was in his range was ace, king. If he has ace, king, he also beats us. So if he has queens, kings, aces, ace, king, or any two diamonds, he wins.
And of course, his actual hand was ace, five offsuit. I want to point out that ace, five offsuit was not in our range analysis for him. So our range was actually too tight. So in the future, when we're trying to do this kind of analysis, we can update our ranges by saying, hey, the last time I did an analysis of a guy that I thought was really tight, when he was super short, he was looser than I thought. You can add those extra few percent in when you're updating your ranges later. No worries about ace, king. Certainly no flush. He did have one diamond, though. So here are our takeaways. You should think about poker as a series of discrete decisions that you're going to make when you're playing. You can use PokerTracker's filter system to target those very specific decisions. As we've seen, you can filter for anything under the sun you would like. You can save filters for later. So target those. Pick the spots that bug you. Pick the spots that you're going to remember. Next time you have six, six, and you're out of position, and it's on the flop, and there's a king up there, and you made a raise preflop, and you're not sure whether you should bet or you should check, you can pick that spot and you can look at it. And when you check, call, and then the turn is a queen, you can decide, do I want to check or do I want to bet. You can pick these spots. And then when the river 10 comes, it just sucks to be you. You can use the available information to build a mental model of your opponents. Look, this is the way people play poker. You try to imagine what's going on in your opponent's head. You try to go, how is this player going to think? How does he make decisions? So that's what I'm calling this mental model. It's all of the heuristics going on in his head or her head to make a specific decision. Analyze your options based on your mental model. Think about what you can do. Think about how his actions are going to affect your play. Think about how your hand affects it. And when your mental model is proven wrong, like it just was-- we did not think that guy had ace, five offsuit-- you can adjust your model. And you will get better at poker if you continue to analyze your game one hand at a time. So I hope that that was instructive and that I opened your eyes a little bit to all the power that PokerTracker has to offer you. If you have any questions, I'd be happy to take them. [APPLAUSE]
MIT_15S50_Poker_Theory_and_Analysis_IAP_2015
Introduction_to_Poker_Theory.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. KEVIN DESMOND: All right, everyone. So welcome to 15.S50, Poker Theory and Analytics. So this is going to be Monday, Wednesday, Friday from 3:30 to 5:00. I just got a room for a review session on Tuesday, Thursday for anyone who needs to catch up a little bit. The class is here, 4-370. I'm Kevin Desmond. I'm going to be the instructor. Paul Mende is the faculty advisor. And this is worth three H credits. The game play aspect-- so this is what I did, and I think this is really cool. Poker Stars gave us our own private league for only MIT people in this course. And my goal here is to separate people who are fairly new from people who are very competitive, because I don't want someone not to pass the course because they happen to be not that great at poker. So I created this thing called the Beginners' League. And these are going to be Daily Turbos. Turbo means they're fast-ish tournaments. And to get the game play credit, you can cash-- you can make money-- in one of them, or you can play in 10 of them. So those who are struggling can get the game play credit by playing 10 tournaments, which is about a 10-hour commitment. Let's go into the game play aspect more. So Poker Stars created this private league for us, which is really cool. Poker Stars is generally considered the most reputable online poker site. That's why we use them. So they have two different types of games: real money and play money games. Now, if you're in the US, you can't do real money. It used to be something that was a very gray area. And then there was one poker site which turned out to be legitimately like a Ponzi scheme, and as a result, poker in the US is now much more black and white-- definitely not OK for real money. However, their play money scene is pretty resilient, and that's what we're taking advantage of here. The Poker Stars play money scene is broken down into two different things. They have public games, where you can just go and play for play chips against anyone in the world, which is cool. And you can do that, and I recommend you give it a shot just to get used to the software. In addition, you can do home games, which is what we're generally going to be doing. That's what they call their private leagues. So in the private leagues, in their home games, they have this showcase. And you might notice, as soon as you log in, that the MIT league, Poker Theory and Analytics, is already at the top. That's not just for us. That's for everyone. Anyone in the world who logs into Poker Stars and looks at home games has the MIT league at the top, which I think is really cool. So to access this, I'll send more specific instructions later. I've given you guys the passcode you need. But to actually get there, what you need to do is log into Poker Stars. You go to this button, which is a little house, to access home games. And then you want to join a game. And what you do is, you put in the Club ID, which is 557832. You put in the invitation code, which you're all going to have on Stellar. And then you put in your real name, preferably the one that's listed in the course, because I actually have to approve everyone that joins the league, and I can't do it just based on someone's screen name.
And I guess you have to agree to some sort of terms and conditions. So let's talk about hand histories. So a lot of the analytics are going to be based off of hand histories, which are just text files that Poker Stars gives you, to the extent that you indicate that you want to save them down. So these are kind of jumbled messes of text. Each line just shows one thing that happens. And you might get used to reading them, or might not, depending on how much you're going to scrutinize them. But more importantly, you can use these in all the data analytics programs that we're going to use. In particular, PokerTracker runs off of them. You'll load thousands of hands into PokerTracker, and it'll do analytics for you. It knows exactly what's going on based on that format, which is generally considered universal. And then for the sake of visualizing these hands-- if you just read them, that's fine, but if you want to show other people, I'm recommending we use something called the Universal Hand History Replayer, which is free. And what it does is, it just reads the hands, and it plays them. It animates what happened as if you were seeing it for real. So the deal with hand histories is, if you're a real money player, Poker Stars maintains databases of hand histories so that, if you want, you can request all your hand histories at any time. For play money players, they let you capture your own hand histories if you want, but they definitely don't save them. So the reason I'm showing you this now, and I'm going to email it out to you later, is that if you lose your hand histories-- if you don't capture them in time-- you'll never get them back. So make sure you're actually capturing hand histories, because we're going to be using them for a lot of the analysis we do. OK, so let's talk about the league. And honestly, I think this league is going to be really cool. Usually the evolution of a player is, they're terrible at poker, and then they start becoming good at playing against bad people. And then when they actually start playing for real, they get crushed again, because they're used to playing against other bad people. So this will hopefully get you used to playing against other people who are playing correctly, which is not something you can commonly learn just from playing around with your friends. In addition, through playing in these online leagues, you can collect stats that you could never get from playing live. And I think this is why the live tournament scene is dominated by online pros: no live pro can get as many hands or analyze their play in the way that you can do online. It's not even comparable. So even if your whole intention is to only play live for the entire rest of your life, doing this type of analytics gives you a chance to learn at a much faster rate and learn things that you would never see live. So every week we're going to have a major tournament, which is basically going to be the same structure, maybe a little bit slower, than the ones we do daily, except they're going to have real prizes. So Akuna is giving us, for the first tournament, Beats headphones, an Apple TV, Bose speakers, and a lot of gift cards. And then for the second tournament, they're giving us all of those things plus an iPad Air and an iPad Mini. But we're not done yet. Because this class is focused on playing live, we're going to end the class with a live tournament sponsored by Optiver on the 31st, which is the day after the last day of the class.
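One quick technical aside on those hand history files before more about the league: because they're plain, line-oriented text, it's easy to do your own quick-and-dirty processing on them before (or alongside) loading them into PokerTracker. A minimal sketch-- the patterns here are simplified illustrations, not the full PokerStars format, and "hands.txt" is a hypothetical saved history file:

```python
import re

# Simplified: real histories have richer headers and sizing on action lines.
ACTION = re.compile(r"^(?P<player>.+?): (?P<action>folds|calls|raises|checks|bets)")

def split_hands(text):
    """Hands in a saved history file are separated by blank lines."""
    return [block for block in text.split("\n\n") if block.strip()]

def count_actions(hand_text):
    counts = {}
    for line in hand_text.splitlines():
        m = ACTION.match(line)
        if m:
            counts[m.group("action")] = counts.get(m.group("action"), 0) + 1
    return counts

with open("hands.txt") as f:
    hands = split_hands(f.read())
print(len(hands), "hands;", count_actions(hands[0]) if hands else {})
```

Nothing here replaces PokerTracker-- it's just to show that the format is approachable if you ever want a custom stat the software doesn't ship with.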
So after the league's over, and after you guys are good at poker, you'll have an opportunity to play each other in a live tournament, where the prize pool is all of the Akuna prizes, plus a PlayStation 4, plus an iPad, plus a Kindle, plus a GoPro. I want this to reflect the type of things an online multi-table tournament player would do. How it normally works is, during the week-- basically every single day-- there is a uniform set of tournaments that will just run every single day at the top of the hour. And these pros will just grind those out. They'll get used to the structure. And that's where they'll cut their teeth. And then on the weekends, that's when you get a lot of the square money, a lot of the newer guys who only play poker on the weekend. And those are more gimmicky, idiosyncratic tournaments, but also the highest value. So that's why I'm producing the tournament structure like this, where the bulk of your tournaments will be very similar to each other, but the tournaments that really matter will be relatively different. That'll give you a feel for what these guys have to go through. So let's talk about turbos. Turbos let you focus on pre-flop decisions, which are the area where I think there is the most to learn among people who are new at poker. Basically, all of the value that you're losing in tournaments is from screwing up pre-flop. No one gets that right live, because it's really difficult to feel comfortable doing what's generally considered right. And we're going to spend a lot of time on pre-flop. But these turbos encourage you to do that sort of thing, because live play is a lot of pre-flop, and you're going to be doing that in the turbos online, too. In addition, no one wants to spend six hours on a tournament. So I'm making these turbos so you can be in and out in 45 minutes. And then you boot up another tournament, or you can be done with poker for that night. In addition, you can play as many tournaments as you want. It's common for pros to do something called multi-tabling, which is playing multiple tournaments at the same time. For the beginners, I'd probably recommend you just do one. But for the regular league, have at it. If you want to do all four tournaments at the same time, go ahead, to the extent that they overlap with each other a little bit. OK. So that's it for the prize league. So the schedule is, we're going to go through what I'm calling basic strategy, which is the basic axioms that we're going to be using in order to analyze the decision-making process in poker. Then we're going to be doing pre-flop analysis. And we're going to be doing a lot of this, because getting this right is really where the value add is going to be. I think the way that we can tackle this thing is the way I recommend you learn anything complicated. So we're going to break this down into three different sections: fundamental concepts; practice, which is actually implementing those concepts when you have 10 seconds to make a decision; and then more advanced stuff. With regard to concepts, I'm going to call this the basic framework for decision making. It's being unexploitable. You want to get to the level where, when you sit down at a table, every pro in the room doesn't turn and go, I want to sit at that guy's table. You want to be a slightly winning player way before you want to become a huge winning player.
In order to let you know the type of thing that we're learning, I'm going to label the slides with this, to indicate that this is a basic concept. Learn this thing before you move on. The advanced stuff is, once you learn how to do things-- and how to do things is pretty broad-- we're going to learn the minor adjustments we can make to get quite a bit of extra money, like how to grind that additional half big blind an hour out of our opponents. So, any real deviations from what we normally do, in addition to meta game. Meta game is always fun-- anything not related to the hand-to-hand decision-making process, like table selection, or bankroll management, or deciding whether or not to play. That stuff is really fun, and it's going to be indicated by this ace here. OK. So I'm going to label those slides for anything that's considered advanced-- stuff you should only really do once you get the concepts down. And then a lot of this class is going to be focused on practice, which is how to actually implement these concepts on a day-to-day basis when you're actually playing, especially live. We are not going to have all the information. We're not going to have calculators, and we're not going to have that much time to make a decision. So applying these in real time, making rules of thumb, figuring out what you can just ignore and what you definitely have to do, and then some of the psychology related to actually performing live-- that's what I'm calling practice, which is going to be indicated by that poker chip with a P in it. Let's talk about what I'm bringing to the table here. So this course is primarily going to be from my perspective. And the decisions about what I'm going to teach you here, and the value judgments I'm making, are going to come from what I consider the appropriate way for someone to play poker. So my background is that I was an online multi-table tournament grinder-- not because I was a great pro, but because I sat more than I played. I was definitely a person who did not play every single tournament. I told you the World Series of Poker has like 25 different tournaments. 10 are Texas Hold'em. And then they have an Omaha tournament, and a HORSE tournament, which is a combination of five different games. And what is common is that any pro who plays one plays them all. I consider that ridiculous for someone who's actually interested in making any sort of money or career playing poker. So I'm definitely someone who prefers identifying value and monetizing it. So anyway, that's the perspective that I'm going to be teaching this course from. I like ROI. It's a great efficiency metric. Usually you try to maximize your ROI up until the point where your hourly rate falls below some floor that you set for yourself, because one of the ways you improve ROI is by moving down in stakes. Usually lower stakes are easier games, so you should have a higher win rate-- but that win rate's multiplied by a much lower number. So usually you're going to move around in stakes until you have a good ROI, but hopefully one above the lowest amount you can feel comfortable earning. In addition, I want to focus on live tournaments, because who knows what's going to happen to online poker? Whereas live tournaments are very social, they're very public. Everyone knows who wins live tournaments. So I'm going to teach in a way that focuses on those types of values. OK. So let's move on to some of the concepts and tools that we're going to learn.
So we're done learning about what we're actually going to be doing during this class. So let's learn a little bit about poker. So the first thing is, we're going to be using PokerTracker a lot. So I'm going to email out exactly how to install this thing. PokerTracker has donated 115 licenses to their product for us. And then our next lesson, on Wednesday, is going to be Joel Fried teaching us how to use this thing and going through some of the analytics. So one other thing that I like using is the Universal Replayer. And what this thing does is, it just visualizes hand histories. So you'll feed it a hand history in a text file, and it animates it. It probably does other things, but it's free. And this thing's been around for a while. I'm not even sure if it's supported anymore. But it's a thing that I'm used to. So this is what it looks like. So you give it a hand, and then it reproduces what you might have seen if you actually played that hand. So let's move on to a concept: stack size. So this might seem fairly simple, but we ought to make sure we're talking about the same thing when we go through this. So your stack size is the value of the chips in front of you. That's fairly normal. But we have this thing called effective stack size, which is what we're usually going to be talking about when we refer to stack, which is the minimum of your stack and the next biggest stack after you. And the way to think about this is, it's the number of chips you could possibly lose in this one hand. That's what your relevant stack size is. And the way you make decisions will depend on your effective stack much more than anything else. So an example of this would be, say you're in a heads-up situation where you're the hero here in the small blind. The big blind has, whatever, 300 chips. And you have some amount of chips, with queens. So if you have 1,500 chips, and so does he-- say the blinds are like 10/20-- you have, what, like 50 times the blinds combined here. So this is a pretty different hand than aces. Why? So say that you raise with queens, and then he raises you. So you raise to 60, he raises you to 200, you raise to 600, and he pushes to 1,500. Your queens are probably not really that good anymore. It matters how many chips you have here. However, if you have 300 chips, you raise with queens, and then he pushes over, you can't fold that. You might as well have aces. It makes the way you play hands materially different. That's why stack size matters in general. When the chip stack is low, you're playing these two hands basically identically-- you're just playing this range. And when we're talking about effective chip stack, it's the same thing: even if you have 1,500 and he has 300, if you raise, he's going to push. You don't have the opportunity to do that back and forth anymore. So you might as well have 300 with regard to your decision making here. That's why we're looking at the effective stack-- it really matters who has the fewest chips, because that determines when the action is going to be over. So really, I like this definition the most: the most chips that you can lose in the hand. It's a lot simpler to think about, I think, than the min formula. OK. And then we're almost always talking about the effective stack. Let's talk about Dan Harrington. So Dan Harrington is a player whose style I very much like. His nickname's Action Dan, which the consensus is he just kind of gave himself, because he's considered Mr.
Fundamental, like a tight aggressive ABC player. So this playing style, this temperament, tight aggressive, is something that is used to characterize basic playing styles. So let's quickly go through what those are. So there are two different axes here. There's how often you bet, where bet means you are raising the stakes, so either you bet or you raise. And then here's how often you call. Either you call a lot or you call not that much. You can get a good feel for the type of person someone is by what box they fill in. So these have names. So someone who's tight aggressive, you would just refer to them as a Tag, which is what Dan Harrington is. You bet when you have good hands and you fold when you have bad hands. Another possibly winning strategy is loose aggressive, Lag, where you certainly bet when you have good hands, but you will see a lot of cards before you'll give up on a hand. You're definitely willing to call a lot. The other two, tight passive and loose passive, are not pronounceable words, so the community generally came up with different words to describe these. So a tight passive person is weak. They're someone who you can completely run over, because they fold when they have a bad hand, and they check when they have a good hand. I guess they would be called rocks. You never need to worry about having a big losing night against these guys. So someone who's tight passive is generally considered to be playing sub-optimally. And then the loose passive people are described by this icon, which I forget what it's from. I think it might be from an old version of PokerTracker, or maybe it was on Party Poker or something. But everyone loved seeing this icon, which you could label people with, because a loose passive person is what? They are a calling machine. That's what that stands for, and it means that when you have a hand, they will call all of your bets. You will extract value out of them. But when they have a hand, they're OK with letting you look at your draws to make a decision about whether by the river you have a hand or not. There's virtually no way that these guys are making money in poker. I think, over a realistic sample size, there's no type of player who could fit in this quadrant and be good enough on any other metric to actually be making money in poker. So in general, how we look at this is, we would call this Tag guy solid ABC. That's what I'm recommending you guys play as. Tag players, as a quadrant, are going to be the biggest winners. Lag players, someone who's very aggressive and plays a lot of hands, could possibly be a pretty good winner. It depends on the type of game, and then their opponents and their ability to pick spots. But there are a lot of big Lag winners. There are not a lot of big weak winners. And there are not a lot of calling machines, loose passive players, who are not big losers. So anytime you see this-- this is the definition of someone who's a complete fish, a huge donator to the game. And your ability to recognize this type of thing will help you find good games to play, when you see someone doing this kind of thing. Anyway, back to Action Dan. So Dan Harrington is a pretty good poker player. He's been around the block. He won the main event back in 1995, when it had, like, 300 people in it. He has two World Series of Poker bracelets and one World Poker Tour title. But anyway, so Harrington popularized this thing called the M-ratio, which was invented by someone else.
So the M-ratio was invented by this guy Paul Magriel, who's a backgammon theorist, apparently one of the best backgammon players in the world, a commentator for the WSOB, the World Series of Backgammon, with eight WSOB final tables. Anyway, so he's supposedly really, really good at math, even by MIT standards. But he invented this thing called the M-ratio, and then it never caught on until Harrington started using it. All right, so Harrington's M-ratio is your effective stack divided by the sum of the blinds and the antes. So you'll hear people talk about, like, oh, I had 10 big blinds, or 15 big blinds, or whatever, to talk about their chip stack. But that has a fundamental problem. It has a lot of different problems. One is, it doesn't tell the whole story. So the usual blind levels are like 1/2 or 2/4, where the big blind is just twice the small blind. So that's the assumption. But if you're at a blind level that's like 1/3 or 3/5, the number of big blinds you have is not indicative of anything. It's not indicative of how many hands you can see, or how much you care about winning a pot pre-flop. So using the blinds is bad. In addition, once you start having antes-- if you're at 50/100 blinds and you have an ante of 25, you have basically half the stack you had before, in realistic terms. Just counting big blinds doesn't factor in antes at all, and that's a major problem with referring to it like that. So using M seems to make a lot more sense. So what it is, is it's basically the percentage of your stack that the blinds and antes are. It's like how many rounds of poker you can survive if you just fold every single hand. Of course, you're not going to do that. Although I think that's what he's actually getting at, because he uses M to refer to when you have to make a move, which is not generally how I recommend you do it. I think it's more important because it measures how important the blinds are to your stack. The only reason anyone plays any hand of poker is because someone wants to win the blinds. So even if you have kings, to some extent, if you could just win the blinds, 99% of the time you would just do that. You don't really always want someone to go up against you. So the blinds are really driving the decision making process, at least pre-flop. And the percentage that those blinds are of your stack matters a lot. If they're 1% of your stack-- if your M is 100-- the blinds basically don't matter at all. It's what happens after the blinds that's going to materially impact your decisions. Whereas if your M is 2, and the blinds are half your stack, winning those seems really important. You should do whatever you can to maximize your chance of winning them. So that's why M is a good ratio here. And then, in addition, for tournaments, it makes it much easier to talk about hands without having to worry about all the different parts of the tournament life cycle. If you have 1,500 chips and it's 50/100 blinds, you can basically make the same decisions as if you have 10 times as many chips at a level with 10 times as high blinds. You can just divide in your head and basically make the same decision. You don't need to worry about doing anything different as a result of having more chips. So Harrington invented or brought up a bunch of other things that never really caught on. He invented a thing called the Q ratio, which is your stack size divided by the average stack size in the tournament. So I guess you might use this to get an idea of how far behind you are in the tournament.
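To make those definitions concrete, here is a minimal sketch in Python-- the function names and signatures are my own, not from PokerTracker or any other tool-- computing effective stack, Harrington's M, and the Q ratio as just defined:

def effective_stack(hero_stack, opponent_stacks):
    # The most chips hero can possibly lose in this one hand.
    return min(hero_stack, max(opponent_stacks))

def m_ratio(stack, small_blind, big_blind, ante, players):
    # Harrington's M: effective stack over the cost of one full round.
    return stack / (small_blind + big_blind + ante * players)

def q_ratio(stack, total_chips_in_play, players_left):
    # Magriel/Harrington Q: your stack over the average stack.
    return stack / (total_chips_in_play / players_left)

# The 50/100-with-a-25-ante level from the lecture, 10-handed:
print(m_ratio(1500, 50, 100, 0, 10))    # 10.0 without antes
print(m_ratio(1500, 50, 100, 25, 10))   # 3.75 once antes kick in

The two printed values illustrate the point above: counting big blinds alone says "15 big blinds" either way, while M drops sharply once antes are in play.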
Like if your Q is 5, you don't need to be that aggressive. But if your Q is 0.2, you have a lot of catching up to do before you're realistically going to be anywhere near the money. I don't really make decisions based on that. I think the community doesn't either. So it never really caught on for anything. I've never actually heard anyone use that. So he also came up with this thing called effective M, which makes sense if you look at M from his perspective. Effective M is your M multiplied by how shorthanded your table is-- the fraction of a full 10-handed table you have. And it gives you the equivalent number of rounds of a 10-handed table you could survive. It just means that, say you have an M of 10-- you could survive 10 rounds of blinds. If you have three people at your table, you can't survive for another six hours, because you're actually paying the blinds every other hand. That's what effective M is doing. It reduces your M proportionally. Since he's looking at this from the perspective of when you need to start making moves, it kind of makes sense that your M would be reduced if you're shorthanded. But I look at M from the perspective of how valuable the blinds are relative to your stack. So I don't really use that. I don't know anyone who really uses effective M either. But he invented them, and maybe they'll catch on eventually. So I think that's going to be it for today. Thanks, everyone, for a good first lecture.
MIT_15S50_Poker_Theory_and_Analysis_IAP_2015
Game_Theory.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, well I'd like to thank you for inviting me again to talk to the poker class. It's always great to come here, and we're going to be having a tournament in a couple weeks, so good luck to the people participating in that. Actually, I'm coming back in another two weeks because I think there's a Harvard MIT math tournament for high school kids. I really love visiting MIT. I just wish it were at some other time besides the winter. Then it would be perfect. All right, today I'm going to talk about the University of Alberta's Cepheus computer program. It supposedly solved poker. We're going to talk about what they actually did. [LAUGHTER] There seems to be a lot of buzz about this, so I thought this was a good topic to do. So I have to tell you that Jared and I did not work directly with the University of Alberta people, but we are very familiar with their methods and have actually tried some of their coding techniques. So we're pretty familiar with the same research that's going on. So it's sort of an, I think, objective commentary. So by the way, as the lecture goes on, you can interrupt with questions. Just raise your hands if something is unclear, because I've been told I have about 80 minutes. I'll probably spend 55 and then save the rest for questions. All right, so here's the outline of the talk-- first I'm going to talk about what Cepheus accomplished, what the University of Alberta people accomplished, and I'm going to bring that up by discussing game theory optimal strategies in poker. How many of you know what game theory optimal is, or what a Nash equilibrium is? Raise your hands. OK. So about 1/2, 2/3. Good. I'm going to do a quick introduction to what game theory optimal is. We're going to talk about a simple poker game and solutions to it. And then I'm going to go into their algorithm, which is written up in a few papers. They used the method of counterfactual regret minimization. Actually, the method they used to push through to the solution of the problem is CFR plus, which is basically the original algorithm with some shortcuts, which we'll discuss. After this, though, we're going to think about extensions of computer solutions to other games, including big bet games and multiplayer games. A couple people have questions about a no limit program. We'll talk about what their work entailed if questions lead in that direction. All right, let's talk about what Cepheus accomplished. It's a game theory optimal solution to heads up limit hold 'em. And so what does that mean? You guys all know what limit hold 'em is, right? Good. Basically, after a few years, they've achieved an exploitability of less than 1/1000 of a big blind. So the first thing is, it's not a truly perfect optimal solution. You can still exploit it for about 1/1000 of a big blind per hand. However, that's pretty small-- this is 1/2000 of a big bet. You can actually play heads up for 50 years at normal speed and still have some probability of losing. The reason for that is the standard deviation of heads up limit hold 'em is about five big blinds. So you can just imagine how many hands you have to play to reach significance. About, oh, 25 million.
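That 25 million figure follows directly from the two numbers just quoted. A minimal sketch, using only the lecture's own figures (a 5 big blind per hand standard deviation and a 1/1000 big blind edge):

# Hands needed for a 0.001 bb/hand edge to stand one standard
# deviation above the noise: n = (sd / edge) ** 2.
edge = 0.001   # big blinds per hand
sd = 5.0       # big blinds per hand
hands = (sd / edge) ** 2
print(f"{hands:,.0f}")   # 25,000,000 hands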
So it's definitely a milestone. This is the first time a real poker game has been solved. In The Mathematics of Poker, we solved ace-king-queen games on paper, but this is the first real poker game that's been solved. However, given their previous work, it was just a matter of time. I remember two or three years ago they passed the 1/100 of a big bet mark, which is sort of our measurement of significance. If you're playing and you're winning more than 1/100 of a big bet per hand, you can say it's a profitable game. Below that, it becomes theoretical. So it's definitely a milestone. And basically I knew that, if they just spent more CPU power, they would get the solution. After 900 CPU years, we finally got the solution. So I don't know. If I had that much CPU power, I'd solve a few problems, too. But it's still a milestone. It's great. So what effect does this have on other games? Does this mean poker is going to go the way of chess, where computers are just much better than we are? I don't think we're there yet, and we'll talk about that later. So let's talk about Nash equilibrium. So John F. Nash won the Nobel Prize in 1994 "for pioneering analysis of equilibrium in the theory of non-cooperative games." And he extended the work of John Von Neumann and Oskar Morgenstern, who actually first considered these two player zero sum games. So a Nash equilibrium is just a set of strategies such that no player can change their own strategy and make more-- more money, more EV, whatever. In two player zero sum games, we also refer to Nash equilibria as game theory optimal. The reason is because Nash equilibria are also the min/max solution. It's the best you can do given that he can see what you do and respond. The simplest case of a Nash equilibrium is, if you're playing rock, paper, scissors, what's the Nash equilibrium? 1/3 each. So that's not that exciting in this case, because both players kind of just get 0. You can't make more than 0, you can't make less than 0. So it doesn't seem to be that exciting a solution, but in poker it's kind of exciting, because there are dominated mistakes people play-- mistakes that actually lose money to the optimal solution. So the reason 1/3, 1/3, 1/3 is the Nash equilibrium is because nobody can do anything to improve their lot. It may not be the best thing to play. If a guy is playing 1/2 scissors and 1/2 rock, what should you play? 100% rock. Yeah, sort of like the Aerosmith strategy. [LAUGHTER] Right. So there are much better ways to play if your opponents deviate from Nash equilibrium. So actually game theory optimal is not necessarily the best way to play, even heads up. It's a way to play that kind of guarantees you never lose. So that's sort of the accomplishment. That's why we like to find these things. I know I could just play this, and I'm not taking total advantage of my opponent's mistakes, but at least I'm playing in a way where he can't take advantage of me at all. Let's do a simple example. So this is an example that I shared with the class a couple years ago. So there are two players, Rose and Colin, and the reason the players are called Rose and Colin is because this comes from matrix games. One player chooses a row, the other player chooses a column. That's their payoff. And for a three player game, we introduce Larry, because there are layers. So the two players are Rose and Colin. So each player antes $50 for $100 in the pot. Rose looks at a card drawn from a full deck, and she will win the pot at a showdown if the card is a spade.
Otherwise she will lose. So Rose can decide to bet $100 or check after she looks at her card. So there's $100 in the pot. She looks at her card. She decides whether to bet $100 or to check. If Rose bets, Colin may decide to call $100 or fold. If Colin folds, Rose wins. Well, you guys know how poker works. If Colin calls, there's a showdown, and if her card is actually a spade, she wins the whole pot. Otherwise Colin wins the pot. So what are the optimal strategies for Rose and Colin? Does anybody know the answer? Well, let's do one part of it. How often do you think Colin should call? Colin wants to call often enough to make Rose's bluffs unprofitable. If Rose gets a spade, what is she going to do? Bet. She has nothing to lose by betting, unless she's being very, very tricky, but it is correct to bet. So let's see. If Rose doesn't pick up a spade and bluffs, how often does that have to succeed for it to be profitable? There's $100 in the pot. She looks. If it's not a spade, she has to bet $100. And how much is she risking? How much is she going to win? It's actually $100 to win another $100, right? Because there's $100 in the pot. Sure, she anted something and made the pot, but she's spending $100. And if Colin calls, she's going to lose the $100. If Colin folds, she's going to win the $100 in the pot, or she could have just given up. So it's 1 to 1. So Rose should call half the time-- I mean Colin should call half the time. Rose should bet to bluff in a 2 to 1 ratio, because that's the odds Colin gets to call. So Rose should always bet a spade. If Colin calls 100% of the time, Rose will just never bluff. If Colin never calls, Rose would just bet every time. So there is kind of no equilibrium there. If Colin calls half the time, Rose will be indifferent to bluffing. She'll be negative $50 either way without a spade, and then $100 with a spade. Now, this is the strategy for Colin, and the correct strategy for Rose is this ratio of bluff to spade, which is 1 to 2. So Rose should basically bet half of her hearts. She can bet the high hearts, and I guess with the eight of hearts she can decide whether-- is it the eight or the seven? No, it's the-- yeah, it's the eight. So with the eight of hearts she can decide whether to bet or not like half the time. So these are the Nash equilibrium and game theory optimal strategies, and basically the value of the game is negative for Rose-- it's worth $12.50 to Colin. Any questions about this? All right, so these are the strategies that the algorithm tries to find. Let's go on to the algorithm now. Well, let's talk about what game theory optimal is first. By the way, there will be about five or so transparencies full of math equations. So just suffer through these. Those of you who understand are going to enjoy the later part, but let's just talk formally about what game theory optimal means. So there's this game function, u. It takes two strategies, an x strategy and a y strategy, and it gives a payoff. If this was rock, paper, scissors, you would have u of rock versus scissors be 1, and so on and so forth. It's positive for x and negative-- x gets u, and y loses u. That's the idea. So one of the things is we can take convex linear combinations of strategies. That is, if the sigma x k are strategies and we have some coefficients that are all non-negative and that sum to 1, we can make a new strategy as a linear combination of these strategies.
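Before going further with the formalism, here is a minimal sketch that checks the Rose and Colin numbers by direct expectation-- the strategy parameters are the ones just derived (always bet spades, a 2-to-1 value-to-bluff ratio, Colin calling half the time), and everything else is plain arithmetic:

# Rose antes 50, Colin antes 50; Rose wins the $100 pot at showdown
# if her card is a spade. She may bet 100; Colin may call or fold.
P_SPADE = 13 / 52
BLUFF = 1 / 6      # bet 6.5 of her 39 non-spades: the 2-to-1 ratio
CALL = 1 / 2       # Colin calls half the time

def rose_ev(bluff, call):
    # Rose's net result, counting her $50 ante as already spent.
    ev_value = call * 150 + (1 - call) * 50        # a bet with a spade always wins
    ev_bluff = call * (-150) + (1 - call) * 50     # a bluff loses 150 when called
    ev_nonspade = bluff * ev_bluff + (1 - bluff) * (-50)   # or give up the ante
    return P_SPADE * ev_value + (1 - P_SPADE) * ev_nonspade

print(rose_ev(BLUFF, CALL))   # -12.5: the game is worth $12.50 to Colin
print(rose_ev(0.0, CALL))     # -12.5 again: calling half makes Rose's bluffs indifferent
print(rose_ev(BLUFF, 1.0))    # -12.5 again: the 2-to-1 ratio makes Colin's calls indifferent

The second and third lines are the equilibrium property in miniature: at these frequencies, neither player can change anything and do better.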
And u being bilinear means that the value of the game against a mixture is just the corresponding linear combination of the values against each sigma x k. And it would be the same also for sigma y. This just means, suppose you have two strategies and you play 1/3 sigma x1 and 2/3 sigma x2. Your payoff is going to be 1/3 of the payoff of sigma x1 and 2/3 of the payoff of sigma x2. Hopefully that's pretty clear. Now we define a pair of strategies to be an epsilon equilibrium if the gap between the best x can do against sigma y and the best y can do against sigma x is at most epsilon. And if epsilon equals 0, these are in Nash equilibrium. So after 900 CPU years, what they found were two strategies-- sigma x star, sigma y star-- that were within 1/1000 of a big blind of equilibrium. And that's basically what they accomplished. So I'm going to actually go through the nitty gritty of how they did this in case you would like to write your own poker solver someday. So the big idea that they borrowed was this idea of regret minimization, which is actually pretty cool. Suppose at each time step t the player has a few pure strategies. We're assuming the player has a handful of strategies. In poker, obviously, there's trillions of strategies-- two to the trillions of strategies. But say he has two strategies. He can play one or two. Suppose it's odds or evens, or something like that. Or he has three strategies, like rock, paper, scissors. So basically he chooses some sort of mixture of strategies at the beginning, and we're only dealing with one player at this time. We're assuming the other guy-- we're assuming he's playing against some adversary that's all knowing. That's the original setup for regret minimization. We'll talk about how this applies to game theory in general. Now at each time t we're given values u sub t of sigma k. So basically after he determines his mixture, the adversary decides what the values u sub t are, and basically his payoff is just the linear combination of the things he picked. But the idea is that the adversary can be adversarial. He can decide to make one strategy score well some of the time, and another strategy score badly some of the time. So basically now the idea is to calculate a regret. By the way, this is not the notation that's used in the three or four papers they wrote on this. I think they did great work, but it's really written as a math paper. It looks like a particle physics paper-- and actually for particle physics you need all the complex notation, because they're trying to describe something genuinely difficult. I think computer science papers usually don't need this. So I'll explain this, and then you guys can go reread their paper. I think this will give you a quicker way to understand their paper. So there's this thing called the regret of the k option at time t, which is just the sum of the differences of playing k versus playing whatever you played. So basically you can have positive regrets or negative regrets. Negative regret means that what you decided to play up to time t was better than just playing k at each time step. So we're mostly concerned with the positive regrets, which mean you could have made more money by playing option k. So what's the significance of this? So the idea is we want the average regret, which is this quantity divided by t.
So basically you want the average regret, the average amount that you're kind of missing out on, to be less than epsilon sub t, where epsilon sub t is a sequence converging to 0. If you have this, you have a regret minimizing algorithm. So the cool thing about this is you can do regret matching. You can let these weights-- first of all, you just look at the things with positive regret, and weight the options. At each time step, we basically weight the options that have positive regrets accordingly. And if you're so lucky that nothing has positive regret, you just randomly pick a strategy. Let's do an example, because I think it's kind of unclear what this is. So let's just say we have two strategies. The player can pick one, or the player can pick two at each time, or the player can pick some mixture of one and two. After the player does that, the adversary comes out and says, well, one of them is worth 0 and one of them is worth 1. So let's just see how this works. So suppose at the first time step we picked sigma 2-- sorry, sigma 1. We'll just randomly pick sigma 1, because we don't have any regrets yet. So the adversary now gives us that the value of sigma 1 is 0 and sigma 2 is 1. And you go, oh, well that means that the regret of the first option is 0 and the regret of the second option is 1. The reason this first option is 0 is because we already played sigma 1, so you can't have any regrets, either positive or negative, for playing sigma 1, because your option was playing sigma 1. But you have some regret of not playing sigma 2. Sigma 2 was kind of the winner here. If the two values were reversed, we would have r1 equals 0 and r2 equals negative 1. And then we'd be happy, because all our regrets would be non-positive. So at t equals 2, because we have zero regret here and regret 1 here, we actually pick the strategy to be all sigma 2. Now the adversary says, OK, well the value of sigma 1 is 1, and the value of sigma 2 is 0 for the second time step. So what happens? Well, the same thing happens as before. Now we have regret of 1 on the first option, and regret of 1 on the second option. So what do we do next? The regrets are tied. Well, flip a coin, or just pick an even linear combination of the two strategies, half of one and half of the other. That's what we can do. They're the same. So now the adversary says sigma 1 is 0 and sigma 2 is 1, which means that the regret of option 1 actually goes to 0.5, and the regret of option 2 actually goes to 1.5. Option 1 goes down a 1/2. So now with these regrets, our weighting is the ratio of the two. It's 1/4 sigma 1 and 3/4 sigma 2. So now the adversary goes, OK, well sigma 1 is 0. Sigma 2 is 1. So this regret actually goes down by 3/4, and this goes up by a 1/4. And since this is negative, now we pick the strategy to be all sigma 2. And so on and so forth. Now the adversary can flip on us and say, oh, it's really sigma 1. Then the regret of sigma 1 would go up to 0.75, and so on and so forth. So it seems that the adversary can make the job tough on us. Well actually, there is a theorem that says, for our example, the following. The square of the first regret if it's positive, plus the square of the second regret if it's positive, is always going to be less than or equal to t. And that's because, if these are both positive, at each step r1 changes by plus or minus the weight you're putting on option 2.
And r2 changes by the opposite sign times the weight you're putting on option 1. When you square these out, you can see the cross terms cancel each other out-- each one is 2 r1 r2 divided by r1 plus r2, with opposite signs. So you're left with the old r1 squared plus r2 squared, plus the two weights squared. And those two squares sum to less than 1, so we have this bound here, which means that the quadratic sum only increases by 1 each step. We have this bound. Why is this bound so great? Well, if the sum of the squares of the regrets is less than t, that means the average regret is going to be on the order of 1 over root t. In fact, it's kind of left as a homework problem. In the general case, r k of t over t is less than the square root of n minus 1, times delta, over root t, where delta is the maximum deviation of the options and n is just the number of options. Yeah? AUDIENCE: I'm curious, is [INAUDIBLE] in terms of what is the strategy sigma. Number of like a payoff? PROFESSOR: No, no, no. A strategy sigma, in terms of poker strategy, is sort of a description of what you would do. Suppose you get ace, six off suit pre-flop. A strategy would be a descriptor of what you would do at each point of the hand. So there's some significance in the fact that this average regret goes to 0. Well, the significance in terms of game theory optimal is-- suppose you have a bunch of pure strategies for x and a bunch of pure strategies for y. If we regret match, but instead of having an adversary, we just say the time t utility for x is just the game utility for x playing against sigma y t, and the utility for y is just the negative-- the game utility for y playing against sigma x t. This is kind of a mutual regret matching. You do regret matching for x and y at each step, which means you compute the regrets at each step, and then you modify the x and y strategies by this type of regret matching. And basically the strategy you choose is the average strategy, which is the sum of the strategies you have had all along divided by t-- 1/t times all the strategies you've done in these t steps. And basically what happens is now, if you try to figure out how to exploit the average strategies-- again, this is the best x can do against y minus the best y does against x-- you compute this, and you add and subtract the sum of what actually happened with x sub t and y sub t, and so on and so forth. You notice that this is the regret of x picking strategy k all the time, plus the regret of y picking strategy j all the time. So that's less than 2 epsilon sub t, because the regrets over t converge. So it's within 2 epsilon sub t of game theory optimal. Basically what this all means is-- suppose you choose your strategy, some mixture of stuff. Your opponent tries to figure out how best he can exploit this strategy. By the way, this is often called the nemesis. I really like that name. The opponent figures out his nemesis strategy against you. And his nemesis strategy-- unless you're playing the exact game theory optimal strategy-- is always going to do better than the game value. He looks at what you've done and finds the best response. And you do the same to him, and the difference of those two numbers is kind of the exploitability. Obviously, this means basically, if your opponent sees what you're doing, this is the best he can do against you. This number is the one that's less than 1/1000 of a big blind. So counterfactual regret is kind of cool because-- it's a good thing I've drawn this tree. At each of your decision points, now you can regret match.
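Here is a minimal sketch of that regret matching walkthrough in Python-- the adversary's utilities are hardcoded to the exact sequence used in the example above, so the printed regrets should match those numbers step by step, and the last column spot-checks the squared-positive-regret bound:

def match(regrets):
    # Weight options in proportion to positive regret; uniform if none.
    pos = [max(r, 0.0) for r in regrets]
    s = sum(pos)
    return [p / s for p in pos] if s > 0 else [1 / len(pos)] * len(pos)

# The adversary's choices from the example, steps t = 1 through 5.
utils_seq = [[0, 1], [1, 0], [0, 1], [0, 1], [1, 0]]

regrets = [0.0, 0.0]
strategy = [1.0, 0.0]                     # t = 1: arbitrarily play sigma 1
for t, utils in enumerate(utils_seq, start=1):
    played = sum(s * u for s, u in zip(strategy, utils))
    for k in range(2):
        regrets[k] += utils[k] - played   # CFR+, discussed below, floors this at 0
    bound_ok = sum(max(r, 0) ** 2 for r in regrets) <= t
    print(t, regrets, bound_ok)
    strategy = match(regrets)

# Prints regrets [0, 1], [1, 1], [0.5, 1.5], [-0.25, 1.75], [0.75, 1.75],
# with the sum of squared positive regrets staying below t at every step.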
So first of all, you don't need to be fed back the correct utility exactly. Here in the example we gave, we had a u of 0 and a u of 1. You'll just be fed back some unbiased stochastic number whose average is the value of the game. For example, if you're doing regret matching on poker, it's hard to tell-- if I come up with this strategy that takes a bunch of terabytes, and you come up with a strategy that's also a bunch of terabytes-- what's the value of x playing against y? But we can just get a sample. We can get a sample. Well, you can just run it once. Right, that's the idea. You get a sample by just saying, OK, just play one hand, and see the result of that hand. And you can use random chance every time you decide which branches of your tree to take if you have a mixed strategy. So the cool thing already is, with counterfactual regret, you can quickly converge to the solution, because a lot of methods, like fictitious play, use the best response. The best response is hard to calculate sometimes, but here each simulation can just be one iteration through the tree. And this is counterfactual regret because the utility is given assuming that the player does everything to play to that node. So the weighting here is-- nature just has its probabilities. Your opponent plays according to his strategy, but when you play, you always kind of play towards that node, so your weight is actually 1 for each of the options you pick. The cool thing is that once you have the structure set up where you're just doing one or a few iterations throughout the hand, it's actually pretty easy to set up different weighting schemes. For example, suppose you have two options, and the ace of hearts comes on the turn, or the deuce of clubs comes on the turn, and you don't really have to worry about the ace of hearts coming on the turn. That tree is fine. That part of the tree has very little positive regret. You can say, OK, we'll just play a different game where the ace of hearts comes about a tenth as often as the deuce of clubs comes, but we're going to weight the results by 10. You still get the same answer. It's just that you get a much coarser kind of sample every time the ace of hearts comes, but you already kind of know what to do there. You can work on the deuce of clubs. So there are a lot of different weighting schemes. This means that the hands can be kind of sampled intrinsically. So the final algorithm they had was counterfactual regret plus. So instead of having accumulated negative regrets-- basically a lot of these option regrets can be really negative. Folding aces pre-flop quickly turns to really negative regret. You lose your small blind, and hopefully, if you play limit hold 'em, you could win more than the small blind. So you accumulate a lot of negative regret, and those options fall off the map. Their innovation in counterfactual regret plus is, instead of keeping a big negative number on a lot of these things, they just floor them at 0. And the reason they floor at 0 is because this is a simultaneous evolution of strategies, where the strategies at the beginning just might not be great strategies. And if the regret of something is floored at 0, it can accumulate positive regret faster if it becomes the right thing to do in response to your opponent's strategy. All of these things-- suppose you start with a random initial guess for your opponent's strategy. Then you actually have a pretty reasonable strategy, which is bet and raise every time with every hand. If your opponent has a random strategy, he might just fold.
So later in the streets, it's probably right to just bet and raise every time with every hand. He raises you back. It's not like he knows anything. It's a random strategy. Just raise him back and hope he folds. If he doesn't fold and calls, you bet again, because now the pot is bigger. So if he has a 1/3 chance of folding, you should bet. So that evolves quickly. If you start off with a random tree with no information, that starts off as the dominant strategy. And then you have to walk that back as your opponent's strategy evolves also. By the way, they're actually keeping two trees-- one for the small blind strategy, and one for the big blind strategy. And this is everything with respect to the small blind. So let's just go on to the next slide. So let's try to figure out how big the strategy space in limit hold 'em has to be. So let's concentrate on river nodes, because that's most of the nodes. It's a tree, so we just have to calculate the leaves. So first of all, we're assuming a four bet cap. The reason we assume a four bet cap-- well, I don't know why, but it's one approximation that's kind of normal in these types of research papers. If we have a four bet cap, there are nine possible actions that get you to the next street. There are some actions that end the hand, like player one bets and player two folds, but then you don't get to the river, and that's a pretty small percentage of the nodes. So why are there nine possible actions? Let's count them. One of the actions that gets to the next street is check check. So that's one. What are the other eight? AUDIENCE: [INAUDIBLE]. PROFESSOR: Right, check raise. Let's try to systematically count them. So I claim that there are two ways to put one bet in the pot. Player one can bet, and player two can call, or player one can check, player two can bet, and player one can call. In fact, there are two ways to put k bets in the pot for any k greater than 0. If you want to put three bets in the pot, what are the two ways? AUDIENCE: [INAUDIBLE]. PROFESSOR: Right. Yeah, right. Bet, raise, re-raise, call, and check, bet, raise, re-raise, call. So if the cap is k bets, there are always 2k plus 1 ways to get to the next street. So there are nine possible actions in each betting round before the river. So there are three betting rounds-- pre-flop, flop, and turn. So let's use some symmetries, because I don't think the optimal strategy has you playing something differently with ace, six of diamonds and ace, six of hearts. That's actually pretty easy to prove-- the optimal strategy doesn't have that. So using symmetries on the flop-- so how many distinct flops are there? Well, I like to think about it as where the suits have symmetries. I like to think about it as, well, there could be three suits on a flop, two suits on a flop, or one suit. So if there's one suit on the flop, there's 13 choose 3 combinations. That's pretty straightforward. If there are two suits on the flop, what are the combinations? There are 13 choose 2 possibilities for the suit with two cards, and there are 13 for the other suit. Whether it's spades or hearts doesn't matter-- the suits are symmetric. So there are 1,014 of those. And if it's three suited, you just choose three ranks, but it's not 13 choose 3. It's 15 choose 3, because why? I guess the ranks can be equal.
So it would be 13 choose 3 if the ranks had to be unique, but you can have, say, three aces out there. So this is actually 15 choose 3. So there's 455 three suited flops, and 1,755 distinct flops in all. That's kind of the big explosion in limit hold 'em, pre-flop to flop. So there are nine possible actions in each betting round. So let's count the number of turns and rivers. There are 49 turns and 48 rivers. So counting that, you have a few billion possible action sequences to the river-- the nine actions in each street, all the flops, then the turns and rivers. But at each river, there could be up to 1,081 hole card combinations-- 47 times 46 over 2-- making about 6.5 trillion river hand types. Each node should be visited about 1,000 times. It's a big computational problem, but it's still tractable, especially if you have 900 years of CPU. And they also used many shortcuts. They used all the symmetries I talked about, and they also have a few shortcuts. And you can see these trees are big-- terabytes of memory to actually store your strategy. So you can't really get that on a node yet. I don't know. Can you fit that on a node now? Does anybody know? I don't know of a CPU that has terabytes of RAM yet. What they did was they broke the problem up into about 100 different sub-games, and they just worked on those sub-games. In fact, I guess if you're clever about it, you can use cache memory when you get down to the river. Things are pretty close, and you know that using cache memory is faster than using main memory. You can take advantage of these things. A lot of these updates to these regrets are just simple additions, and you can just optimize the heck out of this, and I'm sure they did. Let's just try to solve some other games. I have two games that seem accessible. Suppose we do Omaha eight. Well, this is exactly the same structure as limit hold 'em. You just change the hole cards. So instead of having 47 choose 2 different river hands, you have 47 choose 4. That's like a multiplier of 82.5x on the original tree, so that's not that bad. 900 CPU years-- this is just 75,000 CPU years. If it were a matter of national security to get the exact solution to Omaha, the military could just do it in a few months. There's also bucketing you can do, by the way. Basically, what they did before this was solve a sub-game. If you bucket hands together and you say you have to play these hands the same way, that's basically a sub-strategy. You can consider a subspace of your strategies, x prime of x and y prime of y, and you just solve the x prime y prime game, meaning you bucket hands together, probably on the river, because that's where bucketing kind of becomes more necessary. And you solve that game, and you go, well, how optimal is x prime in the whole game? And if you're good at bucketing, it may be pretty close. If you're bad at bucketing-- like you put the aces in the same bucket as seven, five suited-- you probably won't get a great answer. So you need to intelligently design your buckets. Well, I guess there are also evolutionary things you can do to try to design buckets and see what things are close to each other. People who have familiarity with this know that this is kind of hit or miss. Another game that you can maybe solve is razz. It's definitely the simplest form of stud. Why is razz simpler than all other games of stud? There are only 13 different cards. The deuce of spades is the same card as the deuce of hearts.
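Those hold 'em counts are mechanical enough to check in a few lines. A minimal sketch, assuming Python 3.8+ for math.comb and the lecture's own approximations (four-bet cap, suit symmetries); the factor of 2 on the last line is my reading of the 6.5 trillion figure as counting both players' hole cards:

from math import comb

mono    = comb(13, 3)             # 286 one-suit flops
two     = comb(13, 2) * 13        # 1,014 two-suit flops
rainbow = comb(15, 3)             # 455 three-suit flops, ranks allowed to repeat
flops   = mono + two + rainbow    # 1,755 strategically distinct flops

seqs  = 9 ** 3 * flops * 49 * 48  # about 3 billion action sequences to the river
hands = seqs * comb(47, 2) * 2    # about 6.5 trillion river hand types
print(flops, f"{seqs:.1e}", f"{hands:.1e}")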
You can't-- well, you could make flushes, but they're irrelevant. So unfortunately there are 13 to the 8th power possible ways the up cards can come, because there are four up cards each. That's sort of the problem. The community information you have is a bigger set, and your trees just get bigger, because now you have one extra street. And you still have 455 combinations-- 15 choose 3-- of any three ranks as river hand types. So there are 2.4 quadrillion river hands. So that's a factor of 374 over limit hold 'em, but we think some of these nodes are pretty null. How many of you actually play razz? A couple of you. OK, great. It's a good poker class where people study razz. If you have a queen up and a deuce completes it, you're not really going to get into a raising war and make it capped on third street. Some of those nodes may be null. You can do some bucketing, perhaps. Razz is kind of more natural for bucketing, because you can think about what hands to bucket together. Maybe the king, eight, six, deuce is very close to the king, eight, six, ace, and the two strategies are close. And you can sort hands by rank order of cards or something like that. So this is 374. This is 82.5. Or you could apply for a grant and say we need x hours of CPU time. I don't know what the right strategy is, but these two problems are tractable. Let's talk about big bet games, because there's been some sort of discussion, even last night, about Snowie. A few people have tried big bet games, and there are problems. First of all, there's a continuum of bet sizes you can make. The Snowie solution just assumes three bet sizes. I can bet half the pot, I can bet the pot, or I can jam, I think. Maybe there's-- I can bet two times the pot. But the problem with that is that I think that's a little bit too coarse. The question is, if you solved that game, how close is that solution to the real game? And that's kind of an interesting question, but you don't even have a complete strategy. What if some guy bets a quarter of the pot, or 1.5 times the pot, something that's not on your list? You have to exploit-- and then it gets kind of weird, because my response to a pot size bet is to raise the pot again. All right, what if he makes it 1.1 times the pot? Is it right to raise the pot-- just raise 1.1 times the pot, or raise 0.9 times the pot so you get back to the same stack sizes, so you can do the same thing in the future? These are difficult questions. Even if some bet sizes are non-optimal, our full strategy needs responses to those bets. So simple approximations may work. I kind of feel this is kind of a tough problem, though. If you just play a game where you can only make rigid pot-sized bets, then you might get something actually interesting. But one of the things with regret matching is, if you actually have a lot of bet sizes-- suppose you say, OK, I'm just going to kill this problem, and I'm going to do 0.01 times the pot, 0.02 times the pot, 0.03 times the pot, and so on and so forth. The problem is now you have a lot of options which are really close in equity together, so this regret minimization is going to take a while. It's going to have to sort out really close events. And then it's going to have to balance your value bets with your bluffs and things like that. So even just trying to kill it by putting in a lot of bet types may not solve the problem for you. So beyond two players, three player games are actually kind of interesting.
This was addressed by the group in "Using counterfactual regret minimization to create competitive multiplayer poker agents." And this is a paper done in 2011 or so. And their programs actually finished first and second in the annual three player limit event. The first problem is that there's no guarantee of epsilon convergence. You're not necessarily within epsilon of a Nash equilibrium. The second problem is, do you just want to play a Nash equilibrium? There could be multiple Nash equilibria in multi-way games, especially in these proportional payout tournaments-- satellites where, say, two people get a seat. There are really nonlinear effects going on, and it could matter which of the collusive equilibria you're playing. In our book, Jared and I point out a game called the rock maniac game, which is a real poker game where players can use a simple strategy and ensure that you lose. A simpler non-poker version is a game where you play evens or odds with three players, but the odd man out wins. So suppose you and I are colluding against the third chump. What would we do? AUDIENCE: [INAUDIBLE]. PROFESSOR: Right, I would play one, and you'd play two. And the third guy could never win. There are situations which can come up in poker like that, but I think if there's no collusion and it's not a tournament, playing Nash equilibria usually turns out OK. I think that's sort of the argument they were making in creating these strategies. All right, here are the references. This took about the time I estimated, so questions? OK, let's just-- you had your hand up first. AUDIENCE: Well, the original strategy finds the Nash equilibria, if you're playing against someone who's trying to beat your strategy-- does it work if one of the strategies is probabilistic? Let's say two strategy trees-- PROFESSOR: Yeah, yeah, yeah. It does work with-- AUDIENCE: I choose one, but you don't always know which one I'll choose. PROFESSOR: Yeah, it works, because all of these strategies are assumed to possibly be mixed strategies. If you're not allowed to play 1/3 rock, 1/3 paper, and 1/3 scissors, then you're going to have to play a really bad strategy, and there are definitely times at which mixing is going to be necessary. So, yeah. All of these strategies have mixing. Yeah? AUDIENCE: What effects do you think this is going to have on limit hold 'em games? PROFESSOR: I don't know. I think pretty much before the solution came out, the big online players kind of knew that a lot of people were playing near optimal, and I think the game is kind of dead. What do you think, Mike? AUDIENCE: [INAUDIBLE]. PROFESSOR: Right. Too bad Matt doesn't come here. AUDIENCE: [INAUDIBLE] are already basically doing this anyway. PROFESSOR: Well, no. I mean, even if you have the strategy, you have to learn it. The problem is that, if you go to a casino and you play somebody who's a good limit hold 'em player-- because these types of strategies have been out for a while, they already play much closer to optimal than they did before. So I think this would have absolutely no effect on heads up limit hold 'em. It's already kind of-- yes? AUDIENCE: So can you talk more about different ways you can do approximations? You were mentioning earlier bucketing all of the different hands by the ranks. What are some other things we can do? PROFESSOR: There's an endless number of ways to be clever in bucketing. So bucket hand types together.
One kind of clever thing you can do is try to cut out the river entirely by just estimating your equity on the river. Of course, that's not going to be your showdown equity, because you may be forced to face a bet. So you try some sort of implied value of your hand. Let's see, what other bucketing things? I mean, in some games there's sort of a natural way of bucketing hand types. Like on the river in Omaha, you could just try to bucket the cards that actually play and ignore the other cards. The thing is that, when you do things like that, you're losing something we call card removal. Card removal, and blocking players from having the nuts, and things like that do turn out to be a pretty important part of the game theory optimal solution when you're getting down to the milli big blind kind of level. And if you don't think about card removal at all, then you actually have a strategy that can be exploited pretty easily. Actually, I talked about this yesterday. The thing is, typically when the pot is p and you're facing a bet of 1, you want to make them indifferent to bluffing. He's betting 1 to win p, so you want to call about p over p plus 1 of the time. If you don't call this much, he's going to bluff and take it. So that's sort of the thing. We're saying the bet is 1 and the pot is p. So if the pot is 10, and he bets 1, and he takes it more than 1/11 of the time, he's going to just bluff everything. The real problem becomes that, if you don't think about card removal at all, he can start bluffing hands with which he knows it's more likely you have a mediocre hand, or hands that block you from having a strong hand. One real example is in PLO. When there is a flush on the board, what's a good bluff? AUDIENCE: You have the ace of [INAUDIBLE]. PROFESSOR: Right, you have the ace of that suit. You don't have anything else. That's a great bluff, because you're blocking him from having a great hand-- you're blocking all of his nut hands and a lot of his really good hands. And he's much more likely to fold, because if you bet the pot, a lot of his hands he's going to have to talk himself into calling with, whereas with the nut flush-- oh, I have a natural call. Are you all in? I have the nuts? OK, I call. So that's why card removal is important. Yeah? AUDIENCE: So is my understanding correct that optimal [INAUDIBLE]? PROFESSOR: Yes. AUDIENCE: And has there been any study of optimal [INAUDIBLE]? PROFESSOR: Sort of like utility theory. In poker in general, it's kind of weird. People think a lot about that when deciding what tournament they should enter, what games they should play. But there hasn't been a study really optimizing your own personal utility within the games. The assumption is kind of like, well, I'm going to use all this cool utility theory to figure out what game I'm playing. As long as I'm playing the game, I'm just going to try to win the most money. That's sort of been the attitude, and I think that's actually correct for most games. In limit hold 'em, you need bankrolls of hundreds of bets. You're not going to try to optimize and try to win some fraction of a bet with your utility function by lowering the variance. That is an interesting question, because maybe-- I feel that, if there is some utility consideration-- like maybe in a tournament you feel your chips are non-linear-- maybe you are going to quit playing your marginal hands because of utility considerations. AUDIENCE: Like at the final table of major events.
They'll go beyond ICM to say maybe I won't coin flip for a $10 edge in a step up. PROFESSOR: I mean, if you use ICM, those utilities are already kind of calculated, but yeah. For example, at the final table of the main event, I'm not only using ICM, but I'm thinking, well, $4 million compared to $2 million is a much smaller step to me than $2 million is compared to 0 in my own personal utility. Like $0.5 million compared to $2 million versus $2 million compared to $3.5 million. So I need to optimize utility. I mean, yeah. I think that's kind of worthy of study. Yeah? AUDIENCE: What is it about the analytics of poker that makes it so popular with trading firms? And how does it-- PROFESSOR: Oh, OK. That's a great question. AUDIENCE: How do you use it professionally, all of this stuff? PROFESSOR: Well, I mean, I think poker is just kind of-- if you could teach traders one game, what one game would represent what traders have to know? Well, poker-- there are a lot of actors. There's incomplete information. That's one big thing. And you do have to do a lot of thinking about what your counterparty is doing. If he wants to trade against you, he puts up a bid or offer-- some of it is, why is this order here? Is he trying to get out of risk? Is there a big position he's trying to get out of, or do you have to be worried about these orders, and things like that? And also poker gives you sort of the skills to trade. Suppose you know something is worth $10. You're going to make a market around it. Knowing nothing, you might bid 9.90 and offer at 10.10, which means you're willing to buy it at 9.90 or sell it at 10.10. But maybe you know something about the counterparty. You may know the counterparty can be a better buyer than seller, or that buying is the risky part or selling is the risky part. That's kind of the quant side. Also, as a quant, doing poker analytics is very similar to the analysis we do in trading. A lot of this analysis-- how these strategies work, do these strategies really return what we think they return-- is similar to discussions we have about our trading strategies. I'm glad I'm able to talk to you about this, because if you're interested in doing poker strategies, you'll probably be interested in doing trading strategies, too. Any more questions? Yes? AUDIENCE: What about detecting deviation from optimal play-- let's say somebody goes from playing optimally to not playing optimally? PROFESSOR: Yeah, I mean that's a very interesting thing, and that's actually hard to determine, because that feels a little bit harder than this. It's like, here, I'm trying to figure out the optimal strategy, and I just play this, and whatever money comes to me comes to me. You open your arms. The money comes to you. The other thing is, oh, well, he's playing badly, so I'm going to go there and take his money. But then if I deviate from optimal, I'm also opening up myself to being exploited. So that's kind of hard. That's much more of a dynamic problem. When does he go on tilt? How long was he on tilt? What evidence do we have that he's on tilt? I know that the guys at CMU were looking into some sort of zero loss way to exploit your opponents, where you just figure out, when your opponents are playing badly, how much they've given up in playing sub-optimally, and then you go after it.
But you only open up yourself to, say, half the money he's given up, or something like that, playing badly. And the metric is-- so there's some sort of gaming algorithm you can use to do that, but yeah, that's definitely another field of study. There are a lot of interesting fields that can come out of poker. All right. I guess that's it. [APPLAUSE]
MIT_15S50_Poker_Theory_and_Analysis_IAP_2015
Poker_Economics.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, what we're going to do, we're going to start our story in what at the time would have been called the Northwest United States. We're in the first part of the 1800s, and we're really talking about this portion of it here. Although later on in our story, we'll actually cover this whole region here. This was a sparsely-populated, impoverished area. There were basically no roads or towns. There were lots of swamps, mountains, and rivers; land transportation was very difficult. The area was more or less under constant warfare from 1800 till about 1880. I'm talking about large scale armies in what would be called the Indian Wars, but also smaller scale raids of dozens or half a dozen people, and individual murders. It was among the poorest areas on earth. Who lived there? Anybody know? So early 1800s, 1810, 1820, in the Northwest United States? AUDIENCE: No one? PROFESSOR: Anybody? No, people lived there. AUDIENCE: People looking for gold? PROFESSOR: Gold miners were a little bit later, but sure, they would come along. There were some Native Americans, many of whom had been pushed from other places, right? The east coast Native Americans had been pushed in there. There were some Native Americans that lived there who were resisting that. There were also Natives being pushed in from the south by the people in Mexico. So we've got people being pushed in. How about Acadian Driftwood? Anybody here--? So the Canadian rebels, who were French-speaking, are being pushed here by the English. Whiskey Rebellion, right? The rebellious people in the United States are being pushed out here, escaped slaves, debtors. There was an expression at the time, GTT. If you didn't want to pay your debts, you wrote GTT on your door. It meant "Gone To Texas," and you headed over here. So we've got this area full of losers, violence, very little in the way of economic resources. A few years ago, somebody did a list of the 100 wealthiest Americans of all time. And they compared it-- they tried to estimate the wealth of these people, and they compared it to the total size of the economy at the time. 19 of the 100 wealthiest Americans of all time were in that Northwest area in 1850, and they made their money from about 1850 to 1880. So what happened? How did this poor economic place with no prospects for anything, the last place in the world you'd expect anybody to get rich, how did it generate such tremendous fortunes? And it isn't just money. I mean, you say, OK, well, some of them struck gold or oil or something like that. But listen to the names. It wasn't just robber barons like Rockefeller, Mellon, and Carnegie. It was great inventors like McCormick, Westinghouse, and Pullman. Innovators in other fields, such as Swift, Pulitzer, Hearst, Armour, Marshall Field. These people are household names today. I challenge anybody in this room to name any other business innovator anywhere in the world in the entire 1800s. OK, so we got 19 of them in this little area. And we know them today, right? None anywhere else. There were some innovations in other places, but here was clearly the place where modern business was being formed. So what was it that they had? Was it something in the water? Something in the air?
What did this region have that nobody else had? AUDIENCE: Labor? PROFESSOR: No, it was very sparsely populated. Labor was extremely expensive. And your laborers had to spend most of their time eking out a subsistence living, right? It would be very expensive to bring in food or supplies for them. No, labor was extremely expensive. Well, I'll tell you one thing they had: they had a game called poker. [LAUGHTER IN AUDIENCE] All right, you're laughing. I hope you won't be laughing at the end of this. Let's go back. The first written records we have from outside this region about poker come to us from about the late 1820s and the early 1830s. What they don't say is a lot more interesting than what they say. One thing they don't say is, people in this region played this game called poker, and here's how it works. None of them explain the game at all. That's kind of strange, right? Somebody comes and tells you, hey, there's this game people are playing here, but they don't tell you the rules. They didn't even describe it as a game. The one thing they were very clear about is it involved transferring large amounts of money. They also-- nobody said, oh, it's a variant of this game or that game somebody plays somewhere else. Nobody said, this is some game you might be interested in. Everybody already knew. I mean, you can read these things. It was common knowledge that people in this area played a game called poker. It was a very serious game played for very large amounts of money. It was played nowhere else. Nobody mentioned any antecedents. Nobody said it was brought in by the French or by the Indians or by the Persians or by any other group. It was just there. It was there in the area. A couple other interesting things about this game. First of all, we know that far more popular than poker at any period of time, and certainly in this period as well, were standard gambling games, variants of which are played all over the world by people of all classes-- dice games, faro, which was very popular at the time, wheel-based chance games. We also know, though, that the other kind of game people were familiar with is played by aristocrats-- things like whist or chess-- that don't necessarily involve gambling, although they can, and involve a lot of skill. Poker was neither of the above. OK, so let's start asking, what is there about poker that was different from any game that came before it? This might give us some clue as to why this was different. What was going on here? Ace beats king. OK, that doesn't seem like much today. That was pretty revolutionary, right? You could get executed in most countries in the world in those days for saying that. And by the way, in early poker, the ace was always high. There was no such thing as a low/high ace. So that tells you something. OK, the people who made this game, they weren't monarchists, right? They were thinking a different way. Here's another thing, though. The hand rankings are in order of rarity. The rarer the hand, the higher it ranks. And in early poker, we didn't have straights and flushes, so it's even more straightforward. But here's something kind of interesting. Here's something that changed in the game. So in the early poker, up until about the 1830s, this principle that the rarer the hand, the more valuable it was, was actually applied much more consistently.
So today, when you compare two hands that are of the same type-- two people each have two pair, two people each have a pair, two people each have a nothing hand-- you decide the winner starting from the highest cards on down. Right, so if I have two pair and you have two pair, we first look at who has the higher highest pair. Then we look at who has the higher lowest pair if that's a tie. But in early poker, it was reversed. And you can see, if you rank it the way we do today, we've inverted our principle. Now, the more common the hand-- like ace-high hands: there are about 500,000 of them out of the 1.3 million hands with no straight, no flush, no pair, no match. The ace-high hands are the most common, but they rank the highest. In poker up till about, like I say, the 1830s, we ranked it by the lowest card in the hand. So you first compared your lowest card, then your second lowest. And there you see we are true to the principle that the rarer the hand, the higher it ranks. Can anybody see the strategic difference this makes? Why is this important? Is it just completely arbitrary? What difference does it make? Let's say we did things this way in poker. What difference would it make to the play? What it means is aces are much less valuable. It's a lot like-- anybody here ever play lowball poker? In lowball poker, your best cards don't help you. The question is, what's your worst card? You talk about a lowball hand, you talk about the highest card in my hand that makes it. You can have four great cards, and a fifth card can completely ruin it. You can have ace, king, queen, jack and then a two, and you've got a two-low hand, and you lose to everybody else. In the modern poker, if you have an ace-- let's say we both have hands and neither one of us has any matches or straight or flush. If I have an ace, I have an 83% chance that I beat you. But if we rank them by low card, an ace only gives you a 56% chance of winning. An ace is much less valuable, because really the difference between ace, king, queen, and jack is very little, because it's your lowest card that's going to determine how strong your hand is.
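[If you want to sanity-check those figures, here is a minimal Monte Carlo sketch in Python. It deals two disjoint no-pair hands, ignoring straights and flushes as early poker did, conditions on one hand holding an ace, and compares win rates under the modern high-card ranking versus the early low-card ranking. The quoted 83% and 56% are the speaker's; the dealing procedure and trial counts here are my own assumptions, and ties count as non-wins.]

import random

RANKS = range(2, 15)  # 2..14, where 14 is the ace (always high in early poker)
DECK = [(r, s) for r in RANKS for s in range(4)]

def deal_two_nopair_hands(rng):
    """Deal two disjoint 5-card hands, redrawing until neither contains a pair."""
    while True:
        cards = rng.sample(DECK, 10)
        a, b = cards[:5], cards[5:]
        if len({r for r, _ in a}) == 5 and len({r for r, _ in b}) == 5:
            return a, b

def high_key(hand):
    """Modern rule: compare the highest card first, then on down."""
    return sorted((r for r, _ in hand), reverse=True)

def low_key(hand):
    """Early rule: compare the lowest card first (a higher low card wins)."""
    return sorted(r for r, _ in hand)

def win_rate_with_ace(rule_key, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = total = 0
    while total < trials:
        a, b = deal_two_nopair_hands(rng)
        if not any(r == 14 for r, _ in a):
            continue  # condition on hand A holding an ace
        total += 1
        wins += rule_key(a) > rule_key(b)  # ties count as non-wins
    return wins / total

print("ace, high-card ranking:", win_rate_with_ace(high_key))  # roughly 0.8
print("ace, low-card ranking: ", win_rate_with_ace(low_key))   # much closer to a coin flip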
Now, the biggest point I want to make about this isn't the subtleties of strategy when you rank hands this way. This is a game that was designed. This is a game that somebody thought about, articulated a rational principle for, and did what was at the time some pretty clever mathematics to figure out the ranking. This is not a game that evolved by long tradition. This is not a game where somebody in a court sat down and wrote the rules. This is a game that somebody designed, and they designed it for a reason. But actually, the card play in poker is pretty trivially simple, especially in straight poker-- it was called straight poker at the time. The way the game was played was you were dealt a card, you had a betting round, you were dealt a second card, you had a betting round. You were dealt a third card, you had a betting round, and then that was the last round. The last two cards-- at the time, they were called the turn and the river, just like in hold 'em today-- were dealt together, and there was no betting after the final card. I'll tell you why that's important in a bit. Another thing about the way the betting was done is there was no ante. There was no blind in the sense we know it today. The way the betting worked-- and again, this shows some very careful design on somebody's part-- the dealer dealt the cards. The person to the dealer's left was known as the age. The age posted a stake before they saw their first card, and they could post any amount they wanted, including zero. They didn't have to post anything. But the rules were a little different. On the first round-- and this is only on the first round; all the subsequent rounds, the betting is exactly the same as modern poker-- you were not allowed to call the age's bet. So let's say the age bets $1. You cannot call that. You can fold, or you can raise. And the minimum raise would bring the bet to $2. So this gives a lot more advantage to the age compared to somebody who posts a blind today, a blind that can be called. In a way, you can see the analogy between what the age does and what the small blind does in poker, except that here somebody has to double the bet in order to play, whereas in modern poker, with a small blind and a big blind, someone is forced to come in and double. The poker players at the time were very insistent. They said poker is not gambling. And the difference between poker and gambling is no one is ever forced to make a bet. You look at your cards, and you voluntarily make a bet that you think has-- they wouldn't use the word at the time, but we would now say-- positive expected value. The blind bet was empirically known to be a winning bet; posting it was an advantage. Now, mathematically, you can show that can't be true. But psychologically and empirically, it was true that posting the blind, with other people being forced to either double or fold, was an advantage. Anyone else betting, they could bet if they thought they would win. They didn't have to bet if they didn't. So let's talk about the betting in poker. There are a couple of things about it that are different from any other game that came before. The first one is that at the end of every betting round, everyone remaining in the hand has bet exactly the same amount of money. The hand is marked to market after every round. So ultimately, you're betting on who has the best hand at showdown, or who has the best surviving hand at showdown among people who haven't folded. But along the way, at the end of every round, the game cannot continue until everyone remaining in the hand has bet the same amount. This didn't last all that long. By the 1840s, 1850s, we're seeing a lot of fight against these very strict rules. People started adding antes. People started adding straights and flushes. People started adding betting after the final round of cards. People started adding all kinds of more complicated games-- draw poker, stud poker with some cards revealed. They started adding new kinds of hands. So the game starts to change a little bit. And R.F. Foster, who was probably one of the first people to really write a comprehensive history of poker-- and this was 100 years later, really-- explained what happened. So there is this conservative, old game, scientific, based on very rigid principles, that eventually evolved into the modern game. The modern game is a lot more fun to play. The modern game allows gambling, clearly. The modern game is really for a somewhat different purpose. But so far, we've talked about the cards and we've talked about the betting. We actually haven't talked about the most revolutionary thing in poker that, again, is like no other game that came before it. Most gambling games throughout history-- and remember, gambling goes back to human prehistory.
Virtually every culture has forms of gambling. But gambling is almost always done for either goods or cash. When credit was used, credit was provided by a trusted central counterparty. So somebody who organized the game might organize credit. Poker was never played for cash, never played for goods. Poker was played for what were called at the time "poker checks." And this is a distinction that goes all the way up to about the 1980s before it finally got erased. Even as late as the 1980s, when I was playing poker, there was a distinction, and people understood it. There were two similar things that were often confused. There were checks, and there were chips. And they looked kind of similar. A check had intrinsic value. Checks were used for cash. Casino checks were used for cash in Las Vegas. Nobody ever used cash. You just used casino checks for all your transactions. But they were also used for all different kinds of games. Chips are just markers. Chips are things you buy at the table, and you're supposed to cash them in when you leave the table. You're not even supposed to take them from the roulette table over to the craps table or something like that. So chips are just markers. They have no intrinsic value. There is a complicated story, which I'd be happy to talk to you people about sometime. The IRS came down, and some casinos, when they pushed organized crime out, got rid of checks. So right now, there's no such thing as, at least a legal, chip with a real, intrinsic value. They're just markers. If you show up to a casino with a $5,000 chip and you want to cash it in, they're going to ask you where you got it, prove that you bought it at a casino or won it at a casino. If you can't, they won't give you any money for it. But let's go back to the early 1800s. The way poker was played was with poker checks. Poker checks were markers. Often, they were made out of clay. People would make little disks out of clay, and they would put a thumbprint in it. They would put some identifiable marker. The key thing is that they were identifiable to an individual. You were playing poker with money you created yourself. And if you lost, other people would have these markers, and these markers would be claims on you. If you won, you had other people's markers. At the end of the game, people did something called ring clearing. You've played a game for a while, and you've got a bunch of checks in front of you. You're going to take the checks you have from other people and trade them for your own checks back. If you were a winner, you're going to end up with a bunch of other people's checks, and suddenly a bunch of people owe you money. Or if you were a loser, other people are going to end up with yours. But it's the winner's responsibility to collect from the losers. There's no central counterparty. There's no trusted party. You play. If you don't like a guy's checks, too bad. If you end up with checks that aren't good, maybe you can find somebody else to trade you something for them. Now, what you've done is you've created a form of money. It's this credit creation that is a really essential element of early poker. And let's talk about another financial institution from this period-- the soft money bank, otherwise called a wildcat bank. People created banks. And the way they created banks is somebody said, I have a bank, and I'm going to make loans. And I'll either print up some bank notes, some of which were extremely crude.
People even used markers, twigs, old tally sticks, things like that. They used anything they could find as bank notes. Or they just kept an accounting system. If you wanted to spend the money, somebody had to deposit it into the bank. If it worked, this generated a lot of economic activity. Everything was successful. The loans paid off. Deposits were honored. Everything was fine. If it didn't work, everything was worthless. If you add one feature to this, it becomes what most people think of as a bank. The one feature we're missing from this bank is what? Actual money, gold. Right. Now, this is kind of an interesting dichotomy in economics. To me, a classic bank is a soft money bank. Capital, to me, is an additional feature that gives it some credibility, right? It means two things. It means the person creating the bank is going to have some skin in the game, is going to take a loss if the whole bank collapses, if nobody ever pays back their loans. So it's a signal to show they have some confidence. Also, the cash they put in, the capital they put in, in theory is going to help people if the bank fails. It'll pay off some portion of the losses. In practice, it never does. In practice, the people who run the banks always get their capital out before anybody else. But a lot of the legislation on banks, a lot of the way people think about banks, is the opposite. They think of a classic bank as something that has 100% capital, and of fractional reserve as some kind of little extra thing you do to a bank. But if you look again at cultures back to human prehistory, you see almost all cultures have some form of this self credit creation, all kinds of things-- susus, tontines, [INAUDIBLE]. All over the world, we have these kinds of things, and they were useful in the American West. Well, poker was one of them. Poker was a way you could create a form of money, your own money. You could play a game. If you won, you picked up a lot of credit from other people, and you could use that to generate some economic activity. If you lost, other people had to employ you, right? They had to get their money back somehow. You didn't have any money. They had to find things for you to do to work in their businesses. And this is how a lot of business was created in the old West. Now we're going to move a little forward to around the 1840s. Anyone know what this is? It is the Chicago Board of Trade. This is actually around 1900, so it wouldn't be quite so fancy back in 1840. Futures exchanges, again, appeared in exactly the same geographic area as the game of poker, about 20 years later. Nobody invented it. Suddenly, these things started popping up all over the place. A financial institution no one had ever seen before, completely unlike anything in the past, none of them outside the region, ubiquitous inside the region, and nobody said they invented it. There was something in the culture, in the way people thought, the way people did business, that made this a very natural thing to do, even though it had never been done before. And the analogies with poker are pretty obvious, right? Mark-to-market. Every day, you're betting on the price of wheat in three months, but every day you settle up so that you've got the same amount of money at stake. Clearing-- again, the initial exchanges in the early days used ring clearing, exactly the same as poker. Later on, they went to a full clearinghouse that was a little more sophisticated and allowed people to do it.
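[Ring clearing, both in poker and on the early exchanges, is just bilateral netting of each other's markers. Here is a toy Python model of the end-of-game settlement described above; the player names and amounts are made up for illustration.]

from collections import defaultdict

def ring_clear(checks_held):
    """checks_held[holder][issuer] = value of issuer's checks the holder ends with.
    Each pair swaps each other's checks back; only the one-way difference survives.
    Returns net[debtor][creditor] = what the debtor still owes after clearing."""
    players = sorted(set(checks_held) | {p for held in checks_held.values() for p in held})
    net = defaultdict(dict)
    for i, a in enumerate(players):
        for b in players[i + 1:]:
            a_holds = checks_held.get(a, {}).get(b, 0)  # b's checks in a's stack
            b_holds = checks_held.get(b, {}).get(a, 0)  # a's checks in b's stack
            if a_holds > b_holds:
                net[b][a] = a_holds - b_holds  # b owes a the difference
            elif b_holds > a_holds:
                net[a][b] = b_holds - a_holds  # a owes b the difference
    return dict(net)

# Hypothetical end-of-game stacks: values of other players' checks each one holds.
table = {
    "Amos": {"Burr": 30, "Clem": 10},
    "Burr": {"Clem": 25},
    "Clem": {"Amos": 40, "Burr": 5},
}
print(ring_clear(table))
# {'Burr': {'Amos': 30}, 'Amos': {'Clem': 30}, 'Clem': {'Burr': 20}}

[Notice there is no central counterparty anywhere in this: what survives is a ring of personal IOUs, and it is the winner's job to collect on them.]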
But what's the purpose of these exchanges? What does this futures exchange do for people? AUDIENCE: To lock in the price. PROFESSOR: Like who? Who would want to lock in a price? AUDIENCE: The farmers. PROFESSOR: OK. Well, let's think about this. I'm a farmer in 1840. I'm about a two-week journey from Chicago over bad roads. Normally, I do what farmers have done since the beginning of private agriculture: sell to a crop buyer. I would go to the place where I buy my supplies, and there's a crop buyer who has an agent there, or that agent also comes by my farm every month or so to check out how the crops are going, because he wants to keep tabs on the crops. And also, I can lock in a price with him. He will buy exactly my crop. He will buy whatever quantity I happen to produce. He'll agree on a price now, or he'll set the price later-- whatever I want. I can deliver it to him, or for a slightly lower price, he'll come and pick it up at my farm. Now let's compare that with this futures exchange, a brand new innovation that's going to make my life better. I can take a two-week journey into Chicago. I can promise to deliver a set quantity of a set grade of wheat that I don't produce and that I can't be sure I will have on time. I don't know the quantity I'm going to produce, but I have to specify that. I have to put down initial margin. I don't have any cash. All my cash is tied up in my crop. Farmers only have cash after harvest. And I have to stay in Chicago every day to make mark-to-market payments-- and then who's growing my wheat? OK, so this makes absolutely no sense. Anytime you read a textbook and it talks about farmers using futures exchanges, you know they haven't spent half a second thinking about this. There were no farmers involved in setting up the futures exchanges. In fact, farmers were suspicious and have often tried to have them shut down. When farmers do use futures exchanges, they almost always buy the product. They don't sell it. So let's talk about what people really use these things for. Here is the canonical trade to think about when you want to understand futures markets. I'm a processor. And by the way, all the people who set these things up were poker players. I'm not kidding. Look it up: you find the names, you find all these people were poker players. I grind wheat, OK? I'm not going to use this for hedging. I can't use it for hedging, right? I buy wheat, that's true. But is my exposure to wheat going up in price or going down in price? Going up? OK, let me tell you two stories. Story number one: there's a sudden increase in demand for wheat. There's a big war in Europe, other crops fail in other regions. The price of wheat goes way up. What happens to my business? I'm making lots of money, right? They've got to grind lots of wheat. They're bringing all the wheat in from all the [INAUDIBLE]. Everybody grinds it, there's a shortage of grinding capacity, I can raise my prices. I'm rich. So in that way, I'm long wheat, right? Price of wheat goes up, my business goes up. Let's say-- sorry? AUDIENCE: So you're assuming that the price of ground wheat is directly correlated to the price of wheat? PROFESSOR: Yes. I'm saying the price of ground wheat is what went up. And because the price of ground wheat went up, my business is more valuable. And you hit the point exactly-- if that's the stuff I'm selling. Now let's do a different story. There's a crop failure in the area.
Price of wheat goes way up, right? Wheat is scarce. Who wants my grinding facilities? Nobody. There's an excess of grinding facilities. I can't charge a penny. OK, so I have no natural wheat exposure. I can't hedge my wheat exposure in this futures market. Also, for the same reasons I mentioned for the farmer, it's not a convenient place to do it. It's a type and grade of wheat I don't want, at a place I don't want. Also, the price of wheat has very little to do with my business. What I care about is my machinery operating properly, what's the price of fuel, what's the price of labor, what are the regulations on this stuff, what's the quality of the stuff I'm getting. I can lock in prices with suppliers and buyers-- either buyers of the flour or sellers of the wheat-- anytime I want. That's not the point of the futures market. What you want to think about is: I'm grinding wheat. I'm going to go to a silo, a grain silo. A grain silo is a guy who buys wheat from all over the place and sells it to people like me. I'm going to say, OK, I want these wheat deliveries over the next three months, so much and so on at these various times. I want it delivered to my grinding facility. I want exactly this type and kind of wheat. And he'll agree, and we'll settle a deal. Now I'm going to go to the futures exchange. I've just bought a quarter's worth of wheat from the wheat silo. I'm going to go to the futures exchange, and I'm going to sell a quarter's worth of wheat forward. What have I done? Well, now I have no price risk, right? Now if the price of wheat goes up or down, I don't care, because I bought wheat today, and I sold it three months from now. I have borrowed wheat. Now, what could I do instead? I could borrow money. I could borrow money and buy the wheat. But then I take two price risks-- I take the price risk of the money, and remember, this is an area where there's very little money around. These futures exchanges were invented in a place where there was very little gold and silver. There wasn't a good banking system. Bank notes weren't very trustworthy. So taking the risk of money was a big risk. And also, the price of wheat going up and down. The simplest thing is just to borrow wheat, which is what I want to do. And like any business loan, I never intend to repay this. If I'm doing a business loan, say to buy machinery, I take the loan. And when the loan comes due, I borrow again to run the business. The only time I pay back my business loans is when I shut down the business and liquidate everything and pay off the creditors. Same thing with this. I'm going to roll these futures contracts forward forever. I'm never going to take delivery. But what I've done is I've perpetually borrowed wheat. I've taken part of my business input, and I've borrowed it instead of buying it.
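[To see why the price risk drops out of that trade, here is a minimal worked example in Python. The miller buys wheat from the silo at today's price and sells the same quantity forward; the prices and quantities are invented purely for illustration.]

def hedged_pnl(spot_paid, forward_sold_at, final_price, bushels):
    """Buy wheat now, sell the same quantity forward: price risk cancels."""
    physical = (final_price - spot_paid) * bushels       # gain/loss on the wheat held
    futures = (forward_sold_at - final_price) * bushels  # short futures settles vs final price
    return physical + futures  # = (forward_sold_at - spot_paid) * bushels

for final in (0.40, 0.55, 0.80):  # wheat crashes, drifts, or spikes
    print(f"final price {final:.2f}: P&L {hedged_pnl(0.50, 0.53, final, 10_000):+.0f}")
# Always +300: the 3-cent carry is locked in; the miller has effectively borrowed wheat
# rather than borrowed money, whatever the price of wheat does.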
So one thing is the futures exchanges create a tremendous amount of credit. But they do something else, too, that's kind of interesting. Anybody here know how to take flour and turn it back into wheat? Anybody here know how to take August wheat and move it back in time to May? All right, I have something very surprising to tell you. For 175 years, the Chicago Board of Trade here has been quoting prices on both of those services. A futures exchange quotes prices on services nobody's ever thought of. It opens tremendous scope for innovation. Let's say I want to build a bridge. And when I build this bridge, it's going to divert wheat that was going to Saint Louis. It's going to get diverted to Chicago, because now it's going to be cheaper to move it into Chicago. I can hedge that. I can sell those transportation services on the futures market by buying Chicago wheat and selling Saint Louis wheat. I can do calendar spreads. I can do grade spreads, cleaning spreads-- all of these things are a way of buying and selling all the services involved in an agricultural processing business. And this turns out to be a far more efficient way to organize things. It's got its internally generated credit. It's got far better information flows. And this is what touches off a tremendous explosion of business activity and business innovation that goes throughout the world. Now, one of the things that's kind of strange about futures markets is they were only used for agricultural commodities. Granted, that was a much bigger part of the economy than it is today, but in the global economy, this was not a big deal. There were much, much more valuable commodities that never went to futures exchanges. And commodities weren't even the most valuable things. When things really took off is in the 1970s, when people took the same idea and moved it to financial products. But my story is about poker. One of the things people sometimes say is that futures evolved from to-arrive contracts. Now, to-arrive contracts have been around longer than we have written records. In ancient Mesopotamia, in the earliest writings that we can still read today, we find that these to-arrive contracts were common. A to-arrive contract says, essentially, I will sell you 10,000 bushels of wheat at $0.50 a bushel as soon as wheat comes to the city market. OK, it's a price guarantee. It's not a delivery guarantee. I don't tell you when you're going to get it. I don't even guarantee you will get it. I'm just saying, when it comes, this is what I will sell it to you at. The largest to-arrive market in the United States was in Buffalo. OK, everybody remember all those stories about the fortunes won and lost on the Buffalo to-arrive exchange, the fistfights, the corners? The people who made their fortunes there and never went on to do anything else? Exactly. To-arrive contracts are run by quiet commission clerks. Futures exchanges are populated by tough, brawling innovators who often make fortunes or lose fortunes and go on to do dramatic business activity. There's no connection between the two. Now, can anybody tell me which ones of these are poker games? AUDIENCE: Omaha. PROFESSOR: Sorry? AUDIENCE: Omaha. PROFESSOR: Omaha, Texas. Yep. AUDIENCE: Chicago. PROFESSOR: Chicago, yes. This one you guys might not have heard of so much, but there is a Cincinnati. It isn't played much anymore. There is no poker game called Buffalo. There is no poker game named after any place except places where, if you lose all your money in the game and drown your sorrows by jumping in a river, you float down to New Orleans. Even today, poker is very, very strongly regional-- it's a regional attitude. It's an extraordinarily explosive, innovative economic attitude, and it has never really seeped out beyond the place it was born. Now, that may no longer be true with internet poker. The question is, is the soul of poker alive? Internet poker has this economic innovation and freedom and self credit creation.
Is this something that's going to spread to the whole world, or has poker somehow been neutralized, so that when it comes to a computer and becomes virtual, it's no longer got that soul? It's something we're going to find out in the next few years. Now we're going to jump forward to me. I was born in the 1950s. I was raised in Seattle. And one thing you have to understand is two people can be raised in exactly the same time, exactly the same environment, and have totally different ideas of what it was like. I think most people would say, if a movie was made of my childhood, hey, that was an idyllic childhood. Your dad was a professor. You didn't have so much money that you had affluenza and were wrecking cars or things like that. But you were never embarrassed that you didn't have clothes for school, you were never hungry, anything like that. You were treated well. It was a suburban neighborhood. It was a pretty place. You had lots of stimulation, all of that stuff. But I hated it. I was oppressed by lots of things. I believed, I sincerely believed, that the world was going to end in nuclear war before I was in college. It just seemed like it clearly was likely to happen. I was interested in math and science, but all the math and science was defense related. Big government projects to kill people were the only way you could get funding for things. More than half the world was in the grip of brutal totalitarian dictatorships, and no country had ever emerged from communism to freedom and prosperity. It seemed like the entire world, even the free world, even the relative democracies, were run by paranoids and total incompetents. The economy was terrible. Seattle was a few years ahead of the rest of the country, and we had sort of slipped into that '70s malaise back in the '60s. A friend of the family was an aerospace engineer-- in fact, he went to MIT. He was the world expert in materials for supersonic wing design. He was fired because nobody wanted to build supersonic planes, and he was driving a cab. Another thing about this: Seattle was kind of forward looking in terms of economics. We were getting economic malaise before the rest of the country. In another way, we were kind of a throwback. It was more like the '50s than other parts of the country. And it was this really weird social dynamic. OK, I don't want to tell you the neighborhood I grew up in was any worse or weirder than any other neighborhood. But there was a certain percentage of alcoholics, of child molesters, of wife beaters, of drug addicts, all that stuff. And in the '50s, and in Seattle by the time I was growing up, nobody cared about it. Nobody would ever talk about any of that stuff. If a man cut his grass and brought home a paycheck, he was a good guy. He could do anything else and nobody would talk about it. But if you didn't-- and we would have this. It would happen. Like a neighbor, the guy would lose his job. They would quietly move away, and nobody would ever talk about them. It was just weird. And you got the message, OK, the economy was very insecure. If you could earn money, everything was fine. You didn't have to worry about anything else. But if you couldn't earn money, it was unspeakably bad. Well, I was a shy kid. I was introverted, awkward. But I liked looking in the back of a newspaper at the numbers. The patterns in the numbers really fascinated me. And I worked out ways, and I would bet money on horses, I would bet money on other sports.
And I found out I could win doing this. I also found out-- and this was true throughout the American West at the time. I couldn't really find a good picture of this. This is just something I found on the internet that's roughly equivalent. Taverns-- in the back room of the taverns or the basement of the taverns, there were these poker games. And I'd go in and I'd play, and I'd win. And more important than the fact that I would win: I could walk in, I could collect the money, I could walk out. And this was enormously liberating to me. It said to me, OK, you don't have to get a job. You don't have to go to college. Anywhere you go in the world, you can find a poker game. You can win money. And when you're a good player and people know it, even if you run out of money, people lend you money. People stake you. You also start getting into this network. And this was something I had not expected at all. I had gone there thinking, OK, I'm going to win some money, and I'm going to prove that I can get this monkey off my back and I don't have to worry about this anymore. These people-- the people I was playing with didn't look like this. One or two of them might have looked like this. But there were policemen, there were sailors, there were clerks. You know, people. And they end up owing you money. Right, if you're a good player, these people end up owing you money. And I sort of had a half idea of what was going on at the time, but later in life, as I've played in more places, I've figured out the system a little better. I had a purpose for these people. I had two purposes. One is, because I was a good player, I protected their game. Let's say somebody showed up, a really good player who wanted to take all their money-- well, they had somebody there who was good. And by the way, being a good poker player in those days didn't just mean good card play. In fact, I'm sure anybody here would have no trouble cleaning up in terms of pure card play in this game. But you had to be able to spot cheating. You had to be able to figure out who might be a risk for arrest. You had to figure out who might get violent. I mean, there's a lot of social skills involved in this. So by being a good player, you protected the game, but you also connected them with a bigger network. So remember, a lot of this is about credit creation. A tremendous amount of economic activity goes on here. There's doctors, lawyers, police, mechanics, whatever-- people exchanging services, an underground economy, people who couldn't make it in the normal economy. Let's say you're a lawyer, and you're a really bad lawyer. And you can't get any business. You hang out a shingle, nobody's going to hire you. Your resume isn't very good. There's too many lawyers around. But you know, if you owe somebody money at a poker game and they want you to write a letter and do some work for them, whatever, you can do a little legal business on this stuff. And by connecting into the broader network in the city, you connect these people in. Some of these people had really dropped out of normal life, or they weren't getting paychecks. They were subsisting entirely on the underground economy. And this was a very important organizational tool. Other people liked to keep one foot in that world-- maybe a little side income, maybe a little fallback. We weren't doing big organized crime, but there'd be a sailor in the game who could bring in Cuban cigars.
There was a guy who was maybe a bathroom attendant at a fancy restaurant who could sell them, and this kind of stuff could get organized in a game like this. The poker was very important because you actually spend a lot of time with these people. You learned a lot about them. You couldn't fake it, in a way. An undercover cop could show up for two days or act something like this, but they aren't going to be playing poker every day for years. So I'm kind of moving up in the Seattle poker network, and I come to Boston. I went to Harvard. And again, remember, I'm shy. I'm awkward. I'm from the West, a little overwhelmed by all this stuff. But I walked in with a network. It shocked me when I got here that this network was seamlessly translated across the country, that I knew police, I knew poker people of all different ranks and stations of life. I got into poker games at Harvard itself. People talk about going to college to get contacts. Let me tell you about that. I had three roommates. I love my roommates. These are great guys. I'm not saying anything bad about them. But in terms of useful contacts for me in life, well, one of them's a corporate lawyer, one of them's a TV producer, one of them's a law professor-- all great things, that's nice. Never really a lot of use to me in terms of advancing my business interests or whatever, and not exactly hard things to break into, right? You want to know a corporate lawyer? Well, it's pretty easy to know a corporate lawyer. Poker games at Harvard: I played with George W. Bush, who went on to be president. I played with Bill Gates, Steve Ballmer, Scott Turow-- people who are celebrities, politicians, rich people. Those are the poker connections. And a poker connection is very different. You're not playing poker with your friends. A poker connection is, there is a business relationship in there that can be extremely useful. My whole life, my whole career, has been informed by poker networks. Now I'm going to zip through a few things. I'm playing in Boston, and a guy shows up. He actually managed a card room near Stanford University. He was-- again, it's this network thing. So I'm playing now in a pretty senior game in Boston-- some of the best card players in Boston are here, pretty high stakes. And they invite visiting pros from other parts of the country to keep the network connected throughout the country. And this guy said to me-- well, I was good. I played well that night. But it wasn't just that. He said, you know, you're a kind of strange guy. You've got this sort of mathematical poker sense. You think about it in theoretical terms, whatever. There are a lot of guys like you out in Gardena, California. That's where the best poker in the world is being played. He said, I'm a good player. I'm a national pro. I go around from city to city, sitting in games with the best players in the city. And I win in those games, but I can't turn a profit in Gardena. You should go to Gardena and see if you can match yourself up with the best. So I go there. How many people have heard of Gardena, California? Yeah. If you're interested, there's a movie called California Split that's got some good scenes. It's a stupid poker movie for the most part, but it has some actual scenes shot in Gardena at that time. That was the best poker in the world. And it was the network theory I was talking about, on steroids, because of a few things. First of all, this is in the late '70s.
Marginal tax rates have gotten really high, regulation has gotten crazy, the tax code is incredibly complicated and corrupt. Community property, big thing-- a lot of these guys were wiped out in a divorce. I mean, that was something that just didn't happen 10 years earlier. A lot of them had tax liens. So you have this whole group of people-- they tended to be much better educated than the guys I was playing with in Seattle, or even really the guys in Boston. They were smart. Not only were they broke, they were financially toxic. Any money they put in a bank was getting whisked away by somebody. If they were walking down the street with $1 in their pocket, somebody could grab it. A poker chip, not so much, right? Poker debt-- if they lent some money to somebody in the poker room, nobody could collect that. Tremendous amount of very active underground economy going on. I go there, and the first day I'm there, a guy comes up to me. He manages a motel. He used to own it, went bankrupt. Now the bank pays him to manage it. If you're a poker player, he'll give you a room to stay in. You have to pay-- you slip him a little bit of money, much less than the rent would be. It's kind of furnished with broken-down stuff other tenants have left. But poker players don't care, right? Poker players are there, they go, they sleep on the broken couch, and then they leave. Maybe take a shower if it's the first of the month or something. And you'd stay there for three or four months, and eventually the owners would kick you out. And he'd just say, oh, well, the guy never paid any rent, and he would cover all this for you. And then you just move to the next motel down the line, and he moves somebody else in. You wanted to get your car fixed, you wanted to get a lawsuit filed, you wanted to get your operation done, whatever-- there were people in the room to do it, all of it under the table, and you could borrow the money. And all of the people who invented modern poker theory-- the David Sklanskys, Mason Malmuth, Mike Caro; I don't know if these names mean anything to you, but these were the first people who actually sat down and wrote poker theory books-- this was the only place where people were really thinking about it. Las Vegas, they hated poker. Casinos hate poker. The reason casinos hated poker is somebody in the building was losing money and the casino wasn't getting it. They either wouldn't have poker, or they'd stick it next to the slot machines under the stairs. They'd open and close the room at random intervals. The one thing they did was the World Series of Poker. It was a pure casino publicity stunt. It had nothing to do with poker in those days. Only in the late '90s and early 2000s, with lots of people being brought in by the internet, with the poker boom, with television, did casinos really come to terms with it and start liking poker. But Gardena was pure poker, and that's where the good stuff was done. Now, I'm making my living by poker. But I have broader interests. When I came to Boston for four years, I was shocked at how shoddy I felt most quantitative analysis was. I thought, people are doing this work. They're teaching this stuff. They're advising the government. They're running the government. They're writing these papers, this and that. They have terrible data. They have really stupid analysis.
And the biggest thing, the way I kind of encapsulated all of this, is: none of them would bet a nickel on their own results. Every single one of them, if they were buying a car, would apply a lot more serious commonsense analysis to get the right car than they do recommending a plan for the government to take over the steel industry, or something like that. And a lot of other people felt the same way. And we had both philosophical and empirical reasons to be sold on the idea of quantitative analysis. We were confident in our ability to make bets and win. One of the things about hanging around with gamblers: there's a lot of bets. Gamblers are really nasty people, a lot of really bad stuff. One of the flip sides of that is you don't have to be nice, which takes a lot of pressure off. But they do take things seriously, right? Anything you say, somebody can say, put some money on it. If you won't, you're just an idiot loser chattering. You start really thinking about things if you actually have to bet money on any opinion you give. A lot of silly, chattering conversation never happens if people have to bet on everything they say. Anyway, the natural thing to do is to say, OK, I want to prove that my methods work. And the way I'm going to prove my methods work is I'm going to go to places where people gamble, and I'm going to prove that I can win. Now, all of us had read Ed Thorp's Beat the Dealer. Do people here know Ed Thorp? OK, you should. You should read his books. You should meet him. He's still alive. He was a mathematics professor. He invented blackjack card counting. He managed to beat virtually every other casino game. And he also either invented or perfected almost all of the quantitative hedge fund strategies that people use today. He was one of the most successful hedge fund managers for many years. So one thing we did is we wanted to go and beat casino games. And I'm just going to mention this very briefly, but roulette, to me, was the one that really changed the way I think about a lot of stuff. It's a very, very important lesson for people who have academic statistics backgrounds. How do you beat roulette? OK, well, this is Ed Thorp's story. Ed Thorp was thinking about this. And he was hearing this debate, and some people said, you should [INAUDIBLE] You should just record the patterns of the wheels and see if you can notice patterns-- like the wheel's a little weighted, what number comes up a little more than the others. But other people said, no, that's impossible. The wheels are too good. And Ed had a really remarkable insight that not enough people know about. He said, you can win money either way. If the wheel's broken and 13 comes up a little more than it should, you can bet on 13. That's easy. But if the wheel is so perfect that every number comes up equally often, it must be machined to perfection-- then you can use a little physics. We had some mechanics up on the board when I came here. Use Newtonian physics, and you can figure out where the ball is going to end up. So it's a lot of work. He did it with Claude Shannon-- again, we're back to MIT-- one of the fathers of information theory. And they sat down, and they worked on roulette. And if you study this problem for a bit, you pretty quickly find out how things work. The way roulette works, they spin a wheel in one direction, and they spin a ball around the outside of a bowl the other way.
OK, everything is very Newtonian until the ball comes away from the edge. It is very easy, with a little bit of electronic aid-- which at the time was legal, and is now illegal-- to tell what number will be under the ball when the ball leaves the edge. Now, there's a lot of bouncing and banging around between that and when the ball comes to rest. And that's pretty chaotic, essentially impossible to predict. So you have this perfectly predictable section, and then you have this chaotic section. But here's the insight. The predictable section, you can calculate. You can say, when the ball goes under the edge, number 17 will be directly underneath it. And the chaotic part cannot be uniform on the wheel. You can say, if 17 is under it when the ball comes down, here is the distribution of places the ball is going to end up on the wheel-- and it's nowhere near random, nowhere near uniform. So you can make good bets. Now, if you actually want to do this, you have to go several layers deep. You have to keep refining this notion. But I just want to focus on the big picture. A lot of statistical theory, the basic theory of statistics, was based on dice. That's what Nassim Taleb calls the ludic fallacy-- people trying to create randomness. The real world is nowhere near so random. Even when people try to create randomness, even in a casino, they can't do it. And the reason they can't do it is, if you build things really, really precisely, they're predictable. If you build things kind of loose and sloppy, they have non-uniform patterns. What you can't do is build a device that's both. It exceeds human capabilities. And when we're talking about the practical randomness you see in the world-- in the stock market, in politics, in war, anything you want to talk about-- people are way too sloppy in modeling things as random. Whenever somebody says this thing is random, you say, I'm going to take a hard look. And I'm going to find little pockets of predictability that I can calculate. And in between those pockets of predictability, I'm going to find patterns that are non-uniform. And what I'm going to end up with is a system where people are saying, you know, you're obsessing about data and data quality for things that don't really matter very much. Nobody else thinks these statistics are important. Why are you spending all your time cleaning data for something that's far removed from the essential economics of this problem? And they're also going to say, here are the really important things, and you're just waving your hands. You're not paying attention. You're making criminally reckless or crude assumptions about those things. But what they don't understand is that's exactly how you beat it. That is what you do. When you figure this stuff out and really come up with something that will understand the system and make a profit, it will look like that to outsiders: you're focused on stuff that doesn't matter and you're ignoring the stuff that does matter. Put another way, the stuff you think matters doesn't, and the stuff you don't think matters does. And when you do enough work on this kind of stuff, that's how you win.
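[Here is a minimal Python sketch of that structure, with a made-up scatter distribution standing in for the chaotic bounce: the Newtonian phase predicts the pocket under the ball at the rim, the bounce is non-uniform, and betting into the fat part of that distribution has a large positive expectation, while a naive fixed-number bet keeps the usual house edge. Every number here is an illustrative assumption, not measured wheel data.]

import random

# Assumed (illustrative) bounce distribution: from the predicted rim pocket,
# the ball ends up 5 to 9 pockets ahead with these probabilities.
SCATTER = {5: 0.10, 6: 0.20, 7: 0.30, 8: 0.25, 9: 0.15}
WHEEL, PAYOUT = 38, 35  # American wheel; a single-number win pays 35 to 1

def spin(predicted, rng):
    """Final pocket = predicted rim pocket plus a random non-uniform jump."""
    jump = rng.choices(list(SCATTER), weights=list(SCATTER.values()))[0]
    return (predicted + jump) % WHEEL

def ev_per_unit(choose_bet, trials=200_000, seed=1):
    """Average profit per unit bet for a strategy that maps prediction -> pocket."""
    rng = random.Random(seed)
    pnl = 0
    for _ in range(trials):
        predicted = rng.randrange(WHEEL)
        pnl += PAYOUT if spin(predicted, rng) == choose_bet(predicted) else -1
    return pnl / trials

# Physics bettor: bet the modal offset from the prediction. EV ~ 0.30*36 - 1 = +9.8
print(ev_per_unit(lambda p: (p + 7) % WHEEL))
# Naive bettor: always bet pocket 17, ignoring the prediction. EV ~ -2/38 = -0.053
print(ev_per_unit(lambda p: 17))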
Now, the thing about the people who did blackjack card counting and roulette and baccarat and craps and worked on all those other games-- what happens when people find out that's what they're doing? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, it depends on the time and the place, whether you're buried in the desert or just warned off or whatever. But basically, you have to fool the casino. You're taking money from the casino. The people who stuck with this hate casinos a lot more than they like money. They tend to be antisocial people. You don't need any social skills, right? I mean, if you've seen some of the movies about the MIT blackjack team-- Bringing Down the House, whatever-- you see some wildly over-the-top play acting. That isn't what most of these guys were. Most of these guys were quiet. They went in, the casino's got a game, and you just go in and play, but you have to stay under the radar. You can't let the casino notice. Now, people also looked at another field: sports betting. Now, sports betting-- this isn't frequentist stuff. This is Bayesian stuff. It is a lot easier to predict how people are going to bet than how a game is going to come out. If you wanted to predict a game from first principles and figure out what the proper point spread was, well, you've got a big job ahead of you. It can be done. People have done it and gotten some successful results, but that's a big job. Here's an easy observation that was enough-- in the 1970s, this was enough to make money. The Los Angeles Lakers are a glamorous team. They were at the time. Los Angeles is a big betting city. When the Lakers play at home, there's a lot of money coming in and betting on them. The point spread is going to be too favorable. Bet against the Lakers at home, right? You don't need a genius, don't need a math PhD, don't need a computer. You can just figure this out. And patterns like this are very easy to catch, but they're based on understanding people. So the malcontents, the introverts, the autistic people, they all went here. The people who like people went into sports betting. Now, what happens if you're sports betting and you're successful? What happens? Well, certainly in those days, they hire you. Right? They want you. Great, hey, you're winning? We want to take advantage of that. We'll pay you a salary. You bet for us. And then the way they would pay you, by the way, is they would let you make bigger bets. And so you become part of the organization. And pretty soon, you're running your own bookmaking operation and so on. So these are social people. These are Bayesian people. One group is betting on frequencies; the other is betting on people. Both of them are learning skills and techniques that nobody taught in a classroom, that were generations ahead of what statisticians were doing in academics, what people were doing in econometrics, anything like that. These people learned it because they had to. It only worked if you were right. You're betting every day. The smartest people in the world are spending every waking moment trying to find a way to beat it.
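[Here is the Lakers observation as a minimal expected-value sketch in Python. The 1.5-point shading, the normal margin model with a 12-point standard deviation, and the -110 pricing are all assumptions for illustration; the point is only that a small, systematic bias in the posted line is enough for a steady edge.]

from statistics import NormalDist

def ev_fading_inflated_line(shade=1.5, sigma=12.0, price=110):
    """Bet against a team whose posted spread sits `shade` points past the true line.
    Game margins are modeled as normal around the true line with sd `sigma`;
    the bet risks `price` to win 100 (standard -110 pricing)."""
    p_win = NormalDist(0.0, sigma).cdf(shade)  # cover probability for the other side
    return p_win * (100 / price) - (1 - p_win)

print(ev_fading_inflated_line())  # ~ +0.05 per unit risked, under these assumptions
# For comparison, break-even at -110 is 110/210 ~ 0.524; the bias gives ~0.550.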
I did some of both of these. I also did some poker. I was mostly a poker player. Poker is kind of in between. OK, you've got some card-play kind of thing, shuffle reading, things like that. But you've also got to know something about people. Not as much as a guy doing sports betting-- you don't have to predict the actions of thousands of people. But you do have to be able to look around at the people at the table and figure some stuff out. You've got to be able to get invited to games. You've got to be able to collect from losers. You've got to be able to avoid arrest or getting cheated or beaten up and stuff like that. So we were kind of in between. So a lot of people I know from those days-- this was the '70s, '80s, early '80s-- a lot of people stayed at it. But a lot of us had honed our skills and figured, OK, now we think we know something. I've been playing poker since I was 14. I've had moderate success in beating casino games and sports betting. I've had some really strong success at poker. I have some confidence, right? I don't just think I know something. I know something. And the reason I believe it is I've gone to the places where you test that stuff and walked away with people's money. So a lot of us went into finance. Now we're talking about the early '80s. The people who liked casino games, they liked secretive little hedge funds. They wanted to invest their own money. There were only a few wealthy investors. As soon as they could, they wanted to pay off their investors and just be by themselves. And some of this really brilliant stuff [INAUDIBLE] fairly narrowly focused. They had to pick some narrow niche kind of thing and do it. The Bayesians, these people were naturals. These people went into big bank stuff. They knew people. They knew businesses. They were bank executive types. And by the way, in these days I'm talking about, you left math off your resume if you wanted to apply for a Wall Street job. I'm not kidding. People thought if you knew math, it'd be like you wanted to go to the NFL. You know, the NFL, they don't want you if you've got a PhD in math. It's like they don't want smart people. Wall Street did not want people who knew math. Some of them understood that smart people would be dangerous. Others just thought that anybody in math had to be ivory tower and couldn't possibly know anything. But still, once you got there, these skills were incredibly useful. If you'd spent five, six years betting sports successfully, you knew a lot of stuff about markets and businesses and how to run things with people that some of these other people would never learn in their whole lives. These people didn't have to get a job, but their problem was, could they find somebody to trade with? Could they get people to broker for them, to trade with them, and so on? So they had a lot of problems with that. Me, poker players, people like me, we could get jobs. We didn't want high-level executive jobs, though. We wanted to run some little department under the radar screen. We'd run a quant trading team. We'd do some pairs trading. We'd do some quant stuff. We'd do structured products. We did a lot of that kind of stuff. We were used to networking-- we were good at that-- but under the radar. We weren't good at showing up in a suit and tie and being nice to bosses and things like that. We weren't good at raising money, which is why we had to go hook up with a big firm. We liked the fact that a firm would give us trading capital, and the firm would give us set-up relationships, so we didn't have any problem opening up trading lines and things like that. But we basically got to keep what we made, or a portion of what we made, which is the way we liked to do it. This is what really revolutionized Wall Street. The thing I tell people, having lived through this from the early '80s to now, is that people don't understand how much finance has changed.
An awful lot of finance is written as if there's some continuous history, some minor technical innovations-- as if electronic trading is just a little bit faster way of people yelling at each other in a pit. Well, it's not true. The entire fundamental technological basis of finance has changed, has completely been redone. I use the analogy of a digital camera versus a chemical camera. They kind of look the same-- or they used to; now they're in your phone. But a few years ago, a digital camera looked kind of the same. It runs on a battery. It's got a flash. It's got a lens. It's got a shutter button. People use it for the same thing. They take pictures of their friends, their vacations, their parties, things like that. But the technology inside is entirely, totally different. If you want to make a camera, you hire completely different people, use completely different processes; the theory is completely different. Well, that's the difference in finance. And the people of my generation, the quants of my generation, are the ones that built that. And most of it has a lot more of its genetics coming from sports betting and casino games and poker than from economic theory of 1970s and 1980s vintage. For those of you who are interested in this stuff, I put a few books here. I did these in alphabetical order of title. I do have a couple of my own books here. But Beat the Market, by Ed Thorp-- just read anything you can by Ed Thorp. A lot of it's available free online. You've just got to. That's a guy who understands stuff. James McManus, a friend of mine, wrote Cowboys Full. Some of you may remember James. He made the final table at the main event of the World Series of Poker and wrote a best-selling book about it, which is a lot of fun, too-- Positively Fifth Street. But Cowboys Full is a history of poker, if that's what you're interested in. It's really the only good history of poker. This is a really interesting book, The Economic Function of Futures Markets. For about 100 years, people wrote nothing but nonsense about futures markets. And then this guy came along-- he had a PhD-- and wrote a thin book around 1990 that absolutely explained it. And it's funny, because after reading transparent nonsense about futures markets all the time, this guy wrote a book that's logical. It makes sense. He's got a story. He's got empirical evidence for it. He nailed it. He explained what futures markets are. And nobody ever paid any attention. I mean, nobody ever cites it. Nobody ever reads it. Whatever. But if you want to know futures markets, this is the guy who explains it. Fischer Black-- by the way, this is the hardback, which has a blue cover. If you buy the paperback with a black cover, it has a foreword by me. So I go for the paperback. But this is really interesting. If you're interested in the time when I was at Harvard and hanging around MIT and talking to people who were arguing about this kind of stuff, Fischer Black was a very important part of that. I knew him pretty well in those days. And there's a lot of really interesting stuff about how logic and rationality and mathematics actually came into finance. It didn't come in naturally. It was some eccentric thinkers-- Fischer Black was nothing if not eccentric-- who did it. And this book explains it. It's by a guy named Perry Mehrling. Perry Mehrling's a nice guy who wrote another good book, The New Lombard Street, that's very interesting about modern finance.
This is a book by Daniel Usner, who I do not know. This kind of covers the prehistory. This tells you what the world was like in the Northwest United States before poker came in. It goes up to, I think, 1791 or so. So it's a pre-history. But it's really fascinating economics. There was a really fascinating economy in this part of the world. A pre-modern economy, but it had within it more of the design of the modern economy than if you'd gone to London or New York at the same time. This is a book by another guy I don't know-- I actually talked to him a few times about this book when he was writing it-- but this is More Money Than God. It's an excellent story of the early hedge funds, a lot of these people who were coming from these sorts of backgrounds, bringing some mathematics into finance. It doesn't have a tremendous insight. He doesn't get into the strategies or the intellectual ideas behind the things, but he'll tell you the stories of the people and what happened and stuff like that. Two books that I-- I sort of did this as I was coming up with this and I just pulled up some books. But there are two that I left off that I was thinking on the train ride up I should have put on here. One of them is called Poker Faces by a guy named David Hayano. He's another guy I knew in Gardena. He was a player in Gardena. He was also working on his PhD in sociology. Actually, he called himself an autoethnographer. So he was an ethnographer who studied himself. Anyway, he wrote a book on Gardena and the poker economy and what it was like. I'm actually in the book. But it really is a great book, and it really tells you what the poker economy was like in the '70s. I think that was the last gasp that takes us back. It was a lot weaker than it was in 1840s Chicago. So it wasn't as important to the economy, but it was still there. You could really see a lot of the relations, how things worked, really were there. I think that's kind of gone now. You really don't find that today. Another one is by a friend of mine, Professor Phil Tetlock at Wharton. He wrote a book called Expert Political Judgment. And that tells you a lot about the thing that drove me away-- basically how bad experts are at predicting stuff. And what really drives you to-- if you're good at this tough stuff, if you're a quantitative person, if you like to make bets, if you like to back your judgment, you've just got to stay away from experts. You've got to go someplace where you find out in cold, hard cash whether you're making good or bad predictions. The Poker Face of Wall Street, this is by me. This covers a lot of stuff I'm talking about, the connection between poker and finance. And Red-Blooded Risk, also by me, this tells a lot more about how people brought ideas from sports betting and poker and casino games and how that entered into mainstream finance. OK, that's what I've got. Any questions, comments? Yeah? AUDIENCE: How good were George Bush and Bill Gates at poker? PROFESSOR: I can't really answer that in the sense that-- to really answer that, you'd have to track their winnings and losses for long periods of time. Bill ran a game in Currier House. I didn't like the game. It was a very tense game. I always got the feeling people lost more than they wanted to and it wasn't a lot of fun to play. He was certainly a respectable player at the time. You also have to understand, I'm coming from a slightly different perspective.
I'm really at this point one of the best players in the country. We didn't really have rankings back then, so it was kind of hard to tell. But I would say there were maybe 100 people in the country that I would have felt more or less equal to, and I wasn't afraid of anybody. I would sit down at the table with anybody and play them kind of even. So none of these people were at that level. None of these people were serious, professional players. But somebody can still be a very competent, careful player. George Bush was a lot of fun to play with. Ran a great game. I don't think he was too interested in the money. I'm not sure. He didn't seem to be. Probably more interested in the connections. By the way, that drove a lot of people. A lot of people-- I'm talking about this networking in kind of an abstract way. See, I'm an introvert. So for me, it's kind of this amazing thing that you can create these networks. I think of them very mathematically and have a diagram. A lot of people are just wired that way. They understand on some level-- they don't sit down and draw network diagrams. They say, I'm going to play in this game. I'm going to make some friends. I'm going to chat with my friends. I'm going to buy a baseball team. I'm going to be president, whatever. It works for them. And that stuff never works for me. I have to think about it. I have to do this on an intellectual level. [INAUDIBLE] I would say of the people I played at Harvard at that time, the two that were probably the best in my recollection were Scott Turow, the author, who really could have been a pro-level player-- probably didn't play enough to do it, but had the instincts, had the people skills and so on-- and a guy named Lloyd Trefethen, who's a mathematician at Oxford now and was really good-- actually taught me a lot of theory. We talked about that stuff a lot. But with all this, one thing I will say in terms of that: the people at Gardena really were a cut above that. A pretty good club player at Gardena, somebody who was just playing at the top stakes and breaking even, somebody like that was so much better than the leather-ass Texas road gamblers who were playing in the World Series of Poker and things like that. You went to Las Vegas, and these people just were not very good at poker. Now, they were very good at making money at poker, which is a little bit different. But if you just sat down, if you just listened to the analysis they gave for why they did things when they did, it was just-- they believed in luck. They would make these arguments that just made no strategic sense at all. They weren't even thinking in terms of what you think of [? as ?] [? poker. ?] Now, that had very little to do with making a living at poker. One of the stories I like to tell is-- I read it in a book, actually. I didn't know this at the time, but three WSOP winners-- it might have been Doyle Brunson, Sailor Roberts-- Sailor Roberts was a good player. He was probably the only one in that crowd that was really top. And one other. They used to drive around Texas and play in just local [INAUDIBLE]. OK, three WSOP winners playing in these Texas games and stuff, just local people. They should clean up. They should make tons of money. Sailor told me they broke even. They just weren't really making money playing poker. What they were doing was they were bag men for betting on high school football. So they were going to these towns, they were playing some poker.
And then they were collecting the bets and moving around and getting paid for that. But the poker was a break even operation for them. So a lot of people who have great reputations in the poker world, lots of colorful stories-- you set them down at a computer, you set them down at a tournament or something, they would just be toast in five minutes. But the people in Gardena-- who were much better in terms of theoretical poker playing and playing the game-- we would've been toast in that tavern in Texas, right? We would've been beaten over the head with a pool cue and left out with the cows or something. I don't just mean we weren't tough. We weren't tough. We aren't tough. We didn't have social skills. We had just enough social skills, some of us-- most of them didn't even have that. I was considered pretty social because I could actually get a game together of normal people and collect money and so on. Most of these people couldn't. They couldn't play with normal people. And none of us had the kind of skills that those people did. Other questions? Yeah? AUDIENCE: You think if you're a good player, are you better off making a big name for yourself or staying under the radar? PROFESSOR: That's hard for me to answer, because I come from a strong tradition of being under the radar. It was very hard for me to come out. When I wrote this book, this was my first public acknowledgement. I mean, people who knew me knew I played. Including on Wall Street-- you know, a lot of famous people on Wall Street play. But to sort of come out in print-- and when I talked to people, I said, I'm going to tell a story about you in the book. And people who you would think would have no problem being identified as a poker player just would not let me do it. But that said, there's a whole career out there. I don't know if it's still true, but there are people who made fortunes by being famous poker players. And the thing I always kind of wanted to do-- I never liked tournaments. I don't like casinos. I played them sometimes, but I like playing when I want to play. I don't like being told when to play, and in some of these casinos, the experience is just so physically unpleasant, playing at 2:00 in the morning and getting bad chairs or whatever. But I was tempted by high stakes poker. And I always thought that would be kind of fun to go in there and play for cash and be on television and do that. That was one that I kind of thought of. But you also have to think a little bit-- like I say, people at AQR have no problem with me writing the books and giving talks like this and so on. I don't know about me actually being in poker after that. That probably would be a step too much. So I think it's a choice you make, who you are. And I think if you decide who you are and you're true to it, you can be successful. What you don't want to do is put on a front. If you put on a front for something like that, it can come back to haunt you. So if you are a celebrity-- I'm just not a celebrity. I never could be. I'd never be happy being one. Nobody's offered to make me one, but-- and it wouldn't be right for me. But if you are cut from that cloth, you should do it. Be true to yourself. Yeah? AUDIENCE: How do you feel the view of mathematicians on Wall Street has changed since [INAUDIBLE] PROFESSOR: Sorry, perception of what? AUDIENCE: Of mathematicians. PROFESSOR: It kind of goes through phases. OK, so first, it was just laughable.
And let me explain how Wall Street works, by the way. Wall Street is sales. Finance is sales. All the money in finance always has been, always will be, in sales. A few quants can make a couple of bucks on the edges-- a few billion, even-- but it's just not a lot in the whole scheme of things. If you can bring money in, if you can gather assets, it's always been rewarded. So when the early quants came, the attitude was, let's say you're like LOR Associates. People [? know ?] [? that. ?] O'Brien, Rubinstein, and I'm blanking. These are the people who did portfolio insurance in the '80s, whatever. Anyway, so these finance professors show up, these quants show up. What Wall Street is saying is, hey, great. We don't care. You could be astrologers. You could be chartists, whatever. If you're gathering assets and you're giving it to us and paying us commissions to trade it back and forth, if you're giving people reason to trade, we're happy to service you. And we'll pretend we like you, and we'll pretend to respect you. Whatever, we don't care. And they really don't. And so that was kind of the attitude toward mathematicians. If you have a way to generate trades and to talk people into trading, great, because we're just taking a commission. All we care about is that people are coming in and doing this. Then they started figuring out that, you know, hey, wait a minute. These guys aren't like everybody else. These guys actually are making money. Some of this stuff really does make money. That's very, very difficult for Wall Street to come to terms with. There's a New York Times Magazine article about Lehman Brothers with Lew Glucksman and Peter Peterson, the clash of those cultures-- the shirt-sleeve, cigar-chomping trader culture totally at odds with the white-shoe investment banking culture-- when these two things clashed. So then people were kind of afraid of mathematicians, but also somewhat dismissive of them. So then I think it's kind of mellowed out a little bit. But I will still say, AQR is a quant hedge fund. When we go and do our credit thing, with the credit counterparties, the assumption is nobody can understand your black box trading. Now, that's just not true. I mean, we are very transparent. We can lay it out. And certainly any of you could figure it out in five minutes, but other people would maybe take an hour. But we can lay it down. Here's what we do. Here are the eight signals and we measure these things. And we add them up and we find the best and we buy this and we short that. It's a lot simpler than somebody who's saying, oh, I'm doing this quantitative analysis or technical analysis or fundamental stuff and I'm thinking about all these 80 factors. We tell you exactly what we do. And they always assign the most junior credit person to us, because they figure nobody can understand it anyway, so why waste somebody who knows what's going on? That kind of prejudice you still find a lot-- that somehow what we do is crazy. It's a little weird. Nobody can really understand it. It seems to kind of work, so we'll continue to do business. We'll recommend you. We'll put our clients' money in you, but not with the same confidence we have with a guy who's saying, I went to the company and I pounded the tires and I talked to the CEO and I shook him and said, tell me what's really going on here. And I understand this stuff. The biggest problem is your quant's not confident. It's still the case that people measure your credibility, how much they believe you, by how confident you are.
So if I come in and say, well, I've got this model. It seems to work pretty well. I think I've got a 60% chance that this trade will make money-- they just think you're-- they don't know where you're coming from, what planet. A guy comes in and says, I'm sure this is going to work. It's got to. I've got all these 87 reasons, whatever. Now, they don't believe him. They think it's 51%, just like everything else. But they expect him to be confident, and they understand that. So I think that's the biggest problem. If you want to be a quant and you really want to be honest about your confidence in things, it is shocking to people how little you know. It is shocking to people that with all this work, 51% is still pretty good. And that, I think, is a barrier we're going to have real trouble ever surmounting. A quant can understand, hey, 51%? Enough bets, you can be a casino raking in the money in roulette. That's great. Non-quants have a problem with that concept, which is why they're playing roulette without the glasses. OK, we-- AUDIENCE: We don't have enough time. PROFESSOR: Well, thank you very much for your attention. [APPLAUSE]
MIT 15.S50 Poker Theory and Analysis (IAP 2015): Preflop Analysis
PROFESSOR: So let's get started. Today we're going to be talking about preflop, which is the last thing I definitely want to cover before the [? Sakuna ?] tournament. As you know, this is going to be one of my last lectures. I'm teaching one or two over the next two weeks, but we're primarily going to have guest speakers talking more about the macro poker environment. So why are we doing preflop? In tournaments, most of your value is going to come from what you do preflop-- playing it close to optimally. And the reason is because a lot of people don't do this well at all. They really, really play preflop badly, because it's very counterintuitive. Especially live-- people have a much tougher time doing this, because either they're afraid of getting knocked out on a bad hand, or they're afraid of showing down a bad hand, or live players are just worse in general. So for whatever reason, people screw this up online a little bit, and live a lot. In addition, one of the reasons that we're spending an entire day on it is that it's relatively easier to solve from a mathematical standpoint. We can get close to a Nash equilibrium, because there aren't that many variables, whereas postflop there are a million variables-- it's more about putting things into patterns that you might be close to. So that's why I'm doing this. And then let's start with a scenario that we're going to be analyzing for the rest of the class. There we go. So here's our scenario. How it works heads up is that the dealer button is the small blind. That's a slight break in the rules-- you'd assume button, small blind, and big blind are different seats, but heads up they change around a little bit. This way, the button isn't the last one to act every round. So here, I'm the first one to act preflop and then last to act every round thereafter. If I were big blind, I'd be last every single round. So it's a minor variation that you guys should just know. So, situation: I'm small blind for 125, he's big blind for 150, and there's a 25 ante, which is why there's an extra 50 in the pot. And the question is, what do we do here? OK, so we have 9-6 offsuit with 2 1/2 M. We're in the small blind, and we're trying to figure out what we do. So the answer here might not be that intuitive, and I don't think I'll give it to you right away. But how we figure it out is just with a normal semi-bluffing equation. Everything preflop is a semi-bluff, because we have some chance of winning, and we're virtually never less than 30%. So we just use the semi-bluffing formula that we've seen before, where our EV is just going to be the pot times the chance he folds, plus the chance he doesn't fold times our EV when he calls. We need to figure out some sort of calling range. I just made up one here that seemed about right at the pro level, and this is pretty wide-- him calling with Ace-2 or Jack-10 or 2-2. This is a wideish range. I think a lot of players, for their tournament life, might not call this wide, but it ends up being like 27.6% of hands he's calling. So we're going to use it as a baseline, and then later we'll show why that doesn't matter. So our question is, what's our equity versus range here?
And what we can do there is just plug into PokerTracker. The idea is we don't know what hand he has, and we don't care. It's unrealistic to think that we can get down to any specific hand he'll be calling with, but we can get down to a range. We can say he's equally likely to have any of the hands in that range, because if he calls, that means he has one of those hands. So we can compare our equity versus any one of those hands to get our equity versus range, and we can do that in PokerTracker. So what you're going to do is open up the equity calculator, put in 9-6 off, and then put in this. I actually just drew it in there, but that's the same thing I showed before, which is pocket 2s or better, Ace-2, King-2-- sorry, King-10, Queen-10, Jack-10, and then the same for offsuit, which is the representation of this entire thing. I just need to separate offsuit or suited, and then I assumed that by Jack-10, it meant that entire corner, although PokerTracker likes specifying. Anyway, so this is our equity versus range. This is saying that if we go in with 9-6 and he calls with anything in that range, we are 34% to win this hand. I don't know if that's about what you'd expect. It seems higher than I would have thought initially, but that's our equity versus range. If we go all in and he calls, we can assume we're a 35-65 underdog without even knowing what he has. And then once we find out what he has, we will find out whether we're slightly better or worse, but that doesn't really matter. So we can calculate our EV in this hand in the same way that we're used to calculating semi-bluffing. The EV of the push is going to be this: the chance of him folding times 425. If he calls 27% of the time, that means he folds-- whatever this is-- like 73% of the time. So that's our value from him folding-- the fold equity. And then the other 27% of the time we're in a showdown situation, where we're 35 to win 1,250 and 65 to lose 950, resulting in a total equity of 253. So that's a lot of chips for what might seem like a very marginal move. By going all in here, you're actually making that many chips. And in fact, a more provocative way to describe it is: if you don't do this, you are losing 250 chips. You only need 1,000 to win this whole tournament, and by folding what seems like a very weak hand in this position, you are just giving him 250 chips of value. So that's a lot, and that shows how counterintuitive this is. In a situation where you have really bad cards, you don't realize that these are actually really, really good cards. In fact, no matter what, your cards are basically good in this situation, because this situation makes any two cards good enough. So let's talk about-- so we made one assumption, which is what his call range is. Did you have a question? Oh, OK. So we made some assumption about what his call range is, and I said it didn't matter. Why? Because what we can do here is make the call range a variable. So this fold percent is a variable, and then our win percent is related to the percent that he folds. Because if he calls with a small range of hands, when we get called, he's going to be crushing us. Whereas if he calls like 90% of the time, we're probably ahead of his range, because he calls a lot of worse hands.
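To make that arithmetic easy to check, here is the semi-bluff EV formula in runnable form-- a minimal sketch using the lecture's 9-6 offsuit numbers, with function and variable names of my own choosing:

```python
def push_ev(pot, call_pct, win_pct, win_amt, lose_amt):
    # Fold equity: the pot we pick up when he folds.
    fold_equity = (1 - call_pct) * pot
    # Showdown equity: when called, win win_amt with probability win_pct,
    # otherwise lose lose_amt.
    showdown_equity = call_pct * (win_pct * win_amt - (1 - win_pct) * lose_amt)
    return fold_equity + showdown_equity

# 9-6 offsuit: 425 in the pot when he folds, he calls ~27.6% of the time,
# and when called we are ~35% to win 1,250 and ~65% to lose 950.
print(push_ev(pot=425, call_pct=0.276, win_pct=0.35, win_amt=1250, lose_amt=950))
# ~258 chips -- the lecture's ~253, up to rounding of the equity input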
Anyway, so you can take a graph and just look: OK, he calls somewhere between 0% of the time and 100% of the time, and then we can do this EV equation for all those cases by calculating, OK, what's our equity versus his calling range? This doesn't change-- we still win or lose the same amount every time. And so if we were always pushing here, if he calls with whatever range, this is our EV. And then one thing that's interesting about this graph is what? It's always positive, which means that no matter what his calling range is, this push is good. It might be more intuitive this way: if he folds 100% of the time-- if he only calls 0% of the time-- we trend up to a number around here. And what number's around here? That's just this. It's the pot before we do anything. So if he folds 100% of the time, we just win the pot. That's our EV every hand, and it just trends from this, where the more he calls, typically, the worse our EV is going to be. Yes? [? Yes. ?] We'll get to that. And that's a good question, although it's going to take us half the class to answer it. Anyway, so that's a cool thing here. It's always positive, which means I'm saying-- no matter what the villain does, always, always push all in with 9-6 offsuit in this position. So let's break it down into components. To explain intuitively what drives this graph: your equity there is split into two parts, because it's a semi-bluff. It's your fold equity and your showdown equity. So let's turn this into lines. You see your fold equity just decreases the more he calls, in a linear fashion. So that should be fairly intuitive. The proportion he calls reduces your fold equity by that amount, and then your showdown equity has a little curve here, where you get more value from the showdown on average the more he calls. The only reason there's a little bit of curvature is because it's multiplied by the chance of him calling in the first place. So it's tilting in some direction. But anyway, these are curved, which makes this a very interesting optimization puzzle. And the reason we're positive at the end is for this reason. We're basically-- so this line, the total equity, is just the sum of these two, where our showdown value is going to be negative over basically this entire range. Even when our fold equity reaches zero, that's when our showdown equity creeps just above zero. Regardless of what his calling range is, your average is like 175 chips of EV. Like the 20% call-- or whatever we said. The 27% calling range isn't necessarily optimal. He knows you have these cards. But on average, you're getting-- whatever, like 100 and something chips. And just to show how bad folding is: if you just open fold this for some reason, that's as bad as purposely calling an all in with 3-4 versus Ace-King. That's an equivalent amount of EV loss as folding this hand. So now let's talk about how hard this is going to be. Push/fold decisions are really hard to intuitively figure out, just because the curvature and the steepness of those graphs are moving in weird directions, and they're both moving at the same time with regard to all the other variables. So it makes it very hard to come up with very quick rules. Some variables are going to affect our decision, and the result is either push or fold.
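Before cataloguing those variables, here is that fold-equity/showdown decomposition as a quick sketch. The equity-when-called figures below are illustrative placeholders standing in for equity-calculator output, chosen only so the shape matches the graph just described-- tight callers crush our showdown value, while a 100% caller leaves the total just above break-even:

```python
# (his call %, our equity when called) -- placeholder pairs, not solver output
samples = [(0.05, 0.28), (0.276, 0.35), (0.50, 0.40), (0.75, 0.42), (1.00, 0.435)]

for call_pct, win_pct in samples:
    fold_eq = (1 - call_pct) * 425            # decreases linearly in his call %
    showdown = call_pct * (win_pct * 1250 - (1 - win_pct) * 950)
    print(f"calls {call_pct:4.0%}: fold equity {fold_eq:6.1f}  "
          f"showdown {showdown:7.1f}  total {fold_eq + showdown:6.1f}")
```

The total column stays positive across the whole sweep, which is the "push 9-6 no matter what he does" conclusion in miniature.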
Our result isn't even as complicated as betting a certain amount, but the variables you have to consider are our cards, our position, our stack, and the villain's call range, which results in this five-dimensional array that we have to slice and dice so that we have our push/fold decision range. So what we want to do is isolate specific things. We want to say, OK, let's make this and this static so that we can solve for these two, or let's make these two static so we can solve for these two. So we can end up with a chart that looks something like this. And this is a little bit more manageable. We can just say this green area is when you should push and this yellow area is when you should fold. That's our goal here: to be able to develop that sort of chart, and then figure out what's the best thing to isolate. So let's get to it. So a range is a set of hands. This is how you write it. This is the assumption that we're making. We're using ranges for two reasons. One is analyzing our opponent, which we're not going to be doing-- we're going to assume we have no information about him, because it's preflop, and one of our assumptions is that he hasn't even acted yet, so we have no information at all. But we're going to be using them to determine our plays. Basically we're saying we have some sort of decision [INAUDIBLE] for our range. We certainly can't solve this for every single hand, or at least we can't remember it, so what we're going to do is come up with a line where we say every hand above that certain line, i.e. the range above that line, is going to be good for our specific decision. And we're breaking these into percentiles, and that's how we're going to describe them. And we're using Sklansky-Chubukov-- his rankings-- for our breakdown of percentiles. But here's his original question, and this is just to explain where these come from and why this is particularly relevant to what we're doing. His question is: you're the small blind, and you have a choice between either folding or open pushing a hand, where open pushing here means you push and you show your hand before he makes a call. And the question is, how many chips is this good for? If you have one chip-- or I guess two chips, since they're 1-2 blinds-- if you have two chips and one is already in the small blind, your pot odds are 25%. So even if he knows-- he's going to call if he's anything more than a 50% favorite, so you have the odds to do that with at least a pretty bad hand and a lot of good hands. So x here is, how big can your stack be before this starts becoming unprofitable? And the answer for Ace-8 offsuit is about 70, and to explain how we got to that, we're going to make a couple of assumptions. And you can see the clear relation between this methodology and what we're doing. So 1-2 blinds-- hero is small blind with whatever. Hero bets x all in. And we're going to say the big blind's going to call when his equity is more than 50%. He might also use pot odds, which we're just going to ignore for simplicity. We're just going to say the big blind is calling when he's more than 50% to win. It's actually zero EV when he has the pot odds, which is this equation, but we're going to forget about that. Let's just say he's going to only call with literally better hands. So here he's going to call with Ace-8 or pocket 2s, because we have Ace-8. He's going to call with a pocket pair, which is always going to be better, and Ace-8, which is going to be equal or better. So that's his calling range, 13% of the time.
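To make the algebra concrete, here is a minimal sketch of solving for that break-even stack x. The 13% calling range is from above; the ~33% equity against that range is the PokerTracker figure quoted next. The closed form comes from setting the push EV, measured relative to open-folding, to zero; the plus-one/minus-one blind bookkeeping, which barely moves the answer, is the (x + 1)/(x - 1) pair in the comment:

```python
def sc_breakeven_stack(call_pct, win_pct, sb=1, bb=2):
    # Relative to folding: his fold wins us sb + bb; his call risks roughly
    # our stack to win roughly our stack plus a blind. Setting
    #   0 = fold% * (sb + bb) + call% * (win% * (x + 1) - (1 - win%) * (x - 1))
    # and solving for x gives:
    fold_pct = 1 - call_pct
    return (fold_pct * (sb + bb) + call_pct) / (call_pct * (1 - 2 * win_pct))

# Ace-8 face up: he calls ~13% (pairs, plus Ace-8 or better), we win ~33% of those.
print(sc_breakeven_stack(call_pct=0.13, win_pct=0.33))  # ~62, as derived next
```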
So we win 33% for this range-- you just plug in to PokerTracker and it'll tell you you're about a 33% underdog, which is what I did here. That's how you do it. And we get our EV for the situation, and we have to solve for x. We want EV to be 0, because we're finding the marginal EV, and this is what we get. We win x, which is all the chips we had before we paid the small blind, and we lose x minus 1, which is all the chips we have after paying the small blind. So a little bit of nuance there, and it only changes our number by one, but that gives us our break even. And our break even here is like 62. The reason it's a little bit lower is it doesn't factor in the big blind using pot odds, but that's the idea. So what we're going to do is solve this for every single hand-- well, we're not going to do it. Someone else did it. And this is what Sklansky did. He figured your number here is going to depend on two things: the number of hands you are slightly better than-- and why is this factor important? What's he going to do if you have a better hand than him? He's going to fold. So this determines our fold equity-- and then the chance of winning when you're behind, which determines our equity when he actually does call. He just has perfect information, which is why it's a little bit of a change and we're able to solve it out perfectly here. So Sklansky didn't know how to program, which is why Victor Chubukov-- apparently later the same day-- came up with the answer. He said Aces have infinite value here, where you can have an unlimited amount of chips and then show Aces and push them, and it's plus EV. We're not saying it's the most optimal decision, but you won't actually lose chips on average even if you have an unlimited chip stack. Whereas Kings, it only works for 1,290 chips. So we can solve it for every hand to get down to-- you could do this with 3-2 off with 1.8 chips, and so on. So this is our primary way to rank hands, and we're assuming that, at least in a heads-up situation, this is a pretty good idea of which hands will have the most value, especially when it comes to going all in. One other method is going to be equity versus three randoms, which might be more relevant for multi-way pots. You just rank hands by their expectation against three callers who haven't looked at their cards. But we're not going to do that, especially because we're assuming generally we're going to get one caller, especially heads-up where you can only get one. But we're going to expand it out later. So just an example of what these are. Top 1% is Aces, top 5% is tens and Ace-Queens. 30% is this. And sorry to break it to you guys, but you're going to have to memorize these. So for 5%, you just remember they're premium hands, just tens or Ace-Queens, and I'm giving you little mnemonics to help you remember it. Ace-10 or better is 10%, and then some sort of pocket pairs-- it's not even that big of a deal if you go down to 2s here. 20% is Ace-2 or pocket 2s. That's how you're going to remember it. And then 30% is Broadway, so Broadway is any two face cards. And an intuitive way to think about it is based on this: Broadway is just this corner here, which is about 1/3 of the graph when you count in pocket pairs. So that's how you just remember 30%-- it's that corner. So it's not that bad. Just remember top 20% is Ace-2 and 2s, 30% is Broadway, and then 10% is Ace-10-- not that bad. So 50% is just going to be the diagonal on the graph.
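Those mnemonics are easy to carry around as code. Here is a sketch using standard rank values (Ace = 14, King = 13, and so on); under that encoding, the "two cards adding up to 15" diagonal described next-- which counts the Ace as 13-- becomes "ranks summing to 16 or more":

```python
RANKS = {r: v for v, r in enumerate("23456789TJQKA", start=2)}

def mnemonic_bucket(card1, card2):
    hi, lo = sorted((RANKS[card1], RANKS[card2]), reverse=True)
    pair, ace = hi == lo, hi == 14
    if (pair and hi >= 10) or (ace and lo >= 12):
        return "top ~5%"    # tens or better, Ace-Queen or better
    if pair or (ace and lo >= 10):
        return "top ~10%"   # Ace-10 or better, any pocket pair
    if ace:
        return "top ~20%"   # any ace (Ace-2 and up)
    if lo >= 10:
        return "top ~30%"   # Broadway: both cards ten or higher
    if hi + lo >= 16:
        return "top ~50%"   # the diagonal
    return "bottom half"

print(mnemonic_bucket("K", "3"))  # top ~50% -- right on the diagonal
print(mnemonic_bucket("9", "6"))  # bottom half -- yet still a mandatory push above
```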
It's any two cards adding up to 15. This is what I mean by the diagonal. This is your 13-by-13 graph, where these are offsuit and these are suited, which is why we double count them. And then this hand adds up to 15-- Ace-2 does, and so does King-3, and so does Queen-4. So this is your 50th percentile hand. That's how you're going to remember that. And then you don't really need to remember anything more than that. So 100% is going to be any two cards. It's going to be the entire graph here. So we're going to be talking about-- yeah? Percentage is the percent of hands that that range represents. So this is the top 10% of hands. So if he calls-- if we assume he's going to call 10% of the time, we are saying that he will call with these cards and these cards only. And we're going to be talking about ranges from now on, because it helps us understand what cards he has, to some extent, but it also lets us figure out exactly how often he's going to call. So we're just going to be talking about ranges, and this is what I mean when I say ranges. And to the extent that you're just doing this at the table, you just have to memorize these three numbers. Because this is easy, this is easy, and then just remember that tens are your 5%. When we're talking about ranges in general, a plus EV range-- a decision that's good for a range-- means that on average, your decision is profitable over that range as a whole. But an optimal range is profitable for literally every hand. So you can do a lot of things that are profitable for any two cards as a range, but 3-2 off itself might actually not be profitable, so it's not necessarily optimal. You want to make sure that, every [INAUDIBLE] you can, every single hand in that range is profitable, which gives you the optimal range. It's the most plus EV range for that type of decision. An example of this: say you're playing against Ace-Queen and you call with this range-- pocket 5s and Ace-10 or better. You are ahead of this Ace-Queen, because you're 53% to win the hand with this range. However, we know it's not optimal, because we're calling with two losing hands-- Ace-10 and Ace-Jack. So a more realistic, more optimal range is going to be pocket 2s and Ace-King or better. That's going to be the difference there. I want to make sure we're not solving for something that's a slight favorite while actually missing out on something a little bit more optimal. Here we go-- so this is a better range, 5-5 or Ace-Queen-- actually, 2s or Ace-Queen would be a little bit better-- and we win 60% of the time. And if a range is optimal, then we know that for every hand in that range, we have a plus EV decision. And that's the logic we're going to use: if we solve for a range, then we know that if you have any hand in that range, our rule is still good for it. So that's why I'm showing you that. Right, so that's it. That's all we're going to do in terms of defining what ranges are and how we got those numbers. And from now on, we're going to talk about making decisions based on a range. So let's talk about preflop. Our assumption is the hero has M less than 10. We're in that period of the tournament where your M is not going to be that high. The villain is calling some percentage of hands that are presumably the top whatever of hands. We're guessing he's not calling with worse hands while folding better ones.
[INAUDIBLE] He might be a little bit-- his view of what are good hands and bad hands might be a little bit off from ours, but we're just assuming everyone ranks hands the same. [? ICM ?] doesn't matter, so we don't care about payouts in tournaments. We're just trying to maximize our chip EV, and we only have two decisions-- push or fold. So the way that we're going to come up with this rule for heads-up is-- what we're doing is we want to figure out what our push/fold range is for heads-up in every scenario. And the way that we're going to do this is: first, we need an equation that tells us range versus range equities. Everyone can figure out Aces versus whatever is like 80-20, and two overcards against a pocket pair is 50-50ish. But we want to know, what's a range versus a range? If we push 70% of the time and he calls 30% of the time, what does that translate to in terms of our equity? It ends up being like 60/40, but we want to get an idea of what that trend is, because that's going to materially change our ability to come up with an optimal solution here. So we want to build a table of range versus range, and then we come up with a formula that will let us put in two ranges and come up with an equity calculation for both of those ranges. Then we're going to develop an EV model for semi-bluffs, which we already did, so that should be quick. Then, for each M, find a Nash equilibrium if one exists. A lot of people-- if you Google "Nash equilibrium" for preflop, you'll find something. It's wrong, and later I'll show you why-- they're unstable equilibriums, pretty much always. Then find a reasonable range and figure out what kind of assumptions we need to make so that our push/fold decisions are correct. So let's do the first thing. Say that we use a hero's range of 50%. So in PokerTracker, we put in the top 50% of hands. And then for each range of hands for the villain, we calculate the equity for that. And we do that for, whatever I did, like 15 different ranges. So if we push 50% of the time and he calls 50% of the time, what's our equity? What's our chance of winning? 50, right. We have even equity, because we are pushing and calling the same types of hands on average. Whereas if he calls 100% of the time, our equity's actually closer to 60-- we are a 60/40 favorite on that hand. Whereas if he calls with just Aces, then we are like 17% to win the hand. That's the idea. So our goal here is to come up with some sort of equation that we can just plug into-- that we can do math on. We want to be able to do calculus on it, so we need an equation. So what type of function does this look like? Because we want to try to fit a curve to it. Exactly-- it seems like a logarithmic function, and with an r squared of 99, it says that that's the logarithmic equation for it. So that works when the range is 50%, but let's take a look at when we change the range to 30%. What this is saying is we are pushing 30% of the time and he is calling x percent of the time, and this line is our chance of winning when he does that-- and this is also logarithmic. So we're seeing a little bit of a pattern, and it also has a really good r squared. And then if we push 10% of the time, it's still like, OK, it's 98 and change when we look at our equities versus his calling range. But then it starts to get bad. If he calls 5% of the time, our r squared when we fit a logarithmic function is only 95.
And if he only calls 3% of the time, our r squared is 89. This is not really logarithmic when we start getting into very tight ranges. And at 1%, it sucks. It's not even close. So what we're trying to do is develop an equation for figuring out range versus range, and we see that logarithmic works some of the time. The reason it doesn't work-- just to give you kind of an idea why-- is because the top 1% of hands is three hands. It's Aces, Ace-King, and Kings. So it's materially changing based on-- if we push Ace-King, whether he calls with only Aces or Kings, or also Ace-King, jumps us between here and here. So we have huge gaps when it comes to very tight percentages, and that's why it breaks down a little bit up here. And when we turn it around, we have the same type of thing. This is looking at: OK, say that we know he's going to call 50% of the time. If we push x percent of the time, what is our chance of winning? So if we push 100% of the time and he calls 40% of the time, then we are 40% to win. That's what this chart is telling us, and you see the same type of pattern, where a 50% range versus this kind of range is good. If we push whatever and he calls 30% of the time, it's still good. But then if he calls 2% of the time, it starts becoming bad. So what are we taking away from this? The whole plan is we want to come up with an equation that just gives us these numbers. I only got these from pushing them into PokerTracker and trying to fit a certain equation onto it. So our takeaway here is that the range versus range relationship is probably logarithmic. That seems to be a good estimate, but it's definitely not good in the top 5%. So with regard to our model, we're just going to say this is probably not that good when you're talking about ranges within 5%. But realistically, when M is less than 10, no one is doing anything in the 5% range, so it's not that big of a deal to just ignore it. So what we did is we populated this table just now. We took a look at: what's the villain's range up here, and what's our range? So when we push 100% of the time and he calls 30% of the time, we win 39% of the time-- where red means that we are not likely to win and green means that we are likely to win. When we push 5% and he calls with anything, we're 73% to win. So that's what this table is telling us, and what we want to do is come up with an equation that lets us populate this table without actually having to do the range versus range calculations by hand. So what I did was just run a regression based on logarithmic variables, and I found something that I think is really, really cool. These seem to be the coefficients, with an r squared of 98. So this equation seems to basically nail these range versus range equities, which is, I think, really cool. There's probably a reason that it starts at 50%, and it seems to be symmetrical, which is-- I don't know if there's a statistical reason for that, but that's extremely fascinating. And it could make sense intuitively-- so this is your chance of winning. Would it make sense that the wider he calls-- when we take the natural log of the percentage of him calling, our win percent goes up? I think it does, because it means he's calling with worse hands. That's why it goes up when he calls wider, but then goes down when we push wider, because that means we're pushing worse hands.
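The transcript doesn't preserve the actual regression coefficients, but the symmetry just described pins the model down to a single constant: win chance = 50%, plus a constant times the log of his calling range, minus the same constant times the log of our pushing range. Back-fitting that constant from the equities quoted above (push 100/call 40 giving roughly 40%, push 50/call 100 giving roughly 60%) lands near 0.11, so treat this as a sketch of the idea rather than the lecture's exact fit:

```python
import math

def win_pct(push_pct, call_pct, k=0.11):
    # Pusher's chance of winning when called: 50% for equal ranges, better
    # when he calls wider than we push, worse when we push wider than he calls.
    # Known to break down for ranges tighter than roughly 5%.
    return 0.5 + k * math.log(call_pct / push_pct)

print(win_pct(0.50, 0.50))  # 0.500 -- same ranges, coin flip
print(win_pct(1.00, 0.40))  # ~0.40 -- he calls with only his top 40%
print(win_pct(0.50, 1.00))  # ~0.58 -- he calls everything, we pushed the top half
```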
So that's the equation there, and it's giving us something that we can actually differentiate when we're trying to solve this. And just in terms of errors, it's not that bad. We have an r squared of 98, so I think this is good enough to use. We can actually use this to try to optimize our decision making preflop. So we solved-- we came up with an equation that we can use for determining our chance of winning a hand, given our pushing range and his calling range. So let's go back to our EV model for semi-bluffs. We already did that in the fold equity portion, but we're going to build this out to be relevant to our situation in particular. So we're going to work in terms of M-- our blinds are going to be equal to 1. This means this is 1 M, and our EV is going to be in terms of Ms. If our EV ends up being 0.35, it means 0.35 times all the blinds combined-- just to get rid of blinds here. So all the blinds combined equal one M. Our fold equity is going to be the pot times the chance of him folding, i.e., 1 minus the chance of him calling. Our showdown value is the chance of him calling, times the win amount times the win percentage, minus the lose amount times the lose percentage. The win amount is going to be stack plus 2/3, because if the blinds are 1-2 or something where the big blind is double the small blind, it means that you win his big blind, and then when you lose, you lose your whole stack. Depending on when we mark our stack, it changes a little bit with regard to whether we count it before we pay the blind or after. But it ends up not making a huge difference. It just impacts whether this is plus 2/3 or plus 1/3 or just your stack. But anyway, we also have this equation, which we just solve for, where the chance of us winning is related to the chance of him calling and the range that we're pushing. So what you might see here is all of these are related to the same three variables: the call percentage, the push percentage, and the stack size. So we combine this into one giant equation, which obviously we're not going to remember, but which we can use to start solving this mathematically. So this is what we would end up with. When M is 1, we end up with this sort of graph, where the hero's push range goes up to 100% here and the villain's call range goes up to 100% here. This is our equity, which I just color-coded so we don't need to look at the numbers, where green means it's in the hero's favor, yellow means it's close to zero, and red means it's in the villain's favor. So what we see here is it's really green over here and really yellow over there. And this is already factoring in fold equity and our chance of winning if he calls. This is saying that if we push 100% of the time, he should call 100% of the time, because he wants the yellowest area and we want the greenest area. So that's with 1 M, so it shouldn't be surprising that our equilibrium at 1 M is going to be everyone getting all in 100% of the time. We see when we switch to 10 M, it's pretty different. In fact, this 100-100 corner is one of the yellowest areas, whereas our green area is either up here or down here. So we're going to use this to get an idea of how our value changes based on these variables changing, to figure out if we can isolate it to a corner and make that our kind of rule.
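For reference, here is one way to generate that color-coded surface from the pieces above. This is a sketch: it computes the EV of the hands we actually push, in units of M, using the back-fitted equity model, and the transcript doesn't fully specify the lecture's table construction, so take the numbers as indicative rather than a reproduction:

```python
import math

def win_pct(push_pct, call_pct, k=0.11):     # equity sketch from above
    return 0.5 + k * math.log(call_pct / push_pct)

def push_ev_m(push_pct, call_pct, stack_m):
    w = win_pct(push_pct, call_pct)
    fold_equity = (1 - call_pct) * 1.0       # the pot is one M by construction
    # Win the stack plus the 2/3 bookkeeping term; lose the stack when beaten.
    showdown = call_pct * (w * (stack_m + 2 / 3) - (1 - w) * stack_m)
    return fold_equity + showdown

# A miniature of the M = 5 grid: rows are our push range, columns his call range.
for push in (0.25, 0.50, 1.00):
    row = [f"{push_ev_m(push, call, stack_m=5):+.2f}" for call in (0.25, 0.50, 1.00)]
    print(f"push {push:4.0%}:", row)
```

So first, let's find out when we can get a Nash equilibrium. So the villain gets to pick this.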
The villain gets to pick his call range and we get to pick our push range. That's the flexibility each person has. So if we push 100% of the time and he calls 100% of the time, this is a Nash equilibrium for 1 M, because he can't do any better by going up here-- a higher number is worse for him, because this is our equity in the hand, and the equity comes directly from him because it's a heads-up situation-- whereas we can't do better by pushing left. When we go down here, the next number is 0.32, meaning that if we push any less, we're actually losing value, because we want the number to be higher. And when it comes to 2 M, 100% isn't that much off. We're talking about 0.01 M of a difference in EV, but we do actually reach a Nash equilibrium here-- 2 M-- where we can't do anything better by moving and he can't do any better by moving. So when M is 3, we have an unstable Nash, and this is when stuff is getting interesting. If we push 80, he should call 100. If he calls 100, we should push 65, and so on. So we end up in a bit of a circle. It's a rock, paper, scissors situation. What do we have? M equals 3. So this lets us know that this whole Nash equilibrium thing is not going to work out. It only really works for an M of 2, and that's not the most important situation to figure out, because you know that you should be pushing a very, very wide range when M is 2. So we need to figure out some sort of pattern. How do these colors move when your M is more than 2? So let's go to M equals 5 and make the gradient a little bit steeper. These are the same numbers, except-- we're talking about very slight changes in EV, where the difference between yellow here and green here is 0.1 M, and this will help us understand where this value is going to come from. So let's see how this changes over time, because our goal here is-- the villain's goal is to cause this to be in a yellow area, and the hero's goal is to cause this to be in a green area. And you can see why we inherently have this need to figure out what they're going to do. Because say that we're the villain. If we know the hero pushes 100, we call 100. Whereas if we think the hero pushes 30%, we should call 5%. So let's see how this changes over time. This is when M is 2, and then this is us increasing as M is 3, 4, 5, 6, 7, 8, 9. We're trying to pick the box that is kind of the yellowest area. Where can we realistically come up with a rule for ending up there? Based on what this x-coordinate is, what should be our y-coordinate? What should be our call? Do we see any way to figure this thing out? So what I think we should do is just draw a line right here and say, let's figure out what the equation of that line is, and then come up with an estimate of what his pushing range is, and then throw it into whatever function we used to get that line-- which can't be very complicated, in order to make sure that no matter what pushing range we read him for, we end up in this yellow area. So what I did was, for each M, I highlighted what the lowest EV decision is for the hero, meaning the best decision for the villain. And for 10 M, it's this diagonal. But for 1 M, it's all the way down here. The best you can do is over here. So our function is: we call 10 times his pushing range. If we have 1 M and he pushes 5% of the time, we call about 50% of the time. If he pushes 20% of the time, we're calling always, and anything more than 20%, we're always calling. That's where M equals 1.
In general, the rule is going to be: if M is 1, always call, and if M is 2, you call twice his pushing range. So if we think he's pushing 50% or more, we're always calling. If we think he's pushing 20%, we call 40% of the time. That's the optimal calling range in this heads-up situation. So when M is 4-- I skipped one, but when M is 4, we have basically this straight diagonal. We try to match his pushing range. When he pushes 60, we call 60. When he pushes 100, we call 100. That's the best we can do to dominate his range there. When M is 6, we call 2/3 of his range. And when M is 9, we call half his range. So those are the rules that we're just going to have to remember. When you're in a heads-up situation, if the stack is basically 1 or 2, you're always calling him. At 4, you're calling him an even amount. At 6 and 9, you're calling him like 2/3 down to half of what he pushes. And that's the optimal move. That is how-- if you're in a heads-up situation, which you will be at the end of every tournament to the extent that you get there-- you will dominate his playing style, based on what you read as his pushing amount. And you can just count how many hands he pushes versus how many hands he folds to get an idea of what his range is here. And this also works when it's folded to you in the small blind. Even if you're 10 handed, by the time you get to the small blind, you're heads up again, and these rules apply. So to the extent you can memorize this, it will give you the right move in that scenario for Ms up to 10. So when you're the hero-- when you're the small blind, rather-- and you're pushing, it requires you to estimate what his calling range is. Because you end up all over the place based on making different estimates of what he can call with, and you really don't have any information. But I'm going to graph it out here, and then we can see what's going to be a good estimate in basically any scenario. So there are a couple of different ways I can think about this. We're going to come up with a bunch of numbers based on the villain's call range here. So we're targeting a column, and then the row is going to be information we don't have. So the question is, do we pick the column that has the highest average EV, the highest minimum EV, or the highest EV versus a particular bad player that we're going to be targeting? And the blue here is the column that maximizes your EV for that scenario. And your guess is as good as mine when it comes to what's the best way to strategize it. Maximizing the min will help us make sure that we're not dominated by someone who's really got our number, whereas maximizing the average might be better against a player we know nothing about. And maximizing versus tight-- or loose-- will help us figure out how to capitalize on the reads that we're making. So when M is 1, he's over there. No matter what the scenario, you should be pushing 100% of the time. That's what this is telling us. It shouldn't be a surprise, based on all the stuff I told you about 1 M situations: no matter what kind of assumption we're making, push 100%. So let me jump up to M is 10. So what's going on here? As M increases, they all kind of move at the same time, except what? If we're talking loose, or we're targeting the best worst-case scenario or the best average, it starts to trickle down to like 50%.
But it's certainly-- they're relatively near each other, and what's good about this is that it means if we target any of these, we're basically in the ballpark, where the difference between this column and this column for any M is not going to be that material. If you're just in that ballpark, you're fine-- but one of them is completely different. Yeah, against a tight player. And it should make sense intuitively why, if you read him as tight-- as someone who only calls 15% of the time-- even with 9 and 10 M, you should push 100% of the time. Why? It's because 85% of the time, he's just going to fold, and even when you're called, the amount of value you get from him folding most of the time just crushes him. So to the extent that you can encourage him to be tight, do it. But absolutely, if he's tight, push every single hand. You're never in the scenario where pushing less than the top 50% is good. So that's it for heads up. Now let's talk about other positions, where we end up in a lot of complicated situations, which we have to just assume away here. We can lose in two different ways. We can lose if we call and he beats us, or we could also lose if we call and then someone behind us calls. So we're estimating that someone behind us is only going to call if he has a premium hand, because if we're in a short stack situation and someone pushes and then we call, someone is only calling behind us when they have a really good hand. And let's just assume we're going to lose if we get another caller. We are almost certainly going to be dominated-- let's say we have 0 EV there. And because of our equation here, we can actually solve that. We can actually figure out what range is going to be 60% versus another range. This results in a really cool rule of thumb, which is: when you're in a 10 M or less situation and you're trying to decide whether to call an all-in, you just ask yourself, are you calling such that his range is three times wider than your range? Some questions you might ask yourself: say that you have Ace-10-- you have a 10% hand here. Would he push all in into you with King-Jack-- something in the 30% range? If you have King-Queen, which is like 30%, would he push all in with 8-5, which is a 50% hand? And so on. So when you're in a [? full ring ?] situation and you're trying to decide whether to call, figure out in general what you think his pushing range is there. And it's a good call if your calling range is 1/3 of his pushing range.
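Pulling this section's rules of thumb into one place-- the heads-up calling rule from earlier and this one-third-of-his-range check-- here is a final sketch; snapping to the nearest tabulated M is my own reading of how to use the table between the quoted stack sizes:

```python
def hu_call_range(his_push_pct, stack_m):
    # Heads-up big blind facing a shove: call ~10x his range at M = 1
    # (i.e., almost always), 2x at M = 2, match it at M = 4, 2/3 at M = 6,
    # half at M = 9.
    table = [(1, 10.0), (2, 2.0), (4, 1.0), (6, 2 / 3), (9, 0.5)]
    nearest_m, mult = min(table, key=lambda t: abs(t[0] - stack_m))
    return min(1.0, his_push_pct * mult)

def good_full_ring_call(my_hand_pct, his_push_pct):
    # Full ring, M <= 10: calling is fine when your hand sits within the top
    # third of his pushing range (roughly 60% equity versus that range).
    return my_hand_pct <= his_push_pct / 3

print(hu_call_range(0.20, stack_m=2))   # 0.4 -- call 40% against a 20% pusher
print(good_full_ring_call(0.10, 0.30))  # True -- Ace-10-ish vs a ~30% shover
```

Let's call it a day. Thanks, guys.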
Facing Our Ugly History: The Movement That Inspired the Holocaust (Alexandra Minna Stern and Natalie Lira)
As a new widow, Sara Rosas Garcia was already struggling to support her nine children when her oldest daughter was picked up by local authorities. Andrea Garcia had been accused of skipping school and being sexually promiscuous, so the authorities responsible for juvenile delinquents committed her to a state hospital. After being administered an IQ test and assigned a low score, the doctors made their verdict. They told Sara her 19-year-old daughter would be sterilized to prevent passing on what the state saw as a mental deficiency. This horrific tale may sound like a story from an authoritarian regime. But in fact, it took place in Southern California in 1938. And Andrea Garcia was one of thousands of poor women of color targeted by the state's relentless campaign of eugenics. Since ancient Greece, there have been efforts to control human populations via reproduction, retaining some traits and removing others. But in the 19th century, the discovery of evolution and genetics inspired a new scientific movement dedicated to this endeavor. In 1883, British scientist Sir Francis Galton named this idea eugenics, drawn from the Greek word for "to be well born." This wave of modern eugenicists included prominent scientists and progressive reformers who believed they could improve society by ensuring that only desirable traits were passed down. However, their definition of what traits were and were not desirable was largely determined by the prejudices of their era. Entire categories of people were considered "unfit" for reproduction, including immigrants, people of color and people with disabilities. Meanwhile, their ideal genetic standard reflected the movement's members: white Europeans of Nordic or Anglo-Saxon descent. As the influence of eugenics spread in the early 20th century, many countries restricted immigration and outlawed interracial unions. These measures to improve so-called "racial hygiene" were taken to their horrific conclusion in Nazi Germany. The Nazi eugenics campaign systematically killed millions of Jews, as well as individuals from other groups, including Roma, gay men, and people with disabilities. Outside their extreme brutality, however, Nazi eugenic policies reflected similar standards across the globe. Throughout the mid-20th century, many countries enacted eugenics policies, and governments in Sweden, Canada, and Japan forcibly sterilized thousands of individuals. Sterilization was exceptionally common in the US. From 1907 to 1979, US policies enforced the sterilization of over 60,000 people, with 32 states passing laws that mandated sterilization for men and women deemed "mentally defective." This label was typically applied based on superficial mental health diagnoses and the results of IQ tests, which were linguistically and culturally biased against most immigrant populations. These racist standards were particularly problematic in California. From 1920 to 1945, Latina women were 59% more likely to be sterilized than other women. And the rate of sterilizations in California was incredibly high— this single state performed over one third of the country's sterilization operations. Such was the case of Andrea Garcia, whose story reflects thousands of individuals with similar fates. With the help of famed civil rights lawyer David Marcus, Andrea's mother argued that California's sterilization law violated the US Constitution, depriving Andrea of her rights to equal protection under the law.
However, while one of the three judges overseeing the case voted to spare Andrea, the other two did not. Records suggest it’s possible Andrea escaped the impending surgery, but many more victims of these policies did not. Although eugenics acquired negative connotations after the horrors of World War II, many of its practices, including sterilization, continued for decades. By the late 1960s, research into human genetics was more nuanced, and bioethics had begun to blunt eugenics’ influence. Yet Sweden and the US continued to pursue involuntary sterilization well into the 70s. Eventually, class action lawsuits and protest marches in the US galvanized lawmakers, and California’s sterilization laws were finally repealed in 1979. Unfortunately, the legal and illegal sterilization of many oppressed communities still continues around the globe today.
Facing_our_ugly_history
Why_was_India_split_into_two_countries_Haimanti_Roy.txt
In August 1947, India gained independence after 200 years of British rule. What followed was one of the largest and bloodiest forced migrations in history. An estimated one million people lost their lives. Before British colonization, the Indian subcontinent was a patchwork of regional kingdoms known as princely states populated by Hindus, Muslims, Sikhs, Jains, Buddhists, Christians, Parsis, and Jews. Each princely state had its own traditions, caste backgrounds, and leadership. Starting in the 1500s, a series of European powers colonized India with coastal trading settlements. By the mid-18th century, the English East India Company had emerged as the primary colonial power in India. The British ruled some provinces directly, and ruled the princely states indirectly. Under indirect rule, the princely states remained sovereign but made political and financial concessions to the British. In the 19th century, the British began to categorize Indians by religious identity— a gross simplification of the communities in India. They counted Hindus as “majorities” and all other religious communities as distinct “minorities,” with Muslims being the largest minority. Sikhs were considered part of the Hindu community by everyone but themselves. In elections, people could only vote for candidates of their own religious identification. These practices exaggerated differences, sowing distrust between communities that had previously co-existed. The 20th century began with decades of anti-colonial movements, where Indians fought for independence from Britain. In the aftermath of World War II, under enormous financial strain from the war, Britain finally caved. Indian political leaders had differing views on what an independent India should look like. Mohandas Gandhi and Jawaharlal Nehru represented the Hindu majority and wanted one united India. Muhammad Ali Jinnah, who led the Muslim minority, thought the rifts created by colonization were too deep to repair. Jinnah argued for a two-nation division where Muslims would have a homeland called Pakistan. Following riots in 1946 and 1947, the British expedited their retreat, planning Indian independence behind closed doors. In June 1947, the British viceroy announced that India would gain independence by August, and be partitioned into Hindu India and Muslim Pakistan— but gave little explanation of how exactly this would happen. Using outdated maps, inaccurate census numbers and minimal knowledge of the land, in a mere five weeks, the Boundary Committee drew a border dividing three provinces under direct British rule: Bengal, Punjab, and Assam. The border took into account where Hindus and Muslims were majorities, but also factors like location and population percentages. So if a Hindu majority area bordered another Hindu majority area, it would be included in India— but if a Hindu majority area bordered Muslim majority areas, it might become part of Pakistan. Princely states on the border had to choose which of the new nations to join, losing their sovereignty in the process. While the Boundary Committee worked on the new map, Hindus and Muslims began moving to areas where they thought they’d be a part of the religious majority— but they couldn’t be sure. Families divided themselves. Fearing sexual violence, parents sent young daughters and wives to regions they perceived to be safe. The new map wasn’t revealed until August 17th, 1947— two days after independence. The western part of Punjab and the eastern part of Bengal became the geographically separated West and East Pakistan.
The rest became Hindu-majority India. In a period of two years, millions of Hindus and Sikhs living in Pakistan left for India, while Muslims living in India fled villages where their families had lived for centuries. The cities of Lahore, Delhi, Calcutta, Dhaka, and Karachi emptied of old residents and filled with refugees. In the power vacuum British forces left behind, radicalized militias and local groups massacred migrants. Much of the violence occurred in Punjab, and women bore the brunt of it, suffering rape and mutilation. Around 100,000 women were kidnapped and forced to marry their captors. The problems created by Partition went far beyond this immediate deadly aftermath. Many families who made temporary moves became permanently displaced, and borders continue to be disputed. In 1971, East Pakistan seceded and became the new country of Bangladesh. Meanwhile, the Hindu ruler of Kashmir decided to join India— a decision that was to be finalized by a public referendum of the majority Muslim population. That referendum still hasn't happened as of 2020, and India and Pakistan have been warring over Kashmir since 1947. More than 70 years later, the legacies of the Partition remain clear in the subcontinent: in its new political formations and in the memories of divided families.
Facing_our_ugly_history
What_really_happened_during_the_Salem_Witch_Trials_Brian_A_Pavlac.txt
You’ve been accused of a crime you did not commit. It’s impossible to prove your innocence. If you insist that you’re innocent anyway, you’ll likely be found guilty and executed. But if you confess, apologize, and implicate others for good measure, you’ll go free. Do you give a false confession— or risk a public hanging? This was the choice facing those accused of witchcraft in the village of Salem, Massachusetts between February 1692 and May 1693. They were the victims of paranoia about the supernatural, misdirected religious fervor— and a justice system that valued repentance over truth. Salem was settled in 1626 by Puritans, a group of English Protestants. Life was strict and isolated for the people of Salem. Battles with their Native American neighbors and groups of French settlers were commonplace. People feared starvation and disease, and relations between villagers were strained. To make matters worse, 1692 brought one of the coldest winters on record. That winter, two cousins, 9-year-old Betty Parris and 11-year-old Abigail Williams, started behaving very strangely. A physician found nothing physically wrong — but diagnosed the girls as under “an evil hand.” Puritans believed that the Devil wreaked havoc in the world through human agents, or witches, who blighted nature, conjured fiendish apparitions, and tormented children. As news swept through the village, the symptoms appeared to spread. Accounts describe 12 so-called “afflicted” girls contorting their bodies, having fits, and complaining of prickling skin. Four of the girls soon accused three local women of tormenting them. All three of the accused were considered outsiders in some way. On February 29th, the authorities arrested Sarah Good, a poor pregnant mother of a young daughter, Sarah Osbourne, who had long been absent from church and was suing the family of one of her accusers, and Tituba, an enslaved woman in Betty Parris’s home known by her first name only. Tituba denied harming the girls at first. But then she confessed to practicing witchcraft on the Devil’s orders, and charged Good and Osbourne with having forced her. Osbourne and Good both maintained their innocence. Osbourne died in prison, while Good’s husband turned against her in court, testifying that she "was a witch or would be one very quickly." Good’s 4-year-old daughter was imprisoned and eventually gave testimony against her mother. Meanwhile, Good gave birth in jail. Her baby died, and she was convicted and hanged shortly thereafter. Tituba was held in custody until May, and then released. These three victims were just the beginning. As accusations multiplied, others, like Tituba, made false confessions to save themselves. The authorities even reportedly told one accused witch that she would be hanged if she did not confess, and freed if she did. They were not particularly interested in thoroughly investigating the charges— in keeping with their Church’s teachings, they preferred that the accused confess, ask for forgiveness, and promise not to engage in more witchcraft. The court accepted all kinds of dubious evidence, including so-called “spectral evidence” in which the girls began raving when supposedly touched by invisible ghosts. Complicating matters further, many of the jurors in the trials were relatives of the accusers, compromising their objectivity. Those who dared to speak out, such as Judge Nathaniel Saltonstall, came under suspicion.
By the spring of 1693, over a hundred people had been imprisoned, and 14 women and 6 men had been executed. By this time, accusations were starting to spread beyond Salem to neighboring communities, and even the most powerful figures were targets. When his own wife was accused, the governor of the Massachusetts colony suspended the trials. Sentences were amended, prisoners released, and arrests stopped. Some have speculated that the girls were suffering from hallucinations caused by a fungus, or from a condition that caused swelling of the brain. But ultimately, the reason for their behavior is unknown. What we do know is that adults accepted wild accusations by children as hard evidence. Today, the Salem Witch Trials remain a cautionary tale of the dangers of groupthink and scapegoating, and the power of fear to manipulate human perception.
Facing_our_ugly_history
What_killed_all_the_bison_Andrew_C_Isenberg.txt
It was 1861, and Lone Bear was leading Eagle Plume on his first-ever hunt. He paused and told Eagle Plume the rules: once he saw the bison herd, he needed to wait until someone older signaled; and when it came time, to kill only what his horse could carry. Lone Bear advanced, then beckoned, and suddenly they were off. Eagle Plume and Lone Bear were Kiowa, which was one of several Indigenous groups that lived on the Great Plains. By the mid-1700s, many Plains nations were using horses to hunt the area’s plentiful bison, North America’s largest land mammals. They survived on bison meat, made the bison’s summer hides into lodges and their winter coats into blankets, and used bison bones and horns for tools and sinew as thread. But in the decades to come, millions of bison would be slaughtered, and the Plains societies’ survival and cultures fundamentally— and deliberately— threatened. After the American Civil War, thousands of US settlers began occupying the Plains, intent on exploiting its natural resources. During the 1860s, Plains nations pushed back against the US military. William Sherman resented the army’s defeats. His ruthless military tactics had recently helped end the American Civil War. And, in 1869, he was appointed the US Army’s Commanding General. Now, his focus was on what he called “the Indian problem.” US government officials were determined to force Native American people into designated areas they called reservations. This way, they could control Indigenous people while US settlers and companies profited off their land. Sherman pledged to stay out west, in his words, “till the Indians are all killed or taken to a country where they can be watched.” Meanwhile, the demand for leather, like the kind used for belting to connect industrial machinery, boomed. To meet the demand, US hunters armed with rifles killed bison all across the Plains. Sherman and other military officials realized they could meet their goal passively, by letting this lurching industrial economy run unchecked. Their idea was that, if hunters depleted the bison, Plains Indigenous peoples would be starved into submission. One US colonel told a visiting British lieutenant, “Kill every buffalo you can! Every buffalo dead is an Indian gone.” The US military refused to enforce treaties that barred civilian hunters from tribal territory, and it sometimes provided hunters with protection and ammunition. Many hide hunters killed 50 bison a day. During a two-month span in 1876, one hunter killed 5,855 bison, the near-constant firing of his rifle leaving him deaf in one ear. Some of the bison the hunters shot wandered away and died. Commonly, the hunters would only retrieve the bison's hides and tongues, leaving the rest to rot. Inexperienced skinners destroyed hides as they flayed them. And bison carcasses that were left were torn to pieces by other animals. So hunters began lacing bison meat with poison so they could also collect wolf pelts. Native American people protested, and humanitarian and animal rights groups tried to intervene as the bison population plummeted. Legislation that would make bison hunting illegal in federal territories even passed Congress in 1874— but the US President vetoed it. After all, the sordid strategy was working: many Plains nations faced starvation and were being forced onto reservations. Back in 1800, tens of millions of bison swept the Great Plains. By 1900, there were fewer than 1,000 in existence. Some wealthy US citizens created bison preserves, which helped save the species.
But the preserves functioned mainly as tourist attractions, and some of them carved even more land off Native American reservations. As of 2021, the bison population had grown to around 500,000. A vast majority live on private ranches. In recent years, Plains nations have reintroduced some 20,000 bison to tribal lands. They aim to heal and restore the relationship that was so flagrantly attacked during the bison slaughter.
Facing_our_ugly_history
What_is_Juneteenth_and_why_is_it_important_Karlos_K_Hill_and_Soraya_Field_Fiorio.txt
One day, while hiding in the kitchen, Charlotte Brooks overheard a life-changing secret. At the age of 17, she’d been separated from her family and taken to William Neyland’s Texas Plantation. There, she was made to do housework at the violent whims of her enslavers. On that fateful day, she learned that slavery had recently been abolished, but Neyland conspired to keep this a secret from those he enslaved. Hearing this, Brooks stepped out of her hiding spot, proclaimed her freedom, spread the news throughout the plantation, and ran. That night, she returned for her daughter, Tempie. And before Neyland’s spiteful bullets could find them, they were gone for good. For more than two centuries, slavery defined what would become the United States— from its past as the 13 British colonies to its growth as an independent country. Slavery fueled its cotton industry and made it a leading economic power. 10 of the first 12 presidents enslaved people. And when US chattel slavery finally ended, it was a long and uneven process. Enslaved people resisted from the beginning— by escaping, breaking tools, staging rebellions, and more. During the American Revolution, Vermont and Massachusetts abolished slavery while several states took steps towards gradual abolition. In 1808, federal law banned the import of enslaved African people, but it allowed the slave trade to continue domestically. Approximately 4 million people were enslaved in the US when Abraham Lincoln was elected president in 1860. Lincoln opposed slavery, and though he had no plans to outlaw it, his election caused panic in Southern states, which began withdrawing from the Union. They vowed to uphold slavery and formed the Confederacy, triggering the start of the American Civil War. A year into the conflict, Lincoln abolished slavery in Washington, D.C., legally freeing more than 3,000 people. And five months later, he announced the Emancipation Proclamation. It promised freedom to the 3.5 million people enslaved in Confederate states. But it would only be fulfilled if the rebelling states didn’t rejoin the Union by January 1st, 1863. And it bore no mention of the roughly 500,000 people in bondage in the border states of Delaware, Maryland, Kentucky, and Missouri that hadn’t seceded. When the Confederacy refused to surrender, Union soldiers began announcing emancipation. But many Southern areas remained under Confederate control, making it impossible to actually implement abolition throughout the South. The war raged on for two more years, and on January 31st, 1865, Congress passed the 13th Amendment. It promised to end slavery throughout the US— except as punishment for a crime. But to go into effect, 27 states would have to ratify it first. Meanwhile, the Civil War virtually ended with the surrender of Confederate General Robert E. Lee on April 9th, 1865. But although slavery was technically illegal in all Southern states, it still persisted in the last bastions of the Confederacy. There, enslavers like Neyland continued to evade abolition until forced. This was also the case when Union General Gordon Granger marched his troops into Galveston, Texas, on June 19th and announced that all enslaved people there were officially free— and had been for more than two years. Still, at this point, people remained legally enslaved in the border states. It wasn’t until more than five months later, on December 6th, 1865, that the 13th Amendment was finally ratified. This formally ended chattel slavery in the US.
Because official emancipation was a staggered process, people in different places commemorated it on different dates. Those in Galveston, Texas, began celebrating “Juneteenth”— a combination of “June” and “nineteenth”— on the very first anniversary of General Granger’s announcement. Over time, smaller Juneteenth gatherings gave way to large parades. And the tradition eventually became the most widespread of emancipation celebrations. But, while chattel slavery had officially ended, racial inequality, oppression, and terror had not. Celebrating emancipation was itself an act of continued resistance. And it wasn't until 2021 that Juneteenth became a federal holiday. Today, Juneteenth holds profound significance as a celebration of the demise of slavery, the righteous pursuit of true freedom for all, and a continued pledge to remember the past and dream the future.
Facing_our_ugly_history
The_Nazis_recruited_to_win_the_Cold_War_Brian_Crim.txt
In May of 1945, the Third Reich was in chaos. Adolf Hitler was dead, German surrender was imminent, and Allied troops had already begun divvying up German territory. But high-ranking Nazi engineer Wernher von Braun wasn’t worried. In fact, he approached the US government directly— informing them of his location and waiting calmly for their arrival. As the brain behind the world’s first long-range ballistic missile, von Braun knew his expertise made him a highly valuable military asset. And sure enough, his so-called captors gave him a decidedly warm welcome. Von Braun wasn't the only Nazi scientist receiving this treatment. While World War II was almost over, a new war was brewing. And the US was eager to recruit the smartest minds in Germany before the Soviets got the chance. This became known as Operation Paperclip— a clandestine campaign that brought over 1,500 German scientists to the US between 1945 and 1962. The program was named for the paperclips attached to the files of early recruits— indicating that incriminating information like Nazi affiliations or suspected war crimes could be disregarded. Von Braun, for example, had overseen an SS project that relied on forced labor from thousands of concentration camp prisoners. While von Braun approached the US directly, other scientists had to be identified and located. One important asset in this effort was a Nazi-compiled list of Germany’s top scientists, which someone had unsuccessfully tried to dispose of by flushing down a toilet. But the US was just one player in this scramble. The Soviets were also competing to seize German brainpower, resorting to bribery and forced relocation. The French and British lacked the money to lure the best German brains, but that didn't stop them from kidnapping the occasional scientist. They also stole patents and dismantled factories to learn what they could. The US approach, however, featured a different and particularly tempting brand of coercion: the promise to relocate entire German families and grant them citizenship. This controversial offer was one of the reasons Paperclip was initially shrouded in secrecy. But the project became difficult to hide when Germans started popping up all over the US. The military tried to get ahead of any controversy by revealing the operation to the press in late 1946. But the news immediately attracted criticism from many prominent voices, including Albert Einstein, Eleanor Roosevelt, and the NAACP, as well as many veterans’ organizations. These parties opposed granting German scientists citizenship while millions of displaced persons, including survivors of Nazi atrocities, had no chance of coming to America. Most Americans were also against employing former Nazis in sensitive national security positions. But as the Cold War ramped up, the military argument for keeping these scientists out of Soviet hands overpowered popular objections. With his Nazi past largely hidden from the public, von Braun became one of the US’s most important engineers at the height of the Space Race. In 1958, his team responded to the Soviet launch of Sputnik with the US’s own successful satellite launch. And in the 60s, he was the chief architect of Saturn V, the rocket that brought Americans to the moon. Other Paperclip recruits contributed to the development of chemical weapons such as Agent Orange, pharmaceutical research, and the development of modern airplanes. These contributions helped the US government present Paperclip as a success.
But, in hindsight, it’s hard to gauge how helpful the program really was. While von Braun saved the US years of rocketry experimentation, there's no reason to think American scientists couldn't have developed the same technology without him. Furthermore, very few Paperclippers were as exceptional as von Braun. Many were average scientists who either completed their contracts and returned to Germany, or took jobs alongside Americans with equivalent expertise. But ultimately, the issue of Paperclip’s success is just one of many questions raised by its contentious approach to science, ethics, and national security. Can scientists working on military technology be apolitical, or are they responsible for their creations? Can pressing political and military concerns justify overlooking war crimes? In many ways, von Braun’s obituary sums up the operation’s murkiness: “A kind of Faustian shadow may be discerned in [...] the fascinating career of Wernher von Braun: a man so possessed of [...] intellectual hunger, that any accommodation may be justified.”
Facing_our_ugly_history
The_dark_history_of_the_Chinese_Exclusion_Act_Robert_Chang.txt
After 12 years living in California, Chinese citizen Chae Chan Ping was ready for a visit home. He procured the necessary documents for his departure and return journey, and set sail for China, where he spent the next year reconnecting with friends and family. But when he returned to San Francisco on October 8th, 1888, Ping and his fellow immigrant passengers were forbidden to disembark. Just days earlier, President Grover Cleveland had signed the Scott Act, which invalidated the legal documents allowing their re-entry to the United States. This policy threatened to separate families and deprive Chinese immigrants of their homes and livelihoods. Ping challenged the ruling, beginning a legal battle for the rights of thousands of Chinese immigrants. But his case inspired an even more controversial policy that continues to impact immigrants around the globe. Discrimination against Chinese immigrants had begun decades earlier, when the California Gold Rush created a massive demand for labor. Initially, Chinese immigrants were welcomed as reliable workers and became essential parts of frontier communities. Many built railroads and worked in the mines, while others operated laundries, restaurants, and general stores. The 1868 Burlingame Treaty even granted China favored trading status with the US, and allowed unrestricted migration between the two countries. But as large numbers of Chinese immigrants found success, American workers began to see them as a threat. Politicians and labor leaders denounced them for driving down wages, and violence against Chinese individuals became increasingly common. This anti-Chinese sentiment soon found its way into California’s courts. In 1854, after a white man was convicted of murdering a Chinese man, the California Supreme Court overturned the conviction, holding that Chinese eyewitness testimony was inadmissible. The court declared that Chinese citizens could not testify against white defendants, citing similar precedents forbidding testimony by Black and Native American individuals. This decision effectively legalized violence against California’s Chinese population, inspiring mob attacks and campaigns for segregation. Before long, anti-Chinese sentiment reached the federal level. In 1882, Congress passed the Chinese Exclusion Act, the first federal law that restricted immigration based explicitly on nationality. In practice, the Act banned entry to all ethnically Chinese immigrants besides diplomats, and prohibited existing immigrants from obtaining citizenship. It also meant Chinese individuals couldn’t leave the United States and return without first applying for a certificate of re-entry. This policy remained in place until October 1st, 1888, when the Scott Act prohibited re-entry altogether, stranding Chae Chan Ping and thousands of other Chinese immigrants. In court, Ping argued he had followed the proper protocol in obtaining his re-entry certificate, and that the government had not honored his legally issued document. This argument was strong enough to send his case all the way to the Supreme Court. But the justices ruled against Ping, invalidating thousands of legal re-entry certificates in one fell swoop. The decision led to Ping’s deportation and left up to 20,000 Chinese immigrants unable to return to the US. But arguably even more important than the court’s racist ruling was the logic the justices used to support it.
Traditionally, the Supreme Court is considered a check on the other two branches of American government, offering judgment on policies passed by Congress and the president. In this case, however, the court stated it had no power to pass judgment on the Scott Act, since Congress had declared the immigration policy “a matter of national security.” This decision set a unique precedent. Unless Ping's case was overturned, the legislative and executive branches could claim national security concerns to pass whatever immigration laws they wanted. Throughout the 20th century, xenophobic government officials used this power to freely discriminate against immigrant groups. The 1917 Asiatic Barred Zone Act prohibited the entry of all South Asians. And a series of immigration acts in the 1920s expanded restrictions throughout Asia, Eastern Europe, and Southern Europe. Many of these restrictions were lifted after World War II, and the Chinese Exclusion Act itself was finally repealed in 1943— over 60 years after it was enacted. But the US government continues to use this precedent to deploy sudden and sweeping immigration policies, targeting journalists and dissidents as well as ethnic groups. Little is known about what became of Chae Chan Ping following his deportation. But the injustices visited upon him and thousands of other Chinese Americans continue to impact immigrant rights and liberties.
Facing_our_ugly_history
What_really_caused_the_Irish_Potato_Famine_Stephanie_Honchell_Smith.txt
In the fall of 1845, the bright green leaves of potato plants dotted the Irish countryside. For over 200 years, the South American vegetable had thrived in Ireland’s rough terrain and unpredictable weather. Packed with carbohydrates, vitamins, and minerals, the potato was a remarkably nutrient-rich crop that made it easy for less wealthy families to maintain a balanced diet. By the mid-19th century, potatoes had supplanted other staple foods. And since British mandates ensured Ireland’s more valuable agricultural products were exported, roughly half the country’s 8.5 million residents lived almost entirely on potatoes. But when harvesting began in 1845, farmers found their potatoes blackened and shriveled. Those who ate them suffered severe stomach cramps and even death. Today, we know the culprit was Phytophthora infestans— a fungus that flourished in the season’s unusually damp weather. But at the time it was simply called “the blight.” The fungus likely originated in the Americas, traveling across the Atlantic on ships. And while it destroyed potato harvests across Europe, wealthier countries— then as today— generally fared better, as they had more resources to draw on. Meanwhile, the southern and western regions of Ireland were already impoverished and entirely dependent on the single crop, making them disproportionately vulnerable. The impacts of food insecurity are often most severe at the poverty line. But while the failed harvest created a class crisis, the government's response turned it into a national catastrophe. For centuries, Ireland had been under varying degrees of English control, and by 1845, it was part of the United Kingdom with its government based in London. During the famine’s first year, this distant ruling body imported corn from North America and offered the Irish employment on public works projects. But this relief only caused more problems. Imported food was poorly distributed and offered insufficient nutrition, making the previously healthy population more vulnerable to disease, and increasing maternal and child mortality. Worse still, the British continued to export Ireland’s grain and livestock. Meanwhile, the public works projects required lengthy shifts of grueling manual labor and were far from where most workers lived. Just one of countless tragic incidents was the story of Thomas Malone, who walked 18 kilometers round trip to work every day. One night, exhausted and starving, he collapsed and died just before reaching home, leaving behind his wife and six children. Despite the year’s countless tragedies, many families managed to scrape by. But in 1846, the damp weather returned and the blight worsened, impacting 75% of Ireland's potato yield. British relief efforts diminished substantially in the famine’s second year. And while international aid helped save lives, the overall need was enormous. As the crisis wore on, the government limited who was eligible for relief and tasked Ireland with funding the relief efforts itself by increasing local taxes. Most modern historians view these disastrous policies as stemming from a mix of toxic religious ideology, laissez-faire economic policies, and political infighting. British news sources callously depicted the Irish as lazy, simple-minded alcoholics, and some London decision-makers believed the famine was God’s punishment for these sinful behaviors. Other government officials purposefully blocked efforts to provide meaningful relief due to internal political rivalries.
As with famines and food insecurity today, it wasn't a lack of resources preventing the British from aiding Ireland, but rather a lack of political will. Seven years after the blight began, Ireland’s weather patterns returned to normal and the potato crop finally stabilized. But over 1 million people had perished from starvation, malnutrition, and disease. Between 1 and 2 million more fled the country, beginning a trend that dropped Ireland’s population to half its pre-famine levels by the 1920s. Today, climate change is making extreme weather more common and sustained, leading countless agricultural communities to face similar struggles. Just as in Ireland, farmers living on the margins are increasingly facing starvation, malnutrition, and disease due to global weather patterns for which they bear little responsibility. But history doesn’t have to repeat itself if governments and institutions can provide the kind of aid these regions need: relief efforts that are coordinated and ongoing, provide sufficient nutrition to prevent disease, and are offered with compassion rather than judgment.
Facing_our_ugly_history
What_really_happened_during_the_Attica_Prison_Rebellion_Orisanmi_Burton.txt
“We are men. We are not beasts and we do not intend to be beaten or driven as such... What has happened here is but the sound before the fury of those who are oppressed.” These words were spoken during the 1971 Attica Prison Rebellion by one of its leaders, Elliott Barkley. At the time, Attica prison was severely overcrowded. Its majority Black and Latino population faced constant physical and verbal abuse. All prison guards were white. Some were members of white supremacist hate groups. Guards threw away letters that weren’t written in English and prohibited Muslim religious services. They punished white prisoners for fraternizing with non-white men. Prisoners were allowed one shower a week and one roll of toilet paper a month. Among those imprisoned at Attica were Elliott Barkley, Frank Smith, and Herbert X. Blyden. “I’m dying here little by little every day...” Barkley wrote his mother. She contacted authorities, but nothing changed. He began writing a book about life at Attica. Meanwhile, Smith worked a position called the “warden’s laundry boy” for 30 cents a day. His grandmother had been enslaved. Because Smith and others were treated as less-than-human at the will of their keepers, they viewed prison as an extension of slavery. And Blyden had participated in prison strikes and rebellions. He and others saw the violence of prison as symptomatic of a societal problem where individuals are denied justice based on their class and race. They felt people shouldn’t be stripped of their rights to health and dignity upon being sentenced. Instead, resources should go towards meeting people’s basic needs to prevent crime in the first place. In the summer of 1971, Blyden co-founded the Attica Liberation Faction. The group compiled a manifesto and petitioned Corrections Commissioner Russell Oswald and Governor Nelson Rockefeller for better treatment. Though largely ignored, they continued organizing. After activist George Jackson was killed at a California prison, 700 men at Attica participated in a silent fast. Just weeks later, on September 9th, a spontaneous uprising began. A group of prisoners overpowered guards, sparking the Attica Rebellion. Prisoners broke windows, started fires, and captured supplies. They beat many guards. One of them, William Quinn, would die from his injuries. Soon, over 1,200 prisoners had assembled in the yard with 42 hostages, preparing to demand change. They established a medical bay, delegated men to prepare and ration food, protected and sheltered guards, and elected a negotiating committee. They appointed Blyden as chief negotiator, Smith as security chief, and Barkley as a speaker. Later that day, Barkley presented their demands to the press. When his mother saw him on TV, she was terrified. He was just days from being released. But she believed authorities would want retribution. Over the next four days, prisoners held negotiations with officials. They called for a minimum wage, rehabilitation programs, better education, and more. They promised all remaining hostages would be safe if they were given amnesty for crimes committed during the uprising. Meanwhile, Governor Rockefeller began crisis talks with President Nixon. The president told his chief of staff that the rebellion should be quelled to set an example for other Black activists. Commissioner Oswald announced he’d meet a number of the demands, but refused to guarantee amnesty. Prisoners refused to surrender. As warnings of an imminent siege mounted, they threatened to kill 8 hostages if attacked.
Nevertheless, Rockefeller ordered troops to retake the prison. Helicopters tear-gassed the yard. Troopers shot over 2,000 rounds of ammunition, killing 29 prisoners and 10 guards, and wounding many others. Witnesses say troopers found Barkley and shot him in the back. Officers stripped surviving men naked, tortured them, and deprived them of medical attention. Blyden was starved for days. Smith was sexually violated, burned with cigarettes, dragged into isolation, and beaten. Directly after the attack, Governor Rockefeller thought prisoners were responsible for the deaths of the 10 guards. He called it “a beautiful operation.” President Nixon congratulated Rockefeller and told his chief of staff that the way to stop “radicals” was to “kill a few.” But autopsies soon confirmed that prisoners hadn’t killed any guards during the attack, as threatened. Government forces had. Nixon told Rockefeller to stand his ground. Those who survived the massacre continued fighting for revolutionary change. Long after being released, Smith and Blyden campaigned for social justice and prison abolition. The demands men made at Attica in 1971 remain at the core of ongoing protests— within and beyond prison walls.
Facing_our_ugly_history
Ugly_History_The_Armenian_Genocide_Ümit_Kurt.txt
In the 19th century, Christian Armenians in the Ottoman Empire lived as second-class citizens. They were taxed disproportionately, forbidden from giving testimony in Ottoman courts, and frequently attacked by local Kurdish tribes. In 1878, Armenian activists negotiated a treaty to enact reforms, but Sultan Abdul Hamid II refused to make good on these promises. And when an Armenian resistance movement began to form, the sultan took decisive action. From 1894 to 1896, he led the Hamidian Massacres— a relentless campaign of violence that took the lives of over 150,000 Armenians. These massacres were the culmination of centuries of Armenian oppression. Yet they were only the beginning of an even greater tragedy— a genocide hidden under the guise of World War I that would include the deportation, forced Islamization, and mass murder of nearly 1 million Armenians. As some of the most ancient inhabitants of this region, the Armenian people were originally a collective of tribes living in the mountains of Western Asia. By the 6th century BCE, these tribes were living in one nation called Armenia, which, over the following 2,000 years, was controlled by various local and invading leaders. But whoever ruled their homeland, a devotion to Christianity became a vital part of their ethnic identity, even as their neighbors increasingly adopted Islam. In what’s currently eastern Turkey, Christian Armenians shared the area with Muslim Kurds for centuries, until Turkic-speaking peoples invaded the region. Four centuries later, Ottoman Turks claimed these communities as part of the vast Ottoman Empire. While the empire’s systematic preference towards Muslims made life difficult for Armenians, Greeks, and Jews, by the late 19th century, well-educated Armenian elites were able to attain prominent positions in banking, commerce, and government. However, this rise in influence became a source of resentment, and many Muslim Ottomans believed Armenians would eventually betray the empire to form their own independent state. This belief is partly what led Abdul Hamid to begin the Hamidian Massacres. Thankfully, the Ottoman Armenians fighting the sultan’s forces weren’t alone. Armenians from neighboring Russia had recently founded two resistance organizations which offered a haven for refugees and supplied arms to villages under siege. Finally, in April 1909, the sultan was deposed following the Young Turk Revolution. But despite initial promises, this new government also failed to pass meaningful reforms. A second wave of massacres ravaged the Armenian population, and the era of international warfare to come would only make things worse. During the First Balkan War, thousands of Muslim refugees were sent to the Armenian stronghold of Anatolia, further increasing tensions between Christian and Muslim Ottomans. And in the first winter of World War I, Ottoman general Enver Pasha attempted to flank opposing Russian forces by sending his troops through the frigid Sarıkamış mountains. When his troops froze to death, Enver Pasha blamed the disaster on “Armenian treachery,” and ordered the immediate disarming of all non-Muslims— a decision which moved many Armenians from the front lines of an external war to the trenches of an internal one. By 1915, the Ottomans were enacting more violence on their own Armenian citizens than any foreign enemy.
At this time, Turkish nationalist Talaat Pasha had become the de facto leader of the Ottoman Empire, and he ordered the deportation of all Armenians in eastern Anatolia on the grounds of national security. Talaat’s new legislation allowed Armenian property and businesses to be seized as wartime necessities, and able-bodied Armenian men were routinely killed to lower the likelihood of resistance. Most Armenians were marched to concentration camps in the Syrian desert, where they regularly suffered robbery, abduction, and rape. The few women and children who escaped deportation were forcibly converted to Islam. The ruling Turks saw Muslim identity as the cornerstone of their vision for the empire, so escaped Armenian youths were placed in orphanages to indoctrinate them with Muslim culture and traditions. Children who resisted were subjected to violence and torture. When the majority of these killings ended in 1916, it was estimated that the population of Ottoman Armenians had dropped from 1.5 million to roughly 500,000. In the decades that followed, many of the remaining Armenians dispersed across the globe. Families who immigrated to eastern Russia may have eventually been incorporated into the modern nation of Armenia, which gained its independence in 1991. But to this day the Turkish government denies this genocide occurred. Official government language acknowledges the violence but defines the Ottomans’ actions as “necessary measures,” and Armenian deaths as unfortunate consequences of war. In recent years, some Turkish historians have refuted this stance and begun writing about this period with less fear of retribution. In the pursuit of justice, many Armenians and Armenian-led non-profit organizations work tirelessly to advocate for the recognition of this genocide, and accountability for those responsible.
Facing_our_ugly_history
Ugly_History_The_Spanish_Inquisition_Kayla_Wolf.txt
It’s 1481. In the city of Seville, devout Catholics are turning themselves in to the authorities. They’re confessing to heresy— failure to follow the beliefs of the Catholic Church. But why? The Spanish Inquisition has arrived in Seville. The Inquisition began in 1478, when Pope Sixtus IV issued a decree authorizing the Catholic monarchs, Ferdinand and Isabella, to root out heresy in the Spanish kingdoms— a confederacy of semi-independent kingdoms in the area that would become the modern country of Spain. Though the order came from the church, the monarchs had requested it. When the Inquisition began, the Spanish kingdoms were diverse both ethnically and religiously, with Jews, Muslims, and Christians living in the same regions. The Inquisition quickly turned its attention to ridding the region of people who were not part of the Catholic Church. It would last more than 350 years. On the ground, groups called tribunals ran the Inquisition in each region. Roles on a tribunal could include an arresting constable, a prosecuting attorney, inquisitors to question the accused, and a scribe. A “Grand Inquisitor,” a member of the clergy selected by the king and queen, almost always led a tribunal. The Inquisition marked its arrival in each new place with an “Edict of Grace.” Typically lasting 40 days, the Edict of Grace promised mercy to those who confessed to heresy. After that, the inquisitors persecuted suspected heretics on the basis of anonymous accusations. So the confessors in Seville probably didn’t see themselves as actual heretics— instead, they were hedging their bets by reporting themselves when the consequences were low, rather than risking imprisonment or torture if someone else accused them later on. They were right to worry: once the authorities arrested someone, accusations were often vague, so the accused didn’t know the reasons for their arrest or the identity of their accuser. Victims were imprisoned for months or even years. Upon arrest, victims’ property was confiscated, often leaving their families on the street. Under these conditions, victims confessed to the most mundane forms of heresy— like hanging linen to dry on a Saturday. The Inquisition targeted different subsets of the population over time. In 1492, at the brutal Grand Inquisitor Tomás de Torquemada’s urging, the monarchs issued a decree giving Spanish Jews four months to either convert to Christianity or leave the kingdom. Thousands were expelled and those who stayed risked persecution. Converts to Christianity, known as conversos, weren’t even safe, because authorities suspected them of practicing Judaism in secret. The hatred directed at conversos was both religious and economic, as conversos made up a large portion of the upper middle class. The Inquisition eventually shifted its focus to the moriscos, converts to Christianity from Islam. In 1609, an edict was passed forcing all moriscos to leave. An estimated 300,000 left. Those who remained became the Inquisition’s next targets. The inquisitors announced the punishments of those found guilty of heresy in public gatherings called autos de fé, or acts of faith. Hundreds of people gathered to watch the procession of sinners, mass, sermon, and finally the announcement of punishments. Most of the accused received punishments like imprisonment, exile, or having to wear a sanbenito, a garment that marked them as a sinner. The worst punishment was “relaxado en persona”— a euphemism for burning at the stake.
This punishment was relatively uncommon— reserved for unrepentant and relapsed heretics. Over 350 years after Queen Isabella started the Inquisition, her namesake, Queen Isabella II, formally ended it on July 15th, 1834. The Spanish kingdoms’ dependence on the Catholic Church had isolated them while the rest of Europe experienced the Enlightenment and embraced the separation of church and state. Historians still debate the number of people killed during the Inquisition. Some suggest over 30,000 but most estimate between 1,000 and 2,000. The consequences of the Inquisition, however, reach far beyond fatalities. In some places, an estimated one third of prisoners were tortured. Hundreds of thousands of members of religious minorities were forced to leave their homes, and those who remained faced discrimination and economic hardship. Smaller inquisitions in Spanish colonial territories in the Americas, especially Mexico, carried their own tolls. Friends turned in friends, neighbors accused neighbors, and even family members reported each other for heresy. Under the Inquisition, people were condemned to live in fear and paranoia for centuries.
Facing_our_ugly_history
Zumbi_The_last_king_of_Palmares_Marc_Adam_Hertzman_Flavio_dos_Santos_Gomes.txt
During the 1600s, an expansive autonomous settlement called Palmares reached its height in northeastern Brazil. It was founded and led by people escaping from slavery, also called maroons. In fact, it was one of the world’s largest maroon communities, its population reaching beyond 10,000. And its citizens were at constant war with colonial forces. The records we have about Palmares mainly come from biased Dutch and Portuguese sources, but historians have managed to piece much of its story together. During the Trans-Atlantic slave trade, which began in the 1500s, nearly half of all enslaved African people were sent to Portugal’s American colony: Brazil. Some escaped and sought shelter in Brazil’s interior regions, where they formed settlements called mocambos or quilombos. Fugitives from slavery probably arrived in the northeast in the late 1500s. By the 1660s, their camps had consolidated into a powerful confederation known today as the Quilombo of Palmares. It consisted of a central capital linking dozens of villages, which specialized in particular agricultural goods or served as military training grounds. Citizens of Palmares, or Palmaristas, were governed by a king and defended by an organized military. African people and Brazilian-born Black and Indigenous people all came seeking refuge. They collectively fished, hunted, raised livestock, planted orchards, and grew crops like cassava, corn, and sugarcane. They also made use of the abundant palm trees for which Palmares was named, turning palm products into butter, wine, and light. Palmaristas crafted palm husks into pipes and leaves into mats and baskets. They traded some of these goods with Portuguese settlers for products like gunpowder and salt. In turn, settlers avoided Palmares’ raids during which they’d seize weapons and take captives. The Portuguese were concerned with other invading imperialists, but regarded Indigenous uprisings and Palmares as their internal threats. Palmares endangered the very institution of slavery— the foundation of Brazil's economy. During the 1670s, the Portuguese escalated their attacks. By this time, Ganga-Zumba was Palmares’ leader. He ruled from the Macaco, which functioned as the capital. His allies and family members governed the other villages— with women playing crucial roles in operation and defense. As they fought the Portuguese, Palmaristas used the landscape to their advantage. Camouflaged and built in high places, their mocambos offered superior lookouts. They constructed hidden ditches lined with sharp stakes that swallowed unsuspecting soldiers and false roads that led to ambushes. They launched counterattacks under the cover of night and were constantly abandoning and building settlements to elude the Portuguese. In 1678, after years of failed attacks, the Portuguese offered to negotiate a peace treaty with Ganga-Zumba. The terms they agreed upon recognized Palmares’ independence and the freedom of anyone born there. However, the treaty demanded that Palmares pledge loyalty to the crown and return all past and future fugitives from slavery. Many Palmaristas dissented, among them Zumbi— Ganga-Zumba’s nephew— a rising leader himself. Before long, Ganga-Zumba was killed, likely by a group affiliated with his nephew. As Palmares’ new leader, Zumbi rejected the treaty and resumed resistance for another 15 years. But in February of 1694, the Portuguese captured the capital after a devastating siege. Zumbi escaped, but they eventually found and ambushed him. 
And on November 20th, 1695, Portuguese forces killed Zumbi. His death was not the end of Palmares, but it was a crushing blow. After years of warfare, there were far fewer rebels in the area. Those who remained rallied around new leaders and maintained a presence, however small, through the 1760s. Though Palmares is no more, thousands of other quilombos persist to this day. November 20th, the day of Zumbi’s death, is celebrated across Brazil as the Day of Black Consciousness. But Zumbi was just one of many Palmaristas. We only know some of their names, but their fight for freedom echoes centuries later.
Facing_our_ugly_history
Ugly_History_The_Khmer_Rouge_murders_Timothy_Williams.txt
From 1975 to 1979, the Communist Party of Kampuchea ruled Cambodia with an iron fist, perpetrating a genocide that killed one fourth of the country’s population. Roughly 1 million Cambodians were executed as suspected political enemies or due to their ethnicities. The regime targeted Muslim Cham, Vietnamese, Chinese, Thai, and Laotian individuals. Outside these executions, one million more Cambodians died of starvation, disease, or exhaustion from overwork. This genocidal regime rose to power amidst decades of political turmoil. Following World War II, Cambodia’s monarch, Prince Norodom Sihanouk, successfully negotiated the country’s independence after roughly 90 years of French colonial rule. But Sihanouk’s strict policies provoked friction with many citizens, especially militant communist rebels, who had long opposed the French and now turned their attention to overthrowing the prince. This unstable situation was further complicated by a war raging outside Cambodia’s borders. In Vietnam, millions of American troops were supporting the non-communist south against the communist north. While the US petitioned for Cambodia’s support, Prince Sihanouk tried to stay neutral. But in 1970, he was overthrown by his prime minister, who allowed American troops to bomb regions of Cambodia in their efforts to target North Vietnamese fighters. These attacks killed thousands of Cambodian civilians. To regain power after being overthrown, the prince allied with his political enemies. The Communist Party of Kampuchea, also known as the Khmer Rouge, was led by Cambodians who dreamed of making their nation a classless society of rice farmers. They opposed capitalist Western imperialism and sought to lead the country to self-sufficiency. But to the public, they mostly represented a force fighting the pro-American government. Angered by destructive American bombing and encouraged by the prince’s call to arms, many Cambodians joined the Khmer Rouge. Eventually, a full-blown civil war erupted. Over five years of fighting, more than half a million Cambodians died in this brutal conflict. But the violence didn’t end when the rebels conquered Phnom Penh in April 1975. Upon taking the capital, the Khmer Rouge executed anyone associated with the previous government. Prince Sihanouk remained stripped of power and was put under house arrest, and the Khmer Rouge began evacuating city residents to the countryside. Those who couldn't make the trip by foot were abandoned, separating countless families. In this new regime, every citizen was stripped of their belongings and given the same clothes and haircut. Private property, money, and religion were outlawed. The new agricultural workforce was expected to produce impossible amounts of rice, and local leaders would be killed if they couldn’t fulfill quotas. Many prioritized fulfilling the capital’s orders above feeding workers. Underfed, overworked, and suffering from malaria and malnutrition, thousands perished. The Khmer Rouge members enforcing the system were no safer. When their plan failed to produce rice at the expected rates, Khmer Rouge leadership became paranoid. They believed that internal enemies were trying to sabotage the revolution, and they began arresting and executing anyone perceived as a threat. This brutality continued for almost four years. Finally, in 1979, Vietnamese troops working alongside defected Khmer Rouge members took control of the country. This political upheaval triggered yet another civil war that wouldn’t end until the 1990s.
In the years that followed, there was no easy path to justice for victims and their families. A hybrid UN-Cambodian tribunal was established in 2003, but it only tried Khmer Rouge members in the topmost leadership positions. Lower-level Khmer Rouge members appeared in court as well, but they weren't placed on trial. Instead, they gave testimony and offered insight into the cruel system that had enabled their superiors’ crimes. Some of these perpetrators were even legally acknowledged as victims, because they constantly feared for their lives and committed violence as a means of self-preservation. This perception of low-level Khmer Rouge members as victims rather than perpetrators extended beyond the courtroom. Like other Cambodians, most Khmer Rouge members lost family, suffered hunger, were stripped of their homes and belongings, and were overworked to exhaustion. And the paranoia amongst Khmer Rouge leadership had led to a higher rate of execution for Khmer Rouge members than for the ethnic majority population. As a result, many Cambodians today don't just see the genocide as one committed against ethnic minority groups, but also as a broad campaign of violence impacting the entire population. As of 2021, only three people had received prison sentences. Many victims would like the tribunal to pursue further trials of Khmer Rouge leaders. However, a 2018 national survey revealed that most victims feel the tribunal has contributed to justice. In the wake of such tragedy, it’s tempting to paint conflicts in simplistic terms— casting one group as oppressor and the other as oppressed. But many Cambodians live with a more complex reality. Everyone suffered, even those who contributed to the suffering of others. This perception doesn’t excuse any acts of violence. But how a society remembers traumatic events plays a part in who is seen as victim, who is seen as perpetrator, and how a shattered society can build a path into the future.
Facing_our_ugly_history
Why_are_US_cities_still_so_segregated_Kevin_EhrmanSolberg_and_Kirsten_Delegard.txt
On October 21st, 1909, 125 residents of an affluent Minneapolis neighborhood approached William Simpson, who’d just bought a plot in the area, and told him to leave. The Simpsons would be the second Black family in the otherwise white neighborhood, where they intended to build a home. When the Simpsons refused offers to buy them out, their neighbors tried blocking their home’s construction. They finally moved into their house, but the incident had a ripple effect. Just a few months after the mob harassed the Simpsons, the first racially restrictive covenant was put into place in Minneapolis. Covenants are agreements in property deeds that are intended to regulate how the property is to be used. Beginning in the mid-1800s, people in the United States and elsewhere began employing them in a new way: specifically, to racially restrict properties. They wrote clauses into deeds that were meant to prevent all future owners from selling or leasing to certain racial and ethnic groups, especially Black people. Between 1920 and 1950, these racial covenants spread like wildfire throughout the US, making cities more segregated and the suburbs more restricted. In the county encompassing Minneapolis, racial covenants eventually appeared on the deeds to more than 25,000 homes. Not only was this legal, but the US Federal Housing Administration promoted racial covenants in their underwriting manual. While constructing new homes, real estate developers began racially restricting them from the outset. Developments were planned as dream communities for American families— but for white people only. In 1947, one company began building what became widely recognized as the prototype of the postwar American suburb: Levittown, New York. It was a community of more than 17,000 identical homes. They cost around $7,000 each and were intended to be affordable for returning World War II veterans. But, according to Levittown’s racial covenants, none of the houses could “be used or occupied by any person other than members of the Caucasian race,” with one exception: servants. Between 1950 and 1970, the population of the American suburbs nearly doubled as white people flocked to more racially homogenous areas in a phenomenon known as “white flight.” The suburbs spread, replacing native ecosystems with miles of pavement and water-guzzling lawns. And their diffuse layout necessitated car travel. American automobile production quadrupled between 1946 and 1955, cementing the nation's dependence on cars. Federal programs like the G.I. Bill offered American veterans favorable lending rates for buying homes. But it was difficult for people of color to take advantage of such resources. Racial covenants restricted them from certain neighborhoods. And, at the same time, government programs labelled neighborhoods of color bad investments and often refused to insure mortgages in those areas. Therefore, banks usually wouldn’t lend money to people purchasing property in neighborhoods of color— a practice that became known as redlining. So, instead of owning homes that increased in value over time, creating wealth that could be passed to future generations, many people of color were forced to spend their income on rent. And even when they were able to buy property, their home’s value was less likely to increase. The suburbs boasted cul-de-sacs and dead ends that minimized traffic. Meanwhile, city planners often identified redlined neighborhoods as inexpensive areas for industrial development. 
So, the massive freeway projects of the mid-20th century disproportionately cut through redlined neighborhoods, accompanied by heavy industry and pollution. As a result, many neighborhoods of color experience higher rates of drinking water contamination, asthma, and other health issues. People targeted by racial covenants increasingly challenged them in court and, in 1968, they were finally banned under the Fair Housing Act. But the damage had been done. Racial covenants concentrated wealth and amenities in white neighborhoods and depressed the conditions and home values in neighborhoods of color. As of 2020, about 74% of white families in the US owned their homes, while about 44% of Black families did. That gap is greatest in Minnesota’s Twin Cities. Across the country, neighborhoods remain segregated and 90% of all suburban counties are predominantly white. Some landlords, real estate agents, and lenders still discriminate against people based on race— rejecting them, steering them to and away from certain neighborhoods, or providing inaccessibly high interest rates. Gentrification and exclusionary zoning practices also still displace and keep people of color out of certain neighborhoods. Racial covenants are now illegal. But they can still be seen on many housing deeds. The legacy of racial covenants is etched across the pristine lawns of the American suburbs. It’s a footnote in the demographic divides of every city. And it’s one of the insidious architects of the hidden inequalities that shape our world.
Facing_our_ugly_history
What_caused_the_Rwandan_Genocide_Susanne_BuckleyZistel.txt
For 100 days in 1994, the African country of Rwanda suffered a horrific campaign of mass murder. Neighbor turned against neighbor as violence engulfed the region, resulting in the deaths of over one-tenth of the country’s population. The seeds of this conflict were planted a century earlier, first when German, and later Belgian, colonizers arrived in the country. At the time, Rwanda was ruled by a monarchy of Tutsi, one of the three ethnic groups comprising the population. Tutsi and the even smaller Twa communities were minority groups, while Hutu composed the majority. Many Hutu and Tutsi civilians were on good terms, but colonial powers encouraged political division. Belgians enforced record-keeping around ethnic identity, and created a public narrative that cast Tutsi as elite rulers and Hutu as ordinary farmers. Over time, this propaganda led to intense political hostility. And while colonial powers had largely withdrawn by 1959, lingering anger motivated a Hutu revolt, forcing many Tutsi leaders to flee the country. Over the following decade, Rwanda transitioned to an independent republic with a Hutu government. This new administration argued that as the majority group, Hutu deserved exclusive access to political power. They excluded the Tutsi minority by appointing offices based on population and prohibited the return of Tutsi families that had fled years earlier. Hutu extremists also circulated propaganda blaming Tutsi for the country’s economic, social, and political problems. Discontented with their life in exile, a small group of Tutsi insurgents invaded Rwanda in 1990, beginning a violent civil war. The conflict lasted three years before it was resolved with a formal peace accord. But the war’s aftermath was rife with insecurity. While some civilians in both groups remained amicable, the treaty intensified political polarization. And in 1994, when a plane carrying the Hutu Rwandan president was shot down, the conflict broke out anew. This time, Hutu officials had prepared a deadly response to ensure they stayed in power. Working off a list of targets, government-funded Hutu militias flooded the streets, perpetrating acts of physical and sexual violence against Tutsi political enemies and civilians. During the chaotic months that followed, over 1 million Hutu civilians joined their ranks due to coercion, self-preservation, or the pursuit of personal agendas. Tutsi victims sought refuge at churches and schools where they hoped international organizations would protect them, but no outside party came to their aid. UN soldiers who’d overseen the Peace Accord were instructed to abandon Tutsi civilians, and UN leadership refused to acknowledge the genocide taking place. The violence didn’t end until mid-July, when the Tutsi army— which had instigated the previous civil war— seized control of the country. By the time the fighting was over, roughly 800,000 Rwandans had been killed, and only a small fraction of the Tutsi population was left alive. In the months that followed, there was no easy strategy for bringing the killers to justice. The UN established a special tribunal in Tanzania to try the key perpetrators. But Hutu civilians from every level of society had committed atrocities against their neighbors, friends, and even family members. There were roughly 120,000 Rwandans awaiting trial, and inmates were dying from overcrowding and poor hygiene. The new Rwandan government estimated it would take 100 years to prosecute every accused civilian in national court.
So officials determined the best path forward involved looking to the country’s past. Rwanda has a traditional process for resolving interpersonal conflicts called gacaca. Roughly translating to “justice on the grass,” gacaca had long been used to address offenses within villages. Local witnesses would offer testimony and could then speak for or against the accused. Then, appointed lay judges would determine an appropriate penalty within the community’s means. In the hope of trying perpetrators more quickly, the government adapted gacaca for its formal courts. These hybrid trials had no professional attorneys or judges, and no evidence outside the spoken word and a case file detailing the crimes of the accused. All charges were then divided into four categories: masterminding the genocide and committing acts of sexual violence, participating in the killings, physical assault, or destroying Tutsi property. Those found guilty of the first two categories were entered into the traditional court system, but the other crimes were assigned set penalties which could be reduced if the accused pled guilty. Beginning in 2002, thousands of gacaca courts convened every week. The process proved faster than conventional courts, but Rwandan opinion on the trials was mixed. Some didn’t want to accuse their neighbors in a community setting, and many potential witnesses were intimidated to prevent their testimony. Additionally, while the trials showed that not all Hutu participated in the killings, the courts only reviewed cases with Tutsi victims, ignoring the Hutu casualties incurred during the genocide and the preceding civil war. When the trials concluded in 2012, the courts had convicted 1.7 million individuals. For some families, these verdicts helped restore the dignity of those lost in the violence. For others, the trials were a decade-long reminder of a past they were desperate to leave behind.
Facing_our_ugly_history
How_one_of_the_most_profitable_companies_in_history_rose_to_power_Adam_Clulow.txt
During the 17th century, the three letters “VOC” formed the world’s most recognizable logo. These initials belonged to the Verenigde Oostindische Compagnie, or the Dutch East India Company— widely considered the most profitable corporation ever created. Starting in 1602, it cornered the booming spice market and pioneered trade routes between Asia and Europe. But such success came with an overwhelming cost in human life. When the Dutch state created the Company, it granted the organization the power to wage war, conduct diplomacy, and seize colonies throughout Asia. The Dutch East India Company was intended to make money and battle competing European empires. The Asian market was the largest at the time and spices were in great demand throughout Europe. Nutmeg was among the most precious. But it was only cultivated on Indonesia’s Banda Islands. If Dutch officials could seize exclusive control over nutmeg, they'd make their investors rich, ensure the Company’s long-term survival and deprive their adversaries of the same gains. However, their plan hinged on the submission of the Bandanese people. This was something Company officials, like the ruthless Jan Pieterszoon Coen, were willing to go to great lengths to ensure. Home to around 15,000 people, the Banda Islands were composed of village confederations controlled by rich men called orang kaya, who were expert traders. They'd retained their virtual monopoly over nutmeg for centuries, selling at the highest price to Asian and European merchants. When the Dutch East India Company arrived in the early 1600s, its officials persuaded a group of orang kaya to sign a treaty. It guaranteed protection in exchange for monopoly rights to their nutmeg. Bandanese leaders had made similar agreements before, but were able to break them without serious consequences. The Dutch represented a new threat. They attempted to build forts to control trade and stop smuggling, and insisted that all nutmeg be sold to them at deflated prices. Many Bandanese refused and relations continued to deteriorate. In 1609, a group of villagers ambushed and killed a Dutch admiral and 40 of his men. Over the next decade, tensions escalated as treaties were broken and re-signed. The Company and Jan Pieterszoon Coen, its Governor-General, began considering new strategies. The Bandanese, one official wrote, should be “brought to reason or entirely exterminated.” Coen himself believed that there could be no trade without war. In 1621, with the approval of his superiors, he staged a massive invasion and made Bandanese leaders sign another document. But this time, the terms didn’t recognize the Bandanese as a sovereign people— they were the Dutch East India Company’s colonial subjects. Soon, Dutch officials claimed they'd detected a conspiracy against them. Coen used this to eliminate further resistance. He ordered his soldiers to torture Bandanese leaders to extract confessions. Over the following months, Company troops waged a brutal campaign that decimated the population. Many Bandanese people were starved to death or enslaved and sent to distant Dutch colonies. Others jumped from cliffs rather than surrender. Thousands fled, emptying out whole villages. Some survivors resettled on other islands, where they preserved remnants of Bandanese language and culture. When the Company’s violent campaign was over, the indigenous population had plummeted to less than a thousand, most of whom were enslaved. 
The Dutch East India Company sliced the islands into plantations and imported an enslaved workforce. It was, by many measures, an act of genocide. By securing this global monopoly over nutmeg, the Company supercharged its economic development, contributing to the Dutch Golden Age. Although Coen faced criticism, he was celebrated as a national hero well into the 20th century. 400 years after the massacre on Banda, Coen’s statue still stands in the city of Hoorn— despite mounting pressure for its removal. Coen and the Dutch East India Company brought a prized commodity under their control and profits soared. But they achieved this by violently tearing another society apart.
Facing_our_ugly_history
How_did_South_African_Apartheid_happen_and_how_did_it_finally_end_Thula_Simpson.txt
On June 16th 1976, over 10,000 student protesters flooded the streets of Soweto, South Africa. For 28 years, South Africans had been living under Apartheid, a strict policy of segregation that barred the country’s Black majority from skilled, high-paying jobs, quality education, and much more. And in 1974, the government announced schools would be forced to teach many subjects in Afrikaans— a language used primarily by the nation’s white ruling elites. But when protesters rose up to fight this injustice, the government's response was swift. Armed police officers turned their weapons onto the crowd, and over the following days they killed over 150 students, including victims as young as 13. Even before Apartheid, South Africa’s long history of racial violence had already cost countless Black Africans their jobs, homes, and lives. Beginning in the 1600s, first Dutch and later British settlers colonized the nation, displacing local populations from their ancestral lands. Over the following centuries, Black Africans were segregated onto so-called native reserves; and by the 20th century, that meant 70% of the population was living on roughly 13% of the country’s land. Deprived of their traditional livelihoods and seeking to escape these overpopulated regions, Black Africans began migrating to white-controlled areas. There, they worked for low wages on white-owned farms and mines, alongside the descendants of enslaved and indentured workers from across Africa and Asia. By 1948, this exploited labor force was a primary driver of South Africa’s booming economy. But economists argued that continued growth required a stable, educated, and urbanized African labor force. The ruling United Party accepted this logic, but the rival National Party argued such a workforce would threaten the white ruling class. Naming their campaign Apartheid, the Afrikaans word for “separateness,” the National Party won the 1948 elections. And once in power, they began forcibly relocating millions of Africans back to the reserves. Under Apartheid, Black workers were considered temporary visitors in white areas. They were restricted to specific zones, and their trade unions received no official recognition. The government also abolished mixed race universities, outlawed mixed marriages, segregated recreational spaces, and purged the non-white population from the voters’ roll. Within parliament at this time, Apartheid only had a small group of outspoken opponents. But outside the government, three political groups were leading a popular resistance against the regime: the Communist Party, which was then legally banned in 1950, their allies in the African National Congress, and later, a splinter group called the Pan-Africanist Congress. Despite some ideological differences, all three groups worked to mobilize the masses against Apartheid by non-violent methods. But the National Party wasn’t as restrained. On March 21st, 1960, policemen massacred demonstrators at a PAC rally, and within weeks, the ANC and PAC were outlawed. These events radicalized anti-Apartheid leaders, and in December of 1961, Nelson Mandela and other ANC and Communist Party activists established the resistance’s armed wing. While the conflict grew increasingly violent, the 1960s saw consistent economic growth throughout South Africa. The National Party attributed this to the success of Apartheid, but it was actually due to further exploitation. Employers were illegally hiring Black laborers for positions affluent white workers didn’t want to fill. 
And since this prosperity was flowing disproportionately to the ruling white minority, the government happily turned a blind eye. Meanwhile, the National Party leveraged global anti-communist sentiment to demonize its adversaries. In 1963, they tried Mandela and ten others for advancing communism and training recruits in guerrilla warfare. Eight of the defendants were sentenced to life in prison, and many remaining anti-Apartheid leaders were forced into exile. Over the next decade, a generation of student activists rose up to continue the fight, led in part by Steve Biko and the South African Students Organization. Following the Soweto Massacre, student protests spread nationwide. But police violently smothered these demonstrations, killing over 600 protesters by early 1977. That same year, Biko was taken into police custody and killed in a brutal assault. In response to this violence, the international community finally called for an end to Apartheid, with some countries enacting trade embargoes against South Africa. The state attempted to launch a reform process, creating separate parliaments for the country’s white, Coloured, and Indian populations. But the exclusion of the African majority led to more nationwide rioting. So when F.W. de Klerk, a long-time supporter of Apartheid, came to power in 1989, he concluded the only way to ensure white survival was to end the policy. On February 2nd, 1990, de Klerk shocked the world by unbanning the ANC, releasing Mandela, and calling for constitutional negotiations. Four years later, in the nation’s inaugural all-inclusive elections, Mandela became South Africa’s first Black president. But today, the national trauma of Apartheid can still be keenly felt, and many wounds from this period have yet to fully heal.
Facing_our_ugly_history
Ugly_History_Japanese_American_incarceration_camps_Densho.txt
On December 7, 1941, 16-year-old Aki Kurose shared in the horror of millions of Americans when Japanese planes attacked Pearl Harbor. What she did not know was how that shared experience would soon leave her family and over 120,000 Japanese Americans alienated from their country, both socially and physically. As of 1941, Japanese American communities had been growing in the US for over 50 years. About one-third of them were immigrants, many of whom settled on the West Coast and had lived there for decades. The rest were born as American citizens, like Aki. Born Akiko Kato in Seattle, Aki grew up in a diverse neighborhood where she never thought of herself as anything but American— until the day after the attack, when a teacher told her: “You people bombed Pearl Harbor.” Amid racism, paranoia, and fears of sabotage, people labelled Japanese Americans as potential traitors. FBI agents began to search homes, confiscate belongings, and detain community leaders without trial. Aki’s family was not immediately subjected to these extreme measures, but on February 19, 1942, President Roosevelt issued Executive Order 9066. The order authorized the removal of any suspected enemies— including anyone of even partial Japanese heritage— from designated “military areas.” At first, Japanese Americans were pushed to leave restricted areas and migrate inland. But as the government froze their bank accounts and imposed local restrictions such as curfews, many were unable to leave— Aki’s family among them. In March, a proclamation forbade Japanese Americans from changing their residency, trapping them in military zones. In May, the army moved Aki and her family, along with over 7,000 Japanese Americans living in Seattle, to “Camp Harmony” in Puyallup, Washington. This was one of several makeshift detention centers at former fairgrounds and racetracks, where entire families were packed into poorly converted stables and barracks. Over the ensuing months, the army moved Japanese Americans into long-term camps in desolate areas of the West and South, moving Aki and her family to Minidoka in southern Idaho. Guarded by armed soldiers, many of these camps were still being constructed when incarcerees moved in. These hastily built prisons were overcrowded and unsanitary. People frequently fell ill and were unable to receive proper medical care. The War Relocation Authority relied on incarcerees to keep the camps running. Many worked in camp facilities or taught in poorly equipped classrooms, while others raised crops and animals. Some Japanese Americans rebelled, organizing labor strikes and even rioting. But many more, like Aki’s parents, endured. They constantly sought to recreate some semblance of life outside the camps, but the reality of their situation was unavoidable. Like many younger incarcerees, Aki was determined to leave her camp. She finished her final year of high school at Minidoka, and with the aid of an anti-racist Quaker organization, she was able to enroll at Friends University in Kansas. For Aki’s family, however, things wouldn’t begin to change until late 1944. A landmark Supreme Court case ruled that continued detention of American citizens without charges was unconstitutional. In the fall of 1945, the war ended and the camps closed down. Remaining incarcerees were given a mere $25 and a train ticket to their pre-war address, but many no longer had a home or job to return to. Aki’s family had been able to keep their apartment, and Aki eventually returned to Seattle after college.
However, post-war prejudice made finding work difficult. Incarcerees faced discrimination and resentment from workers and tenants who replaced them. Fortunately, Japanese Americans weren’t alone in the fight against racial discrimination. Aki found work with one of Seattle’s first interracial labor unions and joined the Congress of Racial Equality. She became a teacher, and over the next several decades, her advocacy for multicultural, socially conscious education would impact thousands of students. However, many ex-incarcerees, particularly members of older generations, were unable to rebuild their lives after the war. Children of incarcerees began a movement calling for the United States to atone for this historic injustice. In 1988, the US government officially apologized for the wartime incarceration— admitting it was the catastrophic result of racism, hysteria, and failed political leadership. Three years after this apology, Aki Kurose was awarded the Human Rights Award from the Seattle Chapter of the United Nations, celebrating her vision of peace and respect for people of all backgrounds.
Facing_our_ugly_history
Ugly_history_The_1937_Haitian_Massacre_Edward_Paulino.txt
When historians talk about the atrocities of the 20th century, we often think of those that took place during and between the two World Wars. Along with the Armenian genocide in modern-day Turkey, the Rape of Nanking in China, and Kristallnacht in Germany, another horrific ethnic cleansing campaign occurred on an island between the Atlantic Ocean and Caribbean Sea. The roots of this conflict go back to 1492, when Christopher Columbus stumbled onto the Caribbean island that would come to be named Hispaniola, launching a wave of European colonization. The island’s Taíno natives were decimated by violence and disease and the Europeans imported large numbers of enslaved Africans to toil in profitable sugar plantations. By 1777, the island had become divided between a French-controlled West and a Spanish-controlled East. A mass slave revolt won Haiti its independence from France in 1804 and it became the world’s first Black republic. But the new nation paid dearly, shut out of the world economy and saddled with debt by its former masters. Meanwhile, the Dominican Republic would declare independence by first overthrowing Haitian rule of eastern Hispaniola and later Spanish and American colonialism. Despite the long and collaborative history shared by these two countries, many Dominican elites saw Haiti as a racial threat that imperiled political and commercial relations with white western nations. In the years following World War I, the United States occupied both parts of the island. It did so to secure its power in the Western hemisphere by destroying local opposition and installing US-friendly governments. The brutal and racist nature of the US occupation, particularly along the remote Dominican-Haitian border, laid the foundation for bigger atrocities after its withdrawal. In 1930, liberal Dominican president Horacio Vásquez was overthrown by the chief of his army, Rafael Trujillo. Despite being a quarter Haitian himself, Trujillo saw the presence of a bicultural Haitian and Dominican borderland as both a threat to his power and an escape route for political revolutionaries. In a chilling speech on October 2, 1937, he left no doubt about his intentions for the region. Claiming to be protecting Dominican farmers from theft and incursion, Trujillo announced the killing of 300 Haitians along the border and promised that this so-called "remedy" would continue. Over the next few weeks, the Dominican military, acting on Trujillo’s orders, murdered thousands of Haitian men and women, and even their Dominican-born children. The military targeted Black Haitians, even though many Dominicans themselves were also dark-skinned. Some accounts say that to distinguish the residents of one country from the other, the killers forced their victims to say the Spanish word for parsley. Dominicans pronounce it perejil, with a trilled Spanish "r." The primary Haitian language, however, is Kreyol, which doesn’t use a trilled r. So if people struggled to say perejil, they were judged to be Haitian and immediately killed. Yet recent scholarship suggests that tests like this weren’t the sole factor used to determine who would be murdered, especially because many of the border residents were bilingual. The Dominican government censored any news of the massacre, while bodies were thrown in ravines, dumped in rivers, or burned to dispose of the evidence. This is why no one knows exactly how many people were murdered, though contemporary estimates range from about 4,000 to 15,000. 
Yet the extent of the carnage was clear to many observers. As the US Ambassador to the Dominican Republic at the time noted, “The entire northwest of the frontier on the Dajabón side is absolutely devoid of Haitians. Those not slain either fled across the frontier or are still hiding in the bush.” The government tried to disclaim responsibility and blame the killings on vigilante civilians, but Trujillo was condemned internationally. Eventually, the Dominican government was forced to pay only $525,000 in reparations to Haiti, but due to corrupt bureaucracy, barely any of these funds reached survivors or their families. Neither Trujillo nor anyone in his government was ever punished for this crime against humanity. The legacy of the massacre remains a source of tension between the two countries. Activists on both sides of the border have tried to heal the wounds of the past. But the Dominican state has done little, if anything, to officially commemorate the massacre or its victims. Meanwhile, the memory of the Haitian massacre remains a chilling reminder of how power-hungry leaders can manipulate people into turning against their lifelong neighbors.
Facing_our_ugly_history
Ugly_History_The_US_Syphilis_Experiment_Susan_M_Reverby.txt
In the 1930s, the United States was ravaged by syphilis. This sexually transmitted infection afflicted nearly 1 in 10 Americans, producing painful sores and rashes that persisted for roughly two years. After these initial symptoms, late-stage syphilis was known to cause organ damage, heart and brain disorders, and even blindness. It was incredibly difficult to slow the disease’s spread. Experts cautioned against unprotected sex, but the infection could also be passed during childbirth. Worse still, existing treatments like mercury and bismuth were considered unreliable at best and potentially harmful at worst. Today these heavy metals are classified as toxic, but at the time, doctors were still uncovering their dangerous side effects. Amidst the uncertainty, health care professionals had two key questions. Did late-stage syphilis warrant the risks of existing treatments? And did the infected individual’s race change how the disease progressed? Many physicians were convinced syphilis affected the neurological systems of white patients and the cardiovascular systems of Black patients. There was little evidence for this theory, but the U.S. Public Health Service was determined to investigate further. In 1932, they launched a massive experiment in Tuskegee, Alabama. The town already had a small hospital, and the area was home to a large population of potential participants. The PHS collaborated with local doctors and nurses to recruit roughly 400 Black men presumed to have noncontagious late-stage syphilis, as well as 200 non-syphilitic Black men for their control group. But their recruitment plan centered on a lie. While the researchers planned to observe how syphilis would progress with minimal treatment, participants were told they would receive free drugs and care for their condition. At first, researchers gave the men existing treatments, but these were soon replaced with placebos. Under the false pretense of providing a special remedy, researchers performed painful and invasive spinal taps to investigate the disease’s neurological consequences. When patients died, the PHS would swoop in to study the body by funding funerals in exchange for autopsies. In their published studies, they listed the men as volunteers to obscure the circumstances under which they’d been recruited. Outside Alabama, syphilis treatment was advancing. A decade after the study began, clinical trials confirmed that penicillin effectively cured the disease in its early stages. But in Tuskegee, researchers were determined to keep pursuing what they considered vital research. They had yet to confirm their theories about racial difference, and they believed they would never have another opportunity to observe the long-term effects of untreated syphilis. The study’s leadership decided to withhold knowledge of new treatments from their subjects. During World War II, researchers convinced the local draft board to exempt men in their study, preventing them from enlisting and potentially accessing penicillin. The study even continued through the 1950s, when penicillin was shown to help manage late-stage syphilis. By today’s bioethical standards, withholding treatment in a research study without a patient’s informed consent is morally abhorrent. But for a large part of the 20th century, this practice was not uncommon. In the 1940s, US-led studies in Guatemala infected numerous prisoners, sex workers, soldiers, and mental health patients with sexually transmitted infections to study potential treatments.
And other studies throughout the 50s and 60s saw doctors secretly infecting patients with viral hepatitis or even cancer cells. Eventually, researchers began objecting to these unjust experiments. In the late 1960s, an STI contact tracer named Peter Buxtun convinced the PHS to consider ending the study. But after leadership decided against it, Buxtun sent his concerns to the press. In July of 1972, an exposé of the Tuskegee study made headlines across the country. Following public outcry, a federal investigation, and a lawsuit, the study was finally shut down in 1972— 40 years after it began and 30 years after a treatment for syphilis had been found. No evidence of any racial difference was discovered. When the study ended, only 74 of the original 600 men were alive. 40 of their wives and 19 of their children had contracted syphilis, presumably from their husbands and fathers. In the wake of this tragedy, and concerns about similar experiments, Congress passed new regulations for ethical research and informed consent. But systemic racism continues to permeate medical care and research throughout the US. To truly address these issues, the need for structural change, better access to care, and transparency in research remains urgent.
Facing_our_ugly_history
The_true_cost_of_gold_Lyla_Latif.txt
Gold is one of Earth’s most valuable resources, with one kilogram regularly valued at over 55,000 US dollars. In 2020, Mali produced an estimated 71.2 tons of gold. But Mali saw only $850 million from gold in 2020, even though that much gold is worth billions of dollars; the country also likely produced more than the reported 71.2 tons. The situation isn’t unique: a number of other gold-rich countries in Africa, including Mauritania, Senegal, Guinea, Côte d’Ivoire, Ghana, Burkina Faso, and Niger, also aren’t seeing the income they should, given the price of gold. The force behind this is greed on an individual, corporate, and national scale, and a corrupt system that perpetuates itself. Although Mali has abundant gold, the country lacks the infrastructure to mine and export it. So the government allows multinational corporations to apply for licenses to mine gold in exchange for taxes paid to Mali’s government. These taxes should, theoretically, finance development: building the infrastructure to mine gold, improving the economy, and providing citizens with public goods like healthcare and education. Tax money alone isn’t enough to do these things, of course: a government also has to be invested in its people’s well-being, and government corruption can prevent progress. But without adequate funds, even the best-intentioned government doesn’t stand a chance of improving circumstances for its citizens. Foreign corporations exploit Mali’s need for tax revenue to get the government to sign on to very unfavorable yet perfectly legal contracts. For example, one such contract stated that no corporate taxes would be owed for the first five years, costing Mali millions in tax revenue. Meanwhile, mining licenses sometimes allow these corporations to take samples of gold out of the country without registering them or paying taxes on them. These should be small amounts of gold used to test for quality, but the license doesn’t limit the size of samples, so this creates a loophole where corporations export large amounts of gold without paying any tax. The multinational corporations are also evading taxes they are legally required to pay. They filter profits through a labyrinth of tax havens that’s difficult to trace. Or they exaggerate their expenses so they end up owing very little in taxes. For instance, a corporation in Mali uses a subsidiary in Ireland to manage its operations and another subsidiary in the Netherlands to license its brand name. The corporation in Mali pays management fees to the Irish subsidiary and pays intellectual property license fees to the Dutch company, all for enormous sums. These costs are deducted from overall profits, leaving the amount subject to taxes at a bare minimum. These companies also buy gold on the black market. Local, small-scale miners often operate without a license, so the government is unaware of how much gold they mine. Corporations buy gold from these miners, avoiding the cost of mining the gold themselves, and pay the miners far below market value. Then they turn around and tell the government they incurred huge expenses mining gold they didn’t mine at all. There’s no way for Mali’s revenue authority to verify this information, causing the country to lose even more tax money. Similarly, corporations pay corrupt government officials to help them smuggle gold across borders, primarily to the United Arab Emirates, rather than operating through legal channels.
In 2016, Mali reported around $200 million of exported gold, but the UAE reported receiving slightly over $1.5 billion of imported gold from Mali that same year. The gold is then sold to European, American, and Asian markets from the UAE, with no questions asked about its origins. Similar patterns can be seen with gold-rich countries across Africa, indicating that gold smuggling is happening on a massive scale, without ever being subject to taxes. All of this creates a vicious cycle, forcing a continued reliance on the corporations that helped create the situation in the first place. More than half of Mali’s citizens live below the international poverty line, while their nation’s wealth lines the pockets of foreign corporations and corrupt officials.
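The scale of the shortfall is easy to quantify. Below is a minimal back-of-the-envelope sketch in Python using only the figures quoted in this transcript; the flat $55,000-per-kilogram price is the narration's rough figure rather than market data, so the totals are illustrative estimates, not audited numbers.

# Rough check of the transcript's figures (illustrative only; real gold
# prices fluctuate, and actual production may exceed the reported tonnage).
GOLD_PRICE_PER_KG = 55_000          # USD, the transcript's rough figure
KG_PER_METRIC_TON = 1_000

reported_output_tons = 71.2         # Mali's reported 2020 production
revenue_received = 850_000_000      # USD Mali actually saw from gold in 2020

market_value = reported_output_tons * KG_PER_METRIC_TON * GOLD_PRICE_PER_KG
print(f"Implied market value: ${market_value / 1e9:.1f} billion")
print(f"Revenue received:     ${revenue_received / 1e9:.2f} billion")
print(f"Shortfall:            ${(market_value - revenue_received) / 1e9:.1f} billion")

# The 2016 export discrepancy cited above:
mali_reported_exports = 200_000_000     # USD of gold Mali reported exporting
uae_reported_imports = 1_500_000_000    # USD of Malian gold the UAE reported importing
print(f"Unreported flow:      ${(uae_reported_imports - mali_reported_exports) / 1e9:.1f} billion")

Under these assumptions, the reported output alone was worth roughly $3.9 billion against the $850 million Mali received, and the 2016 trade figures leave about $1.3 billion of gold unaccounted for.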
Facing_our_ugly_history
One_of_historys_most_dangerous_myths_Anneliese_Mehnert.txt
From the 1650s through the late 1800s, European colonists descended on South Africa. First, Dutch and later British forces sought to claim the region for themselves, with their struggle becoming even more aggressive after discovering the area’s abundant natural resources. In their ruthless scramble, both colonial powers violently removed numerous Indigenous communities from their ancestral lands. Yet despite these conflicts, the colonizers often claimed they were settling in empty land devoid of local people. These reports were corroborated in letters and travelogues by various administrators, soldiers, and missionaries. Maps were drawn reflecting these claims, and prominent British historians supported this narrative. Publications codifying the so-called Empty Land Theory had three central arguments. First, most of the land being settled by Europeans had no established communities or agricultural infrastructure. Second, any African communities that were in those regions had actually entered the area at the same time as Europeans, so they didn’t have an ancestral claim to the land. And third, since these African communities had probably stolen the land from earlier, no-longer-present Indigenous people, the Europeans were within their rights to displace these African settlers. The problem is that all three of these arguments were completely false. Almost none of this land was empty and Africans had lived here for millennia. Indigenous South Africans simply had a different practice of land ownership from the Dutch and British. Land belonged to families or groups, not individuals. And even that ownership was more focused on the land’s agricultural products than the land itself. Community leaders would distribute seasonal land rights, allowing various nomadic groups to graze cattle or forage for vegetation. Even the groups that did live in large agricultural settlements didn’t believe they owned the land as private property. But the colonizing Europeans had no respect for this system of ownership. They concluded the land belonged to no one and could therefore be divided amongst themselves. In this context, claims that the land was “empty” were an ignorant oversimplification of a much more complex reality. But the Empty Land Theory allowed British academics to rewrite history and minimize native populations. In 1894, the European parliament in Cape Town took this exploitation even further by passing the Glen Grey Act. This decree made it functionally impossible for native Africans to own land, shattering the system of collective tribal ownership and creating a class of landless people. To justify the theft, Europeans painted the locals as barbarians who lacked the capacity for reason and were better off being ruled by the colonizers. This strategy of stripping locals of their right to ancestral lands and casting native people as savages has been employed by many colonizers. Now known as the Empty Land Myth, this is a well-established technique in the colonial playbook, and its impact can be found in the history of many countries, including Australia, Canada, and the United States. And in South Africa, the influence of this narrative can be traced directly to a brutal campaign of institutionalized racism. Barred from their lands, the once self-sufficient population struggled as migrant laborers and miners on European-owned property. The law forbade them from working certain skilled jobs, and forced Africans to live in racially segregated areas. 
Over time, these racist policies intensified, mandating separation in urban areas, restricting voting rights, and eventually building to apartheid. Under this system, African people had no voting rights, and the education of native Africans was overhauled to emphasize their legal and social subservience to white settlers. This state of legally enforced racism persisted through the early 1990s, and throughout this period, colonists frequently invoked the Empty Land Theory to justify the unequal distribution of land. South African resistance movements fought throughout the 20th century to gain political and economic freedom. And since the 1980s, South African scholars have been using archaeological evidence to correct the historical record. Today, South African schools are finally teaching the region's true history. But the legacy of the Empty Land Myth still persists as one of the most harmful stories ever told.
Facing_our_ugly_history
The_dark_history_of_the_overthrow_of_Hawaii_Sydney_Iaukea.txt
It was January 16th, 1895. Two men arrived at Lili’uokalani’s door, arrested her, and led her to the room where she would be imprisoned. A group had recently seized power and now confiscated her diaries, ransacked her house, claimed her lands, and hid her away. Lili’uokalani was Hawaii’s queen. And she ruled through one of the most turbulent periods of its history. 75 years earlier, American missionaries first arrived in Hawaii. They quickly amassed power, building businesses and claiming arable land that they transformed into plantations. They worked closely with the ali’i, or sacred Hawaiian nobility closely linked to the Gods. The ali’i appointed missionaries to government roles where they helped establish Hawaii as a sovereign kingdom with a constitutional monarchy. But once certain business opportunities emerged— namely, the prospect of exporting sugar to the US tariff-free— some of their descendants shifted positions. They formed a political group known as “the Missionary Party” and began plotting to annex Hawaii, bringing it under US control. Lili’uokalani and her siblings were born into an ali’i family. In 1874, her brother, Kalākaua, ascended the throne, but thirteen years into his reign, the emerging threat crystallized. The Missionary Party called a meeting where an all-white militia surrounded and forced the king to sign new legislation. Later called the Bayonet Constitution, it stripped Native Hawaiians of their rights, diminished the monarchy’s power, and ceded control to this group of white businessmen. Four years later, King Kalākaua died, heartbroken, Lili’uokalani said, “by the base ingratitude of the very persons whose fortunes he had made.” Prepared to fight, she assumed the throne. Despite death threats and rumors of insurgency, Queen Lili’uokalani was determined to restore power to her people— an estimated two thirds of whom had lost their voting rights. Flooded with requests for change, she authored a new constitution. But before she introduced it, the so-called “Committee of Safety,” a new organization that consisted of many Missionary Party members, hatched another plot. Under the false pretense that this new constitution endangered American property and lives, they staged a coup on January 17th, 1893. More than 160 US Marines marched to the palace, where the “Committee of Safety” removed Queen Lili’uokalani from office. Thousands of Hawaiian people protested, some wearing hat bands reading, “Aloha ’Āina,” or “love of the nation.” The alleged “Provisional Government” declared Hawaii a Republic the following year. They proclaimed that Hawaiians couldn’t vote or be government employees without signing a new “oath of allegiance.” Many refused. The following year, some of Lili’uokalani’s supporters attempted a counterrevolution. The Republic responded brutally, jailing hundreds and sentencing six people to death. In exchange for their release, the Republic made Lili’uokalani sign a document that claimed to relinquish her throne, and they imprisoned her in the palace. Under constant guard, she composed songs expressing her love for her people and began making a patchwork quilt that told the story of her life. While she was only allowed news that had been reviewed by her jailers, her supporters often brought her bouquets from the garden she’d dedicated to them, wrapped in newspaper. After 8 months, Lili’uokalani was placed under house arrest. As soon as it was lifted, she traveled to Washington, D.C. with Hawaiian nationalists and over 20,000 signatures. 
There they successfully convinced Congress to halt the Republic’s annexation treaty. But the following year, the Spanish-American War began. Seeing Hawaii as a strategic military base, President William McKinley declared it a US territory on July 7th, 1898— breaking international law and devastating Queen Lili’uokalani and her people. She spent the rest of her life petitioning for the restoration of her lands, Native Hawaiian rights, and national liberation. When she died in 1917, these dreams were unrealized. A member of the group that forced Queen Lili’uokalani out of office once declared, “If we are ever to have peace and annexation the first thing to do is obliterate the past.” They failed at this goal. Queen Lili’uokalani left a resilient legacy: Her commitment to her land and people never wavered and many Hawaiians continue to fight in her memory. Speaking of Hawaii’s children, Queen Lili’uokalani said, “It is for them that I would give the last drop of my blood.”
Facing_our_ugly_history
Ugly_History_The_El_Mozote_murders_Diana_Sierra_Becerra.txt
In 1984, a group of radio broadcasters and operators walked into the abandoned village of El Mozote in El Salvador. Fireflies illuminated the remnants of a massacre that had taken place three years earlier. Led by Colonel Domingo Monterrosa, government soldiers had tortured, raped, and murdered 978 people, including 553 children. The youngest victim, Concepción Sánchez, was just three days old. Both the US and Salvadoran governments denied the massacre had taken place, and the slaughter left few people alive to tell their story. But with the help of Radio Venceremos, one of those survivors, Rufina Amaya, shared her testimony— exposing both Monterrosa and the governments funding his crimes. This massacre was one in a long line of atrocities committed against El Salvador’s farmers. Since the 1800s, a handful of oligarchs had controlled nearly all the country’s land, forcing laborers to work for almost nothing. In 1932, Indigenous farm workers led an insurrection, but the dictatorial government responded by committing genocide against these communities. From then on, one military dictatorship after another ruled the country in concert with wealthy landowners. Their power only grew in the 1960s, when the United States began supplying the regime with military aid. The US wanted to stop the spread of reformist and revolutionary movements, which it saw as threats to capitalism. So it spent huge sums of money training Salvadoran soldiers and “death squads”— fascist military units versed in brutal counter-insurgency methods. Throughout the 1970s, these forces slaughtered farmers who organized to demand basic rights, such as living wages, food, and clean water. Finally, in 1980, farmers and urban workers formed the Farabundo Martí National Liberation Front. This coalition of guerrilla groups fought to overthrow the dictatorship and build a socialist society that met the needs of laborers. These revolutionaries were attacked from every direction. Colonel Monterrosa led a special battalion intent on destroying the FMLN, using tactics he’d learned at an American military school. State forces terrorized farmers to stop them from joining or aiding the guerrillas. But one group of rebels would not be silenced: the operators of Radio Venceremos. This clandestine guerrilla radio station began broadcasting in 1981, and its broadcasters, Santiago and Mariposa, became the voice of the revolution. They transmitted news from the front lines and reported military abuses that no other source covered. The station’s politics and popularity made it a high-profile target. And because the station operated in a relatively small area, its broadcasters had to move constantly to evade capture. To communicate undetected, the group modified two radios into telephones, linked together through kilometers of barbed wire covering the countryside. This secret telephone line helped the rebels stay one step ahead of their pursuers. In addition to reporting news, the radio broadcast educational programs in areas under guerrilla control. Here, farmers organized democratic councils to govern themselves, alongside cooperatives, schools, and medical clinics. Organizers also encouraged civilian women to participate in these councils to ensure the revolution overthrew both capitalism and patriarchy. Women made up roughly a third of the guerrillas, working in a huge variety of roles. Colonel Monterrosa was obsessed with destroying Radio Venceremos. In October 1984, government soldiers finally captured the station’s radio transmitter.
Monterrosa himself went to retrieve the equipment and held a theatrical press conference celebrating his “decisive blow to the subversives.” But in reality, the radio team had outsmarted him once again. The transmitter was booby-trapped. Once Monterrosa’s helicopter left the press conference, radio members detonated the device over El Mozote, killing the colonel near the village he had massacred. Monterrosa’s death was one victory in a much larger conflict. The civil war raged on for 8 more years before concluding in 1992, when peace accords dissolved the oppressive National Guard and allowed the FMLN to become an electoral party. But these accords didn’t address problems of deep, structural inequality. In 1993, the UN Truth Commission reported that over 75,000 people died during the war. Yet the Salvadoran legislature prevented the prosecution of war crimes and continues to obstruct justice to this day. As of 2021, no participating American officials have been put on trial, and only one individual from the Salvadoran government has been sentenced for war crimes. Historical erasure exists in the US as well, where these and other stories of US intervention in Central America are rarely taught in public schools. But the victims refuse to be forgotten. Rufina Amaya continued to share her testimony until her death in 2007. And survivors of other massacres still organize to denounce state violence. They map old massacre sites, exhume and bury loved ones, and build sanctuaries and museums, all in the hope of pollinating a more just future.
Facing_our_ugly_history
The_dark_history_of_zombies_Christopher_M_Moreman.txt
Animated corpses appear in stories all over the world throughout recorded history. But zombies have a distinct lineage— one that traces back to Equatorial and Central Africa. The first clue is in the word “zombie” itself. Its exact etymological origins are unknown, but there are several candidates. The Mitsogho people of Gabon, for example, use the word “ndzumbi” for corpse. The Kikongo word “nzambi” refers variously to the supreme being, an ancestor with superhuman abilities, or another deity. And, in certain languages spoken in Angola and the Congo, “zumbi” refers to an object inhabited by a spirit, or someone returned from the dead. There are also similarities in certain cultural beliefs. For example, in Kongo tradition, it’s thought that once someone dies, their spirit can be housed in a physical object which might bring protection and good luck. Similar beliefs about what might happen to someone’s soul after death are held in various parts of Africa. Between 1517 and 1804, France and Spain enslaved hundreds of thousands of African people, taking them to the Caribbean island that now contains Haiti and the Dominican Republic. There, the religious beliefs of enslaved African people mixed with the Catholic traditions of colonial authorities, and a religion known as “vodou” developed. According to some vodou beliefs, a person’s soul can be captured and stored, becoming a body-less “zombi.” Alternatively, if a body isn’t properly attended to soon after death, a sorcerer called a “bokor” can capture a corpse and turn it into a soulless zombi that will perform their bidding. Historically, these zombis were said to be put to work as laborers who needed neither food nor rest and would enrich their captor’s fortune. In other words, zombification seemed to represent the horrors of enslavement that many Haitian people experienced. It was the worst possible fate: a form of enslavement that not even death could free you from. The zombi was deprived of an afterlife and trapped in eternal subjugation. Because of this, in Haitian culture, zombis are commonly seen as victims deserving of sympathy and care. The zombie underwent a transformation after the US occupation of Haiti began in 1915— this time, through the lens of Western pop culture. During the occupation, US citizens propagated many racist beliefs about Black Haitian people. Among false accounts of devil worship and human sacrifice, zombie stories captured the American imagination. And in 1932, zombies debuted on the big screen in a film called “White Zombie.” Set in Haiti, the film’s protagonist must rescue his fiancée from an evil vodou master who runs a sugar mill using zombi labor. Notably, the film’s main object of sympathy isn’t the enslaved workforce, but the victimized white woman. Over the following decades, zombies appeared in many American films, usually with loose references to Haitian culture, though some veered off to involve aliens and Nazis. Then came the wildly influential 1968 film “Night of the Living Dead,” in which a group of strangers tries to survive an onslaught of slow-moving, flesh-eating monsters. The film’s director remarked that he never envisioned his living dead as zombies. Instead, it was the audience who recognized them as such. But from then on, zombies became linked to an insatiable craving for flesh— with a particular taste for brains added in 1985’s “The Return of the Living Dead.” In these and many subsequent films, no sorcerer controls the zombies; they’re the monsters.
And in many iterations, later fueled by 2002′s “28 Days Later,” zombification became a contagious phenomenon. For decades now, artists around the world have used zombies to shine a light on the social ills and anxieties of their moment— from consumer culture to the global lack of disaster preparedness. But, in effect, American pop culture also initially erased the zombie’s origins— cannibalizing its original significance and transforming the victim into the monster.
Facing_our_ugly_history
The_records_the_British_Empire_didnt_want_you_to_see_Audra_A_Diptée.txt
In 2009, five Kenyan people took a petition to the British Prime Minister’s office. They claimed they endured human rights abuses in the 1950s, while Kenya was under British colonial rule, and demanded reparations. They had vivid accounts and physical scars from their experiences— but their testimonies were undermined. They had no documentary evidence that Britain sanctioned systems of torture against Kenyans— at least, not yet. Thousands of secret files were waiting to be discovered. In 2010, a historian joined the trial as an expert witness and attested to having seen references to missing documents. They noted that Kenya had repeatedly requested the return of stolen papers, which the British government had refused. In fact, many historians suspected there were gaps in the archives. As a result, the court ordered the release of any relevant documents. And, days later, British officials acknowledged that 1,500 pertinent files were being held in a high-security archive. It soon became clear that these were just a small sample of documents Britain hid between the 1950s and 70s, while former colonies declared independence, as part of a widespread colonial British policy called Operation Legacy. The policy was for British colonial officers to destroy or remove documentation that might incriminate Britain and be of strategic value to the new governments. They were instructed to destroy, alter, or secretly transport these papers to the UK. Documents slated for destruction were to be burnt to ashes or sunk in weighted crates far from shore. During the trial, between 2010 and 2013, an independent historian revealed they had located more than 20,000 previously hidden Operation Legacy files from 37 former colonies. Finally, an estimated 1.2 million colonial files, sprawling kilometers in the archive’s so-called “Special Collections,” were also exposed. And these were only the documents that British forces kept. How many were destroyed— and what information they contained— remains unknown. About 3.5 tons of colonial documents were slated for incineration in Kenya. Ultimately, Operation Legacy’s objective was to obscure critical aspects of the truth. In the words of Britain’s attorney-general in Kenya, “If we are going to sin, we must sin quietly.” So, what really happened in Kenya? Beginning in 1895, the British administration forcibly removed people from their traditional lands, giving the most fertile areas to European settlers to establish large-scale farms. They mandated forced labor systems, implemented reservations for Indigenous African peoples, and restricted their movement. Kenyan people resisted these incursions from the start and grew increasingly organized over time. One movement, the Kenya Land and Freedom Army, aimed to forcibly remove white settlers and overthrow the colonial government. When the British declared a state of emergency in 1952, they were giving themselves permission to take otherwise illegal special measures to regain control. The newly revealed Operation Legacy documents confirmed that people suspected of participating in the resistance were subjected to horrible abuses. Between 1952 and 1959, the British imprisoned over 80,000 people without trial, sentenced over 1,000 people convicted as terrorists to death, and imposed extreme surveillance and interrogation tactics. Some people were beaten to death. Others were raped or castrated. Many were shackled at the wrist for years. Children were killed. One person was burnt alive.
Ndiku Mutwiwa Mutua testified to being castrated while handcuffed and blindfolded. Wambugu Wa Nyingi said he was suspended upside-down, beaten, and had water thrown on his face until he could barely breathe. Jane Muthoni Mara said she was sexually violated with a hot bottle, and imprisoned for years without cause. In response to the new evidence, the British government issued a formal apology, and made an out-of-court financial settlement with the 5,228 Kenyan claimants ultimately involved in the case. The original five claimants had made history— and paved the way for it to be rightfully rewritten. The uncovered files challenge fundamental myths about British colonialism as a benevolent institution that brought freedom and democracy to its subjects, then graciously gave them independence. Instead, the newly exposed evidence confirms what many people knew to be true, because they lived it— and survived to rescue history from the ashes.
Facing_our_ugly_history
Ugly_History_Witch_Hunts_Brian_A_Pavlac.txt
In the German town of Nördlingen in 1593, an innkeeper named Maria Höll found herself accused of witchcraft. She was arrested for questioning, and denied the charges. She continued to insist she wasn’t a witch through 62 rounds of torture before her accusers finally released her. Rebekka Lemp, accused a few years earlier in the same town, faced a worse fate. She wrote to her husband from jail worrying that she would confess under torture, even though she was innocent. After giving a false confession, she was burned at the stake in front of her family. Höll and Lemp were both victims of the witch hunts that occurred in Europe and the American colonies from the late 15th century until the early 18th century. These witch hunts were not a unified initiative by a single authority, but rather a phenomenon that occurred sporadically and followed a similar pattern each time. The term “witch” has taken on many meanings, but in these hunts, a witch was someone who allegedly gained magical powers by obeying Satan rather than God. This definition of witchcraft spread through churches in Western Europe starting at the end of the 15th century. It really gained traction after the pope gave a friar and professor of theology named Heinrich Kraemer permission to conduct inquisitions in search of witches in 1485. His first, in the town of Innsbruck, was met with resistance from the local authorities, who disapproved of his harsh questioning of respectable citizens and shut down his trials. Undeterred, he wrote a book called the "Malleus Maleficarum," or "Hammer of Witches." The text argued for the existence of witches and suggested ruthless tactics for hunting and prosecuting them. He singled out women as easier targets for the devil’s influence, though men could also be witches. Kraemer’s book spurred others to write their own books and give sermons on the dangers of witchcraft. According to these texts, witches practiced rituals including kissing the Devil’s anus and poisoning or bewitching targets the devil singled out for harm. Though there was no evidence to support any of these claims, belief in witches became widespread. A witch hunt often began with a misfortune: a failed harvest, a sick cow, or a stillborn child. Community members blamed witchcraft, and accused each other of being witches. Many of the accused were people on the fringes of society: the elderly, the poor, or social outcasts, but any member of the community could be targeted, even occasionally children. While religious authorities encouraged witch hunts, local secular governments usually carried out the detainment and punishment of accused witches. Those suspected of witchcraft were questioned and often tortured— and under torture, thousands of innocent people confessed to witchcraft and implicated others in turn. Because these witch hunts occurred sporadically over centuries and continents, the specifics varied considerably. Punishments for convicted witches ranged from small fines to burning at the stake. The hunt in which Höll and Lemp were accused dragged on for nine years, while others lasted just months. They could have anywhere from a few to a few hundred victims. The motivations of the witch hunters probably varied as well, but it seems likely that many weren’t consciously looking for scapegoats— instead, they sincerely believed in witchcraft, and thought they were doing good by rooting it out in their communities. Institutions of power enabled real harm to be done on the basis of these beliefs.
But there were dissenters all along– jurists, scholars, and physicians countered books like Kraemer’s "Hammer of Witches" with texts objecting to the cruelty of the hunts, the use of forced confessions, and the lack of evidence of witchcraft. From the late 17th through the mid-18th century, their arguments gained force with the rise of stronger central governments and legal norms like due process. Witch hunting slowly declined until it disappeared altogether. Both the onset and demise of these atrocities came gradually, out of seemingly ordinary circumstances. The potential for similar situations, in which authorities use their powers to mobilize society against a false threat, still exists today— but so does the capacity of reasoned dissent to combat those false beliefs.
Facing_our_ugly_history
Can_stereotypes_ever_be_good_Sheila_Marie_Orfano_and_Densho.txt
In 2007, researchers surveyed over 180 teachers to understand if they held stereotypes about students from three racial groups. The results surfaced several negative stereotypes, labeling Black students as aggressive and stubborn, white students as selfish and materialistic, and Asian students as shy and meek. But regardless of the teachers’ other biases, the most commonly held opinion was that Asian students were significantly more industrious, intelligent, and gentle than their peers. On the surface, this might seem like a good thing, or at least better than other, negative characterizations. But treating this seemingly favorable stereotype as reality can actually cause a surprising amount of harm— to those it describes, those it doesn’t, and even those who believe it to be true. This image of humble, hard-working Asians is actually well-known as the “model minority” stereotype. Versions of this stereotype emerged in the mid-20th century to describe Chinese Americans. But following World War II, the label became commonly used to claim that Japanese Americans had overcome their mistreatment in US incarceration camps, and successfully integrated into American society. Former incarcerees were praised as compliant, diligent, and respectful of authority. In the following decades, “model minority” became a label for many Asian populations in the US. But the truth behind this story of thriving Asian Americans is much more complicated. During World War II, the US government tried to “Americanize” incarcerated Japanese Americans. They did this through English language classes, patriotic exercises, and lessons on how to behave in white American society. When incarcerees were released, they were instructed to avoid returning to their own communities and cultural practices, and instead, integrate into white society. But after decades of anti-Asian policies and propaganda, white Americans had to be persuaded that Japanese Americans were no longer a threat. So the government organized media coverage to transform the public perception of Japanese Americans from suspected traitors to an American success story. In fact, the phrase “model minority” was coined by one such article from 1966. But this article, and others like it, didn’t just cast Asian Americans as an obedient and respectful “model minority.” They also criticized so-called “problem minorities,” primarily Black Americans. Politicians who were threatened by the rising Civil Rights movement used this rhetoric to discredit Black Americans’ demands for justice and equality. They presented a fabricated story of Asian American success to paint struggling Black communities as inferior. This narrative put a wedge between Black and Asian Americans. It erased their shared history of fighting oppression alongside other marginalized groups, and pitted the two communities against each other. In doing so, the model minority myth also enforced a racial hierarchy, with white Americans on top and everyone else underneath. Certainly, many people who still believe the model minority stereotype, either consciously or unconsciously, might not agree with that idea. But comparing the imagined strengths and weaknesses of racial groups places value on how well those groups meet certain standards— typically, standards set by a white majority. In this case, the model minority stereotype suggests that marginalized groups who are compliant, gentle, and respectful of white authority are deserving of tolerance, while groups that challenge the status quo are not.
This stereotype also negatively impacts the Asian individuals it describes. According to a psychological phenomenon known as stereotype threat, members of a group often place pressure on their individual actions to avoid encouraging negative group stereotypes. But this phenomenon can occur around seemingly positive stereotypes as well. The pressure associated with living up to impossibly high standards can lead to poor performance. And teachers are less likely to notice when Asian students are struggling. Outside the classroom, social programs catering to Asian communities are frequently overlooked or cut, because they’re assumed to need less support than other disadvantaged groups. The favorable portrait created by this stereotype can also make it harder to recognize racially motivated violence and discrimination against Asian Americans. And since this stereotype carelessly groups all Asians under the same umbrella, it impacts people with various backgrounds and unique histories of discrimination. So while the model minority label might appear to benefit Asian populations at first, in practice, it works like every other racial stereotype. It reduces a group of people to a one-dimensional image. And that single image hinders our ability to understand the history, struggles, and triumphs of the individuals within that group. Acknowledging and challenging these labels is essential for building coalitions across communities and eliminating harmful stereotypes for good.
Facing_our_ugly_history
Mao_Zedongs_infamous_mango_cult_Vivian_Jiang.txt
One morning in August 1968, factory worker Wang Xiaoping overheard news of a mysterious mandatory meeting. Rumors whispered through the cafeteria described shipments of a gift from the country’s communist leader, Chairman Mao Zedong. And sure enough, managers soon dispersed a gift to every factory worker— a glass box encasing a golden wax replica of a mango. Wang Xiaoping’s factory wasn’t the only facility to receive this unusual offering. The Chairman gifted fresh mangoes to factories across China, leading employees to stay up late, touching the fruits and contemplating the meaning behind Mao's gesture. Some tried to preserve the fresh mangoes in formaldehyde, while others ate the fruit and commissioned wax replicas of their prize. In one factory, workers initiated a strange ritual: peeling and boiling their mangoes to create a “holy” broth that was spooned into their mouths. Since traditional Chinese medicine often involved boiling ingredients, it's possible this mango broth was concocted as a kind of healing tonic. Soon, fables formed that the fruit ensured a long life like the Peaches of Immortality from Chinese mythology. And by refusing to eat the mangoes himself, Mao had generously sacrificed his own longevity for the working class. But whatever Mao's intentions, this mango mania wasn’t as frivolous as it might seem. And in fact, its harmless appearance hid a much darker truth. Two years earlier, Mao Zedong had launched the Cultural Revolution, a decade-long political and ideological movement intended to erase capitalist thought and cultural traditions from Chinese society. To enact this plan, Mao called on the Red Guards, a student-led paramilitary group. He enlisted them to help eradicate the “Four Olds”— a vaguely defined set of customs, habits, and ideas often associated with the elite upper-class. Mao’s dogma was militant, and the Red Guard interpreted his vision as achievable only through violence. The Red Guard acted above law and order, ransacking temples and tombs, including those of dynastic royalty and Confucius. Homes were raided and piles of books burned in the streets. But the Red Guard’s rampage went far beyond property damage. They began holding “struggle sessions”— public spectacles designed to shame so-called class enemies. Victims were accused of holding elitist, capitalist values, and were often forced to wear heavy signs detailing their crimes. The Red Guard pressured people to accuse their friends and family. They manipulated students to denounce their teachers and parents. These sessions gradually morphed into torture and executions. After two years of the Red Guards’ chaos, Mao recanted his support and sent 30,000 factory workers to fight the Red Guard at Qinghua University. With the help of the People’s Liberation Army, these factory workers succeeded, and Mao thanked them for their service with a crate of 40 mangoes. This gesture wasn’t quite as generous as it appeared since Mao was actually passing along a gift he received from Pakistan’s foreign minister. But much worse, this reward was quickly tainted by the ideology of the Cultural Revolution. As a propaganda tool, Mao’s mangoes demanded high levels of respect. Workers boarded unheated buses in sub-zero temperatures to visit mandatory mango exhibitions organized by the government. Factory workers were scolded for not holding their replicas securely. And in Sichuan, a man who remarked that the mango was “nothing special” and “looked like a sweet potato” was arrested, tried, and executed.
For reasons mostly unknown, the mango fever broke a year and a half later. After the Red Guard was dissolved and participants were sent to the countryside for re-education, the mystifying mango faded from official propaganda. Wax from the replicas was repurposed for candles during power outages. And today you’d be lucky to find an antique mango tray or medallion while perusing a Beijing flea market. But the tale of Mao’s mangoes is just a minor story amidst a decade of painful, buried history. Discussion of the Cultural Revolution is restricted across China. And though some former Red Guards have attempted to challenge this policy by publicly reflecting and apologizing for their actions, they still avoid maligning Mao Zedong. Given the current political landscape of China, only time will tell when this history will be discussed openly and freely.
Facing_our_ugly_history
The_dark_history_of_Mount_Rushmore_Ned_Blackhawk_and_Jeffrey_D_Means.txt
Between 1927 and 1941, 400 workers blasted 450,000 tons of rock from a mountainside using chisels, jackhammers, and a lot of dynamite. Gradually, they carved out Mount Rushmore. Now, the monument draws nearly 3 million people to South Dakota’s Black Hills every year. But its façade belies a dark history. About 10,000 years ago, Native American people began inhabiting the Black Hills. The area became especially sacred to the Lakota people, who formed the western branch of what the US called the Sioux Nation. The Lakota believed one cave within the Black Hills to be where they first emerged. And they named one of the Black Hills mountain peaks the Six Grandfathers after their sacred directional spirits. But in the 1800s, Lakota access to this land came under threat. White settlers in North America expanded their territories by using physical violence or negotiating with Indigenous peoples. After its establishment in the late 1700s, the US government ratified hundreds of treaties with Native American nations. However, it often broke them or created them using coercion. Between 1866 and 1868, the Lakota and their allies successfully defended their land from the US military and negotiated a new treaty with the government. In the 1868 Treaty at Fort Laramie, all parties agreed that a vast territory, including the Black Hills, belonged to the Sioux Nation. In return, the Lakota would allow US travelers to pass safely through. But many aspects of the Treaty also aimed to assimilate the Lakota into white culture. This included incentives to convert them from hunting to farming, abandon their nomadic lifestyle, and wear clothes the US provided. Meanwhile, just seven years later, the US broke the treaty after an expedition found gold in the Black Hills. Miners set up camps, the military attacked and ultimately defeated the Lakota, and the US passed legislation illegally seizing the land. 50 years later, workers began etching into the Lakota’s sacred Six Grandfathers Mountain. The project was led by an arrogant sculptor named Gutzon Borglum, who had ties to the KKK. A historian originally proposed that Mount Rushmore include Western figures— like Lakota Chief Red Cloud. But Borglum chose to feature his own heroes. By October of 1941, Borglum had died from surgical complications and work stopped, though the project was unfinished. None of the four figures had torsos, as intended, and rubble was left piled below. To the Lakota, the monument was a desecration. And the presidents immortalized on the rockface all had brutal anti-Indigenous legacies. Members of the Iroquois Confederacy called George Washington “Town Destroyer” for encouraging military campaigns that burned 50 of their villages in 1779. Theodore Roosevelt championed forced assimilation and said, “I don’t go so far as to think that the only good Indians are dead Indians, but I believe nine out of 10 are.” In 1980, after the Sioux Nation had sued the US for treaty violations, the Supreme Court ruled that the Black Hills had been unlawfully taken, and the Sioux were entitled to compensation. The amount named has since reached over a billion dollars. But the Sioux Nation refused to take the money and to give up their claim to the Black Hills, maintaining that they were never for sale. So, what should happen to Mount Rushmore and the Black Hills? Responses to that question are wide-ranging. Some, including tribal leaders and Borglum’s great-granddaughter, have called for Mount Rushmore to be removed.
Others see it as an important patriotic symbol and vital aspect of South Dakota's economy that should remain. Many Lakota people want the 1868 Treaty to be honored and the now-federally controlled lands to be returned to their tribal communities. Others have said that the Lakota and the US should at least co-manage parts of the Black Hills. Currently, there are no plans for change. The US broke many of its promises with Indigenous nations, making issues like this common. Native people have been fighting for broken treaties to be honored for generations, achieving some major victories along the way. Meanwhile, if untouched, the faces engraved on the Six Grandfathers Mountain are expected to remain for thousands of years to come.
Facing_our_ugly_history
Historys_deadliest_king_by_Georges_NzongolaNtalaja.txt
On December 12, 1904, Chief Lontulu laid 110 twigs in front of a foreign commission. Every twig represented a person in his village who died because of King Leopold’s horrific regime in the Congo— all in the name of rubber. Chief Lontulu separated the twigs into four piles: tribal nobles, men, women, and children— then proceeded to name the dead one by one. His testimony joined hundreds of others to help bring an end to one of the greatest atrocities in history. Beginning in the late 1800s, European countries participated in the so-called “Scramble for Africa.” They colonized 90% of the continent, exploiting African resources and enriching their countries. Belgium had recently become an independent kingdom. Its ruler, Leopold II, wanted to acquire what he called “a slice of this magnificent African cake.” Meanwhile, he read colonial explorer Henry Morton Stanley’s reports about traveling through Africa. Stanley emphasized the Congo basin’s majesty. So, in 1879, Leopold contracted him to return to the Congo. There, Stanley deceived leaders into signing some 450 treaties allowing for land use. Leopold persuaded the US and European powers to grant him ownership of the Congo, pledging to protect free trade in the region. And on May 29, 1885, a territory more than 80 times the size of Belgium and home to 20 million people was declared his own private colony— by everyone but the people it actually belonged to. Leopold lost no time consolidating power in what he called the Congo Free State. He claimed land, raised an army, and forced many Congolese men to complete unpaid labor. Things got even worse when, in 1887, a Scottish inventor redeveloped the pneumatic tire, creating a massive international market for rubber. The Congo had one of the world’s largest supplies. Leopold seized the opportunity, requiring villages to meet ever-greater rubber quotas. Congolese men had to harvest the material from wild vines. As supplies drained, they walked for days to gather enough. Leopold’s army entered villages and held women and children hostage until the impossible quota was met. Soldiers sexually violated women and deprived children of food and water. Congolese people rebelled— they refused to cooperate, fought Leopold’s soldiers, hid in the forests, and destroyed rubber vines. Leopold’s army responded to resistance or failure to meet quotas with unflinching torture and executions. Because guns and ammunition were expensive, officers ordered soldiers to prove they used their bullets in the line of duty by removing a hand from anyone they killed. However, many soldiers hunted using their guns. To avoid harsh penalties and account for lost bullets, they cut off living people’s hands. They also used this practice as punishment. If rubber quotas weren’t met, soldiers would sever people’s hands and bring them to their commanders instead of rubber. The regime dramatically upended daily life and agriculture, causing widespread starvation and disease. Meanwhile, King Leopold built monuments and private estates with the wealth he extracted. Soon, people brought international attention to the horrific abuses of Leopold’s Congo Free State. In 1890, American journalist George Washington Williams accused King Leopold of “deceit, fraud, robberies, arson, murder, slave-raiding, and [a] general policy of cruelty.” In 1903, diplomat Roger Casement wrote a report that corroborated the nature and scale of the atrocities. It was published the following year. In response, Leopold appointed his own commission to investigate the accusations.
They heard numerous witness statements in the Congo— Chief Lontulu’s included. The report only confirmed the worst. Facing pressure, Leopold relinquished control of the Congo to the Belgian government in 1908. But this did not mean justice. The Belgian state awarded Leopold 50 million francs “in testimony for his great sacrifice in favor of the Congo.” He died the following year. Crowds booed his funeral procession. For more than 50 years following, the Congo remained a Belgian colony, until declaring independence in 1960. That year, the Congo elected its first prime minister, Patrice Lumumba. But months later, he was unseated in a US- and Belgian-backed coup. In early 1961, Lumumba was assassinated under Belgian supervision. The coup launched the country into a decades-long dictatorship. Around 10 million Congolese people are thought to have died during Leopold’s occupation and looting of the Congo. Despite this devastation, calls for reparations have gone unanswered. To this day, the monuments King Leopold built on a foundation of inconceivable cruelty can be found throughout Belgium.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Today_Stanford_CS221_AI_Autumn_2021.txt
So if I had to use one word to describe AI today, it would be surreal. It's kind of hard for me to imagine that 10 years ago, AI was very much an academic endeavor. And now, countries are forming national strategies about AI. What? So the AI Index is a project that aims to track the status of AI. And each year, they release an annual report. Here are some quotes from this report. Compute doubling every 3.4 months. Attendance at the conference NeurIPS increased over 800% in the last eight years. The number of jobs is also going up. And so quantitatively at least, it shouldn't be surprising to people that AI is becoming a big deal. Qualitatively, what I think is really interesting is that AI is transitioning from the lab to the real world. For a long time, AI was limited to relatively artificial environments, which was useful for developing methods. But now, we're seeing real-world deployment in ways that really impact people's lives. And I want to stress that AI, like any technology, is an amplifier. It makes what is good better and makes what is bad worse. And we really need to be aware of both sides. So let me start with the positives, the prospects. So here are some examples in which AI has been, well, beneficial. In the last decade, speech recognition and question answering have gotten remarkably good. And now, you can talk to your favorite assistant and expect some basic, though obviously not perfect, level of language understanding. My three-year-old is growing up thinking that talking to computers is perfectly normal. And search engines like Google have shown us the enabling power that comes with being able to tap into the world's rich information. And now, taking one step further, these assistants allow this information to be more efficiently and naturally accessible, which can be especially useful for people who do not have the means to use a computer. So there are language barriers in the world that pose significant challenges to travelers, immigrants, businesses, and minority subcommunities. And so connecting people is very valuable. So machine translation aims to overcome these barriers. Machine translation has come a long way since the '60s. And while it's far from perfect, it is really good enough for someone to get the basic gist of a document written in a different language or to have a real-time conversation with someone speaking a completely different language. Autonomous driving will someday hopefully be able to reduce the number of accidents and congestion. But a major challenge is to recognize what is going on in an unstructured environment. And computer vision has made a lot of progress towards recognizing these objects, but there is still progress to be made to ensure sufficient reliability. An interesting application is visual assistive technology. So here is an example, called Seeing AI, from Microsoft Research: you point a camera at something, and it narrates what's going on there. And so this obviously could be a game changer for the visually impaired. Auto-captioning technology is the opposite, turning sound into sight, and it's also potentially very impactful. Health care is another big area that's growing in importance, both for diagnosis and therapeutic development, especially in areas where there is a shortage of clinical expertise. So an example of this is detecting diseases based on chest x-rays or diagnosing diabetic retinopathy, which is one of the major challenges in AI and health care these days.
There's also an interesting recent dataset that shows images of COVID-19-infected cells and how they respond to certain drugs, with the hope that one day we can find drugs that can treat late-stage COVID-19. Poverty is a big problem in the world. But even figuring out the areas in greatest need is challenging. So recently, people have been using satellite imagery to try to figure this out, because gathering survey data on the ground is very, very expensive. And using machine learning, you can look at satellite images and try to predict various wealth indicators. This could be really useful for governments and NGOs to take action and monitor progress. So this sounds all great, right? So what's the catch? Well, there are a lot of things that one has to be aware of. I just want to give you a general idea of the space, and I'm going to go fairly quickly here. First, there's energy consumption. So there is a genuine cost to training the high-performing models that we're seeing today. If we look at NLP, there has been a trend of training larger and larger language models. Back in 2018, which is like ancient history now, models had only about 100 million parameters. And then BERT came along, which some of you might have heard of, and made a big splash with 300 million. And then in January this year, Microsoft released a model with 17 billion parameters. And then to top it off, OpenAI in May released a 10-times-larger model with 175 billion parameters. So this is big. Last year, there was a paper published that talked about the carbon footprint of training these models. They looked at a transformer with 200 million parameters, which would be around here on this graph, and showed that training even this model, if you used neural architecture search, produced about five times the CO2 emissions of an average car over its entire lifetime. So now, I'll leave you to speculate what GPT-3's environmental footprint is. So needless to say, a lot of people are actively trying to somehow reduce model sizes and improve efficiency without sacrificing accuracy. Privacy is another big area. So machine learning algorithms have really been developed assuming that data is just sitting there in one place, fully accessible. But our mobile phones generate a wealth of information, and we might not want to be sending all of that information up to some big internet company. Recently, there's been a lot of active work in privacy-preserving machine learning, which allows some of the learning to happen on device in a decentralized way, transmitting only aggregate statistics to a central server. Security is another major challenge, especially in high-stakes applications like autonomous driving and face identification for authentication. So here, our models not only need to be accurate, but robust against attackers and malicious behavior, which we know exist in the world. So researchers have shown-- I will show you examples-- that if you take a stop sign and post stickers on it, you can get a state-of-the-art system to think that it's a speed limit sign. Or you can actually buy these cool-looking glasses that will trick face identification into thinking that you're some celebrity you're not. So guarding against these attacks, which is quite frightening, is still a wide-open problem. Bias was mentioned in the chat. This is something that's maybe less spectacular in terms of sudden impact, but I think is more pernicious. So here's an example from machine translation.
If you take Hungarian, which does not differentiate between the words he and she, and you translate into English, the machine translation model has to hallucinate the gender. And this can reveal all sorts of stereotypes that the model is harboring. For example, she is a nurse, baker, and wedding organizer, but he is a scientist, engineer, teacher, and CEO. And there's a lot of active work showing how hard it is to actually remove these biases. So I want to say that machine learning algorithms are based on, quote unquote, objective mathematical principles. But the trained models latch onto statistics in the data. And the data comes from society, so any biases in society are reflected in the data and propagated to model predictions. And worse, sometimes they're even amplified. So here's another case study. Northpointe is a company that produced software called COMPAS that assesses whether someone is going to commit a crime again. And ProPublica, a non-profit organization that does investigative journalism, came out and said, whoa, whoa, whoa, you are not being fair: given that an individual did not re-offend, Black people are twice as likely as White people to be wrongly assigned a high risk score. But Northpointe defended themselves by saying that given a risk score of 7, 60% of White people re-offended and 60% of Black people re-offended, so therefore it's fair. So both of these actually turned out to be simply different desiderata of fairness. And unfortunately, there are impossibility results that say you can't have these two criteria and a third all hold at the same time for imperfect classifiers when the groups' base rates differ. And given that these algorithms are actually being deployed and really impacting people's lives in a huge way, this indicates that we not only need to understand the technical implications of all these algorithms, but also think about the philosophical and policy-related issues as well.
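To make those two desiderata concrete, here is a minimal sketch in Python with entirely synthetic, made-up data (not the actual COMPAS records): ProPublica's criterion compares false positive rates across groups, while Northpointe's compares calibration across groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # two hypothetical groups, 0 and 1
score = rng.integers(1, 11, n)         # risk scores 1 through 10
reoffend = rng.random(n) < score / 12  # synthetic outcomes: higher score, more risk
high_risk = score >= 7                 # hypothetical decision threshold

for g in (0, 1):
    mask = group == g
    # ProPublica's criterion: P(labeled high risk | did not re-offend, group)
    fpr = np.mean(high_risk[mask & ~reoffend])
    # Northpointe's criterion: P(re-offend | score = 7, group)
    calib = np.mean(reoffend[mask & (score == 7)])
    print(f"group {g}: FPR = {fpr:.2f}, P(re-offend | score = 7) = {calib:.2f}")
```

On synthetic data like this, where both groups have the same base rate, the two criteria can hold at once; the impossibility results say that once base rates differ, an imperfect classifier cannot satisfy both simultaneously.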
So this one's kind of scary: generating fake content. Deep learning has enabled us to generate deepfakes, such as Obama saying things that he never did, which you can find online. Or more recently, this is a blog post written by GPT-3 that made its way to number one on Hacker News. So it's completely clear, at least to me, that we've lost the ability to tell the difference between real and fake content. And given the ease and skill with which fake content can now be generated, bad actors spreading disinformation is, I think, a major threat to our society. Finally, AI systems are being deployed in dynamic environments, where you have systems which are making predictions, serving you search results, giving you recommendations, serving ads. And users are taking actions, essentially by clicking. And these actions are recorded as data. This data is used to retrain the system, which further reinforces these actions. So I think there is a very dangerous feedback loop inherent in machine learning, where all these biases are amplified and polarized, and this leads to quite unstable behavior. So I think a major open research challenge is figuring out how to build more robust systems that are not as susceptible to these unstable dynamics. So to conclude, I just want to stress that AI technology is an amplifier. And we've seen that AI can and promises to be quite beneficial to society, reducing accessibility barriers and improving efficiency. But on the other hand, it can also amplify biases, introduce new security risks, and centralize power in ways that were kind of unprecedented before. And I just want you to keep these issues in mind as we go through the course. Just because you can build it doesn't mean you should. And if we're not careful, we could potentially build something that does more harm than good. And moreover, figuring out the best way to tread the line between positive prospects and negative risks is, I think, something that requires a deep technical understanding, especially if we are to develop novel solutions. And that's something that this course is going to equip you with. So that concludes this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Machine_Learning_9_Backpropagation_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to talk about the backpropagation algorithm for computing gradients automatically. It's generally associated with training neural networks, but it's actually a far more general algorithm. So let's begin with our motivating example, which is, suppose we're doing regression with a four-layer neural network. So remember that we compute the loss with respect to a particular example, given the weights of the network, V1, V2, V3, and w: Loss = (w · σ(V3 σ(V2 σ(V1 φ(x)))) − y)². Remember the form of the neural network: you start with a feature vector, you multiply it by some weight matrix, which gives you a vector, and you send it through the activation function. You repeatedly left-multiply by a matrix and send the result through an activation function. You take the final vector, take the dot product with the final weight vector, and that gives you your score. So this is the prediction; subtract the target value and square it, and that gives you your loss. So now, if you wanted to train this neural network using stochastic gradient descent, you would need to compute the gradient of this loss function with respect to each of the parameters. So for example, you would compute the gradient of the loss with respect to V1, which gives you a gradient update that you can then use to update V1, and the same with V2, V3, and w. So now, you can sit down with this lovely expression, and you can just grind through the math and get the expressions. It's straightforward, but it's rather tedious. The question is, how can you get the gradients without doing all this manual work? So the answer to that is computation graphs. So here is our loss function again. And what we're going to do is write down the computation graph for this mathematical expression. A computation graph is a directed acyclic graph whose root node represents the final expression, this loss function, and whose other nodes represent intermediate sub-expressions, like V1 φ(x), for example. What this computation graph is going to allow us to do is apply the backpropagation algorithm and automatically get gradients out. So there are actually two purposes for doing this. The first is computing the gradients automatically. And this is how deep learning packages like TensorFlow and PyTorch work under the hood. And second, we're going to use this as a tool to gain insight into the modular structure of the gradients, and try to demystify them. Because taking gradients by hand can lead you into situations where you just have a lot of symbols. But using a graph, we can start to see the structure. OK. So our starting point is to think about functions as boxes. So imagine you have this expression, a plus b, and that gives rise to some value c. So I'm going to represent this as a very simple computation graph where you have a and b, and these arrows point into this box that does plus, and the result is labeled as c here, OK? So now, the question is, if I change a or b by a small amount, how much does c change? Well, this is just the notion of a gradient. So informally, we can look at this as a plus b equals c. Now, if I go and fiddle with a a little bit, I add epsilon, what happens to the right-hand side? Well, on the right-hand side, I just get plus 1 times epsilon. So the gradient of c with respect to a is 1, and I'm just going to write it on the edge.
So this can be interpreted as a kind of amplification, or a gain: by moving a a little bit, this is the multiplicative factor that the change gets multiplied by on its way to c. So let's do the other side. So a plus b, and you add a bit of noise to b, and again, you get c plus 1 times epsilon. So the gradient of c with respect to b is 1 as well. Here's another example, c equals a times b. So as a computation graph, a and b go into this box, which takes the product, and you get c. So what happens when you add epsilon noise to a? a plus epsilon, times b, is equal to c plus b epsilon. And now, you have b epsilon coming out. So therefore, the gradient of c with respect to a is b. And analogously, if we add noise to b, we see that the contribution to the output c is a times epsilon. Therefore, the gradient over here is a. So this all should be kind of familiar. I've just cast the sum and product rules for differentiation in graphical form. So let's do a few more small examples. These small examples are going to be the building blocks. It turns out that you can take these building blocks and compose them to build all sorts of more complicated functions. So here's the example we saw before: a plus b, and the gradients are 1 and 1. For a minus b, the gradients are 1 and minus 1, because if you add epsilon to b, then the difference is going to go down by epsilon. Here, we saw this example, a times b, and the gradients are b and a. If you look at the squared function, a squared, the gradient with respect to the input is 2a, so that's kind of the power rule. Let's consider a and b where you take the max, OK? So this one, let's think about. If I add epsilon to a, how is that going to change the max? Well, if a is greater than b, then it's just going to change the max by epsilon. But if a is less than b, then the change is going to be 0, because the max is still b. So the gradient of max of a and b with respect to a is the indicator function of whether a is greater than b or not. And symmetrically, the gradient with respect to b is the indicator of whether b is greater than a, OK? And finally, here is the logistic function: a sent through this logistic function sigma. And a little bit of algebra, which I'll spare you, produces this actually quite elegant expression, which is sigma times 1 minus sigma. And you can check that as a goes to infinity or minus infinity-- remember, the sigmoid is going to saturate at 1 or 0-- this gradient is actually going to go to 0. So that's just a simple sanity check. OK. So these are the basic building blocks. And that's really all the brute-force differentiation that we're going to do. The rest is just composition. So now, we take these building blocks, and we put them together. So here's a simple example. Suppose you take a squared, and you get b. And then you take b, and you square it, and you get c. So by the building blocks from the previous slide, we know that the gradient on this edge is going to be 2 times the input here, which is b. And the gradient along this edge is going to be 2 times a. OK. So now, using these two, we can apply the chain rule from calculus to compute the gradient of c with respect to a. And this is going to be nothing more than the product of those two quantities. So in this case, we've got 2b times 2a. And remember that b is equal to a squared; substitute that in, and you get 4a cubed. And remember, c is a to the fourth. So we can verify that this is indeed consistent with our goal. OK?
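As a quick aside, here is a minimal sketch in Python of these building blocks (my own illustration, not the course's official code). Each box returns its value together with the local gradients with respect to its inputs, and composing boxes means multiplying gradients along the path:

```python
import math

# Each "box" returns (output value, local gradients w.r.t. each input).
def add(a, b):      return a + b, (1.0, 1.0)
def sub(a, b):      return a - b, (1.0, -1.0)
def mul(a, b):      return a * b, (b, a)
def square(a):      return a * a, (2.0 * a,)
def maximum(a, b):  return max(a, b), (float(a > b), float(b > a))

def logistic(a):
    s = 1.0 / (1.0 + math.exp(-a))
    return s, (s * (1.0 - s),)   # the elegant sigma * (1 - sigma) expression

# Chain rule as a product along edges: c = (a^2)^2, so dc/da = 2b * 2a = 4a^3.
a = 3.0
b, (db_da,) = square(a)          # b = 9, local gradient 2a = 6
c, (dc_db,) = square(b)          # c = 81, local gradient 2b = 18
print(c, dc_db * db_da)          # 81.0 and 108.0, which is 4 * a^3
```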
So in general, you can compute these gradients by simply taking the product along the edges. And that's going to be really useful on this slide. OK. So now, let's turn to our first example: the hinge loss for linear classification. We actually did this one before, but I just want to do it again through the lens of a computation graph. So here's the loss function. And given this loss function, I'm going to construct the computation graph, and then compute the gradient of the loss with respect to w. So working bottom up, we have the weight vector and the feature vector, and we take the dot product; that gives us the score. We take the score and y, multiply them together, and that gives us the margin. Take 1 minus the margin, take the max of that and 0, and you get the loss. So another nice thing about the computation graph is that it allows you to annotate these sub-expressions and see what the pieces of the computation are. OK. So now, let us compute the gradient of the loss with respect to w. And all I need to do is compute the gradients along all of these edges from the loss down to w, OK? So let's begin at the top. Oh, here is our cheat sheet. Don't forget the cheat sheet. So we just pattern match. Here's a max over two things. Well, what's on this edge? It's the indicator of the first thing being greater than the second thing. OK. So the gradient here is going to be the indicator that the first thing, which is 1 minus margin, is greater than the second thing, which is 0. And now, what about this edge? Here, it's minus 1. What about this times? For times, the gradient is the other input, and the other input here is y. And here's another times, where the other input is phi of x, OK? So this allows us to think about the gradients one piece at a time. And all of the little edges are just invocations of this cheat sheet. OK. Now, we're ready to read off the gradient of the loss with respect to w. And this is just going to be the product of all the edges, OK? First, we have the indicator that 1 minus margin is greater than 0. I'm going to rewrite that as margin less than 1; you can verify that's the same thing. We have a minus sign here. And then we have y. And then we have phi of x. We multiply them all together, and that's the expression. And you can verify that this is indeed the gradient of the loss function. OK? In summary, we constructed the computation graph, we applied this cheat sheet to the individual edges, and then you just multiply them all together. And just as another note: remember, the gradient with respect to w is really about perturbations. If you change w by a little bit, how much is the loss going to change? And the change is going to be the product of all these amplifications evaluated at a particular point.
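As a sanity check on that expression, here is a minimal sketch in Python (my own illustration, with made-up numbers) of the gradient we just read off the graph, grad_w Loss = -[margin < 1] * y * phi(x):

```python
import numpy as np

def hinge_loss_and_gradient(w, phi, y):
    margin = y * np.dot(w, phi)
    loss = max(1 - margin, 0)
    # Indicator of margin < 1, times -y phi(x): the product along the edges.
    grad = -float(margin < 1) * y * phi
    return loss, grad

w = np.array([0.5, -1.0])
phi = np.array([1.0, 2.0])
y = 1
print(hinge_loss_and_gradient(w, phi, y))   # margin -1.5: loss 2.5, grad [-1. -2.]
```

Note that when the margin is at least 1, the indicator is 0 and the gradient vanishes, matching the flat part of the hinge.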
All right. So now, let's do neural networks. This is not going to be anything really new; it's just a different example. So I'm going to do a two-layer neural network, and we're going to, again, build the computation graph up. So we have the feature vector and the first-layer weight matrix V. You take the product, and then you stick this through the activation function, and we're going to label that h, which is the hidden vector. And now, we take the dot product of w and h; that gives you the score. And then the score minus y is the residual, and the residual squared is the loss. OK? Another aside is that the computation graph really allows you to see this modularity visually. The part up here is just the square loss. And the part down here is any way of computing a score. Before, we had a linear predictor. And now, we have a two-layer neural network. It could be a four-layer neural network, which in a computation graph is just a [INAUDIBLE]. OK. So that's the computation graph. Now, to perform stochastic gradient descent, we need to compute the gradient with respect to both w and V, OK? So let's compute the gradient of the loss with respect to w. And what I'm going to do is look at the edges and compute the gradients, OK? So here's our cheat sheet. OK. What goes on this edge? What's the gradient of the square? This is just 2 times the input, which in this case is 2 times the residual. What about this edge? It's a minus, so this should just be a 1 here. And then what about this edge? This is just going to be the other input here, so that is h, OK? So now, multiply all these things together, and you get the gradient of the loss function with respect to w, OK? All right. One thing you can double-check: we did do the gradient of the square loss for linear predictors, and it was also 2 times the residual times the feature vector. And now, we just have h, which is a stand-in for the feature vector as far as w is concerned. So that's a nice sanity check. All right. So now, let's do the more complicated one. We want to compute the gradient of the loss with respect to V. And this equals-- let's fill in all the edges. So first of all, notice that these top two edges are actually in common with the previous gradient, so we can go ahead and write them down. So one cool thing about computation graphs is that they allow you to see the shared structure: the gradients themselves also have common sub-expressions. OK. So now, we need to do more work here. So the gradient on this edge is going to be the other input, which is w. This is sigma, so the gradient is going to be sigma times 1 minus sigma, which is just going to be h times 1 minus h. This hollow circle here represents the elementwise product of vectors: you take two vectors, and you multiply the elements together. And this is because this function is applied elementwise. And then what about this final edge? This is just going to be phi of x, which is the other input. And now, we can just multiply the rest of these things together. So we have w, times h times 1 minus h elementwise, times phi of x transpose. So there's a slight annoyance here with transposes: here we have V times phi of x with no transpose, whereas up there we just have w dot something, and w dot is the same as w transpose, OK? But the high level is that the product of all of these green pieces yields the gradient of the loss with respect to V. OK? All right. So that finishes up this example.
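Here is a minimal sketch in Python (my own illustration, with made-up weights) of both gradients for this two-layer network, together with a finite-difference check on one entry of V:

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

phi = np.array([1.0, 2.0])            # feature vector phi(x)
V = np.array([[0.1, 0.2],
              [0.3, -0.1]])           # first-layer weight matrix
w = np.array([0.5, -0.5])             # second-layer weight vector
y = 1.0

h = sigma(V @ phi)                    # hidden vector
residual = w @ h - y
grad_w = 2 * residual * h                               # shared edges, times h
grad_V = 2 * residual * np.outer(w * h * (1 - h), phi)  # shared edges, times
                                                        # w o h o (1-h), phi(x)^T

# Finite-difference check on V[0, 0]: should approximately match grad_V[0, 0].
eps = 1e-6
Vp = V.copy(); Vp[0, 0] += eps
loss  = (w @ sigma(V  @ phi) - y) ** 2
lossp = (w @ sigma(Vp @ phi) - y) ** 2
print(grad_V[0, 0], (lossp - loss) / eps)
```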
So far, we have mainly used this graphical representation to visualize the computation of function values and gradients. But the promise of backpropagation is that we don't have to do any of that by hand at all. I just did that to illustrate the inner workings of gradient computations on the computation graph. Now, we're going to introduce the backpropagation algorithm, which is a general procedure for computing these gradients, so we never have to worry about it. I'm going to do this backpropagation for a simple example, which is just the squared loss for linear regression. And one note: previously, we've worked with symbolic expressions, but the actual algorithm is going to operate on numbers. So what I'm going to do is work with a concrete example and walk through the backpropagation algorithm on this example. The backpropagation algorithm includes two steps, a forward step and a backward step. In the forward step, we're going to compute a bunch of forward values from the leaves to the root. Each forward value fi is simply the value of the sub-expression rooted at node i. The value could be a scalar, a vector, or a matrix. So let's walk through this example here, OK. At the leaves, we have w, which is (3,1), and we have the feature vector (1,2). Now, if you take these two quantities and you take the dot product, you get 3 plus 2, which is 5. And now, you take the score, 5, and you take y, and subtract them, and you get the residual, which is 3. Notice that the forward value of this node is 5, and the forward value of this node is 3. And now, finally, you square this. The value of the square is 3 squared, which is 9, so the value at this node is 9, OK? So now, we're done with the forward phase. All we've done is evaluate the loss. But importantly, we have also remembered all the values along the way, which will come in handy. Now, in the backward step, we're going to compute a backward value, gi, one for every node. And this is going to be the gradient of the loss with respect to the value at that node: if that node changes value, how does the loss change? So the backward pass is going to compute these values from the root to the leaves. Let's do this for the example. The base case: the gradient of the loss with respect to the loss is 1. And now, we look at the gradient on this edge. We did this before; it's just 2 times the residual. OK. So now, we need to compute the backward value of this node, OK? To do that, we're going to take the backward value of the parent and multiply by whatever's on this edge. What's on this edge is 2 times the residual. The residual is 3, so it's 2 times 3, which is 6. And so 1 times 6 is 6. Notice that in computing this backward value, I'm using the intermediate computations from the forward pass. OK. So let's continue. The gradient on this edge is 1, so the backward value here is 6, which is the parent's backward value times what's on this edge, which is 1. That gives us 6. And then the backward value of this node is 6 times what's on this edge, which is the other input, (1,2). And that gives us (6,12). So to conclude, the backpropagation algorithm takes this expression and these concrete values and produces the gradient of the loss with respect to w evaluated at those concrete values. And that's (6,12). OK? And remember, the backpropagation algorithm works for any computation graph: layered neural networks, much more complicated models. But this is just a simple example to show you the dynamics of the forward pass and the backward pass. OK.
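Here is that worked example as a minimal Python sketch (my own illustration, not the course's official implementation; note that y = 2 is implied, since the score is 5 and the residual is 3):

```python
import numpy as np

w = np.array([3.0, 1.0])
phi = np.array([1.0, 2.0])
y = 2.0

# Forward step: compute and remember every intermediate value, leaves to root.
score = w @ phi            # 5.0
residual = score - y       # 3.0
loss = residual ** 2       # 9.0

# Backward step: each backward value is the parent's backward value times
# the gradient on the connecting edge, root to leaves.
g_loss = 1.0                          # base case
g_residual = g_loss * 2 * residual    # edge gradient of squaring: 2 * 3 = 6
g_score = g_residual * 1.0            # edge gradient of (score - y) w.r.t. score
g_w = g_score * phi                   # edge gradient of the dot product: phi(x)
print(loss, g_w)                      # 9.0 and [ 6. 12.]
```

Notice how every backward value reuses a value remembered during the forward pass.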
So for linear predictors, it turns out that the training loss for a convex loss is going to be a convex function, which means that it has a single global minimum, which means that if you start at some point and just follow your nose by running gradient descent with an appropriate step size, it's going to converge to the global optimum. But for neural networks, the TrainLoss is non-convex, which means that there are no guarantees at all that you're going to converge to the global minimum. If you're lucky, you'll converge to a local minimum. So optimization of neural networks is in principle hard. But of course, people do it anyway, and you actually get some good results. So there's a gap between theory and practice, which is not quite understood yet. But in practice, getting neural networks to train properly is a little bit hard. I think of it as kind of like driving stick. There's just a lot of degrees of freedom. You can stall and get stuck. But if you know what you're doing, you can actually get a lot of good results. OK? So here are some examples, just to give you a flavor of what needs to be done, OK? So here is a two-layer neural network, and here is the loss function. The first point is that initialization matters. So if you have a convex function, wherever you initialize, if you run long enough, you converge to the global optimum. For a non-convex function, if you initialize here, you might get stuck up here. If you initialize over here, you'll get stuck here, and so on. So generally, you have to be a little bit careful about how you initialize. You can't initialize at 0, because it turns out that all the rows of your weight matrix are going to stay identical, which is not very useful. So you typically initialize around 0 with some kind of random noise. Or you can use pre-training to initialize your neural network as well, which we won't cover right now. Another thing that people do is called overparameterization. So this corresponds to adding more hidden units than you really need, which corresponds to having a lot of rows in this matrix. And the idea here is that the more hidden units you have, the more "chances" you have of having the network learn something reasonable from your data. So some of the units might die off and not be very useful, but maybe some fraction of them will actually be useful. And the final thing that people do is use adaptive step sizes, which are generally extensions of stochastic gradient descent. Remember, in stochastic gradient descent, we had a single step size, eta, which controlled how fast you move. With methods like AdaGrad or Adam, you actually get a per-feature, or per-parameter, step size. So for every weight, you get a number which dictates how fast you should be moving in that direction. And this generally leads to better results. OK. So one maybe high-level thing to keep in mind is: don't let your gradients vanish or explode. If I explain this, it will become kind of clear. So when you run gradient descent or stochastic gradient descent, if your gradients vanish, which means they become too small or close to zero, then you'll get stuck and you won't make progress. But if your gradients become too large, then you'll just explode, and you will oscillate and might diverge. So careful initialization, the setting of the step sizes, and even the design of the neural network architecture: all of this is about making sure that your gradients don't vanish or explode. OK.
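Returning to the adaptive step sizes mentioned a moment ago, here is a minimal sketch of an AdaGrad-style update. This is an illustration rather than the exact variant any particular library implements; the function name and the eta and eps values are placeholders.

```python
import numpy as np

def adagrad_update(w, grad, cache, eta=0.1, eps=1e-8):
    """One AdaGrad-style step: each parameter gets its own effective step
    size, which shrinks in directions that have already seen large gradients."""
    cache += grad ** 2                         # accumulate squared gradients per parameter
    w -= eta * grad / (np.sqrt(cache) + eps)   # per-parameter step size: eta / sqrt(cache)
    return w, cache

# Usage inside an SGD loop: initialize cache = np.zeros_like(w), then call
# adagrad_update(w, grad, cache) with each stochastic gradient.
```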
So that's all the guidance I'll provide you. There's a lot more to be said on this topic; we're just giving you a high-level overview. OK. So let's summarize now. The most important topic of this module is that of a computation graph. This allows you to represent arbitrary mathematical expressions, and these expressions are built out of simple building blocks. And I hope that the idea of computation graphs will allow you to get a better visual understanding of what your mathematical expressions are doing, and also what gradient computations are about. And then we saw the backpropagation algorithm, which is this general-purpose algorithm for leveraging the computation graph to compute the gradients. So notice that we've done this in the context of neural networks. But I stress that computation graphs and backpropagation are fully general: they allow you to handle many, many functions. And this generality is one of the reasons it allows you to iterate very quickly on new types of models and loss functions, and it opens up this new paradigm for model development, differentiable programming, which we'll talk about in a future module. All right. That's it. Thanks.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_9_EM_Algorithm_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to talk about the EM algorithm for learning Bayesian networks when we have unobserved variables in our training data. So let's start with our familiar movie rating example. So here, remember this Bayesian network. We have a genre, which could be drama or comedy, and we have two people, Jim and Martha, who are going to produce ratings of this movie, which I denote R1 and R2. And before, when we observed all the variables in our training data, we could just use maximum likelihood, which amounts to counting and normalizing. But this only works if we observe all the variables in each training example. And data collection is expensive. What happens if we don't observe some of them? For example, what happens if we don't know the genre of the movies, but we only observe the pairs of ratings of Martha and Jim? So what can we do in this case? You know, intuitively, it seems kind of hopeless. How can we learn a Bayesian network relating G and R when we don't even see examples of G? But we'll show that this is actually possible in many cases, but certainly not all cases, and that's kind of the magic of EM and unsupervised learning in general. So let's try to approach this problem top-down. What are the principles that we have? Well, maximum likelihood is something that has served us quite well, so let's see if we can make that work. So generally, we have a set of variables which are hidden, called big H, and we also have some variables, big E, which are observed. So in this movie rating example, we have G as the hidden variable, the two ratings as the observed variables, and some little e denoting what they're observed to be. And in this case, we have, remember, the set of parameters theta, which is the probability of G and the probability of R given G. So the principle of maximum marginal likelihood says, well, just maximize the probability of the data. Tweak the parameters to make that probability as high as possible. So what this means for us is that we're going to try to find the theta that maximizes the product, over all the observations e that we have in the training data, of the probability of that observation given theta. And just to spell this out, that quantity, the probability of E = e given theta, is really the summation over possible values h of the hidden variables of the probability that H = h and E = e. So this is the principle that we want to adhere to. So it turns out that the EM algorithm is one way of trying to optimize this objective, but we're going to try to motivate EM in a more intuitive way. So EM, you should think about it as a generalization of the K-means algorithm. Remember, in K-means for clustering, we also had a similar problem, where we have cluster centroids and cluster assignments, both of which we didn't know. In our case, the cluster centroids are going to be generalized to the parameters of a Bayesian network in general, and the cluster assignments are going to be generalized to the hidden variables. So here are the variables: we have E and H. And here is the expectation maximization algorithm, otherwise known as EM. We're first going to initialize the parameters randomly, and then we're going to repeat until convergence, alternating between two steps: the E-step and the M-step. In the E-step, what we're going to do first is try to use the parameters to guess the hidden variables. So we're going to compute q of h.
This is going to be a distribution over the possible values that the hidden variables could take on, and it's going to be equal to simply the probability of the hidden variable conditioned on the evidence, or the observations, that we saw. And again, this depends on the parameters at the current iteration. We're going to do this for every possible value of h. How do we do this? Well, we've already seen how we can compute these types of quantities given a fixed Bayesian network, and this is called probabilistic inference. So in the case where h is small, we can just do it brute force. If the Bayesian network is an HMM, we can use forward-backward. In general, we can use Gibbs sampling, et cetera. So now, what do we have? We have these weights for every h, and now we can create fully-observed examples: we can just pair a particular h with our observations and put a weight next to that example. And the important thing is that now we have a set of weighted examples which are fully observed. And how do we deal with fully-observed examples? Well, we can use maximum likelihood now. So we take these weighted examples and we just count and normalize. And that gives us a fresh set of parameters, with which we can then go back and repeat the E-step and the M-step over and over again. So the EM algorithm is guaranteed to converge to a local optimum just like K-means, but it can get stuck in a local optimum and not actually solve the global optimization problem. So let's do an example. We're just going to do one iteration of EM on our sample Bayesian network. So suppose our training data includes two examples: R1 and R2 equal to 2 and 2, and a second example where they're 1 and 2. And the genre is unobserved. OK, so suppose we have parameters that look like this. The probability of G is just uniform, and the probability of r given g is given by this table. OK, so now we're going to do the E-step. Remember, the E-step is trying to guess what g is for each of these examples, because we don't know it. So let's look at 2, 2, the first example. Well, g could be either c or d, so there are two possibilities. And for each one, I'm going to compute the probability of the joint assignment. So here, I'm just going to go by the definition of the Bayesian network. This is going to be the probability of g equals c, that's 0.5, times the probability of r1 equals 2 given g equals c, that's 0.6, and then r2 equals 2 given g equals c, that's 0.6. That gives me 0.18. And now we look at the other possibility, which is g equals d. The probability of g equals d is 0.5. And the probability of r1 equals 2 given g equals d, that is 0.4 down here. And the probability of r2 equals 2 given g equals d, that's also another 0.4. That gives me 0.08. So now I have these probabilities next to each of these possible extensions of this assignment. Now I can normalize, and that's how I get my q distribution. So if I normalize, I'm going to get 0.69 and 0.31, so there's more probability mass on g equals c. And if I were to guess, I'd guess g equals c. So now we move on to the second data point, and I'm going to do the same thing. So for 1, 2, g could be either c or d, and I'm going to compute the probability of each possible assignment to g. So I have the probability of g equals c, that's 0.5, times the probability of r1 equals 1 given g equals c, that's 0.4, times the probability of r2 equals 2 given g equals c, that's 0.6. And analogously, I can compute the same quantity for g equals d.
And again, I normalize, and I get 0.5 and 0.5. OK, so at this point, at the end of the E-step, what I have are four fleshed-out data points. I started with two data points, but they've been expanded into the possible continuations of g. And each data point is weighted by some probability q of g, which is essentially how much I think that data point is valid, in some sense. OK, so now we move on to the M-step. And the M-step is just going to take these four data points and count them up and normalize. So this should be very familiar. First, we're going to estimate the probability of g. So g can take on two values, c and d, so I count them up. How many times did g equals c occur? It shows up in the first and the third data points, and I'm just going to add their weights together, which is 0.69 plus 0.5. And what about d? Well, g equals d shows up in the second and the fourth rows, and that's 0.31 plus 0.5. And then I'm just going to normalize this into an actual distribution. So now I move on to the probability of r given g. For each possible configuration here, I'm going to count. So c, 1 shows up here once, and that has a weight of 0.5. What about c, 2? c, 2 shows up three times: once here, once with r2, and once down here. If I add the weights of those, I'm going to get 0.5 plus 0.69 plus 0.69. Notice that this example is used twice, because I'm generating two ratings from g equals c. OK, so now I have these counts, and I normalize to get a distribution of r given g equals c. So now I move on to what happens when g is d. So I look at d, 1. d, 1 shows up once here, with weight 0.5. And what about d, 2? Well, that shows up three times: twice from the first example, with weight 0.31 each time, and then once here with another 0.5. I'm going to add and normalize, and I get a distribution. So the only difference between maximum likelihood and the M-step is that now I'm adding these fractional counts rather than integer counts. But otherwise, the logic and the code are exactly the same. So what have we done, stepping back a little bit? Intuitively, we've gone from a preliminary set of parameters to guessing what g is, and then using that guess of g to further refine my estimate of the parameters. And you'll see that the parameters over here were 0.4 and 0.6, and now they've been pushed to 0.2 and 0.8. So in general, EM tends to polarize the probabilities, because that's the best way to maximize the likelihood of the data. And this is just one iteration of EM. Now I would take these parameters and go through the same process over and over until I converge. OK, so now, let's turn to an interesting application of EM, and that's decipherment. So this is an example of a cipher. It's called the Copiale cipher, which is a 105-page encrypted volume dating back to the 1730s. It looks like this. For a long time, no one knew what these words were. It was finally cracked in 2011, with the help of EM, by Kevin Knight, an NLP researcher. So the Copiale cipher is actually very complex. So what we're going to do is motivate the idea of using Bayesian networks for decipherment with a simple substitution cipher. The idea behind the substitution cipher is that suppose you wanted to send an encrypted message to someone. You're going to generate a substitution table, which specifies how each letter gets transformed into another letter. The cipher is going to be a permutation of the alphabet. And then you have a message you want to send. Suppose you want to say hello world.
You're going to use this substitution table and apply it to this plaintext to produce a ciphertext. And this is done by applying the mapping: h to n, e to m, l to y, and l to y again, and o to t, and so on. So now you hide the substitution table, and then you hand someone the ciphertext, or you put it in a book and bury it for someone to discover later. So now the question is, when someone is given the ciphertext, can they recover the plaintext? Importantly, the plaintext is obviously unknown, but the substitution table is also unknown. This is a very challenging problem. But let's see how we can use Bayesian networks, in particular an HMM, to try to address this. So remember the process of using an HMM: you have to think about what the generative story is of how this data arose. I'm going to model this as follows. I'm going to have a sequence of letters, which are the plaintext, and these are hidden. And we have a corresponding sequence of characters in the ciphertext. And I'm going to define a joint distribution over all of these by first generating the plaintext letters according to a Markov model, via a start distribution and a bunch of transitions. And then, for each plaintext letter, I'm going to generate a ciphertext letter via some emission. So the parameters of the HMM, remember, are the probability of start, the probability of transition, and the probability of emission. So intuitively, the transitions are going to capture the cohesion of the plaintext, because it's actually supposed to be readable and have structure, not random letters. And the emission distribution is going to capture the substitution table. So how are we going to estimate this HMM? OK, so first of all, we're going to make some simplifying choices here, but we'll show that they're sufficient. So we're going to set P start to uniform. You could be a little bit more clever, but I'm just going to leave it alone for simplicity. Then the transition probabilities. This is just a bigram model over characters, and this model tells you what looks like English or not. And the really cool thing about this is that if we know the plaintext is supposed to be English, we can just go and grab a ton of English text and estimate a distribution over that text, and that gives us P trans. We don't even need to look at the ciphertext. And then finally, the key part is that the emission distribution is the substitution table, and that's what we're going to estimate with EM. So notice that P emit is actually more general than a substitution table. It says that for every plaintext character, I can actually generate a distribution over ciphertext letters, whereas the substitution table says there's exactly one. And this is more out of convenience, because it makes optimization easier. But in principle, you could also think about P emit as being constrained to just a one-to-one mapping. OK, so why do we think that this will work, intuitively? Well, the transition distribution, which we've already estimated on English, is going to favor plaintext that looks like English, while the emission distribution is going to try to favor consistent character substitutions. So we don't want it to be the case that a maps to a t here and a v here and an f there. We want some consistency. And by having this emission distribution and maximizing likelihood, it's going to try to encourage that kind of consistency.
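As an aside before stepping into the EM computations, the encryption side itself is straightforward; here is a minimal sketch. The random permutation is just a stand-in for the hidden table, and for simplicity it permutes the space character along with the letters.

```python
import random
import string

alphabet = string.ascii_lowercase + " "

# A hypothetical substitution table: a random permutation of the alphabet.
table = dict(zip(alphabet, random.sample(alphabet, len(alphabet))))

def encrypt(plaintext):
    # Apply the substitution letter by letter to produce the ciphertext.
    return "".join(table[c] for c in plaintext)

print(encrypt("hello world"))
```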
So we have these two forces at play with each other while we're trying to estimate both the hidden variables and the parameters. So let's actually step into the EM algorithm and see what kind of computations are needed to estimate this HMM. In the E-step, what I need to do is compute the distribution over the hidden variables conditioned on the observations. And to do that, we introduced the forward-backward algorithm a while back. The forward-backward algorithm computes these smoothing queries, which is exactly what's needed: the probability of a plaintext letter being a particular value h given the ciphertext that we observe. And I'm going to do this for each position in the ciphertext and every potential character. So I'm going to define qi of h to be this probability. This is my best guess at a particular location: what do I think the plaintext character is? So now, given these guesses, the M-step is going to re-estimate the substitution table, or the emission distribution. So I'm going to do a fractional count and normalize over all of the characters e and h, OK? For every possible plaintext letter and every ciphertext letter, I'm going to look at all the positions where the ciphertext was actually e, and I'm going to add this probability, or weight, qi of h. So this is going to tell me how many times, in expectation, we believe that a particular plaintext letter and a particular ciphertext letter occur together. And now, I'm just going to normalize this distribution. So P emit of a ciphertext letter given a plaintext letter is proportional to this count of h and e. OK, so that's it, and we just run the EM algorithm and hope for the best. OK, so just to make this a little bit more exciting, I'm going to try to code this up in Python so we can see it in action. All right, so a few things first. Here is our ciphertext. You shouldn't be able to read this, and we're going to try to decipher it. And we also have this lm.train, which is this, quote unquote, "large amount of English text" that we can draw from. We also have this utility file, which I'll just briefly review: it allows you to read text, and we're going to convert text into a sequence of integers just for simplicity. And we also, importantly, have implemented this forward-backward algorithm, which is going to take a sequence of observations and the parameters of the HMM, and it's going to return q, a two-dimensional array where, for each position i, qi is a distribution over possible values of hi. OK, so let's decipher some ciphertext. OK, so import util. I'm going to declare K to be the number of characters. So this is lowercase letters plus space; we normalize the text to just these characters. The first thing I want to do is initialize the HMM. So remember the parameters of the HMM. I have start probabilities, so this is going to be p_start of h, and I'm just going to set this to the uniform distribution. So startProbs equals 1 divided by K, for h in range K. That's going to be just a uniform distribution. So now, what about the transition probabilities? A transition goes from h1 to h2, and this is P trans of h2 given h1. Note that the order is switched here, because I want transitionProbs of h1 to actually be an array which specifies a distribution over h2. So here, we're going to estimate this from plaintext. So I'm going to have raw text.
This I'm going to read from lm.train, which we saw earlier, and I'm going to convert it into an integer sequence, OK? So let's see what that looks like. So that's just a sequence of integers. OK, so now I'm going to estimate P trans from this raw text. This is actually going to be just a standard, fully-observable estimation problem. So I'm going to loop over all positions i, starting from 1 to the end, and then I'm going to define h1 and h2 to be consecutive characters in this raw character sequence. And then I'm going to increment a counter. So I'm going to define transitionCounts to be, for each h1 in range K, and then for each h2 in range K, a 0, OK? So this is going to be a K-by-K zero matrix. And then I'm just going to increment this count by one, and then I'm going to normalize. The way I'm going to normalize is to define the transition probabilities to be, for each h1, the result of calling normalize on transitionCounts of h1, OK? And so for every h1, this gives me a distribution over h2 once I normalize it. That's going to be my transition probability. So I'm done with transition probabilities. What about emission probabilities? So here, for every h, I have a distribution over e. To write it out in our mathematical language, this is going to be P emit of e given h. So here, I just want to initialize it to the uniform distribution. So just to document this a little bit more: the start distribution is uniform, the transitions are estimated from plaintext, and the emissions here are just an initialization, which we're going to estimate with EM. OK, so I'm going to initialize this to, for each h in the domain of h, for each e in a similar domain, just 1 over K. So this is a uniform distribution. And now, I'm going to run EM to estimate only this emission probability. Let's make the window larger. So to run EM, I'm going to load my ciphertext in. So observations equals read the ciphertext, and then I'm going to convert this into an integer sequence. OK, so now I'm going to iterate a number of times. Let's just call it 200. And then I'm going to do the E-step and the M-step, OK? So what happens in the E-step? I'm going to use my current setting of the parameters to guess at what the plaintext is. So I'm going to run forward-backward on the observations, passing in the parameters of the HMM, and this is going to return q. Just to note, in mathematical notation, q of i, h is the probability that hi equals h given the evidence, which is the observations here. Let's also print out our best guess so far, so we can see how we're doing at each iteration. To do this, let's define n to be the number of observations. So for each position i, I'm going to look at qi, which gives me a distribution over h, and I'm going to take the value that has the highest probability. Then I'm going to convert this to a string and print it out, OK? And now, finally, the M-step: we're just going to count and normalize here. So I'm going to define a new temporary variable, emissionCounts. And let me just cheat a little bit: I'm going to define emissionCounts to be zeros with the same dimensionality as emissionProbs. So this is a matrix of zeros. OK, so now we're going to go through each position, i in range n. And for each position, what are the possible values it can take? So that's going to be h.
And I'm going to update the emission counts. So emission, remember, is indexed by h and e. So emissionCounts of h and observations of i gets incremented by q of i, h. This is probably the most important line here. Remember, q of i, h is the weight on a particular h at position i, and I want to update that count in emissionCounts, OK? So now all we need to do is normalize. So the emission probabilities are, for each possible value of h, the result of normalizing emissionCounts of h, OK? So that's it. So just to review this briefly: I first initialize the HMM. The start probabilities are just uniform. Then I estimate the transition probabilities in a fully-supervised way from plaintext, where I simply count and normalize. And then I initialize the emission probabilities to just uniform for now. Then I run the EM algorithm to actually update the emission probabilities, OK? So I read in the observations, and then I iterate between the E-step and the M-step, where in the E-step I run forward-backward to compute the distribution over possible values of h at each position and print out my best guess, and in the M-step I count and normalize. All right, so let's see how this does. So: decipher.py. At each step, it's going to print out its best guess. And over time, you can see that this jumble of letters is going to slowly evolve as EM tries to figure out both the plaintext as well as the substitution table. So this isn't going to be perfect, because we've used a fairly simple model and we don't have too much data, but you can see some structure emerging. "So I woved my woke alone without"-- so "without", that's a real word. "I need one that I could really"-- and so on. And "plain"-- there's probably something there, and so on, OK? So just for comparison, this is actually the plaintext. This is a little passage from The Little Prince: "So I lived my life alone without anyone that I could really talk to, until I had an accident with my plane." So definitely far from perfect, but given that we just did it in a minute, it's maybe not bad. OK, so let me summarize. We presented the EM algorithm for estimating the parameters of a Bayesian network when there are unobserved variables. The overarching principle is that of maximum marginal likelihood. We're going to find the parameters that drive up the probability of the variables that we did observe as much as possible. So the EM algorithm is going to optimize the marginal likelihood objective, but fundamentally it's a chicken-and-egg problem, just like in K-means. We don't know the hidden variables, and we also don't know the parameters. So what we're going to do is iterate between one and the other. In the E-step, we're going to perform probabilistic inference given a fixed set of parameters to produce our best guess over what the hidden variables are. And then in the M-step, we're going to use these probabilities as weights on examples, and then we're just going to count and normalize to get parameters. And then we estimate the hidden variables again, and estimate the parameters, and so on, and so forth. So finally, once you've learned your Bayesian network, you can go off and perform inference and answer all sorts of questions, which could involve asking about these unobserved variables that you didn't see, on new test examples, or it could be used to ask questions about the observed variables given some other variables.
And in general, this highlights the flexibility of Bayesian networks. Just because you had a certain pattern of missingness at training time doesn't mean you have to commit to that at test time. So there are many applications of Bayesian networks, including ones involving the EM algorithm. We looked at decipherment, where the goal is to infer the plaintext from the ciphertext. EM could also be used to reconstruct phylogenetic trees given the DNA of modern organisms. And it can also be used to infer the unknown label of a data point where the observations are the possibly noisy labels provided by crowd workers. So finally, EM is the most canonical version of a broader class of techniques called variational inference, which actually includes things like variational autoencoders, which some of you might have heard of. In that case, the q is actually the encoder, and it's given by a neural network. And the decoder is the Bayesian network. So there are a lot more connections to be explored, and I encourage you to read up on this by yourself.
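For reference, here is a compressed sketch of the decipherment EM loop walked through above. It assumes a forward_backward utility like the one in the lecture's util module, returning q[i][h] = P(H_i = h | ciphertext); everything else is the count-and-normalize recipe.

```python
import numpy as np

def em_decipher(observations, transition_probs, K=27, num_iters=200):
    """Sketch of EM for the substitution-cipher HMM. `observations` is the
    ciphertext as integers in [0, K); `transition_probs[h1][h2]` = P(h2 | h1),
    estimated once from plaintext English and held fixed throughout."""
    start_probs = np.full(K, 1.0 / K)          # uniform start distribution
    emission_probs = np.full((K, K), 1.0 / K)  # uniform initialization of P(e | h)
    for _ in range(num_iters):
        # E-step: guess the plaintext under the current parameters.
        # forward_backward is assumed to be provided (e.g., by the util module).
        q = forward_backward(observations, start_probs, transition_probs, emission_probs)
        # M-step: fractional count and normalize to re-estimate the emissions.
        counts = np.zeros((K, K))
        for i, e in enumerate(observations):
            for h in range(K):
                counts[h][e] += q[i][h]
        emission_probs = counts / counts.sum(axis=1, keepdims=True)
    return emission_probs
```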
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Constraint_Satisfaction_Problems_CSPs_7_Local_Search_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to talk about local search, a strategy for approximately computing the maximum weight assignment of a constraint satisfaction problem. So remember that a CSP is defined by a factor graph, which includes a set of variables x1 through xn and a set of factors f1 through fm, where each factor is a function that depends on a subset of the variables and returns a non-negative number. Each assignment to all the variables has a weight, given by the product of all of the factors evaluated on the assignment. And the objective is to compute the maximum weight assignment, as usual. So far, we've seen backtracking search and beam search, and both of these search algorithms work by extending partial assignments. You start with the empty assignment, and then you assign one variable, and you assign another variable, until you get to a complete assignment. And then maybe you backtrack, or maybe you don't. So local search is going to be a little bit different. It's going to modify complete assignments. You're going to start with a random assignment, and then you're going to choose one variable and change it, then choose another variable and change it, more like house maintenance than building a house. So one of the advantages of local search is that it gives you additional flexibility. You can pick any variable and try to improve it, whereas in backtracking search and beam search you have to do things in a certain order. For beam search, once you've assigned a variable, you can't go back. And in backtracking search you can backtrack, but you can't really backtrack out of order. So recall our running example, object tracking. At each time step, you observe a noisy sensor reading of a particular object: you observe 0, 2, and 2 as the positions of the object, and you're trying to figure out where this object was. We model this as a CSP where we have three observation factors: o1, which favors x1 equals 0; o2, which favors x2 equals 2; and o3, which favors x3 equals 2. And we have two transition factors that favor subsequent positions being close by. So let's jump in. Suppose we just have a complete assignment, 0, 0, 1, OK? My question is, how do we improve this? Well, let's look at the weight of this assignment. The weight of this assignment is 2, because 0 agrees with 0; times 2, because 0 agrees with 0; times 0 -- uh-oh -- because these two are too far apart; times 1, because these only differ by 1; and times 1, because they differ by 1. So you get a 0. That's not a very good assignment. So how can we improve? Let's try to reassign x2 to something else. Let's try to assign it to some v, where v can be 0, 1, or 2. And for each of these alternative assignments, we compute its weight, and then we simply take the assignment with the best weight. In this case, it's this one, which sets x2 to 1. Then we end up with a new assignment, which is better than the old one. So mission accomplished. So we can refine this strategy a little bit more. Suppose we're trying to reassign x2. The weight of a new assignment, where x2 has been replaced with some v, is as follows. You're multiplying all the factors in the CSP together: o1, t1, o2, t2, o3, t3. But note that only some of the factors depend on v. In particular, o1 and o3 don't depend on v. So no matter what v is, these are the same, which means that we can ignore them and just evaluate the factors that involve x2. So this is the idea of locality, which leverages the structure of the CSP.
When evaluating possible reassignments to some variable xi, we only need to consider the factors that depend on xi. So in a factor graph where there are lots and lots of variables, and you're trying to reassign one variable, which might have a small neighborhood, you're saving a lot of effort. So now we're ready to define a local search algorithm, which is called iterated conditional modes. It sounds fancy, but it's really simple. The idea is that we're going to initialize x to be a random complete assignment, and we're going to loop through x1 through xn, and keep going until we converge or we run out of time. What we're going to do is try to reassign xi. So we're going to consider each possible value v that xi could take on, and update the current assignment x with that value, which produces an assignment xv. And then we're going to compute the weight of each of these xv's and choose the one with the highest weight. Remember, in computing the weight, we only need to evaluate the factors that touch xi. And also notice that this looks remarkably like greedy search or beam search. There is a substantial difference, though, in that here the x's are complete assignments, not partial assignments. So this is not extending an assignment so much as replacing xi with v. So pictorially, what this looks like is: you start with x1. By convention, unshaded nodes are the ones that are meant to be reassigned, and shaded ones are the ones that are fixed. So you pick x1 and say, can I change it to make it better? And then you pick some value of x1. Then you go to x2 and say, can I change x2 to make this assignment better? And then you go to x3. And then you go back to x1 and say, hey, can I make it better by changing x1 again? You keep going until it converges. So here is a demo on the object tracking example. At the start of the algorithm, we're just going to initialize with a random assignment, 0, 1, 2, and it has a weight of 4. And now I'm going to try to maximize over variable x1, given everything else. So let's consider alternative values of x1. It could be 0, 1, or 2. For each of these, I'm going to compute its weight, only evaluating the factors that touch x1. In this case, it's only o1 and t1 that touch x1, so I only need to evaluate those. I compute the weights and choose the best one, breaking ties arbitrarily. So I choose x1 equals 0, which means I didn't change it. And now let me step. Now I'm looking at x2. Can I change anything? Nope. What about x3, which is assigned 2? Well, I compute the weights, and here I'm choosing x3 to be 1, OK? So I change that assignment. And now I go back to x1 and iterate. And it looks like I've converged, because I'm not changing anything. So I've converged to an assignment with a weight of 4, which, if you remember, is not the maximum weight assignment: the maximum weight assignment has weight 8. So again, iterated conditional modes is going to give you an OK solution, but not necessarily the best one. So, convergence properties. The good news is that the weight of your assignment is not going to go down; it always increases or stays the same at each iteration. And this is because when you're trying to reassign a variable, you can always choose the old value and maintain the same weight. So any change must be increasing the weight.
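To make this concrete, here is a minimal sketch of ICM. The factor representation (a list of scope/function pairs) is just one convenient encoding, not a standard API; note in the code that the old value is always a candidate, which is exactly why the weight never decreases.

```python
import random

def icm(variables, domains, factors, num_sweeps=100):
    """Sketch of iterated conditional modes. `factors` is a list of
    (scope, f) pairs: scope is a set of variable names, and f maps a complete
    assignment (a dict) to a non-negative weight."""
    x = {v: random.choice(domains[v]) for v in variables}  # random complete assignment
    for _ in range(num_sweeps):
        for v in variables:
            # Locality: only the factors whose scope contains v matter here.
            local = [f for scope, f in factors if v in scope]
            def local_weight(val):
                x[v] = val
                w = 1.0
                for f in local:
                    w *= f(x)
                return w
            # The old value of x[v] is among the candidates, so the overall
            # weight can only increase or stay the same.
            x[v] = max(domains[v], key=local_weight)
    return x
```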
So this means that it converges in a finite number of iterations, because there's only a finite number of possible assignments, so you can only increase the weight a finite number of times. It can get stuck in local optima, as we've seen, and it's not in general guaranteed to find the optimal assignment. So just a quick note: there are two ways around this. One is that there's a variant where you can change two variables, or maybe three variables, at a time, and that allows you to perhaps get out of a local optimum. Another thing we can do is add randomness. So at each step, we could either choose the best option or just choose a random option, and this will also allow us to escape these local optima. Or we can use something like Gibbs sampling, which I'll talk about in a future module, which will add stochasticity to ICM. OK, so here is the summary. Let me actually summarize all of the search algorithms for CSPs that we've encountered. First we looked at backtracking search. Its strategy is to extend partial assignments and then backtrack. Backtracking search is exact: it computes the actual maximum weight assignment, and it's the only algorithm we're considering that does that in general. But the main problem is that the time can be exponential in the number of variables. Then we looked at beam search, which also extends partial assignments. Here we're trading off accuracy for time: it's approximate, so it will only give you an OK solution, but it's linear in the number of variables. And for local search, we saw iterated conditional modes, which does local search by choosing the best value of one variable at a time. It's a different strategy: here, we're starting with complete assignments and modifying them to make them better. It's also approximate, but it's fast, just like beam search. OK, so that concludes this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Constraint_Satisfaction_Problems_CSPs_6_Beam_Search_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to talk about beam search, a really simple algorithm for finding approximate maximum weight assignments efficiently, when you're in a hurry and don't want to incur the full cost of backtracking search. So just to review: remember, a constraint satisfaction problem, or CSP, is defined by a factor graph, which consists of a set of variables, X1 through Xn, where each Xi takes a value from its domain, and a set of factors, f1 through fm, where each factor fj is a function that takes an assignment and returns a non-negative number. And usually, the factor function depends only on a subset of the variables. So each assignment, little x, to all the variables has a weight, and that weight is given simply by the product of all the factors applied to the assignment. And the objective is to find the maximum weight assignment. So let us revisit the object tracking example. In this example, we're trying to track an object over time, and at each time step we record a noisy sensor reading of its position. At time step 1 we see 0, at time step 2 we see 2, at time step 3 we see 2. And the question is, what was the trajectory that the object took? Is it this one, or this one, or something else? We model this problem as a CSP with x1, x2, and x3. We define factors that capture our intuitions about the problem. o1 captures the fact that the actual position should be close to the sensor reading: x1 equals 0 gets a weight of 2, so x1 equals 0 is favored, and x1 equals 2 is disallowed. Similarly, o2 favors x2 equals 2, and o3 favors x3 equals 2. And finally, the transition factors t1 and t2 favor adjacent xi's that are close: a distance of 0 gets a weight of 2, whereas a distance of 1 gets 1, and so on. And you can click on this demo to actually play with this CSP. We'll come back to it in a bit. OK, so this is the object tracking example. Now, so far, we've seen backtracking search as a way to compute maximum weight assignments. And backtracking search essentially does an exhaustive depth-first search of the entire tree in the worst case, which can take a very, very long time. So how can we avoid this? Well, we have to give up on something, and what we're going to give up on is correctness. So what we're going to do is simply not backtrack. Let's start with something called the greedy search algorithm. Again, we start with an empty assignment. We consider possible settings of, let's say, x1. So let's say there are two possible settings, and we're just going to choose one of them, whichever one has the highest weight. And the weight, remember, of a partial assignment is the product of all the factors that you can evaluate so far. Well, let's pick this one. Again, let's set X2. There are two possible ways to set it. We pick the better one and keep going until we reach a complete assignment, and then we just return that. So formally, what greedy search is doing is starting with the empty partial assignment, and then going through each of the variables, X1 through Xn, trying to extend the partial assignment to set Xi. For each possible value v that I can assign Xi, I'm going to form a candidate partial assignment and call it Xv. And then I'm going to compute the weight of each of these Xv's, and choose the one with the highest weight.
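Before looking at the caveat with this approach, here is a minimal sketch of greedy search; `partial_weight` is a hypothetical stand-in for whatever routine multiplies together the factors that can be evaluated on a partial assignment.

```python
def greedy_search(variables, domains, partial_weight):
    """Sketch of greedy search: extend the partial assignment one variable at
    a time, always keeping the single best extension and never backtracking."""
    x = {}
    for v in variables:  # fixed order X1, ..., Xn
        x[v] = max(domains[v], key=lambda val: partial_weight({**x, v: val}))
    return x
```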
So an important caveat is that this is definitely not guaranteed to find the maximum weight assignment, even though locally it appears to be optimizing by finding the value with the best weight. So let's look at this demo to see how it works on object tracking. OK, so here we have the CSP that's defined, and I'm going to step through this algorithm. Initially, I extend the empty assignment to assignments that only fill in X1. So X1 could be 0, 1, or 2, and these are the weights of these three partial assignments. Remember, the sensor reading was 0, so X1 equals 0 has a larger weight. In the next step, I prune: I keep only the best candidate, which in this case is X1 equals 0. Then I go to i equals 2, and I extend that assignment to the three possible settings of X2, compute their weights, and keep the best one, which in this case is 0, 1. And now I extend again to X3. There are three possible values for X3; I compute the weights of these now complete assignments, and then I choose the best one. So in this case, greedy search ends up with the assignment 0, 1, 1, with a weight of 4. And if you remember this example, the best assignment has weight 8, so 4 is definitely not the right answer. But it's not 0 either. It found something. OK, so the problem with greedy search is that it's too myopic and only keeps the single best candidate. Beam search is just the natural generalization of greedy search, where I keep at most K candidates at each level. So let's say K equals 4. I'm going to start with the empty assignment. I'm going to extend, and then I don't need to prune, because there are only two possible partial assignments here and I have a capacity of 4. I'm going to extend again. Again, I don't need to prune. But then next, I'm going to extend each of the elements on my beam, the partial assignments, and now I have 8. And now, I need to reduce the 8 partial assignments to 4. To do this, I simply compute the weight of each of these 8 partial assignments and take the four with the highest weight. Let's suppose those are these four. And then I continue, only expanding the ones I've kept, and then keeping, again, the top four, and keep on going. So notice that visually I'm exploring only a very, very small fraction of the tree. But I'm doing this kind of holistically, looking down the tree: I could be exploring different parts of the tree at the same time. So formally, beam search keeps at most K candidate partial assignments. I initialize the candidate set to be just the single empty partial assignment. Now, like greedy search, I go through the variables one at a time and extend. In this case, I consider each partial assignment in C and each possible value that I can assign Xi, and I extend the assignment. And I keep track: C prime is going to be the new set of candidates. And then I prune that set by computing the weight for each element of C prime and keeping only the top K elements. So this is not guaranteed to find the maximum weight assignment either, but sometimes it works better. So let's look at this example, object tracking. I extend from the empty assignment to get three partial assignments to X1. I prune to the top three, so nothing gets removed. I then extend, so each of these three partial assignments gets extended into three additional ones. Now I have nine.
And now, I'm going to prune down from nine to three. That will keep all the assignments here with a positive weight. And now I extend again to find settings of X3, compute each of these weights, and then take the top assignments. OK, so now, notice that the top assignment that I have right now is 1, 2, 2, with a weight of 8. In this case, I got lucky and found the actual maximum weight assignment, but in general, you won't be guaranteed that. OK, so what is the time complexity of beam search? Because one of its advantages is that it's supposed to be fast. So let's do a simple calculation here. Suppose we have n variables, which is the depth of this tree, and suppose that each of the variables has b values in its domain, which is going to be the branching factor here. And the beam size is K. So what is the time that it takes to run beam search? For each of the variables, each level of this tree, we're going to have a set of candidates of size K. The extension phase takes each of these K candidates and extends it into b candidates, so we end up with Kb extended candidates total. And then we have to take the top K. The time it takes to take a list of Kb elements and select the top K elements is Kb log K, by building a heap. So the total time is nKb log K. And importantly, this is linear in the number of variables, whereas backtracking search would be exponential in the number of variables. OK, so let us summarize now. Beam search is a fairly simple heuristic to approximate maximum weight assignments. It's what you use if you're really in a hurry and you don't really care about getting the maximum weight assignment, because you probably won't get it. The nice thing about beam search is that it has this parameter, K, which allows you to control the trade-off between efficiency and accuracy. If you're really in a hurry, you set K equals 1 and you just get greedy search, which sometimes actually gets you pretty good answers. And as you increase K more and more, if you increase K all the way to infinity, then you'll search the entire search tree and you will get the optimal answer, but this takes basically exponential time. One thing to note about beam search with K equals infinity is that it is performing a breadth-first search of the tree, because it proceeds level by level and explores all of the nodes in the tree systematically. So using this analogy, I want to end with a final note, which is that backtracking search is really like doing a depth-first search on the search tree. It dives deeply into one complete assignment, then backtracks, then finds another complete assignment and backtracks, looking at kind of one assignment at a time. Whereas beam search is more akin to breadth-first search, where we're proceeding level by level. The main difference from breadth-first search is that we're doing this heuristic pruning at each level to make sure that we don't have too many candidates. And the pruning is based on the factors that can be evaluated so far. So for beam search to work, you really need it to be the case that the factors are local and can be evaluated as much as possible along the way, and not all at the very end. All right, so that's the end of this module.
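As a coda to this module, here is a minimal sketch of beam search matching the pseudocode above; as before, `partial_weight` is a hypothetical stand-in for evaluating the factors available so far. With K = 1 this reduces to the greedy search sketched earlier.

```python
import heapq

def beam_search(variables, domains, partial_weight, K):
    """Sketch of beam search with beam size K: extend every candidate by every
    value of the next variable, then prune back to the top K by weight."""
    candidates = [{}]  # start with the single empty partial assignment
    for v in variables:
        extensions = [{**x, v: val} for x in candidates for val in domains[v]]
        # Selecting the top K of the K*b extensions costs about K*b log K.
        candidates = heapq.nlargest(K, extensions, key=partial_weight)
    return max(candidates, key=partial_weight)
```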
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_5_Propositional_Modus_Ponens_Stanford_CS221_AI_Autumn_2021.txt
OK, so in this module, we would like to talk about Horn clauses: specifically, how modus ponens applies to propositional logic with only Horn clauses, and how we can show completeness and soundness in that setting, OK? So to do that, we have to define a few other things, so let me go back to my definitions here. So here we've been talking about inference rules, we've been talking about modus ponens, derivation and proving, and we've talked about soundness and completeness. We've seen that modus ponens is sound, but it is not complete. And as a way of fixing that, we thought maybe we should restrict our formulas to formulas that only have Horn clauses, so we need to define what a Horn clause is. And to define what a Horn clause is, we have to define what a definite clause is, so I'm going to define definite clauses, and I'm going to define goal clauses. And a Horn clause is basically a clause that is either a definite clause or a goal clause. I'll define these in a second, OK. So what is a definite clause? A definite clause is a clause that has the following form: you have p1 through pk, anded together, implying q, OK? So that is a definite clause, where p1 through pk and q are propositional symbols. One thing I want to mention is that k could be 0 too, so you could have something like true implies q, and you would end up with just q. So that is also a definite clause. So here are some examples of definite clauses. Rain and snow implying traffic is a definite clause, because it does have this form of p1 through pk, anded together, implying q, OK? Traffic itself is also just a definite clause, so q itself is a definite clause. Not traffic, the negation of traffic, is not a definite clause, because you can't have any negations here, right? These are propositional symbols. And then rain and snow implying traffic or peaceful is not a definite clause, because we have this "or" here, OK? So again, a definite clause has this form of positive information implying something positive; it has kind of that form. And in addition to definite clauses, we also have this other thing called a goal clause. A goal clause is a clause of this form: p1 through pk, anded together, implying false, OK? So that is called a goal clause. So traffic and accident implying false is going to be a goal clause. So what is a Horn clause? A Horn clause is a clause that is either a definite clause or a goal clause, OK? And the reason I'm separating out this goal clause is that goal clauses have a specific form. They're equivalent to basically saying the negation of whatever comes first, right? Because the implication is the negation of this, or false, and the "or false" goes away, right? So then it's basically just the negation of this first part. What is the negation of this first part? That is the negation of (traffic and accident), which is the negation of traffic, or the negation of accident. So basically you can think of a bunch of or's of a bunch of negations, and that acts as a goal clause. And that is also allowed when we talk about Horn clauses in general. All right, so that's a Horn clause. And then, I'm going to expand this idea of modus ponens. We talked about modus ponens being of the form: from p, and p implies q, we can derive q, right? So the more general inference rule of modus ponens has this form: from p1 through pk, and p1 through pk anded together implying q, we can derive q. Here is an example.
So let's say it is wet, and it's a weekday. And if it is wet and it is a weekday, there is traffic, OK? So this lets us derive traffic here. So that's just the more general form of modus ponens, OK? All right, so then we have basically this theorem. The theorem says that if I apply this modus ponens rule only on Horn clauses, then I'm going to get completeness, OK? So modus ponens is complete with respect to Horn clauses. And what that means is: suppose you have a knowledge base that only has Horn clauses, and p, a propositional symbol, is entailed by this knowledge base. Then if I just apply modus ponens, I will be able to derive p. And that's pretty nice. Because in general, if you ask me -- remember the ask and tell operators? -- if you ask me, is p true, you're really asking me if p is entailed by the KB. And instead of me doing something of the form of model checking, and testing satisfiability in the ways that we have talked about -- instead of doing all of that and trying to figure out if this knowledge base really entails p or not -- what I can do is a simple syntactic manipulation. I can just apply modus ponens on my knowledge base and see if I can derive p syntactically or not. And if I can, then derivation and entailment are equivalent, right? If I can derive this based on syntax and based on modus ponens, then I am able to say that the knowledge base also entails p. So going back to the diagram that we had before, we will have soundness and completeness, meaning that this idea of derivation -- the knowledge base deriving g -- is going to be equivalent to the knowledge base entailing g. So if you ask me whether g is true, or if you want to add g to the knowledge base -- remember the ask and tell operations -- that's about asking for entailment, right? And again, if I'm in a space where I have soundness and completeness of my inference rules -- modus ponens in this case -- then I can just do this derivation, which is much simpler. All right, so let's just look at an example here. So let's say my knowledge base consists of the following formulas, and my modus ponens rule is this more general rule: from p1 through pk, and p1 through pk anded together implying q, I get q, OK? So what happens here? If you ask me, based on your knowledge base, is there traffic? Can you tell me if there is traffic or not? What I can do is check if the knowledge base derives traffic. And how do I do that? Well, I have rain, and rain implies wet. If I apply modus ponens on my knowledge base, I get wet. I know that it's a weekday; that's in my knowledge base. I have derived wet and added it to my knowledge base. I also have "wet and weekday implies traffic" in my knowledge base. With all these three together, I can derive traffic. And because the knowledge base derives traffic, and we have soundness and completeness because we are looking at only Horn clauses, we are able to say that the knowledge base in this case entails traffic. All right, so this is kind of an overview of what we have talked about so far. We have talked about formulas; that's in the syntax land. We have meanings in the semantics land. We have models for each of them. And then in the semantics land, if you want to check whether something is entailed or not, we have to do satisfiability, right?
We have to do model checking. And that was quite involved. So instead of doing that, if we have a set of inference rules that are going to be sound and complete, either because maybe our formulas are restricted, or maybe our inference rules are fancier, then we are able to derive a formula. And that derivation-- if you have soundness and completeness, that derivation is the same thing as checking entailment. So in this module, we've talked about Horn clauses, and kind of like a restricted version of formulas where we can apply modus ponens. In the next module, we will be talking about resolution, so a fancier inference rule as opposed to changing our formulas in order to get both soundness and completeness.
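To make this concrete, here is a minimal sketch, in Python, of what repeatedly applying modus ponens (forward chaining) on the traffic knowledge base might look like. The rule encoding and the function name are my own illustration, not the course's actual code.

# A sketch of repeated modus ponens (forward chaining) on Horn clauses.
# Each rule is (premises, conclusion); a fact is just a rule with no premises.
rules = [
    ([], "rain"),
    ([], "weekday"),
    (["rain"], "wet"),
    (["wet", "weekday"], "traffic"),
]

def derives(rules, query):
    derived = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # one application of modus ponens
                changed = True
    return query in derived

print(derives(rules, "traffic"))  # True: the KB derives, and hence entails, traffic

Because the loop only ever adds symbols, it terminates; and by the soundness and completeness discussed above, derivation here coincides with entailment for Horn clauses.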
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Markov_Networks_2_Gibbs_Sampling_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to talk about Gibbs sampling, a simple algorithm for approximately computing marginal probabilities and [AUDIO OUT].. You'll recall that a Markov network is based on a factor graph. And a factor graph gives a weight to every possible assignment of variables in that factor graph. And in a Markov network, we'll convert that weight into a probability by first computing the normalization constant. Which is sum over all the assignments of the weight of that assignment. Divide by that normalization constant, and we get the probability of assignment little x. So in this object tracking example, you see how we have a bunch of different assignments. There are weights. The partition function in this case is 26. We divide each of these weights by 26. And we get these probabilities. So the cool thing with Markov networks is that you can compute marginal probability. And that's going to be our focus. So marginal probability is going to be focusing on one particular variable Xi. And asking what values could it take on? And to get that, we're going to sum over all possible assignments where Xi does actually equal P, the joint probability of that assignment. And this example, if you look and ask for the probability of x2 equals 1. You sum over all the rows where x2 is equal to 1, that gives you 0.62. And if you ask for x2 equals 2. Then you're summing over the last two rows and that gives [AUDIO OUT]. So now let me present Gibbs sampling, just a simple algorithm for approximately computing these marginals. You could iterate over all possible assignments and compute, but that would take exponential time. So Gibbs sampling is going to follow the template of local search. Where we're going to go through each variable one at a time and update them. But unlike iterated conditional modes, which we saw before. Gibbs sampling is a randomized algorithm tailored for the purpose of computing a marginal. So let's present the algorithm. So, we're going to initialize the assignment to some completely random assignment. And then we're going to loop through each of the variables until convergence, which I'll talk about a little bit later. We're going to set the assignment Xi equals v with this probability, The probability of Xi equals v, given x minus i equals x minus i. So this my x minus i notation, just refers to all the variables except for Xi. So I'll come back to this in a second. But let me just highlight kind of the general flow of the algorithm. So suppose you have three variables. Gibbs sampling is going to provide a sample x1, holding the other ones fixed. And now it's going to move on to x2, holding the others fixed. And update x2, and then go to x3, and then it's going to cycle back to x1, x2, x3 and so on. So now how do I sample Xi equals v? So here is one example. What we're going to do, is we're going to try assigning Xi equals v, and getting some weight. So for every possible assignment of x2, I'm going to get some weight. And now remember in ICM, I would just simply take the value that produced the largest weight. But the main difference with Gibbs sampling is that I'm going to take these weights, and I'm going to normalize them to produce a probability. Again, normalizing is summing these values, so I get 5. And dividing by 5 to get probability 0.2, 0.4, 0.4. And now I'm going to sample x2 equals one of these values, according to this probability distribution. You can visualize that sampling process by the interval from 0 to 1. 
Where I have a number of segments representing the different possible values of x2. And the length is exactly the probability. So probability of x2 equals 0, probability of x2 equals 1, and probability of x2 equals 2. And then I'm going to throw a one-dimensional dart at this line. I'm going to hit it somewhere, and I'm going to take whatever value is specified by that interval. OK, so now I have a new value for x2 here. And now I proceed to the next variable, and so on and so forth. So that produces a sequence of samples of the assignments. And the remaining thing to do is to aggregate them. So every time I go through this loop, I'm going to increment a counter, for variable i, of the particular value that I saw. OK, and at the very end, I'm going to compute an estimate P hat of Xi equals little xi. And this is going to be simply the normalized version of the count. So this is going to be the relative frequency of seeing a particular value little xi, compared to everything else I've seen. OK, so there's a lot of counting and normalizing. But let's look at this demo to give us a fuller sense of what's going on. So here is the object tracking example; I have three variables. And here I can specify the query, which is, which variable am I interested in calculating the marginal of? And I'm going to run Gibbs sampling here. And then at the beginning, I sample a variable x1 given everything else, so consider all the possible values of x1. I'm going to look at their potentials or factors, compute a weight, normalize to get a distribution. And I'm going to sample a value according to these probabilities. So in this case, it's just a coin flip. I choose x1 equals 0. And then I update my counter. So I'm recording that I saw x2 equals 1 once. OK, and then I'm going to move on to the next variable x2, do the same thing. Move to the next variable x3, kind of do the same thing. And I'm going to just cycle through this for a moment. You can see that the assignment, which is depicted up here, is changing. And down here, I can see that the count of the number of times x2 equals 1 has gone up to 25. And now look, I actually hit a different value. I went to a configuration where x2 equals 2 now. And then I might sample a little bit more, and it'll come back to 1. And you can just watch this for a little while. And you can see over here that these are the estimates of the marginal probability of x2, based on the counts. So these numbers are simply the normalized versions of the counts. So I'm going to speed this up a little bit. So let me do just 1,000 steps at a time. OK, so now I did 1,000 steps of Gibbs sampling. Now I have a lot of counts of x2 equals 1, some counts of x2 equals 2. And now you can see the probabilities are kind of converging to something like 0.6 and 0.3. Let me just hit step a few more times. And you can see that these probabilities are indeed converging to 0.61. Which, if you remember from here, is pretty close to the true marginal probability. OK, so it seems at first glance kind of a wild thing, right? So we're running this algorithm, it's just generating samples left and right. It's kind of random. And yet the randomness is very carefully orchestrated, so that when I sum things up properly, I actually get the right answer out. So let me now go to the image denoising example. So here the goal is, given a noisy image, clean it up.
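Before getting into the denoising details, here is the whole Gibbs loop described above as a minimal Python sketch. The chain factors are made up for illustration (neighboring variables prefer to agree); this is not the course's actual demo code.

import random
from collections import Counter

domain = [0, 1, 2]

def weight(x):
    # illustrative chain factors: weight 2 when neighbors agree, 1 otherwise
    w = 1.0
    for a, b in zip(x, x[1:]):
        w *= 2.0 if a == b else 1.0
    return w

def gibbs_marginal(n=3, iters=10000, query=1):
    x = [random.choice(domain) for _ in range(n)]  # random initial assignment
    counts = Counter()
    for _ in range(iters):
        for i in range(n):
            ws = []
            for v in domain:                # weight of each value, others held fixed
                x[i] = v
                ws.append(weight(x))
            z = sum(ws)
            r, cum = random.uniform(0, z), 0.0
            for v, w in zip(domain, ws):    # the "dart throw" on [0, z]
                cum += w
                if r <= cum:
                    x[i] = v
                    break
        counts[x[query]] += 1               # record a visit for the query variable
    return {v: counts[v] / iters for v in domain}

print(gibbs_marginal())                     # estimated marginal of x2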
And in our simplified version, I have Xi which represents the clean pixel value, which I don't know. Now, a subset of the pixels are observed. So for example, these in green here. And I'm going to clamp those pixel values to the observed value. And then I have a factor that says neighboring pixels are twice as likely to be the same, than different. So let's do Gibbs sampling in this image denoising case. So what Gibbs sampling would do, is it's going to sweep across the image, and sample each variable condition on the way. So suppose I'm landing on this particular pixel value, and I'm trying to figure out what should its value be. So again, I look at the possible values that could be 0 or 1. And for each value, I'm going to compute a weight. So remember from ICM, that I actually don't need to compute the weight of the entire assignment. And I just only need to look at the factors which are dependent on this value. OK, so let's consider v equals 0. So here if I put 0 here, that means this potential is going to be happy. Because [INAUDIBLE] and I'm going to get a 2. And this one is going to disagree. This one's going to disagree and this one is going to disagree. So the weight is 2 times 1 times 1 times 1, which is 2. So now if I try to put a 1 in this position, now this potential says 1, while the others say 2. So now that has a weight of 8. So now to get the probability of Xi equals v given everything else, I'm simply going to sum up and normalize. So I have 2 and 8 here, the normalization constant is 10. So I get probabilities 0.2 and 0.8. Now given this distribution, I'm going to set this value to 1 with probability of 0.8, and 0 with probability 0.2. And then I'm going to keep on going. So here is a fun little demo of Gibbs sampling for an image denoising that runs in your browser. OK, so the idea is that here is an image. And if you hit Control Enter here. You'll see that this is the input to the system. So we have black pixels and red pixels, these are the observed pixels. And white pixels are unobserved. And these are the ones that we want to fill in. So there's a bunch of settings where I'll talk to you about in a second. But if you click here, you can see how-- get a feeling for what Gibbs sampling is doing. Each frame here, each iteration is a full pass over all the pixels. And you can see that it's kind of dancing around, because it's trying to explore different assignments. So one thing you can do, is you can set showMarginals equals true. And what this does, is that instead of visualizing the assignment at a particular iteration for each pixel here, I'm actually visualizing the marginal probability estimate. So this is in general, going to be a number between 0 and 1 which is represented as a shade between black and red here. So this, in some sense, is the kind of best guess at what the reconstruction is. So there are a number of things you can play with. So for example, the fraction of missing pixels. If I reduce this to let's say 0.3. Then the problem becomes easier. And you can see that the reconstruction gets pretty reasonable results. Another fun thing you can play with is, well actually, let me bring down the-- bring up the missing fraction to 1. OK, so that means I don't see any pixel. So here, this is just going to be-- actually let me do that. showMarginals equals false, oops. So here you can see kind of just blind samples from the model, OK? And if I bump up the coherence, if I bump it down, then you'll see kind of a more random pattern. 
If I bump it up to 10, then you'll see more coherence. So remember, this is kind of like the phase transitions that we saw for the [AUDIO OUT]. OK, so I will let you play with this on your own. So let me just conclude here. Actually, one thing before we conclude. So let me try to go back to iterated conditional modes and compare that with Gibbs sampling. Both of them have the same kind of template. You're working with complete assignments, and you're going through each variable and updating the assignment to that variable one at a time. But there are a few differences here. The first salient one is that iterated conditional modes was for solving CSPs, where we're trying to find the maximum weight assignment. Gibbs sampling is for Markov networks, where we're trying to compute marginal probabilities. So as a consequence, for ICM, at each step we're choosing the value to assign to a variable which maximizes the weight. Whereas in Gibbs sampling, we're using the weights to form a distribution and sampling from that distribution. In ICM, we noticed that the algorithm does converge, but often to a local optimum, which is not the best maximum weight assignment. For Gibbs sampling, as you can see from these samples, there's no traditional notion of convergence. The samples are going to keep on changing and keep on changing. So the iterates are not the ones which are converging. What is actually going to converge are the marginal estimates. And under some technical assumptions, these estimates are actually going to converge to the correct answer. We saw that for object tracking. It did a pretty good job there. But there were some technical conditions. One sufficient condition is that all the weights be positive. But more generally, what we need is that going from any assignment to any other assignment via Gibbs sampling has positive probability. Because if you have two disconnected regions, then if you start Gibbs sampling at one particular point, you will never reach the other point. The one important caveat is, Gibbs sampling is wonderful, but in the worst case, it does take exponential time. So these are really-- computing marginal probabilities is a really hard problem. And Gibbs sampling is just a heuristic with some nice asymptotic guarantees. So wrapping up, we looked at computing the marginal probabilities of a Markov network. And we saw that Gibbs sampling did this by sampling one variable at a time. And it counts visitations to each of the values for a given variable. And it's one of these kind of astonishing things, that Gibbs sampling is so carefully constructed that it actually kind of works. And you can prove lots of interesting theorems about it. Finally, Gibbs sampling is just the first taste of a much broader class of techniques called Markov chain Monte Carlo, which are used to produce much richer ways of estimating probabilities in Markov networks. All right, that's the end of this module.
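To recap the denoising update worked through above, here is the local computation at one pixel as a small sketch. It assumes four neighboring factors that give weight 2 when values agree and 1 when they disagree, matching the 2-versus-8 example from the module; the function name is my own.

import random

def sample_pixel(neighbors):
    # neighbors: current values of the four adjacent pixels
    weights = []
    for v in (0, 1):
        w = 1.0
        for nb in neighbors:
            w *= 2.0 if v == nb else 1.0
        weights.append(w)
    z = sum(weights)  # e.g. neighbors [0, 1, 1, 1] give weights [2, 8], so z = 10
    return 0 if random.random() < weights[0] / z else 1

print(sample_pixel([0, 1, 1, 1]))  # returns 1 with probability 0.8, 0 with 0.2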
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_7_Supervised_Learning_Stanford_CS221_AI_Autumn_2021.txt
So far, we've introduced Bayesian networks and talked about how to perform inference in them. In this module, we'll turn to the question of how to learn them from data. So recall that a Bayesian network consists of a set of random variables. For example, cold, allergies, cough, and itchy eyes. The Bayesian network also comes equipped with a DAG, specifying the qualitative relationships between all these different variables. Quantitatively, however, the Bayesian network defines a set of local conditional distributions over each variable Xi, given the parents of Xi. And in this example, we would have probability of c given its parents, which are none, probability of a, probability of h given its two parents, c and a, and probability of i given a. So finally, if we multiply all of these probability distributions together, then we get the joint distribution over all the variables. In this case, we have C, A, H, and I. Then there's a question of how you do inference in Bayesian networks. So in inference, remember, you're given a Bayesian network. You're given some evidence that you observe, for example, H and I equals 1 and 1, for a subset of the variables. And then you're given a query variable, which is something that you're interested in. Let's say you're interested in cold. And the inference algorithm is going to produce a distribution over your query variable conditioned on the evidence. So for every possible setting of the query variable, we have a probability. So we saw many ways of doing this, including, manually, exhaustive enumeration. We can convert Bayesian networks into Markov networks and do Gibbs sampling. And then for HMMs, we have specialized techniques such as the forward-backward algorithm and particle filtering. So inference assumes that all these local conditional distributions are known. But the big question is, where did all of these come from? So all of these numbers are called the parameters of the Bayesian network, the red question marks. And in general, we might not know what they are. So let's try to learn that. So again, as in all learning tasks, we start with the data. So in this case, the training data is going to include examples, where each example is a complete assignment to X. So this is the fully supervised setting, which is the simplest one to start out with. And then the learning algorithm is going to produce parameters. And the parameters are exactly all these red question marks. These are all the local conditional probabilities. So we're going to go through a bunch of examples, and then later show a general principle that ties all of them together. So you might be feeling a little bit that this might be very challenging, because probabilistic inference assumes you know the parameters, and it was already pretty hard, both computationally and perhaps conceptually even. But it turns out that for Bayesian networks at least, somewhat surprisingly, if you're learning from fully supervised data, learning actually turns out to be quite easy. So let's begin. So suppose you're developing Bayesian networks to model how people rate movies. So let's start with the world's simplest Bayesian network, which has one variable, R, which represents the rating of a movie. So the joint distribution is just p of r in this case. The movie rating can be 1 through 5. So first, we have to identify what the parameters are. So the parameters here, theta, are just the probability of 1, probability of 2, probability of 3, probability of 4, probability of 5.
There are five parameters. And if you're a little bit clever, you only need four of them, because the five numbers have to sum to 1. But for the sake of simplicity, let's just say that there are five parameters, OK? And now you're given some training data, some ratings from users. You have a one, you have a three, you have a bunch of fours and three fives. And now the question is, how do you estimate the parameters given the training data? Let's just follow our nose here. Well, intuitively, you would think that the probability of a rating is proportional to the number of occurrences of that particular rating in the training data. So now this is just intuition. It might be a good thing or it might not be a good thing. Well, let's find out later. But let's just go with that for now. So here's the training data. And what I'm going to do is-- the parameters are a probability table. So we're going to see a lot of these over the course of the next few slides. So for every rating, I'm going to count the number of times that it shows up. 1 shows up once, 2 shows up zero times, 3 shows up once, 4 shows up five times, and 5 shows up three times. And now I'm just going to sum up all the counts. That gives me 10. I'm going to normalize to get my probabilities. And that's the probability estimate. That's it. Count and normalize. OK, so let's level up a little bit and talk about two variables. Suppose that now the rating is governed by the genre. So in particular, the Bayesian network is: you first generate the genre, and then you generate the rating given the genre. So now the parameters of this Bayesian network include both the probability of the genre, which contains two parameters, and the probability of rating given genre, which includes 2 times 5 parameters. So 10 parameters, for a total of 12 parameters. Again, if you're being clever, you can get that down to 9. So now we're given some training data. Each training point, remember, is a full assignment to all the variables. So we have our G equals d and R equals 4 here. So now, how do we estimate the parameters given this more complicated Bayesian network? So following our nose again, an intuitive strategy is that we're just going to estimate each local conditional distribution separately and see what happens, OK? So what does that mean? That means for probability of G, I'm just going to count the number of times particular values of G show up. So d shows up 1, 2, 3 times. And c shows up twice. So notice that this is the same kind of calculation as we had before. So now, this is 3/5 and 2/5 if you sum up and normalize. OK, so in estimating p of g, I simply only look at the slice of the examples that matter for this. And same with the probability of R given G. So now I'm going to look at all the possible assignments to the parents of a particular node and also that node. So that's g and r. So d4 shows up twice, d5 shows up once, c1 shows up once, and c5 shows up once. Now I count and normalize, and I get my probability estimate of r given g, OK? So far so good. So in summary, consider each local conditional distribution separately, and then count based on the slice of the data that matters, and normalize. So now let's consider three variables. So we have a genre, whether the movie won an award or not, and the rating. So here, we have a genre and whether it won an award, influencing how well the movie is rated. The joint distribution is p of g times p of a times p of r given g and a.
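Before moving on to the three-variable case, here is the two-variable estimation above as code. The five training pairs are reconstructed to match the counts read off in the lecture (d shows up three times, c twice; d4 twice, d5 once, c1 once, c5 once).

from collections import Counter

data = [("d", 4), ("d", 4), ("d", 5), ("c", 1), ("c", 5)]

g_counts = Counter(g for g, r in data)
p_g = {g: c / len(data) for g, c in g_counts.items()}   # p(g): count, then normalize

gr_counts = Counter(data)
p_r_given_g = {(g, r): c / g_counts[g]                  # p(r | g): normalize within
               for (g, r), c in gr_counts.items()}      # each value of g

print(p_g)          # {'d': 0.6, 'c': 0.4}
print(p_r_given_g)  # e.g. p(4 | d) = 2/3, p(5 | d) = 1/3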
So now, we have local conditional distributions for each of these factors here. So remember that V structures, this type of structure, were really special in Bayesian networks. They give rise to explaining away. It's the thing where, if you marginalize unobserved leaves, you can render things independent. And it was really a hallmark of Bayesian networks. But from the perspective of learning, there's really nothing special here. And to see this, what we're going to do is just-- suppose we have some training data, which includes assignments to all three variables. We're just going to count and normalize again. And so here, we're going to start with p of g. This is exactly the same thing as before. We just look at only the genre. And then we're going to look at p of a, which analogously looks at only the a values, the 0s and 1s, and counts and normalizes. And now the big local conditional distribution is p of r given g and a. So here, I'm going to look at the parents of r and r itself. I'm going to count the number of times this local configuration happens. So I have d, 0, 1 showing up once, d, 0, 3 showing up once, and d, 1, 5 showing up once, and each of these showing up once. And now I want to normalize, so I have to be a little bit careful. I don't want to just add up all these numbers and normalize, because this is conditioned on g and a. So that means for every setting of g and a, I have my distribution over r. I'm going to look at d, 0. So I have one occurrence of r equals 1 and one occurrence of r equals 3. So if I normalize that, it's going to give me half and half. And now for this setting of g and a, I only have one possibility of r. So that has probability 1, and same for these other ones. So again, everything is count and normalize, where you have to pay attention to what you're normalizing over: you're only normalizing over possible values of r, not g and a. So one thing you might note is that all of these probabilities are 1, and the probabilities that are not mentioned here are 0. So you might wonder if this is a good estimate, but we'll come back to that later. So now, let's invert the V structure. Let's look at a different structure. So we have the genre. And suppose we have two people, Jim and Martha. And they're both going to rate this movie. And both of them rate it depending on the genre. G generates R1 and also generates R2. So now we have this three-node Bayesian network, and the estimation is going to be the same. I'll just go through it very quickly. So we have parameters, one for every variable here. And so probability of g is count and normalize; probability of R1 given g is, you count and normalize. Again, remember that I'm normalizing over possible values of r for each value of g. So you can partition the rows based on the value of g. So here, I have 2 and 1. And I'm normalizing: 2/3 and 1/3. And g equals c is just handled in a separate normalization. And then R2 given g is analogous, so I'm not going to go over this. So this is fine, except what I'm going to do now is think about the setting where you have not just two users, but 1,000 users or a million users. Now, you might be a little bit worried, because now, for every user, you might have to have their own local conditional distribution. And the number of parameters might just go up, which means that estimation might be hard, especially for new users. So we're going to consider a slightly different-- it's going to be the same Bayesian network here, but the parameters are different.
In particular, I'm going to consider a single distribution p of r given g, instead of having p of R1 given g and p of R2 given g. So now, how do I estimate the distributions of this model? So let's begin. So probability of g is the same as before. And now for the probability of r given g, I'm just going to count the number of times a particular local configuration shows up, whether r is R1 or R2. So d, 3 shows up once here; d, 4 shows up three times. You have 1 and 2 and 3. So notice I'm counting both occurrences, of R1 and R2. And d, 5 shows up twice: here with R1, and here with R2. c, 1 shows up once, c, 2 shows up once, c, 4 shows up once, and c, 5 shows up once as well. Now, I just count and normalize. So I look at all the d's and I count, sum, and normalize. Now I look at all the c's, and I count and normalize. OK, so when I have only one distribution that is responsible for two nodes, I simply aggregate their counts and normalize. So this is an important slide. So the more general idea that I want to highlight is this idea of parameter sharing in Bayesian networks. And this happens when the local conditional distributions over different variables are actually the same. And to be very precise about that, I want you to look at the following picture. So we have G, R1, and R2. So far, we've looked at Bayesian networks through the lens of inference, where we know that every variable comes with a local conditional distribution. But we didn't worry about where that came from. It was just there. But now, for learning, it matters where it came from. So what we should think about is each of these variables being powered by a local conditional distribution. So g is powered by this table here. R1 is powered by this table. And in the case of parameter sharing, R2 is also powered by this table. So we have a Bayesian network. And behind the scenes, you should think about all these tables, which have arrows kind of hooking up and providing juice to each of these variables. And now, if you didn't have parameter sharing, then R1 and R2 would be powered by different tables. Now, this is an important point. When we're doing inference, you should think about that as reading from the parameters. And when you're reading, you don't care whether you have two copies of something or one copy of something, because you're getting the same thing. But in learning, we're writing to the parameters from the observed variables. In that case, you need to worry about whether you're writing to one memory location or two memory locations. So the right analogy is, think about how in programming you have pass by reference or pass by value. And in parameter sharing, we're passing by reference. So we're passing this parameter into each of these nodes. And when we do learning, we write back into those parameters. And it matters whether they're the same parameters or not. So when would you do parameter sharing like this? Well, it's a trade-off. And it's ultimately a modeling decision. So by doing this, you aggregate your data, which means that you have more data per parameter, which allows you to get more reliable estimates. On the other hand, you end up with less expressive models. For example, if you had a lot of users, you might lose the ability to personalize if you parameter share. And there are, obviously, many intermediate points as well, which we won't get to. So let's look at some other Bayesian networks with parameter sharing.
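Before those examples, here is the shared-distribution estimation above in code. The five training triples are reconstructed to be consistent with the counts in the lecture; with sharing, both R1's and R2's observations are written into the same table before normalizing.

from collections import Counter, defaultdict

data = [("d", 3, 4), ("d", 4, 5), ("d", 4, 5), ("c", 1, 2), ("c", 4, 5)]

counts = Counter()
for g, r1, r2 in data:
    counts[(g, r1)] += 1   # R1's observation goes into the shared table...
    counts[(g, r2)] += 1   # ...and so does R2's

totals = defaultdict(int)
for (g, r), c in counts.items():
    totals[g] += c

p_r_given_g = {(g, r): c / totals[g] for (g, r), c in counts.items()}
print(p_r_given_g)   # e.g. p(4 | d) = 3/6, p(5 | d) = 2/6, p(3 | d) = 1/6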
So we already looked at naive Bayes before, but just to anchor it in this notation, let's say we have a genre and we have a movie review, and we have a Bayesian network which generates each word independently, conditioned on the genre. And so the joint distribution over everything is equal to the probability of genre y times, for each word, the probability of that particular word given y. So the parameters of this Bayesian network are P genre and P word. So now, you can think about doing a little exercise: how many parameters are there? So you look at theta and you say, P genre-- well, that's two parameters, two genres. P word-- that's 2 times the number of words, the number of values that Wi can take on. And so that's it. So notice, importantly, that the number of parameters does not grow with L, even though the number of variables in the Bayesian network grows with L. So now we see the complexities of the parameters and the number of variables to be quite different. You can have a million-variable Bayesian network, but you might have only one parameter, for example. That's quite possible. So here's another example, our friendly HMM. So we have the actual positions of the object, H1 through Hn, and the sensor readings E1 through En. And this should be very familiar by now. So you have an HMM, which has a joint distribution which is given by three distributions: P start of h1, times the transition probability of hi given hi minus 1, times, for each variable, the probability of emitting ei given hi. Again, the parameters are P start, P trans, and P emit. And you can think about how many parameters are in this Bayesian network. Well, you have the number of positions, plus the number of positions squared, plus the number of positions times the number of possible sensor reading values. Again, there is no dependence on the time window, the number of time steps here, n. And this is useful, because if you imagine tracking over a long period of time-- we may have a million time steps-- you don't want the number of parameters to grow the same way. OK, so here, the training data is going to, again, be full assignments to all the variables. And later, in a future module, we'll come back to the case where, in practice, you might only observe the sensor readings. But more on that later. So now, let's present the general case. Hopefully, the intuitions have already been fleshed out. But I just want to write things down with some formal notation. So a Bayesian network, remember, includes variables X1 through Xn. And now we have parameters, and the parameters are a collection of distributions. So I'm going to write that as p subscript d, where d indexes into a set of names. And for the HMM, for example, big D is start, trans, emit. So d is just a label, if you will, a name. So each variable Xi is generated from some distribution. And now the notation gets a little bit hairy, but p sub di is the distribution that powers Xi. And I'm looking up that distribution by its name, di. So you can think about this more formally as just the equation defining what a Bayesian network is: the joint distribution equals the product of the local conditional distributions. But now, I'm being very explicit that every variable i has a particular distribution di that is powering that variable. So the idea of parameter sharing is that di is just the same for multiple i's. OK, so here is the learning algorithm for general Bayesian networks. So the input is a D train, consisting of full assignments to all the variables X1 through Xn.
And the output is going to be all these distributions here. So the algorithm is, again, just count and normalize. So what we're going to do is go through every training example, which is a full assignment to all the variables. For every variable in your Bayesian network, we're just going to increment a counter. OK, so what this counter is: I look at which distribution is powering variable i, and I'm going to increment that counter for the local configuration, which is the assignment to its parents and also the value of Xi. And then I'm just going to normalize: for each distribution and local assignment to its parents, I'm going to set the probability under that distribution of Xi given its parents to be proportional to this count, OK? And that's it. So far, we've presented this count and normalize algorithm, showing a lot of examples. And hopefully, this seems like a reasonable thing to do. But part of you might still be wondering, well, why? Why is count and normalize a reasonable thing to do? And there is a higher principle here. And it's called maximum likelihood. So the principle of maximum likelihood, which is a very old idea in statistics, is that we have our training data here. So we look at the product, over all examples in the training data, of the probability that the Bayesian network assigns to that example. And notice, I'm going to put a semicolon theta here, to recognize the fact that this Bayesian network depends on the parameters now. So this is the likelihood of the data under these parameters. And maximum likelihood is saying, I want to tweak these parameters so that this likelihood is as large as possible. So this should look a little bit like what we were doing in the machine learning modules, where we write down a loss function, which depends on parameters, and which is usually a sum over the data. And we try to find the parameters that minimize the loss. Here, it's the opposite. We're trying to find the parameters that maximize the likelihood. And if you just take a log and you negate it, you actually end up with minimizing a loss as well. But I will ignore that for now. So intuitively, this is a reasonable principle as well. Every setting of parameters gives you some likelihood of the data under the model. And you just want to keep on tweaking that until the likelihood is as high as possible. So having said that, now I'm just going to claim that that algorithm, which we called count and normalize, is exactly solving the maximum likelihood objective. So this is really nice, because it gives us a closed-form solution to this maximum likelihood objective. You don't have to take the gradient of this and iterate and worry about convergence. It's just done. And this is one of the reasons that makes maximum likelihood estimation of Bayesian networks so scalable and intuitive: well, it is scalable-- and well, that was a little bit tautological. All right, so I haven't justified why the maximum likelihood principle leads to the count and normalize algorithm, but let me just provide you a little bit of a taste of why this might be the case. So let's take this small data set: d, 4; d, 5; and c, 5. So if I write down the maximum likelihood objective-- so I have two variables here-- I'm going to expand that. OK, so I have max over theta. And theta, really, here is the probability of genre, the probability of rating given the genre is c, and the probability of rating given the genre is d.
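Before expanding this objective out, here is a tiny numerical check of where this is headed: for the genre column of this data set, d, d, c, the p(d) that maximizes the likelihood p(d) * p(d) * (1 - p(d)) is exactly the count-and-normalize answer, 2/3. The grid search is just for illustration; the closed-form argument comes next.

best_p = max((k / 1000 for k in range(1, 1000)),
             key=lambda p: p * p * (1 - p))
print(best_p)  # 0.667, i.e. 2/3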
So I have three distributions here that I want to optimize. And I'm just going to expand out based on the definition of the Bayesian network. I have probability of genre d times probability of rating 4 given d. So that is the probability of the first data point; times p of d and p of 5 given d, that's the second data point. And then p of c and p of 5 given c, that's the third data point. So I'm multiplying all these probabilities across all the points, and that is the probability of the data given a particular assignment to the local conditional distributions. And now, I've color-coded them on purpose, because what we can do is we can shuffle things around. If you just look at the probability of g, so I'm maxing over that. And that shows up in these three places. And it doesn't affect anything else. So I can just pull that out. And I can pull the green part out, which is r given g equals c. I can pull the blue stuff out, and that's maximizing over p of r given g equals d here. So the punchline here is that we can decompose the maximum likelihood objective, which looks like a big tangled mess, into subproblems, one for every distribution and assignment to the parents of a particular variable. And having done that, now I just have a little local optimization problem here, which you can basically solve in closed form. You can do this. I'm not going to do this for you, but you can introduce a Lagrange multiplier for the sum-to-1 constraint. And you can take some derivatives, set them to 0, and then you get that the maximum likelihood probability is proportional to the counts. And in this case, what we will estimate is that the probability of d is 2/3, the probability of c is 1/3, and so on and so forth. OK, so let me summarize now. So we've talked about learning in fully supervised Bayesian networks, where we're observing instances of all the variables here. So one important concept to take away is this idea of parameter sharing. So we have talked about just a Bayesian network, where inference doesn't care where these parameters come from. But we should really think about each of these nodes as being powered by a particular local conditional distribution. And sometimes, two variables could be powered by the same distribution. And again, inference is reading from the parameters-- learning is writing into the parameters, in which case it matters where these arrows come from. So secondly, we looked at the maximum likelihood principle, which is this kind of high-minded principle that says, maximize the likelihood of your data. And we showed that this is equal to this very pragmatic and simple, intuitive principle of counting and normalizing. And it's this simplicity which makes Bayesian networks, and especially naive Bayes, still very practical, useful, and interpretable. That's the end.
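For reference, here is one way the general count-and-normalize learner described in this module could be sketched in Python. The representation (distribution names, parent maps) is my own; it is not the course's actual code.

from collections import defaultdict

def learn(train, dist_names, parents):
    # train: list of full assignments {var: value}
    # dist_names: var -> name of the distribution powering that variable
    # parents: var -> tuple of parent variables
    counts = defaultdict(float)
    for x in train:
        for var in x:
            d = dist_names[var]
            pa = tuple(x[p] for p in parents[var])
            counts[(d, pa, x[var])] += 1          # count_d(parent values, value)
    totals = defaultdict(float)
    for (d, pa, v), c in counts.items():
        totals[(d, pa)] += c                      # normalize within (d, parents)
    return {k: c / totals[(k[0], k[1])] for k, c in counts.items()}

# Example: G -> R1, G -> R2, with R1 and R2 sharing the distribution "rating".
train = [{"G": "d", "R1": 4, "R2": 5}, {"G": "c", "R1": 1, "R2": 5}]
dist_names = {"G": "genre", "R1": "rating", "R2": "rating"}
parents = {"G": (), "R1": ("G",), "R2": ("G",)}
print(learn(train, dist_names, parents))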
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_3_Probabilistic_Programming_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to talk about probabilistic programming, a new way to think about defining Bayesian networks through the lens of writing programs. And this really is going to highlight the generative process aspect of Bayesian networks. So recall, a Bayesian network is defined by a set of variables. There are directed edges between the random variables that capture qualitative relationships. Then for every variable, we define a local conditional distribution conditioned on the parents of that variable. You multiply all these together, and you get the joint distribution over all of the random variables. And then, given this joint distribution as a probabilistic database, you can go and do probabilistic inference and answer all sorts of questions. So what we're going to focus on today is how to write down this joint distribution, or the Bayesian network. Now, we're going to look at it via the lens of programs. Let's go through this example. So let me write a short program that I claim is going to be equivalent to writing down either this equation or drawing this graph. So here it goes. So first I'm going to draw B from a Bernoulli distribution. So you can think of Bernoulli as just a function that, when you call it, returns 1, or true, with probability epsilon. So B is going to be set to 1, or true, with a probability of epsilon. I'm going to independently do the same for E. And then finally, I'm going to set A equals B or E. So if I run this program, it's going to produce a setting of B, E, and A. So in general, a probabilistic program is just a randomized program such that, if you run it, it sets the random variables. And in particular, it produces an assignment to the random variables. So while you can run the program, it's useful to think about the program itself as just a mathematical construct that's used to define a distribution. In particular, the probability of the program producing a particular assignment is going to be, by definition, the joint probability of that assignment. So let's look at a more interesting example that showcases the convenience of using programming. So this one is going to have a for loop in it. So let's say we're doing object tracking. So we're going to assume that there's some object that starts at 0, 0, and then, for each time step 1 through n, with probability alpha I'm going to go right. So Xi minus 1 is the previous location, and I'm going to add 1, 0 to it to go right. Or, with probability 1 minus alpha, I'm going to go down. So here is the Bayesian network corresponding to this probabilistic program. So you can see that each Xi depends only on Xi minus 1. The cool part is that this is a program, and we can actually run this. So this is implemented in JavaScript behind the scenes. And you click Run with alpha equals 0.5, and each run produces an assignment X1, X2, X3, X4, and so on to the random variables, and we can visualize them. And you can play with alpha. If alpha equals-- actually, let's make this 0.1. Oh, actually, I need to press Control Enter to save. If it's 0.1, then all the trajectories are going to be over here. If it's 0.9, then the trajectories are [AUDIO OUT], OK? So this program specifies what is called a Markov model, which is a special case of a Bayesian network where we have a chain. Each variable is only dependent on the previous one. So with this Markov model, we can ask pretty good questions.
For example, what are the possible trajectories, given the evidence x10 equals 8, 2. So here, I'm going to condition on x10 equals 8, 2. And if I run this, then I'm sampling from all the program traces where I restrict only those ones where x10 is clamped to a 2. So this is a way to visualize the conditional distribution of a probabilistic program. So now I'm going to quickly go through a set of examples of Bayesian networks and by using probabilistic programs, to write them down. So this is going to be a fairly broad and quick overview. So one run of our application is in language modeling, which is often used to score sentences for speech recognition or machine translation. So here is a probabilistic program. For each position in the sentence, we're going to generate a word Xi given a minus 1. So this is actually an NLP. It's called a bigram or, more generally, n-gram model. So here, we generate x1. Maybe that's "wreck". And generate x2 given x1. Maybe that's "a". Then rate x3 given x2. That's "nice". And x4 given x3, that's "beach". So here is an example of object tracking, which that's actually what we're going to study at length in future modules. This is called a hidden Markov model. So here, for every time step, t equals 1 to big T, I'm going to generate an object location Ht. So for example, H1, I'm going to generate 3, 1. And then I'm going to also generate a sensor reading, Et given Ht. So given H1, I'm going to generate E1. And I might get something like just the sum of the coordinates as an example. And then I'm going to move to the next time step. Generate H2, given H1. Maybe that's 3, 2. Going to generate a sensor reading, which is the sum of the coordinates, and then so on. Generate H3, generate E3, generate H4, E4, generate H5, E5. So that specifies the joint distribution over these object locations and sensor readings. And now, a canonical question you might want to ask is, given the sensor readings, where is the object? So here is a generalization of the HMM to allow for multiple object tracking. It's called a factorial HMM. So here, for every time step, now I'm going to have two objects, a and b. And I'm going to generate the location of object o at time step t. For example, here I have H1a and H1b. And I'm going to generate a single sensor reading, which depends on both the objects. Here, I have E1, condition on both H1a and H1b. Go to the next time step. Generate the object or locations for the two objects. And then generate the sensor reading, conditioned on those two objects. Transition to the third time step, and generate the sensor reading. Transition to the fourth time step, generate the sensor reading. So in general, this defines a joint distribution, now, over all object locations for both objects, as well as the corresponding sensor reading. So here is another classic example called naive Bayes, which is often used for a very fast classification. So the way naive Bayes works is that we're going to generate a class for a label, Y. Now go in document classification, I might generate that this document is going to be about travel. And then for each position in the document, I'm going to generate a word, Wi. So for this one, I might generate "beach". Second word might generate "Paris" and then all the way up to WL. So now the typical way you use these naive Bayes models is that you're given a text document, which is the sequence of words. You ask for the label. What is this document? 
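Before the fancier version, here is the naive Bayes generative story above as a small sketch. The probability tables are made up for illustration.

import random

p_genre = {"travel": 0.5, "sports": 0.5}
p_word = {
    "travel": {"beach": 0.5, "Paris": 0.5},
    "sports": {"ball": 0.5, "score": 0.5},
}

def sample(dist):
    r, cum = random.random(), 0.0
    for v, p in dist.items():
        cum += p
        if r <= cum:
            return v
    return v  # guard against floating-point rounding

def generate_document(length=5):
    y = sample(p_genre)                                  # generate the label Y
    words = [sample(p_word[y]) for _ in range(length)]   # each word given Y
    return y, words

print(generate_document())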
So a fancier version of the naive Bayes model is called latent Dirichlet allocation. And here, we're going to assume that a document is not just about one topic, but possibly multiple topics. So I'm going to generate a distribution over topics called alpha. And remember that this is actually a continuous random variable; alpha might take on a value which assigns probability 0.8 to travel and 0.2 to Europe. And then, again, for each element in the document, each position i, I generate a topic Zi. So here I might generate "travel" for Z1. I'm going to generate a word given that topic, so here W1 given Z1, which might be "beach". I move on to the next word, generate a topic, generate a word given the topic, and so on and so forth, until I reach the end of the document. OK, so the typical way you use LDA is that you are given a text document, the words here. What topics is it about? I want to infer what the topics are for each of the words, but also the topic distribution for that document. So here is another example, which generalizes the Bayesian network that we actually saw in a previous module. So in general, let's suppose that you have a bunch of diseases. We're going to generate, for each disease, Di, which is the activity of disease i. So I might have pneumonia, generate a 1, and cold and malaria. And we're going to have a set of symptoms, n symptoms, where for each symptom we generate the activity of that symptom, Sj. So we might have fever, which depends on the diseases. And we might have a cough, which depends on a set of diseases. And we have vomiting, which depends on the diseases. So now, the way you typically use this Bayesian network is that a patient comes in and reports some symptoms. You ask the question, what diseases might they have? And I'll just point out that this is a case where missing information can be handled naturally. If a patient doesn't have-- if you didn't record a particular symptom, then you can just ignore that variable. So here is another example. The motivation is that you have a social network, and you want to model and analyze why certain people are connected with other people. So the model is formally called a stochastic block model. And the idea is that, for each person, we're going to generate a type for that person. So maybe we have three people: a politician, a scientist, and another scientist. And then, for every pair of people, we're going to generate whether those two people are connected. Eij is a Boolean that determines whether those are connected. So this politician and the scientist might be connected. It's a 1. And the generation of this only depends on the types of the two people in consideration. So 2 and 3 are scientists, and they're connected. And this politician and this scientist are not connected. So remember, we are given the social network, which is just the connectivity structure-- so these E's. And we're asked, what is the probability of the people being of certain types? So that was a whirlwind tour of a lot of different popular Bayesian network architectures in the literature. But they all basically boil down to this one, which is that there is a variable or a set of variables H, which are generated first, and then give rise to a set of variables E. So the probabilistic program specifies a Bayesian network: by running it, it gives you a joint assignment. And the probability of producing that joint assignment is the joint probability. There are many, many types of models. I've only given you a very small subsample of them.
What I want you to take away from this is a general paradigm: you come up with stories of how quantities of interest H generate the data that you observe E. So this is really the opposite of how you normally think about machine learning or classification, where you start with the inputs. And then you define a sequence of operations to produce the outputs. In Bayesian networks, often, it's reversed. You think about the quantities of interest first, how they might arise in the world, and then how the data is generated from those quantities of interest. So this paradigm might take a little bit of getting used to. But it might become natural after some practice. All right, that's it.
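As a closing illustration, here is the object-tracking probabilistic program from earlier as runnable Python: each run produces one assignment (a sampled trajectory), and the probability of producing an assignment is its joint probability. The exact coordinate convention is my own reading of the lecture.

import random

def run_program(n=10, alpha=0.5):
    x = [(0, 0)]                      # start at (0, 0)
    for i in range(1, n + 1):
        a, b = x[-1]
        if random.random() < alpha:
            x.append((a + 1, b))      # with probability alpha, go right: add (1, 0)
        else:
            x.append((a, b + 1))      # with probability 1 - alpha, go down
    return x

print(run_program())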
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Game_Playing_1_Minimax_Alphabeta_Pruning_Stanford_CS221_AI_Autumn_2019.txt
All right. Let's start guys. Okay. So a few announcements before we start. So, um, if, you have- if you need OAE accommodations, please let us know if you haven't done that already. So you need to let us know by October 31st because we need to figure out the alternate exam date. So, uh, we'll get back to you about the exact like details around the alternate exam date, but let us know by October 31st. Um, project proposals are also due this Thursday. So do talk to the TAs. Do talk to us, come to office hours, all that. Okay. All right. So today, I wanna talk about games. So, um, so we've started talking about this idea of state-based models, like, the fact that if you wanna have state as a way of representing, uh, everything about- everything that we need to plan for the future. We talked about search problems already. We have talked about MDPs where we have a setting where we are playing against the nature and, and the nature can play, uh, like probabilistically. And then based on that, we need to respond. Uh, and today, we wanna talk about games. So, so at the setup is, is we have two players playing against each other. So we're not necessarily playing against nature which can act probabilistically. We're actually playing against another intelligent agent that- that's deciding for, for his own or her own good. So, so that's kind of the main idea of, of games. All right. So, so let's start with an example. So this is actually an example that we are gonna use throughout the lecture. All right. So the example is, we have three buckets. We have A, B and C. And then you are choosing one of these three buckets. And then I choose a number from the bucket. And the question is, well, your goal here is to maximize the chosen number and the question is, which bucket would you use? Okay. So, so how many of you would choose bucket A? No one trusts me, okay [LAUGHTER] No one trusts me, good. How many of you would choose B? Okay. So now, now people don't trust me [LAUGHTER]. How many of you choose C? Okay. So, so there's a number of people there too. So, so how are you making that decision? So the way you are making this decision is, if you choose A, you're basically assuming that I'm not playing like, like try- I'm not trying to get you. I might actually give you 50. And if I give you 50, that'll be awesome. And you have this very large value that you are trying to maximize. If you think I'm going to act adversarial, and go against you and then try to minimize your, your number, then you're going to choose bucket B, right, because, because worst-case scenario, I'll choose the, the lowest number of the bucket and, and in bucket B, the lowest number is one which is better than minus 50 and minus 5. So, so if you're assuming I'm trying to, like, minimize your good, then you're gonna choose bucket B. And if you have no idea how I'm playing and, and you're just assuming maybe I'm acting ast- stochastically and maybe I'm, like, flipping a coin and then based on that deciding like what number to give you, you might choose C because in expectation, C is not bad, right? Like, C, like, if you just average out these numbers and then pick the average values from A, B, C- A, A and B and C, the average value for A is 0, for B, it's 2 and then for C, is, um, 5. Right, so, so, so if I'm playing it stochastically, you might say, well, I'm probably going to give you something around 5. So you would pick C. Okay. 
So, so today we wanna talk about these different policies that you might choose in these settings and how we should model our opponent and how we formalize these problems as game problems. So this is an example that, that we just started. Okay. So, so to- the plan is to formalize games, talk about how we compute values in the setting of games. So we're gonna talk about expectimax and minimax. And then towards the end of the lecture, we're gonna talk about how to make things faster. So we're gonna talk about evaluation functions as a way of making things faster, uh, which is using domain knowledge to, to, to define evaluation functions over notes. We're also gonna talk about alpha-beta pruning, which is a more general way of pruning your tree and making things faster. Okay. All right. So that's the plan for today. Okay. So we just defined this game and a way to, to go about the scheme is to create something that's called a game tree. A game tree is very similar to a search tree. So this might remind you of search tree where we talked about it like two weeks ago, right. So, so the idea is, we have this game tree where we have nodes in the- in this tree and each node is a decision point of a player. And we have different players here, right, like I was playing or you were playing or we have two different people, like, playing here. So these decision nodes could be for what one of the players, not both of them. And then each root to leaf path is going to be a possible outcome of the game. Okay. So, like, it could be that I'm choosing minus 50 and then your decision was to pick bucket A so that path is going to give us one possible outcome of how things can go. Okay. So, so that is what the tree is basically representing here. Okay. So the, the nodes in, in the first level are the de- decisions that I was making and then the, the first node, the root node are the decisions that you were making in this setting. So if we were to formalize this a little bit more, we're gonna formalize this problem as, as a two player zero sum game. Okay. So, so in this class, a- at least, like, today, we are going to talk about two-player games where we have an agent and we have an opponent. And then we are going to talk about policies and values and for all of those things, think of you- yourself as being the agent. So you're playing for the agent. You're optimizing for the agent. Opponent is this opponent that's playing against you. Okay. So we are also going to, to, like, today, we are going to talk about games, uh, that are turn-taking games. So we're going to talk about things like chess. We're not talking about things like rock-paper-scissors. We will talk about that actually next time when we have, like, like, simultaneous games where you're playing simultaneously. Today we are talking about turn-taking settings. Two-player turn-taking settings. Full observability, we see everything. We are not talking about, like, games like poker where you don't necessarily see, like, you have partial observation and you don't necessarily see the hand of your opponent. Full observation, two-player and also zero-sum games. And, and what zero-sum means is, if I'm winning and if I'm getting, like, $10 from winning, then my opponent is losing $10. So, so the total utility is going to be equal to zero. If I win some amount, my opponent is losing the same amount. Okay. All right. So, so what are the things that we need when we define games? So, so we need to know the players. We have the agent, we have the opponent. 
In addition to that, you need to define a bunch of things. This should remind you of the search lecture or the MDP lecture. So you might have a start state, as S start. We have actions which is a function of state, which gives us the possible actions from state S similar to before. You have a successor function similar to search problems. So a successor function takes a state and action and it tells us what's the resulting state you're going to end up at. And this- and, and you have an end- this end function which checks if you're in an end state or not. And the thing that's different here, there are two things that are different here. One is this utility function. And the utility function basically gives us the agent's utility at the end state. Okay. So one thing to notice here is, is that the utility only comes at an end state. So after you finish the game, like, I've played my chess and I won chess now and this is this chess game. And then, then I get my utility. Like, as I'm making moves, like, through my, my chess game, I'm not getting, getting any utility. Like, you only get the utility at an end state. And, and the way we're defining the utility, is we're defining it for the agents because again we are, we are replaying from perspective of the agent. So, so what would be the utility of the opponent? Minus that, right. So, so negation of that would be the utility of opponent. Okay. I've heard about partially observable Markov decision process. Is this, like, kind of, what it is? Like, is this partially observable? Okay. So the question is, is this partially observable Markov decision process? This is not a partially observable Markov decision processes. Um, there are classes that talk about, like there's- this decision under uncertainty by Mykel Kochenderfer's class that actually teaches that. So you should, you should, you should take classes on that. This is not a partially observable Markov decision process. This is fully observable. You have two players playing against each other. It's a very different setup. [inaudible]. So, so the, the question is, are there any randomness here? And, and so far, I haven't discussed any randomness yet. Later in the lecture, I'll talk actually about the case where there might be a nature in the middle that acts randomly and then how we go about it. But so far, two players playing against each other. Okay. All right. And then the other thing that we need to define when you are defining a game, um, is, is the player. So, so, so player is a function of state. And basically tells us who is in control, like, who is playing now. So in the game of chess, like, whose turn is it now. And then that is the function that, that you are going to define when we are formally defining, um, that game. Okay. All right. So, so let's look at an example. So we have a game of chess. Players are white and black. Let's say you're playing for white. So the agent is white, the opponent is black. And then the state S can represent the position of all pieces and whose turn it is. So, so that is going to what the state is representing. So whose player's turn it is and then the position of all pieces. So actions would be all the legal chess moves that player S can take. And then IsEnd basically checks if the state is checkmate or draw. That is what it is checking. Okay. So, so then what would the utility be? The utility will be, will be if you're, like, you're only going to get it when you win or when you lose or, or if there's a draw. 
The way we define it: Utility(s) is, say, plus infinity if white wins — because the agent is white — zero if there is a draw, and minus infinity if black wins. That's everything we need to define. Student: Why do we have whose turn it is in the state? That's one way of implementing the Player function: Player is a function of the state, so the state needs to encode whose turn it is so you can extract it. Student: You said the utility would be the negative utility for the opponent. Is that assuming they both take the same actions the whole time? No. The game is turn-taking: I take an action, then the opponent takes an action, then the agent, and so on, and only at the very end of the game does the agent get its utility, with the opponent getting the negative of that utility. The actions can be very different, and the policies can be very different — we'll talk about how to come up with them. Student: If white wins you get plus infinity, and if black wins you get negative infinity — don't you lose the zero-sum property? We'll talk about that a bit next lecture. I'm focusing on zero-sum games here because the algorithms we're covering — minimax-type policies, where the opponent minimizes and the agent maximizes — are for zero-sum games. We can also talk after class, and next lecture we'll cover more variations of games. For now I'm making a bunch of simplifying assumptions. Student: So if white wins, does black get 0 utility? No — the utilities need to add up to zero. If white wins, maybe white gets 10 and black gets minus 10; they have to cancel. So the two main characteristics of games we've discussed: first, all utilities come at end states. Along the path you are not collecting utility, as opposed to MDPs, where we were getting rewards throughout the path — here the utility only comes at the very end. Second, different players are in control at different states: at some states it might be your opponent's turn and you can't control anything. All right, let's look at a game that you're going to play: the halving game. We start with a number N. The players take turns, and on a turn you can do one of two things: subtract 1, decrementing N, or replace N with N over 2. So you can divide or subtract. The player who is left with 0 wins. Is everyone following that?
So let's try to formalize the game, and after that we'll figure out what a good policy is. Let's open a new file and define this halving game. We initialize with N, the number we start with. What is our state? The state encodes whose turn it is and the number N. Say our players are +1 and −1. The start state is player +1 with N, so it's the pair (+1, N). Then we define IsEnd: take the state, unpack it into a player and a number, and the game ends when the number equals zero — that's our ending condition. How about utility? We get utility only at an end state, so take the state, unpack it into player and number, assert that the number is 0 (since that defines being at an end state), and return plus infinity if I'm winning and minus infinity if I'm not. The way I'm defining that here is just player times infinity: I'm the agent, player +1, and the opponent is player −1, so if −1 is winning I get minus infinity. The actions are subtract 1 or divide by 2 — subtract and divide. The Player function takes the state, unpacks it, and returns the player; that's how we know whose turn it is. Finally, the successor function takes a state and an action and tells us the resulting state: unpack the state into a player and a number; if the action is subtract, return the new state (−player, number − 1) — minus player, because now it's the other player's turn; if the action is divide, return (−player, number divided by 2). That's it — we've just defined the game, and a sketch of what that file looks like follows below.
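Here is a minimal sketch of that game file, following the verbal description above. The class name, the (player, number) state encoding, and the method names are my reading of the lecture, not necessarily the official course code:

    import math

    class HalvingGame:
        def __init__(self, N):
            self.N = N

        def startState(self):
            # Player +1 (the agent) moves first, holding the number N.
            return (+1, self.N)

        def isEnd(self, state):
            player, number = state
            return number == 0

        def utility(self, state):
            # Utilities exist only at end states.
            player, number = state
            assert number == 0
            # As described in lecture: player * infinity, so if it is
            # player -1's turn at 0, the agent gets minus infinity.
            return player * math.inf

        def actions(self, state):
            return ['-', '/']  # subtract 1, or divide by 2

        def player(self, state):
            player, number = state
            return player

        def successor(self, state, action):
            player, number = state
            # Either way, control passes to the other player (-player).
            if action == '-':
                return (-player, number - 1)
            else:  # action == '/'
                return (-player, number // 2)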
We'll actually play this game in a little bit, but first let's talk about what a solution to a game is — what we're trying to find. If you remember MDPs, the solution was a policy: a function of state that returns the action to take in that state. Games have policies too, but now there are two players, so the policy should depend on the player as well: pi_p is the policy of player p. As before, it can be a function that takes a state and returns an action — a deterministic policy: in a given state, it deterministically tells me what action to take. We can also define stochastic policies. A stochastic policy takes a state and an action and returns a number between 0 and 1: pi_p(s, a) is the probability of player p taking action a in state s. In the bucket example, maybe half the time I'd pick the number on the right and half the time the number on the left — that's a stochastic policy: instead of deterministically naming the action, I'm giving you a distribution over actions. So we have deterministic policies and stochastic policies, and in our game we could follow either. Student: In what case would you want a stochastic policy versus a deterministic one? We'll cover that more next time; it depends on what game you're in — stochastic policies buy you some properties and deterministic policies buy you others. Right now we're just defining them as things that can exist. Also, we might model our opponent as acting deterministically if we know exactly what they do; sometimes I have no idea — maybe I've learned their behavior somehow and there's randomness in it — and then I'd use a stochastic policy for how my opponent plays against me. Okay, so now that we know a policy is what we want, let's write one for this game. I'm going to define a human policy, by which I mean the action comes from a human — one or two of you. I'll need two volunteers, but let's quickly write it up first. The human policy just gets the action from the keyboard — remember the actions are divide or subtract 1 — and if the action is valid, it returns it. That sounds like a good policy; a sketch follows.
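A hedged sketch of that human policy, written against the hypothetical HalvingGame interface from the earlier sketch:

    def humanPolicy(game, state):
        # Keep prompting until the keyboard gives us a legal action.
        while True:
            action = input('Input action ("-" or "/"): ').strip()
            if action in game.actions(state):
                return action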
Now I want two humans to play against each other. The policy for my agent, player +1, is the human policy, and my opponent's policy is also the human policy. And let's say the game starts from 15. So how do we ensure the game progresses? While you're not in an end state: print the state, get the player out of the state (remember the state encodes the player), look up that player's policy — we've defined policies for both players — and then the action comes from the policy at that state, and the new state is just the successor of the current state and action. So the while loop figures out what state we're in, what policy we're following, and where we end up, via the successor function. At the very end, print the utility — plus or minus infinity. All right, who wants to play this? Okay, one person — you're the agent, player +1 — and you're the opponent, player −1. [The class plays: player +1 decrements from 15, player −1 divides at 14, and so on.] [LAUGHTER] You get the point — wait, did I make you lose just now? [LAUGHTER] Sorry, my bad. But you get the utility at the end, and you can see the interface. We don't have time for another pair, but the code is online, so play with it; we'll run one more version against an automated policy later. So we just saw human policies playing against each other. Again, a policy either takes a state and an action and gives a probability, or takes a state and gives an action. A deterministic policy is just an instance of a stochastic policy — one that picks a single action with probability 1. Before moving on, here's the driver loop we just ran, sketched out.
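A minimal sketch of that driver, again assuming the hypothetical HalvingGame and humanPolicy names from the earlier sketches:

    game = HalvingGame(N=15)
    policies = {+1: humanPolicy, -1: humanPolicy}

    state = game.startState()
    while not game.isEnd(state):
        print('state:', state)
        player = game.player(state)     # whose turn is it?
        policy = policies[player]       # that player's policy...
        action = policy(game, state)    # ...chooses the action
        state = game.successor(state, action)
    print('utility:', game.utility(state))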
Now we want to talk about how we evaluate a game. Suppose someone hands me a policy for the agent and a policy for the opponent, and I just want to know how good that pair is. If you remember, the MDP lecture started with policy evaluation, and we're doing the exact analogue here. Say someone tells me my agent is always going to pick bucket A — that's all it does — and my opponent acts stochastically, giving me each of the two numbers with probability one half. How good is this? Going back to the game tree: my agent picks bucket A, so with probability one we end up there and with probability zero in any other bucket. Then my opponent stochastically picks minus 50 or 50, so the value of that node is the expectation: 50% of the time it's minus 50 and 50% of the time it's 50, so the value is 0. And since my agent is picking A, the value at the top is 0 as well. You can see how values propagate up from the utilities: the utilities live at the leaf nodes, but once I know who is following what policy, I can compute a value for every node and push values up the tree. In this case, the value of the start state, evaluating this particular pair of policies, equals 0. So in general — and this is the analogue of policy evaluation — someone gives me the policies and I evaluate how good the situation is, and you can write a recurrence to compute it. We want this value V_eval, which evaluates a given pair of policies and is a function of state. It equals Utility(s) if we're already at an end state. Otherwise, I have access to both the opponent's policy and the agent's policy, so I take an expected sum over the possible actions: if Player(s) is the agent, it's the sum over actions of pi_agent(s, a) — say it's a stochastic policy — times V_eval of the successor state, Succ(s, a). And if Player(s) is the opponent, it's the same thing with the opponent's policy, which was given to me: the sum over actions of pi_opp(s, a) times V_eval(Succ(s, a)). This recurrence is intuitive, and we saw the same pattern in search: start with the utilities at the leaf nodes and push them back up, weighting each edge of the tree by the probability your policies assign to taking it. Does this make sense?
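Written out in symbols — this is just a transcription of the recurrence described above:

\[
V_{\text{eval}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s)\\[2pt]
\sum_{a \in \text{Actions}(s)} \pi_{\text{agent}}(s,a)\, V_{\text{eval}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{agent}\\[2pt]
\sum_{a \in \text{Actions}(s)} \pi_{\text{opp}}(s,a)\, V_{\text{eval}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{opponent}
\end{cases}
\]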
So that was evaluating a game. But what if now I want to solve for what the agent should do? I'm the agent; I don't know pi_agent yet, and I need to figure out what policy to follow. That brings us to expectimax: in a scenario where I know what my opponent does — I'm still assuming I know the opponent's policy — what is the best thing I can do as the agent? In the bucket example, if you knew I was acting probabilistically, what would you do? Pick the action that gives you the maximum value, because you're trying to maximize your own value. So the recurrence needs to change. I'll call the new quantity the value of the expectimax policy — I'm not evaluating a given agent policy anymore; I want to figure out what the agent should do — so I'll call it V_exptmax. Since I know the opponent's policy, the opponent's case doesn't change at all. But for the agent, I erase the sum over the agent's policy — I don't have that policy — and the agent should take the thing that maximizes this value, so I take the max over all possible actions. This should remind you of value iteration: in the MDP lecture we weren't evaluating, we were maximizing our value, and this is the analogue — figure out the policy that maximizes the agent's value under the scenario that I know what the opponent does. Back to the example: say I know my opponent acts stochastically with probability one half each. Then the values of the buckets are 0, 2, and 5, and since I'm maximizing my own value, I pick the one that gives me 5 — that's shown with the upward triangle, a max node — so I pick bucket C. We call this the value of the expectimax policy, and from the start state it equals 5, because that's what I expect to get. Question back there? Student: [inaudible] Yes — this assumes I know my opponent's policy, and I maximize my own value knowing what the opponent would do in expectation. So this is the updated recurrence: if the agent is playing, we maximize the value of expectimax.
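Written out, the expectimax recurrence is the evaluation recurrence with the agent's sum replaced by a max:

\[
V_{\text{exptmax}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s)\\[2pt]
\max_{a \in \text{Actions}(s)} V_{\text{exptmax}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{agent}\\[2pt]
\sum_{a \in \text{Actions}(s)} \pi_{\text{opp}}(s,a)\, V_{\text{exptmax}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{opponent}
\end{cases}
\]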
In general, though, I don't know the policy of my opponent — nothing hands me pi_opp. So what should we do? One option is to assume the worst case: the opponent is out to get me and will play to minimize my value. That's a fair assumption, and we'll discuss a little later in the lecture whether it's always the best thing to do, but for now: if I know nothing about my opponent, I can assume they act adversarially against me. That introduces minimax, as opposed to the expectimax we just talked about. Under a minimax policy, I'm the agent trying to maximize my own value, and I assume my opponent acts adversarially — my opponent is really trying to minimize my value. That means bucket A gives me minus 50, bucket B gives me 1, and bucket C gives me minus 5, and under that assumption I pick the second bucket, because it gives me the highest value. That's the minimax policy. How does the recurrence change? Call it V_minimax. If the agent is playing, the agent still maximizes — that part is unchanged. What if the opponent is playing? The opponent minimizes: I don't have access to pi_opt, so I remove the expectation and say the opponent takes the action that minimizes V_minimax(Succ(s, a)). That's how you compute the value of a minimax policy. Student: Is this assuming the adversarial agent consistently tries to minimize the agent's utility? Yes. Student: What happens when the adversary doesn't always do that, but also acts stochastically? Good question. If the opponent is not always adversarial, there's some stochastic policy describing what they do, and if you have access to it you can do something like expectimax. If you don't, maybe you act worst-case and assume they always minimize. Any prior knowledge you have lets you act better — or evaluate the value of where you stand better. We'll talk about evaluation functions later in the lecture, and you can look back at this then. So here the value of minimax from the start state is 1 — does everyone see that? Assuming my opponent is adversarial, the buckets are worth minus 50, 1, and minus 5, and maximizing over those, the best I can get is 1. That's how we compute V_minimax. There's really no analogue of this in the MDP setting, because in MDPs there's no opponent playing against us; the recurrence is the one we already have on the board. So what would the policy be? The policy is just the argmax of V_minimax: the agent's policy pi_max(s) is the argmax over actions of V_minimax(Succ(s, a)), and the opponent's policy pi_min(s) is the argmin of V_minimax — which is intuitive. That's how you extract the actual action to take. All right, let's go back to the halving game. We want to code up what a minimax policy does in this setting, and maybe play against it afterwards. A minimax policy is a policy, so it's a function of state, and we'll just write the recursion we have on the board — so we're recursing over the state.
If you're in an end state, what do we return? Just the utility — there's no action there. If you're not in an end state, you're either maximizing or minimizing over a set of choices, so let's build those choices so we can call max or min on them. We iterate over all the actions we have, and each choice recurses on the successor state: recurse over game.successor(state, action). I also carry the action along, because I want to read off the policy later — so the recursive function returns a value and an action: take the value from the first slot and the action from the second. If the player is +1, that's the agent, and the agent maximizes over the choices; if the player is −1, that's the opponent, and the opponent minimizes over the choices. That's exactly the recursion on the board. Recursing from the given state gives us a value and an action, so let's print them out — minimax tells us the action and the value you can get — and since it's a policy, return the action. Now I'll say the agent, player +1, is still a human policy, playing against a minimax policy. Who wants to play this? It's a little scarier to play against minimax. [LAUGHTER] Okay — you're the agent, player 1, starting from 15. What do you want to do? [The volunteer moves; minimax replies with subtract, leaving 6, and the printed game value is minus infinity.] So you just lost the game. [LAUGHTER] Why do I know that? It's player −1's turn at 7, the minimax policy takes action minus, we're at 6, and the value of the game is minus infinity — you're playing against a minimax policy and you're already at minus infinity, so you've lost. Anyone want to try again? [Another volunteer subtracts; the printed value is plus infinity for a moment — a win was available — but minimax also subtracts, and at 13 the volunteer loses again.] [LAUGHTER] You need to alternate between the two moves in the right way — I think that's the best policy. But play with this to get a sense of how it runs; the code is online, so feel free to experiment and figure out the best policy to use. So that was the minimax policy, and this is the recurrence we get for it; a sketch of the code is below.
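A hedged sketch of that minimax policy, against the same hypothetical game interface (ties between equal-valued choices break arbitrarily here):

    def minimaxPolicy(game, state):
        def recurse(state):
            # Base case: utilities live only at end states.
            if game.isEnd(state):
                return (game.utility(state), None)
            # One (value, action) pair per legal action.
            choices = [(recurse(game.successor(state, action))[0], action)
                       for action in game.actions(state)]
            if game.player(state) == +1:
                return max(choices)   # the agent maximizes
            else:
                return min(choices)   # the opponent minimizes
        value, action = recurse(state)
        print('minimax says action = {}, value = {}'.format(action, value))
        return action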
Now I want to spend a little time on properties of this minimax policy. We've talked about two types of policies so far. Expectimax says: I, the agent, am trying to maximize, but I know what my opponent is going to do, so I assume the opponent does whatever it does and maximize based on that. For example, I might follow what I'll write as pi_exptmax(7) — everything in red is for the agent, everything in blue for the opponent — which means: I maximize assuming my opponent plays pi_7, where pi_7 is just a name for some particular opponent policy; it could be anything, but call it pi_7. The value we get is the expectimax value. The other value we talked about is the minimax value: I, the agent, maximize assuming the opponent minimizes, and the opponent actually does minimize, following pi_min. So those are the two values so far, and I want to talk about their properties. Student: Is there a way to mix the two — like, in expectimax, you give a probability distribution over actions; why not give a higher weight to the action that minimizes our reward? This table might address that, because it considers four different cases, not just two — so it may cover what you're proposing. Let's go through it first and come back if it doesn't answer the question. All right, so pay attention to this part — the table is not that confusing, but it can get confusing. I'll use red for the agent and blue for the opponent. [LAUGHTER] For the agent we have two options. The agent can play pi_max — I'm going to maximize assuming you're going to minimize. Or the agent can play pi_exptmax(7) — I'm going to maximize assuming you follow this pi_7. And there are two things my opponent can do: the opponent can follow pi_min — just minimize — or the opponent can follow some other policy pi_7. Say that in the bucket example, pi_7 is just acting stochastically: half the time pick one number, half the time the other. Let me draw the tree again so we can work through examples: the bucket example, with minus 50 and 50 in bucket A, 1 and 3 in bucket B, and minus 5 and 15 in bucket C.
So I want to talk about some properties of V(pi_max, pi_min), which is what we've been calling the minimax value. The first property is that V(pi_max, pi_min) is an upper bound on the value of any other agent policy — any pi_exptmax, say — assuming my opponent plays as a minimizer. In other words, if my opponent is really trying to get me, the best thing I can do is maximize; anything else does no better. That's intuitive — it's an upper bound. Let's check it on the example. What is V(pi_max, pi_min)? If the opponent is a minimizer, the buckets are worth minus 50, 1, and minus 5, and if the agent is a maximizer, it picks bucket B, so V(pi_max, pi_min) = 1. The claim is that this is at least V(pi_exptmax(7), pi_min): the setting where my agent follows expectimax but my opponent still plays pi_min. What does that value correspond to? The agent takes an action assuming the opponent acts stochastically, so it thinks the buckets are worth 0, 2, and 5, and trying to maximize its own value, it goes the bucket C route. But it turns out the opponent was not doing that — the opponent was actually a minimizer — so it hands the agent minus 5. So V(pi_exptmax(7), pi_min) = minus 5, and indeed 1 ≥ minus 5. That's the first property: if my opponent is terrible and trying to get me, the best thing I can do is maximize — I shouldn't do anything else. The second property is that the same V(pi_max, pi_min) is now a lower bound on the setting where the agent maximizes assuming the opponent minimizes, but the opponent was actually not minimizing — the opponent was following pi_7. In other words, if you play as though your opponent always minimizes, you've secured a lower bound, and if the opponent ends up doing something else, you can only do better than that lower bound. We just showed V(pi_max, pi_min) = 1. What does V(pi_max, pi_7) correspond to? The agent assumes you're trying to get it, so it goes down the bucket B route, the one with the highest worst-case value. But you were not trying to get it — you were following pi_7.
Following pi_7, the opponent gives 1 half the time and 3 half the time, which averages to 2, so the agent gets value 2 instead of value 1: V(pi_max, pi_7) = 2. That's the table entry where the agent plays the maximizer assuming a minimizing opponent, but the opponent just follows pi_7. So what I've shown so far is actually very intuitive, even if it looks complicated: the minimax value is an upper bound if the opponent is truly adversarial — the best you can do then is maximize — and it's a lower bound if the opponent is not that bad. Student: So here the opponent's policy is completely hidden from the agent? Yes — the agent doesn't actually see what the opponent does. Even in the expectimax case, the agent thinks the opponent follows pi_7, but maybe it does, maybe it doesn't. Whenever we talk about expectimax and minimax, the agent never observes the opponent's policy directly; it can only model what the opponent does. One more property — and this goes back to the earlier question. If you know something about your opponent, you shouldn't play the minimax policy; you should play the thing that uses that knowledge. Concretely: V(pi_max, pi_7) — you maximize thinking the opponent minimizes, but the opponent actually plays pi_7 — is less than or equal to V(pi_exptmax(7), pi_7), where you maximize with knowledge of the opponent's policy and the opponent actually does follow it. Remember, the first argument is always the agent's policy and the second the opponent's. We already computed the left side: 2. What about the right side? The agent maximizes assuming the opponent is stochastic, so it sees 0, 2, and 5, goes the bucket C route, and thinks it'll get 5; and the opponent really does end up following pi_7, the stochastic policy, so the agent really gets 5. So V(pi_exptmax(7), pi_7) = 5, which is greater than or equal to V(pi_max, pi_7) = 2, as claimed for this example.
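Chaining the three properties together:

\[
V(\pi_{\text{exptmax}(7)}, \pi_{\min}) \;\le\; V(\pi_{\max}, \pi_{\min}) \;\le\; V(\pi_{\max}, \pi_{7}) \;\le\; V(\pi_{\text{exptmax}(7)}, \pi_{7}),
\]

which in the bucket example reads \(-5 \le 1 \le 2 \le 5\).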
Student: [inaudible] — do the actions of the opponent always [inaudible]? If you know something about the stochasticity, that's enough. Here, I knew the opponent was picking each number half the time. I might instead have known that the opponent follows a deterministic policy and always picks the left one, and I could follow the expectimax policy under that knowledge instead. It could be anything — the whole idea of expectimax is that I have some model of the opponent's policy, stochastic or deterministic, and I maximize under it. Student: Does that mean that, transitively, the bottom-right entry is always greater than the bottom-left one? Yes — chaining the inequalities, it is. And that makes sense: the last inequality is basically saying that if you follow expectimax, you know something about your opponent, and your opponent actually ends up doing that, your value should beat pretty much anything, because you played with that knowledge. Student: When you say knowing something about the opponent, is that just knowing it acts stochastically, or knowing exactly what it will take? It's knowing exactly what they'll do: here I knew that half the time they take one action and half the time the other, and I used that knowledge. Student: What is the expectimax policy if the opponent is following pi_min — do you still maximize? The expectimax policy assumes it has access to pi_opp and takes the expected sum over it — that sum over there. But I see what you're saying: if pi_opp is actually pi_min, then yes, they coincide — if you know your opponent acts as a minimizer, expectimax just becomes minimax. All right, let me move ahead a little. A simple modification of this game is to bring nature in — there was a question earlier about chance, and yes, you can actually put chance inside the game. Say you have the same game as before: you choose one of the three bins, and then you flip a coin, and if heads comes up, you move one bin to the left, with wraparound. So 50% of the time tails comes up and nothing changes, and 50% of the time you get heads.
On heads you end up in the neighboring bin instead of the one you originally picked. So we've added a notion of chance, and it acts like a new player — it doesn't actually make things that much more complicated. In some sense we have a policy for the coin, which is nature: half the time it does nothing, half the time it shifts you to the neighboring bin. And we get a new tree with a whole extra level where chance plays: we have max nodes, we have min nodes, and we also have these chance nodes, which sometimes take you to the original bucket and 50% of the time to a neighboring one. But the whole story stays the same — nothing fundamental changes. You can still compute value functions and push them up the tree with the same sort of recurrence; it just feels like three things are playing now. This is actually called expectiminimax: the value of expectiminimax here, in this example, is minus 2. There's a min node for the opponent, an expectation node for what nature does, and a max node for what the agent should do — that's why it's called expectiminimax — and you compute the value the same way. Student: So when the game plays out, there are two players: I pick a bin, then you flip a coin, shifting it left or not, and then the opponent picks the number? Yes — there are still two players, plus the coin as a third thing. So the way to formalize this: the players are the agent, the opponent, and the coin, and the recurrence changes a little. The minimax recurrence just returned the utility at an end state and took a max or a min otherwise; now, if it's the coin's turn, we take an expected sum over the policy of the coin — we just have a new term for when the coin plays. Everything follows naturally from what we'd expect. So the summary so far: we've been talking about max nodes, chance nodes — what if there's a coin in there — and min nodes, and about composing these sorts of nodes together to create a minimax game or an expectimax game; and the value function comes from the usual recurrence we've been doing in this class, pushing expected utility up from the utilities at the leaves to every node in the tree.
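Written out — with the name V_exptminmax as my shorthand for the value just described — the expectiminimax recurrence is:

\[
V_{\text{exptminmax}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s)\\[2pt]
\max_{a \in \text{Actions}(s)} V_{\text{exptminmax}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{agent}\\[2pt]
\sum_{a \in \text{Actions}(s)} \pi_{\text{coin}}(s,a)\, V_{\text{exptminmax}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{coin}\\[2pt]
\min_{a \in \text{Actions}(s)} V_{\text{exptminmax}}(\text{Succ}(s,a)) & \text{if } \text{Player}(s) = \text{opponent}
\end{cases}
\]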
There are other scenarios you might want to think about — for your projects, for instance — other variations of games. What if you're playing against multiple opponents? So far we've talked about a two-player setting with one agent and one opponent, but with several opponents you can think about how the tree changes. Or the turn-taking aspect: what if the game is simultaneous rather than turn-taking? Or settings where some actions grant an extra turn, so you move twice before the next person moves. Some of these come up in the homework, so think about variations of games in general — they're kind of fun. Now, a little about the computational aspects of this. It's pretty bad. [LAUGHTER] We built a game tree, which is similar to a search tree, so we're taking a tree-search approach. With branching factor b and depth d, the time is exponential — on the order of b^{2d} in this case. Why 2d? The agent plays and then the opponent plays, so that's how I'm counting: d rounds of play means 2d plies. The space is O(d), but the exponential time is the problem. For a game of chess, the branching factor is around 35 and the depth around 50, so b^{2d} is on the order of the number of atoms in the universe — that's not doable; we can't use these methods directly. So how do we make things faster? We'll talk about two approaches in this class. The first is evaluation functions: use domain-specific knowledge about the game to define features that approximate the value function at a particular state — I'll talk about that in a moment. The second, which is simple and rather nice, is alpha-beta pruning: it gets rid of part of the tree once it realizes you never need to go down that subtree, so it's a pruning approach that explores only parts of the tree. We'll talk about both. All right, evaluation functions. The breadth and depth of the game can be really large, which is not great, so one approach is to limit the depth: instead of exploring everything in the tree, go down only to a particular depth, and when you get there, call an evaluation function. If you were to search the full tree, you'd use the recursion we already wrote — the minimax recursion going over all the states and actions, over all of the tree.
With a limited-depth tree-search approach, you carry a depth d and decrement it as you descend — every time you go through an agent-opponent round — and at some point d becomes 0: you've reached your depth limit, and you call an evaluation function on the state you've got. This evaluation function plays much the same role as future cost did when we talked about search problems: if you knew it exactly, you'd be done — but you don't, because knowing it exactly would mean solving the whole tree-search problem. In general you can have some weak estimate of what the future value would be. So an evaluation function Eval(s) is a weak estimate of V_minimax(s) — a weak estimate of your value function — and the analogy is future costs in search problems. How do we come up with an evaluation function? In a similar manner to the learning lecture: we define features and weights on those features. Think about how we actually play chess: we consider a set of actions, see where we'd end up, and based on where we end up, we evaluate how good that board is — we have some notion of features of a good-looking board, and that lets us decide which action to pick. An evaluation function does the same thing: it tries to figure out what we should care about in a specific game — in chess, things like the number of pieces we have, the mobility of those pieces, whether our king is safe, whether we have central control — and gives values to them. For the pieces, for example, we can look at the difference between our counts and our opponent's: my number of kings minus the opponent's number of kings seems really important — if I don't have a king and my opponent does, I've lost the game [LAUGHTER] — so you might put a really large weight on that, and you'd also care about differences in the number of pawns, queens, and the other piece types on the board. Similarly, my number of legal moves minus the opponent's gives some notion of the mobility of that state. All of that lets you score how good the board is.
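A minimal sketch of that depth-limited recursion, against the same hypothetical game interface. The evaluation function here is a stand-in that returns 0 (there's no obvious one for the halving game), and the exact convention for when to decrement the depth is my reading of the lecture — once per agent-opponent round:

    def evaluation(state):
        # Placeholder Eval(s): a weak estimate of V_minimax(s).
        # A real one would be a weighted sum of domain features
        # (piece differences, mobility, ... in chess).
        return 0

    def depthLimitedValue(game, state, depth):
        if game.isEnd(state):
            return game.utility(state)
        if depth == 0:
            return evaluation(state)        # stop searching, estimate
        values = []
        for action in game.actions(state):
            succ = game.successor(state, action)
            # Decrement once per agent-opponent round: when the
            # opponent (-1) has just moved, a round is complete.
            d = depth - 1 if game.player(state) == -1 else depth
            values.append(depthLimitedValue(game, succ, d))
        if game.player(state) == +1:
            return max(values)              # agent maximizes
        else:
            return min(values)              # opponent minimizes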
We'll come back to this next time when we think about what weights to pick for each of these features — think "learning" when you think about where the weights come from. So, the summary so far: O(b^{2d}) is pretty bad, and an evaluation function tries to estimate V_minimax using some domain knowledge. Unlike A*, we don't get any guarantees on the error of these approximations — but it's an approximation, people use it, and it works pretty well. All right. Now I want to spend a bit of time on alpha-beta pruning, because this is important. The concept is also pretty simple, but it's one of those things where you have to pay attention to see what's happening. Suppose you want to choose between bucket A and bucket B, you want the maximum value, and you know the values in A fall in the range 3 to 5 while the values in B fall in 5 to 10 — the ranges don't really overlap. In that case, if you're picking a maximum, you don't care about the rest of bucket A at all: you already know B gets you at least 5, you're happy with B, and you shouldn't even look at A. That's the underlying concept of alpha-beta pruning: maintain a lower bound and an upper bound on values, and when the intervals don't overlap, drop the part of the subtree you no longer need to work on. Here's an example. We have these max nodes and min nodes. You go down and see a 3 under a min node — the min node's children are 3 and 5, so the min node is worth 3. When I get back to the max node above it, what I know — without looking at anything else, without even looking at the subtree on the other side — is that this max node will get 3 or higher. Does everybody agree with that? Okay. Then I go down to the other child, a min node, and I see a 2: the min node will get a value less than or equal to 2, and "less than or equal to 2" has no overlap with "greater than or equal to 3," so I shouldn't worry about that subtree at all. Let me draw it out, since that's the whole concept of alpha-beta pruning: this max node, having seen a child worth 3, will end up with 3 or higher — it's a max node, it won't take anything smaller. Then at the min node I see the 2, so whatever it gets is at most 2, and "at most 2" is the value that would be popped up. At most 2 has no overlap with at least 3, so I can completely ignore that side of the tree — I never need to know what's happening down there, I don't even need to look at it. Yes — question?
Student: [inaudible] — shouldn't we get a value greater than or equal there? It's a minimum node, right, so it's going to be less than or equal. It's a min node: even if I see a 10 or a 20 down there, I'm not going to pick it — the node is worth 2 or lower. So whether it's 10 or 100 or whatever subtree is there, we're not going to look at it. That's the whole concept. All right, let me go to this slide. The key idea of alpha-beta pruning is this: the optimal path ends at some leaf node with some utility, and that utility is what gets pushed up. The interesting thing is that along the optimal path, the values of all the nodes are equal to each other — they all equal the utility that gets pushed all the way to the top. Because of that, the bound intervals along a candidate path have to keep intersecting: if this were the optimal path, the value at this node would have to equal the value at that node, and so on. If the intervals have no intersection, there's no way those values can all be equal, and no way for that path to be the optimal path. That's the reason this works: along the optimal path, you have the same value throughout. So how do we actually do this? We keep a lower bound on max nodes, which I'll call a_s, and we also keep track of b_s, an upper bound on min nodes. If the intervals stop overlapping, we drop that subtree; as long as they overlap, we keep updating a_s and b_s. Here's an example. We start at the top node, and suppose we've somehow found out that this top node is worth at least 6. So a_s = 6: a lower bound on my max node, telling me the optimal value is something greater than or equal to 6. Then somehow we get to a min node and realize it's at most 8. You're here, 8 is above 6, the intervals still overlap, we're all good: b_s = 8, an upper bound on the min node, so the value along the optimal path through here is at most 8. So far so good. Then somehow I find out that the next max node is at least 3. At least 3 should be fine too — calling these nodes s1, s2, s3, we have a_{s3} = 3 — but the 6 already does the job, so I don't need to worry about that 3. Then, at the last node, a min node, I find that b_{s4} = 5.
And what this tells me is that the value should be less than or equal to 5. So I'm going to update less than or equal to 8 to less than or equal to 5. And now the intervals don't overlap. So what that tells me is that this path is not going to be the optimal path, because there is no overlap; we're not going to find this one number that is going to be the utility. And what that tells me is that I can actually ignore that whole sub-tree, because that's not going to be on my optimal path. I can get rid of it, I can ignore it, okay. Yes? Do we also ignore 3 if the beta is equal to alpha, if we already have something else? Is that not the same thing? Yeah. So we're ignoring 3 in a different way. We're ignoring the value of 3 because it is already encoded here, but we're ignoring the subtree under the 5, as in not exploring it. I still needed to explore things after the 3, because with the 3 we still had an overlap with the beta. So with the b value, we are looking at the overlap between your upper bound on the min node and your lower bound on the max node. That interval is the interval you're making sure still has values in it. One example: if the 2 or 3 extend, do you just ignore them anyway because you already had something else that's [OVERLAPPING], is that optimal? Yeah, yeah. I think so. So if you already have, say, if 3 were 2, is that what you're saying? Yeah, so you want to have non-trivial intervals, basically, yes. So if it is the same value, you still, yeah, you don't have a non-trivial interval. And yeah, question? I was wondering how we got 6 and 8 and 3. Oh, this is an example of that; imagine somehow [NOISE]. But we will talk about some examples where we get them. So I'll talk about one more example where we actually get these, but for now just assume somehow we have found this. Yes? Um, on the top example, I don't understand why 3 is an upper bound or 2 is a lower bound. So the actual values, um, I'm not showing a full example here, so the actual values are coming from somewhere that I'm not talking about yet, but- [OVERLAPPING] [inaudible] Oh, the one at the top? Okay, oh sorry. Yeah, so the one at the top, right: this is a min node, and this is a max node, right? So at my min node, I found out that the minimum between 3 and 5 is 3, right? So the max node is maximizing between 3 and a bunch of other things; that's what it is supposed to do, right? So if it's maximizing between 3 and a bunch of other things, then it's at least going to be 3. It's not going to be 2, there is no way for it to be 2, and it's not going to be 0, right? Because it's going to take the maximum of 3 and something else. So that's why I'm saying, well, whatever value I'm going to get at this max node is going to be greater than or equal to 3. Does that make sense? So now I come down here, and I see this 2; this is a min node. So the value here is going to be the minimum between 2 and whatever is down this tree, right? So it is going to be at least, uh, I'm very bad with the least and the most. [LAUGHTER] It's going to be 2 or lower; let me just use that. So what we are getting here is going to be 2 or lower, right? So I'm either going to get 2 or 1 or 0, and so on. And that's the value that's going to be pushed up here, right?
So that's the value that's going to come up here: it's going to be a value that is 2 or lower. So if I'm maximizing between 3 and something that is 2 or lower, then 3 is enough. And I can figure that out based on these intervals and not look at this side of the tree. Once I've seen these two, I already know there is no non-trivial overlap between a value that's greater than or equal to 3 and a value that's less than or equal to 2, so I can just not worry about the stuff down there. Okay. All right. So one other quick implementation point: we talked about these a values and b values. You can keep track of only one value each, and those would be the alpha value and the beta value. Alpha, let me illustrate it here, let me get it right: alpha of s is the max of a_s' for all these s primes that come before s. Yeah. So what this basically says is, remember when we saw 3 we said, "Well, that's already included, we already knew that"? That's the same idea. So alpha of s is just going to be one value. In this case, it's just going to be 6, because when I see 3, I don't really care about that 3, right? I already know I'm greater than or equal to 6, and knowing that I'm greater than or equal to 3 is not adding anything. So we keep track of one value, alpha of s, and in this case alpha of s is just equal to 6. And then a similar thing for beta: we're going to keep track of beta of s, and beta of s is just the minimum of the b_s' values. And what I'm writing here is just the ordering of the nodes that you have seen. So beta of s is 5. And then you're looking at the intervals, alpha of s and above, and beta of s and below, and if those intervals don't have a non-trivial intersection, then you can prune part of the tree. Okay. So this is more of an implementation thing: instead of keeping track of all these a_s's and b_s's, just keep one number each, one alpha and one beta. Okay. All right. Okay. So let's look at one other example. All right, I'm going to just do this example real quick. Okay. So you're going to start from some top node, and we're going to go to this node; this is a min node between 9 and 7, right? So it's a min node, and I'm going to get this one, 7. I'm going to realize that this max node is going to be something that's at least 7, something greater than or equal to 7. So my alpha of s is going to be 7 right now. I know whatever value I'm going to get is going to be 7 or higher. Whatever value this start node is going to get, it's got to be 7 or higher, okay? So now I come down here, and I am at a min node. I see a 6 here, right? I go here; it's a min node, so whatever we get here is going to be less than or equal to 6, so it's going to be 6 or something lower. That tells me my beta of s is equal to 6: whatever I am getting at that min node is going to be 6 or lower. That doesn't have any intersection with my alpha of s, so I can just not do anything about this branch. I don't need to go over all these other things; I can just ignore this whole branch. Okay. All right. So now I go back up, and I go down here, and I'm at the min node. Remember, the way we were computing these beta values was based on the nodes that we have seen previously. So I have a new beta now, because I'm done with that branch, right. So I need to get here.
Here I have a min between, what is it, 8 and 3. Okay, so maybe let me just write 8. I see my 8 here; it's a min node, so it's going to be less than or equal to 8. So my new beta value is going to be 8. My alpha is still 7, because that's for my top node. So it's 8 or lower. We do have an overlapping interval, 7 to 8, so everything is good, and I actually need to go and see what this value is. This value is 3, so I get 3 here; it's exactly equal to 3. That updates my beta from 8 to 3. We have already explored that part of the tree anyway, but with 3 you don't have an overlapping interval anymore. If there were a bunch of things below this 3, I wouldn't need to explore them, but we don't really have that here. And then we just find that our optimal value is 7, so we just return 7, okay? And we didn't explore this giant middle part of the tree. Okay. One more slide, and then two more quick ideas. Okay. So [LAUGHTER] all right. So the order of things actually matters; that's the one remaining thing I want to mention about this idea of pruning. When we look at this example, remember we didn't explore anything about the 10, because we already knew that this value needs to be greater than or equal to 3. These are my buckets, right? If I swap the buckets, if I move the 2-to-10 bucket to this side and the 3-to-5 bucket to the other side, I wouldn't be able to do that. I would actually need to explore the whole tree, because my alphas and betas wouldn't have the same properties. So the order in which you are putting things on the tree actually matters, and you should care about that. In the worst-case scenario our ordering is terrible, so we need to actually go over the full tree; that's order of b to the 2d. That's the worst-case scenario. There is a best ordering where you effectively only pay for half the depth in the exponent: if you have a tree where you can explore up to depth 10, then with the best ordering you can actually explore up to depth 20, which is a huge improvement. So the best ordering is going to be order of b to the d. And then random ordering turns out to be pretty okay too; random ordering would be order of b to the 2 times three-fourths times d. So even if you had a random ordering, it would be better than the worst-case scenario. And then, well, how do you figure out what is a good ordering? Well, we can use the evaluation function. Remember, you are computing the evaluation function anyway, and what you can do is order the successors: for max nodes, you can order the successors by decreasing evaluation function, and for min nodes, you can order the successors by increasing evaluation function. That allows you to prune as many things as possible; a short code sketch of the full procedure follows below. All right. So with that, I'll see you guys next lecture, talking about TD learning.
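To make the pruning procedure concrete, here is a minimal sketch in Python of depth-limited minimax with alpha-beta pruning and evaluation-based move ordering. The game interface it assumes (is_end, utility, actions, successor, evaluate) is an illustrative one, not the course's codebase; think of alpha and beta as the lower bound a_s and upper bound b_s collapsed into single values, as described above.

    import math

    def alpha_beta(game, state, depth, alpha=-math.inf, beta=math.inf, is_max=True):
        # alpha: best (highest) value the max player can guarantee so far.
        # beta:  best (lowest) value the min player can guarantee so far.
        # Prune when the intervals [alpha, inf) and (-inf, beta] stop
        # overlapping, i.e. when alpha >= beta.
        if game.is_end(state):
            return game.utility(state)
        if depth == 0:
            return game.evaluate(state)  # heuristic evaluation at the depth limit

        # Ordering successors well lets us prune more: decreasing evaluation
        # for max nodes, increasing evaluation for min nodes.
        actions = sorted(game.actions(state),
                         key=lambda a: game.evaluate(game.successor(state, a)),
                         reverse=is_max)

        if is_max:
            value = -math.inf
            for a in actions:
                value = max(value, alpha_beta(game, game.successor(state, a),
                                              depth - 1, alpha, beta, is_max=False))
                alpha = max(alpha, value)
                if alpha >= beta:   # no non-trivial overlap left
                    break           # prune the remaining subtrees
            return value
        else:
            value = math.inf
            for a in actions:
                value = min(value, alpha_beta(game, game.successor(state, a),
                                              depth - 1, alpha, beta, is_max=True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

With a good evaluation function driving the ordering, the same search budget reaches roughly twice the depth, which is where the order-b-to-the-d best case comes from.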
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Constraint_Satisfaction_Problems_CSPs_2_Definitions_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to formally define constraint satisfaction problems and the more general notion of a factor graph. So let's begin with an example, a voting example. So imagine there are three people, person one, person two, and person three, and each one is going to cast a vote, either blue or red, and we know something about these people. We know that person one is definitely going to vote blue, and we know that person three is leaning red. We also know that person one and person two are really close friends, so they must agree on their vote, whereas person two and person three are mere acquaintances, and their votes only tend to agree. So the question is, how are all these people going to influence each other and, ultimately, cast their votes? We can model this problem using a factor graph. We're going to define a set of variables, x1 for person one, x2 for person two, x3 for person three, and we're going to define a set of factors that capture each of these four constraints or preferences. So let's begin with f1. f1 is going to capture the fact that person one is definitely blue. I'm going to write f1 as a table, specifying a number for each value of x1. So f1 of x1 is going to be 0 if x1 is red, and it's going to be 1 if x1 is B, or blue. And this captures the fact that a 0 means no way this is going to happen, and 1 means it's OK. So mathematically, I can write this factor, f1, as an indicator function of x1 equals B. Now, I'm going to write these indicator functions without the 1 in front; usually you would write a 1 there, but I'm going to drop it for notational simplicity. So let's look at leaning red. This factor is going to be f4, and it also corresponds to a table where, for every possible value of x3, I'm going to specify a value. So R is going to be 2, and B is going to be a 1. And mathematically, f4 is equal to the indicator function of x3 equals R, plus a smoothing constant of 1. So remember, this indicator is going to return 1 or 0 depending on whether its condition is true or false, and I'm adding 1, so I offset that to a 2 or a 1. So intuitively, you can think about this as person three preferring R to B maybe twice as much. So now, let's look at these other factors. f2 is going to represent the fact that person one and person two have to agree. So again, I'm going to look at all the possible assignments to the variables in the scope of f2, these two variables x1 and x2, and for every value, I'm going to assign a particular non-negative number. So here, R, R: I'm going to say that's 1, it's OK, they agree. If they don't agree, I'm going to return 0, because I really don't like that. And if they both vote B, they agree again, so that's a 1. So more succinctly, I can write this factor f2 as the indicator of x1 equals x2. And now, finally, for f3. f3 is going to capture whether x2 and x3 tend to agree, and this table is going to look like this for x2 and x3: if they're both R, I'm going to return 3; if they're different, I'm going to return 2; and if they're both B, then I'm going to return 3. So mathematically, this factor is going to be the indicator function of whether x2 equals x3, plus a smoothing constant of 2, which, instead of 1, 0, 0, 1, gives me 3, 2, 2, 3. So there's a kind of a mild preference for these two people to agree compared to not agreeing.
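Before moving on, here is one way you might encode these four factors in code. Representing each factor as a plain Python function over an assignment dictionary is an illustrative choice, not the demo's actual implementation.

    # Each variable x1, x2, x3 takes values in {'R', 'B'}.
    # A factor maps an assignment dict like {'x1': 'B', 'x2': 'B', 'x3': 'R'}
    # to a non-negative number, following the tables above.

    def f1(x):  # person 1 is definitely blue: [x1 = B]
        return 1 if x['x1'] == 'B' else 0

    def f2(x):  # persons 1 and 2 must agree: [x1 = x2]
        return 1 if x['x1'] == x['x2'] else 0

    def f3(x):  # persons 2 and 3 tend to agree: [x2 = x3] + 2
        return (1 if x['x2'] == x['x3'] else 0) + 2

    def f4(x):  # person 3 leans red: [x3 = R] + 1
        return (1 if x['x3'] == 'R' else 0) + 1

    factors = [f1, f2, f3, f4]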
So now, if you click on this demo in the slides here, this is going to take you to a little JavaScript application where you can actually write your own factor graph, and we're going to come back to this later. So this is a first example of a factor graph, capturing this simple voting situation. Now, let's look at a different example that we looked at in the overview module: map coloring of Australia. Remember, Australia has these seven beautiful provinces, and each one needs to be assigned a color. So each of these provinces is going to be represented as a variable. And here, I'm going to give each area a variable name: WA for Western Australia, NT for Northern Territory, and so on, and I'm going to use big X usually to denote the set of all variables. Each variable is also going to take on a set of values, which in this case is going to be red, green, or blue. And now, I'm going to define the factors of this factor graph. So for every two neighboring provinces, I want to say that they can't have the same color. For example, f1 is going to say WA and NT must be different; that corresponds to this factor over here. f2 says NT and Q must be different, and that's going to correspond to this factor here, and so on and so forth. So now, we're ready to formally define a factor graph. A factor graph is going to consist of a set of variables, x1 through xn in the general case. Remember, big X is going to denote the set of all variables, where each variable xi takes on values in some set of possible values known as the domain of variable i. And a factor graph also consists of a set of factors, generally denoted f1 through fm. Each fj is going to be a function that takes as input an assignment to the variables and returns a non-negative number. It's really important that this function return a non-negative number rather than a negative number, because later, we'll see that we're going to multiply them together. So that's the definition of a factor graph. Now a bit of terminology. I'm going to define the scope of a factor as the set of variables it depends on. So in the map coloring example, the scope of f1 is going to be simply WA and NT. This corresponds, visually, to the set of variables that this factor is touching. The arity of a factor is the number of variables in the scope. So in this case, you just count how many variables are here; the answer is two. Some shorthand notation: unary factors are ones that have arity 1, binary factors are ones that have arity 2, and constraints are factors that return 0 or 1. So notice that a factor can return any non-negative number, but a special case is when it returns 0 or 1, which essentially means yes or no. And in this context, f1 is a binary constraint. One thing to remember about factors is that each factor usually depends only on a subset of the variables, not all the variables, and this is going to be important when we talk about algorithmic efficiency. So now that we've fully defined what a factor graph is, I'm going to talk about the notion of assignment weight. Let's go back to the voting example. In the voting example, we had four factors, corresponding to whether person one and person three were voting a certain way, and whether person one and person two, and person two and person three, agreed or not. So an assignment is going to be just an assignment of values to each of the variables.
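As a sketch, the formal definition translates almost directly into a small data structure. The class below and its field names are hypothetical, and the Australia adjacency list is my own reading of the map, not something spelled out in the module.

    class FactorGraph:
        # A factor graph: variables with domains, plus non-negative factors.
        # Each factor is stored with its scope (the variables it depends on),
        # so arity is just len(scope).
        def __init__(self):
            self.domains = {}   # variable name -> list of possible values
            self.factors = []   # list of (scope, function) pairs

        def add_variable(self, name, domain):
            self.domains[name] = list(domain)

        def add_factor(self, scope, fn):
            # fn takes an assignment dict and returns a non-negative number
            self.factors.append((tuple(scope), fn))

    # Map coloring of Australia as a factor graph of binary constraints:
    australia = FactorGraph()
    for v in ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']:
        australia.add_variable(v, ['red', 'green', 'blue'])
    for a, b in [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
                 ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]:
        # Default arguments pin down a and b for each constraint.
        australia.add_factor((a, b), lambda x, a=a, b=b: 1 if x[a] != x[b] else 0)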
In this case, there are three variables, x1, x2, x3, and each assignment is going to be associated with a weight. So here's how the weight is going to be calculated. I'm going to go through each of these factors, plug in this assignment, and read out a particular number. So let's take this factor, f1. What is x1? It's R, so I'm going to get a 0. What about this factor? What are x1 and x2? They're R, R, so I'm going to return a 1; I'm going to copy that down here. What about this factor? x2 and x3 are R, R, so I'm going to get a 3. And finally, the fourth factor, f4: what is x3? It's R, so I'm going to read out a 2. All of the outputs of the factors are numbers, and I'm going to multiply all of them together. I'm going to get a weight, and that weight in this case is 0. So now, you can go through all of the other possible assignments of values to all the variables. In this case, there are eight possible assignments, and each of them is going to have a particular weight. So now, let's look at the demo. If you click step here, that's going to run this inference algorithm and produce a weight for every possible assignment that has non-zero weight. So in this case, we verified that there are two possible assignments that have non-zero weight, the assignments BBR and BBB. OK, so now, let's switch over again to the map-coloring example, just to see how weights are computed here. So here is a possible assignment of colors to provinces. And here, notationally, I'm going to make a slight change. It's going to be convenient sometimes to represent assignments in this kind of dictionary format, where the variables have names. So here, WA is assigned red, NT is assigned green, and so on and so forth. Literally, you can think about this as a Python dictionary if you like. What is the weight of this assignment? Well, in this particular case, all neighbors have different colors, and remember, each factor is just going to give a thumbs up, returning 1, if the two adjacent provinces have different colors. So I'm just going to go 1 times 1 times 1, and that's just 1. Now, consider an alternative assignment, where I simply replace NT with red here. So NT becomes red, and now we can see that the weight of this altered assignment is going to be 0, because these two factors are going to evaluate to 0, these two here. And one thing you might realize very quickly here is that all it takes is one factor to veto the entire assignment. Because we're multiplying, if one of the factors returns 0, then the product of all the factors is also going to be 0. So here is a general definition of assignment weight. An assignment little x, which is x1 through xn, has a weight, and this weight is a function that takes an assignment and returns the product over all the factors of fj applied to the assignment. And here, even though each factor only depends on a subset of variables, I'm simplifying notation by just passing in the entire assignment. In practice, I would pass in only the variables that are in the scope of fj. So a bit of terminology. An assignment is consistent if its weight is greater than 0. A weight can't be negative, because all the factors return non-negative numbers. So if a weight is 0, that means the assignment is inconsistent. And the objective of a constraint-satisfaction problem, finally getting to what the point of all this is, is to find the maximum weight assignment. Mathematically, it's written arg max over all possible assignments x of the weight of x.
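Putting the pieces together, a brute-force sketch of this objective, enumerate every assignment, multiply the factor values, and keep the best, could look like the following. It reuses the plain "factors" list from the voting sketch earlier, and it is exponential in the number of variables, so it is purely illustrative, not how real solvers work.

    from itertools import product

    def weight(assignment, factors):
        # Weight(x) = product of all factor values at x; one zero vetoes everything.
        w = 1
        for f in factors:
            w *= f(assignment)
            if w == 0:
                return 0
        return w

    def max_weight_assignment(domains, factors):
        # Exhaustively enumerate assignments and return (argmax, max weight).
        names = list(domains)
        best, best_w = None, 0
        for values in product(*(domains[n] for n in names)):
            x = dict(zip(names, values))
            w = weight(x, factors)
            if w > best_w:
                best, best_w = x, w
        return best, best_w

    domains = {'x1': ['R', 'B'], 'x2': ['R', 'B'], 'x3': ['R', 'B']}
    print(max_weight_assignment(domains, factors))
    # -> ({'x1': 'B', 'x2': 'B', 'x3': 'R'}, 4): BBR has weight 4, BBB has
    #    weight 3, and every other assignment is vetoed to 0 by some factor.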
And a constraint-satisfaction problem is said to be satisfiable if the weight of a maximum weight assignment is greater than zero. Another way to say the same thing is that there exists some consistent assignment. And note that the weights here, in the context of factor graphs and constraint-satisfaction problems, are not the same as the weights we study in machine learning. Those weights can be negative as well as positive, but these weights in constraint-satisfaction problems or factor graphs have to be non-negative. One other small comment is that here we are actually defining a slight generalization of constraint-satisfaction problems, where factors can have not just 0 or 1 values, but any non-negative value. So constraint satisfaction actually is a general umbrella term that captures several important cases. The first is Boolean satisfiability problems, otherwise known as SAT. In these cases, the variables are Boolean valued, and the factors are logical formulas such as x1 or not x2 or x5. Satisfiability problems are NP-complete problems, which means that in the worst case they're really, really hard, and we don't have efficient algorithms for solving them. But in practice, it turns out that there's been an extraordinary amount of progress in SAT solving, and we can actually routinely solve SAT problems with many, many more variables than we might be able to predict by theory alone. So there's a joke that says, theoreticians reduce a problem to SAT if they want to show that it's hard to solve, and practitioners reduce a problem to SAT if they want to solve the problem. Another class of problems that is important is linear programming. In linear programs, the variables are real-valued numbers, and the factors are linear inequalities such as x2 plus 3x5 less than or equal to 1. And despite the fact that the variables can take on an infinite number of values, linear programs have a special structure that makes them especially efficient to solve, and there's been a lot of work in solving linear programs efficiently. Integer linear programs are the same as linear programs, except the variables are integer valued, and the fact that they're integer valued makes these incredibly hard again, just like satisfiability problems. Mixed integer linear programs are problems where the variables are both reals and integers, and these problems are also hard to solve. So in summary, we formally defined the notion of a factor graph, which includes variables and factors. Variables specify unknown quantities that we need to ascertain, and factors specify preferences or constraints over partial assignments. And one thing that's special about factor graphs is that you're specifying constraints and preferences in a local way. So suppose you're modeling, and you think of a particular preference that you have: you can simply write down a factor in terms of the variables that matter and throw that factor into the constraint-satisfaction problem. And now, the hard work comes in actually processing all these factors. A key definition is that the weight of an assignment is the product of all the factors, and this is where all the magic happens. This is where you have to think globally about all the factors together, and the point of a constraint-satisfaction problem, again, is to find the maximum weight assignment, which is, again, something that requires global reasoning over all the factors.
And so the thing to remember here is: specify locally when you're modeling, and optimize globally, which is what the inference algorithm will do. That's the end of this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
AI_and_Law_I_MarianoFlorentino_Cuéllar_President_of_the_Carnegie_Endowment_for_International_Peace.txt
So today we have the pleasure of hearing from Justice Mariano-Florentino Cuellar. Tino is a professor in the law school at Stanford. He's also a justice on the California Supreme Court. He was also an official in the Clinton and Obama administrations, which is incredibly cool. Tino did his undergrad at Harvard and his law degree at Yale. He also has a PhD in political science from Stanford, and he has done a lot of work around cyber law and AI and legislation, as well as other work around international affairs and public health. He also teaches a class on regulating AI, which is a very cool class. So if you're interested in these areas and these topics, I absolutely recommend taking that class, especially after taking 221; I think that would be a really good class to take. We've been interacting with Tino through the AI Safety Center and the Human-Centered AI Institute over the past couple of years, and we have a project together right now on adaptive agents. So it's really great to work with Tino, and it's really great to hear from him. For me, he's on the list of the top five people I would want to have a conversation with. This list includes roboticists; it is a very small list. So it's really great today to hear from Tino, and I'm excited to hear him talk, so welcome. Thank you very much, Professor Sadigh, Dorsa. And thank you, Percy and Peng and Woody. It's really an honor to be here and to share some time with you. I have to tell you that that last comment you made, Dorsa, is a lot of pressure. I don't want to let the class down and get demoted off your top five list. It's also been really great to get to know you, and I learn so much from all of our interactions. I appreciate that you've come to speak at my class, so it's only fair, and it's really an honor to be here. I want to take 35 to 40 minutes, which I know in the era of Zoom is a long time, so I'm going to hope that those of you who have been good enough to tune in, knowing that doing this live is optional, are going to find this worthwhile. I want us to have a lot of time for discussion, but let me just give you a quick overview of what I mostly want to do. I want to explore with you why your interest in artificial intelligence, which is what led you to take this class, is actually incredibly relevant to policy, to politics, and to law. And along the way you're going to see it's also relevant to international affairs and geopolitics. In the course of this talk, I want to share with you some reasons why you should be interested in law and policy, and why you should expect your technical knowledge to be relevant to a lot of really important questions the world is facing. I also want to give you a sense of how I became really interested in the subject along the way. And I'm going to try to share my slides now so you have a better sense of what we're talking about. So let me start by noting that right now you're at an amazing moment in your life. You're learning about artificial intelligence, and you have this extraordinary university, at least virtually, around you. Eventually you'll be back here physically, I hope and expect. And you can look at this talk and think about it from the perspective of a technical expert, which is what you're becoming by taking this class. But before we get to that, I want you to imagine yourself not as a technical expert, but as just a citizen.
Somebody who has to think about: how does this technology affect daily life? Who's being affected by it? Where are the inequities? What are the opportunities for understanding it better? And then near the end of the talk, I want you to imagine yourself as a policymaker, somebody who has to make decisions about how to allocate scarce resources, where government budgets should go, and what people should do in the legislature and the courts to resolve the technical questions and policy questions and legal questions that arise. And what you're going to find is that your technical knowledge is extremely relevant to a lot of these crucial issues. But at the same time, you need to round out that knowledge by understanding a little bit about the legal system and about organizations. So the bottom line really is that I'm going to share with you a lot of different messages, but the core message is that this technology that you are learning to master has not only benefits but risks. And in the course of implementing that technology, society is going to be shaping how that technology is used through the legal system and also through organizations. Through the associations, the institutions, the groups, but especially the firms and the agencies that so many of us are going to work in-- law firms, government agencies, big corporations, non-profit organizations. Now, I know that it is difficult to hang on to your attention, but I'm going to try, because there are some things that I absolutely want to have you remember. If you remember one thing about my whole presentation, it's that the impact of artificial intelligence on the world, on your daily life, is a function of law and organizations. It's not anything that actually acts directly by itself; it has to be mediated by some organization, by what Stanford does, or what the Republican Party does, or what the United Nations does. But it is also mediated by legal rules. And along the way you're going to find that we might sometimes talk as though we're discussing the possibility of developing legal rules that will apply to AI. Well, I'm here to suggest to you that many of those rules already exist; the question is just how to translate them to this context. If I can convince you to remember two things and not just one, I'd like you to remember the point above, but then also some crucial terminology, and that is that the techniques of AI, like machine learning, are different from AI systems or applications. The systems are the mechanisms, obviously, that instantiate the techniques, that are attached to a user interface-- I'll say more about this later-- and that actually spit out information, recommendations, and insights that people will then act on. And if, miraculously enough, I can get you to remember three things, and this is the last thing I really want you to remember for sure, it's the previous two points plus the point that law is kind of merging with the design and policy challenges that are implicit in AI. So I'm going to end up just by telling you that lawyers are becoming more and more a little bit like people like you, trying to wrap their minds around machine learning, supervised learning, unsupervised learning, reinforcement learning. And in the same way, you and your community, the people who are the technical experts, are increasingly pushing to ask questions like: what is the right way to use this technology? What do we want it to do and not do?
So with that as background, let me acknowledge more explicitly the benefits side of the AI technology you're learning to master, because if we don't, then we're going to get a pretty distorted picture. If you were physically on campus right now and walking around Stanford, you could go to 10 different places on campus where really cool stuff is happening that is relevant to real problems people are facing around the world, and where AI techniques are being used to try to make the world a little bit of a better place. So let's take, for example, the population of the world that is facing serious nutritional stress, meaning people who are at serious risk of starving. A generation ago that population was much bigger than it is now, but sadly that population is still stubbornly large. 700 million people or so face serious food insecurity. These are generally the people living on $1 a day or less. You see some of the kids here. Overwhelmingly the population is concentrated in Africa and in India and in Asia, but there are also some people in North America and even in Europe who face food insecurity. There are different ways to allocate resources effectively, to make sure food doesn't go to waste, to make supply chains more efficient, and to pinpoint where there are problems in real time. And what's more, a lot of this population not only faces problems around food, but also faces problems around education. The distribution of access to high quality education is incredibly unequal, as we know. We are all a part of that system; we take part in it. So when I think about the future of both nutrition and education in a world that is more equitable and more benign, I cannot imagine that future without some use of artificial intelligence techniques to democratize education, to make the delivery of food more efficient, and to pinpoint problems in real time. In somewhat similar fashion, this quirky set of four images you see here is an example of the work Dan Ho, my colleague at the law school, is doing with some colleagues to use satellite imagery to pinpoint where sources of pollution are in much more accurate fashion than anything the government currently has, really. And what that would allow us to do is more effectively cross-reference the self-reported data that comes from firms claiming to be complying with environmental law. It takes some fairly sophisticated, but also in some ways intuitive, machine learning techniques to make use of this visual data. And then you've got a picture of a courtroom. This is not the kind of courtroom where I sit, because it's mostly a trial courtroom; this is where trials are actually heard in Superior Court. The reality is that in California, if we had more time and if we were in person, I would ask you to guess the number of cases we hear in California courts every year. And generally speaking, when I ask that question, people say, 20,000. And I give a shocked response: that's too low. People say, OK, 200,000. And my eyebrows still go up. And finally we get to 800,000. Well, the actual answer is about 6 million cases a year. So it will not shock you to hear that in probably 40% to 50% of those cases, the litigants are self-represented. They are people like you; they don't have a lawyer; they are trying their best to navigate an incredibly complicated system.
I would love to imagine a world where the distribution of legal knowledge is not so restricted just to people who have a Stanford Law degree, or a similarly great credential, or can pay a lot of money for a fancy lawyer, but where software and AI systems that you might help design can help people navigate a very intricate legal system. But at the very far right, you see a picture of an African-American man under the words, "criminal justice." And there's a question mark there. And why I'm doing that is to highlight that this whole world we can imagine also has its risks and its downsides. And to make this more concrete, I want to focus on one person in particular. The gentleman you see here, Robert Williams, is one of many people whose lives are being affected by the fact that artificial intelligence systems are not just theoretical anymore in terms of their practical application. They're being used in all kinds of settings, including in the criminal justice system. So here he was one day in a suburb of Detroit when he gets arrested by police. He's told that he's being arrested because he's suspected of committing larceny, which is a fancy word for stealing, robbing a store in Detroit. And it turns out that because the police were using an image recognition system, doing facial recognition against a data corpus of 49 million images, and the system indicated that the image from the security camera in Detroit matched Robert Williams's picture, he was arrested. Now you might ask, did the police have any other reason to suspect him? Were there outstanding arrest warrants for him? Had he committed similar crimes in the past? And the answer is no, no, no, OK? So once he was arrested, the police admitted that the photo was a little blurry. They admitted that they didn't have any other information about him. And after a little bit more discussion, they ultimately agreed with Mr. Williams that the picture really just didn't look like him. Meaning the intuitive human response was, no, that doesn't seem to be you, but the algorithm says it's you. So what do we do? The answer is, he was detained for 30 hours. Now, I'm not suggesting that there aren't worse things than being detained for no reason for 30 hours, but I'll tell you: I grew up on the US-Mexico border, and it was a fact of life in my family that sometimes you need to cross over to the American side to go shopping or do something else like that. And being detained even 45 minutes, an hour, an hour and 15 minutes, all those things happened to me. It's not very pleasant. So you can imagine what it's like, or you can begin to imagine if you try, what it's like to be detained more than 30 hours and then be told that it's because a computer made a mistake. "The computer must have gotten it wrong" was the exact thing that he was told. Everything that I want to share with you from here on out you could, in a way, sum up by asking this narrow question: why did this happen to Mr. Williams, and what does that mean? What are the remedies? Do we have a legal system in society where we can disentangle the mistaken uses from the correct ones, and manage the risks appropriately? Can we take seriously the fact that humans also make mistakes when they're looking for faces? I'll say more about that in a moment. But I hope I can press you to think about the situation with Mr. Williams in a little bit of a broader context.
Because we could talk about criminal justice, or we could talk about testing. As you may know, the International Baccalaureate exams this last year, because of COVID, were not actually given; instead, students were given a predicted score based on the portfolio of work they'd previously submitted. We can talk about testing in remote settings, where your image is being analyzed by a camera that's trying to detect whether you're cheating. We can talk about insurance. We can talk about 36 other domains where this stuff is really a fact of life, and the broader question really is: what does the incident involving Robert Williams tell us about law, about artificial intelligence, and about how society and our legal system are changing in response to this technology? So that is the tip of a very, very big iceberg. Now let me acknowledge again the point about how there's a lot about this subject that goes deeper. It doesn't just start with the history of artificial intelligence; it actually starts with the history of really modern society. Now on the screen you see the picture of a very intense-looking man named Max Weber. For anybody who's ever taken a class on social theory or sociology, his name might be familiar. There's a lot I could say about him, but here's the main point I want to make. Writing in the very early 20th century, Max Weber was looking around society, observing and taking note. Society didn't work the same way then as it had 100 or 200 years before. Many, many people worked inside organizations with a hierarchy and formal systems of authority; organizations had a director, an assistant director, officials, clerks. And all of this, observed Max Weber, was a means by which the modern nation-state processed information, took it in, and decided rationally what to do with it. Sometimes developing the mechanisms to act as if by reflex, by recognizing the kind of problem and quickly delivering a response. Sometimes by elevating it to people who could sit in an office, talk in a conference room, and come up with a solution, thinking presumably logically. And what Max Weber noted, much to the influence of people who came after him, including yours truly, is that these bureaucracies aspired to work like a machine, right? They were trying to automate the process of decision making in some way, to the point that it could be predictable and rational. And Weber pointed out that that was all well and good, but there were going to be some problems along the way. In some ways, I'm here to tell you that many of the problems Weber highlighted concern how we have a love-hate relationship with these bureaucracies. On the one hand we think that they're inefficient, rule-bound, not creative, frustrating, and slow; but at the same time we can't live without them. That will end up illuminating in some ways some of the really interesting choices we have about how we use artificial intelligence: maybe in some ways to replace conventional bureaucracy, but, I would argue, in other ways to replicate and in some ways channel some of the same tragic conflicts and tensions. Now, channeling Max Weber to some degree and also reflecting my own interest in AI, in 2016 I wrote a piece that had the following punch line, basically: sometimes we're going to deal with the concerns we have about the role of artificial intelligence by suggesting that really all we're building are recommendation engines.
Not really that different from the way Netflix works: you may also like to watch this. Judge, you may think that this person deserves a harsher sentence, but it's really up to you, judge. You don't have to be the one to decide; or rather, you have to be the one to decide. We don't have to be the ones to decide, we, the ones who designed the AI system. We're just giving you a recommendation. We're using these techniques to give you a sense of the likelihood that this person is going to reoffend. And the point I was trying to make in 2016, which seems now like a long time ago, is that for the computer program, and particularly for the AI system that has a sophisticated user interface, with capacities to speak to you in natural language or to serve up information in a way that's easy for you to assimilate, it's really difficult to police that line between "they're just supporting your decision" and "they're actually making the decision." And here's one place where I can highlight my point from the very beginning about how law merges with organizations, which merge with AI, if you really want to understand the effect. So if you want to know whether an AI system is actually serving as a decision support tool rather than actually making the decision, you're going to want to know the answer to questions like: well, are the designers of that system liable if it turns out to make a recommendation that's really, really bad, that results in people getting injured? Or conversely, is the organization run in a way that the decision maker using the AI system is being audited and checked to see if all of her decisions are just rubber stamping what the software does? And if that's the case, well, what's the point of having the human decision maker in the loop anyway, right? So I'm giving you the sense that we're building up to this point of all these conflicts and questions, and meanwhile, people like Robert Williams are getting arrested. But now let me return to this point about how humans often are not great decision makers either. We can think about where it is that human cognition fails in terms of perception. We can think about how humans add up information and come up with a thought or a decision. We can think about what motivates humans. Even if I have every reason in the world, based on my job, to be fair when I'm working in a police station and deciding who to arrest, if I have an improper motivation, if I want to impress somebody who happens to be on a ride-along with me that day, or if I really dislike the person who works in this particular area of town and want to arrest them for a nefarious reason, that can mean that human decision making gets all messed up. And even the legal arrangements we have to police human behavior can fall short. My next slide is probably the messiest slide of the whole presentation, so you don't have to memorize it or even read it all; I can make these available to you later. But here's the punch line. The punch line is that the mere argument that humans do not perform as well as AI systems in a discrete test like facial recognition does not really answer the question of how you want AI systems to be used by organizations to make decisions. Because the devil is really in the details. Let me just pick two points here to highlight. Let's talk about perception.
So the field of the neurophysiology of how vision works is really, really complicated and fascinating. And it's not an accident, I would argue, that some of the coolest things that we have been learning about how to develop better image recognition systems in the AI space are influenced by what we learn from neuroscience. But the fact that that's still a bit of a mystery highlights that we actually only understand a little bit about how humans make visual processing decisions. For example, we know that it takes about 100 milliseconds for humans to perceive whether a picture reflects a person of one gender or another, generally, for humans to pick up emotions, and for humans to recognize familiar faces. But eyewitness identification involves unfamiliar faces: do you remember whether this image is showing you the person that you think you saw two weeks ago, when the glass was shattered and somebody came into your apartment at night and grabbed your beautiful collection of baseball cards and left? That is a lot less exact. And as one of my colleagues explained in a dissenting opinion in a case called People v. Reed, we would be grossly inaccurate if we suggested that that is a system of identification that works really, really well. But then of course, compare that to the way AI systems work. On the one hand, AI systems might be much more accurate than humans in the lab at picking out the similarity between two images presented at random, images that are not known beforehand the way humans might know familiar faces. But on the other hand, the ability of those systems to operate effectively outside of the lab, and particularly to detect emotions, for example, is not so great. These systems have, in a number of applications and instantiations, real differences in how effectively they work for pictures of people who identify as white rather than for Blacks or Asians, and of course, you have all kinds of other failure modes like hacking. And then of course, we could talk about legal arrangements. And here I would just note that we humans have hundreds of years of experience dealing with human mistakes; that's really what the legal system is designed to do. We are only learning now how to adapt our legal rules and standards to deal with the mistakes that machines make. We're not starting from scratch, but it would be a mistake to assume that we have figured out exactly how to do that. So now I want to make the point that when we are dealing with problems posed by AI in the legal system, we are not starting from scratch. And the best way I can make that point is to highlight, for those of you who are vicariously interested in asking yourself what it would be like to go to law school, what that would feel like, and you're thinking, well, maybe that would not be terrible, it might be kind of fun, a flavor of some of the subjects that people learn about in law school. And it will not take a rocket scientist, it will not take a Stanford computer science professor, to see that these subjects we cover in law school are literally touching right up against AI already, and that will continue to be the case. So there's an area of law called agency law, which is where we figure out things like: if Professor Sadigh says to a TA, I want you to go across campus, and I want you to pick up this particular computer, and I want you to carry it to the other side of campus.
And along the way the person picks up the computer but then gets distracted, drops the computer and kills a bird, and it turns out that that bird is the prize-winning bird of somebody's bird collection or whatever. Does Professor Sadigh end up being responsible? Well, agency law resolves that kind of question. When are you responsible for the actions of others in your organization-- of your agent? Now ordinarily, agency law applies to the actions that you begin to put in motion that some other human being engages in. But you can totally see how this branch of law is beginning to grapple with the question of when you are responsible for the actions that you set in motion because you design an agent to do something like to sort employee applicants, and then the agent does that, the artificial software-based agent. OK. So then you have my core field of administrative law and legislation. This is the law of what counts as sufficient justification for any action of government. If the president signs an executive order saying, I don't want the census to keep on going until December. I want it to stop in October. When does the president have the power to do that? How does that power get into some conflict potentially with the power of Congress to pass a law saying how long the census is supposed to continue? You get the idea. What if the government says, well, you're going to have to move out of this home because we want to build a road through here. What right do you have to challenge that kind of action? So obviously, the more and more that government decision making involves reliance on machines, the more and more that this branch of law is going to have to deal with the question of, what does it mean when the machine is empowered to play a crucial role in that government decision? Does that make it more reliable, less reliable, more fair or less fair, when can we do that, when can we not do that? Last but certainly not least, tort law. Tort law is about who has a duty to whom, what counts as a reasonable decision, and how do we attribute causal responsibility for bad things that happen? Translation, let's say you're back on campus and sadly you get COVID-19, can you blame the University? When and why can you blame the University, why can you not? Or let's suppose-- forget COVID-19 for a moment, let's suppose that you're in a lab and sadly your lab partner decides to try to attack you and you survive, but you're asking, well, wasn't the University responsible for making sure that I wasn't attacked? That's tort law. And you can imagine that as the information that is the fuel of modern AI systems and the sort of fuel for machine learning increasingly flows to systems that are interconnected, questions about what a decision maker does with that information and whether that information makes the decision maker responsible for a different kind of safety protection relative to somebody that could be protected, that all becomes more interesting. OK. So let me give you some context for how to think about these problems by just acknowledging that the history of AI is kind of long. And it does not start with the birth of the internet, it goes back further in history to some of Professor Sadigh's colleagues in the computer science department at Stanford. 
So I could go on and on about this, but my little subtext, in addition to what I want to share about the history of AI, is to quickly give you a sense of how in the world I became super interested in this: beginning a little in college, but then again when I worked in government at the Treasury Department, and even more so when I came back from working for Obama in 2010. So just look at those pictures for a moment. You might recognize some of these faces. I'm sure you recognize at least one, the one with the woman in red, as it were. But if you go back a little further, what you're going to see in the picture under the 1950s is Herbert Simon, a really, really smart man whose parents were refugees from Germany, who spent most of his career at Carnegie Mellon University. And I mean, come on, you have to be pretty smart if you start as a political scientist, become interested in psychology, end up writing about economics and winning the Nobel Prize in economics, and along the way become a major pioneer of AI. That was Herbert Simon for you. He was so brilliant; I recommend to you any book or article ever written by Herbert Simon. Among other things, one of the reasons he won the Nobel Prize in economics is that he developed the notion of bounded rationality, which is at the core of what we now call behavioral economics: the notion that you, as a human, may be best modeled not as somebody who's trying to optimize, but as somebody who's trying to satisfy a certain threshold. And we can certainly use that insight to imagine how to design a software agent and how to do machine learning, which is one reason you can imagine his expertise and brilliance got transferred over into AI. He's most associated in AI with the development of systems to do first-order logic and mathematical-type reasoning, what some refer to as good old-fashioned AI. And I'll just note here that that was really important, but the kind of instinctive, almost automatic decision-making in motion that is now so much at the cutting edge of what we are helping robots and AI systems to do was always treated as the Holy Grail, something elusive and maybe not possible to realize. The recognition piece was missing, even if the cognition piece, at least around how you prove theorems, was possible to instantiate early on. By the 1970s the picture really is different. Here that picture includes Ed Feigenbaum, somebody who is our colleague in computer science and always someone fascinating to talk to, somebody who's been a bit of a mentor to me in trying to learn about AI. And he's very much associated with expert systems, with taking insights not only from the work of Herbert Simon, who was actually Ed Feigenbaum's mentor, but also from psychology, and sociology, and decision theory, to develop systems that could act almost as experts and replicate knowledge in particular domains. And then by the 2000s, the real phenomenon that changes everything, and certainly gives rise to the prominence of the person in the third picture, Sheryl Sandberg, is the rise of the internet. Because of course, all this stuff about AI was happening partly in academic labs and partly in defense departments, but suddenly the ability we have to harvest and centralize billions and billions and billions of pieces of behavioral data from humans, and of course to do it in systems that work faster and have access to more computing power, lets us do some truly amazing things.
And I'll just note here that my interest in this begins in college in '93, when I was trying to understand how human decision-making could be modeled, so very much the Herbert Simon sort of work. But when I was working in Treasury in the late '90s, it wasn't lost on me that there was just so much data that the US government had gathered around financial transactions. And I was interested in privacy, as you might be, but also interested in the idea that if that data were available, how could it be used in a way that was efficient, lawful, and analytically sophisticated to detect really, really problematic uses of the financial system, including to commit corruption, for example, to launder money, and so on. And so I became exposed to some of the techniques that you're learning about in this class right now. When I came back from the Obama administration in 2010, it struck me that so many of the domains in which I was working, particularly around public health and criminal justice, were already being affected by early examples and applications of this stuff, so I became really interested in trying to understand how this stuff would affect every aspect of decision-making in law and in political science, and I tried to learn more about what you're learning about right now. So here's where I want to highlight where my own thinking went after I returned from the White House in 2010. It struck me that some of the most interesting work happening in AI, in universities but also in the private sector, was about pushing the boundaries of analytical techniques to discern patterns-- unsupervised learning, reinforcement learning, and so on. And the breakthroughs were really extraordinary, and they continue to be. But it was also striking to me that these techniques in their raw form were not necessarily designed to influence or help non-experts; they were not necessarily designed to solve real-world problems. So if instead you're looking at how AI techniques get used, like they were used in the arrest of Robert Williams, you're not dealing with AI techniques by themselves, you're dealing with AI systems. Which my co-author and I defined, using probably a little too much mumbo jumbo, as a sociotechnical embodiment of policy codified in appropriate computational learning tools. So, a system to gather data and learn from the data, embedded in a specific institutional context, meaning it fits in an organization and is given a certain purview. People who make decisions are told: here's how you can use the tool, here's how you shouldn't use the tool, right? And really what that means is that if you want to understand how AI is being used in the real world, you have to understand relationships of power. Who gets to decide that the system works the way it does, and that somebody can point to that system and claim that it embodies some kind of intelligence? Why does this matter? Well, it matters because now, here, we get to the other side of the coin of the internet, right? We're not in a world where this is mostly happening in the lab right now. We're in a world where really important things in the world are being affected by AI. I cannot give this lecture without pointing to the toothbrush that somebody recently gave me as a gift, which advertises how it uses artificial intelligence to learn how to brush your teeth. And this is the genesis of a concept they called toothbrush maturity.
When technology gets to be so ubiquitous to the point that it intersects even with a toothbrush, then you know that you're dealing with something that has to be understood in its real-world context, and not just in the theoretical stories you can tell about how well it's going to work. Another example of this really though, is that the very large internet companies that are around us in Silicon Valley have a market capitalization that you can't really explain without understanding just how well online advertising must be working, and how much it's leveraging the enormous amounts of data that are generated by the internet and analyzed by some of the AI techniques that you are learning about here. Where's this going? So interestingly short answer is, I really wonder whether anyone really fully knows. And that's true of almost any technology, right? You can't always predict. By the way, I'm about 3/4 of the way through the presentation so just bear with me for a few more minutes. But I can point to different things here, but the main point I want this slide to highlight for you is that some of the breakthroughs that we're seeing right now are not so much progress in terms of just more clever algorithms or even more different data, but it's partly just leveraging more and more computing power. I wonder where that's going to go, I don't know that that's sustainable. But I do think that if you want to get a sense of where this field is going, think a little bit about language in particular. Because if I go back and think a little bit about how government agencies were making decisions in the late '90s when I was there, most of the expert analysis was being done using techniques like probit, logit, econometrics like regression, stuff you're going to be learning about in this class. But it was being mediated through humans presenting to each other. What AI systems may increasingly have the capacity to do is to use those very same techniques, but to then communicate with the user in a way that is adaptive to the human and able to leverage language in a way that previously software did not do. So that persuasive ability of software is something we have never really seen before. And as we have more effective use of compute and greater use of compute, I think the feats that will be possible when you sort of marry up the GPT-3 type stuff with the analytics will be very different. Which is to say a lot of humans who are consuming the output are not necessarily going to be in a great position to be very sophisticated arbiters of whether what they're being told or recommended is accurate or not. Just to wrap up, there are all kinds of interesting intersections now about the law and AI and policy problems that result. I want to make a pitch to you. This is kind of tentative. I'm not as certain about this as I am about other things, that we're actually having this really weird bifurcated bimodal distribution of attention to the problems, where some problems now are so familiar that we don't necessarily know how to solve them but you will hear the buzzwords very often-- explainability, interpretability, bias, privacy, et cetera. And these problems I think of as not medium to long term problems, they are present day problems. They have hit already. Just ask Robert Williams. And then when you see an interview with Elon Musk, that you're going to hear about catastrophic or existential risk. 
I think that it would probably be a big mistake to ignore catastrophic or existential risk, much as I would have argued in the 1960s, if I'd been alive then and an adult, that anybody who was interested in the future of fossil fuels, even if we didn't have all the science, would probably be making a mistake if they ignored completely what the risks might be. If they were trying to understand the risks systemically for the planet of the use at scale of these techniques for producing energy once the rest of the world, meaning poor people in Indonesia, and India, and Africa, and China, began to demand the level of consumption of energy that Americans and Europeans had taken for granted, right? But I still think that in some ways the catastrophic or existential risk piece is not a risk that I believe the world is likely to be facing in five years, in eight years or ten years. That's just maybe something we can go into in the Q&A, about why that is. But I suspect that the level of delegation we have already engaged in to AI systems doesn't get to the point where they can protect their purview and power without intervention as well as they might someday. And obviously that requires further thinking. But that leaves some seriously interesting issues that I think really deserve attention more in the short term. For one, this question of where causal responsibility lies when a system that deploys AI acts in a way that is not safe. Think about the autonomous vehicle. But not only the autonomous vehicle; think about the AI system in a large company that increasingly is making financial decisions. Reviewed by humans perhaps, mediated by humans, but increasingly in an autonomous way. I think that problems involving power and collective action are really interesting in this space. So if you're running a large company and suddenly 27% of those jobs are now going to be-- or the functions done by different people in different jobs are now going to be done by AI systems, how does that redistribute power within the organization? How does the advent of lethal autonomous weapons influence the distribution of power and geopolitics, for example? How does it empower countries with smaller armies, and so on? Another point, which is familiar to people working on cars in particular, is that precision can spur disagreement. Right now a lot of legal rules are written in fairly general terms, which is to say humans are not supposed to drive when they are impaired. They're supposed to engage in driving that shows reasonable care, et cetera, et cetera. These are fairly vague descriptions, and the court can figure out what that means in particular fact patterns with the help of a jury. But when you can actually program an automated system to make split second decisions that are extremely precise about when and how to prioritize exposing some smaller number of humans to risk when you can save a larger number of humans, like a variation of the trolley problem, that will spur disagreements that didn't exist before. Just like mapping technologies, when they developed and became more precise, spurred disagreements between different countries that previously shared borders in very inhospitable locations, when the border could really not be traced with quite as much detail and specificity.
Just to mention maybe one or two last quick things on this slide, I think it's going to be really interesting as AI systems pose the question of what it means to maximize social welfare. Like how do you design a system that is going to have as its core attribute (which is what some people are trying to do) that it's going to try to keep humans safe, or it's going to try to avoid doing anything that will imperil too many people? Taking human values and turning them into code is actually really, really difficult. And it is related to the process through which humans think about change and conflict, which is to say, we often deal with conflict through our institutions like courts and legislatures. Increasingly, as we deal with conflict through machines, we'll have to program machines to help defuse conflict, and not only to point out how two views that seem to be very similar are actually in tension with each other. All right. So we've gotten to my very last slide. I'll end here; there's probably too much text on it. But here's what I want to highlight. If you are listening to this lecture and you're thinking, I hope that part of my career is spent thinking about how I can help move AI and the design of AI so that it is socially beneficial, I want to highlight to you that that is actually really difficult to define. And probably in ways you've already anticipated. But I want to highlight in particular a tension between two different ways of thinking of what the social good is for purposes of AI and pretty much everything else. In one version of what it means to work for the social good, you basically develop systems that increasingly are good at giving people what they want, what they say they want, but especially what their behavior indicates that they like and that they value. So the entertainment that they want, the products that they want, the classes that they want, the kind of teaching that maximizes student evaluations, feedback, and so on. But of course, part of what makes life so interesting is that there's a separation sometimes between what people say they want and what they actually want, or what people say they want and what they do. Or for that matter what people want at time one, when you started listening to this lecture, and what you want right now, which is probably for me to stop, right? And once you start admitting to the idea that human welfare is more complicated, and further once you start designing systems that are in real time shaping human affect and culture and behavior, it actually becomes really, really difficult to know where to land. Like how to take advantage of the human knowledge you have to know how to make humans better off. I don't know how we solve that problem. But I do know that the things that I do as a judge and the things that we do in law schools as lawyers and the things you do as technical experts are increasingly merging. And I don't think that we can answer these really tough questions without acknowledging that our bodies of knowledge have a border that is increasingly becoming really blurry. And with that, I'm going to stop and thank you for listening, and I'm looking forward to your comments and questions and feedback, concurring opinions, dissents, whatever you want to share. Thank you [INAUDIBLE]. So I think-- yeah. So the way we're going to go for questions and thoughts is either raise your hand or put it on chat, and then they'll just call you or read the question. Great.
I should add that if we were in a real classroom, what law professors do is we call on people. I would call on people. But I can't really call on people, so I'm going to wait for your questions. I have a question. You mentioned that a lot of the laws for the abuse of AI actually already exist. Can you give an example of an existing law that you think would be used for AI applications or systems pretty soon but is not already in use-- for example, something that's already there? Absolutely. Great question. Thank you. The short answer is, let me start with the common law, which covers the subjects that we teach law students in the first year that are-- they're sort of defined by the fact that the law is a little bit more judge made. As you learn in basic civics of a system like the American system, in most cases, the legislature is elected to enact what the law is. And then the executive branch will implement it. And then the courts will judge and interpret what the law means. But there are certain branches of the law where, in our Anglo-American tradition, the law is actually first developed by judges over time. Little by little, case by case. And then the legislature will jump in, and they will tweak the law in this way or that way. So those bodies of law include contract law and tort law. And both of those are so clearly about AI, at some level. So contract law is the law of promises that we make to each other and when they're binding and when they're not. Right? When Professor Sadigh promised you a good class, I think she's delivering, as are her colleagues in this class. But if you say, well, the class wasn't good, I've been defrauded, the law will try to determine whether there's an actual legal claim that you have or not. So in the end, just imagine now for a moment that increasingly transactions are being made by two AI systems making smart contracts with each other in a split second, because you've pre-programmed one to say, as long as this stock falls below this price, buy a whole bunch of it, right? And when the lines of supply and demand cross because two AI systems are talking to each other, the deal is made. But then it turns out that maybe this was class C shares of stock, not class A. And so who's stuck dealing with the cost of a transaction that is not what both parties wanted? So existing contract law has a lot to say about that. Now, tort law is an area of law that frankly, when I was a law student, I thought was really boring. When I was a law professor, I thought it was kind of technically complicated, not that interesting to me. I had my hands full teaching other stuff. As a judge, I think of it as fascinating, really, really interesting. That's the body of law governing when your conduct harms other people and when you are liable for that. There is no way to have a discussion about cars, like automated cars, without having tort law be a big part of that. So to what extent is the designer of the software that runs the vision system for the car responsible for this person who gets run over, versus the person who runs the company that tested the software, versus the company that designed the car and marketed it for you, versus the driver that pushed the car to operate in really bad weather?
And tort law is really complicated, but I will just give you one quick insight which is very intuitive. One workhorse concept in tort law, which has been on the books for a while, is the notion that tort law should pay attention, other things being equal, to who in that chain of causation is in the best position to have avoided the harm at the lowest cost, the least-cost avoider. And you can see how here that would be a really interesting and important question. Who could have done just a little bit to prevent that person from getting run over? There are dozens of other areas of law, but this, to me, is a really good example of how, when Silicon Valley says, oh, we have to decide if AI is going to be regulated, I think there's a little disconnect with reality. OK. I think we have a-- go ahead. Yeah. Yeah. OK. Yeah. Thanks. I have a question. So these systems are probably very opaque for judges and people who actually have to make decisions about what happened and how it interacts with case law and statutes and whatever. And my question is, if as a judge you have to answer a question of fact about a particular AI system, do you just have experts come in and testify and be like, well, nobody else has any hope of being able to understand it, so we're just going to take what they say as gospel? Or do judges and clerks and things actually try to educate themselves on the math behind how the stuff works? That is an excellent question. And I have good news and bad news. What do you want to hear first? Let's hear the good news. OK. The good news is that in some ways, the problem you've just described is not completely new to law. And in our system, we have a kind of interplay of decision making that involves jurors being asked, subject to some instructions given by judges, how to interpret an ambiguous fact pattern. Lay people who are supposed to be acting in their best conscientious effort to do the right thing and follow the instructions. And then experts, who in an adversarial system are selected carefully, vetted carefully, debriefed before they sort of come before the court, and then subject to cross-examination, who can shed light on things and help the jurors and the judge make decisions. And then judges, who are supposed to resolve questions as a matter of law, like questions that ultimately are more about how do you interpret the legal issue itself. And if, for example, a statute says a "highly autonomous system" shall be regulated subject to subpart J, then is this a highly autonomous system or not? So that's a mixed question of law and fact. For example, the judge might do part of it. So in particular, the expert testimony piece will frequently involve experts who come and pick at the really intricate math type questions that you're raising. And I wouldn't say the system is perfect, but I would say the system works OK. So other contexts where the highly technical gets adjudicated, kind of like it would in an AI context, would be DNA evidence, for example. Base pairs, and what does it mean if you say the match is 1 in 1.7 billion? How do you know that? What's the difference between that and 1 in 3 million, which is what the other expert said? We have a way of dealing with that to some extent. Here is the bad news, though. I think the bad news is that none of the technologies in the past have had the potential that AI systems do to talk back.
And I think that is not a small thing, because what that means is AI systems can be designed in a way that I think creates a little bit of a comforting illusion that even the experts understand what's really going on, when they may be influenced more by design choices that might be really, really hard to arbitrate. Where the AI system is actually maximizing not the level of accuracy it conveys to the user about the mathematical basis for a conclusion, like that this person is likely to re-offend, but instead maximizing the possibility that the decision maker who's being influenced by the AI system is going to agree with it. And that could even be an expert who's testifying, right? So this is where AI accountability begins to merge with cybersecurity. You're with me? Ultimately, cybersecurity problems are very much about how, if you go back literally to supply chains, you can mess up the very core architecture of how a microprocessor works. There are ways you can bias results that can become incredibly difficult even for an expert to pick apart. And I don't think we have a great answer for that. But there may be blockchain type, really fancy ways of using hardcore encryption-like stuff to sort of have greater confidence in results and to know when things have been messed with. But somewhere along the line there are humans, and humans are imperfect. And I worry about that piece of it a lot. Awesome. Thanks. We have a couple of questions in the chat. Great. Are you able to view the chat? [INAUDIBLE] Let's see. I see three participants raised their hand. Oh. In the group chat. Yep. Yep. There are two questions about international issues. Do you want to take both of them together? Yes. Let me see. From a global perspective, is there any collaboration between countries in regard to AI adoption? Do you see AI systems from countries of different values being prevented from being adopted? OK. And then, similar to how developing countries will strive to attain the same level of life quality as developed countries, it seems just a matter of time before AI becomes the next thing. How should we think about international cooperation and rivalry? Oh, yes. In AI development, is there anything we can do as technologists to help? How should we navigate a time where the US has marked AI software export restrictions? OK. Great questions. Let me start with the second one a little bit. So I think that as technologists, you can probably help by trying to make sure that the hype doesn't run away with the discussion of these issues. So I could find people who are in the national security world, I can find people who are in the public intellectual world, who will see the relationship between the US and other countries so much through the lens of rivalry that very little space will be left for any collaboration between scientists, for example. Between civil society non-profits that are trying to reduce the risk of climate change by using machine learning tools or whatever. And I think that technologists will be important voices in saying, we can be legitimately concerned about how differences in technological development can affect geopolitics or relations between different countries, but not run to the conclusion that everything is pure competition. I bet you that there are people in this class who were not born in the US. Well, I know that's the case because I wasn't born in the US. And I know that's true of others of you.
And to me, that's like a really poignant reminder of the risks of having the conversation about AI shut down to the point that it becomes too one sided and too much just about national advancement. In no way do I want to deny-- and this will get me a little into the earlier question-- that there are different agendas, different goals, different geopolitical objectives of different countries and that getting some advantage in AI technology can translate into potential military and economic advantage. So some balance has to be struck, and that requires some careful discussion. It requires some norms. It requires some cooperation from universities, because we think of universities as working best when mostly things are pretty open in the University. We share knowledge. I learn from you all. You all learn from me, et cetera. But the reality is sometimes careful lines have to be drawn. And ultimately, that does reflect the reality that countries do have different values about all this. And I'll just mention one word that it is a simple way of making that point, but there are other examples I could give you. The word is privacy. So I can imagine many countries in the world that could argue either that their populations simply have no particular reason to value privacy the way Americans do, or that whether their population values privacy or not, their law is such that they prioritize other things. And they're simply going to gather a ton of data about a ton of stuff. And I don't think the data translates automatically to greater power in the AI space, but it is a significant advantage if you have it. It'll be interesting to ask in the coming months and years to what extent some combination of reinforcement learning and the generation of artificial sort of fake data, or fake information inputs into a reinforcement learning algorithm can make up for just raw access to real world data. But I still think a corpus of actual human behavior is quite something. And if you simply didn't have to worry about privacy, the insights you might get into the expressions on people's faces when they're having a private conversation about something incredibly sensitive, or the fear you can see in somebody's eyes when they begin to realize that they've said something on social media that is likely to bring a knock on the door from the police is valuable, particularly if your goal is to try to not only improve the lot and the well-being of a population but to control them and to limit the extent to which they push against you. So I think this leaves us in a really challenging space. I really, really would urge all of us to highlight the importance of some public collaboration across borders on this. We don't stand much of a chance, I think, of getting really to where humanity needs to get to on so many crucial issues, including AI safety, by the way, if we don't have some sharing of information. Pure competition is going to drive a lot of the dangerous and riskiest technological experimentation on the ground, at least for a while. On the other hand, I think we'd be naive to think that everybody shares interests. So some degree of building of norms and cooperation among communities of people who are in civil society in the world of nonprofits or philanthropy or education, I think, will be really crucial. [INAUDIBLE] has a question. Yes. A general question. [INAUDIBLE] comes to be real if I just say that I develop projects that are [INAUDIBLE] that collect more information from the network. 
So what should I take care of-- something like copyright, privacy, getting permission from the source, or even some citation that I have to put there when I use this thing? Is there any kind of general rule to follow? Yeah. Yeah. Thank you. This is a really big subject. Let me just try to abstract a little bit from your good question. You're basically raising the broader question of how we might think of ownership and responsibility over data, particularly as people work together on these projects that mine huge amounts of data and use it in maybe different ways. And I'll give you a short answer, but then I'll elaborate a little bit. The short answer is, increasingly the world is waking up to the fact that control over data really is control over property in some ways. So just as you might have a use agreement that says to somebody, you can license the use of this technology that I've developed. Let's say, a piece of hardware, like a camera that can see really, really well at night. But by using this camera, you are agreeing not to use it to look into people's homes without their permission or something. So increasingly, I would say there is a crisscrossing regime of law. Some of it is state law. Some of it is federal law pertaining to particular classes of data, like medical data. It really highly regulates the use of data. By the same token, are there opportunities still to harvest, scrape even, data from, say, the public internet that can then be used in different ways? Well, of course. And sometimes that will allow breakthroughs to occur in AI. But this is where it gets really tricky, because we're really in the midst now of developing national and then eventually, maybe, global norms about what it means to appropriately design AI systems that will get rid of data that are no longer needed for the original purpose. So let me give you two competing perspectives about that. The first one is that because AI systems are so capable of developing insights, using the techniques you are learning in this class, that discern patterns in the data that human intuition would not have been able to detect, big masses of data used in new ways are risky. Because it means that maybe you end up getting embarrassed when people discover that the fact that you like a certain kind of literature and that your eyes move in a certain way when you're in a conversation means that you have a really short attention span. And you can't be trusted with a certain kind of job, right? So those questions are partly being mediated with respect to whether we should create norms that once data are used for the purpose for which they were collected, we destroy the data. Now, I'll give you the perspective of me as an academic. So a lot of my early work as a law professor was historical. I would actually look at old memos and documents that were going back to presidential decisions made in the Roosevelt administration. Well, I was looking at what happened when Roosevelt was trying to reorganize the government on the eve of World War II and how he was trying to protect certain programs from being defunded as the country was getting ready to go to war. By the way, as a little subplot on that, one of the things I learned about that I was not expecting is that there was a big biological weapons research program that was being funded with White House support, despite the fact that that arguably contravened certain kinds of statements the White House had made and arguably contravened certain aspects of legal norms at the time.
But long story short, the point is if the norm had been followed that you only keep the data for its original intended use and then get rid of it, where in the world would I be able to write the stuff that I did? How would I be able to do it? And you could say, OK, well, those are presidential records. That's different. But in general, the historians who write about how humans lived 150 years ago are doing it with data that were not intended for historians. So I think we have to strike a balance. But I would say you should assume whenever you're dealing with data that there's probably some rule. And if there isn't some rule that's in the law, it's probably in human subjects requirements in a university. Thank you. Yeah. OK. My question is with regards to the out of lab and the models working inside the lab, which you mentioned. So let's say an organization develops a model-- let's say it's for self-driving cars, for example. I'm just picking an example. 99% of the time, inside the lab, the model works very well. And outside the lab it costs us the loss of a person, a life, as an example. I'm thinking in extremes here. So if this kind of situation comes to you as a judge in your court, what is the decision making, from the law side, which you take with regards to the AI model? And what's the thinking which goes on behind you making this kind of a judgment against the maker of the model? So it's working 99% of the time, but there are failures, because the models are given what has been coded and what the training has been done on. So I'm just wondering about the decision process, about who-- so is it tied to the risk on which the company has done the due diligence? Or what is the sense of the responsibility? So that was the broad question. Thank you. Good. Thank you. So that's another good opportunity for me to share with you how much existing law is already grappling with these issues that arise that are so relevant to AI, and particularly to the scaling up of AI outside the laboratory. So I'll preface this by saying that because I'm a sitting judge, I wouldn't want you to feel like I'm telling you exactly how I would decide the case if it came up, because we actually have cases that are not unlike what you're saying that are pending in the California courts. And I'm not supposed to say how I would decide them. I can tell you, in general, the bodies of law that are relevant to trying to deal with this question and in what direction they've moved over time. So we have bodies of law, particularly in tort law and in contract law and in consumer protection law more generally, that can use basically three sorts of techniques to deal with the risk that you're pointing to, which arises when you go from highly efficacious behavior in the lab to what happens when some technology is operating, quote unquote, "in the real world." So when a vision system, for example, is tested in controlled conditions, it works fine. But now you're putting it on the front end of a car that is going to work mostly autonomously, and it's driving around Palo Alto, and then even driving around some much more irregular environment, like some dusty road, unpaved, in northern Mexico. So one body of law is tort law. Again, this is the body of law involving the duties that you owe, for example, to others as a company or as a person. And here, a core insight of tort law is that you have a duty of care if there's a theory of negligence that is being-- well, let me put it this way.
If the claim is that the manufacturer should have been more careful than the manufacturer was, and the manufacturer owes a duty to the person who is using the product, which is its own separate question, a crucial issue will be the extent to which prevailing norms in the industry about how much testing happens outside the lab were followed or not. The more those norms converge, the easier it is for the company to say, perhaps, well, look. We did some outside the lab testing. You could easily spend a billion dollars a day testing outside the lab infinitely, but we did enough testing that we met the industry norm. A different technique would be to rely more on contract law, where you could say, I was sold a product that was guaranteed to me to have a degree of safety and efficacy in it. And in fact, it didn't reflect that because it wasn't tested outside the lab. And the promise that was made to me was not that this product was just tested in the lab. It also implied a lot of testing outside the lab. And a third strategy is more administrative regulation. So this is like what the FDA does with respect to pharmaceutical products. And here the key insight is that we don't rely purely just on tort law or contract law, we actually have the government saying to you, you can only sell this product if you've tested it in a particular way. And as you go through that pharmaceutical approval process and get into phase II, phase III, phase IV trials, effectively, what you're doing is you're going further and further outside the lab. So we can do this to some degree. There are going to be some nuances, but I think it's really important to remember, we have different tools we can use to deal with that risk. Thank you. I notice we have a couple of questions in the chat. Should we do some of those, and then come back to the raised hands? Yes. I think, maybe, we have time just to take these two questions. So first is under a criminal justice system. [INAUDIBLE] Oh. I see that. OK. I'm going to just read it out. If we find that an algorithm deployed in the criminal justice system that doesn't explicitly take into account race, but is systematically discriminating against Black people, perhaps COMPAS, are there legal ways to counter that discrimination? Is it constitutional to take into account to protect a characteristic of a race? Oh. Great. OK. Does it mean we just shouldn't use the algorithm? Oh. OK. Excellent question. So the short answer is the legal system has in America, for good reason, been long, deeply concerned with racial inequities. Whether it has been insufficiently concerned is a question that we can leave for another day. And others can talk about. But it has been concerned about it. And that concern is reflected in several parts of our legal system. It's reflected in a lot of statutes at the state and federal level against discrimination by race, like provisions against discrimination and employment and hiring. But it's also reflected in the Constitution, for example, in the Equal Protection Clause of the Constitution. And here I would just say, the legal system treats differently uses of race that are-- well first of all, the legal system treats differently uses of race than other classifications that people might be subjected to. They are subjected to what the legal system calls strict scrutiny. 
Which is a very, very demanding form of review where essentially explicit classifications are not permitted unless there's a very compelling, strong justification on the part of the government, and there's really no realistic way of doing it in a different manner. Where it gets more complicated with something like COMPAS is where there's no explicit racial classification, and yet you still have biases. And here I just note that you can find algorithmic ways to reduce that bias or even make it disappear completely. And there might even be legal reasons to do that. But generally, when you do that, one of two trade-offs will happen. You will either increase the likelihood that other variables, that may not otherwise be so consequential, become more consequential. So to some people, that might mean you're introducing a certain kind of different bias, although we may not care about it as much because it may not be racial. Or you simply reduce the accuracy of a model in some cases. Now, that may be entirely sensible to do, but these questions about when and how you re-calibrate the results of a decision making process, because it doesn't take race into account and yet it still gives unequal results, is a very familiar and vexing and difficult question in criminal justice and in the legal system more generally. Let me just take the last question then. Which one is it? The one that says via advertising and [INAUDIBLE]? The question, actually, was the [INAUDIBLE]. OK. So my question is regarding-- is that one? How much tolerance, OK, would society have when AI makes mistakes? On the one hand, we say humans are not perfect. On the other hand, we want to see how much a company can improve an algorithm to avoid an accident caused by their AI product. To what extent should an AI product's manufacturer optimize their product to become acceptable within society? Great question. I think this gives us a chance to end where we began, actually. Because when it started, I highlighted to you that not only can AI have many benefits for society, I named some, but also that the relevant comparison is not to perfection but to what imperfect forms of decision making we might have to rely on if we don't rely on a particular AI system. But it would be wrong to conclude from that discussion that as long as the AI systems are more accurate than human decision makers, then there's no legal problem or there's no policy problem. I think, instead, the reality is that as AI performance improves in a very discrete domain, two things might happen that are relevant to the answer to your question. The first is that we may come to understand and trust AI systems better to make that discrete decision, so long as it doesn't introduce some other biases that we think of as even more concerning. So notice the point that I made about how AI systems might be really good at picking out faces that are unfamiliar relative to humans who are picking out unfamiliar faces. But if they're trying to discern emotion, they may not be as good as humans right now. That might change over time. But right now that means that we have to be very specific with respect to what we're expecting a system to do, rather than presuming that there's a sort of halo of efficacy beyond the very narrow context in which it's been tested, which might have to include just beyond the lab, right?
Number two, as AI systems get better, in general, the standard of care that the legal system uses to discern whether something works effectively or not will begin to be redefined so that it's not just human efficacy. It's a well-performing AI system's efficacy. And not only a well-performing AI system, but ideally a well-performing AI system that does not have a built in set of biases that we consider problematic. So that means, for example, if over time 70% of human passengers are in autonomous vehicles rather than human driven ones, then a faulty form of performance from one of those vehicles that increases the risk of harm and actually results in somebody getting harmed, might still be actionable despite the fact that even that faulty system works a lot better than human drivers did. It also means that even if AI systems are better at discerning new faces, if their accuracy is much, much better for white faces than Black faces, that would be a policy and potentially legal problem for some that might require remediation and attention even if the system works better than humans. All of which is why I think these problems are going to keep your generation busy for a long time. Thank you. All right. Thank you so much, Tino. Thanks for the great talk and awesome discussion and there were lots of interesting questions. This was really fun. So let's thank Tino again, everyone. Thank you, everybody I really enjoyed this. And I appreciate your very thoughtful questions. And best of luck.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Fireside_Talks_Artificial_Intelligence_AI_and_Language.txt
OK great. Let's get started. So welcome, everyone, to the fireside chat or talk on AI and language. So today, we're going to do something a little bit different. I want all of you to go to sli.do. And the guest code is cs221. So we're going to try to use this platform for doing Q&A. And also, I'm going to have a number of polls throughout the talk. So the first question, if you click here, you'll hopefully see that. It's what city are you in right now? And I already got some responses. San Jose, Palo Alto, Stanford, Seattle, College Station, Fort Mill, Cupertino, New York City. So welcome, everyone, from all over. And if you go-- oops. I guess this Zoom thing is in the way. OK. I guess I can't see that. But anyway, there should be a Q&A tab where you can go and type in your question. I'll try to monitor that throughout the hour. OK. All right. So I want to start by asking you a very simple question. What is the difference between these two cute little kittens and these two kids here? Anyone know the answer? Both can see, smell, taste, move around the environment. Kids are sometimes cute too. What's the main difference? Let's make this interactive. Someone can just shout out an answer. Humans can talk. Humans can talk. Yes, thank you. That is the main difference. And while animals do have some sorts of communication-- especially songbirds and dolphins, and honeybees have their waggle dance-- none, I think, can boast as rich and complex a language as human language. So language, I think, is really something that's uniquely human and defines who we are. So before getting into talking about AI and NLP, I want to spend some time just talking about why language is special, so that we can get a richer appreciation for language. So if I had one slide to summarize language, this would be it. So this is one of my favorite XKCD comics. Some of you have probably seen it, but I'll just read it anyway because I think it really highlights the right ideas. So. Anyway, I could care less. I think you mean you couldn't care less. Saying that you could care less implies that you care at least some amount. I don't know. We are these unbelievably complex brains drifting in a void, trying in vain to connect with one another by flinging words out into the darkness. Every choice of phrasing and spelling and tone and timing carries countless signals and context and subtext and more. And every listener interprets these signals in their own way. Language isn't a formal system. It's glorified chaos. And you never know for sure what any words will mean to anyone. All you can do is try to get better at guessing how your words affect people, so that you have a better chance of finding ones that will make them feel something like what you want them to feel. Everything else is pointless. So I assume you're giving me tips on how you interpret words because you want me to feel less alone. If so, then thank you. That means a lot. But if you're just running my sentences past some mental checklist so you can show off how well you know it, then I could care less. So what do we learn from this? So first, language is social. It's meant for communication. I think a lot of us, coming more from a kind of data or ML background, might think language is just a body of text. But it's really this dynamic thing that humans invented to communicate with each other. The other thing is that talk is cheap, and something about language requires an incredible amount of trust between people so that it can actually function.
But interestingly, it can also be used to deceive, which is interesting right now. And it's just kind of miraculous how language allows us to express all these different thoughts, from poetry to math to how to fix a bike, and so on. So where did language come from? The short answer is no one really knows. And it's really hard to pinpoint, because while writing came around 3,000 BC, before then there was a long period of spoken language, and spoken language doesn't leave fossils or anything. And there was so much controversy around the question that the study of the origins of language was actually banned for about 100 years in Paris. But we can conservatively put an estimate on it: it started sometime between maybe 2.5 million years ago, when the genus Homo first came on the scene, and 100,000 years ago, which is when modern humans really started doing things-- which is a huge range. But to put it in perspective, this is a very recent development compared to the history of all of life on Earth. And we know that it served an evolutionary purpose. So if you read Sapiens, this book by Yuval Harari, language is perhaps one of the key reasons why Homo sapiens became so dominant, because it allowed you to communicate and coordinate on such massive levels-- for example, coordinating on a hunt, or communicating about food sources, and so on. And interestingly, language allows you to talk about things that aren't here and now. That is probably one of the most powerful things. And in fact, it allows you to talk about things that don't even exist. There's a whole genre called fiction that's about that, and things which are in the abstract. So in contrast, like I said before, our kind of sister fields of computer vision and robotics tap into capabilities that have been around for much, much longer. Vision goes back over 500 million years, and language is barely, let's say conservatively, maybe a million or two years old. So just for fun, let me do a poll. I actually have to create the poll first. So what language do you speak? Or languages. Let's see. Multiple choice, free text. OK. Let's see if this works. I have to disable-- OK. So go to the poll, and I'll let you fill that out. So we know that there's not one language. There are multiple languages. And furthermore, languages have evolved. So you can draw a giant family tree of languages. And this branch just shows the Indo-European languages, which cover all of Europe, Iran, and some parts of northern India, and which developed 10,000 years ago. This branched off into Germanic languages and Romance languages. Germanic went into German and English and so on. And today, there are 6,500 languages, many of which are actually going extinct, because language, again, is social. So if you don't have anyone to talk to, your language just kind of disappears. And language is changing all the time. English has definitely evolved since Shakespeare. But I think in grade school, you were probably told that "they" is supposed to be plural, and you shouldn't use "they" to refer to a singular person. But now, especially with this trend for having gender neutral pronouns, "they" is kind of proudly singular. And Merriam-Webster declared it the word of the year in 2019. You can think about internet slang and emojis as also a continuation of language into the kind of digital sphere, and so on. OK, so I'm getting a lot of English, Mandarin Chinese, Japanese. So quite a bit of-- Python 2, Python 3. Yes, very nice.
OK, so one thing that I often get asked is, of all these languages, are some harder or more powerful than others? I think it's widely accepted that all languages are basically equivalent. But there was this hypothesis around the '20s called the Sapir-Whorf hypothesis that says the structure of language affects speakers' worldviews. And you see this in fiction like George Orwell's 1984, which talks about a new language called Newspeak, which was simplified so that they could make sure people couldn't even think to critique their government. This has been challenged by a Universalist school, Chomsky and Pinker, who think that language and thought are universal, and all of the differences are very superficial, governed by a few parameters. And while it is true that languages do differ, they are largely the same. Most languages have nouns and verbs that refer. We are all humans living in the same world. But there are some differences. One example is that English lacks what is called clusivity, which is the distinction between-- when you say "we," it's ambiguous whether you mean to include the listener or not to include the listener, whereas some languages like Tamil or some Chinese dialects actually mark that distinction. Or Mandarin Chinese lacks the distinction between past tense and present tense but, of course, has other ways of compensating for that. So one question, maybe to just have another poll-- let me stop that poll-- is, do you believe that language shapes thought? Now, I know that these questions are obviously not binary. But I just wanted you to kind of get a gut feeling. Are you leaning more towards yes or no on that? And I will activate that poll. So do you abide more by the Sapir-Whorf hypothesis, that the structure of language does influence how you think about the world? Or do you think that all humans are really the same, and we just happen to learn different languages, and those differences are fairly minor? So can you guys see the numbers? Yup. OK. So we have about 90% Sapir-Whorf and about 10% no. OK. So this is a richly, hotly debated-- [CLEARS THROAT] --topic in linguistics even to this day. So another fascinating thing about language is that we're not born knowing it. Babies can make sounds, but it takes them a few years to be able to actually acquire language. And importantly, despite what their parents might think, they're not taught from explicit instruction or from teachers. Rather, they learn naturally from immersion in language. And by the time they're five, they actually have a fairly fluent grasp of the language and speak grammatical sentences and so on. And language acquisition is very multi-modal and grounded. So language accompanies sight, and sounds, and action, and touch, and all these things. And it's active. You can't teach a child by just putting them in front of a TV and expecting language acquisition to occur. So one of the big, big questions around language acquisition is the nature versus nurture debate, which also affects other areas as well. So the big question: is language innate? Chomsky, who is a famous linguist, came up in the '50s with this idea of what he called poverty of the stimulus. And he said that basically the sentences that a child hears can't possibly be responsible for the richness of language that's exhibited in actual humans. So he thought-- he concluded, naturally, that a large part of language must be innate. Sorry.
I'm trying to delete this poll and add a new one at the same time I'm talking. OK, so I'll ask this poll: do you think language is innate? And if you think about it, he does have a fair point, because as children we hear so few examples-- not nearly enough to really capture all the cases. And we constantly run into new language all the time. And we have to generalize compositionally to longer sentences, to new contexts. And we all kind of land on the same language. So I think he does have a point. On the other hand, he was not an experimentalist. He was kind of a classic armchair linguist who thought deep thoughts about how things should be. And one could also imagine, what about the role of grounding? Maybe that is the part of the experience that really shapes language acquisition, and maybe we are just really malleable, and so on. So it seems like everyone's quite divided on this. So 53 yes and 47 no. OK. That's always fun. So maybe you guys can talk about this with your friends. What I will say-- oh, now. Whoa, this is a tight race. What I will say is that no matter where you are on the spectrum, once you have kids, you really think that there's more innateness than there's not. So now it's equal, 50-50. OK. Great. So let's move on. So let's take a look at language itself. And I'm going to introduce some basic concepts. So there's a whole field of linguistics that studies language. And I really encourage you, if you're interested, to take a linguistics class. I think it's one of the most interesting, eye-opening kinds of experiences. But I'll just go over some basic things quickly. So here's a sentence. Beethoven was born in Bonn and displayed his musical talents at an early age. Now, what's going on in this sentence? The linguists ask, what is the structure of the sentence that allows us to understand what it means? So first of all, there's tokenization: the sentence is just a stream of characters, and tokenization is the process of converting that into words. It seems very simple, right? But as we'll see later, it's not as simple as it looks. Part-of-speech tagging is the idea that some words are nouns and other words are verbs, and some of the verbs are past tense versus present tense. Parsing goes a step further and talks about the grammatical relationships between words. For example, "displayed" has a subject and an object. And the subject is-- in this case, what is the subject? Anyone? I know there are English speakers in the audience. Is it "Bonn"? OK. I think maybe it's-- OK, it's "Beethoven," right? OK. So even though "Beethoven" is very far away from "displayed," it's nonetheless grammatically very close to "displayed." And language has to be linearized, so not everything can be close to each other. Co-reference resolution, or anaphora, corresponds to the fact that some words are pointers to other words or concepts; "his" refers to "Beethoven." Named entity recognition is the task of identifying which entities-- usually proper nouns-- are people or locations or organizations.
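To make those pipeline stages a bit more concrete, here is a minimal sketch of how you could run them with the open-source spaCy library. This is just an illustration, not something from the lecture itself, and it assumes you have spaCy plus its small English model installed (pip install spacy, then python -m spacy download en_core_web_sm); the exact tags and entities you get back depend on the model.

import spacy

# Load spaCy's small English pipeline (tokenizer, tagger, parser, NER).
nlp = spacy.load("en_core_web_sm")
doc = nlp("Beethoven was born in Bonn and displayed his musical talents at an early age.")

# Tokenization: the raw character stream split into word tokens.
print([token.text for token in doc])

# Part-of-speech tagging: a coarse category (NOUN, VERB, ...) for each token.
print([(token.text, token.pos_) for token in doc])

# Parsing: grammatical relations. Print each subject and the verb it attaches
# to (spaCy labels the subject of a passive clause "nsubjpass").
for token in doc:
    if token.dep_ in ("nsubj", "nsubjpass"):
        print(token.text, "is a subject of", token.head.text)

# Named entity recognition: spans labeled PERSON, GPE (locations), and so on.
print([(ent.text, ent.label_) for ent in doc.ents])

Each stage is a statistical model trained on annotated text, so it can and does make mistakes, which is part of why these "simple" tasks are not as simple as they look.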
So what is a word? So let's go down into a word. So OK, here's a word, "light." So it seems like the word is pretty straightforward, at least in English. But the problem is that the meaning unit, what conceptually should be a word, sometimes actually goes beyond a word. So "light bulb" is kind of a unit; "light" alone doesn't really capture the full meaning. Sometimes the meaning unit is within a word. For example, "lightning" isn't just a blob. It consists of "light" plus suffixes that turn it into a verb and then a gerund-like construction. There's also the case where words have multiple meanings, usually called word senses. So in all of these sentences, you can see that "light" actually has a different meaning, and we figure it out based on context. Conversely, some meanings have multiple words that refer to essentially the same meaning. That's synonymy. This also happens with sentences, which is called paraphrase: multiple sentences can get at the same meaning. One huge caveat is that there are no true equivalences between any two words or sentences. There are always different subtleties in meaning. So you can think of it more as a kind of distance metric. There are also notions of relations between words, like hyponymy, which is is-a relations, and meronymy, which is has-a relations. And this allows you to do entailment, which is whether a sentence logically implies a second sentence. Entailment you can kind of think about as the 3SAT of language. It's the problem that embodies a lot of different tasks. If you could solve entailment, you could do question answering, you could do sentiment classification, and so on. Oh, are there-- I haven't been monitoring the queue. I don't know. I don't think there are any questions. If you have a question, maybe just holler. So this is all about lexical semantics, the meaning of words. Then we talk about compositional semantics. So compositional semantics is a rich tradition that goes back to logic. This is Frege, who was a logician at the turn of the 20th century. And there are two ideas-- model theory and compositionality-- which I'll explain. So the first is that sentences are just symbols. It's a convention that we say block 2 is blue. And what the sentence means has to be associated with what is in the world. So there's a world in which block 2 is actually blue. And this is a kind of important distinction which we gloss over in NLP, or we don't even think of it, because language is so natural. And the second one is compositionality, which is that the meaning of the whole is built from the meanings of the parts. So compositionality is the key thing that allows us to build more complex meanings out of smaller units. And this is probably the reason why we can generalize to all sorts of new contexts: because we've learned the meanings of the words, and we know how they combine together. And that's how we can interpret new sentences in new contexts. Quantifiers I think are really interesting. So "every" is a word that says that-- well, it's hard to explain language without explaining in terms of language. So "every" means "every." Hopefully these pictures tell you what's going on. And "some" is a kind of existential quantifier. There's also quantifier scope ambiguity, which means that if you have "every non-blue block is next to some blue block," that could mean that each non-blue block has some blue block next to it-- possibly a different one for each-- or that there exists a single blue block that's actually next to every non-blue block. So language is ambiguous. The little sketch below shows how those two readings can come apart in an actual world.
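Here is a tiny model-theoretic sketch of that ambiguity, just as an illustration. The world of colored blocks is made up, and "next to" simply means adjacent positions; the point is that the same sentence gets two different truth values in the same world depending on quantifier scope.

# A toy world: block positions mapped to colors (a made-up example).
world = {1: "red", 2: "blue", 3: "red", 4: "blue", 5: "red"}
blocks = list(world)

def next_to(i, j):
    # Two blocks count as "next to" each other if their positions are adjacent.
    return abs(i - j) == 1

# Reading 1: for EVERY non-blue block, there is SOME blue block next to it
# (the existential quantifier takes narrow scope).
reading1 = all(
    any(world[j] == "blue" and next_to(i, j) for j in blocks)
    for i in blocks if world[i] != "blue"
)

# Reading 2: there is SOME single blue block that EVERY non-blue block is
# next to (the existential quantifier takes wide scope).
reading2 = any(
    world[j] == "blue" and all(next_to(i, j) for i in blocks if world[i] != "blue")
    for j in blocks
)

print(reading1, reading2)  # True False: same sentence, same world, two scopes

This is model theory in miniature: a sentence by itself is just symbols, and it only becomes true or false relative to a world.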
And naively, you might think that we can just substitute these two in all contexts just because they're equivalent. But "Lois believes that Superman is a hero" is not the same as "Lois believes that Clark Kent is a hero." And this has to do with the fact that "believes" sets up a kind of opaque context in which you can't just do substitution. There's much more to be said about this if you study linguistics; I just want to give you a flavor for how language can be quite subtle. Here are some other examples of pragmatics. So conversational implicature is this phenomenon where there is a sentence that you say, but there's actually additional meaning beyond that sentence. So if two people are talking, and A says, what on Earth happened to the roast beef? and B says, the dog is looking very happy-- sure, the dog is looking very happy. That's a sentence. But really, the implicature is the dog ate the roast beef. Presupposition is actually kind of subtle but different-- it is the background assumption that's independent of the truth of a sentence. So if I say I have stopped eating meat, what's the presupposition? That means I was once eating meat. OK? So regardless of whether I have stopped eating meat, or even if I said I didn't stop eating meat, that still presupposes I was once eating meat. So presuppositions are these very slippery and insidious things that people use to convince other people of things without them knowing. So it's really useful to know what a presupposition is, because if someone tries to presuppose something on you, then at least you'll have the language to detect what it is. And it's precisely insidious because it's in the background. So you're focusing on did I stop eating meat without realizing that you just got a presupposition that you might not agree with laid on you. OK. So Paul Grice, who's this philosopher, framed language as a kind of cooperative game between a speaker and a listener. And the dynamics of the game are what give rise to these things like conversational implicature and presupposition. Well, this goes back to the XKCD comic. It's really a game between speakers and listeners who are trying to communicate and agree on something. And the conventions and what language means in all these contexts is really context dependent and fluid. Just a few other ideas. Ambiguity and vagueness and uncertainty. So let me try to explain what each of these means and how they're different. So ambiguity means that a sentence has more than one possible but precise interpretation. So here are some headlines. And let me know what you think of them. OK, so stolen painting found by tree. OK. So what does that mean? How about Iraqi head seeks arms, or local high school dropouts cut in half, juvenile court to try shooting defendant, kids make nutritious snacks, ban on nude dancing on governor's desk. And you can see, if you're kind of smiling a little bit, that these headlines are funny because they have a serious meaning and then a meaning which is totally ridiculous but is nonetheless technically available-- the sentence is genuinely ambiguous. Vagueness is where a sentence has one interpretation, but it does not specify the full information. So if I said I had a late lunch, there's no ambiguity there. It's just that I didn't tell you what time I ate lunch. Maybe it was 1 o'clock or 2 o'clock or something. Uncertainty is another form of not knowing something. And it's due to having an imperfect model. So say the witness was being contumacious.
Some of you probably know what that means, so you're not uncertain. But some of you probably don't. And you have this uncertainty, which is not a property of the sentence but of the listener's ability to understand natural language. So all of these things are useful to think about separately, although often they get conflated, especially in more model-free methods. So I will say that there is another style of linguistics called distributional semantics, which actually goes back to the '50s as well. And I'll give you the basic idea. So if I give you these sentences, the new design has blank lines. Let's try to keep the kitchen blank. I forgot to blank out the cabinet. So what does blank mean? Or what word goes there? Someone say the answer. The answer's in chat. Oh, OK. I didn't know there was-- this is in the Zoom chat? Yep. OK. Let's see. I think I lost my-- OK, chat. OK, there we go. Oh, OK. There are answers. OK, clean. Great. OK, got it. OK. So the idea of distributional semantics is, I didn't have to tell you what the word means. The meaning of the word is characterized by the contexts in which it appears. So this is the idea of the distributional hypothesis. Semantically similar words occur in similar contexts. Or more eloquently said by Firth, "You shall know a word by the company it keeps." So this is another way of thinking about semantics, and actually the one that has really been picked up because it's so synergistic with modern statistical techniques. So I'm just going to summarize. There are two ways of thinking about semantics, or the meaning of sentences. One is compositional semantics, which is more top-down and model first. So you think about how language works. You try to think about parse trees or semantic forms. And you can capture a lot that way. We went through a lot of examples where you can feel language does obey certain types of structures. On the other hand, you can think about distributional semantics, which is a bottom-up, data-first approach, generally associated with vector spaces, where you don't really try to nail down what meaning is, but instead associate a word with the set of contexts in which it appears. So let me do another poll-- hold on, I need to create this question-- and ask what do you think is the best way to achieve natural language understanding? So is it compositional semantics or distributional semantics? OK. So go to sli.do. And I'm curious what you think. OK? It would've been interesting to go back maybe 10 years and ask this question, because I think the answers would've been quite different. And I'll talk a little bit more about that in a bit. So it looks like it's about 30% compositional and 70%, maybe three quarters. So most of you think that distributional semantics is the way to go, which may be concordant with what is happening in the world right now. OK, so why don't I take a few minutes right now. I went through a lot of material. And maybe I'll just ask if there are any questions to discuss. No questions? Someone has to have a question. So one question is about context information: it's never spelled out, but the meaning depends on who is speaking and where. Yeah. So I've been kind of deliberately vague about what context is. Traditionally, it's linguistic context, which is the neighboring words in the text. But you could imagine that it could be very much generalized to the context of the speaker, to the multi-modal setting, what's going on in the world, who is speaking, to whom the person is speaking.
And all of that rich contextual information definitely is useful for understanding the meaning of that word. So in the beginning, you said that humans have the highest level of communication. How likely is it that actually some animal has a much higher level of communication but we're not smart enough to understand it? Yeah, that's an interesting question. So yeah, it's almost a bit of a philosophical question, because it is in theory possible that some animal has a brilliant system of communication, and we just didn't measure it properly. People have been surprised by how sophisticated the communication of certain animals can be, like dolphins or elephants or even bees. Often people draw the line between having recursion, or language that's able to express compositional thoughts, versus systems which are maybe very contextual and nuanced but don't have that level of abstraction. And so according to that, I think we're pretty sure that humans are the ones that have the most amount of abstraction. But then again, I guess this is also a very human-centric way of defining what highest level of communication means, because maybe some other creatures have more context and more nuance than in human language. Elephants communicate below 20 Hertz in infrasound. OK. And Hitchhiker's Guide. Thanks. That's a good one. How about: a plant's communication can be chemical, color, temperature, even touch? Yeah. So there's a lot of other-- I guess in general, communication is not the same thing as language. Yeah. So I'm using language very narrowly here to mean human language and what we know to be human language. Of course, even humans can communicate in other ways, like gestures and so on. OK, great. Let me move on. Thanks for the questions. So what I want to do next is to talk about building language understanding systems and what has happened over the last 60 years, now that we have maybe a greater appreciation of what language is. So we've seen this slide before. It's the Turing test. Alan Turing in 1950 asked the philosophical question of whether machines could think, and devised the Turing test to test whether a computer or a machine could be intelligent by seeing if it could talk to a human and convince the human that it was actually a person. This is one of the dialogues from his paper. You can read it. What I want to emphasize here is that Turing was not interested in language and wasn't trying to design a language understanding test. He was trying to design a test of intelligence. And language was just the means to convince someone, or to kind of verify that there was something up there. And so I think this is very interesting from an AI perspective, because in order to have intelligence, you need to be able to essentially show it in some way. And you can think all the deep thoughts in your head, but if you don't have a way of externalizing them, then, well, it's the if-a-tree-falls-in-the-forest-does-it-make-a-sound kind of setting. And language has turned out to be a very useful way of probing for understanding. One of the early systems I want to highlight is ELIZA. So Joseph Weizenbaum, when he was at MIT, developed this system called ELIZA, which was really meant to be a parody of a psychotherapist. So you would lie on a couch. And you would talk to the system. And some dialogues would go like this. ELIZA would say, is something troubling you? Men are all alike. What's the connection do you suppose?
They're always bugging me about something or other. Can you think of a specific example? Well, my boyfriend made me come here. Is it important to you that your boyfriend made you come here? Because I'm just so depressed, and so on. So what was interesting about the system is that the system itself is very simple. This is the 1960s, so there's not that much going on in terms of complexity. It was based on rules and matching. So you can imagine a rule that says if the word "alike" shows up, then you ask what the connection is. Or if you say "always," then ELIZA says, can you think of a specific example, and so on. So it's very simple. But what Joseph Weizenbaum found out, which was really surprising, is that the people he showed this to actually started getting emotionally attached. And there was one incident where Weizenbaum's secretary actually asked him to leave the room so that the secretary could have a real conversation with ELIZA. So in the '60s, I think this was perhaps telling of what was to come. Later I'll talk about GPT-3, which is obviously a much more realistic version of this. But you could definitely think about some of the consequences of that technology. Incidentally, Weizenbaum later in his career became very pessimistic and actually very negative and critical about technology, maybe because he had this epiphany that, well, what we're building is actually maybe not so good after all. So this is one of my favorite natural language systems, built by Terry Winograd. He was also at MIT, but he moved to Stanford, where he was a faculty member for a number of years. It's called SHRDLU. And the idea is that you have a person who is able to conduct a dialogue about a blocks world environment. So pick up a red block. OK, grasp the pyramid. The computer can say when it doesn't understand things. Find a block which is taller than the one you're holding and put it into the box. So it's fairly complicated. And the computer can reason and do anaphora, or co-reference resolution, and ask for clarifications and so on. What I think is remarkable about the system is that it was an end-to-end system: it included a parser and could do semantic interpretation, dialogue, planning. It wasn't just a language system. In fact, it was framed more as an AI system that could allow a robot to do things in the world. And so this was in some sense the first really super ambitious project for its time. However, while SHRDLU worked really well in its limited domain, Terry Winograd later wrote this paragraph, which is interesting. He said, a number of people suggested to me that this was a dead end in programming. Complex interactions between the components made it just really hard to understand what was going on. So eventually, Terry couldn't even extend the program, because it was just too hard to keep in his head. So this is interesting, because as we know, language understanding didn't really get solved despite these narrow successes. And the history of NLP mirrors quite closely the history of AI in general. Remember in the first lecture, I talked about how AI was filled with more of these logic-based methods, which didn't quite scale. What's interesting is at that time, in AI in general, there were people working on neural networks, although they were the vast minority. But in language, it was perhaps even less so, because I think language is actually a discrete communication system.
And there was a rich body of work in linguistics. And NLP and linguistics co-evolved in certain ways that made it very natural to embrace all the logical structure that was embodied in language. But I think people realized that there were cracks showing at the seams, even in the '70s but especially the late '80s. And in 1990, it was time for a new set of methods to come onto the scene. So this actually started a bit earlier, with speech recognition, because speech and language are closely related. And speech is definitely a bridge between the continuous, noisy world, where you want to be doing more pattern-recognition-type things, and the logical world. So HMMs, Hidden Markov Models, were developed for speech in the '70s and '80s. And finally in 1990, there was a famous paper from IBM Research, colloquially called the IBM models for machine translation. They developed a probabilistic model that could translate between two languages. And before then, translation was completely logical and grammar and rule-based. And this was a radical way of thinking about it. This is actually, incidentally, based on Bayesian networks, which we'll see later in the course. So for a lot of the '90s, these so-called generative models-- you can think about them as extensions of Bayesian networks-- really dominated NLP. Around 2000, people started turning to discriminative models, a.k.a. linear classification. And there was another famous paper which introduced conditional random fields, which marries the structure that was so inherent in language with basically linear classification. So this was used to do things such as named entity recognition, where you would mark up words as names of people or companies and so on. And so instead of predicting just one y from x, you predict a bunch of y's from x, where the y's are the labels of all the words. So this technology was actually quite influential in NLP but also more broadly in computer vision, where for much of the 2000s, this was kind of the main way people tackled structured tasks. Another thing I'll mention is Latent Dirichlet Allocation, which also came from models of language. And here the emphasis is on unsupervised learning: you point LDA at a text, and it can discover topics in the text. So here is a text where it discovers things like, oh, some words are about budgets, some words are about children, and some words are about arts, in an unsupervised way. And this led to a whole cottage industry of topic modeling papers. And LDA continues to be something that is commonly used in practice. What I will say is that it's interesting to think about how a lot of these developments were driven by someone trying to address a problem in natural language processing, and they led to more general technology that then was applied in all sorts of different areas like computer vision and genomics and so on. OK. So now the 2000s were ending. And we know that at the end of the 2000s, deep learning really started gaining momentum. ImageNet was 2009, so it wasn't huge yet, but there were definitely rumblings. And it's interesting culturally how the NLP community reacted. At the time, the NLP and vision communities were both very skeptical. And if you think about where NLP had been, a lot of people still viewed language as structure heavy. And language has a lot of latent structure.
And there was no way, people thought, that this mess of neurons could actually do anything with this kind of intricate structure. And you can think about a lot of the work in the 2000s as really a marrying of this structure with statistical methods-- putting probabilistic choices on this very rich, discrete structural backbone, right? So in some ways, this was a reconciliation of compositional semantics with distributional semantics. You have a bit of both. But still, I think it was largely based on traditional linguistic thinking. So then I remember there was this 2011 workshop at NeurIPS. I was at it. So NeurIPS is the machine learning conference. So there were a bunch of machine learning people who were using vector-based models to argue that this covers semantics. And then you have Ray Mooney, who is much more of an old-school, logic-based AI person. And a heated argument kind of broke out. And he is famous for saying, you can't cram the meaning of a whole sentence into a single vector. OK. So that captured the attitude at the time. And then things started changing. I think the first move was word2vec, which was this way of taking lots of text and embedding words so that each embedding characterizes the contexts of that word. So actually, word representations have been around since the '90s. But somehow word2vec came at the right time, and that really caused people to pay attention. And I think one thing that people noticed, which gathered a lot of attention, was the fact that you could do analogies. For example, if you embed things in a vector space, you see that countries and capitals are related by a consistent vector relationship, with some asterisks. There was a recent paper from last year which I'll just highlight because I thought it was really interesting. I mean, six years later, and they used kind of the simplest method. But they ran word2vec on just about three million abstracts of materials science papers, just strings. And they were able to discover certain types of patterns by looking at the vector spaces and actually predict certain types of compounds as having certain material properties, like being thermoelectric. So this is an interesting view of how something that's so dead simple, that knows nothing about chemistry and only knows about word co-occurrences, can actually generate some interesting insights. So word2vec wasn't deep learning in the sense that it was only, I guess, one layer. So it was kind of shallow learning. And I think 2014 was when the deep learning community really, in some sense, vindicated itself in the NLP community. So there's a sequence-to-sequence learning paper from Google in 2014 where they did machine translation. And the way they did machine translation was by taking a sentence and running an LSTM over it. If you don't know what that is, it's fine. It's just some black box that embeds the sentence into a single vector. And then using that vector, it spits out a new sentence. So if you watch the module on differentiable programming, it'll give you a better idea of what I'm talking about. So this was really cramming the meaning of a sentence into a vector, literally. And at that time, the results were kind of OK. But it was enough of a proof of concept, and surprising enough, that later extensions of this really transformed into actually usable technology.
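Going back to the analogy arithmetic mentioned a moment ago, here is a minimal sketch of how such analogy queries are typically answered over word embeddings. This is my own illustration, not code from the lecture; the toy 3-dimensional vectors are made up, whereas real word2vec embeddings are learned from large corpora:

import numpy as np

# Toy embeddings; real word2vec vectors are learned, not hand-written.
embeddings = {
    "paris":  np.array([0.9, 0.1, 0.0]),
    "france": np.array([0.5, 0.9, 0.0]),
    "rome":   np.array([0.8, 0.2, 0.3]),
    "italy":  np.array([0.4, 1.0, 0.3]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    # Solve a : b :: c : ?  by finding the word closest to b - a + c.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("paris", "france", "rome"))  # prints "italy" on these toy vectors

The "asterisks" in the lecture apply here too: on real embeddings, the nearest neighbor of b - a + c is only often, not always, the intuitive answer.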
So it's interesting if you look at the progression from rule-based machine translation, where there's no machine learning, to statistical machine translation, which is data driven but based more or less on some sort of structure of language, to the neural world, where there's really even less structure. And things have kept getting better. There's a researcher called Fred Jelinek who's famously quoted as saying, every time I fire a linguist, my accuracy goes up. I'm not sure he said those exact words, but that's at least the urban legend. One other note I'll make is that machine translation seems to be the task, at least in NLP but maybe more broadly, that has really pushed the limits of these technologies. And I think it was really the driver that got Seq2Seq technology going. So in 2016, Google completely transformed their machine translation system. And instead of having multiple systems, one for every pair of languages-- so you have n squared pairs-- you actually have one system that can do translations between any pair of languages, which was really kind of mind blowing at the time and still, I think, kind of impressive. I think I mentioned some of these things already. But I think it's worth highlighting that these statistical methods do have a lot of biases in them. So if you translate these sentences, you get genders appearing which are correlated with certain types of professions. This is even more extreme. So if you take a rare language where there's not much data, and you pump in something that's just garbage, you get some really disturbing translations coming out. This was a few years ago, so they might have fixed this. But nonetheless, it turns out that you can cram a lot into a vector-- but there's some really weird stuff in there. So maybe this is a good time to pause if there are any questions before the next wave of slides. There are some questions on sli.do I see. Oh, OK. Sure. I guess I-- sorry, I was looking at Zoom chat. OK, so questions. Oh, well, OK. Yes, I'll cover GPT-3. How about body language? So I think I mentioned that gestures are typically not studied in NLP. But they're definitely fair game for communication. And there's I think an interesting sub-field in NLP which has to do with grounding and how people use language in the world, where it's natural to consider gestures. Let's see. Can we build a common logical language with precise meanings so all languages can be referenced against it? So this has been tried. So there was a language called Loglan, which was developed to remove all ambiguity from language. So it would be precise, and everyone would know what you mean. Personally, I think it was a bit of a fool's errand, because I think that ambiguity is exactly what allows language to be so efficient. So the meaning of a sentence is a function of not only the words, but also of the context. So if the context already makes some things obvious, then you don't really need to say them. And also, you have to take into account the ease of acquisition. And it's, almost by construction, much easier to learn the languages that we've evolved to learn. And something that's designed is generally not going to have that property. So NLU as a combination of plausibility and fluency. So I guess natural language understanding is something I haven't quite defined, because I think there's no accepted definition of it. You can think about it as demonstrating proficiency on a number of tasks such as question answering or translation.
Or I guess if you think about generation, you have to think about plausibility and fluency, but also truthfulness. Why did you choose to do research in NLP as opposed to other areas? How would you think about-- sorry, there's this icon in the way-- what you wanted to study in the future? Sorry. I don't know why sli.do has this icon. That makes me want to read this. So how did I choose to do research in NLP-- I guess this is very much a personal thing. So I don't think my answer is necessarily good for everyone. But I think just the idea of what you can do with language seemed so powerful to me. Like I said, it's one of these things that is so uniquely human. And also, it seems like a window into understanding cognition, because it's a way to do the I/O in and out of brains, I suppose. Is there research on how to incorporate real-time changes in language, the constant emergence of new words and phrases? Yes, so there is a lot of interesting work studying language change over time. There's historical linguistics, which talks about bigger-scale changes, from Latin to Spanish and French and so on. But there's also an interesting opportunity to do it on the web, because internet language changes very, very quickly. There are always new words that come up. And also, on Twitter, you can geotag things so that you can actually witness how language spreads over time. So yeah, there is an active area of thinking about language change. Well, not just new words, but also existing words changing in meaning. Like the word "awful" used to mean-- sorry, the word "awesome" I think used to mean something more like "awful." But now it's changed, kind of flipped in sentiment from negative to positive. Cool. So maybe I will move on. Thanks for the questions. OK, so up until this point-- so 2014 is when deep learning really started gaining momentum. And from 2014 to '18, it was really about neural-izing everything you have. Parsers, co-reference resolution systems, named entity recognition systems, everything under the sun. And numbers went up. Things did get better, because the models were just more powerful than what existed previously. I think a big turning point came in 2018. So there was this paper called deep contextualized word representations, maybe better known as ELMo. And the idea behind ELMo can be summarized as follows. So imagine you're trying to do question answering. So our group actually spent quite a bit of work creating the SQuAD question answering data set, with 100,000 examples. So it was a lot of work to get that. But in some sense, 100,000 is really, really small compared to the massive amounts of text on the web. And so the idea behind pre-training is that you train a language model to predict the next word given the previous context. And so this is called self-supervision. You just make up a task, which is: predict the next word given the previous words. And then you learn embeddings. And then you use those embeddings to drive some downstream task where you have far fewer labeled examples. And the result was that across the board, across a number of different benchmarks, the accuracies went up by a few points. So I guess it's maybe hard to really appreciate what three points means. But consider that these systems are already hard to improve, and a one-point gain was considered good. And this is a substantial gain across a wide variety of tasks. So this got a lot of NLP people really excited.
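To give a feel for what "predict the next word given the previous context" means, here is a minimal count-based bigram language model. This is my own illustrative sketch; ELMo itself uses deep bidirectional LSTMs trained on huge corpora, not counts:

from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the log .".split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(prev):
    # Return the most likely next word and its estimated probability.
    word, n = counts[prev].most_common(1)[0]
    return word, n / sum(counts[prev].values())

print(predict_next("the"))  # ('cat', 0.25) on this tiny corpus

The pre-training idea is that this prediction task needs no labels: the text itself supplies the supervision, so you can train on essentially unlimited data before touching the small labeled downstream data set.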
Later that year, BERT came out from Google. And again, I'm not going to go into the details. Actually, if you watch the differentiable programming lecture, I do explain a bit more about what BERT is doing. But think about it, again, as a masked language model, which is kind of: predict a word given its context. And this was more or less just scaled up and engineered properly. And this, again, yielded huge gains over previous methods. So this really, I think, changed the game in NLP, from having specific architectures for different tasks to a world where you have one architecture that does multiple tasks. So I guess I didn't mention that: BERT-- or BERT plus friends, some Muppet-- is now used to essentially power downstream NLP tasks like co-reference resolution or semantic parsing and so on. So it really brings us one step closer to having a kind of unified representation, or one model that can kind of rule them all, in a sense. So going back to reading comprehension, one thing that is remarkable is that if you look at the leaderboard and the accuracies, they're way above human-level performance. So these systems look like they're getting superhuman performance. But one thing that we did a few years ago was to really probe into whether these systems actually understood language. So here's a paragraph and a question: the number of new Huguenot colonists declined after what year? And BERT correctly answers 1700, which is right here. But if you add a distracting sentence, which looks like it answers the question but doesn't, then BERT will get distracted and answer the wrong thing. And quantitatively, all the systems just fall by quite a bit when we add this distracting sentence, whereas humans obviously don't get fooled by that as much. So one thing to keep in mind is that while models have gotten impressive results on benchmarks, there are still these blind spots, which means that solving a benchmark is not the same as solving the actual underlying task. That can be misleading if you read headlines saying computers can read better than humans-- that's just not true. Computers do SQuAD better than humans. That is true. OK. And what's a little bit more worrying is that these models can be easy to break. But we don't actually know how to fix them, except by training larger models and hoping that they break less. OK. Any questions before I talk about GPT-3? Because that's going to be the last topic. OK, so: is naming algorithms after cartoon TV characters a thing, or just a coincidence for the two instances? I would gather that it's very much not a coincidence, because you also had ERNIE that came out afterwards, and BigBird. And clearly people are going along with a theme. There's another cast of characters: BART and MARGE are others-- I guess they're not Muppets, but they're from The Simpsons. And that's another line that Facebook has been pursuing. Is there any ongoing work to improve comprehension, reading-between-the-lines understanding? It's actually interesting, because these large models are so contextual and leverage so much external world knowledge about text that they're almost reading too much between the lines. They're making inferences and making assumptions, which is what leads to all these biases in the models, because it's not stated in the text. They're just learning from associations. OK, let me move on. So to get to now the final thing. So 2020. So in May, OpenAI releases GPT-3.
I'm skipping a bunch of other models like GPT and GPT-2 in the interest of time. And this was essentially a big language model. I mean, big is an understatement. It's a ginormous language model trained on Common Crawl, which is a best approximation of the internet, so to speak, and it has 175 billion parameters, whereas BERT had maybe 300 million parameters or so. So this is much, much, much larger. So the interesting thing about GPT-3 is this ability to do what they call in-context learning. So traditionally, if you use BERT, what you would do is you have a model, you show an example, and then you perform an update, and you show another example, and you perform an update, and so on. And this is called fine-tuning a language model. But GPT-3 showed that that was actually not necessary to get some interesting performance. So you can actually do zero-shot learning. So you can say, translate English to French, and you say "cheese," and then you give the prompt, and then it will actually do something reasonable. Or you give one example or a few examples. And notice that this is not training data in the conventional sense where you optimize a loss. This is actually the input into the language model. So a language model, all it does is that you give it a string, and you ask it to predict the next string, right? So this is encoding a task in natural language, which is a very different way of thinking about learning. Let's see. Do I have enough time? So OpenAI has this playground. Let me just show you a little bit. So there are many things you can do. So this is a prompt. And I'm going to say, how can I help you today? Can you tell me the difference between SARSA and Q-learning? I actually don't know whether this will work. OK. So, sure. Inspired by Q-learning. OK. So (LAUGHING) this doesn't really answer the question. That was nice, but it didn't answer the question. Yeah. So how about something else? Who founded Microsoft? OK, so this is also [LAUGHS] a lie. So you can see that it generates fluent English, but it sometimes doesn't have the best tendency to tell the truth. This is another example. So again, this is a prompt. And let me go to-- take this up. So this is kind of from the course syllabus. Whoops. So this is some complicated expression. And then this is, again, what you feed into GPT-3. And it says, artificial intelligence is the magic that makes computers do things they're (LAUGHING) not supposed to do, like talking and driving cars. Interesting. OK. Well, you can judge for yourself what to make of this. Anyway, if you have access, you can burn quite a bit of time playing with this. Another example I'll show is you can train it on CS221 quizzes. So you train it on quiz 1 questions and answers, and you see how well it does on quiz 2. So here are the prompts-- what is bold is given to the algorithm. So this should be familiar from quiz 1. And then you ask it, which of the following are examples of regression? And it answers, A, C, and D, which is wrong. But then it offers an explanation, which starts off on target: examples of supervised regression, house price estimation, which is actually pretty good. Spam detection, which contradicts what it answered. And this unsupervised regression, I don't know if that's a thing. So you can get a sense that GPT-3 is really good at generating text, which, if I didn't tell you this was fake, you'd probably read it and say, uh-huh.
But you have to actually look pretty closely now to know that something is up. OK. So I'm trying to give you maybe a more balanced view of GPT-3. I think if you go on the internet and look at all the Twitter threads, you'll just be completely blown away by all the awesome things that people are building out there. But I want to balance it out with some things which are like, yeah, maybe it's not doing everything. Some other things to mention. Gender bias, I think, is very much on people's minds in NLP. They have an entire section in the paper that calls out that, yes, not surprisingly, GPT-3 has gender bias. I mentioned before that there was actually someone who used GPT-3 to generate a blog post. And this ended up number one on Hacker News for a while. And there are even papers coming out saying that GPT-3 primed with extremist context can generate more extremist content, which is, again, perhaps not surprising given that this was trained on the internet. And the internet has many lovely things in it. So some things to think about, I think. So one clear question here is, can we make GPT-3 more unbiased and less toxic? Certainly out of the box, this is just kind of a wild tool-- you should not be using GPT-3 directly to generate things to show people. Another question is, what are the societal impacts of automatic text generation? This is a concern even if the generated text isn't biased: the obvious worry is that fake news is already a problem, and this is just a massive amplifier for producing large amounts of credible-sounding text, which just swamps out anything else anyone wants to say. So that's potentially a pretty dangerous world. On the more scientific side, I think there's still the outstanding question of, can we achieve language understanding without a model of the world or without world experience? So GPT-3 is only trained on text. And furthermore, it's just a large transformer model, which has no internal structure. And one question is, can it be said to understand language? Or does it need the grounding and experience? So if there were more time, I would actually do a poll and get people to discuss this, because I think it's actually not obvious what the answer is. Maybe I'll end with final remarks, and then I'll take questions. So starting at the top, I want to highlight that language is this incredibly rich, complex communication system. Quote, unquote from XKCD: "glorious chaos." Which is, I think, fascinating to study. At the same time, there's a lot of regularity and structure. There has to be, because I think otherwise, we wouldn't be able to productively coordinate and use language as systematically as we have been able to. Current models kind of ignore this structure, but it's also unclear how to incorporate this information in a way that matters. As you can see, the field is moving at an incredible pace. GPT-3 was 2020. BERT was only two years ago. Seq2seq was six years ago. And in only the last six years, NLP has been completely transformed. I would say even in the last two years, it's been completely transformed. So it's interesting what will happen in the next year. And I think there's even more urgency to understand what these models are doing and make sure that the technology is directed in the right way. And then, of course, there's a lot more to be said and learned from this.
There are wonderful classes at Stanford that you can take-- 124, 224-- to learn more about NLP. So with that, I will take any questions. Let's see. Where is-- OK. Is voice language a better source than text language? So this is about spoken audio. It depends what you mean by better. So voice does contain prosodic cues, which carry more information than what's just in the text. Unfortunately, there's not that much of it compared to all the text that's on the web. I think a lot of the issues of language understanding can be somewhat factored out from speech. Often people convert spoken language into written language and then take it from there, although there's definitely structure in speech as well. How do you feel understanding multiple languages is a plus for research? No: how do you feel understanding of multiple languages is a plus for research in NLP? So a lot of research in NLP is very English-centric, which is, I think, a potential problem if you think about fairness, right? I think many rarer languages-- low-resource languages, as they're called-- don't enjoy such high accuracies. And those are probably the people who need it the most. So there is a community of people in NLP who are very much interested in how to design more efficient learning algorithms that help low-resource languages. Finally, how has the study of NLP helped linguistics grow as a field? So this is a really interesting question. Unfortunately, I won't have time to really do it justice. Chris Potts is a linguist at Stanford and has lots of insightful opinions about this. Linguistics is interesting because for much of its history, it was so dominated by formal grammars and semantics, starting from Chomsky in some ways, that this era led to the whole field being dominated by a certain way of thinking. And it's almost that some of these other perspectives haven't really had a chance to breathe. And a lot of formal semantics is still very much in the logical tradition, where you have sentences which you would look at carefully, and you'd try to intuit what might be going on. It's a very different approach than just looking at a broad corpus and trying to make sense of what's there. But this is starting to change a bit. I think at least people are thinking about how these types of models that we're developing in machine learning could be useful. I think that, unfortunately, it's a hard ask, because in some ways, these deep learning models don't give you much more than an existence proof of certain types of data leading to certain types of behavior. And it doesn't really give you insight into language itself, because interpretability is kind of a missing component. But I think this is a really great question. And it's interesting to ask how these models can actually help us understand language better-- help us understand language better, not help computers exhibit better language understanding. Can we alter and add to the objective function of modern models to make them more logical and coherent? Yeah. So this is a natural direction that a lot of people are thinking about. So why can't you have both? You can have the richness of modern neural models plus some logical structure. And there's been a bunch of work that tries to fuse the two together. Personally, I feel like this is far from being solved by just a simple combination, because I think there's something maybe deeper about how things should be structured.
You can add a regularizer that makes BERT be more consistent, or GPT-3 be more consistent. But I think the problem is that the reason we're using these models in the first place is that the phenomena can't be captured by logical regularities. And many of the advantages that we're getting from them are in places where logic fails to deliver. So in those areas, I think we don't have much of an option. We don't have the option of slapping on a logical regularizer, because otherwise, we would've just built the logical thing. How far off is real-time spoken communication in multiple languages? So this is simultaneous translation. So if I am on Skype, and I speak in English, and it comes out in Japanese or something. I would say that this is not mature technology by any means right now, but it's coming. And I think there's been work-- first of all, speech recognition I think is getting really, really good. Then the other main challenge is in machine translation. What makes real time difficult is that word order differs across languages, so you can't translate word by word. So you have to wait a little bit to get enough context and then translate it and so forth. But there are models in NLP that try to do this. And I think you can do a lot by being predictive-- predicting essentially what the speaker is going to say. So I think that a lot of this can be done without even deep, deep understanding of language. What I think we've learned from translation is that getting 90% of the way there doesn't really require any understanding of language. It's just kind of matching symbols contextually. I think getting the remaining bit, and having translations that you can actually trust and that are nuanced and proper, is going to require quite a bit more work. OK. So OK, another question. Are there examples of building languages with RL, making it advantageous to communicate in a multi-agent environment? So there is a bunch of work on what is called multi-agent communication, where people set up some sort of environment and train a bunch of RL agents to act in this environment, where one of the actions is to talk, right? So this is an interesting experiment where you can actually get certain types of languages to evolve from this procedure. And language does help the agents play the game or solve the environment better. But it's rare that these languages just automatically line up with our notions of natural languages, because often these worlds are too limited for language to need to take on that kind of richness. And also, there's no pressure for that-- human language is probably not optimal for anything. It's just what we have and what we happened to evolve. But that's a good question. What are major problems in society that advancing NLP will solve, versus what problems it may create? So there are a bunch of places where NLP can be used for societal good. One thing it can do in principle is to allow a broader set of people-- especially people who might not speak English-- to tap into English resources, breaking down multilingual barriers. It's also been useful for doing analysis, studying how people talk. So Dan Jurafsky and others have a project analyzing how the language of police officers in stops differs depending on whether they're stopping a Black person or a white person.
And this uses NLP techniques to try to study that question. So I think one huge area is that language is used in a social and societal context. And therefore, building tools that help us manage and navigate that societal landscape can be really interesting. Problems it can cause? Certainly fake generations, biases in models. If we start trusting translations or our systems, it could lead to amplification effects where the haves and the have-nots get pushed farther apart. All right. Well, I think we're out of time. So thanks, everyone, for coming and listening. And have a good rest of the week.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_4_Stochastic_Gradient_Descent_Stanford_CS221_2021.txt
Hi. In this lecture, I'm going to talk about stochastic gradient descent. So recall gradient descent, which was the optimization algorithm that we decided on for optimizing all our training losses for classification and regression. So recall that the training loss is an average over all the examples in the training set of the per-example losses. So gradient descent works as follows. We're going to initialize the weight vector to 0, and then we're going to repeat T times and do the following update: we're going to take the old weight vector and subtract out the step size times the gradient of the training loss. And now this looks very simple. But if you unpack what this gradient is, it's actually the average over the gradients of the per-example losses. So now imagine you have a data set with a million examples. Computing a single gradient is going to involve looping over all the million examples just to get a single update, and then you take a step, and then you have to do it all over again. So this is why gradient descent is slow: because it requires going through all the training examples just to make one update. So what can we do about this? So the answer is stochastic gradient descent. So here is the same training loss function, and stochastic gradient descent is going to work as follows. So we initialize the weight vector to 0, and then we iterate T times. And now on each epoch, we're going to loop over the training examples and perform an update on the individual losses. OK, so here, instead of going through the training set and performing one update, we're going to go through the training set and, after each example, perform an update. And this is going to be a lot faster in the sense that the number of updates is large. Of course, there is a trade-off, because each update itself is not going to be as high quality, because it only consists of one example as opposed to all the examples. And that's it for stochastic gradient descent. I want to talk about one small note, which is the step size. So recall that the update includes a step size, which determines how far in the direction of the gradient-- or away from the gradient-- you want to move. OK, so what should eta be? And in general, there's not really one satisfying answer to this. And it's usually a hyperparameter that has to be tuned via trial and error. But here is some general guidance. So the step size has to be greater than or equal to 0. And if it is small, that means you're taking little, little steps, which means your algorithm is going to be more stable and less likely to bounce around. And as you increase eta larger and larger, then you're taking more aggressive steps, so you can move faster, but perhaps at the risk of being a bit more unstable. So two typical strategies for setting the step size: one is using just a constant step size-- we've used, so far, eta equals 0.1, kind of an arbitrary number-- or you can do a decreasing step size, where eta is 1 over the square root of the number of updates that you've made. And the intuition here is that, in the beginning, you're far away from the optimum, so you want to move quickly. But as soon as you start getting close to the optimum, you want to slow down. So now let us explore stochastic gradient descent in Python. I'm going to code it up and see what happens, OK? So remember last time, we did gradient descent. So I'm going to copy this code over. gradientDescent-- sorry, stochasticGradientDescent.
And what we're going to do is modify this code to make it do stochastic gradient descent. OK, so just to recall, last time, we set up some training examples. We defined the loss function, and then we had this generic optimization algorithm. So now, to really tell the difference between gradient descent and stochastic gradient descent, I'm going to make a larger data set. And I'm going to do it in a way so that it's large but structured, so we know what the right answer is-- because, otherwise, how can we verify it did the right thing? To do this-- this is kind of just a general trick-- you generate synthetic data from a ground truth, and then you try to recover that ground truth. So suppose we had some true weight vector. This is our secret code, which is unknown to the learning algorithm, but we hope that the learning algorithm will recover it. And then we're going to define a function called generate, which uses this trueW to generate an example. So here I'm going to generate x, and I'm going to just sample randomly a five-dimensional weight vector-- oh, sorry, an input point-- and then I'm going to set y to be trueW.dot(x). So the examples I'm going to generate are generated from the true weight vector, and then I'm just going to add some noise. Do randn, OK? And then I'm going to set the training examples to be just generate for, let's say, one-- let's do 1 million examples. That's a lot of examples. All right, so let's see what this data looks like. So I'm going to print out x and y just to see what is coming out-- oops, I had a typo here. OK, so here is the data set that we are going to train on. So, for example, x is a five-dimensional vector, and the output is a scalar. A lot of examples here. All right, so I need to update the feature vector to be just x, the identity, and here I'm going to-- the initial weight vector has to match the dimensionality of the true weight vector, and then everything else, the training loss and gradient, I'm going to leave alone, OK? So now let's uncomment this line and let's run gradient descent. Let's see what happens here. OK, so it's going to generate the data. And now, to compute a single gradient, it has to enumerate over one million examples. So this is going to be quite slow. It'll finish the first epoch, and it has some values. And then the second epoch, and it seems like it's making some progress. Remember, we want to see if this can hit 1, 2, 3, 4, 5. The loss is going down, which is good, and it seems like it's moving in the right direction. But it's pretty slow, so I'm just going to stop it there, because I don't want to wait forever. OK, so now let's do stochastic gradient descent. So first, I need to change the interface, because gradient descent only had access to F and the gradient of F. And now stochastic gradient descent needs to access individual losses. So what I'm going to do is define-- actually, I'll just call this the loss of w. I'm going to use i here to denote an index into one of the terms in the sum. So the loss is just going to be one of these terms. And the term I'm going to select out is just the one for the ith data point, OK? And similarly, the gradient of the loss is going to be just the gradient for the ith data point. And this also takes in the index i. So now if I feed in i for various values, I can access the loss and the gradient of that loss function for any given data point.
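Since the transcript only narrates the code, here is a sketch of what the setup described so far might look like. The names (trueW, generate, trainExamples, phi, sLoss, sGradLoss) follow the narration, but the details are my reconstruction under the assumption of a least squares loss, not the lecture's exact code:

import numpy as np

trueW = np.array([1, 2, 3, 4, 5])  # ground truth, unknown to the learner

def generate():
    # Sample a random input and a noisy output from the true weight vector.
    x = np.random.randn(len(trueW))
    y = trueW.dot(x) + np.random.randn()
    return (x, y)

trainExamples = [generate() for _ in range(1000000)]

def phi(x):
    return x  # identity feature map

def sLoss(w, i):
    # Squared loss on the i-th training example only.
    x, y = trainExamples[i]
    return (w.dot(phi(x)) - y) ** 2

def sGradLoss(w, i):
    # Gradient of the squared loss on the i-th training example.
    x, y = trainExamples[i]
    return 2 * (w.dot(phi(x)) - y) * phi(x)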
All right, so now let's go over to the optimization algorithm and let me do stochastic gradient descent. OK, so I'm going to call this stochasticGradientDescent, and-- just to distinguish things, I'm going to use lowercase f for individual components of an objective function. OK, so I'm going to initialize the weight vector. I'm going to use a different step size here, just for fun. I'm going to initialize with 1.0-- actually, let me do this instead. I'm going to set the step size to be 1 over the square root of the number of updates. And each time I do an update, I'm going to increase the number of updates. So, actually, let me do it in this order. OK, so the number of updates starts at 0. And then, remember, in stochastic gradient descent, I'm going to loop over the components of the objective function. So i goes from 0 to n minus 1. So another thing I'm going to have to pass in is the number of components that I'm going to use to index into f. And so now this is f of w, i, gradient f of w, i, and then I'm going to move everything inward. OK, so now to call this function, I'm going to run stochastic gradient descent. And with just the loss and the gradient of the loss, I'm going to pass in n, which is the number of training examples, and the initial weight vector, OK? So let's just review what's going on here. So stochastic gradient descent takes a function which can access individual components of the objective, initializes the weights, and then iterates some number of times. And in each epoch, it's going to loop over all the examples, compute the value, compute the gradient, and then it's going to do a gradient update. And here I'm using a step size which is 1 over the square root of the number of updates that I've made so far. OK, so let's see stochastic gradient descent in action now. I have two returns here, so there's a syntax error. Let me fix that. So now it's going through 1 million examples-- oh, I need to import math as well. So it's going to loop over 1 million examples, but on each example it's going to perform an update. And so when it prints out, it's going to have already taken 1 million steps of stochastic gradient descent. And look at what happened here. So after the first epoch, it's already quite close to 1, 2, 3, 4, 5. And the objective-- I guess the function value doesn't really mean as much, because it's only of an individual example. But you can see that the weight vector is converging quite nicely. And this shows that stochastic gradient descent, sometimes even with just one pass over the training data, can get much closer to the optimum than if you were to do many, many rounds of gradient descent. OK, so that was stochastic gradient descent in Python. So let's summarize here. So we want to optimize this training loss, which is an average over the per-example losses. And we looked at gradient descent, which takes a step on the gradient of the training loss. And we also looked at stochastic gradient descent, which picks out individual examples and updates after computing the gradient of each individual example. And on this example, we've shown that stochastic gradient descent wins. And the key idea behind stochastic updates is that it's not about quality. It's about quantity. So maybe not a general life lesson, but it seems like, in this case, it is wiser to keep in mind what you're trying to do-- which is optimize this objective-- rather than compute the gradient, which is only a means to an end.
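And here is a corresponding sketch of the optimizer just described, continuing the reconstruction above (it assumes numpy imported as np and the sLoss, sGradLoss, trainExamples, trueW names from the previous sketch); the number of epochs is an arbitrary choice:

import math
import numpy as np

def stochasticGradientDescent(sF, sdF, n, d, epochs=2):
    # sF(w, i) and sdF(w, i) give the value and gradient of the i-th component.
    w = np.zeros(d)
    numUpdates = 0
    for t in range(epochs):
        for i in range(n):
            gradient = sdF(w, i)
            numUpdates += 1
            eta = 1.0 / math.sqrt(numUpdates)  # decreasing step size
            w = w - eta * gradient  # one update per example, not per pass
        print(f'epoch {t}: w = {w}')
    return w

w = stochasticGradientDescent(sLoss, sGradLoss, len(trainExamples), len(trueW))
# With 1 million examples, even one epoch brings w close to trueW = [1, 2, 3, 4, 5].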
OK, so that concludes the module on stochastic gradient descent. Thanks for listening.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_and_Machine_Learning_1_Overview_Stanford_CS221_AI_Autumn_2021.txt
All right. In this module, I'm going to be talking about machine learning and give you an overview of all the topics we're going to cover. So remember that machine learning is the process of taking data and converting it into models. And with those models, you can go and perform inferences and answer all sorts of questions. So we're going to focus on reflex-based models. These are models, including linear classifiers and neural networks, in which inference is very fast and feedforward, which makes them very attractive. So in a nutshell, this is what a reflex-based model is. We call a reflex-based model a predictor, and the predictor takes, as input, some x and produces some output y. And in general, x can be something arbitrary, like an image or a text. And y is going to be restricted, and that particular restriction is going to determine what type of prediction task we are talking about. We'll consider two common cases of prediction tasks here. The first is binary classification. So in binary classification, the predictor is also called a classifier, and the output y is called a label, and that label can either be plus 1 for the positive class or minus 1 for the negative class. So some examples of binary classification problems-- there's fraud detection, where x is a credit card transaction, and you're trying to predict y, whether there's fraud or no fraud, so that the transaction can be blocked or not. Another example is moderating online discussion forums. So the input x is an online comment, a piece of text, and you're trying to predict y, whether it's toxic or not, so that the comment can be flagged or taken down appropriately. And finally, here's an example from physics. So after the Higgs boson was discovered, scientists wanted to know how it decays. So the Large Hadron Collider collected a bunch of data, which includes measurements of events. So here x is a measurement of a particular event, and you're trying to predict whether it was a decay event or simply background. So the second type of task we're going to consider is regression. So in regression, y is going to be a real number, and is generally known as the response. So here are some examples of regression problems. So in poverty mapping, x is a satellite image, and you're trying to predict y, which is the asset wealth index of the homes in that area in the satellite image. In housing, you might want to use information about a house-- location, number of bedrooms, year-- to predict the price. And finally, you might be interested in predicting arrival times, given where you are going, weather conditions at the time, what time of day it is, and you're trying to predict y, which is the time of arrival. So the main difference between regression and classification is that, in classification, y is a discrete entity and, in regression, it is a continuous entity. So the final thing we're going to talk about is structured prediction. Structured prediction is a little bit of a catchall: y is simply a complex object. So some examples include machine translation, where x, the input, is a sentence in one language, and y is its translation in another language. Dialogue can also be cast as structured prediction. You're given a conversational history between a user and an agent, for example, in a virtual assistant setting, and you're trying to predict y, which is the next utterance that the agent should say.
Another example is image captioning, which might be useful for visual assistive technologies. x is the image of a scene, and y is a sentence describing or narrating that scene. Image segmentation, which is useful for autonomous driving, takes an image of a scene as x and produces y, which is a segmentation of that scene into regions corresponding to objects in the real world. So it might seem daunting at first to be able to generate segmentations, or sentences, or texts. But there's a secret here, which is that many structured prediction problems can actually be decomposed into a sequence of multi-class classification problems. And this allows us to leverage the machinery that we'll develop for classification when we do structured prediction. So here is the roadmap of the rest of the modules in the machine learning unit. So first, we're going to start with regression and classification, the bread and butter of machine learning. And we're going to focus on the simplest settings-- linear models that we train using gradient descent. Then, we're going to step over to algorithms and introduce stochastic gradient descent, which is going to give us major speed ups over gradient descent. Next, we're going to hop over to models and improve on linear models. So first, we'll show that, actually, even linear models can be pushed to their limits by using non-linear features within this linear machinery. We can use feature templates to organize the set of features that we have. Then, we'll talk about neural networks, which also allow you to have non-linear predictors, but allow these non-linearities to be learned from data. Following neural networks, we're going to look at the backpropagation algorithm for computing gradients automatically, so you don't have to do it manually, and so you can train neural networks. We're going to hop back over here and talk about differentiable programming, which is a generalization or extension of neural networks that will enable us to build all sorts of complicated deep-learning models using building blocks. And all of this is generally done in the context of supervised learning. We're going to touch on unsupervised learning a little bit and introduce the classical K-means algorithm for clustering points. And finally, we're going to end on a few notes. So first is generalization-- the question of, if you train a machine-learning model on a particular set of data, when is it able to generalize to a new set of examples? And finally, I'm going to talk about best practices, like cross validation, and how to do machine learning in practice. So that concludes this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_2_Linear_Regression_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to cover the basics of linear regression. Our story of linear regression begins on January 1st, 1801. Italian astronomer Piazzi looked up at the night sky and discovered something which he named Ceres. He didn't know what it was. Was it a comet or a planet? But he did make some observations of the location before it was obscured by the sun. The data he collected looked like this-- at a particular time, two numbers which represent the location of Ceres in the night sky. But now the big question at the time was when and where was Ceres going to be observed again when it re-emerged from behind the Sun? All the top astronomers at the time went and tried to analyze this data and figure out the answer. So Carl Friedrich Gauss, the famous German mathematician, took Piazzi's data, created a model of Ceres' orbit, and made a prediction. This prediction was actually wildly different from all the other predictions that other astronomers made, but in December, Ceres was located, and Gauss's prediction was by far the most accurate. So now, there's an interesting story here. Gauss was actually very secretive about what his method was, and in 1805, the French mathematician Legendre was actually the first to publish the method, before Gauss could publish in 1809, even though Gauss had this method back in 1795. The method here is none other than least squares linear regression, which is the topic of this module. So here is the framework. So we are given some training data which consists of a set of examples. Each example consists of an input x and output y. So 1, 1, 2, 3, 4, 3, and we can visualize these examples on a 2D plot here, plotting y, the output, against x, the input. So here is 1, 1, here is 2, 3, and here is 4, 3. So what we want to do is take this data and have a learning algorithm produce a predictor f, and the predictor in this case is, let's say, a line. OK? And what the predictor allows us to do is to take new inputs, such as this 3 here, send it through, and produce an output, 2.71, corresponding to this point on the line here. And there are three design decisions that we need to make to flesh out this framework. First, what are the possible predictors that the learning algorithm is allowed to output? Is it only lines, or is it curves as well? This is a question of what is the hypothesis class. Second question, how good is a predictor? The answer is going to be framed in terms of determining a loss function that judges each individual predictor in the hypothesis class. And finally, how do we actually compute the best predictor? There are a lot of predictors out there, and even if we have a loss function, how do we go searching for the best one? This is going to be the question of the optimization algorithm. So this is a recipe that we're going to see over and over again, and it's kind of a build-your-own-learning-algorithm recipe. So we're going to start with the first question. What is the hypothesis class? So here is that predictor that we were looking at, f of x equals 1 plus 0.57x, and that corresponds to this red line. And here's another one. Here's a purple predictor which has an intercept of 2 and a slope of 0.2. And in general, you can consider predictors of the following form, f of x equals w1 plus w2 times x, for arbitrary w1, which is the intercept, and w2, which is the slope. So now, we're going to generalize this using vector notation. So let's take w1 and w2 and pack them up together into a vector, which we will call w.
This is called a weight vector, and we're also going to define a feature extractor, also known as a feature map, phi. So phi is going to take an arbitrary input x and return the vector 1, x, at least in this case, and 1, x is going to be known as the feature vector. So now, we can simply rewrite this equation up here in vector notation. So we're going to write f sub w-- to denote that this predictor depends on the weights-- of a particular input x is equal to w dot phi of x. And this w dot phi of x, which we'll see over and over again, is called the score. OK? So here's an example. If you stick 3 into this predictor, then what we're doing is taking the weight vector and dotting it with the feature vector applied to 3. And remember, the definition of the feature vector is 1, x, so that's 1, 3 here. And if you take the dot product, 1 times 1 plus 0.57 times 3, that gives you 2.71. So now, finally, the hypothesis class, a set script F, is defined as the set of all predictors f sub w, where w can be an arbitrary intercept and slope-- w is an arbitrary vector. OK. So that defines the hypothesis class that we're going to be working with. So now, let's turn to the second design decision. How good is a predictor? So let's take the predictor that we were looking at, the red one, and let's look at some training data. So this is the training data that we had before. Let's plot the predictor and the three data points, this one, the 2, 3, and 4, 3. So intuitively, how good a predictor is comes down to how well it fits the training data, and we're going to quantify that by measuring the distance between the prediction and the target. This difference is called the residual. So we're going to measure the residual for each of our points, and we're going to take that into account. So formally, we're going to define a loss function, which is a function of an example x, y and a particular weight vector, and that's going to be equal to the prediction f of x minus the target y-- so that's the residual here-- and I'm going to square it. So that is called the square loss. So as an aside, you can also take the absolute value here, which gives you the absolute deviation loss, but we're going to stick with the squared loss for mathematical convenience. So on these three examples we can compute the loss. So we take 1, 1 and the weight vector. We dot them together. That's the prediction. You subtract off the target and square it, and that gives you 0.32. The second example is 2, 3, and the third example is 4, 3. Each one gives you a loss, which corresponds to the square of the length of these dashed lines. So now, we can define the training loss of a particular weight vector to be simply the average over the losses. So formally, this is going to be a sum over all the examples in our training set of the loss function on a particular example with respect to that weight vector. And then, finally, we're going to just divide by the number of points in the training set. So in this example, we just average these three numbers, and we get about 0.38. OK? So that is how we defined the squared loss, and the training loss in terms of the squared loss.
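As a quick sanity check of these definitions, here's a minimal numpy sketch that reproduces the numbers above (the 2.71 prediction and the training loss); the variable names are mine, not from the lecture:

import numpy as np

trainExamples = [(1, 1), (2, 3), (4, 3)]
w = np.array([1, 0.57])  # the red predictor

def phi(x):
    return np.array([1, x])  # feature vector (1, x)

def f(w, x):
    return w.dot(phi(x))  # the score

print(f(w, 3))  # 2.71, the prediction on the new input 3

# Squared loss on each example, and the training loss (their average).
losses = [(f(w, x) - y) ** 2 for x, y in trainExamples]
print(losses)           # roughly [0.32, 0.74, 0.08]
print(np.mean(losses))  # roughly 0.38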
So here is the training loss, as we had from the previous slide, and we can visualize it. For every single weight vector, we can stick it in and get out a number. And fortunately, w is only two-dimensional here, so we can actually plot this. So here is a plot, w1 by w2, and every point gives you a training loss on the z-axis. Red here denotes high loss, and blue here denotes low loss. And so it's natural to think about how you would find the point with the minimum training loss, and that's captured mathematically as the minimum over w of TrainLoss of w. So this is the optimization problem that we want to solve. So now, the third question is how do you compute the best predictor? So fortunately, we already have a well-defined goal. We want to find the weight vector that minimizes the training loss, and we're going to adopt a very simple strategy called follow your nose. OK? So you start at a particular w, and then you sniff around, and then you just move in a direction that seems like it's going to reduce your loss the most. More mathematically, we're going to define the gradient as the direction that increases the training loss the most, and importantly, we want to go in the opposite direction, because we want to decrease the training loss, not increase it. So pictorially, what the follow-your-nose strategy, or gradient descent, is going to look like is: you're going to start at some w, and then you're going to follow the gradient. You're going to end up here, and then you're going to compute the gradient again. You're going to end up here, and you might bounce around a bit, but hopefully, you'll decrease the loss on average over time. OK. So here's the pseudocode for gradient descent. So we initialize w to be something, let's say 0s for simplicity, and then we're going to repeat big T times. Big T is the number of epochs, and what we're going to do is take our old value of the weight vector and subtract out some eta-- which is called the step size, and which we'll get into a little bit later-- times the gradient, grad of TrainLoss of w. That's the gradient here. OK? So that's it. There are three lines, and really only one line that's of interest here, and that's all there is to gradient descent, at least at an abstract level. So all that remains is to actually compute the gradient. So remember, here is our objective function: TrainLoss is the average over the individual losses, where I've expanded out the square loss. But gradient descent is actually much more general than just the square loss, or even machine learning. And now, we just need to compute the gradient, and if you remember your calculus, here's how you do it. So the gradient with respect to w of the TrainLoss of w-- remember, there are a lot of symbols here, but we are differentiating with respect to w, not x, not y, not phi. And this is going to be equal to the gradient of this expression. This 1 over the number of examples is just a constant, so the gradient can be pushed inside, and this is a sum-- the gradient can be pushed inside a sum by linearity. So this is the sum over the training set again, and now the interesting thing happens. So here is something squared, and we take the gradient of something squared. You bring down the 2, and then you have the same something, which is, if you remember, the residual, times the gradient of what's inside here. And what's inside here is w dot phi of x minus y. Phi of x is a constant, y is a constant, so the gradient of what's inside is just phi of x. And notice-- there's something interesting I want to point out here-- that the gradient can be expressed as the residual times the feature vector, where the residual is the prediction minus the target. So intuitively, you can think about it this way: if the prediction is equal to the target, then the gradient is 0, so nothing will happen.
And if the prediction is not equal to the target, then the gradient will be in the direction that moves the prediction even further from the target. And remember, we're always minimizing, so we subtract it off, which moves the weights in the right direction. OK. So let's walk through gradient descent for our example here. So here is the training data, again. Here is the expression for the gradient that we just computed on the previous slide, and here is the gradient update, where I've taken the liberty of substituting the step size to be just 0.1, for simplicity. OK? So we start with w equals 0, 0, and then what we're going to do is plug in 0, 0 into this gradient expression, this somewhat nasty-looking thing, and that is this. This is just simply the average-- three examples, 1 over 3, here's the first example, here's the second example, here's the third example. Each term consists of a dot product, the prediction, minus the target, times the feature vector of that example. So I'll let you go through the details here, but if you do the math, you get minus 4.67, minus 12.67. You multiply by the step size, and you get this weight vector. OK. So now, in the second iteration, you're going to take this weight vector and stick it into this expression all over again. You compute a new gradient, and then you subtract that gradient times 0.1 from this weight vector. And you're going to get a new weight vector, and then you keep on repeating and repeating. So after maybe 200 iterations, you're going to end up with something like this. And something interesting happens. If you're lucky, the gradient at the end will be 0. So what does 0 mean? If you subtract out 0, you get the same thing. So it means that gradient descent has converged. By subtracting off the gradient, you're not going to move anywhere, so you might as well just stop, and the stopping point is this weight vector 1, 0.57, which is indeed the red predictor. So just to concretize this even more, let's do gradient descent in Python. OK, so I'm going to pull up a terminal here. So in practice, you probably wouldn't implement gradient descent from scratch, except if you're just trying to learn about gradient descent. But for pedagogical purposes, let me do this, OK? And I'm going to do this in a very barebones way. So I'm going to use numpy rather than PyTorch or something that can do gradients for you. First, I'm going to define our training examples as 1, 1, 2, 3, and 4, 3. I believe those are the training examples. Let me just double check over here: 1, 1, 2, 3, 4, 3, OK. So now, I have to define a feature vector of x, which, remember, is 1, x. So this is just a numpy array. I'm going to initialize the weight vector-- let's call this initialWeightVector-- and this is just going to be an all-0s vector of dimension 2, which matches the dimensionality of phi. OK. So now, I need to define the training loss. The training loss takes a weight vector, and I'm going to actually go to the previous slide here, because it's just basically copying down the math and turning it into code. So this is 1 over the number of training examples times the sum, and the sum is over all training examples x, y in trainExamples. And for each one I'm going to do w dot phi of x-- it's really, literally the same thing-- minus y. And I'm going to take this expression, the residual, and I'm going to square it. OK? So let's make this a little bit bigger. OK. So that's the training loss. OK. Now, I need to take the gradient.
So I'm going to cheat a little bit and just copy that down here and edit it. So the gradient of the TrainLoss is going to be 2 times the residual times phi of x. OK? So that's it for the training loss and its gradient. OK. So now, I'm going to implement gradient descent. So gradient descent, like I alluded to before, is actually a general-purpose optimization algorithm. All it needs is a function, access to the gradient of that function, and an initial weight vector, and it's ready to go. OK? So I'm going to initialize w to the initial weight vector, set eta to 0.1, and then, for a number of iterations t in range of, let's just say, 500, just for fun, I'm going to evaluate the function at w, evaluate the gradient, and do the one-line thing of subtracting eta times the gradient from the existing weight vector and setting that to be the new weight vector. And I'm going to print out where I am: epoch t, w equals w, F of w equals the value. And let's print the gradient just for fun: gradient of F equals the gradient. OK. So now, I just need to call gradient descent with-- what function am I optimizing? The TrainLoss. The gradient is the gradient of the TrainLoss. And the initial weight vector. OK? So that's all I have, and let's actually just run it-- gradient descent. So we see here that in epoch 0, the weight vector is something, and the function value is something, gradient something. And over time, the function value is going to decrease, which is a good sign. The gradient of F is going to start approaching 0, and the weight vectors are converging to 1, 0.57, as advertised. So I will declare this program working.
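For reference, here is roughly the complete script from this walkthrough, assembled in one place-- a minimal sketch reconstructed from the narration, not a verbatim copy of the lecture code:

import numpy as np

trainExamples = [(1, 1), (2, 3), (4, 3)]

def phi(x):
    return np.array([1, x])

initialWeightVector = np.zeros(2)

def trainLoss(w):
    return 1.0 / len(trainExamples) * sum((w.dot(phi(x)) - y) ** 2
                                          for x, y in trainExamples)

def gradientTrainLoss(w):
    return 1.0 / len(trainExamples) * sum(2 * (w.dot(phi(x)) - y) * phi(x)
                                          for x, y in trainExamples)

def gradientDescent(F, gradientF, initialWeightVector):
    w = initialWeightVector
    eta = 0.1
    for t in range(500):
        value = F(w)
        gradient = gradientF(w)
        w = w - eta * gradient
        print(f'epoch {t}: w = {w}, F(w) = {value}, gradientF = {gradient}')
    return w

gradientDescent(trainLoss, gradientTrainLoss, initialWeightVector)
# Converges to roughly w = (1, 0.57), the red predictor.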
Let's just summarize what we did here-- I want to set this up as follows. So here is the optimization problem, which is: you have the training examples, the feature vector, the loss, the gradient, and so on, and this is kind of a specification of the problem we want to solve. And then, down here, we have the optimization algorithm. And we're going to be doing this a few times throughout the course, drawing this kind of modular separation where the optimization problem sits apart from the optimization algorithm. Notice that the optimization algorithm, again, doesn't depend on anything relating to machine learning at all, and the optimization problem doesn't say anything about how you solve it. So decoupling this what from the how is, I think, a really important theme. OK. So that was gradient descent in code, and let's summarize now. So in summary, we take training data, and we have a learning algorithm that produces a predictor that can make predictions on new inputs. And there are three design decisions-- the build-your-own-learning-algorithm recipe. Which predictors are possible? That is a question of the hypothesis class, and we considered linear functions here, where the function is simply w dot phi of x, with a particular feature map, 1, x. You can imagine other things-- we'll see other things later, non-linear features and even neural networks-- but it's still the question of what is the hypothesis class. How good is a predictor? That's a question of what is the loss function. For regression, we looked at the squared loss. Later, for classification, we're going to look at the hinge loss and the zero-one loss, but this is orthogonal: for neural networks, we can also look at the hinge loss or the square loss or any of the other losses. And finally, how do we compute the best predictor? This is a question of what is the optimization algorithm, and for this, we introduced gradient descent, which is this lovely, simple, and very effective algorithm for optimization. So that concludes this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_learning_3_Linear_Classification_Stanford_CS221_Autumn_2021.txt
Hi. This module is about linear classification. We're going to go through linear classification via a simple example, just like we did for linear regression. So as before, we have training data, which consists of a set of examples. And each example is now going to be an input, x1 and x2, followed by a label y. So we have three examples here. The input 0, 2 has output 1. Minus 2, 0 has output 1. And 1, minus 1 has output minus 1. So we can visualize these points-- just the input part-- on a 2D diagram, where I'm plotting x1 by x2. So here is 0, 2, and I'm coloring it orange to denote that it's a positive point. This is minus 2, 0. That's also orange because it's positive. And here is 1, minus 1, which is blue because it's labeled as negative. So given these points, we want to design a learning algorithm that can output a predictor-- in classification, it's known as a classifier. And this classifier can take new inputs, crank them through, and produce an output label. And this is demonstrated on the plot as follows. So the classifier in classification is going to be represented by a decision boundary. The decision boundary carves up the space into a region where the points are labeled positive and a region where the points are labeled negative. So 2, 0 is going to be predicted as a minus 1 in this case. OK, as before, we have three design decisions we need to settle. First, which classifiers are possible? This is a question of the hypothesis class we're going to consider. Are the decision boundaries going to be straight, or can they be curved? Second, how good is a classifier? This is a question of the loss function. And third, how do we compute the best classifier, a.k.a. the classifier with the lowest loss? That's going to be a question of the optimization algorithm. So before we begin talking about the design space of the hypothesis class, I want to focus on an example of a linear classifier here. So we have f of x equals-- well, first I'm going to define this weight vector, w, to be minus 0.6, 0.6, OK? And I'm going to take the dot product with a feature vector, which is going to be just the identity feature vector, mapping to x1, x2. Remember, x is now a list of two numbers. And then, I'm going to take this dot product, and I'm going to take the sign. And remember, the sign of a scalar is equal to plus 1 if that scalar is positive, minus 1 if it's negative, and 0 if it is 0, OK? So let's see what this classifier does on some points. So each point is x1, x2. So let's look at 0, 2, OK? So let's look at where 0, 2 is on the plot. 0, 2 is right here. And I'm going to represent it by this vector here. And now, this vector is phi of x. w is going to be this vector here. That's the weight vector. And the sign of the dot product, remembering from linear algebra, is determined by the cosine of this angle. In particular, the dot product is positive if and only if this angle is acute, and it's negative if the angle is obtuse. So in this case, it is acute. So therefore, this point is going to be classified as positive. So let's take another point, minus 2, 0. Minus 2, 0 is here. And this angle is also acute. So therefore, this point is also labeled as positive. And the third point is 1, minus 1. So 1, minus 1 is over here. And now, this angle between the red and the blue is obtuse. Therefore, the sign is negative. So you can kind of understand how a classifier behaves geometrically. But you can also do this symbolically by following the math.
So if you plug in our first point, 0, 2, the dot product is 1.2. You take the sign and you get 1. If you take the second point, the dot product is also 1.2, so the sign is also 1. And if you take the third point, the dot product is minus 1.2, and the sign of minus 1.2 is minus 1. OK, so you can kind of see the pattern now. So any point over here that forms an acute angle with this weight vector-- minus 0.6, 0.6-- is going to be labeled as positive. And anything that forms an obtuse angle with this weight vector is going to be labeled as negative. And the decision boundary is exactly those points that are perpendicular to the weight vector. And indeed, you can see that this is a right angle here. These are the points for which the classifier just doesn't know if it's positive or negative. OK, so that was one particular classifier-- that was this one. But we can imagine other ones. We can imagine this purple classifier, which has weights 0.5 and 1. And that corresponds to this point here. So that is 0.5, 1. And remember, the decision boundary is the thing that is perpendicular, or normal, to the weight vector, and in 2D, it's given by this line. So this purple classifier will classify all of these points plus and all of these points minus. In general, the binary classifier f sub w at a particular input x is equal to-- you take the dot product, and then you take the sign of that dot product. And the hypothesis class, as before, is just simply the set of all possible classifiers obtained by ranging the weights over any two real numbers. So that's the hypothesis class. Now, let's go on to the second design decision. What is a good loss function, OK? So let's take our purple classifier and some training data, and we're going to evaluate how good this classifier is on this training data, OK? So let's go through the training data. So here's the classifier. And the first point is 0, 2, and this was labeled as plus 1, OK? So that is this point over here. And the classifier predicts it correctly: it's a positive label, and the classifier also thinks that it's positive. So therefore, we expect low loss. Whereas this point over here-- minus 2, 0-- is labeled as positive, but it's on the other side of the decision boundary. And therefore, it's classified incorrectly. And this point-- 1, minus 1-- is over here. It's labeled in the training data as a minus, and it's on this side of the decision boundary, so it's predicted as minus. Therefore, it is classified correctly. So to formalize this, we're going to define something called the zero-one loss. And just like any loss function, it takes in a particular example and a weight vector. And it looks at the prediction and the target and says, do they disagree? If they disagree, then this indicator function will return 1. And if they agree, then the indicator function returns 0. So this is the zero-one loss. So mathematically, you can walk through these calculations. You plug in the first point and you look at the dot product, which is going to be 2. The sign of 2 is 1. The prediction and the target don't disagree, so the loss is 0. On the second point, they do disagree, so the loss is 1. And on the third point, they also don't disagree, so the loss is 0. And as before, the training loss over the entire training set of examples is just simply the average over the per-example losses. And in this case, it's 1/3.
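Here's a minimal numpy sketch that checks these zero-one loss calculations (the names are mine, not from the lecture demo):

import numpy as np

trainExamples = [((0, 2), 1), ((-2, 0), 1), ((1, -1), -1)]
w = np.array([0.5, 1])  # the purple classifier

def phi(x):
    return np.array(x)  # identity feature map

def zeroOneLoss(w, x, y):
    # 1 exactly when the predicted sign and the target label disagree.
    return 1 if np.sign(w.dot(phi(x))) != y else 0

losses = [zeroOneLoss(w, x, y) for x, y in trainExamples]
print(losses)           # [0, 1, 0]
print(np.mean(losses))  # 0.333..., the training loss of 1/3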
So before we move on to the design decision of how to optimize the loss function, let's spend some time understanding two important concepts so we can rewrite the zero-one loss in a slightly different way. So recall that the predicted label on a particular input is the sign of the dot product, and the target label is y, OK? So the score is something that we've seen before. The score on an example is simply this expression, the dot product inside the sign. And while the sign is just 1 or minus 1, the score is a real number, which intuitively represents how confident we are in predicting plus 1. So points over here have large dot products with the purple weight vector and have a high score. Ones on the decision boundary have 0 score. Ones over here have very negative scores. The second concept is that of the margin, which takes into account the target label. So the margin on an example is simply the score times the correct target label, and this measures how correct we are. Notice that you can be confident but not correct-- important life lesson. So if y is plus 1, then the margin is going to be high when the score is hugely positive. And if y is minus 1, then the margin is going to be high when the score is hugely negative. OK, so with these two definitions in mind, we can now look at the zero-one loss again. Remember that the zero-one loss is the indicator of whether the prediction and target disagree. But now we can represent it in terms of the margin. So this is the expression: it's basically when the margin is less than or equal to 0. So remember, a more positive margin means that we're classifying correctly, and a negative margin means that we're classifying incorrectly. And we can visualize this as follows. So here I'm plotting the margin against the loss. And if the margin is positive, greater than 0, the loss is 0. And if the margin is less than or equal to 0, then the loss is 1. OK, so that is the zero-one loss expressed in terms of the margin. OK, so now, let's move on to the third design decision and optimize the training loss. We want to find the weight vector that minimizes this expression, which is the average of the individual losses. And let's just use gradient descent as we did before. To do that, we have to compute the gradients. So the gradient of the training loss is equal to the sum over the gradients of the individual losses. You look at the individual losses and take the gradient. And now, you have to take the gradient of this indicator function, and that's where things go wrong. So if you remember what the loss looks like, it looks like this step function. And what's the gradient of this function? Well, it's 0 almost everywhere. It's 0, 0, 0, 0, 0. And then, there is this discontinuity where it's undefined. And then, 0, 0, 0, 0. So remember what gradient descent is trying to do. It computes a gradient, and then it takes a step based on it. And if the gradient is 0, gradient descent just gets stuck and can't go anywhere. So gradient descent will not work on the zero-one loss. So one kind of technical note: if someone asks you why you can't do gradient descent on the zero-one loss, your initial reaction might be because it's not differentiable. And that is true, it's not differentiable. But it's only not differentiable at one point. The real reason is that the gradient is 0 everywhere else, and with a 0 gradient, you just can't make any progress. So how do you fix this? There are a few things you can do, but one example is what is called the hinge loss.
So pictorially, the hinge loss is just another loss function, the one in green here. And I'm plotting it on this margin-versus-loss plot. The zero-one loss looks like this, and the hinge loss looks like that. So it's the maximum of two lines: one is this descending line, and one is this flat line at 0. OK, so formally, what is this? So the hinge loss is equal to the max over two things. The first is 1 minus the margin-- this complicated expression is just the margin-- and the second is 0, with the two arguments to the max corresponding to these two regions of the hinge loss. OK, so let's interpret this a little bit. So if the margin is greater than or equal to 1, then the hinge loss is 0. But once the margin starts dipping below 1, then the hinge loss starts growing linearly with the margin violation. Now why is there a 1 here and not a 0? Well, this is because we ask the classifier to predict not only correctly, but by a positive margin of safety. And just an aside: this 1 could really be 2, or 3, or any number, as long as it's positive, and its magnitude effectively determines the regularization strength if you're using regularizers. Don't worry if you didn't get that. OK, so also notice that the hinge loss is an upper bound on the zero-one loss. So this is cool, because suppose you optimize the hinge loss and you drive it down, drive it down. This thing is going to start pushing down on the zero-one loss as well. And in particular, if you get a hinge loss of 0, then what is the zero-one loss? Well, it's also going to be 0. So that's a nice fact. So here's a minor digression. There are a lot of other loss functions. Here is the logistic loss, and we can just plot it on this diagram. You see that the logistic loss doesn't have this kink in it. It has a smooth transition from something that's growing linearly to something that fades away to 0. And the key property of the logistic loss is that even if you were out here-- so if you have a margin of 2, then you're classifying correctly, and the hinge loss would say you get 0 loss and you don't need to do anything-- the logistic loss is greedy. It says, well, you still have a little bit of a loss. And if you try to minimize the logistic loss, you're going to try to keep on pushing this margin as far out as possible. So the logistic loss is differentiable everywhere, and smooth, and it's nice. And minimizing it is typically known as logistic regression because it has connections to probability. OK, so let's now go back to the hinge loss. Here is our friend, the hinge loss. And here is the expression for the hinge loss. And remember, it's the maximum of two expressions-- this decreasing line part and then the 0 part, in orange and blue, respectively. OK, so now if we want to apply gradient descent to the hinge loss, we have to take the gradient. So how do we take the gradient? So the gradient of the hinge loss is equal to-- and now, we have this max thing, OK, which might be a little bit scary. But if you look up here, we can just do this kind of visually. So what is the slope here? Well, the slope here is whatever the slope of the orange part is. And what is the slope here? It's the slope of this blue part. And so now, we just have to switch between the two cases. So in particular, if 1 minus the margin-- this orange part-- is greater than 0, that means we're in this region. And then, the gradient is just going to be the gradient of this expression: 1 is a constant, we're differentiating with respect to w, and phi of x times y is a constant vector, so the gradient is just minus phi of x times y. And then if this condition doesn't hold-- otherwise-- that means we're in this region. And what is the gradient of 0? Well, that's the world's easiest differential calculus problem. It's 0.
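Here's a minimal numpy sketch of this loss and its gradient, evaluated on the training data from this module; it previews the worked example that comes next (the variable names are mine):

import numpy as np

trainExamples = [((0, 2), 1), ((-2, 0), 1), ((1, -1), -1)]
w = np.array([0.5, 1])  # the purple classifier

def phi(x):
    return np.array(x)

def hingeLoss(w, x, y):
    return max(1 - w.dot(phi(x)) * y, 0)

def gradientHingeLoss(w, x, y):
    # -phi(x) * y when the margin is less than 1, and 0 otherwise.
    if 1 - w.dot(phi(x)) * y > 0:
        return -phi(x) * y
    return np.zeros(2)

losses = [hingeLoss(w, x, y) for x, y in trainExamples]
print(losses)           # [0, 2.0, 0.5]
print(np.mean(losses))  # about 0.83, the training loss

gradients = [gradientHingeLoss(w, x, y) for x, y in trainExamples]
print(np.mean(gradients, axis=0))  # about (1, -0.33)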
OK, so this is the gradient of the hinge loss. And just to kind of sanity check things: if you pick up an example and it's on this side over here, then the gradient is going to be 0, and you're not going to update your weights. On the other hand, if you are over here, then the gradient will be non-zero. In particular, it's going to be minus phi of x times y. OK, so now, let's put things together and revisit our example. So here's the purple classifier over here. And here, we have some training data. And we're going to compute the hinge loss on this training data along with its gradient, OK? So remember, the hinge loss is this expression. So let's look at the first point, 0, 2. 0, 2 is here, and it's labeled as a positive. So if you go and plug that point into the hinge loss, then you get a max over 1 minus the margin and 0. So what is the margin here? Well, it's this dot product times the label, which happens to be 2. So we have 1 minus 2, which is minus 1. And the max of minus 1 and 0 is 0. And that agrees with our intuition that the loss here should be 0, because this point is correctly classified, and correctly classified by a margin of 2. And now, let's look at the second point-- minus 2, 0. If we compute the loss here, we see that the loss is 2. And that makes sense, because we misclassified this point. Now if we look at the third point here-- 1, minus 1-- the loss on this example is 0.5. So notice that even though we're classifying this point correctly, we're still incurring a loss, because the margin was only 0.5 and didn't meet the threshold of 1. So now we can also compute the gradients here. So the gradient on the first point is 0, because the loss is 0. And generally-- not always-- if the loss is 0, then the gradient will be 0 as well. On the second one, the loss is not 0, so we have a non-zero gradient, which is minus phi of x times y-- it's this part times this minus sign. And the third point also has positive loss, so it has a non-zero gradient, 1, minus 1. Now, we can compute the training loss, which is the average over the losses. That gives us about 0.83. And the gradient of the training loss is just the average of the gradients, and that gives us 1, minus 0.33. OK, so let us now move on and concretize this in Python. So let's remember: last time, we coded up gradient descent for linear regression. So now, I'm going to just copy that, call it gradientDescentHinge, and do it for the hinge loss. OK, so I'm going to use this as a starting point, and I'm going to just change a few things. Let's change the training examples, because now we're working with this training data. So, just to keep track of things, these are x, y pairs. The first example is the input 0, 2 with label 1. Then the second point is minus 2, 0, and the third point is 1, minus 1. And so we have three examples-- x, y, where x is a pair of numbers. OK, so phi is just going to be x, and the dimension of the weight vector is still 2. And now, the key thing we have to do is change the definition of the loss. So let us see. So before, we had an average over a sum here. Instead of the square loss, I'm going to make this the hinge loss.
So the hinge loss is the max over 1 minus the margin and 0. And the gradient of that is going to be-- so let's actually just copy this down, and let's delete this so we do not confuse ourselves. So remember, if this first expression is greater than 0, then the gradient is minus phi of x times y-- if we're on that side of the curve. And otherwise, it's just going to be 0. OK, so that's it. We just changed the training examples and changed the definition of the loss function. And the optimization algorithm, we don't actually have to change at all, OK? So let's run this and see what we get. So here's gradient descent. It starts out with w equals 0, and then it starts moving w to minus 0.5, 5. You see that the train loss is decreasing nicely. And actually, in this case, it gets to 0, which means that-- remember, the hinge loss is an upper bound on the zero-one loss-- the zero-one loss is also 0. And the gradient also vanishes and becomes 0, meaning that we converged. OK, so just to recap, all we did here was change the training examples and the featurizer, and redefine the loss. And it's great that we didn't have to touch the optimization algorithm, because this was meant to be a generic piece of code. All right, so let us summarize. And in particular, I'm going to contrast regression with classification, since we've seen both of them so far. So the key quantity that drives the prediction in both cases is the score, the dot product between the weight vector and the feature vector. In regression, the prediction is exactly the raw score, while in classification, you stick it through the sign function, so you get 1 or minus 1. How is the prediction related to the target? Well, in regression, we looked at the residual, which was the score minus y, and in classification, we're looking at the margin. So in regression, a low residual is good, and in classification, a high margin is good, because we want the score and y to have the same sign. Using those quantities, we can define loss functions. So in regression, we looked at the square loss, but as I mentioned briefly, you can also do the absolute deviation loss. In classification, the story becomes a little bit stranger, because we generally care about the zero-one loss-- that's our misclassification rate-- but we can't optimize it, so we have to come up with a surrogate loss function, like the hinge loss, which we went into in depth, and the logistic loss, which we briefly mentioned. And given the loss functions, in both cases we use the gradient descent algorithm to optimize them. And that's it. That concludes the unit on linear classification. Thanks for listening.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_1_Overview_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to talk about Bayesian networks, a new modeling paradigm. So we have talked about two types of variable-based models. The first was constraint satisfaction problems, where the objective is to find the maximum weight assignment, given a factor graph. Then we talked about Markov networks, where we used factor graphs to define a joint probability distribution over assignments, and we were computing marginal probabilities. Now I'm going to talk about Bayesian networks, where we still define a distribution over a set of random variables using a factor graph, but now the factors are going to have special meaning. So Bayesian networks were developed by Judea Pearl in the mid 1980s, and really have evolved into the more general notion of generative modeling that we see today in machine learning. So quickly, before diving into Bayesian networks, it's helpful to compare and contrast with Markov networks. So both are going to define a probability distribution over assignments to a set of random variables. But the way that each approaches this is very different. So if you're defining a Markov network, you tend to think in terms of specifying a set of preferences, and you throw the factors encoding these preferences into the Markov network. So for example, last time we just threw in the transition factor and observation factor for the object tracking example. The Bayesian network is going to require a more coordinated approach. So in a Bayesian network, the factors are going to be local conditional distributions, as we'll see later. And we really think about a generative process by which each of these variables is set, based on other variables, in turn. So there are many applications of Bayesian networks, and more generally, generative models. I'll just go through a couple of them here. So the first one is topic modeling, where the goal is to discover hidden structure in a large collection of documents. An example of topic modeling is Latent Dirichlet Allocation, or LDA. And LDA posits that each document is generated by drawing a mixture of topics and then generating the words, given those topics. Another interesting example is this idea of vision as inverse graphics. So, much of computer vision today is taking images and processing them in some way to generate semantic descriptions, such as object categories or scene descriptions. Vision as inverse graphics takes a very different approach, where we specify, using laws of physics, a graphics engine that can generate an image, given some semantic description-- for example, a 3D model of an object. And then, given this model, computer vision is just inverse graphics, where we're trying to recover the semantic description using the image as input. So this is an example of inference on this generative model. So while this idea hasn't really been able to be scaled past some limited examples, it's, I think, a very tantalizing idea nonetheless. So switching gears a little bit, let's talk about communication networks. So in communication networks, nodes must send messages-- just sequences of bits-- to each other. But these bits can get corrupted along the way, due to physics. So the idea behind error-correcting codes-- in particular, these things called low-density parity-check codes-- is that the sender sends random parity checks on the data bits. And then the receiver obtains a noisy version of both the data and the parity bits. The Bayesian network defines how the original bits are related to the noisy bits.
And then the receiver can use Bayesian inference to recover the original bits. So this is actually a very effective idea that's used in practice. The final example is either controversial or a little bit grim, as I'll explain. So this is the problem of DNA matching. There are two use cases of this; one is in forensics. So, given the DNA found at a crime site, even if the suspect's DNA is not in the database, one can still match this DNA against the family members of the suspect. And here, the Bayesian network is structured along the family tree and specifies the relationship between family members' DNA, using Mendelian inheritance. So now, while this technology has actually been used to solve a number of crime cases, there are definitely a lot of tricky ethical concerns about this expanded DNA matching, especially when an individual's decision to release their own DNA can impact the privacy of family members. The second use case is in disaster victim identification-- so after a big airplane crash or some other disaster; for example, the Malaysia Airlines crash in Ukraine in 2014. Victims' DNA is found at the crash site and is matched against family members, using the same mechanism as I just described, to help identify the victims. And these methods are very scalable, which allows them to deal with these unfortunately large crash sites. So why Bayesian networks? Well, these days, it's kind of hard not to think about problems exclusively through the lens of standard supervised learning-- just train a deep neural network on a pile of data. Bayesian networks really operate in a very different paradigm, which offers several advantages that I want to underscore here. The first is that they can handle heterogeneously missing information. Normally, when you're doing standard supervised learning, your data is fairly homogeneous: you have input and output pairs, both at training and test time. But in cases where you have missing information, or where you have auxiliary information, Bayesian networks can gracefully handle this messiness in a way that's a little bit more challenging for traditional supervised methods. The second is that Bayesian networks allow you to incorporate prior knowledge much more easily. So when you have it-- for example, you understand how Mendelian inheritance works on DNA, or you understand the laws of physics-- Bayesian networks provide a nice language for incorporating this information into your model. And using this model, you can actually learn from very few samples and extrapolate beyond the training distribution. In contrast, many model-agnostic, low-inductive-bias methods, such as deep neural networks, require much more data to be effective. Because you're specifying prior knowledge, you can also interpret the variables inside the Bayesian network. This could be useful for understanding why a model is making a certain decision, and you can introspect and ask questions about any of the intermediate variables-- this just follows from the laws of probability. Finally, Bayesian networks are an important precursor to causal models. These are beyond the scope of this course, but they are extremely important, especially these days. They allow you to answer questions about interventions-- for example, what would happen if we give this drug to this patient?-- and counterfactuals-- what would have happened if we had given this drug?
These questions are extremely tricky and deep, and standard machine learning-- or any method that views the world just through the lens of prediction-- is really inadequate to answer them. So we're not going to talk about them in this course, but I highly encourage you to explore this topic on your own. So finally, Bayesian networks obviously aren't a panacea. In many situations-- often in the canonical AI applications, such as vision, speech, and language-- we actually have large data sets, we mostly care about prediction, and it's extremely hard to incorporate prior knowledge into your models in these very complex domains. So in these cases, Bayesian networks haven't been as successful and have largely been supplanted by deep learning approaches. But still, having Bayesian networks in your toolkit will allow you to use them effectively when you discover the right problem. So in the remaining modules on Bayesian networks, I will first introduce Bayesian networks more formally. Then I'll talk about probabilistic programming, which is a way to define Bayesian networks using probabilistic programs. This is a really cool way to think about modeling. Then we'll turn to inference. I'll talk about what inference means: computing conditional and marginal probabilities. We're actually going to reduce probabilistic inference in Bayesian networks to probabilistic inference in Markov networks, allowing us to leverage the machinery from when we talked about Markov networks. Then we're going to specialize to hidden Markov models, HMMs, an important special case of Bayesian networks. We're going to show that the forward-backward algorithm can leverage the chain structure of an HMM, allowing you to do exact probabilistic inference efficiently. Then we're going to talk about particle filtering, which allows you to do approximate inference and scale up to HMMs where variables have larger domains. Finally, we're going to talk about learning in Bayesian networks. We're going to start with supervised learning, where all the variables are observed. And this actually turns out to be quite easy-- you'll be pleasantly surprised. Then we're going to show you how to guard against overfitting, using Laplace smoothing. And finally, we're going to turn to cases where not all the variables are observed, and we'll introduce the EM algorithm that will help us learn in such Bayesian networks. OK, so let's jump in.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Constraint_Satisfaction_Problems_CSPs_3_Examples_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to show you how you can take some real-world problems and model them as constraint satisfaction problems. So we'll begin with our first example, the LSAT. The LSAT is the standardized test for admission into law school, and it features these logic puzzles. So here's one example of a logic puzzle. Imagine you have three sculptures, A, B, and C, that are to be exhibited in two rooms, 1 or 2, of an art gallery. And the exhibition has imposed a certain number of conditions on you. So sculptures A and B cannot be in the same room. Sculptures B and C must be in the same room. And room 2 can hold only one sculpture. So how do you model this as a constraint satisfaction problem? Let's do it via this JavaScript demo. I'll erase that and start over. So the first thing you want to do when you model is figure out what the variables are. So looking back here, we want to put the three sculptures in rooms. So let's just define a variable for each of these sculptures. So in this JavaScript demo, I'm going to define a variable A, and the domain of A is either 1 or 2, depending on what room sculpture A should be placed in. And I hit Step, and I get this variable, and I can mouse over and see the domain of that variable. OK, so now I can do the same for the other two sculptures, B and C. And you'll see that now I have three variables-- A, B, and C, each of which can take on values 1 or 2. So now let me define the factors. So I'm going to define a factor for each of these three conditions. Usually each condition corresponds to a factor, but as we'll see later, that's not always the case. So the first condition says that sculptures A and B cannot be in the same room. So this naturally is a factor that touches variables A and B. So I'm going to call that factor AB. Its scope is variables A and B. And remember, a factor is a function that takes an assignment to the variables in its scope-- A and B in this case-- and returns a non-negative number. In this case, I want it to be the case that A and B are not in the same room, so I'm going to return A not equal to B. OK, so if I hit Enter, that is going to give me this factor. And I can check its table: 1, 2 is good, and 2, 1 is also good, but 1, 1 and 2, 2 are not good. So now I'm going to move on to the second condition. Sculptures B and C must be in the same room. So this is similar, but now applied to B and C. And they have to be in the same room, so I'm just going to return B equals C. And I'm going to check that this factor does what I want it to do. So it's happy with 1, 1 and 2, 2, which is good. And now what about the final condition? So room 2 can hold only one sculpture. This one's a little bit tricky because it doesn't mention sculptures exactly. It mentions only the room. But here, what it really means is that I have to look at all the sculpture variables. So I'm going to define a factor-- let's call it r2-- which depends on all of the variables here. And I'm going to need to figure out whether room 2 has at most one sculpture. So let's keep a counter. And I'm going to go through all the sculptures and see if sculpture A is in room 2. If it is, I'm going to increment the counter. If sculpture B is in room 2, I'm going to increment the counter. If sculpture C is in room 2, I'm going to increment the counter. And now I'm going to return whether the number of sculptures in room 2, which is now n, is at most 1. So I make that factor, and I can see that this factor is happy if at most one sculpture-- one or zero-- is in room 2, OK?
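If you want to replay this example outside the JavaScript demo, here is a minimal Python sketch that brute-forces the same factor graph (the factor names mirror the ones above):

import itertools

# Each sculpture is assigned to room 1 or 2.
for A, B, C in itertools.product([1, 2], repeat=3):
    fAB = (A != B)                  # A and B cannot be in the same room
    fBC = (B == C)                  # B and C must be in the same room
    r2 = ([A, B, C].count(2) <= 1)  # room 2 holds at most one sculpture
    if fAB and fBC and r2:
        print(A, B, C)  # prints the one satisfying assignment: 2 1 1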
And I can see that this factor is happy if at most one sculpture-- one or zero-- is in room 2, OK? So now I have defined my constraint satisfaction problem, or factor graph: a set of variables and a set of factors. And now if I press Step, then it will magically solve the CSP. And here there is one satisfying assignment, which assigns A to room 2, and assigns B and C to room 1, OK? So that's our first example of solving a constraint satisfaction problem. Here is another example from object tracking. So suppose you're trying to build an autonomous driving system. You want to track where objects are, such as cars and pedestrians, so you know where not to drive. So we're going to work with a very simplified setup here. And here, the setting is we're going to have a number of discrete time steps-- 0, 1, 2, 3, 4. And at each time step, we're going to have a sensor observation that tells us a noisy indicator of the position of a particular object. So maybe at time step 1, I'm going to observe that the object was at 0. And at time step 2, I get an observation of 2. And at time step 3, I'm going to get an observation of 2. So the noisy sensors report these positions-- 0, 2, 2. And we know that objects can't teleport. So the question is, what trajectory did the object take? Did it do something like this, and actually the sensor readings are correct? Or maybe it did something like that, or something completely different. So how do we do this? We're going to set up an object tracking CSP. So let's first define a factor graph. The variables of the factor graph are going to include the position of the object at each time step-- 1, 2, or 3; there are three time steps. And the domain of each variable is 0, 1, or 2. So the object could be in position 0, 1, or 2. So Xi represents the true position of the object at time step i. So now we're going to define a bunch of observation factors. And these are going to attempt to incorporate the sensor information into the problem. So remember at time step 1, we observed that the object was at 0. Of course, this is noisy, so we don't want to trust it completely. We're going to define an observation factor, o1, that captures this. So o1 is going to be a unary factor. It depends only on x1. And it's going to highly favor assigning x1 to 0, which is the actual observation. If x1 is at 1, which is a neighboring location, that's going to have a weight of 1. And if the object is too far away, at 2, then I'm going to say that's disallowed. So whenever you see a 0 weight-- a factor returning 0-- that's saying that's a veto. OK, so o2 is similar, but applied to x2. x2 is, remember, the position of the object at time step 2, and it's going to favor x2 being 2, and degrade if it's 1 away, and forbid it if it's 2 away. o3 is also similar, but applied to x3, which is the object position at time step 3. It's going to favor x3 equal to 2, but also going to degrade and forbid if it's too far away. OK, so we have three observation factors that capture the sensor readings. And now we're going to define transition factors, which represent the fact that objects' positions can't change too much, or in other words, objects can't teleport. So here, we're going to write this factor a little bit differently. It's going to be a little bit more compact. So we're going to look at the absolute difference between the object's position at time step i and its position at the next time step, i plus 1. And if the object hasn't moved, which means that the difference is 0, I'm going to assign a weight of 2.
If it's moved by 1, then I'm going to assign a weight of 1. And if it's moved by 2, I'm going to assign a weight of 0, which is disallowing it, OK? So this concludes the definition of the constraint satisfaction problem for this simple object tracking example. And if I click on the demo, I can see what the CSP looks like in JavaScript code. I have defined three variables-- x1, x2, x3. I'm going to define this helper function nearby that returns 2 if A and B are equal, 1 if they're 1 apart, and 0 if they are 2 apart. And then I'm going to define these factors-- o1, o2, o3, and t1 and t2. So if I solve this CSP, this will return the set of non-zero weight assignments. And I'll see the maximum weight assignment is 1, 2, 2. So this is a solution to the CSP. It's assigning x1 to 1, x2 to 2, and x3 to 2. Looking at this picture, it's 1, 2, 2. So we think that the object probably took this path. OK, so that's the end of this example. So now let's look at a third example-- event scheduling. So CSPs are really well suited for scheduling problems generally. So here is an example of a simple scheduling problem. So you have a set of events that need to be assigned to a number of time slots. So the events are numbered 1 through e, and the time slots are numbered 1 through t. So we have three conditions here. The first condition is that each event must be put in exactly one time slot. Condition 2 says that each time slot can have at most one event. So you can't double-book two events into one time slot. And then condition 3 says that event e is allowed in time slot t only if this pair exists in a set of allowed pairs. So I can visualize this set A as a set of edges between the events and the time slots. And here is one possible assignment. I assign event 1 to time slot 2, assign event 2 to time slot 1, and assign event 3 to time slot 3. Notice that I can't assign event 2 to time slot 2 because that would violate C3. There's no edge between event 2 and time slot 2. OK, so how are we going to model this as a CSP? I'm actually going to show you not one but two possible formulations of the CSP, which goes to show that there is some flexibility, or you can say artistic license, in terms of how you decide to formulate problems as CSPs. OK, so the first formulation is going to be looking at it from the events perspective. So here, for each event e I'm going to define a variable Xe. And the domain of Xe is going to be some integer 1 through T. So notice here that right off the bat I've satisfied condition C1. Because in a CSP, every variable has to take on exactly one value. And so that means that each event will be put in exactly one time slot. So what about C2? Now I have to do something for C2. Notice that C2 is in terms of time slots, but our variables are in terms of events. So if you remember from the LSAT puzzle, that means we implicitly have to define a factor that looks at all possible variables here. So I'm going to define a constraint on every pair of events. I'm going to make sure that the time slot that event e was assigned is not the same as the time slot that event e prime was assigned. So if I check this for all pairs of events, now I've satisfied C2. I can guarantee that no time slot has two events piling on to it. OK, so now what about C3? So each event must be only allowed in certain time slots. So here, again, I'm going to look at each possible event.
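Before finishing condition C3 of the scheduling problem, here is a hedged Python sketch of the object tracking CSP from a moment ago (again, my own translation of the JavaScript demo; I'm assuming the observation factors use the same 2/1/0 nearby weights as the transition factors):

from itertools import product

observations = [0, 2, 2]

def nearby(a, b):  # 2 if equal, 1 if one apart, 0 if two apart
    return {0: 2, 1: 1}.get(abs(a - b), 0)

best, best_weight = None, 0
for x1, x2, x3 in product([0, 1, 2], repeat=3):
    w = (nearby(x1, observations[0]) * nearby(x2, observations[1]) *
         nearby(x3, observations[2]) *        # observation factors o1, o2, o3
         nearby(x1, x2) * nearby(x2, x3))     # transition factors t1, t2
    if w > best_weight:
        best, best_weight = (x1, x2, x3), w
print(best)  # (1, 2, 2), the maximum weight assignment

Now, back to condition C3.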
And I'm simply going to enforce that whatever time slot event e was assigned-- that's denoted Xe-- that pair is in the set of allowed event time slot pairs. And that's enough to satisfy condition three. OK, so that's the first formulation of the CSP. So now let's look at alternative formulation. So now I'm going to look at from a perspective of time slots. So here I'm going to define a variable, Yt for every possible time slot t. And Yt can take on a value which is either one of the possible events, or null, which means that no events have been assigned to that time slot. So notice here right off the bat I've satisfied condition 2 because, remember, every variable gets assigned exactly one value, which is either going to be event or no event. So you can't possibly assign two events to a time slot. Now we have to deal with condition 1. So how do we deal with it? So here all our variables are in terms of time slots, but condition 1 is in terms of events. So, again, we're going to have to define a constraint that touches all the variables. So for every event here I need to enforce that. If I look over all the time slots, that that event shows up exactly once. So what this is saying is that this factor looks at all Y1 through Yt, and checks that Yt equals-- Y little t-- equals e for exactly one of the possible ts. So this will check the box for C1. And now C3 is similar to before. So for every time slot, we're going to enforce that either nothing was scheduled at that time slot, or if something were scheduled, that that event and that time slot are compatible. OK, so that concludes the definition of the second formulation. And now one might wonder which one is better. And this is a matter of efficiency, and there's various trade-offs which are discussed more in the notes. OK, so here is a final example of a CSP, which is going to be a little bit different. And so it will be kind of interesting. So this is program verification. So everyone writes programs, and you're probably used to the idea of writing unit tests to check whether a program is correct. But just because your program passed a bunch of tests doesn't actually guarantee that it's correct, because you're never sure that you covered all the cases. So the idea behind program verification is to prove that your program works for all possible inputs. So let's work through a simple example. Suppose you have this program, foo, which takes in two values, x and y, and it computes the following. So it's going to assign x times x to a. It's going to add y times y to a, and then assign that to b. And then it's going to subtract this quantity and assign it to c and return c. So the thing I want to prove here is this following specification that c is greater or equal to 0 no matter what value x and y take. So here is how I'm going to specify the CSP. I'm going to define a set of variables that corresponds to both the inputs and also the intermediate quantities that are computed along the way. So x, y, a, b, and c. And now I'm going to define a set of constraints corresponding to the program statements which are going to relate these variables. And so for the first constraint, I'm going to have a equals x squared, which captures what this first statement is doing. I'm going to have b equals a-plus y squared, which is going to capture the second program statement. And c equals b minus 2xy, which is going to capture the third program statement. So an important but really subtle note is that equals means two things here. 
So in the Python program, equals is an assignment operator. It says take the right-hand side, compute its value, and then put it in the variable that is on the left-hand side. Whereas in the CSP, equals represents mathematical equality. It's saying whether the left-hand side is equal to the right-hand side. So remember-- don't be deceived by the looks of this factor-- it is actually a function that takes in a value of a and a value of x, and checks whether a equals x squared. It returns a 1 or a 0. So it's doing checking. Whereas a equals x times x is doing assignment. It's taking x squared and putting it into a. So now there's a final constraint for this specification. And this is also kind of interesting. Note that we wanted to check that c is greater than or equal to 0 for all x and y. But we're going to negate that here. Because CSPs are only looking for the existence of a particular assignment. CSPs can't natively check all possible assignments in a sense. So we're going to negate it. So intuitively, what this is doing is looking for a counterexample. It's going to say, hey, can we find a setting of x, y, a, b, and c such that c is less than 0? And if we can, that means the specification doesn't hold. There's a counterexample. But if we're not able to find any consistent assignment, if the CSP is not satisfiable, that means the program satisfies the specification. So it's maybe a little bit counterintuitive at first. But we're proving correctness based on the fact that the CSP has no satisfying assignments. So one thing that's really kind of cool and interesting about formulating the program as a CSP, and the fact that this mathematical equality is bi-directional, is that the CSP can actually reason in no particular order. It can start with this constraint, see c less than 0, and work backwards. It can look backwards through c, b, and a. Or it can look forwards, starting with x and y. Or it can do it in kind of a more sophisticated order. Whereas, if you were only to execute the program, you can only go forwards. So this shows you the flexibility and power of reasoning over programs using a constraint satisfaction problem. OK, so we've presented a number of examples of real world problems and shown you how to formulate them as a CSP or two. So how do you do it? Well, the first step is to decide on the variables and the domains. And you want to check that an assignment to all these variables gives you the result of interest. And then we take a look at all the desiderata, the constraints and the preferences, the wishes, and translate them into a set of factors. And the nice thing about CSPs is that this process is often parallelizable. So if you have a set of desiderata, usually each desideratum translates into a factor or a set of factors. And then at the end of the day, you just throw all the factors into your CSP. So there are some notes to keep in mind when you're designing constraint satisfaction problems. You should keep the CSP small so that it will be more computationally efficient to solve, which means either having fewer variables, fewer factors, smaller domains, or smaller arities. You can't make everything small, and there are various trade-offs. What exactly is the recipe for computational efficiency really depends on the problem. There's no kind of general rule. So this is going to be a little bit of an art here.
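To make the program verification idea concrete, here is a hedged sketch that checks the specification on a small finite domain by searching for a counterexample (a real verifier would use something like an SMT solver over all integers; the ranges here are just for illustration). Note how every equals below is a check, not an assignment:

from itertools import product

xy_range = range(-5, 6)       # domain for the inputs x and y
abc_range = range(-10, 101)   # large enough to cover a, b, c for these inputs

counterexamples = [
    (x, y, a, b, c)
    for x, y in product(xy_range, repeat=2)
    for a in abc_range if a == x * x           # factor: checks a == x^2
    for b in abc_range if b == a + y * y       # factor: checks b == a + y^2
    for c in abc_range if c == b - 2 * x * y   # factor: checks c == b - 2xy
    if c < 0                                   # negated specification
]
print(counterexamples)  # [] -- no satisfying assignment, so c >= 0 holds here

The list comes back empty because c = (x - y)^2 is always non-negative, which is exactly the unsatisfiability-means-correctness argument above.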
And finally, one kind of reminder is that when you think about implementing each factor, it is true that each factor is itself a little mini program. But you should really think of it in terms of checking a solution-- checking whether an assignment to the variables that that factor encompasses is valid-- rather than trying to compute the solution. So equals is mathematical equality, rather than assignment. And this is really important, and it takes a little bit of getting used to. Because CSPs require a fundamentally different mindset than normal kind of procedural programming, which is most salient in the program verification example. But hopefully, after a bit of practice you'll get used to thinking in terms of CSPs, and hopefully it will become second nature. All right. So that's the end of this module.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_4_Probabilistic_Inference_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to talk about the general strategy for performing probabilistic inference in Bayesian networks. So recall that a Bayesian network consists of a set of random variables, for example cold, allergies, cough, and itchy eyes. And then the Bayesian network defines a directed acyclic graph over these random variables that captures the qualitative dependencies between the variables. For example, cough is caused by cold or allergies. Itchy eyes is caused by allergies alone. Quantitatively, the Bayesian network specifies a set of local conditional distributions of each variable xi given its parents. And so in this example, I would have probability of c times probability of a times probability of h given c and a times probability of i given a. And then when I multiply all of these probabilities together, then I get by definition the joint probability distribution over all the variables. In this case, I have a joint distribution over C, A, H, and I. So you can think about the Bayesian network as defining this joint distribution, which is a probabilistic database where you can answer questions about this data. For example, what is the probability of C given H equals 1 and I equals 1? Generally, you have a Bayesian network. Some of the variables you observe as evidence, for example H and I in this case. And another set of variables you are interested in, which are the query variables, so Q would be C here. And what we want to produce is the probability of the query variables conditioned on the evidence. Formally, this is the probability of Q equals q for each of the values of little q. So the overarching strategy that we're going to take for performing inference in Bayesian networks is to convert them into Markov networks, which we discussed inference for. So we're going to walk through this example. So recall that the joint distribution over the variables here is equal to simply the product of the local conditional distributions, by definition of the Bayesian network, OK? But these local conditional distributions are non-negative quantities. So they can be interpreted as factors in a factor graph. So let's draw the factor graph. So here we have the same set of variables. For every variable, we have a factor corresponding to its local conditional distribution. We have probability of c, probability of a, probability of h given c and a, which connects C, A, and H, and then probability of i given a. So in the factor graph representation, these are simply functions. This is a function that depends on c and h. And the factor graph doesn't really care that it's a local conditional distribution. So now remember, in a Markov network, we take a factor graph and we multiply all the factors together. And we divide by the normalization constant to get this product to sum to 1. But notice that in this case the normalization constant is exactly 1. Because we had this equality from the definition of the Bayesian network. So Z has to be 1 in this case. So the Bayesian network is just a Markov network with normalization constant 1. And that means we can take any Bayesian network and reinterpret it as a Markov network and answer all sorts of marginal queries. For example, you can ask for the probability of A. We can ask for the probability of H, and so on. But I'll just remind you that a single factor connects the child and all of its parents. So notice that there are two edges, C to H and A to H, here. But in the factor graph representation, you should connect the parents and the child into one factor.
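Here is a hedged Python sketch of that reinterpretation (the conditional probability tables are made up for illustration; the lecture doesn't give numbers). Each local conditional distribution becomes a factor, and multiplying them gives a joint that already sums to 1, so Z = 1:

from itertools import product

p_c = {0: 0.8, 1: 0.2}                 # p(c), made-up numbers
p_a = {0: 0.7, 1: 0.3}                 # p(a), made-up numbers

def p_h(h, c, a):                      # p(h | c, a), made-up numbers
    p1 = 0.1 + 0.5 * c + 0.3 * a
    return p1 if h == 1 else 1 - p1

def p_i(i, a):                         # p(i | a), made-up numbers
    p1 = 0.1 + 0.7 * a
    return p1 if i == 1 else 1 - p1

def joint(c, a, h, i):                 # product of the four factors
    return p_c[c] * p_a[a] * p_h(h, c, a) * p_i(i, a)

Z = sum(joint(*v) for v in product([0, 1], repeat=4))
print(Z)                               # 1.0 (up to floating point): Z is exactly 1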
So there's only one thing missing from this picture, which is that often in Bayesian networks you want to condition on evidence. So let's condition on H and I. To do this, we're going to define a Markov network over the non-conditioned variables. So in this case, that's going to be C equals c, A equals a, conditioned on H equals 1 and I equals 1. And what we're going to do is just substitute the values of the evidence into the factors themselves. So here is the factor graph. I have only C and A left. And p of c and p of a are the same. And now we have this factor that depends on C and A, but h is equal to 1. So I don't need to represent h as a variable. And here the same. I equals 1. So I don't need to represent i as a variable. But now I take these four factors. And I multiply them all together. And I get this factor graph. And now I need to normalize by 1 over Z. It's a different Z now. In this case, Z is not 1, because I'm conditioning on evidence. And in particular, Z is going to be the probability of the evidence. And you can see this, because this is a conditional distribution. And a conditional distribution is equal to the joint distribution divided by the marginal of the thing that you're conditioning on. So Z has to be equal to the marginal of the evidence. But, nonetheless, this is a Markov network. And now, again, we can run any inference algorithm we'd like over this Markov network, for example, Gibbs sampling. Well, let me actually do that in this little demo here. So here is the medical diagnosis example. We define variables C, A, H, and I. We're going to condition on H equals 1 and I equals 1. And we're interested in the marginal probability of C. And we're going to run Gibbs sampling. So Gibbs sampling, remember, is going to take an arbitrary factor graph or Markov network. And it's going to go through an assignment and reassign each variable one at a time. And it's going to accumulate these counts. But let me speed this up a little bit and say I do 1,000 steps at a time. And now you can see that these counts converge to the right answer, the probability of C conditioned on H and I, which is about 0.13. So then we're kind of done. We have a Bayesian network. We condition on evidence. We form this reduced factor graph or Markov network. And then we just run Gibbs sampling. So in some sense, we are done. But I want to push this a little bit further and show how we can leverage the structure of Bayesian networks to optimize things. So let's take another example where we're now conditioning on H. OK, so we're conditioning on H. So let's go through the motions here. We're going to define a Markov network on the variables that we didn't condition on, conditioned on H equals 1. And that's going to be equal to just the product of all the local conditional distributions where we've substituted H equals 1. And now, the normalization constant is the probability of the evidence. And now, I can ask the question, what is the probability of C equals 1 given H equals 1? This is something that I can just go and compute using Gibbs sampling. But the question is, can we reduce the Markov network before running inference? Because if we can get the Markov network to be a little bit smaller, then hopefully inference can be a bit faster. So the answer is yes. And we're going to show this by doing a little bit of algebra here. So here is this Bayesian network again where I've conditioned on H.
So now, let me compute the marginal distribution where I marginalize out I. So here, I don't have I anymore. But I can express this in terms of the probability of C, A, and I given H, where I simply sum out all possible values of I. So this is just the definition of marginal probability. So now, using the definition of the Bayesian network, I can rewrite the joint distribution in terms of local conditional distributions. OK, and now, I make an observation, which is that we're summing over i, but none of this actually depends on I except for this last factor. So what I can do is push all of this stuff out, or equivalently push the summation inside. So now, it's wrapped tightly around this p of i given a. Now, what is this? By definition of local conditional distributions, this is exactly 1, so it gets dropped. So now, I have this nicer form. But not only is it smaller, let's try to understand what it is. This is the probability of c, probability of a, probability of h equals 1 given c and a. So it's as if this variable I didn't exist at all. So this is a general idea behind Bayesian networks, which is that you can throw away any unobserved leaves before running inference. So this is very powerful because it connects marginalization over variables, which is generally an algebraic operation that involves a lot of hard work, with removal, which is a graph operation that is more intuitive, and trivial in this case. So in general, marginalization is hard. But when there are unobserved leaves of a Bayesian network, it is trivial-- you just remove them. So here is another type of structure we can exploit, which is actually not specific to Bayesian networks. It shows up more generally in Markov networks. Let's take another example here. We're going to condition on I this time. So here, we're going to define this Markov network, where-- let's just write down the query that we're interested in. We're interested in C equals c given I equals 1 here. And expanding it out based on the definition of marginal probability, I can put in the probability of C, A, and H, where I sum over all possible values of A and H. So I'm marginalizing out A and H here. And by definition of the Bayesian network, I can replace this with the local conditional distributions. And now, using the same trick as before, I notice that H is an unobserved leaf, so I can actually marginalize out H. And this factor disappears graphically. This H disappears. And now, I am left with this Bayesian network, where notice that the only thing that depends on c is this p of c. We're going to pull it out and rewrite it as follows. And now, I have p of c times some mess. And the nice thing in this case is that this mess is just a constant because it doesn't depend on c. And moreover, because p of c is a distribution and this left-hand side is a distribution, this constant is actually 1. So graphically, C and this A-I subgraph are actually disconnected, which means that I can simply remove this part. So generally, I can throw away any disconnected components before running inference, OK? So in general, let's summarize here. We've tackled the problem of how to perform probabilistic inference in Bayesian networks by reducing the problem to inference in a Markov network. So to prepare the Markov network, we're going to first condition on the evidence. So this is tantamount to substituting the values of the evidence into the factors. Then we throw away any unobserved leaves, in this case, H.
We throw away any disconnected components. These two steps are just optimizations, which are totally optional, but they will often save you some work. Then we define a Markov network over the remaining factors. Now we just have a factor graph, where we can run your favorite inference algorithm. If what remains is very simple, as is the case here, you can just do it manually. Or if what's remaining is more complicated, then you can do something like Gibbs sampling. And that's the end-- though below is one last hedged sketch of the whole pipeline.
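This is a minimal end-to-end version of the C, A conditioned on H = 1, I = 1 example in Python (the unnormalized weights reuse the made-up tables from the earlier sketch, so the answer is illustrative, not the demo's 0.13):

import random

# weights w(c, a) = p(c) p(a) p(h=1 | c, a) p(i=1 | a), evidence substituted in
w = {(0, 0): 0.8 * 0.7 * 0.1 * 0.1, (0, 1): 0.8 * 0.3 * 0.4 * 0.8,
     (1, 0): 0.2 * 0.7 * 0.6 * 0.1, (1, 1): 0.2 * 0.3 * 0.9 * 0.8}

random.seed(0)
c, a, counts = 0, 0, [0, 0]
for _ in range(100_000):
    pc1 = w[(1, a)] / (w[(0, a)] + w[(1, a)])  # resample C given A
    c = int(random.random() < pc1)
    pa1 = w[(c, 1)] / (w[(c, 0)] + w[(c, 1)])  # resample A given C
    a = int(random.random() < pa1)
    counts[c] += 1
print(counts[1] / sum(counts))  # Gibbs estimate of p(C = 1 | H = 1, I = 1)

The estimate should land near the exact answer you get by normalizing the four weights directly (about 0.39 with these made-up numbers).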
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
General_Conclusion_Stanford_CS221_AI_Autumn_2021.txt
Welcome, everyone, to the final lecture. Let me just share my screen, and we can get going on this. Share screen. OK, so this lecture is going to be broken into three parts. First, I'm going to do a quick recap of the class. Then I'm going to talk about future classes that you might take. Hopefully, this class has piqued your interest in AI. And then finally, I'm going to end with some broad remarks on where AI is going and what we should all keep in mind. So since it's a live lecture, feel free to interrupt and ask questions. I'll monitor the chat. if you notice anything, you can flag it to me as well. All right, so let's begin with a recap. So congratulations to making it so far in the quarter. We've covered a lot of ground. So I'm just going to highlight some of the key things that you should try to keep in mind. So recall-- we started with this modeling inference learning paradigm. So modeling is the what? It's about how you build a mathematical model that approximates the real world. It might be a neural network, might be a Bayesian network. Inference is the process of how you use the mathematical model to answer questions. It's trivial for neural networks, but can be really hard for Bayesian networks. And learning is how do you take data and produce a model so that you can do inference on it. So in this course, we talked about machine learning, then reflex based models, state based models, variable based models, and logic. So let me just go through each of them in turn. So in machine learning, we presented the loss minimization framework where you have a training set and you want to find parameters that minimize on loss. And one thing I want to stress is how general of a principle this is. The loss captures basically kind of what you want a classifier to have, and we explored a few different types of losses depending on the task. And then we had a fairly simple algorithm, stochastic gradient descent that was able to approximately optimize these objective functions. And this is really kind of the workhorse of machine learning. And there's these two slides-- most of machine learning can be captured at least these days by writing down a loss function and optimizing it. And it works for neural networks, it works for clustering problems and kind of k-means and so on. So I want to underscore that machine learning is kind of a general way of being. It's an idea of taking data and turning it into models. But there's multiple types of models. So we looked at reflex based models in the very beginning. Linear models, neural networks, nearest neighbors. Inferences just feed forward pass through the neural network. And learning, we use stochastic gradient descent or k-means in the case of clustering. Then we looked at problems where you weren't interested in just a single decision, but you were interested in a sequence of decision, let's say, from getting to point A to point B. And we embarked on a journey of state based models and here, the idea of a state is a summary of all the past actions sufficient to choose future actions optimally. And that crisply encapsulates what kind of a state based model is and you have lots of practice coming up with state based models for various problems. If they're deterministic, those are called search problems and can use uniform cost search or A star. If you have randomness, and you model uses Markov decision processes, you can use things like value iteration for inference. 
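Since value iteration just came up, here is a hedged five-line reminder of what it looks like on a made-up two-state MDP (the states, actions, transitions, and rewards are all invented; this is just to recall the flavor of inference in state based models):

# T[(state, action)] = list of (next_state, probability, reward)
T = {("in", "stay"): [("in", 0.9, 1.0), ("end", 0.1, 0.0)],
     ("in", "quit"): [("end", 1.0, 4.0)]}
gamma = 0.9
V = {"in": 0.0, "end": 0.0}
for _ in range(100):  # repeatedly back up the optimal value of the "in" state
    V["in"] = max(sum(p * (r + gamma * V[s2]) for s2, p, r in T[("in", a)])
                  for a in ("stay", "quit"))
print(V["in"])  # converges to about 4.74, so staying beats quitting for 4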
And for games, these capture cases where there's an adversary and you have to use a minimax formulation. For search problems, we didn't really touch on learning, although you can do that. And for MDPs and games, we have these reinforcement learning algorithms. So really, you can think about reinforcement learning as machine learning for state based models where there's randomness in the environment. Then we moved on to variable based models, which is a higher level of abstraction. It's kind of a different modeling language if you will. The key idea here is a factor graph, which captures a set of variables whose values you want to determine. And the factors, these little squares, capture dependencies between variables. And the key thing is that the factors are generally local, but the questions you want to answer are global. If they're deterministic, we have constraint satisfaction problems for things like scheduling. We looked at backtracking and beam search. If you put on your probability hat, then we can turn these factor graphs into Markov networks by defining a distribution over all the random variables. For inference, we looked at Gibbs sampling, but there are other methods as well. And then, to give more of an interpretation to how the factors are constructed, we looked at Bayesian networks, where each of the factors is a local conditional probability. And then we looked at forward-backward and particle filtering methods for these chain-structured Bayesian networks. There's much more to be said here. This is just kind of a taste of variable based models. For learning, we only looked at learning for Bayesian networks, based on the maximum likelihood principle. But you can apply maximum likelihood to any probabilistic model, including Markov networks. For Bayesian networks, it was really nice because learning is closed form: you just count and normalize. For latent variable models, you have the expectation maximization algorithm, where you have to use inference to impute the missing variables and then you count and normalize. And finally, we looked at logic based models. And here, the idea of logic is that it goes one level of abstraction higher. It introduces these things called formulas, which allow you to represent more kind of powerful things, even kind of infinite things. You can talk about all the even numbers, for example, which is an infinite set. We looked at two models-- propositional logic and first order logic. Inference generally is pretty hard. For propositional logic you can do model checking or you can work on the inference rules directly, which is one of the nice things about having logical rules. Modus ponens and resolution are different inference methods. So sometimes, I like to say that logic is about how you can express kind of very complicated things very succinctly. And learning, we didn't really get a chance to talk about, but there are ways to also bring machine learning to logic. So hopefully, you can now see all of CS221. If you're seeing a lot of this material for the first time, it can be a little bit overwhelming. There's just so many models, tools, methods. But I hope that this kind of organization gives you a way to think about how everything fits together. But I don't want you to think about, oh, OK, there's nearest neighbors and there's Bayesian networks, how are they related?
I want you guys to think about the trajectory of-- there's a bunch of models with different kind of-- which bucket into reflex state and variable and logic. And then you can do learning on top of that. And that allows you to have just a much more nuanced and holistic picture of all the methods in AI. And what's important, I think, is that the individual methods, like whether you use particle filtering, will change over time. And in general, in applications, you might have to use something a bit more sophisticated. So hopefully, this class has imparted on you just kind of a way of thinking about the modeling and the inference and learning as separate so that whenever you encounter a new algorithm that you read in a paper somewhere, you can actually incorporate it into your conceptual map. OK, that's all I want to say about the recap. Are there any questions? Can you talk some on what should be the first tool that we should be using if we are presented with a problem? What's the tool that we should try first? What is a tool that you should try first? It really depends on the problem. I think these days, it's very natural and easy to throw machine learning, supervised classification at the problem. And that makes sense when your problem involves basically a single action, it's high dimensional, you don't really know what to do with it. But for many problems, like if you're kind of scheduling or doing route planning or something a bit more structured, you wouldn't want to necessarily start with machine learning because in sort of start with machine learning, you need to gather data. And if you don't have data, then that might not be the best place to start. So I don't think that there is any one place to start. And hopefully, these-- you can think about the CS221 tool box as like kind of the first layer in a breadth first search. These are the different options, you should think about hmm, is machine learning good or should this be a search problem or should this be a Bayesian network, for example. OK. Thank you. Thank you. Most of the current machine learning are reflex based. They're a low level intelligence compared to logic. Interesting point. We are in a very interesting time where a lot of what we see as machine learning. And it's also very impressive how a lot of the so-called reflex based models are actually capable of doing some fairly sophisticated things. If you think about AlphaGo, yes, there was Monte Carlo tree search that allowed you to actually build a competitive agent. But even just like classifying a game board, that could definitely beat me at a goal. So I think that in cognitive science, people talk about system one and system two, right? And both kind of coexist. System one is kind of the reflexive agent, making kind of guesses at what should be the right thing to do. And system two is kind of the more rational well thought out and reasoned path. And I think we need both and the two need to kind of coexist and kind of feed off of each other. Could you give some examples of ML methods for search problems and logic problems? So a lot of things can be cast as search problems. So there's a whole field called structure prediction which our goal is to output a structure, let's say a graph or a sentence or something. And in those cases, you often want to learn how to do that. So there, you actually combine some search techniques. So the inference algorithm becomes search rather than just a feed forward through a neural network. But the learning part is still the same. 
And we didn't talk about the structure perceptron, I think. But just look at it. I think it's in the slide still where you are able to make a prediction using an inference algorithm, you get a prediction, you compare that with the correct prediction, and you do a gradient update. And then logic-- there's similar things you can do. For example, Markov logic is a way of using, combining kind of Markov logic in the context of Markov networks. And Markov networks, you can estimate using maximum likelihood. All right. Yeah, great questions. Feel free to put more in the chat. But I'm going to move on for now since there's a few other things to get through. OK, so now you've taken CS221. Maybe this was your first class. Maybe you have taken a bunch of other classes. I want to talk about what else is related to CS221. So first off, I'm not going to give you a whole list of-- the complete list of courses. You can see the list of AI courses on this website. But that isn't even the whole list of courses which I think are relevant. So what I've done here is try to help us understand what are the types of courses that you might be interested in and why you might be interested in them. And then I'm going to go through each category and give a few examples of the most kind of popular ones. So the most obvious type of course, is-- well, here we've taken 221. We've learned about some methods. Let's learn more advanced methods. Let's learn about Markov chain Monte Carlo in general, for example. And these tend to be more general purpose. But that's not the only type of course that I think is relevant. So applications are extremely useful. Of course, we are interested in applications because that's kind of the real impact of AI is when they're applied to things. But also, in the other direction, often when you take an applied class, you learn the method much better than you if you take a kind of an abstract because you appreciate when it works, when it doesn't work, all the nuances. And then finally, I would really stress and kind of invest in building depth for both kind of methods and applications. And usually, these are courses not in AI. They might-- and for a method side, maybe investigating kind of more mathematical foundations or in the applied area, if you're interested in computational biology, take a biology class. I think these days, it's too I think easy to kind of go through kind of an AI curriculum and not really have as much depth because you can do a lot with kind of just downloading packages and running data. But I think if you really wanted to-- especially if you think kind about research-- having the depth can distinguish you from, and make you more able to come up with kind of new insights and ideas. So let's start with methods. So these are categorized into maybe the different topic areas that we've covered in the class. So first is machine learning. Everyone probably knows about CS229. So that's kind of the standard poster child machine learning class. Compared to CS221, I think there's this question comes up a lot. It's more kind of mathematical derivations rather than as much programming. There's more continuous variables. CS221 tries to shield you from that and deal with discrete variables just for the interests of kind of scoping. And you learn some kind of fancier things like kernel methods and PCA. So if you really want to dig in more into machine learning, that's the class for you. 
If you are looking for just a kind of more of how do I apply machine learning, especially deep learning, which has been increasingly important, CS230 will tell you how to train these deep neural networks, which has a lot of bells and whistles and things that you need to know about drop out and batch norm to make these things work. So if you're really interested in the general practice of how you get deep learning to work, that's the class for you. So there's three other classes that I want to mention. Of course, there's more. So the first class is machine learning under distribution of shifts. So we have mentioned a few times that machine learning fails when the training distribution isn't the same as the test distribution. For example, there's adversarial examples, and this class is aimed at telling you what's going on there and what you can do about it. Often, you think about machine learning as kind of one task at a time. But increasingly now, we're seeing much more general learning tools that allow you to generalize across multiple tasks. So that's pretty exciting. And finally, we think about machine learning often on kind of single data points which are less structured. But machine learning can be done in the context of graphs. So there's a class about that as well. Then there's reinforcement learning. If you like the reinforcement learning section and want to know more advanced methods, take CS234. You get to learn about policy search whereas we've looked at kind of more Q learning, which is estimating a value function. Then there's decision making under uncertainty from the aero-astro department which focuses more on model based, if you remember the distinction between model based and model free. So in kind of more serious applications, you really want to have maybe a model of what's going on in the world. And you do things like planning rather than just being a kind of a reflex agent. Generative models-- if you enjoyed Bayesian networks and Markov networks, this is the class for you. CS228 Probabilistic Graphical Models. It's kind of a fairly kind of natural extension of the things that we've talked about-- fancier inference algorithms, how you learn the structure and so on. In the last, I guess, five years, there's been a surge of interest in generative models which are supercharged with deep learning. Probably, many people have seen GANs generating really photorealistic images. This is all enabled by deep generative models which builds on the principles of generative models, but you kind of combine it with deep learning and you get really interesting results. So let's talk a little bit about applications. I'm only going to talk about three applications-- vision, language, and robotics. Of course, there's computation biology, there's health care, and there's other things which I'm not going to have time to mention. So vision-- there's a kind of a stock-- I mean, the canonical vision class is, I guess at this point, is CS231N. It's fairly machine learning heavy. You talk about learn about convnets and Transformers. So it's more general purpose than vision. But you talk about some vision specific tasks like detection and segmentation, generation. There is CS231A which is more kind of on the vision side. So if you feel like you already know your ML, but you really want to learn more about kind of vision, this might be a good class for you because vision ultimately is about how light works in kind of a 3D world. And so you kind of get into that. 
There's also I think a newish class on how AI intersects with graphics, which is kind of a close cousin of vision. And this has some emphasis on kind of generating things like generating animation, but also a much more in depth emphasis on kind of rendering and geometry. OK, so robotics-- there's Introduction to Robotics where you learn about how-- you explore physical models of robotics, how to kind of move arms and how to relate kind of joint angles to actually what the robot does in the real world. CS237 has a little bit more learning involved because for more complex robotics tasks, you can't really do everything from first principles. So there's some learning involved, but you still need to look at kind of the structure of the robotics problems. Language-- there's a few language classes. CS224N is kind of the standard language class. It is also ML heavy just like CS231N. It talks about a bunch of different language tasks like parsing and translation. CS224U is called natural language understanding. People ask like what's the difference between processing and understanding. Historically, there used to be a bigger difference. But now with deep learning, I think these two classes have much more overlapped. You can look at the topics that are kind of slightly different, maybe more emphasis on I don't know, semantics. There's a class on applications of virtual assistants. And next quarter, I'm actually going to be teaching Understanding and Developing Large Language Models. So you might have heard me talk about foundation models or GPT-3 or things like that. Beyond just the technical aspects of how these models work and how they're built are a lot of kind of social, ethical, and legal considerations. So we're going to talk about some of those things as well as giving you hands on experience, giving access to these large language models so you can kind of feel them and kind of even train some of them yourself. So it should be an exciting and interesting class. OK, so the third category is foundations. There's many types of foundations. These are more mathematical foundations. So convex optimization-- it's a great class to really kind of understand optimization. So most machine learning people these days think run SGD and that's good. And for many things, that's fine. For a kind of more, quote unquote, sloppy optimization, that's fine. There are cases where you do want to optimize your utility function and you need to do something more serious. So optimization-- and also, this class is actually-- I took this similar class in grad school. And that's really when I started kind of understanding linear algebra. So I think even if you're not interested in optimization, it gives you familiarity with thinking about kind of linear algebra. Statistical inference-- so there's a whole host of statistics classes, which is important to kind of think about. So machine learning and statistics have clearly a lot of overlap, but they kind of have different emphasis. Statistics focuses more on kind of scientific discovery, machine learning more engineering. So some of the questions you might ask are different. You care about like hypothesis testing and confidence intervals and the validity of your inferences because you don't always have just like a held out test set that you could-- or validation set that you can measure performance against like you have in engineering. 
So if you're thinking about more kind of scientific applications, I think a bit of rigorous statistical thinking would be healthy. And there's a class-- if you ever wonder why does it all work? Why does machine learning, deep learning-- why is it so effective? You can take machine learning theory. And it talks quite in-depth about fairly technical probabilistic tools like uniform convergence that help you explain, or partially explain, the success of machine learning. Although, it won't fully answer the question of why things work. There's a lot left to be understood. But it hopefully will give you at least a little bit of a taste of like, oh, OK, now I understand it's not just kind of all heuristic. There are some kind of statistical principles behind what we're doing. Cognitive science and neuroscience are kind of other areas that feed into AI. Cognitive science you can think about as the software-- we're thinking about the human mind. So this class talks about using probabilistic programs-- remember, from the Bayesian networks kind of modules-- to model human reasoning. So this is very kind of interesting. And then you can look at neuroscience, which has to do maybe more with the hardware. I mean, this is a theoretical neuroscience class. So it's not actually going to be real hardware so to speak. But you ask questions like: back propagation, which is the bread and butter of deep learning-- actually, the brain can't implement that because it's not a local kind of rule. So people have been interested in these questions, like what is kind of a neurally-plausible approximation that kind of explains it? So there's a pretty interesting open question there. OK. To summarize, so here are the types of classes: methods. So this is kind of going straight ahead in some sense. You learn about more advanced techniques, general purpose, all good. But I would really encourage you to also think about applications of AI, especially things that really interest you and that you're passionate about. And again, they really help you understand and appreciate the methods that you're learning. And do invest some time in building depth. And there's a lot of classes outside AI at Stanford. So definitely explore and don't limit yourself just to kind of AI classes. So just some general tips beyond taking classes-- there are a lot of resources online, talks, tutorials, blog posts. It's information rich and you can learn a lot from just watching things online if that mode of learning works for you. Some people prefer downloading code and tinkering. A lot of stuff, thankfully, is still open source and people release their code and tutorials. And just talk to professors and other students about not just what classes to take but how they think about AI in the world, because a lot of learning is not written down in some sort of formulaic textbook. The field is moving so fast that I think sometimes, it's just in the heads of a few people. All right, so that's the end of the second section. I'll take any questions now. So is it OK to take 230 without first taking 229? I believe the answer is yes. If anyone has taken these, feel free to chime in. I think 229, you-- I mean, especially if you've taken 221, that should be more than enough to take 230. 229 really gets you kind of-- you derive a lot of different learning algorithms and think about mixtures of Gaussians and so on, which aren't needed if you're just interested in applying deep learning. Any other questions?
What will be the best way to talk to professors and other students? That is a good question. I guess Ed is probably not going to be super-- I mean, it's probably going to go dead after the course. I guess email is always an option. I mean, the best time to talk to a professor and other students is during the quarter when they're holding office hours and everything. But maybe after the course, some professors still have office hours. Is it OK to take 224N without previous experience in deep learning? The short answer is yes, because, again, conditioned on you having taken 221, you have kind of the basics. 224N starts with some of the basics of deep learning. So you can get by. I think if you can take deep learning, I think that's better. I mean, there's always the thing where the more pre-reqs you take, the more time you'll be able to spend kind of actually enjoying the language aspect rather than, let's say, the deep learning aspect. Will CS224N be offered online later? It's going to be offered in person in the winter. In the future, it's definitely a possibility. I haven't thought that far in advance. Depending on how much interest there is, I guess. What are the classes that would be offered remote? For that, you have to check the-- so I don't know which ones are remote versus in-person. I think by default, everything is going to attempt to be in person. All right. So I think that's a lull in the questions. So let me move on to the third part. OK, so now we get to kind of step back and think about where AI as a field is going. So if you think about where we are today in AI, think of AlphaGo as kind of the quintessential image that captures the progress and the optimism that we're feeling today-- a kind of very bold effort that surprised a lot of people, experts of Go and AI, and kind of really-- it was a kind of triumph of sorts for AI and machine learning and deep learning. And you kind of see this optimism-- and boldness-- kind of continued with things like GPT-3, which came out last year. OpenAI released this large language model, 175 billion parameters, trained on a bunch of text, and orders of magnitude larger than the previous model. And the cost is something like $4 or $5 billion-- sorry, $4 or $5 million. Billion would be a lot. And one thing that's interesting is that it's just a language model. So remember, a language model is just something that takes a context and predicts the next word. So you might think this is the world's most boring task. Like why would you want to just predict the next word? But it turns out that if you do this at scale, you can do all sorts of other things. You can get it to convert natural language into SQL queries, or you can have it do question answering in a dialogue format. It doesn't do any of these particularly well. But the mere fact that you now have a single model that wasn't trained for these tasks doing anything sensible is impressive. The question isn't how well the bear dances. It's that the bear is dancing at all, in some ways. And this has led to a whole era of large models which are really improving the accuracy across the board on mostly kind of language tasks for now, but you see it in vision as well.
And it's kind of this optimism and progress that's really leading to AI being deployed across a countless number of different areas, from all the consumer services I can think about, Facebook or Google, but also in other types of areas as well, although obviously to a lesser extent because they don't have as much kind of AI expertise as the tech giants. And it's also being applied in many areas like education or employment, which really starts to kind of affect people, right? That said, some of these AI systems are logistic regression, not GPT-3. Many of them are actually closer to logistic regression than GPT-3. But nonetheless, this whole umbrella of using data-driven methods to automate certain types of decision making is a general trend that encompasses many different regimes. So now, what I want to spend the last kind of lecture reflecting on is: what is the societal impact of this trend, having spent a whole quarter talking about the technology? So I just want to use a simple example. Machine translation-- many of you have probably used it. And it's one application where the quality has just improved significantly due to advances in AI, which is great. It can help break down language barriers, increase accessibility, improve the kind of productivity of the economy, and so on. And so this is generally positive. But there's always the flip side of things-- and while these systems are ubiquitous, they have problems. So for example, Hungarian is a language that doesn't distinguish between female and male third person pronouns. So when you translate into English, it has to guess what the gender of a pronoun is. And you can see that it patterns very stereotypically, along kind of professional stereotypes. And these kinds of biases are sometimes-- I mean, if you don't bother to think about it, they maybe don't seem to raise alarm. But I think these are actually kind of the frog getting boiled alive kind of setting, where it starts kind of creeping insidiously throughout society and it gets amplified. So it's something really to be careful of. There's also weirder stuff. So there's this one example from a few years ago where in Maori, which is I guess a language that doesn't have that much data, you type in some nonsense, dog, dog, dog, and you get some really disturbing stuff coming out. And no one really knows why this is happening. I think many of these issues are due to the fact that machine learning thrives on these complex models fitting spurious correlations in data. So it's like we're kind of pushing the limits of what we can do. And that's the kind of outlook I think the field has had for quite some time. I mean, you have to remember that even 10 years ago, computer vision basically didn't work. And so people were really, for decades, trying to get things to work at all. And now that things work, well, now we have other things to worry about. So I want to highlight something called spurious correlations, which I think is a cautionary tale. So here's a task. It's a pretty solid task. You take an X-ray image of a chest and you're trying to predict whether there's a collapsed lung or not. And if you apply standard computer vision machinery, this works pretty well. But take a closer look at this image. See that tube coming out here? That's called a chest drain. And that's something that's-- it's a common treatment for collapsed lungs, OK?
And it turns out that this is one of the signals that the model is picking up on. So it reasons: hey, this person was treated for a collapsed lung, therefore he has a collapsed lung. OK, so if you look at the accuracies, the AUC-- here is the entire population, here are the people who have chest drains-- you're predicting the people with chest drains much more accurately than the people without chest drains. So it might seem like, oh, OK, we're doing pretty well. But actually, for the segment of the population that doesn't have chest drains, you're not doing so well. And this is exactly the subpopulation of untreated patients that you actually care about, because if they already have a chest drain, you don't need a prediction of whether they have a collapsed lung or not. So this is a cautionary tale that you really need to not just look at the accuracy, but really understand how the model is actually making its predictions, because if it's just latching on to spurious correlations and you go and deploy it, it might not be so good. Here's another example. Suppose you're trying to figure out the effect of a treatment on the survival of patients. So here, maybe you did some study, and here's the data. For untreated patients, 80% survive. And for treated patients, 30% survive. So the question is, does the treatment help? So how many people think it helps? So maybe raise your hand if you think it doesn't help, or just put something into the chat. That's fine too. I'm trying to make this a little bit interactive and getting people to think. Doesn't help. That's possible. Unclear whether it helps. Yeah. Who knows? That's right. If you're very naive about it, you might think, oh, OK, well, survival is correlated with not treating. But-- exactly. Sick people are more likely to undergo treatment. So there's a hidden confounder here, which is how sick you are. So this data alone doesn't tell you anything (the short numeric sketch at the end of this passage makes the point concrete). So if you're just doing machine learning naively, you could really be doing completely the wrong thing. And there's this whole field of causal inference which provides rigorous machinery to help you answer these kinds of questions. And especially in these high-stakes medical settings where there's maybe a lack of data and a lack of ground truth, you really, really need to be careful. For machine translation, you can get a human to look at the sentence and say, yeah, seems reasonable. And that's a more typical engineering attitude: I try it, and I can always validate whether it works. When you can validate whether it works, yeah, maybe it's OK to use something that you don't fully understand. But when there's no validation, I think you have to lean much more on first principles. OK. So the cautionary tale is: always be aware of the limitations of a technology, and machine learning definitely has a lot of limitations. And I think it's really important that you don't walk away from this class thinking, oh, yes, machine learning-- I know how to do SGD, I get a data set and I can just go with it. You have to be aware of the limitations. All right, so in the second part of this module, I'm going to talk about AI ethics. So many of you have probably heard the term AI ethics thrown around. It's often in the news. There's a lot of heat around this term-- people are not being ethical, and what's going on here.
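Going back to the treatment example for a moment, here is a tiny numeric illustration of how a hidden confounder can flip the conclusion. The per-group counts below are made up for illustration (chosen only so the aggregates match the 80%/30% figures above); they are not data from any real study.

```python
# Hypothetical (survived, total) counts, stratified by how sick patients are.
# Aggregates match the lecture's numbers: untreated 80% survive, treated 30%.
untreated = {"mildly_sick": (78, 90), "severely_sick": (2, 10)}
treated   = {"mildly_sick": (9, 10),  "severely_sick": (21, 90)}

def rate(survived, total):
    return survived / total

for name, groups in [("untreated", untreated), ("treated", treated)]:
    s = sum(g[0] for g in groups.values())
    t = sum(g[1] for g in groups.values())
    print(name, "overall survival:", rate(s, t))  # 0.80 vs 0.30

# Yet within each severity level, the treated patients do better:
for level in ["mildly_sick", "severely_sick"]:
    print(level,
          "untreated:", round(rate(*untreated[level]), 3),   # 0.867, 0.2
          "treated:",   round(rate(*treated[level]), 3))     # 0.9,   0.233
```

Because the severely sick are far more likely to be treated, the aggregate comparison points the wrong way; stratifying by the confounder reverses it.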
At the broadest level, it's about how we ensure that AI is developed to benefit society and not harm society, OK? So that sounds easy-- well, not easy, but uncontroversial, right? And there are a lot of principles, and people have written a lot about this. So I'm not an ethicist, so I can't speak to this in great depth. But starting with the Belmont Report from 1979 on human subject research, there's the ACM code of ethics, and all of these companies are now putting out responsible AI principles and so on. It seems like there are a lot of guidelines, which is good. Often, these things say things like: respect persons, don't do harm to people. And you look at it and you say, OK, well, yeah. I don't want to harm people. But the real question is how these high-level principles relate to the concrete actions you take. Because a lot of these ethical issues aren't about any malice or misguided intent. It really has to do with ignorance. If you're not aware of something, then bad things can happen even without any ill intent, OK? So what I'm going to do is walk through a few specific considerations under this umbrella of AI ethics, which hopefully gives you a bit more concrete guidance. So the considerations are: data; what objectives you optimize; inequality, which we've talked about before; the idea of harmful applications, for which I'm going to survey people, so get ready for that; and then automation versus augmentation. All right, so moving on. So data-- AI is largely powered by machine learning. And without data, there's no machine learning. So we must naturally ask the question: what is this data that we're talking about? So here is an example. There's a data set called TinyImages, 80 million images, which had been used since 2006 in the computer vision community. And it was actually taken down because it was found to have various kinds of offensive content in it. Even ImageNet had some of these objectionable issues, and it was cleaned up afterwards. So a lot of the time, AI systems are relying on web-scraped data. And we know the web is sometimes not a pretty place. And if you're just scraping data and not really carefully looking at it, you can inherit a lot of this offensive material. Second of all, there are historical biases inherent in data-- social biases around race and gender. Even when the data isn't offensive, the lack of representation of certain marginalized populations is itself a problem. So you have two types of problems: you can represent people badly, or you can fail to represent them. And both are things to worry about. So there's another thing that people don't normally think about when it comes to data, which is: should a piece of data-- let's say I go on a vacation and I take a picture of my dog and I post it on Flickr. And then some big tech company scrapes it and does some pre-training and uses it for scene classification. Is this good? I mean, should it be allowed, right? And right now, we're pretty laissez-faire about this, where internet scrapes are the norm and there's no consent. I mean, a lot of things are copyrighted, and I'm sure there are tons of potential copyright violations. So this begs the question-- data is produced by people doing certain activities, right?
I post an article, I write a book, I send messages to people. And machine learning is something that sits on top and siphons that data, usually for another purpose. And the question is what right I have to say, no, that should be allowed or not allowed. And often, this goes on without users even being aware of what's happening. So another aspect of data that is important is how much work it takes to produce it. So often, we think about technology and machine learning methods because, well, from a computer science perspective, that's the object of study. But more and more, I think it's important to be aware that data is what's powering all of these things, right? You think of AI as reducing human labor and making things more efficient and so on. But it's not free, and it requires resources. So there's this excellent book by Mary Gray and Siddharth Suri called Ghost Work that documents the amount of human labor, usually crowdsourcing, that's used to create data sets or moderate and flag content that is used to power these systems. So a lot of AI systems have a veneer of being automated. But really, they're being powered by people at some level. As one example I want to point out, which is good food for thought: in machine learning, we like to think about the distinction between labeled data, which is really expensive to obtain because you have to pay people to label it, and unlabeled data, which is cheap or even free, right? But if you go back to what I said about data being created by people expending effort-- think about, quote unquote, raw text, books and articles. It's free because, well, we just took someone else's book that they spent a whole year writing and we didn't pay them for it. That's why it's free. And so it's important to keep the perspective that a lot of machine learning is deriving value from the labor of people who are not getting compensated for it. So just a little bit of perspective. All right. So the second topic is objectives. So optimization is largely touted in this class. It's a powerful paradigm. It allows you to express a desire in the form of an objective function and then separate that from the immense resources and algorithms used to make it come true. You make a wish, and then you can get it to come true. That's the power of optimization. But the question is what the objective should be, right? Ideally, it would be something like happiness or productivity. But usually, these things are impossible to measure. So often, we have surrogates for them. And so, OK, now we're not getting the thing we actually care about, because surrogates are approximations. And furthermore, there are different incentives, right? Businesses are always incentivized to maximize profit. I mean, nothing against them-- that's what they're designed to do. And that's not always aligned with the social good. And so, just as an example, most internet companies use clicks or views as a major component of their objective function. Why? Because it's the signal that they have, and it's really good at driving up profit. And usually, it does reasonable things. It gives you what you say you want. But obviously, what people's reflexive actions are at any given time is not necessarily representative of their long-term goals.
And at more of a societal level, we see that this leads to potentially big problems like polarization, which is a whole other topic that I won't get into. So I think it's always important to think about what objectives you set out to optimize, and beware of any surrogates or misaligned incentives. Inequality-- we've talked about this in a machine learning lecture, if you remember the Gender Shades project, where gender recognition systems work differently on different populations of people. What do you do about this? Well, you can collect more data for certain groups. But often, this is hard to do and more expensive. So there might not be incentives to do this unless there's regulation. So one solution is data. The second solution is in the methods. We looked at how you can minimize the maximum group loss using group DRO. And you can mitigate some of these performance disparities. Of course, it's a big philosophical question what kinds of trade-offs you want to take, which you've had the opportunity to reflect on in the homework. But one thing I want to mention is the idea of auditing. Auditing, I think, is a really powerful force, because for a lot of systems, especially commercial systems, you don't really know what's going on in them, right? And in the Gender Shades example, after the study came out, companies were incentivized to fix the problem. And after a period of time, the disparities largely vanished. So just the mere fact of studying what systems are doing can sometimes be enough to incentivize companies to take action. OK. So now I'm going to do a little bit of audience participation. So get ready to raise your hands. So there's a question of what applications are ethically OK, OK? So this is going to be interesting. And moreover, when a researcher makes an advance, how do you assess its potential harms? OK, so here's one-- autonomous weapon systems powered by AI that can track objects and fire missiles or whatever. So maybe you can vote either up if you think this is an OK application or down if you think this is a not-OK application. OK, so I think most people who voted say no, if not all. OK, so this should be an easy case. I don't know what everyone else thinks. Hopefully, this should be-- OK, so maybe there's some-- I think you could maybe make a case for this. But I think this is largely regarded in the community as very, very ethically problematic, at the very least. So I voted yes for defensive use, because I was thinking that something like the Israeli Iron Dome system would defend against missiles and thus protect a lot of people. So from that perspective, it might be good to have an autonomous system. But in general, bad. Yeah. So I'm not going to get into the detailed argument. I think you can debate these things for a long time. But I'm trying to bring out a spectrum here. So I think this is one example which most people think is very problematic. So what about deepfakes? So again, vote: how many of you think this is OK technology versus not OK? So they have genuine use cases, maybe in entertainment, like if you want to create an avatar or pretend you're a celebrity. But on the other hand, of course, as this picture points out, you can fake Barack Obama or some head of state, doctoring a video that gets them to say anything.
So obviously, this is potentially pretty problematic from the point of view of disinformation, where people can't tell what's fact and what's fiction. So what about image generation? This is where it gets a little bit more interesting. So suppose you're not generating Barack Obama, you're just generating cute puppies. How many of you think this is OK now? Cute puppies. Yeah, who doesn't want more cute puppies? OK, so more yeses. So intuitively, it feels like it should be OK, because you've distanced yourself from any sort of weaponization. But the truth is that a lot of these methods are actually general purpose, right? If you can generate cute puppies, you can generate Barack Obama, right? So this is where I think the ethical dilemma comes about, which is this idea of dual-use technology, right? The same technology can be used for putting a smile on your face or for spreading disinformation. I'm not going to offer any solution, even if there were a solution. But this is something that needs to be-- the process of thinking about this while you're doing AI is extremely important. And you can even go farther and ask: what about deep learning? OK, so maybe most people would say, yeah, deep learning is probably fine, because there are so many good things you can do with it. But some people would argue that the idea of developing technology that enables large organizations to amass data and have centralized power and all that is inherently evil. So you could take that position as well. So there's no right or wrong answer here. But I think there's just a spectrum of viewpoints. And I think a lot of AI ethics is this process of debate and reflection, as opposed to: here are the principles, if you just follow them, then you get a stamp of approval. It's not like that at all. It's about internalizing these questions and carrying them with you at all times. OK, so the final thing I want to talk about is automation versus augmentation. So you see a lot in the news, like, oh, AI is dangerous because it can replace jobs, or you may think of AI that goes rogue. And I think a lot of this, whether it be a real worry or not, has to do with the framing itself. Ever since the inception of AI, there has been this idea of an agent that is supposed to be intelligent and can do things in the world, right? And once you call it an agent, that means it has agency. That means it's its own entity, in some sense. And if you frame it like that, then now you're fighting an uphill battle to coax it to be aligned with human values, and it's like, whoa, whoa, wait, no, I didn't mean that-- let's get it to do what we want. And this is deeply ingrained into the framing of AI, from things like the Turing test, which is about an agent that can actually deceive a human being, for what it's worth; RL agents that are autonomous and doing their own thing; and the whole idea of AGI, artificial general intelligence. And this leads to a very explicit automation perspective, because, well, you have an agent, it's doing things, and now it's going to do the thing that the human was doing. Now if you go back to the 1950s, there was another line of thinking, which is interestingly called IA, intelligence augmentation or amplification, which was about creating tools that help humans.
And this is, in some ways, a predecessor of HCI, human-computer interaction, which focuses on the augmentation of human abilities. So this perspective, I think, allows us to sidestep a lot of these problems, because baked into the premise of IA is that we are trying to make humans smarter or faster or whatever. It's human-centric, as opposed to agent-centric. So a lot of these interesting AI moonshots, like the Turing test or an agent that can play chess, would not be pursued under an IA agenda. And it's clear today that AI, by focusing on the agent perspective, has led to a lot of powerful technology. But it's also clear that we need a lot more IA thinking to help shape this technology, because fundamentally, we should be developing AI to improve the human condition. OK. So this is the final slide. AI is a technology. And like most powerful technologies, it's a dual-use technology, which means that it can improve efficiency, accessibility, productivity, dare I say happiness, I don't know. It can do a lot of good things. But it can also do a lot of damage. It can be explicitly used to harm people. But even putting that aside, simply by virtue of people not being cognizant of certain issues, it can exacerbate social inequalities. It can do harm without people thinking about it. Which is why I want to stress so much that just being aware of these issues is half the battle in terms of making progress here. And the final takeaway is: just because you can build something doesn't necessarily mean that you should. Maybe you should, maybe you shouldn't. You should always ask yourself, what are the benefits and what are the risks? And this might mean sometimes slowing down and challenging the status quo, which is uncomfortable, because we're used to charging ahead and the march of progress. And there aren't any easy answers, but I think mindful deliberation can go a long way here in making AI more ethical. All right, so that is the end of the lecture. Hopefully you learned a lot, and hopefully this was good food for thought. Please give feedback on the course evaluation on Axess. And thanks for an exciting quarter. All right, I think we have a bit of time for questions. So actually, I have a question-- more to hear your thoughts about the slide that you had regarding deepfakes and picture generation, things like that. So I feel like a while ago, if you wanted to verify something or find out about something, you read it from a verifiable source on the internet. And then it was like, if you have a video of a person talking, that's more reliable, because it's content you can trust. But now deepfake videos are not so [INAUDIBLE] anymore. So there's a sense of an erosion of verifiable truth in this content. So I just wondered what your thoughts might be on that. Yeah. I mean, everything you said is true. We can't really trust what we see online. I mean, this is going to be even more true going into the future. I don't think all hope is lost. It just means we need to reset our expectations and have other mechanisms for validation. And I think maybe there are non-AI things you could attempt to do-- for example, authentication of provenance: OK, this video or this image or this text was actually produced by this entity at this particular time or place, and it was certified.
And you have to design a secure mechanism for authenticating. So this is more in the realm of security. But another example is: Photoshop exists, and I think we're all OK. I mean, video might be a little bit more visceral in some sense. But routinely, there are images that can easily be photoshopped with high fidelity, and we don't necessarily trust those. So I'm trying to avoid sounding too pessimistic about the future-- there are things we can do, but we do need to do them. And I think with AI technology, it's going to happen eventually. I think most of this is just buying us time, right, slowing things down enough so that we have time to react. In 20 years, I don't think there's any way you can stop people from having deepfakes. And it's much, much earlier than 20 years-- that's just to give an upper bound. Thank you. As AI becomes more democratized and more and more people can practice AI, does it become less and less harmful in the long term, just like if everyone in the world [INAUDIBLE] then it could become less harmful? Not necessarily. I don't like using an analogy, but if you imagine everyone can build a nuke in their backyard, it doesn't mean that things are better. And it could just lead to an arms race between attackers and defenders as well. Whoever has the most powerful model can win. I mean, it's really interesting, because I've been such a big proponent of transparency and openness. And in research, you just put everything out, right? You're supposed to. That's the whole idea of science. But sometimes, there are technologies and there are situations where maybe it can do harm. Sure, sure. Thank you. And one more question. So throughout the course, I [INAUDIBLE] like AI still cannot learn common sense. So do you have any pointers, any reflections, as to how AI can learn common sense? Yeah. One question is what constitutes common sense. There has been a bunch of work on common sense reasoning in the last five years. Yejin Choi, a professor at the University of Washington, has done a lot of excellent work in this area. It used to be that common sense reasoning was talked about before machine learning, and then people didn't really work on it. But now it's coming back. But it's tricky. It's a really slippery concept-- what constitutes common sense and how you get your hands around it. Sure. Thank you. Any other questions? Professor, what are some good sources for what is the latest happening in the industry? What is the latest that's happening in the AI industry? Yes. So in different fields, what is the new technology coming up, for the applications and so on? You're asking just generally what-- [INTERPOSING VOICES] [INAUDIBLE] So you're asking for references where you can find out about and keep up with the latest in AI, or you're asking me what I-- yeah. I don't know if there's a definitive source. I mean, arXiv, I guess, provides a feed of the latest papers. Often blog posts or Twitter-- people post a lot of recent advances there. I guess social media, for lack of a better concrete description. OK. Thank you. I mean, I would say that it is a very biased sample. It's the things that are generally done in research, done in prominent research labs, I think, which is good. I mean, I think it's, yeah-- follow other researchers on Twitter. That's how you learn about stuff.
I think there's also a lot of AI that's in private organizations where people aren't publishing, and it can be hard to figure out what's going on. Sure. Thank you. Yep. Anything else? Why do people publish models even if they are expensive? I think multiple reasons. Publishing models allows other people to build on top of the work. So it's good for the community to have more sharing. You can make progress faster. Also, you get recognition-- more on a selfish note. If other people build on top of your work, that's kind of the academic model in some sense. OK. Well, if there's nothing else, then let's end there. Thanks, everyone, again for coming to the last lecture. After a whole quarter of modules, I guess it's kind of nice to get a little bit of interaction. Although, I guess I've seen many of you at the faculty chats. So that's been nice. But yeah, good luck with the rest of your quarter and see you next time.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Constraint_Satisfaction_Problems_CSPs_5_Arc_Consistency_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to be talking about the notion of arc consistency. This is going to lead us to an algorithm called AC-3, which is going to enable us to prune domains much more aggressively than before in the context of backtracking search. Let's begin. First I want to review backtracking search. So backtracking search is a recursive procedure that takes a partial assignment x, its weight, and the domains of each of the variables in the CSP. If all the variables have already been assigned in x, then we just see if it's better than the best assignment we've seen so far, and if so, update it. And then we return. This is the base case. Otherwise, we're going to choose an unassigned variable Xi. We're going to look at all the values in the domain of Xi and order them according to some heuristic (LCV). And now we're going to step through each of the values v in that order. We're going to compute the weight update based on Xi being set to v. And if this is 0, then we can just stop recursing right there. Otherwise, we're going to use this updated assignment as an input into the lookahead algorithm to reduce the domains. And now if any of the domains become empty, then, again, we stop recursing. Otherwise, we recurse. So last time we talked about the heuristics for choosing an unassigned variable and ordering the values-- these are the MCV and LCV heuristics. And then we looked at forward checking, which was the one-step lookahead. Now, we're going to upgrade that to AC-3. So before we get into AC-3, I need to talk about arc consistency. Let's use a simple example. So suppose we have just two variables, Xi and Xj. Xi can be 1, 2, 3, 4, or 5, and Xj can be 1 or 2. And Xi and Xj are related via a single factor, which says that their sum must equal exactly 4. So what does it mean to enforce arc consistency on, let's say, Xi? This means I'm going to go through each of the values in the domain of Xi and try to eliminate it if it can't be supported by any value in Xj's domain. OK. So let's try this. So look at 1: does there exist any possible setting of Xj so that I can add 1 to something and get 4? 1 plus 1 is not 4, 1 plus 2 is not 4, so 1 is just impossible without even knowing the value of Xj. So let me eliminate it. What about 2? Well, 2 plus 2 is 4, so that's OK. Notice that it's fine that 2 plus 1 isn't 4; it just matters that there exists one value in Xj's domain that works. So let's leave 2 alone. But what about 3? Well, 3 plus 1 is 4. So that's OK, too. What about 4? I can't add 4 to 1 or 2 to get 4, so that gets eliminated, and same with 5. So in the end, enforcing arc consistency on Xi results in a smaller domain, which only consists of 2 and 3. So notice I can eliminate values without even knowing what the exact value of Xj is. So more formally, arc consistency is a property, which I'll explain. A variable Xi is arc consistent with respect to another variable Xj if for each value xi in the domain of Xi, there exists some value xj in the domain of Xj such that essentially all the factors check out. Formally, what that means is that if you look at all the factors whose scope contains Xi and Xj and you evaluate each such factor on (xi, xj), then you get something that's not 0. OK, and enforcing arc consistency is a procedure that takes two variables and simply removes values from the domain of Xi to make Xi arc consistent with respect to Xj-- exactly what we did in the example on the previous slide. So let's revisit the Australia example and apply AC-3. OK.
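Before the Australia walkthrough, here is a minimal Python sketch of enforcing arc consistency on the example above. The function name and the representation of a binary factor as a plain Python function are illustrative assumptions, not the course's actual starter code.

```python
# Minimal sketch of enforcing arc consistency for a single binary factor.
def enforce_arc_consistency(domain_i, domain_j, factor):
    """Return the subset of domain_i whose values have at least one
    supporting value in domain_j under the binary factor(xi, xj)."""
    return [xi for xi in domain_i
            if any(factor(xi, xj) != 0 for xj in domain_j)]

# The example above: Xi in {1,...,5}, Xj in {1,2}, factor [xi + xj = 4].
sum_is_4 = lambda xi, xj: 1 if xi + xj == 4 else 0
print(enforce_arc_consistency([1, 2, 3, 4, 5], [1, 2], sum_is_4))  # [2, 3]
```

Note that the check is existential: a value xi survives as long as some xj supports it, which is why 2 and 3 survive while 1, 4, and 5 are pruned.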
So here is the empty assignment. And here are the domains of each of the variables. So let's suppose we set WA to be red, OK? So as before, we eliminate the other values from WA's domain, of course. And then we enforce arc consistency on the neighbors of WA-- in this case, NT and SA. So out goes red on both of these. And now we continue and try to enforce arc consistency on the neighbors of NT and SA. But in this case, I can't actually eliminate anything. OK. So now we're going to recurse. And suppose now, in the next level of backtracking, we assign NT green. So now, again, we're going to enforce arc consistency on the neighbors of NT. So that will eliminate green from these two. So notice that this one step should look very familiar-- this is exactly forward checking. But AC-3 doesn't stop there. It then says: enforce arc consistency on the neighbors of Q and SA. OK. So let's enforce arc consistency on the neighbors of SA; that eliminates blue from its neighbors. And now let's enforce arc consistency on the neighbors of Q. So that eliminates red from its neighbors. And now let's enforce arc consistency on the neighbors of NSW. So that eliminates green. And at this point, we're done. So notice what happened. Each of these domains is only left with one value. So even though we're still in the context of backtracking search at NT-- we're still trying to figure out what to do with NT-- by looking ahead, we've actually seen what values are even possible. And we've actually solved the problem. Now formally, we haven't set these values yet, we've just reduced their domains. But backtracking search recursing on the rest of these values should really be a walk in the park. You go to SA and you set it to blue, set Q to red, NSW to green, and V to red, and you're done. So this shows you the power of AC-3. In one fell swoop, it can basically clean out a lot of the domains and reveal what assignments are actually possible here. So here is AC-3 more formally. Remember, in forward checking, when you assign the variable Xj to some value little xj, you set the domain to only include that value. And then you enforce arc consistency on the neighbors Xi with respect to Xj. So here's a picture. You're setting Xj. And then you consider all the neighbors of Xj-- for example, Xi-- and then you enforce arc consistency on Xi. So you try to propagate what you know about Xj to Xi and try to reduce Xi's domain. So now AC-3 just repeatedly enforces arc consistency until there's nothing left to do. So here is the algorithm. We're going to maintain a working set S of variables that we need to process, but we start with Xj, which is the variable that we just assigned. And while there are still variables to process, we're going to remove any Xj from S-- the order doesn't really matter here. And then for each of the neighbors Xi of Xj, we're going to enforce arc consistency on that neighbor with respect to Xj-- so propagate the constraints outward. And now if the domain of Xi changed, then we're going to add Xi to S, because we know more about Xi now and we can hopefully propagate the information farther to its neighbors. So notice that a variable could be revisited multiple times. So this is kind of like breadth-first search, with the exception that you might visit a node more than once, because you might propagate some value to another neighbor, and that value might constrain something else, and then you might get more information back, and this can go on for a while.
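Here is a minimal sketch of that AC-3 loop, reusing `enforce_arc_consistency` from the previous snippet. The `neighbors` dictionary (mapping each variable to a list of `(neighbor, factor)` pairs, with `factor(x_i, x_j)` taking the neighbor's value first) is an illustrative representation I'm assuming, not the course's actual data structures.

```python
# A sketch of AC-3 as described above. Assumes enforce_arc_consistency
# from the earlier snippet is in scope.
def ac3(domains, neighbors, start):
    """Repeatedly enforce arc consistency, starting from variable `start`
    (whose domain was just reduced by an assignment). Mutates `domains`."""
    work = {start}
    while work:
        j = work.pop()  # order doesn't matter
        for i, factor in neighbors[j]:
            # Propagate what we know about Xj to its neighbor Xi.
            new_domain = enforce_arc_consistency(domains[i], domains[j], factor)
            if len(new_domain) < len(domains[i]):
                domains[i] = new_domain
                work.add(i)  # Xi changed, so revisit its neighbors too
```

Forward checking corresponds to running only the first round of this loop (the neighbors of the assigned variable); AC-3 keeps going until no domain changes.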
But it does run in polynomial time. You can read the notes for a little more detail about the running time. So as great as AC-3 might seem, it's not a panacea-- and that shouldn't be surprising, because solving a CSP takes exponential time in general, and AC-3 isn't doing any sort of backtracking search. So here is a small example that shows where AC-3 doesn't do anything. Here we have a mini Australia with three variables. And suppose each of them can be either red or blue. So immediately you should realize that there is no consistent assignment to three variables with only two colors such that no pair has the same color. But what happens if you run AC-3? OK. So let's look at this factor here, between WA and NT. This is arc consistent, because if I assign WA red, then NT can be blue; if I assign WA blue, then NT can be red. So if I just look at this local configuration, there's no problem. And analogously, if I look over here, there's no problem. And if I look over here, there's no problem. So AC-3 doesn't detect a problem, even though there's no satisfying assignment. So the intuition here is that all arc consistency is doing is looking locally at the graph, so it only detects problems that are blatantly wrong-- the kind that can be detected locally. You can't avoid exhaustive search if you want to detect the deeper problems. So let me summarize. Enforcing arc consistency is a way to take what you know about one variable's domain and propagate that information via the factors to reduce the domains of its neighbors. Forward checking only applies arc consistency to the neighbors of the assigned variable, and this was already somewhat effective. AC-3 takes that to the limit and enforces arc consistency on the neighbors and their neighbors and so on, until you converge. It tries to exhaustively enforce arc consistency to eliminate as many values from the domains as possible. And, of course, remember that AC-3 and forward checking are lookahead algorithms, which are used in the context of backtracking search to detect inconsistencies so we can prune early, and also to maintain these domains so that we can use them for heuristics such as MCV and LCV. And lookahead turns out to be very, very important for backtracking search. If you can look ahead and detect an inconsistency, then that saves you the work of actually having to recurse and explore a combinatorial number of possibilities. OK, that's the end.
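As a quick sanity check of the limitation just discussed, here is the mini-Australia triangle run through the two sketches above: every arc is already consistent, so AC-3 prunes nothing, even though no satisfying assignment exists.

```python
# Three variables, domains {red, blue}, pairwise not-equal factors.
not_equal = lambda a, b: 1 if a != b else 0
domains = {v: ["red", "blue"] for v in ["WA", "NT", "SA"]}
neighbors = {
    "WA": [("NT", not_equal), ("SA", not_equal)],
    "NT": [("WA", not_equal), ("SA", not_equal)],
    "SA": [("WA", not_equal), ("NT", not_equal)],
}
ac3(domains, neighbors, "WA")
print(domains)  # every domain is still ['red', 'blue']: nothing was pruned
```

Each value locally has a supporting value at every neighbor (red is supported by blue and vice versa), so the inconsistency is invisible to any purely local check.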
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Fireside_Talks_State_of_Robotics_I_Automation_and_Robotics_Engineering_Lectures_Stanford.txt
Hi, everyone. OK, let's start. 221, the second lecture, or the second week of this quarter. So, yeah. Hi, I'm Dorsa. You saw me last time. I'm co-teaching this class with Percy. And today our plan is to talk a little bit about robotics. So this is going to be kind of an informal introduction to robotics, a little bit of history, a little bit of state-of-the-art, some cool videos, and a bit of chat. So, I don't need to finish my slides-- I have a lot of slides, and I'm probably not going to finish them. So, feel free to interrupt me at any point in time if you have questions about anything. We can make this an informal discussion. Again, at any point in time. I will probably not cover all the things that are in the slides anyway. All right. So, let's get into some quick logistics. All right, so, this was our plan for the class. I'm sure you've seen this from the last lecture. So the plan was to start with reflex-based models. So you've kind of already started that. So, Percy basically went over the machine learning components last week. And we also have another week of machine learning, a little bit of deep learning. And then starting next week, we are going to talk about state-based models. So I'm going to cover a lot of that. So we are going to do search, MDPs, games. I'm going to reuse a lot of videos from last year, just giving you a heads up. We'll probably add and remove some, but last year's videos are online. We have a whiteboard, everything is great. So I'll probably reuse a lot of that. But we basically-- we plan to break them into modules. And then, Percy will cover variable-based models. And I will do logic and finish the class. So that was just a quick overview of what the plan is. If you remember, Monday lectures are not modules, right? Monday lectures are going to be guest talks and chats and having fun. So, just to give you an idea of the Monday lectures-- so, this is a tentative schedule. You've already done the Introduction to AI. Percy did that. Today I will be doing this talk on the state of robotics, talking a little bit about what that is and why you should care about it. And then next week, next Monday, we have a guest speaker. And the guest speaker is Mariano-Florentino Cuellar. He is a faculty member in the law school. He's also on the California Supreme Court. So, it should be a fun talk to attend. He does a lot of work around AI and law. He does teach a class, actually, on regulating AI, and he has a lot of interesting opinions on that. So, we really recommend showing up for that. I think that would be a lot of fun. Then the week after, we have Tatsu Hashimoto. So Tatsu is a new faculty member in the CS Department. He does a lot of work around robust machine learning, so he'll probably be talking about that. Followed by Percy talking about the state of natural language processing in week five. By the way, this is tentative, so do not quote me on that. Some of these dates might change. I think the speakers are accurate. The dates might move around. And then finally in week six, we have Emma Pierson talking about AI, inequality, and healthcare. She'll be a faculty member at Cornell Tech, joining next year, so it would be interesting to hear from her. And then week seven is kind of a fun chat, just with Percy and me. We'll show up, and you can ask us anything, basically. We plan to talk about grad school and those kinds of topics, probably. So if you have any questions about that, research, what to do after 221, I think that's a good week to attend.
Week eight, we have Yoav Shoham. So, he was a faculty member at Stanford. And he has done a lot of work from back in the day in AI, kind of through the whole evolution of AI. So, really fun to show up for his talk. And then week nine, we have Drago Anguelov. So Drago is the head of autonomous driving at Waymo. So if you're interested in learning about autonomous driving, to the extent that he can talk about it, that would be week nine. And then week ten, I will do a conclusion and wrap up the class. So that is kind of our plan for the Monday lectures. I just want to advertise it so you have a plan of what will show up in the next couple of weeks. OK? All right, any questions? Do you have any questions? By the way, just put it in the chat or interrupt. All right, so today we want to talk about robotics. And I just want to start it off-- I have a lot of videos today, so it will be fun videos. I just wanted to start off playing some video showing robots can dance, just to advertise this. [MUSIC - MARK RONSON FT. BRUNO MARS, "UPTOWN FUNK"] --hot, hot damn. Call the police and the fireman. I'm too hot, hot damn. Make a dragon wanna retire man. You guys can hear this, right? Yeah, OK. Good. --who I am. I'm too hot, hot damn. And my band 'bout that money, break it down. Girls hit your hallelujah, woo. Girls hit your hallelujah, woo. Girls hit your hallelujah, woo. 'Cause uptown funk gon' give it to you. 'Cause uptown funk gon' give it to you. 'Cause uptown funk gon' give it to you. Saturday night and we in the spot. Don't believe me just watch. Come on. Anyways, I wanted to start with this video just to motivate why we care about robotics. This is Spot from Boston Dynamics. Boston Dynamics is a company that does a lot of really cool robotics-type work. We'll see more of their robots later in this lecture. But robots can dance, very cool. And let's just start the conversation with this question of what is a robot? And when is it that we call it a robot? So, a question that I have, and I think it would be a good starter, is something you can just go into breakout rooms for two to three minutes and chat about. And the question that I guess I have is, think about a hammer. Is a hammer a robot? What do you think? And think about Google, or Google Home, or Siri. Is that a robot? So what defines a robot? And I think that's just the starter to think about what is a robot and why you should care about it. So go to breakout rooms for two to three minutes. We'll come back with your answers. Introduce yourself, talk to the people in the breakout room, and then put your answers in the chat when you come back, and we'll continue. Back? Yes, I believe everyone's back. All right. Yeah, so, I hope you just met your friends and other people in the class. And, yeah, if you have thoughts, put them into chat. We'll look at them later. What is a robot? But let's actually continue with our talk today because, again, I have a lot of videos. So my plan for today is to do a bunch of things. I'm going to start with a quick history of robotics. Where did it come from? Why do we care about it? Then I'm going to spend a bit of time talking about why you should care about robots and why this class should care about robots. How are robots related to AI? What is their relationship there? And then I wanted to spend a little bit of time talking about robotics at Stanford. So, what is the research that's done at Stanford? Who are the faculty who do robotics at Stanford?
Just so you know the faces and you know what classes to take and what type of research is done and how you can get in touch with them. This is probably as far as I'll get. But if I have time, I'm going to talk a little bit about some exciting robotics applications-- all the awesome things that are happening, and also all the not-so-awesome things: the fact that robots are far from perfect, or just not there yet. And then if I have time, which is very likely not to be the case, I will talk a little bit about my own research around interactive robotics. So again, the rule is: at any point in time, interrupt, just ask questions, raise your hand. We'll do that, and we'll go from there. And let's just jump into this quick history of robotics. Where does it come from? Where does the word robot even come from? So the word robot actually is kind of old. It comes from this play by Karel Capek in 1921. The play is called Rossum's Universal Robots, and it's about these mechanical men that are built in a factory and are supposed to do work for humans, and then they rise against humans. So that's the plot of it. And basically, it has a little bit of a dystopian view. And these mechanical men are called robota, which basically means slave or forced-labor-type work in Czech. I don't know if anyone knows Czech, and I don't know how accurate that is. But that's basically what I read on Wikipedia, so I assume that is accurate. So that was the word robot. Then the word robotics was first introduced by this guy, Isaac Asimov, who was a science fiction writer. He came around in the 1950s, and he wrote a bunch of books about robots. And the view of it was a little bit nicer, a little bit friendlier than this dystopian view. He talked about these different rules of robotics-- the robots were there to help humans, and they were supposed to follow these different rules. So you might have heard of these three laws of robotics by Isaac Asimov. The first one is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second one is that a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. And the last one is that a robot must protect its own existence as long as such protection does not conflict with the first and second laws, OK? So this is obvious, right? Like, you don't want the robots to go against humans, and you don't want the robot to kill itself. And these are the three rules. And the reason I'm mentioning this is that people are coming back to these rules even these days. Many people who think about robots feel like, oh, a robot should try to satisfy these three laws of robotics, and these are kind of the core laws that need to be satisfied. And the thing about these laws is, sure, they're nice, but they're kind of obvious, right? And actually, satisfying them is the most difficult part. And he doesn't really go through that-- like, if I have my robot running gradient descent on a loss function, how do I define that loss function accurately so that I satisfy these rules? That is not a very obvious thing, and I guess he didn't go and talk about that. Let me give you an example. So let's say I have Rosie the robot that's supposed to clean my house. Or I have a Roomba. That Roomba is supposed to clean my house.
And let's say I have built this super nice, intricate house of cards, OK? Any human who is helping me clean my house would know that you shouldn't touch this house of cards, because I spent so much time on it, and this is so valuable to me-- you shouldn't go and touch it, you shouldn't go and clean it. But a Roomba wouldn't know that. Why would it? Why would it know what a house of cards is or how much energy I've put into creating it? So that is the problem of the objective function. Sure, it's not about harming-- it might be harming me, not harming humans in general. But thinking about what the objective is that the robot should satisfy, what the reward function should be-- we'll be talking about MDPs in a couple of weeks, and we'll be talking about reward functions-- is actually a very difficult problem. This is an active area of research: trying to understand what the human preferences are, what the things are that humans actually want robotic agents to do around them, and at the same time, what the robot thinks those preferences are. There's a mismatch between those. There's always going to be a mismatch between those. And that mismatch-- how harmful is that going to be? How unsafe is it that the robot doesn't know everything? And you might ask, well, why don't you just write that out as an objective: hey, robot, don't go and destroy Dorsa's house of cards? Well, that's perfectly fine. But the thing is, it is really hard to write out all these specifications, all these properties that we would want satisfied in the world, because there's just so much context, so much information in the world. And humans just know that. As they grow up, they learn, and they know that. And robots wouldn't necessarily know that. And you might say, that's just data, so let's just learn through that. A lot of people agree with that. A lot of people disagree with that. It's still a point of debate: is that just data? Like, do I just need to show more things so the robot knows house of cards are important to Dorsa? That's also another view. But in general, the point of this slide is: these rules of Asimov, these rules that are put out there, they're not that obvious to satisfy. Sure, I can say, don't kill humans, but it's not really obvious how I write out what it means to not harm humans. And the second point I want to make here is that even that is still under question-- not harming humans. It's actually not obvious to everyone that we shouldn't use robots to harm humans, which is a little bit silly in my opinion, but I'm just describing the range of opinions here. So if you think about our defense, or other countries' defense, right? People use these things called autonomous weapon systems, which are basically-- you can have drones that can detect targets and shoot at them. You can have lethal autonomous weapon systems; these are commonly referred to as LAWS-- lethal autonomous weapon systems, basically. And it's a big question: should we use these? Should we not use these? When can we use lethal autonomous weapon systems? Yeah, it's not-- it is here, it's not something that's science fiction. It's actually a thing that, basically, US defense has, China has-- all over, different countries have some version of this. And the question is, yeah, if you don't use it, other countries might use it. And how do we think about using or not using these systems? How do we even put a ban on it?
What does a ban on it even mean? And there's a lot of debate around this. Stuart Russell, who is a faculty member at UC Berkeley-- he also teaches AI there-- is basically a proponent of fully banning lethal autonomous weapon systems. And he has a lot of interesting talks around this. We will talk about this a little bit more in the conclusion lecture. But this is also something I wanted to mention because it could be an interesting topic to talk to Tino about next week, when you think about laws and regulating these things-- how do the regulations work and make sense? So, yeah. So even not harming humans, which is what Isaac Asimov said, is still under question here. It's not clear that is what we want to do. But OK, let's go back to the history of robotics. Why do we have robotics? When did it start? So, around the '50s and '60s, we had a lot of excitement around AI, right? So Percy was talking about AI last week, talking about the history of AI. And that was the time when there was a ton of excitement. And even Turing-- Turing has this paper where he writes, "The best thing we can do is to build a robot with TV cameras for its eyes and motors for its legs and have it run around the countryside and learn from the world." So this is from back in the day, Turing's time, and this is what he was thinking. And later in that same paper, Turing says, well, this is too difficult. I don't want to deal with this physical interaction with the countryside. So instead, maybe we should focus on the problem of intelligence. Maybe we should focus on AI. And that is how the next 50 years became all about AI and building good AI systems, rather than robotics. Sure, robotics has also seen a lot of advances since the '50s, but a lot more has happened on the AI side of things, just because the robotics side was so difficult. An example of that is Deep Blue, which won its first game against the world chess champion in 1996. And it was displaying amazing intelligence, right? That is intelligence-- being able to win at this game of chess. But the thing was, the chess pieces were moved by humans, because that part was so difficult. Grasping is still so difficult when you think about a robot trying to actually move these pieces. And that was not solved in 1996 in any way at all, OK? All right, so, when did the first robot come? So I've been talking about this history, and the question is, OK, so what was the first robot out there? The first intelligent robot. The first intelligent robot was Shakey. I have a video of it here. It's about a five-minute video, so it's a little bit long, but let's just watch it. I think it has a lot of interesting history in it. Shakey was the world's first mobile intelligent robot, embodying numerous breakthroughs in artificial intelligence, robotics, computer vision, navigation, and other research areas. The robot was developed from 1966 to 1972 by SRI International, then called Stanford Research Institute. And its legacy and impact are still very much alive today. Shakey is really the great-grandfather of things like self-driving cars and even military drones. The hardware was really pretty primitive, but the software architecture and the software algorithms are what changed the world. I think we all thought we were doing really interesting stuff, so it didn't really dawn on us that we were doing anything special.
Shakey established a position about what we should be thinking about as possible, as feasible. To understand why Shakey is so important, we have to go back to 1966, and we have to understand where AI research was at that time. Well, you have to remember it was pretty much a green field when Shakey started. All over the country, and even outside of the United States, people were beginning to build the components of artificial intelligence. Nobody had tried, at the time that Shakey was launched, to integrate all the components of AI and robotics into a single moving vehicle that could reason about the world, could sense the world around it, and could take actions. Prior to 1966, there were no robots, or at least, not intelligent ones. The concept of an intelligent robot was limited to the realm of fiction. You'll need a charming, daring robot, always at your service. When you read the title of the original proposal, it was something like a mobile automaton for reconnaissance. And the reason we called it an automaton was because until Shakey, you couldn't go into a funding agency and say, I want money to make a science fiction kind of device. So we needed a name, and finally Charlie, in his inimitable fashion, said, it shakes like hell when it moves, let's just call it Shakey. Key components of Shakey's hardware were a TV camera to observe its environment, an antenna and radio array, bump detectors, and a push bar to move objects. My role was mainly to get the images and get whatever coordinates they needed to determine where they were and extract the information from the image. I remember when I first saw it. Gee, that looks like a dishwasher on wheels. While charming, Shakey wasn't impressive for its looks. It was the AI and programming advancements that made it famous. We structured Shakey's software in four distinct layers. And that was the first time that a layered architecture was used for robots. Shakey's pioneering software architecture paved the way to a new era of AI and robotics. The SRI team later developed Flakey, a research robot that demonstrated fuzzy logic and goal-oriented behavior. Then came Centibots, one of the earliest projects in swarm robotics, where one hundred autonomous robots demonstrated the ability to map a complex area collaboratively. I like how it's code that isn't just turning numbers into other numbers. You get to see the thing come to life right next to you. Shakey also inspired research in natural-language-based interactions, leading to the popular speech-based technologies that we use today. Shakey's breakthrough in computer vision is now used to help drivers stay in their lanes. And every time you get driving directions on your phone or a navigation system, you are benefiting from the A* navigation algorithm, first invented for Shakey. Even NASA's Mars exploration rovers use navigational techniques that were first launched with Shakey. The key feature is things like potentially having teams of autonomous aircraft that can go out, for example, and do firefighting, and doing this either fully autonomously or, potentially, in tandem with human-piloted aircraft that can go out and work with them collaboratively. Shakey now resides in the Computer History Museum, visible to hundreds of thousands of visitors annually. And in 2017, Shakey was honored with an IEEE Milestone Achievement Award. The Shakey milestone is important because, first of all, Shakey is the world's first mobile intelligent robot.
In addition, this is the first IEEE milestone in the areas of either robotics or artificial intelligence. Looking back, more than 50 years after the Shakey project began, it's inspiring to see how one small team can make such an impact, how one ambitious idea continues to benefit our lives, how one robot changed the world. We didn't realize, I think any of us, what the significance of this was. We knew we were the first, but nobody knew where it was going. And I don't think any of us would have predicted what happened. Shakey planted the flag way out there. It's a model of the kind of ambitious projects that we should be looking at in the future. All right. So that was Shakey's video. It is actually in the Computer History Museum down the road. So when things open up, I suggest going there and seeing Shakey. Cool, so that was my quick history of robotics. If any questions come up, just feel free to ask now or later. So, in the next part, what I'd like to do is talk a little bit about how this is related to some of the topics that we are learning in this class. So how are robots, in general, using ideas from AI? I want to spend a little bit of time on that. So, if you think about robotics, there is this common architecture that is usually used for robots. It's more under question these days, but back in the day, this was the architecture that a lot of robots tended to use, which is the sense, plan, and act architecture-- and then looping that, right? So you sense the world, you watch the world, you perceive the world, you do perception. And then from that, you plan what to do next. That is where the intelligence lies. And then after that, you just act-- you execute that plan. And once you've acted, you can go back and sense, plan, and act again. And that's a very common architecture that most robots use. And these days, people are thinking about a more intertwined relationship between sense, plan, and act. For example, you shouldn't just sense the world for the sake of sensing, right? Sensing needs to be active. So there's this area called active perception, which is about the fact that I only sense the parts I care about and need to act on. And you should have this intertwined relationship between acting and sensing. Or there's another paradigm these days that tries to go from images to actuation and skip that planning part by replacing it with neural networks, right? So if I have a machine learning system and I start from images, can I directly get a control for my robot? That's another paradigm. I'll talk about that a little bit, actually, in this section. But let's just consider this paradigm of sensing, planning, and acting. And in this class, starting next week, we're going to first talk about search. And actually, as you heard in the video, we're going to talk about algorithms like A*. That's actually something we will discuss next week. And A* was introduced for Shakey, right? It's actually an extension of Dijkstra's algorithm. It uses a heuristic, it's fairly fast, and it was introduced for things like robots moving around and navigating. Today we also use a lot of sampling-based techniques.
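Since A* came up both in the video and here, a minimal sketch may help make the "Dijkstra plus a heuristic" idea concrete. This is a generic grid example, not Shakey's actual implementation; the function names are mine.

```python
# Minimal A* on a 4-connected grid: Dijkstra's algorithm, but the priority
# queue is ordered by (cost so far + heuristic estimate to the goal).
import heapq

def a_star(start, goal, passable):
    """passable(cell) says whether a (row, col) cell is free to enter."""
    def h(c):  # Manhattan distance: an admissible heuristic on a grid
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (priority, cost-so-far, cell)
    best = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g  # cost of a shortest path
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cell[0] + dr, cell[1] + dc)
            if passable(nxt) and g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # no path exists

# Example: an empty 5x5 grid; shortest path from corner to corner costs 8.
print(a_star((0, 0), (4, 4), lambda c: 0 <= c[0] < 5 and 0 <= c[1] < 5))
```

With the heuristic set to zero, this reduces exactly to Dijkstra's algorithm; the heuristic is what lets A* focus its search toward the goal.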
So the algorithm that you see here running is this algorithm called RRT*, which is similar to A* to some extent, but it's more of a sampling-based algorithm creating this tree, this dense tree, and navigating along the lines of trees. So the type of thing we will be talking about next week-- search-- actually shows up a lot in planning for robots, right? When you're planning for robots, you should think about searching in that space. How do you get from one location in space to another location? Or how do we get from one robot configuration to another robot configuration? Following search, the week after, we are going to talk about MDPs and games, right? MDPs are Markov decision processes. Basically, the idea there is the world has uncertainty and we should actually model those probabilities and uncertainties. And that commonly shows up when you think about robots interacting with each other or with the world in general, right? Like when you have dynamic environments around you, when you have a self-driving car driving right next to other cars, we can model that as an MDP. And similarly, if you think about this interaction with another intelligent agent, you can model that as a game. And we will be discussing that in a couple of weeks. And these ideas show up a lot in robotics. So here, the video on the left basically shows two robots that are trying to coordinate with each other. And what they're trying to do is they're trying to move this rod together. But the interesting thing is that they're decentralized, they don't talk to each other, and they have different observabilities. So the robot in the front can see the books here, and the robot in the back can only see the boxes. And just because of the forces and the feel of the forces, they can understand what the other agent is doing. They can learn what the other agent's policy is and coordinate with that agent to do this collaborative type of maneuver. Here's another example. This is also from my lab. So here, what we are looking at is two robots playing air hockey. And here, we are again having this paradigm, this game-theoretic paradigm of two robots trying to coordinate with each other. There's a bit more learning happening here. So, the robots are trying to, again, learn the policy of the other agent, or a representation of the policy of the other robot, and based on that, kind of trick the other agent or influence the other agent and win this hockey game. So, OK. So MDPs and games pretty much show up for any type of interactive system. And as robots are leaving factory floors, they have more and more interactions with people or with other agents. And they're super useful, again, for the planning parts of robotics. We will see Bayesian networks immediately after that. Bayesian networks, again, are super useful when it comes to things like mapping and estimation. So here, there's this algorithm called simultaneous localization and mapping, SLAM. And basically, the idea of it is that when you go to this new environment and you don't know anything about this new environment, you're going to sample points. And based on that, you're going to create a map and navigate yourself around it and estimate where you are. So a lot of ideas around Bayesian networks show up here; a small state-estimation sketch follows below.
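To give a flavor of the estimation side, here is a minimal particle filter for 1D localization. This is an illustrative toy of my own, not the SLAM system in the video or the homework's code; the motion model, noise levels, and Gaussian sensor model are all assumptions.

```python
import math
import random

def particle_filter_step(particles, control, observation, obs_noise=0.5):
    # 1. Motion update: move every particle by the control, with process noise.
    moved = [p + control + random.gauss(0, 0.1) for p in particles]
    # 2. Measurement update: weight each particle by the (unnormalized)
    #    Gaussian likelihood of the noisy position reading.
    weights = [math.exp(-(observation - p) ** 2 / (2 * obs_noise ** 2))
               for p in moved]
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0, 10) for _ in range(500)]  # uniform prior
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0                             # robot drives right 1 unit
    reading = true_pos + random.gauss(0, 0.5)   # noisy position sensor
    particles = particle_filter_step(particles, 1.0, reading)
print(sum(particles) / len(particles))          # posterior mean tracks true_pos
```

The cloud of particles is the robot's belief about where it is; roughly the same move-weigh-resample loop, with the map folded into the state, is the idea behind particle-filter-based SLAM-style estimation.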
Actually, in one of the homeworks, we're going to look at things like particle filters and state estimation and, based on that, how we can use ideas from Bayesian networks to do a better estimation of where we are and where other agents are. And again, that is super useful for any robotic system that tries to do anything in the world, basically. OK? All right, so a lot of that was around planning. Logic is also another topic we will discuss in this class. It is not as much used in robotics, but it does show up in various places. So here, this is actually work by Kress-Gazit's group, and this is called LTLMoP, which is basically a tool. And the idea here is they try to get this robot to navigate this space and go to various squares here. And while doing that, it tries to satisfy some logical formula. So instead of giving it an objective, a loss function, and then doing gradient descent, let's say, on that and trying to come up with a policy, what this robot does is it takes that logic formula, and based on that, it creates a plan on how to navigate this space. And why would anyone want to do that? Well, the reason is that if you have that logic formula, you can actually prove things about this robot. You can actually prove whether it would satisfy the specification or not. And again, that is very useful when you think about safety of, let's say, autonomous cars, right? If you want to prove that your autonomous cars are safe, you would need to add a little bit of logic there. You would need to think about how that can be used in planning. And it also helps with transparency, because there is a smaller gap between, let's say, natural language and temporal logic, which is this logical language that they're using here. And that smaller gap can help us have a more transparent and clear idea of what the robot is doing here. Yeah, all right. So that was all planning and all the topics that we are discussing in this class. We're currently talking about machine learning. A common place that machine learning shows up in robotics is on the sensing side of things. So you sense the world, and based on that, you perceive the world. So perception and vision is a big part of robotics. And a lot of that is done using machine learning these days, right? So you have a machine learning network that basically tries to do object recognition and activity prediction on what other objects around you are doing or, basically, who's the owner of the car in this case, I think. And what are the other objects around us, and things of those form. So that's a very common place that machine learning shows up in robotics, on the sensing and perception side of things. And you might ask about this acting part, or what goes into the acting part. So it's not just AI that shows up in robotics. A bunch of other fields also show up in robotics. Specifically, control theory and optimization are kind of like the core of this acting component, of this architecture of sense, plan, and act. And basically, the idea is you might again have an objective, like following a trajectory, and you actually want to put the right control, the right accelerations, and throttle and steering angle on your autonomous car in this case and get your autonomous car to navigate-- Let me turn off the audio here. And a lot of that-- a core of that is actually done using ideas around control theory. A tiny controller sketch follows below to give you the flavor.
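As a flavor of that control layer, here is a minimal proportional-derivative (PD) controller tracking a 1D reference, say a car holding a target lateral position in its lane. This is a sketch, not anything from the lecture's examples; the gains, time step, and double-integrator "car" are all assumed for the toy.

```python
def pd_control(target, pos, vel, kp=2.0, kd=1.0):
    # Push proportionally to the position error, damped by the velocity
    # so the system doesn't overshoot and oscillate forever.
    return kp * (target - pos) - kd * vel

pos, vel, dt = 0.0, 0.0, 0.05
for _ in range(200):
    accel = pd_control(target=1.0, pos=pos, vel=vel)  # commanded acceleration
    vel += accel * dt                                  # integrate toy dynamics
    pos += vel * dt
print(round(pos, 3))  # settles near the target of 1.0
```

Real controllers on a car are much richer (model-predictive control, feedforward terms, actuator limits), but this error-feedback loop is the basic idea being gestured at here.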
More recently, people have been using ideas around machine learning here too, so like adding ideas around deep learning for actually planning and getting the car to navigate in this space, OK? So those are some of the core ideas. As I mentioned earlier, there are some other paradigms of the sense, plan, and act. One specific paradigm which I think is pretty interesting is to use machine learning to do kind of all of this sense, plan, and act by trying to learn from humans, OK? So this is commonly referred to as learning from demonstrations or imitation learning. And the idea of it is if I just watch how humans do things, then from that, I can just directly figure out what their objective was or what their policy is. And the idea has been around since the 2000s in the area of robotics. The work on the left that I want to show is actually this idea directly being applied to robotics. This is the work that Pieter Abbeel and Andrew Ng did in 2004 at Stanford. And basically, the idea is there are these helicopters that, before then, were really hard to fly by just using control, by just using AI and control. It was actually just really hard to fly them. And then there are these pilots that can fly these helicopters just much more easily, much more simply, and basically, the idea is, could we use that to get this helicopter to fly in this space? So this is Andrew. What the helicopter does is it watches as you fly the helicopter around for a while. And by watching a human pilot fly, it then learns to fly by itself. And so, what it does is watch the person fly, and then it would try to fly the same stunt and try to do it by itself, and maybe try a few times until it's [INAUDIBLE]. And what you're seeing is the end result of this machine learning process called apprenticeship learning. So that was this idea of apprenticeship learning, where kind of for the first time, people were able to fly these helicopters autonomously, by learning from human pilots, by learning from human experts. And that idea has been around in research, and people in general have been thinking about how we can learn from humans. How can we get robots to act in the world by directly learning from humans, not just from demonstrations, but also from things like asking for their preferences or language instructions? So in my lab, actually, we are doing a lot of work in this domain, where we are looking at preference-based learning and actively, basically, querying people for what their preferred trajectory is in order to basically get a robot to play some version of mini golf here, like targeting one of these goals and getting the robot to actually hit the ball correctly so it gets in the right goal. And that's kind of exciting because you can learn from all sorts of human feedback, right? You can learn from demonstrations, comparisons, language. And from that, you are able to basically get a robot to do an interesting type of maneuver. Another place machine learning shows up is, again, by combining this sense, plan, and act into one giant box, right? Like starting from visual data, and once you start from visual data, can you try to directly get actions on your robots? And this idea of robot learning has been very popular in recent years. This is work from Berkeley from 2015 where, basically, the idea is to get the robot to just try out things.
And then from these images and kind of the joint angles of what the robot tries, it learns how to achieve the task and how to do the task. OK? It's a very data-intensive type of process, but there's a lot of excitement because you can achieve things that you weren't able to achieve before. And we have seen a lot of advances in machine learning when it's applied to, let's say, NLP and vision, places where we have a lot of data. We don't have that much data in robotics, but if that is the bottleneck, then maybe we can create an arm farm, kind of like this video here, and basically just collect lots and lots of data of robots navigating in this space, moving randomly inside of this box. And from that, learn how to just grasp any object that you see. This is actually work by Google Robotics. So Google X has a subgroup called Google Robotics that does a lot of work around robot learning, a lot of interesting work. And, yeah, it's very compute intensive, but lots of excitement around this idea of directly learning actions from sensing data. Yeah? All right. So, let me move to-- oh, OK. So those were all the things I wanted to say on how AI is used in robotics. But AI is not the only thing that is used in robotics. There are a bunch of other things. As you probably noticed, robotics spans a bunch of different departments. For example, you see robotics in mechanical engineering. And that has a very different view of robotics, and that view is usually focused on design and co-design, which is a super important problem, right? So if you're thinking about building an arm or building a hand that needs to do very precise manipulation, what type of sensors you're using, how you're building this system-- those are all really good questions. Like, how do we make sure that you build a prosthetic that is not too heavy and is also comfortable and is also very safe for the person to use for walking? These are all really great questions that are usually design questions that are super important in robotics. Another reason that this is important is that there's a whole new area around co-design, which basically says, well, for whatever hardware we pick, there's going to be some AI algorithm. But if I change that hardware, my AI algorithm is going to change. And if I change my algorithm, that could run on a different hardware very differently. So can we design both of these at the same time? We design what our robots should look like and what algorithms should run on them at the same time. Or can we have reconfigurable robots? And there is a lot of excitement around this area in general when you think about design and co-design of these systems. I just want to show a few other cool designs that are out there that are very impressive. One design that I want to show is this robot on the left. This robot on the left is a dynamic tensegrity structure. So what it is, basically, is it has a bunch of rigid links like these guys. And then these links are connected by things that are kind of like ropes. And it's kind of like a funny structure, but the interesting thing about it is it is shock-resistant. So NASA really cares about this robot because landing robots on the surface of Mars is really difficult. But if I just drop this robot, nothing happens to it.
And it can just roll around and continue moving forward, which is, again, a very interesting design for something that you don't want to break, because robots, in general, are pretty rigid, and this robot is very flexible. So there's a lot of interest in basically building robots that are softer, that are less rigid, that are flexible. And this is an example of that. Another robot that I think is kind of fun-- I'll show a video of it a little bit later-- is this robot from Stanford. This is from Mark Cutkosky's lab, and it's called Stickybot. It's basically a robot that has kind of gecko-like hands. So its hands are inspired by the hands of a gecko, having these suction cups. And because of that, it can climb up walls and really slippery slopes, which is, again, a very interesting design. Another design that I find super interesting is this inflatable snake robot. This is from Allison Okamura's lab, again, in mechanical engineering at Stanford. And the idea here is that this robot can inflate itself as it goes through different parts of the environment. So it might be really difficult, for example, to go through this hole, but as the robot is inflating, it's actually making its way through different kinds of narrow spaces and getting to various areas of the space. This can also be used inside of our body, for example, when we are doing [INAUDIBLE] type of endoscopy. We can actually send some of these robots inside of the body and navigate a little bit better inside of our body for robotic surgery, for things like endoscopies, and so on. I can take any questions now, actually, about any of this. A quick question, so you mentioned that robotics consists of, like, computer vision, machine learning, [INAUDIBLE] systems. So what do you recommend for people who want to go into the field? Do you study control theory and all of this from different angles, or what sort of roles does the team have? Yeah, that's a very good question. Yeah, so, it kind of seems like a giant thing, right? Because it incorporates, like, everything. And the field of robotics-- in general, when you go to the conferences-- it is interesting because you see people from all of these different fields, but they are coming together for the same problem, but not the same technique, which is a very interesting kind of field to be in. But at the end of the day, everyone just focuses on their expertise and brings it together as part of a team. So, for example, robotics in CS, like at Stanford-- I'll be talking about that, actually, a little bit-- is a lot focused on developing AI algorithms, developing the algorithmic side of this, but they don't really do as much on the design side. But robotics in mechanical engineering is very focused on building new designs, new structures. And, yeah, we do have a lot of joint projects where we use new designs and try to develop new algorithms for them, and lots of collaboration in these different fields. So it is a very interdisciplinary field. But, yeah, as I said, it might seem too large. It's not that large. At the end of the day, everyone focuses on the thing that they're actually very interested in, for the same goal of building robots. Thanks. I have a question. What are some fields of robotics that would be really great to go into a startup for right now? So, to do a startup, right? Is that what you-- yeah, so that's a good question to ask. Yeah, so, I think there is a lot of excitement around autonomous driving, right?
And autonomous driving these days is very focused on vision and machine learning and control theory. So those three kinds of backgrounds are commonly used in autonomous driving. But beyond autonomous driving, I think a lot of big companies are using them, so it's not necessarily startups. But beyond that, yeah, people are very interested in domestic robots these days, right? Actually getting robots inside people's houses, which wasn't the case a few years back even, right? We have robots functioning very well on factory floors, but having robots in our homes is a big problem. And there are some startups that are very, very successful. So it's kind of like an edgy area to be in, but I think there's a lot of excitement there, like home automation, things like next-generation Roombas, or other things that you can have in homes. And again, for that, I think a lot of these systems are using machine learning and AI in general, so that is, again, a huge background that you would want to have. But the design is pretty important there too, so the type of hardware design that you use there is actually super important. Yeah, and healthcare. Kind of thinking about robotics being used in healthcare and hospitals and things of those form, I think that is also a very exciting area. Got it. Thank you. Can I ask a question? Sure, yeah. Yeah, so you said, in robotics research right now, as you said previously, there are a lot of things happening in AI. And there's also something [INAUDIBLE] control theory. So, is there any research kind of like moving these two things together? Because for my undergrad, I studied things like control theory and also AI, because I found that in control theory, you have some really interesting concepts, actually, like stability, like observability, all of this. So is there any research, for example, using these concepts in reinforcement learning setups, which are missing stability or concepts like this? Is there any research about that? Yeah, there's a lot of excitement actually around that. And I totally agree with that, right? There are a lot of interesting topics in control theory, a lot of interesting topics in AI and machine learning, that are just making their way into robotics. And between the two communities, at first, there was a little bit of a clash, I would say. But now, I think there is a lot of coming together and trying to combine those ideas. There's a new conference called Learning for Dynamics and Control, L4DC. And the whole point of that is actually bringing learning people and control people together to try to use the same ideas. Yeah, lots of research trying to bring learning and control together, and I think that is actually the right direction because, as you said, there are lots of interesting ideas in dynamics and control. And I think a lot of those ideas could be used as prior structures that could be put on learning-based systems. So when you're, let's say, training a neural network, you can bring in structure that you know about the system that comes from control theory, let's say. Good, thank you. Hi. I have a question. Sure. Sure. I was wondering, you talked about arm farms and collecting lots of data. Do you feel like the field is more data-limited or more algorithm- and learning-limited?
I think about when I learned to drive-- it wasn't that much data, it was just maybe a couple of weeks of practice and I was ready. That's a very good point. So I think it's a combination. I do think the field is very data-limited. And it's interesting because, yeah, when you learn to drive, right, you spend a couple of weeks, but you have seen cars drive right next to you-- that's learning by observation, which is a very interesting type of learning. You're not learning by doing, you're learning by observing other people drive right next to you. And that has so much information. And it's kind of the same problem that I mentioned earlier with the house of cards example, right? Your autonomous car doesn't know that it's important. Well, you would know that; you have so much context about the world. And because of that, I think the issue-- specifically for autonomous cars-- is some of these corner cases. Driving on a highway, that's solved, right? The issue is some of these corner cases that it hasn't seen yet in the data, and maybe more data will solve that. So, I think more data is definitely a problem. I think we can still do better on our algorithms too, but data is still-- I would say that's the bigger issue, at least in autonomous driving, I would say. On the data side, do you feel that synthetic data could be something that would be useful for machine learning applications, or is that something that's going to always be a fantasy? I think it's super useful, right? If you can create near-accident driving scenarios in driving, and then kind of train your car in those settings, and then just generate that automatically, that would be super useful. So then you don't need to wait forever just to hit a near-accident driving scenario on the vehicle. I think one issue there is this simulation-to-reality gap, which is a big problem specifically for robotics. But I do think generation of data is important, yeah. Hi, so I have a question regarding the determinism of robotics, for example. In terms of machine regulations, my government usually requires the actions of a machine to be deterministic. Manufacturers of a machine are sometimes responsible for these actions. But machine-learning-based or deep-learning-based algorithms are statistical by definition. So, how could or how do we define the responsibility of the manufacturer of the machines if there's an accident caused by an autonomous driver? Who should be responsible in this case? Yeah, that's very-- it's a very good question for Tino next week, actually. So, very good question. Yeah, so the first point that you made was that all the laws are about deterministic systems. That's actually not always the case. So, for example, Mykel Kochenderfer in the aero and astro department has done a lot of work around POMDPs that actually run on aircraft systems. So, there's this ACAS Xu system, which is basically a collision avoidance system for unmanned aircraft. All of the motions, landing and taking off and all of that, are done in an unmanned setting. But the system is small. It's a POMDP which has like a few states and all that, and you can verify everything about it. So there's a lot of interesting rules around verification and validation in this space. And even if it is not deterministic, you can still verify it. So, a small POMDP, for example-- partially observable Markov decision process. You won't see that in the class. If you're interested in that topic, take Mykel's class.
But when it comes to neural networks, yeah, we don't really have that many guarantees around them. And there's a lot of, again, discussion here. So some people are taking the route of trying to prove things and trying to verify neural networks. Clark Barrett is someone in the CS department who does a lot of work on verification of neural networks. Again, we are limited in the size there, so we can't have giant neural networks there. Another kind of perspective on this is giving these statistical guarantees, right? If my autonomous car is safer than humans statistically, maybe that is good enough and we're OK with some number of accidents some number of times. Maybe we would be OK with that. And some of it is, yeah, like acceptance issues too, right? The first aircraft that were out there were probably not safe, and probably people were OK with that. And the number of deaths was higher, right? And I think there was a little bit of that acceptance issue for how this is going to pan out. But it is actually a very good question how we are going to regulate this and who is going to be to blame. One can take Tesla's approach and be like, hey, the human is always in control, so if anything happens, it was the human's fault, which is kind of a weird type of approach, I would say. It's not necessarily the safest way to go, but yeah, anyone who wants to have good answers for this-- something to ask Tino next week, OK? I have a question with regards to the co-design. So you mentioned that the hardware and the AI algorithm need to go hand in hand. So, for example, in the self-driving car, the algorithms have to make quick decisions with real-time changes in the environment, but the algorithms take a long time to run. And I know that hardware-wise, there are a variety of places the algorithm can be put, let's say, on a GPU or on other hardware. So would you be-- are there any pointers towards how these two go hand in hand, and what is the best way to do that? Yeah, so I was more thinking about it in an offline kind of fashion, right? So in an offline kind of fashion, you can have a fancy algorithm that does everything-- that takes a lot of compute, let's say, for a hardware that is very simple. Or you can kind of increase the complexity of your hardware, and on the other hand, have a really simple algorithm that runs on it. One place that-- so I wasn't really thinking about the online aspect of it, which, you're right-- they're running at different frequencies, so how could they work together? One example of that is in assistive robotics, so assistive cooperation when you're using a joystick to control a robot arm. This is something that we work on commonly. And you can make the hardware very interesting, very intricate, like using haptic devices, and then be able to control things much more easily. On the other hand, you can have hardware that's really simple. So, for example, there are these sip-and-puff devices that a lot of patients with disabilities use. And it's a very simple device; you can only sip and puff. That's the only thing you can do, but then the algorithm that's underneath needs to be much more complicated to be able to capture what that sip and puff means. So that's kind of one place that this interplay between hardware and software-- hardware and algorithm-- really shows up. All right. Thank you. Of course, yeah. OK, so let me continue a little bit, and then I'll stop at the end of this section. And I can take more questions too after that.
I want to show some of these applications too at some point. This section is small-- the robotics at Stanford one. Yeah, so OK, so we talked about all of these, we talked about the history. Robotics at Stanford has an interesting history too. So it does have an old history. Here's a video that I kind of just wanted to show for fun. This is a video from Oussama Khatib's lab. Oussama always has the best videos. He has these two robots, Juliet and Romeo. The other fun thing in this video for me is lots of these people are faculty now or very famous in the field. So it's kind of fun to see them as grad students at Stanford. So, yeah, this is Oliver Brock. This is Gates, if you look at it closely. It is the first floor of Gates. It hasn't changed much. This robot is still on the first floor of Gates, so if you get a chance to go there, it is still somewhere. And yeah, this is basically getting this robot helper to help you do various sorts of things, like move objects for you. They had basically two of these. Let me move forward a little bit. So it helps you carry your objects and things of those form. It's a very old robot. And some of these concepts that you study now, even-- thinking about interaction with humans-- they were actually thinking about this back in the day. This is collaborative transport. These robots are not decentralized. They do actually have centralized control, but they're compliant, meaning that they're not rigid, right? If you move them, they also get moved. Yeah, so, I think later on there is this video of dancing with Romeo. So, again, it's compliant, and it kind of moves around you and all that. So these are some old videos from Stanford Robotics. A more recent video of Stanford Robotics and its successes, I guess, is about this DARPA Grand Challenge. So, it's not that recent. It's from 2005. So the DARPA Grand Challenge was this competition that DARPA put out. And basically, it was a competition that was trying to get researchers to work on autonomous driving. And this was basically the competition in 2005 where Sebastian Thrun was heading the Stanford team, and Stanley was the vehicle and actually won the competition. [CHEERS] Stanford logo. Stanley, Stanley, Stanley, Stanley. In case you don't recognize it, that is a Volkswagen Touareg. And to the finish line configuration. Ladies and gentlemen, boys and girls, it's been done. [CHEERS] So that is Stanley passing the finish line. After this, actually, Sebastian Thrun left Stanford and joined Google and started the Google self-driving car group there, now Waymo. And yeah, lots of advances in autonomous driving since then. But this was kind of one of the big successes of Stanford Robotics, winning, basically, this DARPA Grand Challenge, which was very exciting. But in general, robotics at Stanford kind of falls into a bunch of different departments. So in computer science, here are some of the faculty. I just want to show their faces so you know who they are, and you can take classes from them later on and things of those form. So, Oussama Khatib-- I've already shown a lot of videos from his lab. I have one more from his lab that I'll show later. And Oussama does a lot of work on field robotics, meaning that I'm going to send robots to places that humans haven't seen before and see what happens, which is really exciting. Then we have Ken Salisbury, who does a lot of work on helper robots and building systems that can actually help people.
Silvio does a lot of work around vision and robotics. So he's primarily a vision faculty member, but he is thinking about that intersection of vision and robotics. And some of the more recent people to join include myself, Jeannette, and Chelsea. So Jeannette does a lot of work here on manipulation. I am personally very interested in interaction, so I've been thinking about multi-agent interaction or interaction with humans. And Chelsea does a lot of work on robot learning, meta-learning, and things of those form. In addition to some of these faculty, there are some other folks in the CS department who do a lot of work that is related to robotics. So again, Fei-Fei does quite a bit of work in vision but also is interested in robotics, that intersection. And then Karen Liu and Jiajun Wu, both of whom recently joined Stanford. And they do a ton of work around physical simulation, graphics, things of those form. And that has a lot of relation to building robots that can work with deformable objects, and things of those form. And some folks who used to do robotics, I guess, are Andrew and Sebastian. So I showed you the video of Andrew's learning-from-demonstration work earlier, the flying video of the helicopter. And then Sebastian has done a lot of work in autonomous driving. They're both kind of around. So Andrew does a lot of work in healthcare these days. Sebastian comes in. He's an adjunct faculty now. Outside of computer science, you still have a lot of robotics faculty. So in the aero and astro department, we have Grace Gao, Mac Schwager, Marco Pavone, and Mykel Kochenderfer. I mentioned Mykel's work around aircraft systems earlier, so, building these ACAS Xu systems and trying to improve properties around them. They all do a lot of work around drones and quadcopters and helicopters, things of those form, multi-agent systems, and being able to get guarantees, and talk about risk. And finally, in mechanical engineering, we have a good number of faculty. Allison Okamura, Sean Follmer, Mark Cutkosky, Steve Collins, and Monroe Kennedy. A lot of them-- almost all of them-- do quite interesting work around design too, so building systems that are actually interesting and useful. The Stickybot that I showed earlier was from Mark's lab. The snake robot was from Allison's lab. Sean does a lot of interesting work at the intersection of robotics and HCI, so if that is something you're interested in, you should check out what these faculty teach and all that. So that was kind of my very quick robotics-at-Stanford overview. Let me spend another five minutes showing some of these applications, and maybe after that, I'll take questions for the last five minutes. And I have a seven-minute video that I'll just leave after the class for you guys to watch. It's like a 50-year history of robotics, which is kind of fun. All right, so, I wanted to show you, basically, some generally exciting applications of robotics. And I actually had a hard time classifying them, because they can be classified along different axes. But I ended up classifying them into three main groups. The first group is bio-inspired robots, which is basically, let's look at biology and try to build robots that are useful. So a lot of interesting design goes there. Another interesting direction is soft robotics, meaning that, hey, I'm going to build systems that use soft materials-- the tensegrity structure was an example of that.
So they're flexible, they're soft, they're not rigid, they're not going to break. I'm not going to actually talk much about soft robots, but I'm going to talk about manipulating soft objects, which is a very difficult algorithmic question. And then finally, if I have time, I will talk a little bit about domestic and interactive robots, which is something that I think is really exciting. Interaction with humans is something that you should really care about, as robots are basically starting to interact with us. All right, so, bio-inspired robots. This is more of an interesting design question. So, from kind of early on, everyone was interested in humanoids, because we want robots to look like humans for some reason. So, there's a lot of work around humanoids and building robots that look like humans, that is, they have two arms and two hands and two legs and a face. But at some point, people realized robots don't need to look like humans. And they started looking at nature in general and started thinking about, generally, bio-inspired robots, right? There are a lot of animals that can get to places that humans can't, and we can build robots that are similar to them. And another interesting topic that shows up here, specifically under humanoids, is this idea of walking. So people have been obsessed with walking for years now. And it's an interesting problem. If you want to build a robot that walks kind of like humans, that is still very difficult. All walking robots have like weird gaits, and they don't really walk human-like. And when they do, they're just super inefficient. But humans are just amazing at walking. And that's in general a very active area of research, trying to get robots to walk. And why do we care about that? Well, first off, it's an interesting question. Second, building exoskeletons-- building systems that can help people walk-- is always an interest in this field. So let's look at a few bio-inspired robots. I really just want to show a lot of videos of these systems. So, one type of bio-inspired robot is looking at insects like cockroaches and trying to build robots that kind of act like cockroaches, because they're amazing at getting through obstacles. And there's a team at UC Berkeley, Ron Fearing's group, that basically designs robots that are similar to cockroaches, and they go through places like cockroaches. And the nice thing about it is, yeah, they're very agile, they go through things. The other thing is they're small, and they can be super fast. So you can have a swarm of these robots get to places super fast. And another interesting thing about cockroaches is when they navigate, they use their antennas. So, that is actually how they figure out where the wall is. Using their antennas, they kind of follow the wall. And basically, people in Ron's group have been using similar types of ideas to be able to sense the world and actuate in this world. And they even build these robots using origami, which is kind of interesting. But it makes sure that they're small, they don't take as much battery power and energy, and they're kind of light. Let me actually move to this one. The other robot that I showed you a little bit earlier, the Stickybot from Mark's lab, is a bio-inspired robot.
So basically, if you look at gecko feet, they have these very, very tiny suction cups that get attached to glass. This robot is using a similar type of paradigm, and because of that, it can just walk up really slippery slopes like polished granite. And that is a super impressive thing. So lots of cool design going on here. Similarly, I think snake robots and eel robots are very popular. There are a lot of links connected to each other, and they can navigate easily. This is an eel-like robot. It's an underwater robot. It does have a camera on the front. And based on that, it navigates, which is kind of cool. All right. And then this is a hopper. So this is basically a robot-- again from Ron's group, Salto-- which kind of jumps around like a bush baby. It's kind of cool. If you guys have used things like MuJoCo, which is a simulation environment, you might see these random animals in MuJoCo. Part of it is roboticists really care about different types of animal motion, like swimming and hopping, and that's why those show up in this MuJoCo type environment, which is a physics simulator that allows you to train things basically in simulation. All right, so those were generally bio-inspired type robots. As I mentioned earlier, people have been obsessed with humanoids, so lots of energy and money go into building humanoids. Honda, actually, spent a lot of money on building this robot called ASIMO. Its walking is kind of weird. Hello, [INAUDIBLE]. Hello, ASIMO. Hello. It's nice to see you. It's nice to see you too, ASIMO. I am happy to be here with you today. Thank you. I'm excited to be here in Washington, DC. The all-new ASIMO received a-- All right, I'm going to cut it there. But yeah, so, humanoids have been really exciting. And another place that humanoids actually show up-- one other example I want to show-- is sending humanoid-type robots to spaces that you wouldn't be able to reach before. So I want to show a video of this robot from Oussama's group, Ocean One. Some of you might have seen this video. This is basically not a full humanoid. It does have two arms, so you can operate it and get the robot to do various things. But it goes underwater to places that people have not been able to go before. So let's just quickly watch this video. It's kind of a nice video. Oussama's work. [MUSIC PLAYING] Ocean One is aimed at bringing a new capability for underwater exploration. The intent here is to have a diver diving virtually, creating a robot that can be the physical representation of the human. You want a robotic diver that can have bimanual capabilities. So it has two hands, it has stereo vision. And the most amazing thing about it is that you can feel what the robot is doing while sitting up on the boat. And this is combining the technology of haptics, that is, the idea that we can reflect the contact forces. It's almost like you are there. With the sense of touch, you create a new dimension of perception. This robot is oil-filled. It allows us to take the robot very deep. This robot can go to a thousand-- [INTERPOSING VOICES] --human-like machine that is also human-friendly. La Lune is a 17th century shipwreck located about 20 miles off the coast of Toulon in France at a hundred meters. In the last year we have been working and getting our robot ready to take on that expedition. And we are going to land on the moon. More than 70% of the surface of the planet is water.
We have a lot of structures, a lot of coral reefs to monitor. We need to reach down there. You can think about it as a solution where-- Again, we don't have that much time. I'm going to kind of move forward, because probably the last thing I want to show is the walking video. And then I'll pause it and ask for questions. But yeah, in general, if you're interested in underwater robots, Oussama has a lot of work around that. He's at Stanford. He teaches classes, and I think that's a very interesting direction. And then finally, I guess the last video that I can show under this category is this idea of walking and jumping and things of those form. So, there's a lot of, again, excitement around that. And Boston Dynamics-- the first video I showed you guys with the dancing robot, it was also from Boston Dynamics-- does a lot of work around building very dynamic robots. So they have really good controllers for these robots. And this is Atlas from Boston Dynamics. It jumps, flips, even. That is super impressive. And usually, roboticists show that one video of it working and don't talk about videos of it not working. But more recently, people are showing more videos of things not exactly working. So I think this video-- it's actually super impressive how it recovers, because that is really hard to do in real time. That was a failure. Yeah, so OK. So lots of excitement around these areas. I can start taking questions now. I'm not going to show more videos. I have more stuff to show, but let me just answer a couple of questions, and then at the end, at 2:20, I'll leave this video of 50 years of robotics that Oussama put together, and it has fun music. So any questions? I have a question. I was super impressed by the last video you showed of the robot doing flips. I was wondering-- it looked really heavy, like it had a lot of materials, and I'm wondering why they chose to equip it with such materials. I thought maybe using lighter materials would make it easier to jump. I was just curious if you knew the reason behind how they designed that robot. That's a really good question, yeah. So I don't know the details of it, because I don't personally work too much on walking and the design side of things. But yeah, I actually don't know what materials they are using. They definitely do consider different types of materials and making sure that it's lightweight and all that. But I think there's just a lot of drawings and a lot of stuff going on with that robot. If you're interested to learn more about that, check out Boston Dynamics' website. They have all their cool robots there. Hi, I have a question. So I was wondering, what are some examples of state-of-the-art research involving robotics and NLP or language-based AI? For example, voice-activated systems or related work? Mm-hmm. Yeah, there's a lot of excitement around NLP and robotics, actually. I actually have a student joining me and Percy, which is very exciting. This is like the first time doing NLP and robotics together. There's a lot of work around instruction-following, so basically just making-- and interactive teaching. So when you have a person and a robot at home, like how would the robot learn that you care about a house of cards or you don't care about a house of cards, type of a thing. So thinking about human-robot interaction a little bit more carefully, then you actually have access to NLP. So that is one place that it shows up.
Another place that is a little bit harder to think about, but I think has a lot of value, is thinking about the large data sets of natural language data that we have generally. We have a lot of text data, and if, from that, you can learn something about context, learn something about how a robot should cook an egg, I think that is very interesting too. I haven't seen that much work around it, but, again, lots of excitement in that particular intersection of NLP and robotics. Hi. I have a question as well. Maybe this is more in the scope of visual recognition, but robots will be playing a part in this too. So, the world unfortunately will always consist of good actors and evil actors. And for international security purposes, there will be a role-- or there already is a role-- for robots and autonomous systems. Well, these same methods can unfortunately also be used for human rights violations. How do you build it when the technology is neutral and it's how it's used that determines what the outcome is? Maybe in the case of human rights violations, how can you build systems so that-- what would be the way to use technology to evade an authoritarian regime that will have the best technology? I've seen some work around how to fool facial recognition. How can technology work against technology when it's needed and also serve its purpose when needed? I think it's a tough question. It is a very tough question. I'll refer that to Tino next week specifically. But in the case of vision and-- in the case of using ML in general, machine learning, I think it is much, much tougher. In the last lecture, I will talk about this a little bit, this idea of fooling neural networks. There is some recent work basically showing that you can always find adversarial examples. So this idea of trying to safeguard your system so it doesn't get affected by adversarial examples is just not going to work. There's this proof by [INAUDIBLE] and others actually showing that you can basically always find an adversarial example in some settings, under some instances. In the case of robotics, a big part of the discussion is around this idea of autonomous weapon systems, as I mentioned earlier. There are discussions on the number of drones that, for example, can be purchased at the same time, and things of those form. A lot of the concern there is basically autonomous weapon systems becoming weapons of mass destruction, which is kind of scary as I talk about it. But yeah, the discussions around that are about what sort of limitations can be put, what sort of regulations can be put there so people don't buy too many drones at the same time and weaponize them, things of those form. But I'm definitely not an expert in this. I refer you to Tino Cuellar next week for more details on this. Thank you. Thank you so much. I have a question regarding all this-- since robotics is such an integrated subject, spanning mechanical engineering, artificial intelligence, and also power management and regulations-- what is the biggest limiting factor that prevents robotics from affecting everyone's lives, from being widely adopted? From being widely adopted? Dealing with uncertainty is still so difficult, right? So, like, you have robots on factory floors, in confined spaces. They can move around there so much more easily.
Putting them in a world where humans are just walking around-- there are so many ways that a human could walk around them, and figuring out what those are can be really difficult. So, in general, dealing with uncertainty, dealing with-- in the case of autonomous driving, we're dealing with things like near-accident scenarios that it hasn't seen before. All of that uncertainty is a big factor that's not allowing us to have robots out there, widely used in our everyday lives. So the current AI technology is mostly based on learning algorithms, but if you keep doing learning, that means you can only learn existing behaviors. So how are you going to deal with these uncertainties? Are there any efforts to deal with uncertainties in life, or to do some self-generated motion or self-motivated actions from the robot itself? Mm-hmm. Mm-hmm. Mm-hmm. Yeah, so, definitely. There's a lot of work around actively generating these scenarios, active learning in this domain-- but the robot still has some sort of hypothesis space that it can search in, right? So, you have a hypothesis space of things that can happen, and within that, you can search. And, yeah, so there are these things that are called known unknowns and unknown unknowns. You can't really do much around unknown unknowns other than just randomly experiencing them. But for known unknowns, yeah, definitely there's a lot of work on actively looking for the most informative data. I guess another reason that we don't have robots widely used is it's such an integrated system, and it's such an interconnected system-- you have the best AI algorithm and all of a sudden, your camera fails. A hardware failure can affect-- there are so many things that can fail in that pipeline that it makes it just such a difficult system to debug. It's like everything coming together. Thanks. Well, so, all right. So I'm going to just leave this video on. And then at the end of it, I'm going to sign off. Because it's a fun-- it has fun music. Oussama again made this. Oussama is awesome at making music. But if you have more questions about these things, just come to the office and I'd be happy to answer any questions. Let's just watch 50 Years History of Robotics. Or if you guys want to sign off, sign off too. I'll talk to you later. This is seven minutes. All right. [TYPEWRITER] [MUSIC PLAYING] All right. That was kind of long, but kind of fun. So this is what you guys have in 2020. OK. Do I still have 30 people? Oh my god. OK. All right. Good seeing you all. That's it. I'll see you at office hours and our next lecture. Talk to you later. Bye.
The next thing I want to do is talk a bit about the history of AI. And, obviously, the history of AI is going to be necessarily abbreviated and simplified here, but I just want to give you an appreciation for how multifaceted the history is and how rich and sometimes controversial it is. So a natural starting point to talk about the history of AI is Alan Turing's famous paper in 1950 called Computing Machinery and Intelligence. So in this paper, he asks the question, can machines think? And he proposes the Imitation Game as his solution, more popularly known as the Turing Test. And some of you probably know the Turing Test is said to be passed by a machine if it can fool a human judge into thinking that it is actually a human being. So this paper is remarkable not because it built a system or proposed new methods, but because it framed the philosophical discussions of what is intelligence for years to come. And you just have to appreciate how difficult a notion intelligence is to pin down, so this was really the first actionable formal answer to the question, can machines think? And now, whether working on the Turing Test is a good idea that will lead to progress is questionable and controversial, but at least philosophically it's quite thought-provoking. So for us, one major takeaway of the Turing Test, which was not really highlighted, is this objective specification. So note that the test itself is meant to be capturing what a system ought to be doing, independent of how you get there. It doesn't say whether it should be using neural networks or logic-based methods or so on. And this modularity is going to be really important to us in this course. So at the end of the paper, Turing does speculate on what might work. So he talks about two possible approaches. You could take a top-down approach and try to tackle abstract problems such as chess. This is the route taken by symbolic AI. You could also, quote unquote, "provide a machine with the best sense organs"-- AKA sensors-- and teach it like a child. And this is more of the approach taken by neural and statistical AI, and both have been tried, and we'll see how all three types of AI-- symbolic, neural, statistical-- kind of meld together at the end. So to start our first story, let's go to the summer of 1956. The place was Dartmouth College. John McCarthy, who actually founded the Stanford AI lab, organized a workshop. He gathered the brightest minds of the time, with Marvin Minsky, Allen Newell, and Herbert Simon in attendance, all of whom went on to make seminal contributions to AI, and the participants set out a not-so-modest proposal. They claimed that every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. So they were really after the moon. They were after generality. And this was post-war, with computers just coming on the scene. It was a really exciting time, and people were really ambitious. So during this time, there were a few systems that were built. Arthur Samuel built a computer program that could play checkers at a reasonable amateur level and actually featured some machine learning. Allen Newell and Herbert Simon came up with the Logic Theorist, which could prove theorems. For one theorem, they actually found a proof that was better than the human-written proof, and they tried to submit a paper on the result, but the paper got rejected because the reviewers said it was not a new theorem.
What the reviewers didn't realize was that the third author was actually a computer program. Later they generalized these ideas to the General Problem Solver, which was aimed at solving any problem, provided it could be suitably encoded in logic. And again, this carries forward the ambitious general intelligence agenda. And this was a time of high optimism, with the leaders of the field, who were all really impressive thinkers, predicting AI would be solved in a matter of years. But we know that they didn't get it solved in 10 years, and there were some tasks such as machine translation which were very stubborn. So there is now a folklore story, I don't know how true it is, but it's amusing nonetheless. You take a sentence like, "The spirit is willing but the flesh is weak." Translate it into Russian, which was the favored language for translation in the '50s, and you translate it back, and then you get, "The vodka is good but the meat is rotten." So this was less than amusing to the government funding agencies, who decided to write a report showing how machine translation really wasn't going anywhere and cut off funding. This led to the first AI winter. So what went wrong here? There are two things. First is that most of the approaches involved casting problems as logical reasoning, which required a search over an exponentially large state space, and the hardware at the time was just simply too limited. And secondly, even if the researchers had infinite compute, they would still not be able to solve AI, because there are just too many concepts in the world. Words, objects-- and all this information has to somehow be put into the AI system. So these grand ambitions weren't realized, but nonetheless there were some useful contributions, many due to John McCarthy, that came out of this era. First, Lisp was invented for AI, and arguably it's still the world's most advanced programming language. Garbage collection is the reason that, if you're programming only in Python, you never have to think about managing memory yourself. And time-sharing, the ability for multiple people to use a single computer, was prescient at the time. So then fast forward to the '70s and '80s; knowledge was the key word. And AI researchers thought knowledge was the key to combat both the computation and information limitations of the previous era. And at that time, expert systems became very fashionable, where a domain expert could encode knowledge in the form of rules, usually looking like this. And there is a noticeable shift as well. The solve-it-all optimism from the '50s and '60s was gone, and instead researchers focused on very practical systems targeted at particular domains. For example, chemistry, medical diagnosis, and business operations. And there were some good things. Knowledge did help curb the information complexity and also restricted the state space, which alleviated the computational burden. And this was the first time that AI had real applications in industry, but there were obviously problems. Deterministic rules couldn't handle the complexity and uncertainty of the real world, and moreover, these rules just quickly became too complex to create and maintain. So this is a quote from Terry Winograd, who some of you know was on the HCI faculty at Stanford. But before he was HCI faculty, he worked at MIT as an AI researcher, and this is what he had to say in the mid '70s. He thought that it was a dead end.
There were just too many complex interactions between all the components, no easy footholds, and you just couldn't keep a mental model of what was going on in your head. And, moreover, there was a lot of overpromising and under-delivering. The field collapsed again, and it really seemed that history was repeating itself. So at this point, we're going to leave aside the story of symbolic AI, which dominated AI for multiple decades, and go back in time to 1943 to tell the story of neural AI. 1943 is the year often attributed to the birth of artificial neural networks: McCulloch and Pitts devised a simple model of the neuron and studied its mathematical properties, but they didn't do anything in the way of learning the model's parameters. In 1949 came the first learning rule, from Donald Hebb, based on the mantra that cells that fire together wire together. It was nice and simple, but it didn't really work. In 1958, Rosenblatt came up with the perceptron algorithm for learning single-layer artificial neural networks, a.k.a. linear classifiers, which actually turned out to work really well and was used even fairly recently. In '59, there was an analog for linear regression by Widrow and Hoff, ADALINE; a multilayer generalization, MADALINE, was actually used to eliminate echoes on phone lines at the time, one of the first real-world applications of neural networks. And then 1969: this was a big year. Marvin Minsky and Seymour Papert wrote a small book called Perceptrons, in which they analyzed the mathematical properties of perceptrons, and they had a little, almost trivial result showing that a single-layer perceptron couldn't represent the XOR function. Even though that said nothing about the capabilities of deeper networks, this book is largely credited with shutting down neural network research and enabling the continued rise of symbolic AI. It's a really interesting piece of history, and I encourage you to go examine it. In the '80s, neural networks started coming back again. 1980 brought the first convolutional neural network, which was trained in a kind of ad hoc way. In 1986, Rumelhart, Hinton, and Williams reinvented and popularized backpropagation for multi-layer networks, and training became a little more principled. In 1989, Yann LeCun devised a convolutional network that was able to recognize handwritten digits and was actually deployed by the USPS to recognize zip codes; this was one of the first major success stories of neural networks. But until the mid 2000s, neural network research was still fairly niche, I would say, and the networks were notoriously hard to train. In 2006, this started changing. Geoff Hinton and his colleagues had a paper showing how you could use unsupervised layerwise pre-training to mitigate some of these difficulties, and the term deep learning started getting used around this time as well. But it was really 2012, I would say, that was the major break for neural networks. Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton wrote a landmark paper, which came up with what is now called AlexNet, a convolutional network that produced huge gains in object recognition. At the time, the computer vision community was very skeptical, and almost overnight it completely transformed the field. Computer vision without neural networks today almost feels like a distant memory. 2016 was another big event.
AlphaGo defeated Lee Sedol at Go, something that experts thought was still many decades away, and that just more firmly established deep learning as a dominant paradigm in AI. And this continues even to the present day. But let's reflect so far. We have seen two intellectual traditions: symbolic AI, which has its roots in logic, and neural AI, with its roots in neuroscience. The two have fought fiercely over the decades over philosophical differences, but I want to suggest some food for thought: maybe there are deeper connections here. Remember that McCulloch and Pitts paper that introduced neural networks, arguably the roots of deep learning? Well, they spent most of it talking about how the model could actually encode logical operations. And the game of Go is actually a perfectly logical game, defined by a few elegant, simple rules, but AlphaGo used the pattern-matching capabilities of neural networks to crack this otherwise logical game. So there may be room for more symbiosis than we think. Now there's a third and final story that we must tell to complete the picture. This story is not really about AI per se; it's about the influx of ideas from other areas that have helped shape and form a mathematical foundation for AI, and we call this statistical AI. So machine learning is very popular, but the idea of fitting models from data, which is at the core of machine learning, goes far back, even to Gauss and Legendre at the beginning of the 19th century, who developed least squares for linear regression. Classification also appeared very early in statistics. And AI also involves sequential decision-making problems: for deterministic versions, there's Dijkstra's algorithm from the algorithms community; for models with uncertainty, Bellman, from control theory, created Markov decision processes. And notice that all of these developments largely predated the '40s and '50s, when AI really started springing up. Now, you might have noticed, if you were paying close attention, that we left symbolic AI at the end of the '80s, but neural AI only really started gaining traction in the 2010s. So what was going on in between? What was going on in between was a period when the term AI wasn't really used, at least not to the extent that it is today. And I think that part of it was to add distance from the failed attempts of the recent AI winter, and also because the goals were just more down to earth. People talked about machine learning, and during that period there were two paradigms. There were Bayesian networks, developed in the '80s by Judea Pearl, which provided a framework for reasoning under uncertainty, something symbolic AI didn't have a satisfying answer for. In 1995, support vector machines were developed, derived from ideas in learning theory and optimization; at that time, SVMs were easier to tune than neural networks and really became the favorite tool in machine learning before deep learning started taking off again. So to wrap up, there are three stories that we talked about. Symbolic AI took a top-down approach and really failed to deliver on its original promise, but it did offer a vision and built impressive artifacts, like question-answering and dialogue systems. Imagine trying to do this on the ancient hardware of the '60s.
Neural AI took a completely different approach, proceeding bottom-up, starting with simple perceptual tasks, which the symbolic community wasn't interested in at the time. Compare machine translation with removing echoes on phone lines, for example. But in the end, it offered a class of models and a way of thinking about data that has proven capable of conquering today's ambitious problems. And finally, statistical AI, foremost for us, offers mathematical rigor and clarity. For example, when in this course we define objective functions as separate from optimization, or have a language to talk about the complexity of a model in learning, these ideas and this language all stem from statistical AI, and the course will actually be presented mostly through the lens of statistical AI. But I want to highlight that all three views are compatible and just offer different advantages on the same underlying ideas. Stepping back, the modern world of AI is kind of like New York City: it's a melting pot that has drawn from a lot of different fields, statistics, algorithms, neuroscience, economics, and it's really the symbiosis between all these fields, how they come together and allow you to tackle real-world applications, that makes AI so rewarding. OK, so that ends the AI history module. You can read much more about it at a few links at the end of these slides.
Okay. Let's start, guys. Okay. So our plan for today is to catch up; we're a little behind, so, uh, it's okay. Today, I want to talk about MDPs, Markov decision processes, and my plan is to talk about that for the first hour. After that, I want to talk, uh, for 10 minutes about the previous lecture. Remember, we went over relaxations kind of quickly, so maybe we can go over that again. And then, in the last 10 minutes, I want to talk about the project, kind of the plan for the project and how we should think about it; this is coming up, so we should start talking about it. This is an optimistic plan, [LAUGHTER] so let's see how it goes; this is the current plan. Okay. All right. So, okay, let's get into it. Markov decision processes. Let's start with a question. Um, let's actually do this just by hand, so you don't need to go to the website. The question is, it's Friday night and you want to go to Mountain View, and you have a bunch of options, but what you want to do is get to Mountain View in the least amount of time, okay? Which one of these modes of transportation would you use? Like, how many of you would bike? No one would bike. A couple of you would bike. How many of you would drive? This is popular; in Mountain View, driving would be good. Caltrainers? Some people would take Caltrain, sounds good. Uber and Lyft? We have a good distribution. Fly? [LAUGHTER] Yes, yeah, a good number of you want to fly, uh, and as flying cars are becoming a thing, this could be an option in the future. There are actually a lot of startups working on flying cars. Um, but as you think about this problem, the way to think about it is that there are a bunch of uncertainties in the world; it's not necessarily a search problem, right? You could bike and get a flat tire, and you don't really know that in advance; you have to take that into account. If you're driving, there could be traffic. If you're taking the Caltrain, there are all sorts of delays with the Caltrain, and all sorts of other uncertainties that exist in the world, and you need to think about those. So it's not just a pure search problem where you pick your route and then you just go with it; there are things that can happen that can affect your decision. And that takes us to Markov decision processes. We talked about search problems, where everything was deterministic, and now we're talking about the next class of state-based models, which are Markov decision processes. The idea is, you take actions, but you might not actually end up where you expected to, because there is this nature around you, this world around you, that's going to be uncertain and do stuff that you didn't expect, okay. So far we've talked about search problems. The idea there is, you start with a state, you take an action, and you deterministically end up in a new state. If you remember the successor function, successor of S and A would always give us S prime, and we would deterministically end up in S prime. So if you have that graph up there, and you start in S and decide to take action one, you're going to end up in A; there's no other option. That's where you're going to end up, okay. Uh, and the solutions to these search problems are these paths.
So we have a sequence of actions, because I know that if I take action one, then action three, then action two, I know what path I'm going to end up on, and that would be ideal, okay. So when we think about Markov decision processes, that is the setting where we have uncertainty in the world, and we need to take that into account. The idea is, you start in a state, you decide to take an action, but then you can randomly end up in different states. You can randomly end up in S_1 prime or S_2 prime. And again, that's because there are so many other things happening in the world, and you need to worry about that randomness and make decisions based on it, okay. And this actually comes up in pretty much every application. It comes up in robotics. For example, if you have a robot that wants to go and pick up an object, you decide on your strategy, everything is great, but when it comes to actually moving the robot and getting it to do the task, the actuators can fail, or there might be all sorts of obstacles around you that you didn't think about. So there is uncertainty about the environment, or uncertainty about your model, like your actuators, that you didn't necessarily think about, and in reality, those affect your decisions and where you end up. This comes up in other settings, like resource allocation. In resource allocation, maybe you're deciding what product to produce, and that depends on customer demand, which you might not have a good model of; that's uncertain, right? It really depends on what products customers want and what they don't. You might have a model, but it's not going to be accurate, and you need to do resource allocation under that uncertainty about the world. Um, a similar thing happens in agriculture. For example, you want to decide what to plant, but again, you might not be sure about the weather, whether it's going to rain, or whether the crops are going to yield or not. So there's a lot of uncertainty in these decisions that we make, and that takes these problems beyond search problems, into problems where we have to make decisions under uncertainty. Okay? All right. So let's take another example. This is a volcano crossing example. We have an island, and we're on one side of it, in that black square over there. What we want to do is go from this black square to the other side of the island, where we have the scenic view, and that's going to give us a lot of reward and happiness. So my goal is to go from one side of the island to the other. But the caveat is that there's this volcano in the middle of the island that I need to get past, okay. And if I fall into the volcano, I'm going to get a minus 50 reward; it's really more like minus infinity, but for this example, imagine you get a minus 50 reward if you fall into the volcano, okay. All right. So, if I have this slider here on this side, and my slip probability is 0, meaning I'm sure I'm not going to fall into the volcano, should I cross the island? No or yes? Well, I should cross the island, uh, because I'm not going to fall, right? I'm not going to fall into that minus 50.
Uh, with slip probability 0, I'll get to my 20 reward, everything will be great, okay. But the thing is, we've been talking about how the world is stochastic, and the slip probability is not going to be 0. Maybe it's 10%. So if there's a 10% chance of falling into the volcano, how many of you would still cross the island? A good number, yeah. So, um, the optimal solution is actually shown by these arrows here, and yes, the optimal solution is still to cross the island. The value here, and we're going to talk about all these terms, is basically the value you're going to get at the beginning state; we'll talk about it, but it's the expected utility that you're going to get. It's going to go down, because there is some probability that you fall into the volcano, but the best thing to do is still to cross the island. How about 20%? How many of you would do it with 20%? Some number of people, [LAUGHTER] it's less. Um, it still turns out that the optimal strategy is to cross. 30%? One person. [LAUGHTER] So 30% is actually the point where you'd rather not cross, because there's this volcano, and with a large probability you could fall into it, and the value goes down. Okay. So these are the types of problems we're going to work with. Yes. The value there is with respect to the 2, because 2 is the reward that you're going to get at that state, and the value is the thing you compute and propagate back. We'll talk in detail about how to compute the value, okay? All right. Okay. So that was just an example, an example of a Markov decision process. What we want to do in this lecture is, again, model these types of systems as Markov decision processes, and then we're going to talk about inference-type algorithms: how do we do inference, how do we come up with the best strategy? Um, and in the middle, I'm going to talk about policy evaluation, which is not an inference algorithm, but it's a step towards it. It's basically this idea: if someone tells me this is a policy, can I evaluate how good it is? And then we'll talk about value iteration, which tries to figure out the best policy I can take, okay? So that's the plan for today. Then next lecture, we're going to talk about reinforcement learning, where we don't actually know what the rewards are, and we don't know what the transitions are. Uh, so that's the learning part of these MDP lectures. Rita is actually going to give the next lecture, on Wednesday, right? Okay. So let's get into Markov decision processes. We have a bunch of examples throughout this lecture, and this is another one. All right, I actually do need volunteers for this. In this example, uh, we have a bunch of rounds, and the idea is that at any point in time, you can choose between two actions: you can either stay or you can quit, okay? If you decide to quit, I'm going to give you $10; well, I'm not actually going to give you $10, but imagine I'm going to give you $10, and then we'll end the game, okay? And if you decide to stay, then you're going to get $4, and then I'll roll the dice. If I get a one or a two, we'll end the game.
Otherwise, you're going to continue to the next round, and you can decide again, okay? So who wants to play? Okay. All right. Volunteer. Do you want to stay or quit? Quit. [LAUGHTER] So that was easy. You got your $10. [LAUGHTER] Does anyone else want to play? Stay. Stay again. Oh, you got 8, $8. Sorry. [LAUGHTER] Um, so you kind of get the idea here, right? You have these actions, and with one of them, if you decide to quit, you deterministically get your $10 and you're done. Uh, with the other one, it's probabilistic, and you want to see which one is better, what the best policy to take in this setting would be. We'll come back to this question; we will formalize it and go over it. I have a question: I think I see a similar example. Is it better to always just continue once and then quit? Like, isn't it better to switch? Okay, so then you need to actually compute the expected utility, right? Yeah. And that's what we want to do. You might say, "Oh, I want to stay, and then I get my $4, and then I want to quit, and then I get 14," and maybe that is the way to go. That could be a strategy, but for that, we are going to define what the optimal policy would be. One other thing to keep in mind for this particular problem, and I'll talk about it when I define a policy, is that the policy, the way we define it, is a function of state. So if you decide to stay, that is your policy; if you decide to quit, that is your policy. We're not allowing switching right now; I'll come back to this later in the lecture. But I'll come back to this problem, okay? So if you decide that your policy, the thing you want to do, is to just stay, to keep staying, this is the probability distribution over the total rewards you're going to get. You'll get 4 with some probability. Then, if you're lucky, you'll get 8; if you're luckier, 12; and if you're luckier still, 16. But the probabilities come down really quickly. So the thing we care about in this setting is the expected utility, right? In expectation, if I run this, and I average over all of these possible paths, what is the value I get? And for this particular problem, it turns out that in expectation, if you decide to stay, you should get 12. So you got really unlucky that you got 8. [LAUGHTER] But in general, in expectation, you should decide to stay, okay? And we actually want to spend a little bit of time in this lecture thinking about how we get that 12, how to go about computing this expected utility, and, based on that, how to decide what policy to use, right? Okay. And then if you decide to quit, the expected utility there is kind of obvious, right? You're quitting, and with probability 1 you're getting $10, so $10 is the expected utility of quitting. Yes. [inaudible] Uh, so when you roll a die, I said if you get a one or a two, we end the game. And if you get anything else, the other two-thirds of it, you continue.
So the one-third, two-thirds comes from there, okay? All right. I'll come back to this example; this is actually the running example throughout this lecture, okay? [inaudible] So how are you able to do this calculation? We're going to talk about that next; that is what the lecture is about. Okay. So, uh, I do want to finish in an hour, which is why maybe I'm rushing things a little bit, but we are going to talk about this problem throughout the class, so don't worry about it. If it's not clear at the end, we can clarify things, okay? All right. So I do want to formalize this problem, and the way I want to formalize it is as an MDP, a Markov decision process. Maybe I can just use this board. So in Markov decision processes, similar to search problems, we're going to have states. In this particular game, I have two states: I'm either in the game, or I'm out of the game. So there's an end state, where everything has ended, you're out of the game, you're done, okay? Those are my states. Then, in each of these states, I can take an action. If I'm in the in state, I can take two actions, right? I can either decide to stay, or I can quit, okay? And if I decide to stay, from the in state, that takes me to something I'm going to call a chance node. A chance node is a node that represents a state and an action. The blue things are my states, but I'm creating these chance nodes as a way of working through this example, to see where things are going. So these blue states are my states; I'm in S. The chance nodes are over a state and an action. So basically, this node tells me that I started with in and I decided to stay, okay? And the chance node here tells me that I started with in and I decided to quit, okay? Yes. Why do we still call it a chance node even though it's deterministic? So I deterministically go to it, but the chance node is where I introduce the probabilities: from the chance node, I can probabilistically end up in these different states. In the case of quit, it's also deterministic. In the case of quit, yes, it's deterministic. So for quit, we say that with probability 1, I'm going to end up in the end state. I'll draw that with an edge coming out of my chance node, and I'll say, with probability 1, I'm going to get $10 and just be done, okay? But this other state is actually where interesting things happen: with probability two-thirds, I go back to in and get $4, or with probability one-third, I end up in end, and I still get $4, okay? So that is my Markov decision process. So maybe we can keep track of a list of things we're defining in this lecture. We just defined states, and then we said, well, we're going to have these chance nodes, because from these chance nodes we probabilistically come out, depending on what happens in nature, right?
Like, this is the decision I've made; now nature decides which state you end up at, and based on that, we move forward, okay? All right. So, more formally, there are a bunch of things to define for an MDP. Similar to search problems, we need to define the same set of things. We have a set of states; in this case, my states are in and end, okay? We have a start state; I'm starting with in, so that's my start state. I have actions as a function of states. So when I ask what the actions of the in state are, my actions are stay or quit. What are the actions of end? I don't have anything; great, the end state doesn't have any actions coming out of it. And then we have these transition probabilities. Transition probabilities, more formally, take a state, an action, and a new state, so S, A, S prime, and tell me the transition probability of that; it's one-third in this case. And then I have a reward, which tells me how rewarding that was; that was $4, okay? So when I'm defining my MDP, the new thing I'm defining is this transition probability, which tells me: if you're in state S, take action A, and end up in S prime, what is the probability of that? I'm in in, I decide to stay, and I end up in end; what's the probability of that? One-third. Maybe I'm in in, I decide to quit, and I end up in end; what's the probability of that? It's equal to 1, okay? And then, over the same state, action, and next-state triples, we define a reward, which tells me how much money I got, or how good that was: $4 in this case, or, if I decide to quit, $10, okay? Um, and if you remember, in the case of search problems we were talking about cost. I'm just flipping the sign here: there we wanted to minimize cost, here we want to maximize reward. Just a more optimistic view of the world, I guess. So that is how the rewards are defined, okay? We also have this IsEnd function, which, again similar to search problems, just checks whether you're in an end state or not. And in addition to that, we have something called a discount factor. It's this value gamma, which is between 0 and 1. I'll talk about this later; don't worry about it right now. But it's a thing to define for our search pro- er, for our MDPs, okay? All right. So how do I compare this with search? Again, these were the things we had in a search problem: we had the successor function that would deterministically take me to S prime, and we had this cost function that would tell me the cost of being in state S and taking action A. The major things that have changed are that instead of the successor function, I have transition probabilities, these T's, which tell me the probability of starting in S, taking action A, and ending up in S prime. And cost just became reward, okay? So those are the major differences between search and MDPs, because things are not deterministic here, okay? All right, so that was the formalism. Now I can define any MDP, any Markov decision process. And one thing to point out is that this transition probability, this T, specifies the probability of ending up in state S prime if you take action A in state S. So these are probabilities, right?
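To make this concrete, here is a minimal sketch of the dice game written as an MDP in Python. The interface (startState, actions, succProbReward, discount) mirrors the style of code used in this class, but the class itself and its names are an illustrative reconstruction, not the official course code:

```python
class DiceGameMDP:
    """The stay/quit dice game, written as an MDP."""

    def startState(self):
        return 'in'

    def isEnd(self, state):
        return state == 'end'

    def actions(self, state):
        # Only the 'in' state has actions; 'end' is terminal.
        if state == 'in':
            return ['stay', 'quit']
        return []

    def succProbReward(self, state, action):
        # Returns a list of (newState, T(s, a, s'), Reward(s, a, s')) triples.
        if action == 'quit':
            return [('end', 1.0, 10)]       # quit: probability 1, get $10
        else:  # stay
            return [('in', 2.0 / 3.0, 4),   # die shows 3-6: get $4, keep playing
                    ('end', 1.0 / 3.0, 4)]  # die shows 1-2: get $4, game over

    def discount(self):
        return 1.0

    def states(self):
        return ['in', 'end']
```

With this in hand, the transition probability questions that follow are just lookups into succProbReward.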
So, for example, and we have done this example, but let's just do it on the slides again: if I'm in state in, I take action quit, and I end up in end, what's the probability of that? 1. If I'm in state in, I take action stay, and I end up in state in again, what's the probability of that? Two-thirds. And if I'm in state in, I take action stay, and I end up in end, what's the probability of that? One-third, okay? And these are probabilities, which means they need to add up to 1. But one thing to notice is, what exactly adds up to 1? All of the entries in a column are not going to add up to 1. What adds up to 1 are the probabilities over all the possible S primes you can end up at. So if you look at this table again, if you look at being in state in and taking action stay, then the probabilities for the different S primes are two-thirds and one-third, and those two are the things that add up to 1. And in the first case, if you're in state in and you decide to quit, then whatever S primes you can end up at, in this case just the end state, those probabilities add up to 1. So, more formally, if I sum over S prime, these new states I can end up at, the transition probabilities need to add up to 1, okay? Because they're probabilities that tell me what can happen if I take an action, okay? And these transition probabilities are non-negative, because they are probabilities. So that's another property, okay? All right. So that was the formalism; let's actually formalize another problem. Let's try to code this up. This is the tram problem. Remember the tram problem: I have blocks 1 through n, and I have two possible actions. I can either walk from state S to state S plus 1, or I can take the magic tram that takes me from state S to state 2S. If I walk, that costs one minute, okay? That means the reward is minus 1. If I take the tram, that costs two minutes, which means the reward is minus 2, okay? And the question was, how do we travel from 1 to n in the least amount of time? So nothing here is probabilistic yet, right? So I'm going to add an extra thing, which says the tram fails with probability 0.5. Maybe you take a tram at some point, and that tram can fail with probability 0.5. If it fails, I stay in my state; I don't go anywhere. And actually, in this case, we're assuming you still lose the two minutes. So if I decide to take a tram, I'm going to lose two minutes whether it fails or not, okay? All right. So let's try to formalize this. We're going to take our tram problem from two lectures ago; this is from search one. We're going to just copy that. All right, so this is what we had from last time: we had this transportation problem, and we had all of these algorithms to solve the search problem. We don't really need them, because we have a new problem, so let's just get rid of them. And now I just want to formalize an MDP. So it's a transportation MDP, okay. The initialization looks okay. The start state looks okay.
I'm starting from 1, and this isEnd looks okay. So the thing I'm going to change is: first off, I need to add this actions function, okay? What would actions do? It returns a list of the potential actions in a given state. I just copy-pasted stuff from down there to edit. So it's going to return a list of valid actions, okay? What are the valid actions I can take? I can either walk or take the tram. So I'll remove all the extra things I had from before and keep it to: I'm either walking or taking the tram, as long as it leads to a valid state. So that looks right for actions. The other thing we had was a successor and cost function. Now we want to change that to return these transition probabilities and the reward, so it's basically successor probabilities and reward, okay? I'm putting those two together: similar to before, where we had successor and cost, now I'm returning probabilities and reward, okay? What this function returns is the new state S prime I'll end up at, the probability value for that, and the reward, okay? So, given that I start in state S and take action A, what are the potential S primes I can end up at, what is T of S, A, S prime, and what is the reward of S, A, S prime? I want a function that just returns these, so I can call it later, okay? All right. So I basically need to handle each of these actions. For action walk, what happens? What's the new state I end up at? Well, S plus 1. It's a deterministic action, so I end up there with probability 1, and what's the reward? Minus 1, because it costs one minute, so it's a minus 1 reward. Then for action tram, we do the same thing, but we have two options here. I can end up in 2S: the tram doesn't fail, and I end up in 2S with probability 0.5, with a reward of minus 2. Or the other option is that I end up in state S, because I didn't go anywhere; with probability 0.5 the tram did fail, and the reward of that is also minus 2. And that's pretty much it; that is my MDP. So I can just define this for a city with, let's say, 10 blocks. Oh, and we need the discount factor, but we'll talk about that later; let's say it's just 1 for now, okay? And, all right, I'm writing this other states function for later, but okay. Does that look right? We just formalized this MDP. So let's check that it does the right thing. Maybe we want to know, what are the actions from state 3? Oh, we need to remove this utility function from before, because we don't have it in the folder. Remove that. What are the actions from state 3? I have 10 blocks; if I'm in state 3, I can either walk or take the tram. Either one is fine, right? So that did the right thing. Maybe we want to check that this successor probability and reward function does the right thing. Let's try it for state 3 and walk. For state 3 and action walk, what do we get? Well, we end up in 4, with probability 1, with a reward of minus 1, okay? Let's try it for tram. Again, remember the tram can fail, so I'm going to get two things here.
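For reference, here is roughly what the class being typed out in lecture looks like, with the quick checks we're about to run included at the bottom. This is a reconstruction under the same interface assumed in the dice game sketch above, not the exact code from the course repository:

```python
class TransportationMDP:
    """Walk/tram problem on blocks 1..N, where the tram fails half the time."""

    def __init__(self, N):
        self.N = N

    def startState(self):
        return 1

    def isEnd(self, state):
        return state == self.N

    def actions(self, state):
        # Valid actions: walk to state+1, or tram to 2*state, staying on the map.
        result = []
        if state + 1 <= self.N:
            result.append('walk')
        if state * 2 <= self.N:
            result.append('tram')
        return result

    def succProbReward(self, state, action):
        # Returns a list of (newState, T(s, a, s'), Reward(s, a, s')) triples.
        result = []
        if action == 'walk':
            result.append((state + 1, 1.0, -1))  # deterministic, one minute
        elif action == 'tram':
            result.append((state * 2, 0.5, -2))  # tram works
            result.append((state, 0.5, -2))      # tram fails, still lose 2 minutes
        return result

    def discount(self):
        return 1.0

    def states(self):
        return range(1, self.N + 1)


mdp = TransportationMDP(N=10)
print(mdp.actions(3))                 # ['walk', 'tram']
print(mdp.succProbReward(3, 'walk'))  # [(4, 1.0, -1)]
print(mdp.succProbReward(3, 'tram'))  # [(6, 0.5, -2), (3, 0.5, -2)]
```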
So these are the things I get for tram: I either end up in 6 with probability 0.5 and a reward of minus 2, or I don't go anywhere, and I'm still at 3, with probability 0.5 and a reward of minus 2, okay? All right. So that was the tram problem, and we formalized it as an MDP. Again, the reason it's an MDP is that the tram can fail with probability 0.5; we added that in, and then we defined our transition function and our reward function, okay? All right, everyone happy with how we're defining MDPs? Yeah? Okay. Pretty similar to search problems, except that now we have these probabilities, okay? All right. So now I've defined an MDP; that's great. The next question we'd like to answer is, what is a solution? I want to figure out the optimal way to actually solve this problem. There's a question here: what is the Markov part of an MDP? The Markov part means that you only depend on the current state. Remember, the way we define our state, the state is sufficient for making optimal decisions about the future. So the Markov part means you're Markovian: which state you probabilistically end up in next depends only on the current state and action. So yeah. So the interesting question is how to find a solution. If you remember search problems, the solution to a search problem was just a sequence of actions; that's all I had, a sequence of actions, a path, and that was the solution. And the reason that was a good solution was that everything was deterministic, so I could just give you the path, and that was what you would follow. But in the case of MDPs, the way we define a solution is using this notion of a policy. Let me actually write that here. We have defined an MDP, but now I want to ask, what is a solution to an MDP? A solution to a Markov decision process is a policy, pi of S. And this policy goes from states: it takes any state and tells me the action to take in that state, okay? So a policy is a function, a mapping from each state S in the set of all possible states to an action in the set of all possible actions, okay? So in the case of the volcano crossing, I can have something like this: I can be in state 1, 1, and the policy at that state could be to go south, okay? Or I can be in state 2, 1, and the policy for that state is east. If this were a search problem, I would just give a path; I would say go south, then go east, then go north, right? That would be my solution. But again, if I decide that the policy at 1, 1 is to go south, there is no reason for you to actually end up to the south, right? Because this thing is probabilistic. So the best I can do is, for every state, tell you the best thing to do in that particular state, and that's why we define a policy as opposed to giving a full path, okay? All right, so a policy is the thing we're looking for. And ideally, I would like to find the best policy, the one that gives me the right solution. But in order to get there, I want to spend a little bit of time talking about how good a given policy is, and that's this idea of evaluating a policy.
So in this middle section, I don't want to try to find a policy; I just assume you give me a policy, and I can evaluate it and tell you how good it is. That's the plan for the middle section, okay? All right. Everyone happy? So far, all I've done is define an MDP, which is very similar to a search problem; it's just probabilistic, okay? So how would we evaluate a policy? If you give me a policy, which tells me at every state S to take some action, then that policy is going to generate a random path, right? I can get multiple random paths, because nature behaves differently each time and the world is uncertain. So I might get a bunch of random paths, and those are all random variables, uh, random paths, sorry. And for each of those random paths, I can define a utility. So what is the utility? The utility is just the sum of the rewards I get over that path; I'm calling it the discounted sum of the rewards. Remember that discount; we'll talk about it, you can discount the future, but for now, just think of it as the sum of the rewards along the path, okay? So the utility we get is also going to be a random variable, right? Because a policy generates a bunch of random paths, and the utility is just the sum of rewards of each of those. So it's a random variable. So if you remember this example: I can have a path that starts in in, then stays, and then the game ends, right? This is one random path, and for this particular random path, what is the utility I get? Just $4. That's one possible thing that can happen. If my policy is, say, stay, there's no reason for the game to end right there; I can get lots of different random paths. I can have a situation where I stay three times and then the game ends, with a utility of 12. We can have the situation with stay, stay, and end; that's the situation we saw, where you had a utility of 8, and so on. So you get all these utilities for all these random paths, and these utilities are also random variables, okay? So I can't really work with the utility directly. It's telling me something, but it's a random variable; I can't optimize it. So instead, we need to define something we can actually work with, and that is this idea of a value, which is just an expected utility. The value of a policy is the expected utility of that policy. And that's not a random variable anymore; that's actually a number, and I can compute that number. I can compute it for every state and then just work with values. Okay, next question? For the value of the policy, does the policy need to be defined for all possible states, or just a particular state? For all possible states. So the question is, when we talk about the value of a policy, is the policy giving me a strategy for all possible states? Well, we're defining the policy as a function of state, right? And the value is the same, a function of state. I might ask, what is the value of being in in?
So the value of being in in, following policy stay, is going to be the expected utility of following policy stay from that particular state, which is basically that 12 value there. I could ask the same of any other state: I can be in any other state and ask, what's the value of that? And when we do value iteration, you actually need to compute this value for all states, to have an idea of how to get from one state to another, but [OVERLAPPING]. [inaudible] ...we'll be in state in, and the policy, given you're in state in, is taking the action stay. Yes. Yeah. And that is what the 12 is, okay? And we've kind of seen empirically that it's 12, but we haven't shown how to get 12 yet, okay? All right. So, um, actually, let me add these to my list of things. We talked about the policy. What else did we talk about? We talked about utility. So what is utility? Utility, we said, is a sum of rewards. So if I get reward 1, then reward 2, it's a discounted sum of rewards: I take reward 1, plus this gamma, which is that discount I'll talk about in a little bit, times reward 2, plus gamma squared times reward 3, and so on. So for utility, you give me a random path, and I just sum up the rewards along it. If gamma is 1, I'm just summing up the rewards; if gamma is not 1, I'm looking at this discounted sum. Okay, so that is utility. But value is just the expected utility, okay? You give me a bunch of random paths, I compute their utilities, sum them up, and average them, and that gives me the value. Yes. If the discount factor is 1, would that be bounded? That's a very good question, and we'll get back to it. In general, okay: if the graph is acyclic, it's fine, but if you have a cyclic graph, you want your gamma to be less than 1. We'll talk about that when we get to the convergence of these algorithms. All right, how am I doing on time? Okay. All right. So let's go to this particular volcano crossing example. In this case, I can run this game, and every time I run it, I get a different utility, because I end up on some random path; some of them end up in the volcano, which is pretty bad, right? So I get different utility values, [LAUGHTER] but the value, which is the expected utility, is not really changing. It's just around 3.7, which is the average of these utilities. I can keep running this and getting different utilities, but the value is one number I can talk about, and that's the value of this particular state, which tells me the best policy to take and the best amount of utility I can get, in expectation, from that state. Okay? All right, so we've been talking about this utility; I've actually written it on the board already. Utility is going to be a discounted sum of rewards. And we've been talking about this discount factor. The idea of the discount factor is that I might care about the future differently from how much I care about now. For example, if you give me $4 today, and you give me $4 tomorrow, and that $4 tomorrow has the same value to me as the $4 today, then that's the idea of a discount of 1, a gamma of 1.
So you're saving for the future; the value of things in the future is the same as now. If you give me $4 now, or $4 ten years from now, it's $4 either way; I value it the same, and I can just add things up. But it could also be the case, in a particular MDP, that you don't care about the future as much. Maybe you give me $4 ten years from now, and I place no value on that. If that's the case, and you just want to live in the moment and don't care about values you'd get in the future, that's the other extreme, where the discount gamma is equal to 0. That's the situation where $4 in the future has no value to me; it's just a 0 to me. I only care about right now, living in the moment, the amount I'm getting now. And then, in reality, you're somewhere in between, right? We're not in the case where we live purely in the moment, and we're also not in the case where everything has the same value now and in the future; a balanced life is a setting where we have some discount factor, not 0 and not 1, that discounts values in the future, because the future maybe doesn't have the same value as now, but we still value things in the future; $4 is still worth something in the future. And that's where we pick a gamma between 0 and 1. So that's a design choice: depending on the problem you're in, you might want to choose a different gamma. Question, yeah. Is discounting utility an assessment of risk, or is there a different way we can assess how much risk you want to take? Um, I wouldn't really think of it as an assessment of risk in that way. It depends on the problem, right? It depends on whether, in a particular problem, you want value in the future, whether you have some long-term goal you want to reach and you care about the future. It depends: if you're solving a game versus, I don't know, a robot manipulation problem, you might use a very different discount factor. For a lot of the examples we use in this class, we just choose a gamma close to 1. Usually, for a lot of the problems we end up dealing with, gamma is something like 0.9; that's typical for typical problems. You might have a very different problem where you don't care about the future, and then you just drop it. Yes. [inaudible] Is gamma a hyperparameter that needs to be tuned, and is gamma 0 the same as a greedy algorithm? Okay. So that's a good question. Is gamma a hyperparameter you need to tune? I would say gamma is a design choice. It's not a hyperparameter in the sense that, oh, if I pick the right gamma, it will do the right thing; you want to pick a gamma that works well with your problem statement. And gamma of 0 is kind of greedy: you're picking the best thing right now, and you just never care about the future. Question right there. Does gamma violate the Markov property, because it's a kind of memory of what you've accumulated? It doesn't violate the Markov property. It's just a discount; it's about the reward.
It's not about how this state affects the next state. It only affects how much reward you get, or how much you value reward in the future. It's still a Markov decision process. [inaudible] ...and make your possible actions [inaudible]? It's affecting the reward, yeah, but it's Markov because if I'm in state S and I take action A, I end up in S prime, and that doesn't depend on gamma. Okay. All right. So, okay. In this section, we've been talking about this idea that someone comes and gives me a policy. The policy is pi, and what I want to do is figure out the value of that policy; again, the value is just the expected utility. Okay? So V pi of S is the expected utility received by following policy pi from state S, okay? I'm not doing anything fancy. I'm not even trying to figure out what pi is. All I want to do is evaluate: if you tell me this is pi, how good is it? What's its value? Okay? So that's what a value function is. The value of a policy is V pi of S; um, let me put this here, and then I'm going to move these up. Um, yeah, so V pi of S is the expected utility of me starting in state S, okay. So state S has value V pi of S. And if someone tells me, you're following policy pi, then I already know that from state S, the action I'm going to take is pi of S. That's very clear. So I'll take pi of S, and if I take pi of S, I end up in some chance node, okay. And that chance node is a state-action node: it's going to be S together with the action, and I've decided the action is pi of S, okay. And I'll define this new function, this Q function, Q pi of S, A, which is just the expected utility from the chance node, okay. So we've talked about value: value is the expected utility from my actual states. I'm going to talk about Q values as expected utilities from the chance nodes. So after you've committed to taking action A, and you follow policy pi from then on, what is the expected utility from that point onward, okay. And what is the expected utility from this point on? We're in a chance node, so many things can happen, because nature is going to play and roll its die, and anything can happen. There's going to be a transition, T of S, A, S prime, and with that transition probability, I end up in a new state; I'll call it S prime, and the value of that state, again the expected utility of that state, is V pi of S prime, okay. All right. So, okay. So what are these actually equal to? I've just defined the value as an expected utility, and the Q value as an expected utility from a chance node; what are they actually equal to? Okay. So I'm going to write a recurrence that we are going to use for the rest of the class, so pay attention for five seconds. There is a question there. I understand, semantically, how Q pi and V pi are different; in actual numbers, as expected values, how are they different? So both of them are expected utilities, yeah. It's just that one is a function of the state, and for the other one, you've committed to one action. And the reason I'm defining both is that writing my recurrence is going to be a little bit easier, because I have these state-action nodes and I can talk about them.
And I can talk about the branching from these state-action nodes, okay? All right. So I'm going to write a recurrence. It's not hard, but it's kind of the basis of the next N lectures, so pay attention. All right. So V pi of S, what is that equal to? Well, it's going to be equal to 0 if I'm in an end state: if IsEnd of S is true, then the expected utility is just 0. That's the easy case. Otherwise, well, someone told me to take pi of S, so the value is just equal to Q, right? In this case, V pi of S, if someone comes and gives me policy pi, is just equal to Q pi of S, pi of S, okay. Those two are equal to each other. So the next question one might ask is... actually, let me write this a little closer so I have some space. Yeah. So this is equal to Q pi of S, pi of S, okay. So what is that equal to? What is Q pi of S, A equal to, in general? If I'm right here, at the chance node, then there are a bunch of different things that can happen, right? I can end up in these different S primes. So if I'm looking for the expected utility, I'm looking at the probability of ending up in this state times the utility from this state, plus the probability of ending up in the other state times the utility from that one. So that is just equal to a sum, over all possible S primes I can end up at, of the transition probability T of S, A, S prime, the probability of ending up in the new state, times the immediate reward that I'll get, Reward of S, A, S prime, plus the value from there. But I care about the discounted value, so I'm going to add gamma V pi of S prime, because I'm talking about the next state, okay. Does everyone see this? Okay. So this is the recurrence we use in policy evaluation. Again, remember, someone came and gave me policy pi; that's why I'm writing this policy pi here. Someone gave me policy pi, and I just want to know how good it is. I can do that by computing V pi. What is V pi equal to? Someone told me you're following policy pi, so it's got to be equal to Q pi. What is Q pi equal to? It's the expectation over all the places I can end up at: a sum over S primes of the transition probability of ending up in S prime, times the total reward you get, which is the immediate reward plus the discounted value of the future, okay. Yes. Those are the Q values when following policy pi starting from S prime? Yes, yeah, starting from S prime. All right. So far so good. So that is how I can evaluate this policy, right? I have these two recurrences, so I can just substitute one into the other. Maybe I can use a different color up here; I'm just replacing this part right here. I don't know if it's worth writing out. Imagine we're not in an end state. If you're not in an end state, then V pi of S, what is that equal to? It's just equal to a sum over S primes of the transition probability T of S, pi of S, S prime, times the immediate reward I'll get, plus gamma V pi of S prime. Okay. So this is the recurrence I have; I literally just combined those two and wrote it in green, okay, for when you're not in an end state. So if you're not in an end state, this is the recurrence. I have V pi here, and I have V pi on this side too. And that is nice.
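Written compactly, in standard notation, the recurrence on the board is:

```latex
V_\pi(s) =
\begin{cases}
0 & \text{if } \mathrm{IsEnd}(s) \\
Q_\pi(s, \pi(s)) & \text{otherwise}
\end{cases}
\qquad
Q_\pi(s, a) = \sum_{s'} T(s, a, s')\,\bigl[\mathrm{Reward}(s, a, s') + \gamma\, V_\pi(s')\bigr]
```

Substituting the second equation into the first gives the single "green" recurrence for non-end states: V_pi(s) equals the sum over s' of T(s, pi(s), s') times [Reward(s, pi(s), s') plus gamma V_pi(s')].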
And, and that is kind of the, the place where I can compute V pi. Maybe I can do it iteratively, or maybe I can actually find a closed form solution for some problems, but that is basically what I'm going to do. I have V pi as a function that depends on V pi of S-prime. And I can just solve for this V pi. Okay. It allows me to evaluate policy pi. I haven't figured out a new policy. All I have done is evaluating what's the value of pi, okay. All right. Okay, so let's go back to this example. So let's say that someone comes in and tells me, well, the policy you gotta follow is, is to stay. So my policy is, is to stay. Okay. I want to know- I want to just evaluate that, I want to do policy evaluation. When you're doing policy evaluation, you gotta compute that V pi for all states. So let's start with V pi of end; oh, that is equal to 0, because we know V pi at an end state is just equal to 0. Now, I want to know what's V pi of in, okay, with policy stay, from state in. What is that equal to? That's just equal to Q pi of in and stay, right? V pi is just equal to Q pi of in and stay. So I'm going to replace that: that's just equal to one-third, times the immediate reward, which is 4, plus the value of the next state I'm going to end up at, which is end in this case, plus two-thirds, times the immediate reward I'm going to get, which is $4, plus the value of the state I'm going to end up at, which is in. Okay. So, so that is just that sum that we have there, right? V pi of end is 0, so let me just put that 0 there. I'm going to put 0 there. I only have one state here too, right? So, so I just have this one unknown, from this one chance node, in and stay. So having an equation, I can find the closed form solution of V pi of in. I'm just going to move things around a little bit. And then I will find out that V pi of in is just equal to 12. So, so that's how you get that 12 that I've been talking about. So, so you just found out that if you tell me the policy to follow is stay, if that is the policy, then the value of that policy from state in is equal to 12. Do you always choose the same action, or- So you're always choosing to stay, yeah. So, so the policy is a function of the state. I only have this one state that's interesting here, right? That, that one state is in. So I need to- when, when I define my policy, I need to kind of choose the same policy for, for that state, right? My policy says, in state in you've got to either stay or you've got to quit. Okay. All right.
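Written out, that closed-form computation is the following (assuming no discounting, i.e. gamma equal to 1, which is what makes the arithmetic come out to 12):

```latex
V_\pi(\text{in}) = \tfrac{1}{3}\left(4 + V_\pi(\text{end})\right) + \tfrac{2}{3}\left(4 + V_\pi(\text{in})\right)
= 4 + \tfrac{2}{3} V_\pi(\text{in})
\;\Longrightarrow\; \tfrac{1}{3} V_\pi(\text{in}) = 4
\;\Longrightarrow\; V_\pi(\text{in}) = 12
```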
So you can basically do the same thing using an iterative algorithm too. So, so here, like in the previous example, it was kind of simple, I just solved the closed form solution. But in, in reality you might have many different states, and then the computation might be a little bit more complicated. So we can actually have an iterative algorithm that allows us to find these V pis. So the way we do that is, we start with the values for all states equal to zero. And, and this zero that I've put here is the first iteration. So, so I'm going to count my iterations here. So, so I'm going to just initialize all the values for all states to just be equal to zero. Okay. Then I'm just going to iterate for some number of times, whatever number I'd like. Then, what I'm going to do is, for every state- again, remember the value needs to be computed for every state. So for every state, I'm going to update my value by the same equation that I have on the board, okay? And the same equation depends on the value at the previous time step. So this is just an iterative algorithm that allows me to compute new values based on previous values that I've had. And I start with everything at zero and then I keep updating the values of all states and I keep going, okay? So it's basically that equation, but think of it as, like, an iterative update: you run this for multiple rounds, and every round you just update your value. Okay. So like here, just pictorially you're looking at it: imagine you have, like, five states here, you initialize all of them to be equal to 0. The first round, you're going to get some value, you're going to update it. And then you're going to keep running this, and then eventually, you can kind of see that the last two columns are kind of close to each other and you have converged to the true value. So, so again, someone comes and gives you the policy, you start with values equal to 0 for all the states, and then you just update it based on your previous value. Okay. So how long should we run this? Well, we have a heuristic to- to kind of figure out how long we should run this particular algorithm. Uh, one thing you can do is you can kind of keep track of the difference between your value at the previous time step versus this time step. So, so if the difference is below some threshold you can, kind of, call it- call it done and- and say, well, I've- I've found the right values. And in this case, we are basically looking at the difference between the value at iteration T versus the value at iteration T minus 1. And then we are taking the max of that over all possible states, because I want the values to be close for all states. Okay. Yes. [inaudible] Is this- so I'm going to talk about convergence when we talk about the gamma factor, the- the discount factor, and acyclicity. Um, also, how long you should run this is also a difficult problem, and it depends on the properties of your MDP. So if you have an ergodic- if you have an ergodic MDP, this should work. Okay, but in general, it's a hard problem to answer for general Markov decision processes. Okay. And another thing to notice here is, I'm not storing that whole table. Like, the only thing I'm storing is- is the last two columns of this table, because- because that's V pi at iteration T and V pi at iteration T minus 1. Those are, like, the only things I'm storing, because that allows me to check if I've converged, and it allows me to keep going, because I only need my previous values to update my new values, right. In terms of complexity, well, this is going to take order of T times S times S prime. Well, why is that? Because I'm iterating over T time steps, and I'm iterating over all my states, and I'm summing over all S primes, right. So that's the complexity here, and one thing to notice is, it- it doesn't depend on actions, right. It doesn't depend on the size of actions. And the reason it doesn't depend on the size of actions is, you have given me the policy, you are telling me follow this policy. So if you've given me the policy, then I don't really need to worry about, like, the number of actions I have. Okay. All right. Um, here is just the same example that we have seen. So at iteration T equal to 1, in, is going to get 4, end is going to get 0, at iteration 2 it gets a slightly better value. And then finally, like at iteration, like, 100 let's say, we get the value 12.
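Here is a minimal sketch of that iterative policy evaluation loop in Python. This is not the course's exact code; it assumes an mdp object exposing states(), isEnd(state), and succAndProbReward(state, action) returning (newState, prob, reward) triples, which is the interface described later for the tram problem, and a policy pi given as a dict from state to action:

```python
def policy_evaluation(mdp, pi, gamma=1.0, epsilon=1e-10):
    # Iteration 0: initialize V_pi(s) = 0 for all states.
    V = {s: 0.0 for s in mdp.states()}
    while True:
        newV = {}
        for s in mdp.states():
            if mdp.isEnd(s):
                newV[s] = 0.0
            else:
                # V_pi(s) = sum_{s'} T(s, pi(s), s') * [R(s, pi(s), s') + gamma * V_pi(s')]
                newV[s] = sum(prob * (reward + gamma * V[newState])
                              for newState, prob, reward in mdp.succAndProbReward(s, pi[s]))
        # Stop when max_s |V_t(s) - V_{t-1}(s)| drops below the threshold.
        if max(abs(newV[s] - V[s]) for s in mdp.states()) < epsilon:
            return newV
        V = newV
```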
And then remember, for this particular example, like, this example we were able to solve in closed form, the value of the policy stay, uh, from state in, but, uh, but you could also run the iterative algorithm and get the same value of 12. Okay. Yes. Is the number of actions just the size of S prime? The number of, uh, actions is the size of S prime? Uh, no, because the size of S- you might end up in very different, different states. That depends on your probabilities. Oh, okay. The size of S prime is actually the size of, like, the size of states; it's the same thing, right? Like, worst case scenario, you're going from every state to every state. So just imagine the size of S. [NOISE] Okay. All right. So, summary so far, where are we? So we have talked about MDPs. These are graphs with states and chance nodes and transition probabilities and- and rewards. And we have talked about a policy as the solution to an MDP, which is this function that takes a state and gives us an action. Okay. We talked about the value of a policy. So the value of a policy is the expected utility of- of that policy. So, so you have these random utility values for all these random paths that you're going to get for every policy. The value is just an expectation over all those random, random variables. And so far we have talked about this idea of policy evaluation, which is just an iterative algorithm to compute what's the value of a state. If you give me some policy, like, how good is that policy, what's the value I'm going to get at every state. Okay. All right. So, okay, that has all been assuming you give me the policy. Now, the thing I want to spend a little bit of time on is- is figuring out how to find that policy. Uh, is it possible that the available actions for the problem change, and that changes the value of the policies? Like, we learn new actions. So for example here, we only have stay or quit. Uh-huh. If you have a different problem where you can learn another action, like, stay, quit or something, uh, um, trade, is it going to change the value of the policies, because then we have a new action and then we need to update our policies? So in this case, so, so far I'm assuming that the set of actions is fixed. I am not, like, adding new actions. Like, even with search problems, the way we defined search problems or the way we are defining MDPs is, I'm saying, like, I'm starting with a set of states that are fixed, actions are fixed, I have stay and quit, those are, like, the only actions I can take, uh, the reward is fixed, uh, transition probabilities are fixed. Under that scenario, then, what is the best policy I can take, and the best policy is just from that set of, like, already defined actions. Okay. Um, next lecture we will talk about unknown settings, like when we have transition probabilities that are not known or reward functions that are not known, and how we go about learning them. And, and that- that will be the reinforcement learning lecture. So next lecture I might address some of those. Okay. All right, so let's talk about value iteration. So, so that was policy evaluation. So like, that whole thing was policy evaluation. So now, what I would like to do is, I want to try to get the maximum expected utility and find the policy that gets me the maximum expected utility, okay? So to do that I'm going to define this thing that's called an optimal value.
So instead of the value of a particular policy, I just want V opt of S, which is the maximum value attained by any policy. So, so you might have a bunch of different policies, I just want the policy that maximizes the value. Okay. So and that is V opt of S. Okay. So, um, so let me go back to this- this example. So I'm going to have this in parallel to this example of policy evaluation; I want to do value iteration. Okay. So I'm going to start from state S again; state S has V opt of S. Okay. That is what I'd like to find; over here I have V pi of S. If I'm looking for V opt of S, then I can have multiple actions that can come out of here and I don't know which one to take, but like, any of them- if I take any of them, if I take this guy, that takes me to a chance node of S, a. Okay. And then I'm looking for Q opt of S, a. And from here, it's actually pretty similar to what we had right here. So I'm in a chance node, anything can happen, right? Nature plays, and with some transition probability of S, a, S-prime I'm going to end up in some new state S-prime, and I care about V opt of that S-prime. Okay. So if I'm looking for this optimal policy, which comes from this optimal value, then I need to find V opt. And if I want to find V opt, well, that depends on what action I'm taking here. But let's say I take one of these. And if I take one of these I end up in a chance node, I have Q opt of S, a in that chance node. And then from that point on, with whatever probabilities, I can end up in some S-prime. Okay. So I want to write the recurrence for this guy, similar to the recurrence that we wrote here. It's going to be actually very similar. So, okay, so I'm going to start with Q because that is easier. So what is Q opt of S, a? That- that just seems very similar to this previous case. What is that equal to? What was Q pi? Q pi was just a sum of transition probabilities times rewards, right. So, so what is Q opt? [inaudible]. Yeah. So, so it would just be basically this equation, except I'm going to replace V pi with V opt. So, so from Q opt, I can end up anywhere, like, based on the transition probabilities. So I'm going to sum over S-primes, all possible places that I can end up at. I'm going to get an immediate reward, which is R of S, a, S-prime. And I'm going to discount the future, but the value of the future is V opt of S-prime. Okay. So, so far so good, that's Q opt. How about V opt? What is that equal to? Well, it's going to be equal to 0 if you are in an end state; that's similar to before. So if IsEnd of S is true, then- then it is 0. Otherwise, I have- I have a bunch of options here, right. I can take any of these actions and I can get any Q opt. So which one should I pick? Which Q opt should I pick? The one that maximizes, right? Like, um, I should pick the action, from the set of actions of that state, that maximizes Q opt. So, so the only thing that has changed here is, before, someone told me what the policy is, and I just took the Q of that. Here I'm just picking the maximum value of Q, and that actually tells me what action to pick. So what is the optimal policy? What should the optimal policy be? Hmm? I'm going to call it pi opt of S. What is that equal to? It's gotta be the- the thing that maximizes V, right. Which is the thing that maximizes this- this- this Q. Because that gives me the action. So it's going to be the argmax of Q opt of S and A, where A is an action of S. Okay?
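Collecting the pieces, the value iteration recurrences just derived are (again writing T for transition probabilities and R for rewards):

```latex
V_{\mathrm{opt}}(s) =
\begin{cases}
0 & \text{if } \mathrm{IsEnd}(s) \\
\max_{a \in \mathrm{Actions}(s)} Q_{\mathrm{opt}}(s, a) & \text{otherwise}
\end{cases}
\qquad
Q_{\mathrm{opt}}(s, a) = \sum_{s'} T(s, a, s')\left[ R(s, a, s') + \gamma \, V_{\mathrm{opt}}(s') \right]
```

and the optimal policy is read off as:

```latex
\pi_{\mathrm{opt}}(s) = \arg\max_{a \in \mathrm{Actions}(s)} Q_{\mathrm{opt}}(s, a)
```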
All right, so this was policy evaluation. Someone gave me the policy. With that policy I was able to compute V, I was able to compute Q, I was able to write this recurrence, and then I had an iterative algorithm to do things. This here is called value iteration. This is to find the right policy; this is to find a policy. How do I do that? Well, I have a value, the optimal value that I can get, and it's going to be the maximum, over all possible actions I can take, of the Q values, and the Q values are similar to before. So I have this recurrence now, and the optimal policy is just an argmax of Q. Yeah. It looks like there can be two argmaxes, right? Sorry? What if, for the argmax, there are two A's? Oh, yes. You could get two A's. So the question is, yeah, like, what if I have two A's that give me the same thing? I can return any of them. It depends on your implementation of max. So you can return any of them. How am I doing on time? [NOISE] We are five minutes over, if you want. [LAUGHTER] So the good news is, the slides are the same things that I have on the board. So, so Q_opt is just equal to the sum that we've talked about. V_opt, I just add the max on top of Q_opt, same story, okay? And then if I want the policy, then I just do the argmax of Q_opt and that gives me the policy. Right. And I can have, again, an iterative algorithm that does the same thing. It's actually quite similar to the iterative algorithm for policy evaluation. I just start by setting everything equal to 0. I iterate for some number of times. I go over all possible states. And then, I just update my value based on this new recurrence that has a max, okay? So very similar to before, I just do this update. One thing is, the time complexity is going to be order of T times S times A times S prime, because now I have this max over all possible actions. So I'm actually iterating over all possible actions, versus in policy evaluation, I- I didn't have A, because, because someone would give me the policy. I didn't need to worry about this. All right. So let's look at coding this up real quick. Okay, so we have this MDP problem. We defined it, it was a tram problem, it was probabilistic, everything about it was great. So now I just wanna do an algorithms section, an inference section, where I code up value iteration, and I can call value iteration on this MDP problem to get the optimal policy. Okay. So I'm going to call value iteration later. All right. So we initialize, so all the values are going to become- I might skip things to make this faster. So we're gonna initialize all the values to just 0, right, because all these values are gonna be 0. So I defined a states function, so for all of those states the value is just going to be equal to 0. So it's initialized with that. Then you're just gonna iterate for some number of times. And what we wanna do is, we wanna compute this new value given old values. So it's an iterative algorithm. We have old values, we just update new values based on them. So what should that be equal to? So we iterate over our states. If you are in an end state, then what is the value equal to? 0, right? If you're not in an end state, then you're just gonna do that- that- that recurrence there. Okay, so the new value of a state is going to be equal to the max of the Q values, okay. So new V is just max of Q of state and actions. Okay. So now I need to define Q; what does Q do? Q of state and action is just equal to that sum over- over S primes. So it's gonna return a sum, and it's gonna return a sum over S primes.
I define this successor probability and reward function that gives me newState, probability, and reward. So I'm gonna iterate over that and- and call that up here. So given that I have a state and action, I can get newState, probability, and reward. What are we summing? You're summing the transition probabilities times the total reward, which is the immediate reward here, plus my discount times my V, which is the old value of V at S prime, at my newState. So that is my Q, that is my V, and that's pretty much done. We just need to check for convergence. To check for convergence, we kind of do the same thing as before. We check if the values V and new V are close enough to, to each other that we can call it done. I'm gonna skip these parts. So- so you can basically check if V minus new V is within some threshold for- for all states. And if they are, then V is equal to new V. We need to read off the policy. So the policy is just the argmax of Q. So I'm gonna make this a little faster. So the policy is just going to be, well, none if we're in an end state, and otherwise it's just going to be the argmax of- of our Q values. So I'm just writing argmax here, pretty much. I'm just returning the action that maximizes the Q. And then we spent a bunch of time getting the printing working. So let me actually get- yeah, okay. All right, actually right here. So I'm running this function. I'm- I'm writing out, actually these are a little shifted, [LAUGHTER] the grid of states, values, and then pi, which is the policy, okay. So it starts off walk, walk, walk. Remember this is the case where we have 50% probability of the tram failing, and with 50% probability of the tram failing, these are the values we are gonna get. And the policy is still walk until state five, and then take the tram from, from state five. Okay, just kind of interesting, because the policy of the search problem was the same thing too. Okay, so the thing we can do is, we can actually, let me move this a little bit forward, we can actually define this fail probability, which becomes just a variable. So you can play around with this. If you pick different fail probabilities you're gonna get different policies. So for example, if you pick a fail probability that is large, then probably that policy is going to be just, just walk, and never take the tram, because the tram is failing all the time. But if you- if you decide to make the fail probability close to 0, then- then this is your optimal policy, which is close to the search problem. It's basically the solution to a search problem. So play around with this, the code is online. This was just value iteration- value iteration, um, on this tram problem. Okay. So I'm gonna skip this one too. All right, so yeah. And- and this is also showing, like, how over multiple iterations you can kind of get to the- get to the optimal- optimal value and optimal policy using value iteration. So in one iteration it hasn't seen it yet. So it thinks that the optimal value is 1.85; it hasn't updated the values. And with, like, I don't know, three iterations, it gets better, but it still hasn't fully updated. It still thinks it can't get to the other side. And remember, this is with a slip probability of 10%. But if I get to, like, I think 10 iterations, then it eventually learns the best policy is to get to 20 and the value is 13.68. And if you go to even higher iterations, after that point it's just fine-tuning. So the values are around 13 still. So you can play around with the volcano problem. Okay.
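Pulling the walkthrough together, here is a compact sketch of that value iteration code. As with the policy evaluation sketch earlier, this is a reconstruction rather than the course's exact code, and it assumes the same mdp interface (states(), isEnd(state), actions(state), and succAndProbReward(state, action) returning (newState, prob, reward) triples):

```python
def value_iteration(mdp, gamma=1.0, epsilon=1e-10):
    V = {s: 0.0 for s in mdp.states()}  # initialize all values to 0

    def Q(s, a):
        # Q_opt(s, a) = sum_{s'} T(s, a, s') * [R(s, a, s') + gamma * V(s')]
        return sum(prob * (reward + gamma * V[newState])
                   for newState, prob, reward in mdp.succAndProbReward(s, a))

    while True:
        # V_opt(s) = 0 for end states, else max over actions of Q_opt(s, a).
        newV = {s: (0.0 if mdp.isEnd(s) else max(Q(s, a) for a in mdp.actions(s)))
                for s in mdp.states()}
        if max(abs(newV[s] - V[s]) for s in mdp.states()) < epsilon:
            V = newV
            break
        V = newV

    # Read off the policy: pi_opt(s) = argmax_a Q_opt(s, a).
    pi = {s: (None if mdp.isEnd(s) else max(mdp.actions(s), key=lambda a: Q(s, a)))
          for s in mdp.states()}
    return V, pi
```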
So when does this converge? So if the discount factor is less than 1, or your MDP graph is acyclic, then this is going to converge. If the MDP graph is acyclic, that's kind of obvious: you are just doing dynamic programming over the whole thing. So- so that's going to- that's going to converge. If you have cycles, you- you want your- your discount to be less than 1. Because if you have cycles and your discount is, let's say, 1, and let's say you are getting 0 rewards, then your values are never going to change; you're never going to move from your state, you're always going to be stuck in your state. And if you have non-zero rewards, you're going to get this unbounded reward, and you keep going because you have cycles, and it's just going to end up becoming numerically difficult. So just a good rule of thumb is, pick a gamma that's less than one. Then you kind of get this convergence property. Okay, all right, so the summary so far is, we have MDPs. Now, we've talked about finding policies rather than paths. Policy evaluation is just a way of computing, like, how good a policy is. And the reason I talked about policy evaluation is, there's this other algorithm called policy iteration, which uses policy evaluation, and we didn't discuss that in the class. But it's kind of like, not equivalent, but you could use it in a similar manner as value iteration. It has its pros and cons. So policy evaluation is used in those settings. Do not leave please. We have more stuff to cover. [LAUGHTER] And then we have value iteration, uh, which, uh, computes the optimal value, which is the maximum expected utility, okay? And next time, we're going to talk about reinforcement learning, and that's going to be awesome. So let's talk about unknown rewards. All right. So that was MDPs, [LAUGHTER] doing inference and, and kind of defining them. I'm going back to the last lecture just to kind of talk about some of the stuff that we didn't cover last time, okay? All right. So if you remember, last time we were talking about search problems. So big switch now. Search problems, where we don't have probabilities, and we talked about A-star as a way of just making things faster, and we talked about this idea of relaxations, which was, uh, a way of finding good heuristics. So A-star had this heuristic. The heuristic was an estimate of future costs. We wanted to figure out how to find these heuristics, like, how do you go about finding these heuristics? And one idea was just to relax everything; that allows you to come up with an easier search problem, or just an easier problem, and that helps you to find what the heuristic is, okay? So, um, [NOISE] so we talked about this idea of removing constraints, and when you remove constraints, then you can end up in nice situations. Like, in some settings, you have a closed-form solution. In some other settings, you have just an easier search problem, and you can solve that, and in some other settings, you have, like, independent sub-problems. So when you remove constraints, then- then you have this easier problem. You can solve that easier problem, and that gives you a heuristic. You're not done yet, right? You're- you have a heuristic. You take that heuristic, and then change your costs, and then just run uniform cost search on your original problem. So, so solving an easier problem, like, you're not done when you have solved the easier problem. It just helps you to find a thing that helps with the original problem, so it's kind of like a multi-step thing.
So, examples of that: if you have walls, remove all the walls, you have an easier problem. If you solve that easier problem, that gives you a heuristic, and in this case, like, when you knock down these walls, that easier problem, you have a closed-form solution for it. You don't need to do anything fancy. You don't need to do uniform cost search, any of that. You just compute the Manhattan distance and, and then that gives you the heuristic. With that heuristic, you go and solve the original problem. That was one example. Another example is, when you remove constraints, you have an easier search problem. So you don't have closed-form solutions, but you have an easier search problem. So you might have a really difficult search problem with a bunch of constraints that are hard to deal with. Remove the constraints. So when you remove the constraints, you have a relaxed problem, which is just the original problem without the constraint. That's a search problem. You can solve that search problem using uniform cost search or dynamic programming, and, and solving that allows you to find the heuristic. Again, you're not done yet, right? You take the heuristic, and then you go to the original problem, change the costs, and, and run your uniform cost search there. And just one quick kind of example here was, uh, when you're computing these relaxed problems, the thing you want to find is the future costs of this, this relaxed problem, and, and to do that, you have this easier search problem. You still need to run uniform cost search or dynamic programming. In this case, if you decide to run uniform cost search, remember, uniform cost search computes past costs. In this case, I really wanna compute future costs. So you need to do a bunch of engineering to get that working. In this particular case, the relaxed problem, you need to reverse it. Because when you reverse it, past costs of the reversed relaxed problem become future costs of the relaxed problem, if that makes sense. So, so the way I'm reversing this is, I'm basically saying the start state is n, the end state is 1, and my walk action takes me to s minus 1, instead of s plus 1, and my tram action takes me to s over 2 instead of s times 2, and the whole reason I'm doing that is- is that the past cost of this new problem is the future cost of the non-reversed version. Okay. Because I, I need to use uniform cost search here, okay? So I run my uniform cost search, that gives me a heuristic, and that heuristic gives me this future cost of the relaxed problem, and everything will be great. (There's a small code sketch of this reversal after these examples.) Another example is, I can have independent subproblems for my heuristic. So in this case, like, we have these tiles; they technically cannot overlap. Instead, what we are doing is, we're allowing them to overlap. So if we allow them to overlap, I have eight independent subproblems that I can solve. These subproblems give me heuristics, and I can just go with them, okay? So, so these were just a bunch of examples, and kind of the key idea was reducing edge costs, li- like, when we are coming up with these relaxed problems, we're reducing edge costs from infinity to some finite cost. Okay. So I'm getting rid of walls I couldn't cross before, like, the cost of that was infinity, but by getting rid of the wall I'm making it a finite cost.
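Here is one way the reversed relaxed tram problem just described could look in code. This is a sketch, assuming the usual tram costs from the course's example (walk costs 1, tram costs 2); the class and method names are illustrative:

```python
class ReversedRelaxedTramProblem:
    """Reversed relaxed problem: running uniform cost search from here
    computes *past* costs that equal the *future* costs of the relaxed problem."""
    def __init__(self, N):
        self.N = N

    def startState(self):
        return self.N              # the old goal becomes the start

    def isEnd(self, state):
        return state == 1          # the old start becomes the goal

    def succAndCost(self, state):
        results = []
        if state - 1 >= 1:
            results.append(('walk', state - 1, 1))   # reverse of s -> s + 1
        if state % 2 == 0:
            results.append(('tram', state // 2, 2))  # reverse of s -> 2 * s
        return results
```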
So this type of method, um, this is a general framework. So the point I wanna make is, generally, you can talk about the relaxation of a search problem. So if you have a search problem P, a relaxation of that search problem, I'm going to call that Prel, is going to be a problem where the cost of the relaxation, for any state and action, is less than or equal to the cost of that state and action. I'll take questions afterwards. All right. So, uh, so that is a relaxed problem, okay? So the cool thing about that is, if you're given a relaxed problem, then you can pick your heuristic to be the future cost of the relaxed problem, and that is called the relaxed heuristic, okay? So, so this is kind of a recipe, a general framework. Like, if someone asks you, find a good heuristic: find a relaxed problem, and the future cost of the relaxed problem is a heuristic. And the cool thing about that is, it turns out that, that future cost of the relaxed problem, which you are deciding to be a heuristic, is also consistent, because we talked about all these consistency properties, and how you want the heuristic to be consistent for the solution to be correct, and how in the world am I gonna find a consistent heuristic? Well, here is one. Here is one way of finding consistent heuristics. Pick your problem, make it relaxed. Making it relaxed means pick your relaxed problem where the cost is less than the cost of the original problem, and then the future cost of that relaxed problem is just going to be a heuristic, and, and it's going to be consistent. The proof of that is two lines, so I'm going to skip that. And, and one note about this is, there is a trade-off here. There is a trade-off between efficiency and tightness. So, sure, like, making things relaxed and removing constraints, it's kinda fun, right? We have this easier problem, and you just solve it, and everything is great about it. But there is kind of a trade-off with how tight you want your heuristic to be. Like, you shouldn't remove too many constraints, because if you remove too many constraints, then your heuristic is not a good estimate of future costs. Remember, your heuristic is supposed to be an estimate of future costs. So, so if it is not a good estimate of future costs and it's not tight, then it's not that great. So, so there is a balance between how much you are removing your cons- your constraints, and, and how that makes finding the heuristic easier, versus the fact that you want your heuristics to be tight and be close to your future costs. So, so don't remove everything. Leave some constraints [LAUGHTER] and then solve it. Um, and you can also do things like, if you have two heuristics that are both consistent, you can take the max of them, and if you take the max, it's, it's a little bit more restrictive. Maybe, maybe that is closer to your future costs, and you can actually show the max of them is also consistent, okay?
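That last trick is small enough to write down directly; a sketch, with heuristics represented as plain functions from state to estimated future cost:

```python
def max_heuristic(h1, h2):
    # If h1 and h2 are both consistent heuristics, their pointwise max is
    # also consistent, and it is at least as tight an estimate of future cost.
    return lambda state: max(h1(state), h2(state))
```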
Uh, so we talked about, uh, relaxations and A-star. One other quick thing I want to mention, because that wasn't very clear last time, is the structured perceptron. We talked about that a little bit too, and we talked about convergence of that. So, quick things on that. The structured perceptron actually converges. There was this question that, uh, if we have- if, if we have a path, that is, let's say, walk, tram, and, and we end up recovering another path, that is tram, walk, is that bad, is that good? Well, it turns out that the costs of both of these paths are the same thing. So if I end up getting this path, well, that's perfectly fine too. Right? Like that, that is also with the same optimal weight. In the example that we have shown, in the tram example, I don't think we are able to get to a path that looked like this, because of the nature of the example. So, so in general, things to remember about the structured perceptron: it does converge. It does converge in a way that it can recover the true Ys, but it doesn't necessarily get the exact Ws, as we saw last time, right? Like, you might get two and four, you might get four and eight; like, as long as you have the same relationships, that, that is enough, but, but you are going to be able to get the actual Ys, and it does converge. So with that, um, the project conversation is going to be next time. Do take a look at, do take a look at the website. So all the information on the project is on the website. So if you have started thinking about it, look at the project page, and that has something for you.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_8_First_Order_Modus_Ponens_Stanford_CS221_Artificial_Intelligence_Autumn_2021.txt
OK. So far we've been talking about first order logic and its syntax and semantics. And now what we would like to do is to talk about inference rules for first order logic. In this module, we are going to be talking about modus ponens, when we have only Horn clauses. And then in the next module, we are going to be talking about resolution when it comes to first order logic. OK? All right. So if you remember, what inference rules do is they basically do simple manipulation. So they take the formulas, the syntactic form of the formulas, and they have no notion of meanings or anything of that form. But based on the formulas that are in the knowledge base, they basically try to infer, they try to derive, or prove, a new formula based on what exists, by syntactically moving things around-- kind of like what we have seen in modus ponens for propositional logic, right? So what we would like to do is, we'd like to focus on applying modus ponens to first order logic when we are in a scenario where we have only Horn clauses. And if you remember Horn clauses, they were definite clauses and goal clauses. And definite clauses are of the form of having some set of propositional symbols-- p1 and p2, for example, positive literals ANDed with each other-- implying a new positive literal. So how do we extend that idea of a definite clause to the space of first order logic? So if you want to look at definite clauses in first order logic, we are going to have a set of variables, and we're going to have quantifiers on top of them. So for example, this is an example of a definite clause-- where we talk about for all x, for all y, for all z. And then we have these predicates-- Takes x and y, ANDed with another predicate, Covers y and z. And that implies a whole new predicate, Knows x and z. So we have kind of like these atomic formulas ANDed with each other, and we have a set of quantifiers outside, and this implication. So basically if you propositionalize here, we get one formula for each value of x, y, and z. So if you remember propositionalization from the last module, what we can do is, we can basically think about x, y, and z taking specific values, like x being "alice," and y being "cs221," and z being "mdp." And if you think about each one of these values, like each one of these formulas taking one value for each x, for each y, and for each z, then we end up with propositional logic formulas that actually end up being definite clauses. But we would like to be able to represent this in its more expressive way. And because of that, we are defining definite clauses in first order logic using these variables, and using these quantifiers, and so on. So more formally, a definite clause has the following form. So it has this form of having a for-all quantifier-- for all x1 through xn, where x1 through xn are variables. And we have these atomic formulas-- a1 through ak and b-- all of these are atomic formulas. And we are ANDing these atomic formulas a1 through ak, and that implies b. And remember, these atomic formulas actually contain these variables, x1 through xn. So they actually have x1 through xn inside of them, contain them-- kind of like this example up here. All right, so this is a definite clause in first order logic.
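In symbols, the general form and the lecture's example are:

```latex
\forall x_1 \cdots \forall x_n \; \left( a_1 \wedge \cdots \wedge a_k \right) \rightarrow b
\qquad\text{e.g.}\qquad
\forall x \, \forall y \, \forall z \; \big( \mathrm{Takes}(x, y) \wedge \mathrm{Covers}(y, z) \big) \rightarrow \mathrm{Knows}(x, z)
```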
So how can we do modus ponens in first order logic? So if this is a definite clause-- for all x1 through xn, a1 ANDed through ak implies b-- one possible attempt, maybe our first attempt at looking at modus ponens, is: we have this, and in addition to that, maybe in our knowledge base we have a1 through ak. And based on that, we should be able to derive b. Based on these premises, maybe we can conclude b, OK? So does this work? Does this definition of modus ponens work? Let's look at an example. So it turns out that it actually doesn't work. So let's look at this example where we have P of alice-- so P is a predicate over alice, and maybe that defines our a1. And then we say, for all x, P of x implies Q of x, OK? And ideally, what should I get from this? Ideally I would like to get Q of alice from this. But I'm really not able to do that. Well, why am I not able to do that? Because remember, modus ponens is an inference rule. Inference rules don't really know anything about semantics or meanings. So they're basically just matching symbols. And if I'm just matching symbols, first off, P of alice has nothing to do with P of x. So I can't really match P of alice and P of x. So I'm kind of screwed, I can't apply this modus ponens idea on top of it. And then in addition to that, even if I could somehow say P of alice and P of x are the same thing, I'm not going to be able to get Q of alice here, because Q of alice and Q of x are very different things. So I can't infer Q of alice, and I also can't really match P of x and P of alice-- they don't really match here. So this modus ponens rule that I've written here just doesn't work. This is not the modus ponens that we should be using in first order logic. So how are we going to solve this? So there are two ideas that I'm going to be talking about in this module-- substitution and unification. And substitution and unification are the things that are going to improve our modus ponens and help us apply modus ponens in first order logic. So let's look at what they are. So what is substitution? So what substitution does is, it takes a substitution rule that substitutes a variable with a term, and it takes a formula, and it basically takes that formula and substitutes all those variables with the terms that it is given, OK? One thing to notice is, it's going to substitute a variable, like x, with a term. And what is a term? If you remember our module on the syntax of first order logic, a term is going to be either a constant symbol, or another variable, or a function, OK? So here in this example, alice is a constant symbol. So I'm replacing a variable, x, with a constant symbol, alice, OK? Here is another example. So I'm substituting x with alice, and I'm substituting y with z, with another variable-- in this formula, P of x, and K of x and y. So I'm doing find and replace. Basically, I'm doing find x, replace it by alice, find x, replace it by alice, find y, replace it by z. And that is what substitution does. So a substitution theta-- it's a mapping from variables to terms, and Substitute of theta and f returns basically the result of performing that substitution on a formula f, OK?
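Here is a minimal sketch of substitution in code, under one representational assumption that is mine, not the lecture's: atomic formulas and terms are nested tuples whose first element is the predicate or function name, and variables are strings prefixed with '$':

```python
def substitute(theta, f):
    """Apply a substitution theta (a dict from variables to terms) to f.
    f is a term or atomic formula, e.g. ('K', '$x', '$y') or ('F', '$x')."""
    if isinstance(f, tuple):                      # predicate or function application
        return (f[0],) + tuple(substitute(theta, arg) for arg in f[1:])
    if isinstance(f, str) and f.startswith('$'):  # a variable: replace it if mapped
        return theta.get(f, f)
    return f                                      # a constant symbol: unchanged

# Example from the lecture: substitute {x: alice, y: z} in K(x, y).
print(substitute({'$x': 'alice', '$y': '$z'}, ('K', '$x', '$y')))
# -> ('K', 'alice', '$z')
```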
All right. So what does unification do? So what unification does is, it takes two formulas, and it tries to match them as closely as possible. And unification returns a substitution rule that matches those formulas as closely as possible. So if I'm doing Unify of Knows alice arithmetic and Knows x arithmetic, I have these two formulas, I try to match them as closely as possible, and the substitution rule that matches these as closely as possible is: replace the variable x by alice, OK? So that is what I'm going to return. Let's look at another example. So I might have Unify of Knows alice y and Knows x z. So what is the substitution rule that gets me there? I'm going to get a substitution rule that says, replace variable x by alice, replace variable y by z. And that is going to be the substitution rule that I'll get out of unifying these two formulas. Here's another example. So I have Unify of Knows alice y, and Knows bob z. So this is going to return fail. The reason it's going to return fail is, I'm not going to be able to substitute a constant symbol by another constant symbol. Remember, we are substituting variables by terms, and there are no variables here to substitute. There are two constant symbols; those are two terms, right? So I'm not going to be able to substitute these, so I'm going to get fail from unification here. And here's another example. So I might have Knows alice and y, and Knows x and F of x-- a function here, right? So here, the substitution rule is: take the variable x, replace it by alice, and take variable y and replace it by F of alice. So I'm taking the most general form of this-- where I could have F of x here, but because I already know in my substitution rule that x needs to be replaced by alice, instead of putting F of x here I'm putting F of alice. I've already replaced x by alice, OK? So what is unification-- well, what does it do more formally? It takes two formulas, f and g, and it returns the substitution which is the most general unifier. So Unify of f and g, two formulas, returns a theta such that, if I do Substitute of theta and f, that gives me the same thing as Substitute of theta and g. And it returns fail if such a substitution doesn't exist, OK? So why am I defining these? So the reason I'm defining unification and substitution is, I can now modify my modus ponens, and I can use this idea of substitution and unification in order to make modus ponens work in first order logic. So here, I'm going to have a1 prime through ak prime-- atomic formulas different from a1 through ak-- and a b prime different from b. These are going to be different atomic formulas, OK? Specifically, if you think about it, these a1 prime through ak prime are groundings of this a1 through ak, which basically operate on these variables x. And b, again, operates on the variables x, and b prime you can think of as a grounding of b, OK? And then b prime and b, or a1 prime through ak prime and a1 through ak, they don't look the same, right? So that's why I can't syntactically just replace them by each other. But what I can do is, I can use substitution and unification. What I can do is-- first off, I can look at my a1 prime through ak prime-- my groundings-- and then these other atomic formulas, a1 through ak. And I can unify them. So once I unify them, I get a substitution rule, theta. And what I can do is, I can derive b prime. And what is b prime? b prime is going to be the result of substituting theta in b. And that is going to be my new modus ponens rule. So I'll end up getting a grounded version, b prime. And how do I get that? By substituting theta in b. And where do I get theta? I get theta by unifying a1 prime through ak prime, and a1 through ak, OK?
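Continuing the sketch from above (same tuple representation, same caveat that this is my illustration rather than course code), a simplified unifier-- no occurs check, and bindings are applied eagerly rather than fully normalized-- can look like this:

```python
def unify(f, g, theta=None):
    """Return a substitution (dict) unifying f and g, or None for fail."""
    theta = {} if theta is None else theta
    f, g = substitute(theta, f), substitute(theta, g)  # apply what we know so far
    if f == g:
        return theta
    if isinstance(f, str) and f.startswith('$'):       # f is a variable: bind it
        return {**theta, f: g}
    if isinstance(g, str) and g.startswith('$'):       # g is a variable: bind it
        return {**theta, g: f}
    if (isinstance(f, tuple) and isinstance(g, tuple)
            and len(f) == len(g) and f[0] == g[0]):    # same predicate or function
        for a, b in zip(f[1:], g[1:]):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None  # e.g. two different constant symbols: fail

# Unify(Knows(alice, y), Knows(x, F(x))) -> {x: alice, y: F(alice)}
print(unify(('Knows', 'alice', '$y'), ('Knows', '$x', ('F', '$x'))))
```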
Let's look at an example. So let's say that in my knowledge base, I have a premise that says alice takes cs221. So this is my first premise-- a1 prime-- which is a grounded version of Takes x and y. And then I have cs221 covers mdp; again, it's a grounded version of Covers y and z. So what I do is, first I do a unification of these two formulas and these two formulas. And based on that unification, I'm going to get a substitution rule. That substitution rule tells me: take variable x, replace it by alice; take variable y, replace it by cs221; variable z, replace it by mdp. And then what am I going to do? What are we going to return out of modus ponens? Modus ponens basically tells me that this is your b; you want to return kind of a modified version of b. And what is that modified version? That modified version is using this substitution rule over your b, or over this Knows x z. So if I substitute theta in Knows x z, I'm going to get Knows alice mdp-- alice knows mdp. And that is the thing I'm going to be returning-- that is the thing that I'm going to be deriving here, or proving here. And that's basically applying modus ponens in first order logic. So let's think about the complexity of this. What is the time complexity here, and how bad is this? So if you remember, when we were doing modus ponens in propositional logic, every time we were running modus ponens, we were adding one propositional symbol, right, in the propositional logic setting. Here, every time you're running modus ponens, you're only adding one atomic formula-- which is not bad, which is actually pretty good. And in addition to that, if you don't have any functions-- right, if there are no functions going on here-- then the number of these atomic formulas is, at most, the number of constant symbols that we have to the power of the maximum predicate arity. So in this example, for example, I might have P of x, y, and z. And maybe x takes 100 values, y takes 100 values, and z takes 100 values. So then, I'm going to get 100 to the power of 3, which is not bad. But the thing is, if there are functions here, then we actually end up with an infinite number of them being applied to each other. So this becomes unbounded. So if I have a function over a, I can keep applying that. And I end up with an infinite number of things being added in, because I can keep applying the function on it. So remember, for example, the Sum function that we saw. Earlier, in one of the examples, we had Sum of 3 and x, right? So I can keep applying Sum on top of itself, and almost recreate arithmetic by applying Sum on itself. But you are going to get an unbounded number of formulas here, which is not that great, OK?
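Putting the two sketches together, the modus ponens step on this example would look like the following (still under the same illustrative representation):

```python
def modus_ponens(premises, antecedents, consequent):
    """Unify the grounded premises a1'..ak' with the rule's a1..ak,
    then return the substituted consequent b' = Substitute(theta, b)."""
    theta = {}
    for prem, ant in zip(premises, antecedents):
        theta = unify(prem, ant, theta)
        if theta is None:
            return None  # the premises don't match the rule
    return substitute(theta, consequent)

# Takes(alice, cs221), Covers(cs221, mdp) with the rule
# forall x, y, z: Takes(x, y) AND Covers(y, z) -> Knows(x, z)
print(modus_ponens(
    [('Takes', 'alice', 'cs221'), ('Covers', 'cs221', 'mdp')],
    [('Takes', '$x', '$y'), ('Covers', '$y', '$z')],
    ('Knows', '$x', '$z')))
# -> ('Knows', 'alice', 'mdp')
```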
All right, what else do we know about modus ponens? What do we know about modus ponens in this space of first order logic? So what we know is, modus ponens turns out to be complete for first order logic with only Horn clauses. This is a similar type of completeness to what we have when we look at modus ponens in propositional logic. Again, if we are limited to Horn clauses, we have completeness in first order logic as well. In addition to that, we know that first order logic-- even when it is restricted to only Horn clauses-- is semi-decidable. So what does that mean? What that means is: suppose we want to figure out whether our knowledge base entails f or not. If it actually entails f, and we keep doing forward inference-- we keep trying to derive a new formula until convergence using modus ponens-- this forward inference, using these complete inference rules until we get f, takes finite time. So if my knowledge base actually entails f, I should be able to derive f in finite time. I should be able to prove f by just inference rules in finite time, which is pretty nice. But the difficulty that comes with semi-decidability is: if the knowledge base doesn't entail f-- and I might not know whether the knowledge base entails f or not; if I don't know, and actually the knowledge base doesn't entail f-- it turns out that there are no algorithms that can show this in finite time, OK? And this is actually kind of related to the halting problem. So actually, people have shown that there are no algorithms that could do this in finite time, and we are kind of screwed in that case, OK? So in general, this is not too bad. In general, you can think about having a budget for the amount of time that you're going to run your inference rules, and see if you get lucky; if KB actually entails f, you're going to be able to get f in finite time. So you could actually run modus ponens with first order logic, if you have Horn clauses. And it does work in some instances, when KB actually entails f. But then in the next module, what I would like to talk about is, we want to go beyond modus ponens and we want to talk about resolution-- specifically, how resolution would work in first order logic.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Search_1_Dynamic_Programming_Uniform_Cost_Search_Stanford_CS221_AI_Autumn_2019.txt
Hi everyone, I'm Dorsa, uh, and this week I'll be teaching the state-based models, and the plan is for the next couple of weeks for me to, to teach the state-based models, MDPs, uh, and games, and then after that Percy will come back and talk about some of the later topics. So a few announcements. Uh, so homework 3 is out. So just make sure to look at that. And then the grades for homework 1 will be coming out soon. So just, yeah, be aware of that. All right. So, so let's talk about state-based models, let's talk about search. So just to start, I was thinking maybe we can start with this question. Uh, if you can, let me reset this. So basically, okay, let me tell you what the question is, and then think about it, and then after that I will get this working. So, so the question is: you have a farmer, and the farmer has a cabbage, a goat, and a wolf, and he's on one side of the river. Everything is on one side of the river. So you have this river. We have a farmer. We have the farmer with a cabbage, with a goat, and with a wolf, okay. And the farmer wants to go to the other side of the river and take everything with, with, with himself, um, and- but the thing is, the farmer has a boat, and in that boat can only fit two things. So the farmer can be in it with, with one of these other things, okay? So the question is, how many crossings does the farmer need to take everything to the other side of the river? And there are a bunch of constraints. The constraint is, if you leave the cabbage and goat together, the goat is going to eat the cabbage. So you can't really do that. If you leave the wolf with the goat, the wolf is going to eat the goat; you can't really do that. How many crossings should you take to take everything to the other side? Think about it, talk to your neighbors, I'll get this working. Everyone clear on the question? Okay. So the link doesn't work because, uh, we can't connect to the Internet, but all right, so. Okay. So how many people think it is four? Four crossings. Five, five crossings. Six, six. Some people think six. Seven? More people. No solution? No solution. Okay. So the point is actually not, like, what the answer is; we'll come back to this question and try to solve it. But I think the important point to, to think about right now is how you went about solving it. So, so what were you thinking, and what was the process that you were going through when you were trying to solve, solve this problem? And that is kind of the commonality that search problems have, and we want to think about those types of problems where it's, it's more challenging to answer these types of questions than, let's say, reflex-based types of questions. So, so that's kind of just a motivating example that we'll come back to later. And here's an XKCD on this. So basically, one potential solution is the farmer takes the goat, goes to the other side, comes back, takes the cabbage, goes to the other side, and just leaves the wolf, because why would he need a wolf, why would a farmer need a wolf? So [LAUGHTER] if you answered four, you probably were thinking about this. [LAUGHTER] And I guess it has, like, an interesting point in it, because sometimes maybe you should change the problem. Your model is completely wrong. Maybe, maybe sometimes you should rethink and go back to your model and try to fix that. But anyways. So we'll come back to this question. So all right. So this was our guideline for the class, and, and we have already talked about the reflex-based models.
So we have talked about machine learning and how that can get applied, and now we want to start talking about state-based models. This week, we're going to talk about search problems, next week, MDPs, and then the week after we're going to talk about games. If you remember the kind of the guideline that, that we had for the class was, uh, we were thinking about these three different paradigms of, of modeling, all right, we talked about this already. So modeling, inference, and learning. So for, for reflex-based models we talked about this already, right? So what would the model be, well, it can be a linear predictor or it can be a neural network. So, so that was a model. And then we talked about inference but in the case of reflex-based models it was really simple, it was just function evaluation. You had, you had your neural network and you would just go about evaluating it and that was inference. And we also spent some time talking about learning. So how would we use like let's say gradient descent to try to fit the parameters of the model, okay. So similar thing with search-based models. You want to talk about these three different paradigms that we have in the class, and, and the plan is to talk about models and inference today and then on Wednesday we'll talk about learning. We kind of have the same sort of format next week too. So we're going to start talking about modeling and inference on Mondays, Wednesdays are going to be about learning. So, so just to give you an idea of what the plan is. All right. So, so what are search problems? Let's start with a few motivating examples. So, so one potential example one can think of is, is route finding. So you might have a map and you want to go from point A to point B on the map, and you have an objective. So you want to maybe find the shortest path or the fastest path or most scenic path. That is your objective and the things you can do is you can take a bunch of actions. So you can do things like go straight, turn left, turn right, and then the answer for the search problem is going to be a sequence of actions. If, if you want to go from A to B with the shortest path, the answer that one would give is maybe turn right first and then turn left and then right again or any, any of these sequences. Okay so, so this is just a canonical example of what a search problem is. There are a few other examples. So for example you can think of robot, robot motion planning. So if you have a robot that wants to go from point A to point B, then it might want to have different objectives for doing that. So again the question might be what is the fastest way of doing it or what is the most energy efficient way of getting the robot to do that or, or what is the safest way of doing it. Like another question that we are interested in is what is the most expressive or, or legible way of robot doing it so, so people can understand what the robot really wants. So you might have again various types of objectives you can formalize that, and then the actions that, that you can take in the case of the robot motion planning is the robot is going to have different joints, and each one of the joints can translate and can rotate. So translation and rotation are the type of actions that you can take. So, so in this case I have a robot with seven, seven joints and then I need to tell what each one of those joints should do in terms of translation and rotation. That's your robot? This is my robot, yes. [LAUGHTER] It's a fetch robot. [LAUGHTER] All right. 
So, so let's look at another example. So games is, is a fun example. So you might, uh, think about something like a Rubik's cube or, or this 15-puzzle, and again, what do you wanna do as a search problem? Well, you wanna, you wanna end up in a configuration that's desirable, right? So you wanna end up in a configuration where, where you have this type of, ah, configuration of the Rubik's cube or, or the 15-puzzle. So that, that is the goal, that's the objective. And then the action is you can move pieces around here. So, so the sequence of actions might be how you're moving these pieces around to get that particular configuration of the 15-puzzle, okay. So again, another example of what a search problem is. Um, machine translation is, is an interesting one; it's not necessarily the most natural thing you might think about when you think about search problems, but actually you can think about it as a search problem again. So imagine you have a phrase in a different language and you want to translate it to English. So what is the objective here? Well, you can think of the objective as getting to fluent English and preserving meaning. So, so that is the objective that one would have in machine translation. Um, and, and then the type of actions that you're taking is you're appending words. So you start with "the" and then you're appending "blue" to it and you're appending "house" to it. So, so as you're appending the- these different, different words, those are the actions that you're taking. So, so in some sense you can have any complex sequential task, and, and the sequence of actions that you would take to get to your objective is going to be the answer for, for your search problem, and you can pose it as a search problem, okay? All right. So, so what is different between, let's say, reflex-based models and, and search problems? So, so if you remember, with reflex-based models the idea was you'd have an input x and then we wanted to find this f, for example a classifier, that, that would output something like, like this y, which is a label, a plus 1 or minus 1. So, so the common thing in, in these reflex-based models was we were outputting this, this one label, this one, in this case an action, being minus 1 or plus 1. Again, in search problems, the idea is I'm given an input, I'm given a state, and then given that I have that state, what I wanna output is a sequence of actions. So I do want to think about what happens if I take this action, like, how is that going to affect the future of my actions? Okay. So, so the key idea in search problems is you need to consider future consequences of, of the actions you take at the current state. Yes. Is this, like, not equivalent to, like, just outputting one thing and then, like, rerunning the function on, like, the updated state? So if you rerun it- So, so the question is, yeah, is it not the same as, like, I'm rerunning it, I output a thing and then I rerun it again. And you could do that, but that ends up being a little bit of a- that would be something similar to a greedy algorithm, where, like, let's say I want to get to the door and I want to find, find the fastest way, and right now if I just look at, like, my current state, maybe I think the fastest way of getting there is going this way. But if I actually think about a horizon and I think about how this action is going to affect my future, I might come up with a different sequence of actions. Okay? All right. Okay.
So you've already seen this paradigm, so let's start talking about modeling and inference. This is the plan for today: we're going to talk about three different algorithms for doing inference for search problems. We'll start with tree search, which is the most naive thing one could do to solve these search problems, but it's the simplest place to start. After that we'll look at improvements on it using dynamic programming and uniform cost search.

So is the difference between a search-based problem and a reflex-based problem the fact that in a reflex-based problem, the output you give doesn't influence anything further, but in search it does?

Yeah, that's true. The output you get in a search problem is an action that actually influences your future. That's a good way of thinking about it. Yes. All right.

So let's talk about tree search. Let's go back to our favorite example: the farmer, cabbage, goat, and wolf. Let's think about all the possible actions one can take when we have this farmer, cabbage, goat, and wolf. A bunch of things we can do: the farmer can go to the other side of the river with the boat, alone — the triangle here just means going to the other side of the river. The farmer can take the cabbage: C is for cabbage, G is for goat, W is for wolf. So another possible action is the farmer takes the cabbage, or the farmer takes the goat, or the farmer takes the wolf, and goes to the other side of the river. We also have a bunch of other actions: the farmer can come back, can come back with the cabbage, come back with the goat, come back with the wolf. So I'm basically enumerating all possible actions one could ever do. And sure, some of these might not be possible in particular states, but I'm just creating this library of actions, things that are possible. Okay.

So when we think about this as a search problem, we can create a search tree, which starts from an initial state of where things are, and then we can think about where we could go from that initial state. The search tree is really a what-if tree, which lets you think about the possible options you can take. Conceptually, you start with your initial state, where everything is on one side of the river — those two blue lines are the river. And you can take a bunch of actions. One possible action is to take the cabbage and go to the other side of the river, and you end up in that state. And that state is not a good state — I'm making it red. Why is that? Because the wolf is going to eat the goat. That's not great. Okay. And every action, every crossing, let's say, has a cost of one. So that one you see on the edge is the cost of that action. Okay. So that didn't work out that well. What else can I do? Well, I can take another action. From the initial state, I can take the goat and go to the other side of the river; that ends up in this configuration. From there the farmer can come back, take the cabbage, go to the other side, end up in this configuration, and the farmer can come back.
That's again not a great state, because the cabbage and goat are left on the other side of the river, and the goat is going to eat the cabbage. That's not great. What else can I do? Well, the farmer can come back with the goat. And once the farmer comes back with the goat, the farmer leaves the goat, takes the wolf, goes to the other side, comes back, and gets the goat again. And then, boom, you're done. Okay. So how many steps does this take? Well: one, two, three, four, five, six, and seven. So the ones who answered seven, that was the right answer. And that is the idea of getting to this end state. Yes?

Should we specifically not include the option of going back to the previous state, even though that's a valid next step, just because we know there's something wrong with it?

So you could have this giant tree where you go to different states, but we can actually keep track of whether we have visited a state, and if we have, maybe we don't want to go there again, because we have already explored all the possible actions from there.

You're not done with this tree though, right? I've found this good state here, but maybe there's a better way of getting there.

I don't know yet — I haven't explored everything. So what I can do is explore all the other things one could do. I'm not going to go over them, but there is another solution, and it turns out that other solution also takes seven steps. So it's not a better solution, but you've got to do all of that work, because there could have been another solution later on that was better than seven steps. Okay. All right. Yes?

Are these slides up?

They are — they should be. Okay, slides are up. Okay. All right.

I'm just asking [inaudible]

Oh, that's a very good point, thank you. [LAUGHTER] So for SCPD students, I'll try to repeat the questions — I always forget this. The question was whether the slides are up; they should be. Okay. All right.

So, going back to our search problem, we can try to formalize it. Let's think about it more formally. What are the things we need to keep track of? We have a start state, so let's define s_start to be the start state. In addition to that, we can define a function called Actions which returns all possible actions from a state: Actions(s) tells me, if I'm in state s, which actions I can take from there. I can define a cost function, Cost(s, a), which takes a state and an action and tells me what the cost is — in this example the cost of crossing the river was just one, but you can imagine different cost values. We can have a successor function, Succ(s, a), which takes a state and an action and tells us where we end up: if I'm in state s and I take action a, where do I end up? And then we define an IsEnd(s) function, which checks whether we're in an end state, where there are no further actions to take. Yes?

So is this like a finite state machine?

You can think of it that way, yeah — as a finite-state-machine type of way of looking at it.
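To collect those pieces in one place, here is a minimal sketch of that formalism as a Python interface. The method names follow the lecture's conventions, but the exact signatures are my assumption:

```python
class SearchProblem:
    """Abstract interface for a search problem (a sketch, not lecture code verbatim)."""

    def startState(self):
        # The state we begin in (s_start).
        raise NotImplementedError

    def isEnd(self, state):
        # True if `state` is an end state: nothing left to do.
        raise NotImplementedError

    def succAndCost(self, state):
        # List of (action, newState, cost) triples available from `state`.
        # This bundles Actions(s), Cost(s, a), and Succ(s, a) into one call.
        raise NotImplementedError
```

Every concrete problem — the river crossing, or the tram problem coming up — just fills in these three methods, and the algorithms we write only ever talk to this interface.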
We use a similar type of formalism for MDPs and games too, so it's a good idea to get these formalisms right: start state, transitions, costs, those sorts of things. Okay. Yes?

What does the cost depend on? Position and an action — but doesn't the action already depend on the state?

So yes, the action depends on the state. You start from the start state, where you haven't taken any actions, and from that start state you can think about all possible actions you can take. Those actions depend on the current state, but not on future states, right? Based on the current state — everything is on one side of the river — I can think about all possible actions I can take and where I'd end up. And then the next action depends on that. So it's a sequential thing. Okay. Yes?

If you have all the information on the actions and the costs beforehand, how is this conceptually different from something like a min-cost-flow convex optimization?

So how is this different from an optimization-type formulation? We have an objective here, and based on what that objective is, we can have different methods for solving it, right? You can absolutely formulate this as an optimization problem, where you look for the solution to the search problem as the solution of an optimization problem — that's a perfectly fine way of looking at it. And we're going to talk about various methods for solving this problem today. Okay. All right.

So let's look at another example. This is a transportation problem. We have street blocks numbered 1 through N — 1, 2, 3, 4, and so on, with N at the end — and we want to travel from 1 to N. We have two possible actions. At any state s, I can either walk, and if I walk I end up at s + 1 — so if I'm at 3, I end up at 4 — and walking takes one minute. Or I can take this magic tram, and the magic tram takes any state s to 2 times s. So if I'm at 3, I end up at 6 by taking the tram. And the magic tram always takes two minutes, no matter from where to where. So if I'm at 2 I end up at 4; if I'm at 5 I can end up at 10 by taking the tram. Okay? So I have two possible actions in any of these states, and I want to go from 1 to N in the shortest time possible — with the least total cost. That's the problem. Makes sense? Okay. All right.

So this is the search problem, and the first thing we want to do is formalize it. I'm going to do that here — I'm not going to do live coding, because I'm not Percy, and I did that once and it was a disaster. [LAUGHTER] I taped these in 2018. But we're going to go over it together. So we're going to define the search problem, this tram problem, as a class for transportation problems; a sketch of the result follows below.
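Here is a sketch of where that class ends up — the walkthrough that follows builds it up piece by piece (assuming the SearchProblem interface sketched earlier; details may differ from the lecture code):

```python
class TransportationProblem(SearchProblem):
    def __init__(self, N):
        self.N = N  # number of street blocks

    def startState(self):
        return 1  # we start at block 1

    def isEnd(self, state):
        return state == self.N  # done once we reach block N

    def succAndCost(self, state):
        # Return (action, newState, cost) triples, never passing block N.
        result = []
        if state + 1 <= self.N:
            result.append(('walk', state + 1, 1))  # walking takes one minute
        if 2 * state <= self.N:
            result.append(('tram', 2 * state, 2))  # the magic tram takes two minutes
        return result
```

With N = 10, succAndCost(3) should give [('walk', 4, 1), ('tram', 6, 2)], and succAndCost(9) only [('walk', 10, 1)] — exactly the sanity checks done just below.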
So we're going to separate our search problems from our algorithms, because remember, modeling is separate from inference. Let's have a constructor for this transportation problem. It takes N, because we have N blocks; N is the number of blocks. Okay. Then we have a start state — we're starting from block 1. And we need to define an isEnd function, which checks whether you've reached N or not, because we have to get to the Nth block. Okay. What else do we need? We have a successor function, and we also have a cost function. I'm going to put them together, because that's just easier. So the successor-and-cost function — give it a state s — is going to return triples of (action, new state, cost). I give it a state, say the initial state, and it returns all the possible actions, the new states I can end up in, and how much each costs. Okay. So what are my options? Well, from state s, I can walk to s + 1, and that costs 1. Or from state s, I can take the tram, end up at 2s, and that costs 2. That's how I'm creating my triples. And I need to check that I don't pass the Nth block — remember, we have N blocks, so we don't want to go beyond that. That check just makes sure we stay at or below block N. And this is what my successor-and-cost function returns: the triples. So let's return that. Okay. That is my transportation problem.

Let's make sure it does what we want. Say we have 10 blocks, and now I want to print what my successor-and-cost function returns. Say I call it on 3 — what should I get? From 3 I have two actions, right? I can either walk or take the tram. If I walk, it costs 1 and I end up at 4; if I take the tram, it costs 2 and I end up at 6. Let's also try, I don't know, 9. If I'm in state 9, I can only do one thing — I can walk — because the number of blocks is 10 and I can't go beyond that. All right.

So that was just defining the search problem, okay? I haven't told you how to solve it; we're just doing the modeling right now. We modeled this problem, we coded it up. Modeling it means: what are the actions, what is the successor function, what is the cost function, defining an isEnd function, and saying what the initial state is, okay? So now I think we're ready to think about the algorithms for actually solving these types of search problems, okay?

The simplest algorithm we'll talk about is backtracking search. The idea of backtracking search — maybe I can draw a tree here — is that you start from an initial state and you have a bunch of possible actions. Then you end up in some state and have a bunch of other possible actions — say two actions at each state. And this exponentially blows up, so I'm going to stop drawing soon. [LAUGHTER] All right. So we create this tree, and this tree has some branching factor — that's the number of actions you have at every state — and it also has some depth, which is how many levels down you go.
Let me call that depth D, okay? The solutions are down at the nodes of this tree, and we want to figure out what they are. Backtracking search does the simplest thing possible. It starts from the initial state and goes all the way down; if it doesn't find a solution, it goes back up and tries again and again. And it's going to go over all of the tree, because there might be a better solution somewhere else, so it needs to visit the entire tree, okay?

I'm going to keep a table of algorithms, because we're going to talk about a few of them: the algorithm, what sort of costs it allows, how bad it is in terms of time, and how bad it is in terms of space. If you've taken an algorithms course, some of these are probably familiar. All right. So we talked about backtracking search: the algorithm that goes through pretty much everything. It allows any type of cost — I can put pretty much any cost I want on these edges, because I'm going over the whole tree anyway; it doesn't matter what the costs are, okay?

So how bad is this in terms of time? I'm going over the full tree, so there's an exponential blowup: it's order b to the D, where b is my branching factor and D is the depth of the tree, okay? So in terms of time this is not a good algorithm — I have to go over everything in the tree, and that's the size of my tree. And in terms of space, what I mean is: I need to remember the sequence of actions I took to get to a solution. Say my solution is down here. To know how I got here, the things I need to store are the ancestors of this node, and that's a path of depth D. So in terms of space, this algorithm takes order D, okay? That's what I need in memory to be able to recover the solution when I get there. Yes?

Question: because we need to look at everything, shouldn't the space be b to the D as well? Until you get to the solution, don't you need the space to hold everything?

No — actually, we'll talk about breadth-first search later, which does require larger space. The reason you can forget things here is that the only history I need to keep track of is this particular branch. I don't need to keep the history of all the other nodes; I can throw those out. For something like breadth-first search, which we'll get to in a few slides, you do need to keep track of everything else — let me come back to that. But for this one, the idea is: I want to know how I got here, and for that I just need to know the parents. Yes?

[inaudible] Is it the minimum cost to reach a point, or is it to find whether you can or cannot reach a certain point in your search?

It depends on what your objective is.
It really depends on what the search problem is asking. In the case of the farmer-goat example, the search problem asks you to move everything to the other side of the river — that's one criterion — and you want to find the minimum-cost way of doing it — that's the other criterion. So it depends on what the search problem is asking; some of these nodes might be solutions and some might not. Okay? All right.

So let's look at these numbers on the slide. The memory is order D — it's actually small, which is nice. In terms of time, this is not a great algorithm, right? Even if your branching factor is 2, if the depth of the tree is 50, this blows up immediately. A lot of the tree search algorithms we're going to talk about have the same problem — they have pretty much the same time complexity, and we'll only look at fairly minimal improvements of them. After that we'll talk about dynamic programming and uniform cost search, which are polynomial algorithms and much better, okay? All right.

So let's go back to the tram example and write up what backtracking search does. We defined our model; the model is the search problem — this particular transportation search problem, though it could be anything else. Now we're going to have a main section where we put our algorithms, and we'll write them as generally as possible so we can apply them to other search problems, okay? So let's define backtracking search. It takes a search problem — it can take the transportation problem, okay?

In backtracking search, what we do is recurse on each state, given the history of how we got there and the total cost it took to get there, okay? So at a state, having accumulated some history and some cost so far, we recurse on that state and look at its children — we explore the rest of the subtree from that particular state, okay? How do we do that? Well, we need to check whether we're in an end state, and if we are, we can update the best solution so far. Let me put that as a to-do. So there are a few things to do: figure out if we're in an end state; if we are, update our best solution; if we're not, recurse on the children, okay? We can fill that in later.

In general, we call this recurse function on the start state. So let's do that: backtracking search calls recurse on the initial state, with a history of nothing — we don't have any history yet — and a cost of 0 so far, because we haven't gone anywhere. So we start with the start state and call recurse on it, okay? And how do we recurse on the children? Well, we defined the successor-and-cost function, so by calling it on the state we get (action, new state, cost) triples. And then we can recurse on each new state.
I'm not going to write out the history bookkeeping every time — I guess I am putting it in this one; [LAUGHTER] in the later ones I won't. But basically the history keeps track of how you got there, and the total cost is what you've accumulated so far plus the cost of this new state-action pair, okay?

Okay. We need to keep track of the best solution so far, so I'm defining a dictionary here to hold it — partly for Python scoping reasons. And the place we update the best solution so far is that to-do we left: if we're in an end state, we can update the best solution so far, okay? What goes in the best solution? We want to know the cost, so we start with a cost of infinity — anything below infinity is better — and we start with an empty history, which we'll fill in too, okay? That's the initialization of the best solution so far. Then we update it: if we're in an end state and the total cost we have right now is smaller than the best solution so far, we update the best solution, and we update its history with the current history, okay? All right. And that's it — that's backtracking search. Actually, no — we've also got to return the best solution so far. Mm-hmm.

So let's make sure it does the right thing. We've defined a transportation problem; now I want to call backtracking search on it. I also need a print function, so I'll write a generic one we can call for any of these problems. Let's define a printSolution function that prints things nicely: it takes the solution, unpacks the cost and history, and prints them. Okay. I can use this printSolution for pretty much all the other algorithms we'll talk about today too.

So now I have my print function, my backtracking search algorithm, and my transportation problem. I call it on the transportation problem with 10 blocks. As you can see, the total cost is 6. What this means is that to go from block 1 to block 10, the best solution is: walk, walk, walk, walk, and after that take the tram — because I end up at 5, and at that point it's worth taking the tram and paying the tram cost.

Let's try 20 — what do you think the answer is for 20? [LAUGHTER] Similar to before: walk until we get to 5, then take the tram, then take the tram again. The cost is 8. And 100 is a little more interesting: you walk, take the tram a few times, and get to 24; then you walk that one step to get to 25 — which is the good state, because from there you can just keep doubling — and take the tram again, okay?
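Putting all the pieces just described together, here is a consolidated sketch of backtracking search and the print helper. The bookkeeping details are one reasonable reconstruction, not a verbatim copy of the lecture code:

```python
def backtrackingSearch(problem):
    # Best solution found so far; a dict so the nested function can mutate it
    # (this is the "for Python scoping reasons" trick mentioned above).
    best = {'cost': float('inf'), 'history': None}

    def recurse(state, history, totalCost):
        if problem.isEnd(state):
            if totalCost < best['cost']:  # found a cheaper complete solution
                best['cost'] = totalCost
                best['history'] = history
            return
        for action, newState, cost in problem.succAndCost(state):
            recurse(newState, history + [(action, newState, cost)], totalCost + cost)

    recurse(problem.startState(), history=[], totalCost=0)
    return (best['cost'], best['history'])


def printSolution(solution):
    totalCost, history = solution
    print('totalCost:', totalCost)
    for item in history or []:  # tolerate algorithms that don't track history
        print(item)
```

Calling printSolution(backtrackingSearch(TransportationProblem(10))) should report a total cost of 6, matching the run described above.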
So what if I want to try a much larger number of blocks? Is this going to work? No — because remember, the time was order b to the D. That wasn't great. Let's try it anyway. Well, we hit Python's maximum recursion depth; we can fix that. [LAUGHTER] You can set the recursion limit to whatever you want, so let's try that. Is this going to work now? [LAUGHTER] Now it's just going to take a long time, right? It's not going to give you an answer — [LAUGHTER] it's just going to run for a very long time. All right. [LAUGHTER]

Okay, let's go back. So that was backtracking search. All it was doing was going over this entire tree, and it was taking exponential time, as you saw, and we tried it out on the transportation problem we defined. We defined a search problem and used this really simple search algorithm to find solutions for it — that's what we have so far. Now we want to come up with a few improvements on backtracking search. Don't get your hopes up — it's not that big of an improvement — but we can do something better.

The first improvement uses an algorithm called depth-first search, as some of you might have heard: DFS. The restriction that DFS puts in is that your cost has to be 0. Let me draw a line in the table so things don't get mixed up. So we're talking about DFS now, and the restriction is that the cost has to be 0. What DFS does is exactly the same thing as backtracking search, but once it finds a solution, it's done — it doesn't explore the rest of the tree. And the reason it can do that is that the cost of every edge is 0. If all the edge costs are 0, then once I find a solution, I've found a solution — I don't need to look for a better one, because anything else I find also has cost 0, so I might as well return this one. An example: with a Rubik's cube, if you find a solution, you've found a solution, right? There are a million different ways of getting to a solution, but you just want one, and if you find one, you're happy, you're done. Okay.

So as you can see, this is a very, very slight improvement on backtracking search. In terms of space it's still the same, order D — nothing has changed, which is pretty good. In terms of time, in practice it is better, right? Because in practice, if I find a solution, I can just stop and not worry about the rest of the tree. But in theory, the worst case is still trying the whole tree, so we write it as worst-case order b to the D. Nothing has really changed in terms of the exponential blowup. Yes?

From how you draw that tree, it seems you imply that the subproblems do not overlap, right? But in fact the subproblems could overlap — for example, in the tram problem you can get to the same place through different histories, but the rest is the same.

Yeah — so the question is: do subproblems overlap here, or don't they?
You could have a setting where subproblems do overlap, and then you could add an extra constraint that says: if I've visited a state, don't add it to the tree again. You have that option, or the option of going down the tree only to some particular depth and not trying everything. In the setting we have here, we're basically trying everything — I'm describing the most general form, where you go over all the states and all the possible actions out of them, okay? All right.

So that was DFS. The idea of DFS, again, is that you do backtracking search and just stop when you find a solution, because the cost is 0, okay? In terms of space, it's order D; in terms of time, still order b to the D, okay? All right.

We have another algorithm called breadth-first search, BFS. This is useful when the cost is some constant — it doesn't need to be 0, just some positive constant c. What that means is all the edges have the same cost, and that cost is c. So I have the same cost pretty much everywhere, okay? The idea of breadth-first search is to go layer by layer. We're not going to dive into the depths; instead, we go layer by layer, try out a layer, and see if we find a solution there. Remember, the tree doesn't need to go all the way down everywhere; the tree could end at any of these nodes — it doesn't need to be nicely balanced. I can have a tree that looks like this, with a solution here, okay? If I have a tree like this, with breadth-first search I try this layer: is this node a solution? If not, I try this one, and this one, and when I find a solution here, I'm done, right? Because if I find a solution here, I know it took 2c to get here — two of these c values — and if there is any other solution anywhere else in these subtrees, those solutions are going to be worse: they'll have a higher cost, because the cost is constant throughout, okay?

So BFS is useful if your solutions are somewhere high up in the tree. In terms of time, I get some improvement, because I can call this shorter depth little d. The time is still exponential, but it's order b to the little d. And that is actually a huge improvement, because if you think about it, the tree gets exponentially larger as you go down — the lower levels have a lot of things to explore. If the branching factor is 10, each next layer has 10 times as many nodes in it, right? So going down those layers is pretty bad, and the fact that breadth-first search limits the time to a particular depth is pretty good. Still exponential, but pretty good. Yes?

[inaudible] If there are no negative costs, at that point you can also assume this is the best solution?

Yeah, exactly — you can assume this is the best solution. You are assuming there are no negative costs.
So at that point I know this is the best solution, and I'm done — I don't explore anything else. The problem with breadth-first search is — there's a question there, sorry.

Are you also assuming all the costs are the same?

Yeah, we're assuming all the costs are the same — say all the costs are 1. If I don't assume that, there might be some other, cheaper path somewhere else.

[inaudible]

Yeah — you need to explore the rest if they're not the same. That's what I mean. All right.

So the problem with BFS is that in terms of memory we are losing. In terms of memory, you need to keep track of all the nodes you have explored so far. So memory is going to be order b to the little d, similar to the time. The reason is: I've explored this node, and after exploring it, I still need to keep its history around, because when I try the next layer, I need to know everything about this parent. When I explore here and it's not a solution, I need to store everything about it, because maybe I don't find a solution at this level and I need to come back down, and when I come down, I need to know everything about these nodes. So I need to store pretty much the whole tree until I find my solution, and that's where you lose with breadth-first search. In terms of space it's not great — it's now order b to the little d, a lot worse than what we had. In terms of time, it is better — still exponential, but better, okay? All right.

Okay, so now let's talk about one more algorithm, and afterward we'll jump to dynamic programming. There's a question back there.

One thing though — the little d can be the same as the big D, right?

It can, yeah. So it is still exponential, I agree; little d can be the same as big D. But in practice, if little d is not the same as big D, we're winning a lot, because those lower layers are so bad. That's why people like to call out the fact that it's order b to the little d rather than big D. Yes?

Is there a reason why the worst case for time matters for DFS but not for BFS?

DFS needs to go all the way down to the lowest levels, but BFS can stop at each level, because it goes level by level.

Couldn't that be the worst-case scenario for BFS too [inaudible]?

Yeah — so, as you're saying, with DFS we were also saving some time in practice, so why don't we call that out? The reason is that with DFS you still have to get down to the lowest layers, and that's where you lose on time. Sure, you haven't explored some of the other branches, but you've already gone down those deep subtrees, and that's bad. That's why we call it order b to the big D in the worst case. Okay. All right.

So the last algorithm I want to talk about is a cool idea that tries to combine the benefits of BFS and DFS: DFS with iterative deepening.
What this algorithm does is go level by level, same as BFS, because that way, if you find a solution, you're done — everything is great, right? But for every level, it runs a full DFS. It feels like it's going to take a long time, but it's actually good, because again, if you find your solution early on, it doesn't matter that you've run many DFSs already. An analogy: imagine you have a dog, and that dog is DFS, and it's on a short leash. On that leash, it does a DFS and searches all the space it can reach, and it doesn't find anything. So it comes back, you extend the leash a little, it searches everything again with a DFS, comes back, doesn't find anything, you extend the leash again. That's the idea: extending the leash is extending your depth level, okay? So how does DFS with iterative deepening do? Yes?

If what we're looking for is at the bottom of the tree, is that going to be worse?

Yes, exactly — that's a good point. If your solution is way down here, you're in trouble. It's worse than BFS or DFS, right? You're running all these DFSs inside a bigger, level-by-level outer loop, and it's a terrible situation. But in practice, we're hoping the solutions don't end up deep down the tree. If they are down the tree, you're not winning anything by using iterative deepening.

For what kinds of problems do you think DFS with iterative deepening would be useful?

So the question is: on what problems is DFS with iterative deepening useful? In general, for problems where I think BFS would be useful, DFS with iterative deepening is usually useful too. The reason is that there's some structure in the problem that makes me expect to find my solution early. So if I have some reason, from the structure of the problem, to think solutions are at low depth, I should use one of these algorithms — and DFS with iterative deepening helps in terms of space too, so I might as well use it.

All right. So in terms of space, it's order little d. And in terms of time, it gets the same benefit as BFS, order b to the little d — that's nice. And because it has this BFS-style outer loop, it has the same constraint on the cost: the cost has to be a constant, right?

So that is our table. Looking at this table, in terms of time you're just not doing well, right? These are all exponential-time algorithms. We can avoid the exponential space with something like DFS iterative deepening, but the time is still just not great, okay? What we want to do now is talk about search algorithms that somehow bring this exponential time down to polynomial time — and there is no magic; we'll talk about how. [LAUGHTER]
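Collecting the board's table of tree search algorithms so far, with b the branching factor, D the maximum depth, and d the depth of the shallowest solution:

    Algorithm                  Cost allowed      Time      Space
    Backtracking search        any               O(b^D)    O(D)
    DFS                        zero              O(b^D)    O(D)
    BFS                        constant c >= 0   O(b^d)    O(b^d)
    DFS iterative deepening    constant c >= 0   O(b^d)    O(d)

And here is a minimal sketch of the iterative deepening idea — a depth-limited DFS inside a loop that keeps lengthening the leash. It assumes, as in the lecture, that any end state found is acceptable because edge costs are constant; the function names are illustrative:

```python
def dfsIterativeDeepening(problem, maxDepth=100):
    def depthLimited(state, history, limit):
        if problem.isEnd(state):
            return history  # constant costs: the first solution found is fine
        if limit == 0:
            return None  # leash ran out on this branch
        for action, newState, cost in problem.succAndCost(state):
            result = depthLimited(newState, history + [(action, newState, cost)], limit - 1)
            if result is not None:
                return result
        return None

    for depth in range(maxDepth + 1):  # extend the leash one level at a time
        result = depthLimited(problem.startState(), [], depth)
        if result is not None:
            return result
    return None  # no solution within maxDepth
```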
And dynamic programming is the first such algorithm, okay? Yes?

Can you explain again why iterative deepening gives b to the d time but only order d space?

Yeah. The way iterative deepening works is: it sets the level to one, say. With level one, I do a full DFS down to that level, and because it's a DFS, the space is the same as DFS — I only need to remember the current branch. Then I set the level to two and do a full DFS again, okay? When I do a full DFS, in terms of space I only need to remember my parents, and that's why it's order d in space — say the depth where I find the solution is little d. And in terms of time, it's order b to the d, because once I find my solution at depth d, I'm done — I don't explore anything else. That is exponential, but exponential in the smaller depth rather than the full depth, similar to BFS. Yes?

I'm sorry, I still don't understand — let's say little d is the same as big D, and —

Okay, that's a very good question. So you're asking: if little d were the same as big D, if the solutions were down here, why am I differentiating between little d and big D? Is that what you're asking?

I'm asking: if the depth is quite large, why is the time only a function of b to the d? Why wouldn't it be d times b to the d?

Oh, I see what you're saying. When I perform DFS with iterative deepening, I run d separate DFSs. Sure, each is order b to the d, but I'm doing d of them, and if d is really large, I should account for that. I agree that is the right count. But the exponential is so bad that we just drop that extra factor of d — we don't even worry about it. It is true, though: strictly speaking there's an extra factor of d.

I kind of want to move on to dynamic programming, but last question there.

Presumably you're saving the work you've done during the prior iterations, so you're not really computing anything larger than order b to the big D, correct?

Yeah, that's right — the worst case is order b to the big D.

All right, let's move to dynamic programming. Okay. So what does dynamic programming do? I'll keep this diagram, because I might need it later, and just erase the parameters up here. The idea of dynamic programming — we already saw this in the first lecture — is: I have a state s, and I want to end up in some end state. To do that, I can take an action that takes me to s-prime: I can end up in s-prime by paying Cost(s, a). And from there, I can do a bunch of things — I don't know what yet — but I'll end up in some end state, okay? What I'm actually interested in computing, for this state s, is FutureCost(s), okay? And the remaining part of the path is FutureCost(s'). I don't know what it is yet, but I can just leave it written as FutureCost(s').
So if I want to find FutureCost(s) — I'll write Cost(s, a) on this edge. What is FutureCost(s) equal to? Well, it's going to be this Cost(s, a) — at state s, I take action a — plus FutureCost(s'). Again, I don't know what that is, but that's future Dorsa's problem. And you might ask: what is a? Where does a come from? How do I know what a is? I don't — I'm going to pick the a that minimizes this sum, so I put a min around it. Okay? So FutureCost(s) equals the minimum, over all possible actions, of Cost(s, a) plus FutureCost(s'). And it's 0 if you're in an end state — if IsEnd(s) is true, okay? If I'm already in an end state, there is no future cost. Otherwise, the future cost is the cost of going from s to the next state, plus the future cost computed from there, okay? That is how one would formalize this as a dynamic programming problem, okay? And how do I find what s' is? Well, I wrote the successor-and-cost function in my code — we know how to find the successor given that we're in state s and take action a. So s' is just the result of calling that successor function on s and a.

All right, let's look at a route finding example — a slightly different one. Say we want to find the minimum-cost path from city 1 to some city n, we can only move forward, and it costs c_ij to go from city i to city j. Okay? That's the new search problem. And this is how the search tree for it would look: I start from city 1, and I can end up in city 2 or 3 or 4. If I'm in city 2, I can end up in 3 or 4; if I'm in 3, I can end up in 4 — and so on. I can draw a much larger version: if we're going to city 7, I get this kind of tree. And just by looking at this tree, you see all these subtrees being repeated throughout. If you look at the future cost of 5, it's the same thing everywhere it appears, right? And if I use something like the tree search we've talked about, I'd have to explore this whole tree, and that would be really time-consuming. So the key insight is that this future cost value depends only on the state — only on where I am right now. Because of that, I can store it the first time I compute the future cost of 5, and from then on I just look it up instead of recomputing it, okay? So the observation is: future cost depends only on the current city. My state in this case is the current city, and that state is enough to compute future cost. Okay? All right.
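Written out, the recurrence derived above is:

    FutureCost(s) = 0                                                      if IsEnd(s)
    FutureCost(s) = min over a in Actions(s) of [ Cost(s, a) + FutureCost(Succ(s, a)) ]   otherwise

so the value at a state is defined purely in terms of the values at its successors — which is exactly what makes memoization possible.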
So if you think about what we've talked about so far, in these search problems we've been thinking of the state as the whole past sequence of actions — the history of everything you've taken. But for this problem, the state is just the current city, and that's enough. Because of that, you get exponential savings in time and space: I can compute the future cost of 5 once, collapse that whole tree into a graph, and solve my search problem on the graph instead of the whole tree. That's where the savings of dynamic programming come from.

And I want to emphasize this again — like I was saying, there's no magic happening here. The key idea is figuring out what your state is. It's genuinely important to think about what your state is. Here, we're assuming the state is a summary of all the past actions we've taken that is sufficient to choose the future optimally. That's a mouthful, but what it means is: the only reason dynamic programming works, for this particular example, is that the state as we defined it is enough for us to plan the future. I might have a different problem where the state, the way I define it, is not enough to plan for the future. If I want to use dynamic programming, I have to be smart about choosing my state, because that is the thing that determines the future. For example, in this problem I might visit city 1, then 3, then 4, and then 6, and to solve this particular search problem I just need to know that I'm in city 6 — that's enough. But maybe some other problem requires knowing 1, 3, 4, and 6, and then I'd need the full history. So this is where the saving comes from: figuring out what the state is and defining it well. Right? All right.

We'll come back to this notion of state and think about it more carefully, but before that, let's implement dynamic programming real quick. Let's go back to our tram problem. How do we do this? We're basically just writing the math over there into code — that's all. We define this futureCost function. If you're in an end state, return 0. If not, we add up the cost plus the future cost of s'. How do we get s'? We call the successor-and-cost function, which gives us (action, new state, cost) triples, and we take the minimum over all possible actions of cost plus futureCost(new state). That is literally what we have on the board, okay? And we return the result. So that is futureCost. What does the dynamicProgramming function return? It should return the future cost of the initial state, the start state — and it could return the history if you wanted; in this case I'm not returning the history. [LAUGHTER] Okay. So how do I get the savings? Well, I've got to put in a cache, right? That's the only way I get savings. So that is where I put the cache.
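Here is a sketch of the memoized version being described — same interface as before, and, like the lecture version, it returns only the cost, with no history:

```python
def dynamicProgramming(problem):
    cache = {}  # state -> future cost of that state (the memoization table)

    def futureCost(state):
        if problem.isEnd(state):
            return 0  # no future cost from an end state
        if state in cache:
            return cache[state]  # already computed: just look it up
        # min over actions a of Cost(s, a) + FutureCost(Succ(s, a))
        result = min(cost + futureCost(newState)
                     for action, newState, cost in problem.succAndCost(state))
        cache[state] = result
        return result

    return (futureCost(problem.startState()), None)  # history not tracked here
```

printSolution(dynamicProgramming(TransportationProblem(10))) should print the same total cost of 6 that backtracking search found.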
And if the state is already in the cache, I just return what's in the cache; otherwise I compute it. Any question there?

[inaudible] How are we getting future cost?

Say that again? Sorry, I didn't hear.

Future cost takes a state, but do we actually have a function anywhere that calculates future cost, or is that [inaudible]?

So future cost is this function we're defining, right? FutureCost of a state equals the minimum, over all possible actions, of Cost(state, action) plus FutureCost(s'), and s' comes from the successor-and-cost function. So the function calls itself recursively on the successors. All right. And then we do the caching properly, and now we have dynamic programming. So we can call this on our tram problem: printSolution of dynamicProgramming on our problem. You can play around with this; the only check I'm doing is that it gives the same solution as backtracking search, because I know how that behaves, right? Let's call it on ten — and yeah, it gave the same answer. So you can play around with this, okay? All right.

One assumption we're making here, just to point out, is that this graph is acyclic. That's an assumption we need when solving this with dynamic programming. The reason is: we need to compute this future cost, and to compute the future cost of s, I need to have already computed the future cost of s'. So there's a natural ordering that needs to exist among my states. If I have an example with cycles, I don't have that ordering, right? Say I want to go from A to D here, through B and C. If I want to compute the future cost of B, I don't know whether I should have computed the future cost of A first, or C first — what order do I go in? You need some way of ordering your states to compute these future costs and apply dynamic programming. That's why we can't have cycles with this algorithm. But we're going to talk about uniform cost search, which does allow cycles, in a few slides. Yes?

What is the runtime of the dynamic programming?

The runtime is actually polynomial in the number of states — order n.

O of n?

Yeah, O of n, where n is the number of states. Okay. All right.

So let's talk about the idea of states a little more, because this is actually interesting. Let's reiterate: a state is a summary of all past actions sufficient to choose future actions optimally, okay? Is everyone happy with what a state is? Now we want to figure out how we should define our state space, because, again, this is an important problem, right?
How we define the state space is the thing that makes dynamic programming work, so we've got to think about how to do that. Let's go back to this example and change it a little. It's the same setup — I'm going from city 1 to city n, I can only move forward, and it costs c_ij to go from city i to city j — and I'm going to add a constraint. The constraint is: I can't visit three odd cities in a row, okay? What that means is: say I'm at city 1 and I went to city 3. After that, can I go to city 7? No — under this constraint, I can't, right? So I want to define a state space that lets me keep track of this, so I can solve the new search problem with the new constraint. How should I do that? In the previous problem, without the constraint, our state was just the current city. The reason we cared about the current city is that, while solving the search problem, when we end up in a city, we need to know where to go from there — so we need the current city in general, right? For the previous problem, the current city was enough. But now it's not enough — I actually need to know something about my past, okay? Yes?

[inaudible] Keep a count of how many odd cities?

Yeah, that's actually a very good point. So one suggestion is: keep a count of odd cities. But maybe the first thing that comes to mind is something simpler. Maybe we say the state is (previous city, current city) — I'll write "previous city" to match the slide. So that's one possible option for the state, right? Because if I have this as my state, it's enough: if my current city is 3 and my previous city was 1, I know I shouldn't go to 7 — that's enough to make future decisions, okay? But there's a problem with this. What is the problem? I have n cities, right? The current city can take n possible values, and the previous city can also take n possible values. So if I think about the size of my state space, it's n squared with this choice of state, okay? And remember, we're doing this dynamic programming thing — we need to compute a future cost for all of those states. That's going to be big. But there's an improvement, and it's the one you suggested: I don't actually need the entire previous city, which has n options. I only need to know whether the previous city was odd or not — that's enough, right? I don't care if it was 1 or 3 or whatever; I just need to know whether it was odd. So another option for my state — I'll write it here — is (whether the previous city was odd, current city), okay? And I do need my current city again, right?
We need the current city because we need to know how to move on from there. And this brings down my state space — how? Well, what's the size of the state space now? The current-city part can take n possible values, and whether the previous city was odd is two values, right? So I just brought my state space down from n squared to 2n, and that's a good improvement. In general, when you pick these state spaces, you should pick the minimal thing that's sufficient for making decisions. It has to be a summary of the past actions that you need for future decisions, but pick the minimal one, because you're storing all of these, and it genuinely matters to pick the smallest one.

So here's an example of exactly that. My state is now this tuple of whether the previous city was odd, and the current city. If I start at city 1 — well, I don't have a previous city, and I'm at city 1. I can go to city 3 and end up in (odd, 3). I could try to go to city 7 — but that's not possible, because then I'd have visited three odd cities in a row — and so on for the rest of the tree. Yeah?

[inaudible]

So the way I'm counting this is: my state is a tuple of two things, right? Whether the previous city is odd or even — that's two options. And my current city — that has n possible options: city 1, city 2, city 3, and so on, so that's n. I have n options here and two options there; that's why the whole state space is 2 times n, okay? All right.

Okay, so let's try the next example. Don't just dive in — talk to your neighbors about it, and if you have ideas, let me know in a minute. So what's the difference here? We're still traveling from city 1 to city n, but the constraint has changed: now we want to visit at least three odd cities. The question is: what is the minimal state? Talk to your neighbors.

All right — any ideas? Any ideas? What's a possible state? Don't even worry about minimal for now — what do I need to keep track of?

The number of odd cities.

The number of odd cities? Yeah. And is that it — do I just need the number of odd cities?

And the current city.

Right — what I meant is I also need the current city. So one possible option — I'll write it here — is: since we want to visit at least three odd cities, and I need to know my current city (for these particular problems I always need to know where I am, so the current city is a given), one option is to just have a counter and keep counting the number of odd cities, okay? That could be one potential state. Yes?

Do the cities have to be different, or could it be one, three, one?

So the question is: do the cities need to be different? The way we've defined the problem, we're always moving forward. If I'm at 1, I can only move forward.
I can't stay where I am, and I can't go back — we're always moving forward. But when we talk about the state space, we're talking about the more general setting. Some of those 2n states might not even be reachable, but that's the way we're counting, okay? All right. So this is one option, but I can actually do better than this. Yes? You need at least three odd cities, then you need at least two odd cities, then you need at least one odd city, and then you're— And then you're done. Right. So the suggestion is: you start needing at least three odd cities, then at least two, then at least one, and then you're done. And that's exactly right. I only care whether I've hit three. If I have four odd cities, or five, as long as I have at least three, that's good enough, right? One odd city, two odd cities, three odd cities — anything above that is just "three plus." That's enough for me, okay? If I use the plain counter, the state space is n options for the current city times the number of odd cities, which is around n over 2, so it's going to be n squared over 2. But if I use this new suggestion, where I don't keep track of four, five, six, seven — just one, two, and "three plus" — then my state space becomes 3 times n. I can write that formally as s = (min(number of odd cities, 3), current city), and with this state space the size is 3n, okay? So again I brought n squared down to order n, and that's a nice improvement. Yes? Do you not also need an option for zero odd cities? [inaudible] Zero — we're starting from city 1, so we're already counting that in, but yeah, if you could have zero odd cities, that's a good point too. All right, I've got to move on. Okay, so that was that; this is how it looks. You can think of your state space again as a tuple of "I've visited one, two, three" and the cities. I have another example here that you can think about later and work out at home. The question is, again: you're going from city 1 to city n, and you want to visit more odd cities than even cities. What would be the minimal state space? We can talk about it offline. So the summary so far is that a state is a summary of past actions sufficient to choose future actions optimally. And dynamic programming isn't doing any magic, right? It's using this notion of state to bring an exponential-time algorithm down to a polynomial-time algorithm, with the trick of memoization and the trick of choosing the right state, okay? And we've talked about dynamic programming and how it only works for acyclic graphs. Now we want to spend a little bit of time talking about uniform cost search and how that can help with cycles. If you've seen Dijkstra's algorithm, this is very similar — it's basically Dijkstra's. All right. So let's actually talk about this. The observation here is that when we think about the cost of getting from the start state to some s', that is equal to the past cost of s plus the cost of going from s to s', okay?
In dynamic programming, we made sure we had an ordering and computed things in order, so we weren't worried about visiting a state multiple times. But in uniform cost search, we might visit a state multiple times, and if we have cycles, we don't know what order to go in. The order we can use is this: we can compute a past cost for each state and go over the states in order of increasing past cost, okay? So uniform cost search enumerates states in order of increasing past cost. And in this case we need to make an assumption: we need to assume that the costs are non-negative. I'm making that assumption for uniform cost search. So here's an example of uniform cost search running — oh, we don't have internet; there's a video of uniform cost search in action, and if I have time I'll connect and get it working. But let's talk about the high-level idea of uniform cost search. In uniform cost search, we have three sets to keep track of. One is the explored set: the states for which we have found the optimal path. These are the states we are sure about — we've computed the best possible path to get there, and we're done with them, okay? Then we have another set called the frontier. These are states that we have seen and computed some cost of getting to — we know some way to get there and what it would cost — but we're not sure it's the best way of getting there, okay? So you can think of the frontier as known unknowns: I know they exist, but I'm not sure of the optimal way to get there. And finally we have the unexplored states: I haven't even seen them yet, I don't know how to get there at all — more of an unknown unknown. So that's how to think about these three sets. Let's actually work out an example of uniform cost search — I'm going to do this one, showing how uniform cost search runs on this example. As I said, we keep track of three sets: unexplored, frontier, and explored. Okay? All right. Everything starts in unexplored: A, B, C, and D. And what I want to do is go from A to D — I want to find the minimum-cost path from A to D, given this graph, okay? So I take my initial state, A, and put it on my frontier; it costs 0 to get to A, because I'm starting at A, okay? That's on my frontier. In the next step, I pop off the thing with the lowest cost from my frontier. There's one thing on my frontier, so I pop that one thing off and move it to explored: the cost of getting to A is 0. And after popping it off the frontier, I see how I can get from A to any other state. From A, I can get to B — that's one option — with a cost of 1. Where else can I go? I can go to C, with a cost of 100. Okay?
So what I just did is I moved B from unexplored to the frontier — and I know how to get there from A — and I moved C to the frontier, and I know how to get there too. Okay? Now it's the next round. Looking at my frontier, A is not on it anymore; it's in explored. I pop off the thing with the best cost from my frontier. What is that? That's B. So I move B to explored. The best way to get to B I already know — that's from A. Everything is good. Okay? Now that I've popped B off my frontier, I look at B and see what states I can reach from it. From B, I can go to A, but A is already in explored — I already know the best way to get to A — so there's no reason to do that. From B, I can get to C, with a cost of 1 plus whatever the cost of getting to B already is, which is 1 — so 2. So I'm going to erase the 100, because there's a better way of getting to C, and that's through B, okay? And from B, I can get to D. So I move D from unexplored to the frontier; I can get to it from B with a cost of 101, right? Because it's 100 plus the cost of getting to B. Okay? All right. So I'm done exploring everything I can do from B. Going back to my frontier: A and B are not on it; I just have C and D. I pop off the thing with the best cost — that's C — and move it to explored with a cost of 2, where the best way to get there is from B, okay? So we're done with C. Then we see where we can go from C. From C, I can go to A — that's already in the explored set, so I don't touch it. Same with B: already explored, no need to worry about it. From C, I can get to D, right? And if I want to get to D from C, what would the cost be? It would be 2 plus 1. So I can update D's cost to 3 and update the way to get to D to be from C. And then we're almost done: back to the frontier, and the only thing left on it is D. I pop that off and add it to explored, with cost 3. So the way to get from A to D is by taking this route, and it costs 3: A, B, C, and then D. Okay? Is that clear? All right. Okay. So there are two slides left and they're probably going to kick us out soon, so I'll do them next time. One just goes over the pseudocode — take a look at that; the code is online. And there's a small theorem that says this is actually doing the right thing. I'll talk about that next time.
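For reference, here is the walkthrough above as a minimal code sketch — my own illustration using plain heapq rather than the course's priority-queue utility. The edge costs are the ones read off the board in the example (A–B: 1, A–C: 100, B–C: 1, B–D: 100, C–D: 1, usable in both directions).

```python
import heapq

# The example graph from the walkthrough above.
graph = {
    'A': [('B', 1), ('C', 100)],
    'B': [('A', 1), ('C', 1), ('D', 100)],
    'C': [('A', 100), ('B', 1), ('D', 1)],
    'D': [('B', 100), ('C', 1)],
}

def ucs(start, goal):
    frontier = [(0, start)]          # (past cost, state)
    explored = set()                 # states with a known optimal path
    while frontier:
        cost, s = heapq.heappop(frontier)
        if s in explored:
            continue                 # stale duplicate entry; skip it
        explored.add(s)
        if s == goal:
            return cost
        for t, c in graph[s]:
            if t not in explored:
                heapq.heappush(frontier, (cost + c, t))

print(ucs('A', 'D'))  # expected: 3, via A -> B -> C -> D
```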
Okay. So, hi, everyone. Our plan for today is to continue talking about search — we'll finish off some of the stuff we started last time, and then after that switch to some more interesting topics, like learning. A few announcements. The solutions to the old exams are online now, so if you want to start studying for the exam, you can do that; I think looking at some of those problems would be useful. Actually, let me start with the Search 2 lecture, because it has a review of some of the topics we've talked about, so it might be easier to do that. Also, I'm not connected to the network, so we're not going to do the live questions or show the videos — I have a hard time connecting to the network in this room. Okay. All right. So let's continue talking about search. If you remember, we had this city block problem. Let's go back to that problem and do a review of some of the search algorithms we talked about last time. So suppose you want to travel from city 1 to city n only going forward, and then from city n back to city 1 going only backwards, okay? The problem statement is like this: you start in city 1, go forward, and get to city n, maybe passing through some of these cities; after that, you go backwards and get back to city 1, going through some of these cities, okay? That's the goal, and the cost of going from any city i to city j is c_ij, okay? So that's it. The question is: which of the following algorithms could you use to solve this problem? And it could be multiple of them. We have depth-first search, breadth-first search, dynamic programming, and uniform cost search — the algorithms we talked about last time. Maybe just talk to your neighbors for a minute, and then we can vote on each one of these. Yes, question? [inaudible] Okay, let me check that again. Thank you. [BACKGROUND] All right, so let's start talking about this. How about depth-first search? How many people think we can use depth-first search? How many people think we can't? That's a very even split. [LAUGHTER] So, for the people who think we can't use depth-first search, what are some reasons? Just call them out. For depth-first search, the assumption was that the cost is zero. Yes, that's right. Here we're going from city 1 to city n, and each one of these edges has a cost of c_ij, where all I'm assuming is that c_ij is greater than or equal to 0. But if you remember depth-first search, you really wanted the costs to be equal to 0, because — remember that whole tree — the whole point of depth-first search was that I could stop as soon as I found a solution, and we were assuming the costs of all the edges were 0. So we can't really use depth-first search here, because our costs are not 0. So, now that you know that reasoning, how about breadth-first search? Can we use breadth-first search? Yes? Could we think about it as moving from city 1 to city n, and then a separate problem back from city n?
So that's a good point. What you're suggesting is: can we think about the problem as going from city 1 to city n, and then after that introduce a whole new problem that continues from city n and goes back to city 1? Let me get back to that point in a second, because you could potentially think about it that way — that might be an interesting way of looking at it. But irrespective of that, I can't use depth-first search — so far I'm just talking about depth-first search. Irrespective of how I look at the problem, the costs are going to be non-zero, and because of that, I can't use depth-first search. So let's settle that first. Now, how about breadth-first search? Can I use breadth-first search? [inaudible] That's exactly right. We cannot use breadth-first search here, because for breadth-first search, if you remember, you really wanted all the costs to be the same. They didn't need to be 0, but they needed to be the same thing, because then you could just go over the levels. And here I'm not putting any restriction that the c_ij's are all the same. Okay? So now let's talk about dynamic programming. Can we use dynamic programming? All right, so that looks right — we could use dynamic programming here. Everything looks okay; the c_ij's are non-negative; looks fine. Actually, one question: don't we have cycles here? We briefly talked about this already. Don't I have a cycle, since I can go from 1 to n and then from n back to 1? Well, this is the suggestion we've already heard twice. We can actually use dynamic programming here, even though it looks like we have a cycle, and the trick is to draw this out again: going forward, we go all the way to city n, and after that we go backwards — we include the directionality too. All I'm doing is extending the state space so a state is not just the city, but the city plus the direction we're going. So if I'm in city 4 here, it's "city 4 going forward," and if at some point in the future I'm in city 4 again, it's "city 4 going backward." I keep track of both the city and the directionality. When I do that, I break the cycle — there are no cycles in this extended graph, and I can actually use dynamic programming, okay? Does that make sense? And then uniform cost search — that also sounds good, right? You could use uniform cost search: it doesn't matter whether you have cycles, and we have non-negative costs. So we could use uniform cost search. Okay? All right, so that was a quick review of some of the things we talked about last time. Another thing we talked about was this notion of state. We started talking about tree search algorithms, and at some point we switched to dynamic programming and uniform cost search, where we don't have the exponential blow-up. The reason was memoization, plus this notion of state. So, what is a state? A state is a summary of all past actions that is sufficient to choose future actions optimally.
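As a quick illustration of that direction-extended state — again, my own sketch rather than the lecture's code — the successor function might look like this. The zero-cost turnaround at city n is a modeling assumption, as is the cost matrix C.

```python
# Sketch of the (city, direction) state for "go 1 -> n forward,
# then n -> 1 backward." The extended graph has no cycles, so
# dynamic programming applies. End state: (1, 'backward').

def successors(state, n, C):
    city, direction = state
    if direction == 'forward':
        if city == n:
            yield (n, 'backward'), 0          # turn around at city n (assumed free)
        else:
            for j in range(city + 1, n + 1):
                yield (j, 'forward'), C[city][j]
    else:
        for j in range(1, city):
            yield (j, 'backward'), C[city][j]
```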
So we need to be really careful about choosing our state. In the previous question, if you look at all the cities you pass through — you can be in city 1, then 3, then 4, 5, 6, and city 3 again — then in terms of state, the thing you want to keep track of is what city you're in, but in addition you want the directionality, because you need to know where you are and whether you're on the way back. Okay? And we did a couple of examples around that, trying to figure out the specific notion of state for various problems. All right. So we started last time talking about search problems, and we started formalizing them. If you remember our paradigm of modeling, inference, and learning, we started modeling search problems using this formalism where we define a starting state, s_start. Then we have Actions(s), a function over states that returns all possible actions. Then we have the cost function Cost(s, a), which takes a state and an action and tells us the cost of that edge. Then we have the successor function Succ(s, a), which takes a state and an action and tells us where we end up. And we have the IsEnd(s) function, which just checks whether we're in an end state. So these are all the things we need to define a search problem, and we tried that on a couple of examples — the city example, all of that. Okay? Then, after talking about these different ways of specifying search problems, we started talking about various types of inference algorithms. We talked about tree search: depth-first search, breadth-first search, depth-first search with iterative deepening, backtracking search. And then we talked about graph search algorithms: uniform cost search and dynamic programming. Last time we did an example of uniform cost search, but we didn't get to prove its correctness, so I want to switch to last time's slides to go over this quick theorem, and then switch back to this lecture. Okay. So, uniform cost search. If you remember what we were doing, we had three different sets. We had the explored set, the set of states we have visited, where we are sure how to get to them, we know the optimal path, and we know everything about them. We had the frontier, a set of states we have gotten to, but where we're not sure we have the best cost — there might be a better way of getting to them that we don't know yet. And then we have the unexplored set of states — the states we haven't seen yet. So we did this example where we started with all the states in the unexplored set, moved them to the frontier, and from the frontier moved them to the explored set — this was the example we did on the board. Okay? And we saw that even with cycles we can run this algorithm, and we found the best path to be A to B to C to D, with cost 3. So let's actually implement uniform cost search — I think we didn't do this last time.
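For context, a minimal sketch of what that search-problem interface looks like in code, using the walk/tram transportation problem from last time as the concrete instance (the method names follow the startState / isEnd / succAndCost convention mentioned below; the class layout here is my own illustration).

```python
# A minimal search-problem instance: walk s -> s+1 (cost 1),
# tram s -> 2s (cost 2), starting at block 1, ending at block N.
class TransportationProblem:
    def __init__(self, N):
        self.N = N                        # number of blocks

    def startState(self):
        return 1                          # s_start

    def isEnd(self, state):
        return state == self.N            # IsEnd(s)

    def succAndCost(self, state):
        # Returns (action, newState, cost) triples, bundling
        # Actions, Succ, and Cost into one call.
        results = []
        if state + 1 <= self.N:
            results.append(('walk', state + 1, 1))
        if 2 * state <= self.N:
            results.append(('tram', 2 * state, 2))
        return results
```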
So, going back to the algorithms we've been writing for search problems: we've already written dynamic programming and backtracking search, so now we can try to implement uniform cost search. For that, we need a priority queue data structure. This is in the util file — I'm just showing you what functions it has: an update function and a removeMin function. It's just a data structure I'm going to use for my frontier, because I'm popping things off my frontier. All right. So let's go back to uniform cost search. We're going to define this frontier, where we add states from the unexplored set, and it's going to be a priority queue — we have that data structure because we've imported util. We start by adding the start state with a cost of 0 to the frontier; that's the first thing we do. And then, while the frontier is not empty — so, while true — we remove the minimum past-cost element from the frontier: pop off the best thing there and move it to the explored set. Okay. When I pop a state off the frontier, I get its past cost and the state itself. Okay? All right. If we're in an end state, we just return that past cost, along with the history — I'm not tracking the history here for now; I'm just returning the cost. Okay. After popping this state off the frontier, the thing we were doing was adding its children. The way we do that is with the succAndCost function we defined last time: we iterate over the (action, newState, cost) triples from succAndCost and update our frontier by adding these new states to it, with cost plus past cost if that is better — that's what the frontier's update function does. And that's pretty much it; that is uniform cost search. You add things to the frontier, you pop things off the frontier, and that way you move states from the unexplored set to the explored set. So let's try that out. Looks like it's doing the right thing — it got the same value as dynamic programming, so it seems to work. [NOISE] This code is also online, so you can take a look at it later. And here's the pseudocode of uniform cost search as well. Okay? Okay. Is there a question right there? What's the runtime of uniform cost search? [inaudible] That's a good question. The runtime of uniform cost search is order n log n, where the log n comes from the bookkeeping of the priority queue, and you're going over all the edges. So if you think of n here as the edges, then in the worst case, with a fully connected graph, it's technically n squared log n. But in practice we don't have fully connected graphs, so people usually just say n log n, where n is the number of states you have explored. And it's actually not all of the states —
it's just the states that you have explored, okay? And dynamic programming is order n. So technically dynamic programming is slightly better, but it really depends. Yeah — actually, you go first, and then I'll get back to you. Is the only difference between this and Dijkstra's that you just don't have all [inaudible] at the beginning? The question is: what's the difference between this and Dijkstra's algorithm? They're very similar. The only difference is that this is trying to solve a search problem, so you're not exploring all the states — when you get to the solution, you get to the solution and just return it. In Dijkstra's, you're basically exploring all of the states in your graph. What's your question? [inaudible] All right, sounds good. Okay. So I just want to quickly talk about this correctness theorem. For uniform cost search we actually have a correctness theorem, which basically says uniform cost search does the right thing. What the theorem says is: if you have a state s that you're popping off the frontier — moving it from the frontier to explored — then its priority, which is equal to PastCost(s), is actually the minimum cost of getting to the state s. So what this is saying is: let's say this is my explored set, right at its boundary is my frontier, and I have a start state, okay? And I have some state s that I've just decided to pop off the frontier into explored, because it has the best past cost. What the theorem says is that this path I have from s_start to s is the shortest possible path to get to the state s. Okay. The way to prove that is to show that the cost of this path is lower than the cost of any other path from s_start to s. So let's say there is some other path — this green one — that goes from s_start to s some other way. Whatever way it reaches s, it has to leave the explored set at some state, call it t, crossing to some state u, and then from u it eventually gets to s — u and s could even be the same state. The point is: if I have this other path to s, it needs to leave the explored set from some state t. Okay. So what I want to show is that the cost of the green path is greater than the cost of the black path. Okay. All right. So what is the cost of the green path? It's the cost of getting to t, plus the cost of the edge from t to u, plus the cost from u to s. So I can say this cost is at least Priority(t) — because that is the cost of getting to t — plus Cost(t, u), where I'm just dropping the last part, the u-to-s piece; that's fine because costs are non-negative. So the cost of the green path is at least Priority(t) + Cost(t, u). Okay, and what is that equal to? Priority is just a number, right? It's just the number you get off the priority queue, and for an explored state it equals the past cost — so this is PastCost(t) + Cost(t, u). And this value is going to be greater than or equal to Priority(u). Why is that? Because if u is on my frontier, I've visited u, so I already have some priority value for u.
And the value I've assigned as the priority of u is either equal to this PastCost(t) + Cost(t, u) — because I've seen that edge from my explored set — or it's something better that I found some other way. Right? So Priority(u) is going to be less than or equal to PastCost(t) + Cost(t, u). Okay. And what do I know about Priority(u) versus Priority(s)? Well, I know Priority(u) is greater than or equal to Priority(s). Why? Because I already know I'm popping off s next, not u — I pop off the thing with the least priority value, and that's s — and Priority(s) is equal to the cost of the black path. Okay. All right. So that was a quick proof of why uniform cost search always returns the minimum-cost path. [NOISE] All right, let's go back to the slides. So, a quick comparison between dynamic programming and uniform cost search. We talked about dynamic programming: it doesn't allow cycles, but in terms of action costs it can handle anything — you can have negative costs, you can have positive costs — and its complexity is order n. Uniform cost search can handle cycles, which is cool, but the problem is the costs need to be non-negative, and it's order n log n. And if you end up in a situation where you have cycles and your costs are actually negative, there is another algorithm called Bellman-Ford that we're not covering in this class, but it addresses that case. Okay. All right, how am I doing on time? Okay. So that was the idea of inference. By now we have a good set of ways of doing inference for search problems once you've formalized them. And the plan for this lecture is to think about learning: how are we going to do learning when we have a search problem that's not fully specified — when there are things in the search problem, like the costs, that are not specified and we want to learn what they are? Okay. So that's going to be the first part of the lecture, and towards the end we'll talk about a few other algorithms that make things faster — smarter ways of speeding things up. We're going to talk about A* and some relaxation-type strategies, okay? All right. So let's go back to our transportation problem. This was the problem where we have a start state and we can either walk, which takes us from state s to state s+1 at a cost of 1, or take a magic tram, which takes us from state s to state 2s at a cost of 2, and we want to get to state n. We can formalize that as a search problem — we saw this last time — and find the best path to get from state 1 to some state n, like walk, walk, tram, tram, tram, walk, tram, tram; that's one potential optimal path one could get, okay? But the thing is, the world is not perfect — modeling is actually really hard. It's not like we always have this nice model with everything specified.
And we could end up in scenarios where we have a search problem and we don't actually know what the costs of our actions are. We don't know what the cost of walking is, or what the cost of the tram is. But maybe we actually have access to the optimal path: maybe I know the optimal path is walk, walk, tram, tram, tram, walk, tram, tram, but I don't know the costs. So the point of learning is to learn what those cost values are from the optimal path that we have. I want to actually learn that the cost of walking is 1 and the cost of the tram is 2. And this is actually a common problem in machine learning in general. For example, you might have data on how a person does something — say, how a person grasps an object. I have no idea what cost the person was optimizing when grasping the object, but I have the trajectory: I know the path they took when they picked it up. So if I have access to that path, I can actually learn the cost function they were optimizing, and then maybe put that cost function on a robot that does the same thing. Question? [inaudible] like five or something? That's a good question. The question is: is it possible to have multiple solutions here? Yes — we're going to see later what sorts of solutions we get, and there can be cases where we have multiple solutions. The ratio is the thing that matters: if the truth is walk is 1 and tram is 4, and you recover 2 and 8, you get the same sort of behavior. It also depends on what sort of data you have — whether your data even allows you to recover the true solution. We're going to talk about all these cases, okay? All right. Okay. So if you think about it, the search problem we were trying to solve — that was the inference problem: you're given a search formulation and a cost, and the goal is to find the optimal sequence of actions, the shortest or best path in some sense. That's a forward problem: search is this forward problem where you're given a cost and you want to find a sequence of actions, okay? It's interesting, because learning in some sense is an inverse problem — the inverse of search. The inverse of search is: if you give me that best sequence of actions, can you figure out what the cost is? So you can think of learning as this inverse problem of search, and that's what we're going to address. I'm going to go over one example to talk about learning, and I'm actually going to use the notation from the machine learning lectures we had at the beginning — basically last week. So, let's say we have — maybe I can draw this. [NOISE] Yeah, I'll just draw the scheme. Let's say we have a search problem without costs, and that's our input. So we're framing this problem of learning as a prediction problem. And if you remember prediction problems, we had an input — our input was x, okay?
And in this case, our input x is a search problem without costs, okay? That is my input. And then we have outputs, and in this case my output y is this optimal sequence of actions — the solution path, okay? And what I want to do — if you remember machine learning, the idea was to find a predictor, a function f, that takes an input x and returns the solution path, and generalizes to other settings. That was the idea we explored in machine learning, and we want to do the same thing here. So let's start with an example — I'm going to draw it here. Let's say we're in city 1, and maybe we walk to city 2. And from there, maybe I have two options: I can keep walking through city 3 to get to city 4 — so I do walk, walk, walk — or maybe I can take the tram from city 2 and end up in city 4, okay? And the thing is, I don't actually know the costs of these actions: I don't know what the cost of walk is, or what the cost of tram is. Okay? But one thing I do know is that my solution path, my y, is equal to walk, walk, walk. So, one way to go about this is to start with some initialization of these costs. The way we're defining these costs — I'm going to write it up here — is with w, because I want to use the same notation as the learning lectures. So w is the weight of each one of my actions. I have two actions — I can either walk or take the tram — so w(action 1) is w(walk), and w(action 2) is w(tram). I'm defining these weights just as a function of the action. Technically this could be a function of state and action, but right now I'm simplifying it and saying the cost just depends on my action — it doesn't depend on what state I'm in. You could imagine settings where it also depends on what city you're in, okay? So, under that scenario, what is the cost of y? It's w(walk) + w(walk) + w(walk). Okay? So what I'm suggesting is: let's just start with something. Let's just start with these weights. I'm going to say walking costs 3, and it always costs 3 — again, the reason it always costs 3 is that my weights only depend on the action, not the state. And I'm going to say, why not, the tram costs 2. Okay? This doesn't look right, but let's just start with it, okay? So now what I want is to be able to update these weights — update these values — in a way that recovers the optimal path that I have, this walk, walk, walk. Okay? So how can I do that? I started with this random initialization of the weights. Now that I've done that, I can try to figure out what the optimal path is under these weights. So what is my prediction, y'?
My prediction, based on the weights I've just set up, is the path that's optimal under them. Well, what is that? It's walk, tram — because that costs 5, while walk, walk, walk costs 9. So with these random weights I've just come up with, I'm going to pick walk, tram, and that is my prediction. Okay? So now what we want to do is update our w's, given that our true label is walk, walk, walk and our prediction is walk, tram. Okay? And the algorithm that does this does just about the simplest thing possible. First, it looks at the true path. So the weights started at 3 for walk and 2 for tram, and I'm going to update them. I'm going to look at every action in the true path, and for every action there, I'm going to down-weight it. Why? Because this is the true thing — I don't want to penalize it; I want the weight of the true actions to be small. So I see walk: its weight was 3; I down-weight it by 1, making it 2. I see walk again: down by 1, making it 1. I see walk again: subtract 1 again, and I end up at 0. Okay? Now I go over my prediction, and for every action I see there, I bring its weight up by 1. I see walk here, so I bring it up by 1 — so those were subtract, subtract, subtract, then add 1 because walk is also in my y'. Then I see tram, so I bring tram up by 1, and that ends at 3. So my new weights are: the weight of walk became 1, and the weight of tram became 3. Okay? And now I can repeat this and see whether it gets me the optimal solution. Let me run my search algorithm again: this path costs 3, and that path costs 4. So my new prediction is walk, walk, walk — the same as the truth — so my weights aren't going to change: I've converged. Yes? Is it always one? So I'm talking about a very simplified version of this, but yes, it is always one here. The simplified version is the one where the w's just depend on actions. If you make the weights depend on states and actions, there is a more general form — this is called the structured perceptron algorithm, and we'll briefly talk about the state-action version — but for this case we just depend on the action. You literally bring it up by one, or by whatever amount — whatever you add on one side, you subtract on the other. So it's plus and minus a, for whatever a is. There's a question. Why do we do the plus 1 after we do all the minus 1s? So, why am I doing the minus 1s? I'll get to that. When I look at y — this is the thing I really wanted — and I see walk, I realize that walking was a good thing, so I need to bring down the weight of that.
But if the weights I already had "knew" that walking is pretty good — my prediction also said walk up there — then I should cancel that out. That's why we do the plus 1: at that step, walk was in my prediction too, so if I'm subtracting it as a true action, I should add it back as a predicted action, and they cancel. But down here, the prediction didn't know walking was good, so the net effect is to bring down the weight of walk and bring up the weight of tram. [inaudible] Yeah — so I mistakenly thought the tram was the way to go. To avoid that next time around, I make the cost of tram higher, so I don't take that route anymore. And there's a question there. So the only reason we add one in y' is because we know y' is different from y. But what if we have a long sequence and y' differs from y in only one small location — would that change the weights sufficiently? So you're asking: if my y and y' are mostly the same — walk, walk, walk, or something — and only the very last action differs, we're just adding and subtracting one there, right? So it does address that: everything shared cancels, only the difference moves the weights, and you can keep running it until the sequences are exactly the same, so you have no mistakes. Yeah, there's a question back there. Does it matter if our new costs become negative? It depends on what sort of search algorithm you're using. At the end of the day, it's fine if you're using dynamic programming: I can have a negative cost here and just call dynamic programming with it, and that's fine. Yeah, it's fine if the cost becomes negative. There's a question. In this problem we want to find the true costs for walk and tram, but we ended up converging to something else. Isn't that a problem? Sorry, say that again? The end result this algorithm got was 1 for walk and 3 for tram, and the real values in the previous example were 1 and 2. 1 and 2 — right, yes. So the question is: we got 1 and 3 here — is this actually right? When we defined this tram problem, we said walking costs 1 and the tram costs 2, but we never recovered that. Well, the reason we never got that is that the solution we get is based only on our training data. If my training data is just walk, walk, walk, this is the best I can get, and I converge to a solution where the prediction and the truth end up being equal — I make no mistakes on this data. If I have more data points, I'd run this longer on other training examples, and I might converge to something different. Is there any rule for initializing the weights? I'm assuming the further we are from the actual truth, the longer it takes to converge. Okay, so the question is how we initialize. In the natural version of the algorithm, you just initialize with 0 — we're initializing everything to 0.
It's actually not that bad, because you just have this sequence, and in the more general case you compute a feature vector over the whole path and do one single subtraction — so it's not that costly to do this. Yeah? If we know the path for a given cost — if we have that prior input — can we incorporate it into the algorithm? So you're asking: if we have some prior knowledge about the cost, can we incorporate it? Yeah, that's interesting. In this current format — if you have some prior, then your prediction is going to be better, right? So maybe you get a better prediction, and then you don't update as much; maybe you can incorporate it into the search problem. But again, this is the simplest form of this algorithm — it depends only on the action; it's not doing anything fancy. Are we worried about overfitting at all? [BACKGROUND] Yeah — it can overfit. I'll show some examples of this: we're going to code this up, and then we'll see overfitting-type situations, so I'll get back to that. All right. So let's move on. Okay — what's on the slides here is what I've already talked about: we start with 3 for walk and 2 for tram, and the idea is how to change the costs so we get the solution we're hoping for. And, as I was saying, we assume the costs only depend on the action — Cost(s, a) is just w(a) — while in the most general form it can depend on the state too. Okay. So if you take any candidate output path, what is its cost? It's just the sum of the w values over all the edges: w(a_1) + w(a_2) + w(a_3). And as you've seen in this example, the cost of a path is just w(walk) + w(walk) + w(walk), or w(walk) + w(tram). That's all this slide is saying — that's how we compute the cost of a path. All right, so now let's look at this algorithm running in practice. Let me go over the pseudocode first. You start by initializing w to 0. Then you iterate for some number of rounds T over a training set of examples — it might not be just one; I showed one example where the only training data was that walk, walk, walk is good, but you can imagine having multiple training examples for a search problem. For each example, you compute your prediction y' given the current w — starting from w = 0 — and then you do the plus-and-minus update: for each action in the true y, subtract 1, to decrease the cost of the true path; and for each action in your prediction, add 1, to increase the cost of the predicted path. Okay. All right. So let's look at implementing this, and let's try some examples.
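Before the live walkthrough below, here is the pseudocode just described as a compact sketch — my own illustration. The assumptions are mine: `examples` is a list of (instance, true_actions) pairs, and `predict(instance, w)` runs search (e.g. dynamic programming) under weights w and returns the minimum-cost action sequence.

```python
# A compact sketch of the structured perceptron update described above.
def structured_perceptron(examples, predict, actions=('walk', 'tram'), T=100):
    w = {a: 0 for a in actions}              # initialize all weights to 0
    for t in range(T):
        mistakes = 0
        for instance, true_actions in examples:
            pred_actions = predict(instance, w)
            if pred_actions == true_actions:
                continue                     # shared actions would cancel anyway
            mistakes += 1
            for a in true_actions:
                w[a] -= 1                    # decrease cost of the true path
            for a in pred_actions:
                w[a] += 1                    # increase cost of the predicted path
        print(t, mistakes, w)
        if mistakes == 0:
            break                            # no errors left on the training data
    return w
```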
So let's go back to the tram problem. This is again the same tram problem; we just want to use the same format. I actually went back and wrote up the history here — if you remember, last time I said I wasn't returning the history; now each of these algorithms has a way of returning its history, which we need because we're going to call dynamic programming and use the history. All right. So back to our transportation problem. We had costs of 1 and 2 for walking and the tram, but now we want to parameterize them: we pass weights into the transportation problem, so in addition to the number of blocks, I give it the weight of each action. All right — walking has a weight, and [NOISE] the tram has a weight. So now I've updated my transportation problem to take general weight values. Next, we want to generate some training examples. That's what I want to do: generate training examples that we can call on to get these true labels. Let's assume the true weights for our training examples are just 1 and 2 — that's what we really want to recover, okay? And we'll write this prediction function that we can call later to get different values of y. The predict function takes the number of blocks — so it gets n — and it outputs the path we want, these y values. Okay? The whole point of predict is to be the f(x) function: we define our transportation problem with n and the weights, and we solve it by calling dynamic programming. Someone asked earlier whether the costs could be negative — well, yes, because now I'm calling dynamic programming, and if the problem has negative costs, that's fine too. The history gives me (action, newState, cost) triples, but the thing I actually want to return from predict is a sequence of actions, so I just pull the actions out of the history I get from dynamic programming. So: call dynamic programming on my problem, get the history, extract the sequence of actions — that's my predict function, and I can call it later. So let's go back to generating examples. [NOISE] I'll let n go from 1 to 10 — 1 block to 10 blocks — and call the predict function with the true weights to get the true y values. These are my true labels, okay? And my examples are just the result of calling generate-examples. So let's print out our examples and see how they look — we haven't done anything yet in terms of the algorithm; we're just creating training examples by calling predict on the true weights. I have a typo here [LAUGHTER] — generate-examples needs parentheses — fixed the typo. Okay, so that looks right, right? Those are my training examples for 1 through 9 — the path you'd want for each one if you have these two weights, 1 and 2. Okay. So now I have my examples, and I'm ready to write this structured perceptron algorithm.
It takes the training examples, which are these paths, and then we iterate for some range, going over all the examples — our true y values — and updating our weights based on them and on our predictions. So let's initialize the weights to 0 — for walking and the tram, they're just 0. And pred_actions is what we get by calling predict with the current weights: so pred_actions is y', and true_actions is y, like on the slides. Okay — and I want to count the number of mistakes I'm making, too. If the two are not equal, I keep a counter of the number of mistakes; if they become equal everywhere, the number of mistakes is zero, and I'll break — then I'm happy. Okay. So I make a prediction, and after that I update the weight values. How do I update? Basically: subtract 1 for actions in true_actions — the labels I've created from my training examples — and add 1 for actions in pred_actions, the prediction under the current weight values. And that's pretty much it — that is the structured perceptron. Okay. So let's print things nicely, so we can see the iteration, the number of mistakes we have, and the current weight values, and break whenever we have no mistakes. So if the number of mistakes is 0, I'll just break. [NOISE] Okay, that sounds good. I'm going to run this — it's not going to do anything, because I didn't call it. So I'll go back and actually call it. I have another typo here — I don't know if you can guess where. This is going to give an error. [LAUGHTER] Well, I called it weights, not weight. [LAUGHTER] So I'll go and fix that. Okay, this should run. Okay — and this is what we get. So let's actually look at this. In the first iteration, the number of mistakes was 6, and we actually ended up at (1, 2) after that first iteration. Then in the second iteration, the number of mistakes became 0, and we got (1, 2) — which is exactly the weights we were hoping for. Okay? So that looks okay to me on my training data; everything looks fine. There's a question, actually. The weights are more like integers — is that right? Yeah, in this case the weights are integers, and we're just adding them. Given our update rule — we're assuming the number of walks and the number of trams are different. What if the tram was in a different location, but the number of walks is the same? Would it still— So I see what you're asking. No — it should figure that out. We can go over an example after class, and I'll show you how it actually does it. All right. Okay. So let's try true weights 1 and 3. With 1 and 3, it takes a little bit longer, but it does recover them. And 1 and 4 is actually the interesting one, because it does recover something: it recovers (2, 8). It doesn't recover (1, 4), but given my data, (2, 8) is fine — there's no reason for me to get exactly (1, 4).
The ratio between them is the thing I actually care about, so even if I get (2, 8), that's a reasonable set of weights to recover. I'm going to try a couple more things. Let's try 1 and 5. I run 1 and 5, and this is what I get: the weight of walk becomes minus 1, and the weight of tram becomes 1. And my mistakes are 0. So why is this happening? Yeah? Your training data is all walking, so it's learning to just walk. Yeah, that's right. What's happening here is that if you look at my training data up there, it's all walks. It has never seen the tram, so it has no idea what the cost of the tram is relative to the cost of walking, and it's not going to learn that. So we're going to fix that — one way to fix it is to go change the training data and actually get more data. So we can do that. One thing to remember: this is just going to fit your training data, whatever it is. So, when we fix the data, walk becomes 2 and tram becomes 9 — which is not 1 and 5, but it's getting there; the ratio is better, and the number of mistakes is still 0. So it really depends on what you're looking for. If you're trying to match your data, your number of mistakes is 0, and you're happy with this, you can just go with it, even though it hasn't recovered the exact values. Or maybe you're looking for the exact ratios, and you should run it longer, with more iterations. Questions? Is the structured perceptron susceptible to getting stuck in local optima, like k-means — so maybe all we need is different initializations? Sorry — can you repeat that? Does the structured perceptron have a risk of getting stuck in a local optimum, like k-means, so we'd need different initializations? That's a good question — let me think about that. Do you see this in NLP? Do you know if this gets stuck in local optima? I haven't experienced it personally, but I feel like there are reasons it could. Let me think about this and get back to you — even in the more general form, it's commonly used for matching, like words and sentences, and I haven't experienced it either, but I can look into that. Question? I was going to ask: are you feeding it all of the optimal paths currently? Yes. But if we do feed it all the optimal paths, then technically it should converge, right? Because you're just matching paths. So — yes. In terms of bringing down the number of mistakes, it should always match the data. But if there are some true weights you're looking for and they're not represented in your dataset, it's not necessarily going to learn them. So in those settings, you could find a local optimum. A related version of this is reward learning, where you actually have a true reward you want to find — in those settings, you can totally fall into local optima, because you want to find what your reward function is.
But you're right: if you're just matching the data, that's a different story. A follow-up: even if the reward function is off by a scaling, you still get the optimal policies, so isn't scaling a different problem? Right, that's reward shaping: there can be different versions of the reward function, and recovering any one of them is fine. But you might still get into local optima that aren't explained by reward shaping. We can talk about these things offline; let me move on, because we have more to cover.

I was actually going to skip these slides, but here is the more general form, briefly. Remember I was saying this w is a function of the action a. More generally, your cost function is not just w(a); it is w times a set of features, so the cost of an edge is w dot features(edge), and the cost of a path is w dot features(path), which is just the sum of the features over the edges. Go over the slides later, but the quick version of the update in this more general form is: update w by subtracting the features over your true path and adding the features over your predicted path. This more general form is called Collins' algorithm. Mike Collins was working on this in natural language processing; he was interested in it in the setting of part-of-speech tagging. You have a sentence, and you want to tag each word as a noun, a verb, a determiner, or a noun again. He looked at this as a search problem and used the same kind of algorithm to match each part-of-speech tag to the sentence: he has some scores, and based on the scores and his dataset he moves the scores up and down, which is the same idea. You can use the same idea again in machine translation. If you have heard of beam search: you can keep a bunch of candidate translations of some phrase and up-weight or down-weight them based on your training data. Okay? All right.

So now let's move to A-star. A-star search, not AI-star. [LAUGHTER] We've talked about this idea of learning costs: search problems in general, doing inference, and then doing learning on top of them. Now I want to talk a little about making things faster, using smarter ideas and smarter heuristics. There's a question first: what is the loss function in this structure? This is a prediction problem: we're trying to figure out the w's so that the predicted path y prime matches y as closely as possible. The way we're solving it is not set up as an explicit optimization, the way we've solved other types of learning problems.
The way we are solving it is by just tweaking these weights to try to match my prediction as closely as possible to y. Okay? All right, so let's talk about A-star. I don't have internet, so I can't show these demos, but the links should work when you go to the file. Here is the idea. Go back to uniform cost search: we want to get from a start point to some solution, but uniform cost search uniformly explores the states around us until we reach a final state. The idea of A-star is to do uniform cost search, but a little smarter: move toward the direction of the goal state. If my goal state is over in that corner, maybe I can move in that direction in a smarter way.

Here is a pictorial example. I start from s_start, and with uniform cost search I uniformly explore all possible states until I hit s_end. Then I'm happy, I'm done, I've solved my search problem, everything is good. But I've done all this wasted effort off to the side, which is not that great. So uniform cost search has this problem of exploring a bunch of states for no good reason, and what we want to do is take into account that we're just going from s_start to s_end, so we don't really need all of that. We can try to head straight for the end state.

Going back to how these search problems work: the idea is to start from s_start, pass through some state s, and reach s_end. What uniform cost search does is order the states by PastCost(s) and explore everything around it in that order until it reaches s_end. But when you're at a state s, there is also a quantity called FutureCost(s), and ideally, when I'm at s, I don't want to explore things off to the side; I want to move in the direction that reduces my future cost and gets me to my end state. The cost of getting from s_start to s_end through s is really just PastCost(s) plus FutureCost(s), and if I knew FutureCost(s), I would just move in that direction. But if I knew FutureCost(s), the problem would already be solved; that is the answer to my search problem, and I'm still trying to solve it. So in reality, I don't have access to the future cost. What I can potentially have access to is something else, which I'm going to call h(s): an estimate of the future cost. This function h is called a heuristic. If I have access to a heuristic, maybe I can update my cost to be the past cost plus this heuristic, and that helps me be a little smarter when running my algorithm. So ideally, I would explore in the order of past cost plus future cost; but I don't have the future cost, and if I did, I would already have the answer to my search problem.
Instead, what A-star does is explore in the order of PastCost(s) plus h(s). Remember, uniform cost search explores just in the order of past cost; in uniform cost search there is no h(s). And h(s) is a heuristic, an estimate of the future cost.

So what does A-star do? Something really simple: A-star just does uniform cost search with a new cost. Before, I had the blue cost Cost(s, a); now I update it to Cost'(s, a), which is the old cost plus the heuristic at the successor minus the heuristic at the current state:

Cost'(s, a) = Cost(s, a) + h(Succ(s, a)) - h(s),

and I just run uniform cost search on this new cost. Why do I want this? Say I'm at some state s, there is some other state Succ(s, a) that I reach by taking action a, and there is some s_end I'm really trying to get to. Remember, h is my estimate of the future cost. What this says is: my estimated future cost from the successor to s_end, minus my estimated future cost from s to s_end, is the thing I add to the cost function. I penalize that difference. What this really enforces is movement in the direction of s_end, because if I end up in a state that is not toward s_end, the added term penalizes it. It's saying: it's really bad that you're taking that action, I'm going to put more cost on it so you never go that way; you should go in the direction of s_end. All of this depends on what your h function is, how good it is, and how you design your heuristics, but that's the idea behind it.

Here's an example. We have states A, B, C, D, and E in a chain, with a cost of 1 on every edge, and what we want to do is go from C to E; that's our plan. If I run uniform cost search: I'm at C, I explore B and D, because they have a cost of 1, and after that I explore A and E, and finally I get to E. But why did I spend all that time looking at A and B? I shouldn't have, since they're not in the direction of the goal. Instead, suppose someone comes in and says: I have this heuristic function, you can evaluate it on your states, and it gives 4, 3, 2, 1, and 0 for A, B, C, D, and E. Then I can update my cost and maybe have a better way of getting to s_end. This heuristic is in fact perfect, because it is exactly equal to the future cost; the point of a heuristic is to get as close as possible to the future cost, and this one matches it exactly. With this heuristic, my new cost is going to change. How does it change?
It becomes the old edge cost, which was 1, plus the heuristic difference. Take, for example, the cost of going from C to B: it's the old cost, which was 1, plus the heuristic at B, which is 3, minus the heuristic at C, which is 2. That gives 1 + 3 - 2 = 2. Similarly, you can compute all the new cost values, the purple ones: a cost of 2 for going in the direction away from the goal, and a cost of 0 for going toward E. And if I just run uniform cost search again here, I can get to E much more easily, okay?

Yes, a question: does A-star result in greedy behavior, where it commits to one direction? No, and we're going to talk about that in a moment. It depends on the heuristic you choose: with a suitable heuristic, A-star actually returns the optimal value, and it does exactly the same thing as uniform cost search, just more efficiently. Another question: why is the cost of C to B equal to 1 here? Hold on. [LAUGHTER] My ears are really bad, so speak up. The cost of C to B; oh, I see what you're saying: that's what we started with. In the graph I started with, the blue costs are all 1, but now I'm saying those costs lead me astray, so I update them based on this heuristic to get to the goal as fast as possible. One more: at the end, you return the actual cost, not counting the heuristic, since the heuristic can be wrong? That's right. You do want to return the actual cost. You run your algorithm with the heuristic folded in, because it lets you explore fewer states and be more efficient, but the cost you report is the original one. Okay, I've got to move on.

All right, so a good question to ask is: what is this heuristic allowed to look like? Does any heuristic work well? It turns out that not every heuristic works. Here's an example. Again, the blue numbers are the costs I'm given, the things I can run my search algorithm with, and the red numbers are heuristic values someone handed me (in general, we would want to design them). I compute the new cost values, and they look like this. The question is: is this heuristic good? We don't have time, so I'll just answer: it's not going to work. [LAUGHTER] The reason is that we just got a negative edge there. At the end of the day I'm running uniform cost search, since A-star is just uniform cost search, and I can't have negative edges. So that was just not a good heuristic to have here.
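To make the modified-cost computation concrete, here is a sketch of the chain example; the dictionaries are my own encoding of the slide, not code from the lecture.

```python
# Chain A - B - C - D - E, every edge cost 1; goal is to go from C to E.
cost = {('C', 'B'): 1, ('B', 'A'): 1, ('C', 'D'): 1, ('D', 'E'): 1}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}   # perfect heuristic: h = FutureCost

def modified_costs(cost, h):
    """Cost'(s, s') = Cost(s, s') + h(s') - h(s), the A-star cost transform."""
    return {(s, s2): c + h[s2] - h[s] for (s, s2), c in cost.items()}

new = modified_costs(cost, h)
print(new)  # {('C','B'): 2, ('B','A'): 2, ('C','D'): 0, ('D','E'): 0}
# Edges toward E become free and edges away from E get more expensive,
# so uniform cost search on the new costs heads straight for E.

# A consistent heuristic must keep every modified edge non-negative;
# a bad heuristic (like the second slide example) would fail this check.
assert all(c >= 0 for c in new.values())
```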
So heuristics need to have specific properties, and you should think about what those properties are. One property we want heuristics to have is consistency; this is really the most important property. A consistent heuristic h satisfies two conditions. The first condition is a triangle inequality, and what it means is that your updated cost should be non-negative: Cost'(s, a) >= 0, that is,

Cost(s, a) + h(s') - h(s) >= 0, where s' = Succ(s, a).

That's the first condition. The second condition comes from the fact that FutureCost(s_end) = 0: the future cost of the end state should be 0, so the heuristic at the end state should also be 0, h(s_end) = 0. These are the properties we want when we talk about consistent heuristics.

And they are natural things to want. The first is basically saying that the costs you end up with should be greater than or equal to 0, so you can run uniform cost search on them; but it really is a triangle inequality. h(s) is an estimate of the future cost, so if from s you take an action with cost Cost(s, a) and add the estimate h(Succ(s, a)) from the successor, the total should be at least h(s), the direct estimate from s. That's all it's saying. The last one also makes sense: I do want FutureCost(s_end) to be 0, so the heuristic at s_end should be 0 as well, because again the heuristic is just an estimate of the future cost.

All right. So what do we know about A-star beyond that? One thing we know is that if h is consistent, then A-star is correct; there is a theorem that says exactly that. And we can see why through an example. Say I start at s_0, take a_1 and end up at s_1, take a_2 and end up at s_2, then take a_3 and end up at s_3; so I have a path that looks like that. Okay. Now look at the new cost of each edge. Cost'(s_0, a_1) is my updated cost: the old cost Cost(s_0, a_1), plus the heuristic value at s_1, minus the heuristic value at s_0. I'm going to write out all the edge costs to figure out the cost of the path, which is just the sum of these. Cost'(s_1, a_2) = Cost(s_1, a_2) + h(s_2) - h(s_1); that is the new cost of that edge. And the new cost of the last edge is Cost'(s_2, a_3) = Cost(s_2, a_3) + h(s_3) - h(s_2). Okay.
I've just written out all these edge costs, and the cost of a path is these costs added up. So what happens when I add them? A bunch of terms cancel: the +h(s_1) cancels the -h(s_1), the +h(s_2) cancels the -h(s_2), and so on; the sum telescopes. What I end up with is

sum_i Cost'(s_{i-1}, a_i) = sum_i Cost(s_{i-1}, a_i) + h(s_end) - h(s_0).

Now, I said my heuristic is consistent, and one property of a consistent heuristic is that the heuristic value at s_end is 0, so that term drops out. So if I look at a path under the new costs, the sum of the new costs is just the sum of the old costs minus a constant, and that constant is the heuristic value at s_0.

Why is this important? When we talk about correctness: remember, we proved at the beginning of this lecture that uniform cost search is correct, so the cost it returns is optimal. A-star is just uniform cost search running on the new cost, but the new cost of every path is the old cost minus the same constant. So if I'm optimizing the new cost, it's the same thing as optimizing the old cost, and A-star returns the optimal solution. That's basically what's on the slide; I just derived it. So that's one property: A-star is correct, but only if the heuristic is consistent, because consistency is what gives us h(s_end) = 0 and what makes the new edge costs non-negative so that uniform cost search can run on them.

The next property is that A-star is actually more efficient than uniform cost search, and we've kind of already seen this: the whole point of A-star is to not explore everything, but to explore in a directed manner. Remember how uniform cost search explores: it expands states in order of past cost, and it explores all states whose past cost is less than the past cost of s_end. A-star explores fewer states: it explores states with PastCost(s) less than PastCost(s_end) minus h(s). Look at the right-hand side: for uniform cost search it was just PastCost(s_end); now it is PastCost(s_end) - h(s), so it became smaller. And why did it become smaller? Because now I'm doing this more directed search; I'm not searching everything uniformly around me. That's the whole point of the heuristic, and it makes A-star more efficient. The interpretation is that a larger h is better.
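Continuing the chain-example sketch from before (it reuses `cost`, `h`, and `modified_costs`), here is a quick numerical check of the telescoping argument: the modified cost of any path is the original cost shifted down by the constant h(s_0), so both cost functions rank paths identically.

```python
path = [('C', 'D'), ('D', 'E')]                       # the path C -> D -> E
old = sum(cost[e] for e in path)                      # 1 + 1 = 2
new = sum(modified_costs(cost, h)[e] for e in path)   # 0 + 0 = 0
assert new == old - h['C']                            # constant shift by h(s_0) = 2
```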
If my heuristic is as large as possible, that's better, because then I'm exploring a smaller region to get to the solution. The proof of this is about two lines, so I'll skip it. Let me show how this looks. If I'm trying to get from s_start to s_end with uniform cost search, I explore uniformly, all the states around me, and that is equivalent to taking the heuristic to be 0: uniform cost search is just A-star with h = 0. So what is the point of the heuristic? To estimate the future cost. If I knew the future cost exactly, then h(s) would equal FutureCost(s), and that would be awesome: I would only need to explore that narrow green region, just the nodes on the minimum-cost path, nothing extra. That's the most efficient thing one could possibly do. In practice I don't have access to the future cost; if I did, the problem would be solved. I have access to some heuristic that estimates it, so I land somewhere in between: not as bad as uniform cost search, getting closer to the future-cost ideal. So A-star is going to be more efficient than uniform cost search in that sense. Basically, the whole idea of A-star is that it distorts the edge costs to favor the end states. So let me add to the list: A-star is efficient too. That's the other thing we have about A-star.

All right, these are all nice properties. One more property of heuristics, and then we can talk about relaxation. There is another property called admissibility, which is something we've implicitly been talking about already: the heuristic should get close to the future cost from below. An admissible heuristic is one where h(s) <= FutureCost(s). And the cool thing is: if you already have consistency, then you have admissibility too. The proofs of these are again short; this one is more than one line [LAUGHTER], but it's actually quite easy and it's in the notes. You can use induction to prove that if you have consistency, then you have admissibility too.

Okay, so we've talked about how A-star is an efficient thing. We've talked about consistent heuristics, which are going to be useful and which give us admissibility and correctness. But we have not actually talked about how to come up with heuristics. So let's spend the next couple of minutes on that. The main idea here is just to relax the problem. Relaxation.
The way we come up with heuristics is: take the problem, make it easier, and solve that easier problem. That is the whole idea. Remember, h(s) is supposed to be close to FutureCost(s), and some of these problems are difficult precisely because they have a lot of constraints. If you relax the problem and just remove the constraints, you're solving a much easier problem, and its solution can be used as the heuristic value that estimates the future cost. When we remove constraints, nice things happen: sometimes we get closed-form solutions, sometimes we just get easier search problems we can solve, and sometimes the problem splits into independent subproblems; any of these gives us a good heuristic. So that is my goal: to find these heuristics. Let me go through a couple of examples.

Say I have a search problem where I want the triangle to get to the circle, and there are all these walls in the way; that just seems really difficult. What is a good heuristic here? Relax the problem: knock down all the walls. That problem seems much easier. In fact, I now have a closed-form solution for getting the triangle to the circle: I can just compute the Manhattan distance and use that as a heuristic. It's not actually equal to the future cost, but it is an approximation. You can usually think of heuristics as optimistic views of the future cost, an optimistic version of the problem: what if there were no walls? If there are no walls, how would I get from one location to another? The solution to that gives you the estimate of the future cost, which is h(s). Okay?

Or take the tram problem, but a more difficult version with a constraint: you can't take more tram actions than walk actions. Now this is my search problem and I need to solve it, and it seems difficult. We talked last time about how to come up with the state, and even that seemed hard: I need the location plus the difference between the number of walks and trams, so I have on the order of n squared states. Instead of doing that, let me just remove the constraint, relax it. After relaxing, I have a much easier search problem to deal with: I only have the location, and everything is great. Okay?

So the idea here is: if I remove these constraints, I get these easier search problems, these relaxations, and I can compute the future cost of a relaxation using my favorite techniques, like dynamic programming or uniform cost search. But one thing to notice: I need to compute that for all states 1 through n, because the heuristic is a function of the state.
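Here is what the knocked-down-walls heuristic looks like in code; the grid coordinates are hypothetical, chosen only for illustration.

```python
def manhattan_heuristic(state, goal):
    """Closed-form FutureCost estimate for the relaxed (no-walls) grid problem."""
    (r1, c1), (r2, c2) = state, goal
    return abs(r1 - r2) + abs(c1 - c2)

# e.g., triangle at (0, 0), circle at (3, 4):
print(manhattan_heuristic((0, 0), (3, 4)))  # 7, an optimistic estimate of FutureCost
```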
So I actually need to compute the future cost of this relaxed problem for all states from 1 through n, and that gives me the heuristic everywhere. There are some engineering details here. For example, we're looking for future costs, and if you plan to use uniform cost search for whatever reason (maybe dynamic programming doesn't work in your setting), you need a small trick to make it work, because uniform cost search naturally computes past costs, not future costs. You need to create a reversed problem in which you can actually compute future costs. Beyond a few engineering details like that, it is basically just running the search algorithms we know on these relaxed problems. That gives us heuristic values, we put them into our original problem, and we go solve it. Okay?

Another cool thing heuristics give us is this idea of independent subproblems. Here's another example: I want to solve the eight-puzzle, moving tiles here and there to come up with a new configuration. That seems hard again. A relaxation is to just assume the tiles can overlap. The original problem says the tiles cannot overlap; I relax that and say you can go wherever you like and overlap freely. That is again much simpler: now I have eight independent problems of getting each tile from one location to another, and each has a closed-form solution, because that's again just the Manhattan distance. That gives me a heuristic; it's not perfect, it's an estimate, and I can use that estimate in my original search problem to solve it.

So these were examples of the idea of removing constraints to come up with heuristics: knocking down walls, walking and tramming freely, overlapping pieces. In each case, you're reducing edge costs from infinity down to some finite cost, and that makes the relaxed problem solvable. All right, I'm going to wrap up here. We can talk about the last few slides next time, since we're running late, but I think you've got the main idea. So let's talk next time.
Constraint Satisfaction Problems (CSPs) 1: Overview. Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2021.
Hi. In this module, I'm going to talk about constraint satisfaction problems. Before we get into them, I want to revisit where we've been in the course. We started off with machine learning applied to reflex-based models, such as classification and regression, where the goal is just to output a single number or a label. Then we looked at state-based models, where the goal was to output a solution path, and we thought in terms of states, actions, and costs or rewards. Now we're going to embark on a new journey through variable-based models. It's going to be a new paradigm for modeling, in which we think in terms of variables and factors.

The heart of variable-based models is an object called a factor graph. We're going to define factor graphs formally in the next module, but for now let's just build some intuition. A factor graph consists of a set of variables, usually denoted X1, X2, X3 and drawn in circles, and a set of factors, usually denoted f1, f2, f3, f4 and drawn in squares. As you'll notice, each factor touches a subset of the variables, and each factor expresses some preference over, or relationship among, the variables it touches. For example, f2 specifies how X1 and X2 are related, f3 specifies how X2 and X3 are related, and f4 puts a constraint on X3 by itself. The objective of a constraint satisfaction problem is to find the best assignment of values to the variables, where we'll define what "best" means in a second.

Let's look at an example of a problem that can be solved as a constraint satisfaction problem: map coloring, a classic. Here is a map of Australia. We have a number of provinces, seven to be exact, and each province (Western Australia, Northern Territory, South Australia, and so on) has to be assigned a color. The question is: how can we color each province red, green, or blue so that no two neighboring provinces have the same color? So you don't want Western Australia and Northern Territory to have the same color. Here is one possible solution: Western Australia red, Northern Territory green, and so on; you can double-check that no two adjacent provinces here share a color.

Now, this is a simple enough problem that we can just solve it by hand. But as usual, we want to ask: what are the algorithmic principles, and how do we come up with something more general to solve problems like this when we encounter them? Before we talk about how we do this with constraint satisfaction problems, I want to revisit how we might do it with a state-based model, because that's the hammer we have. So let's cast it as a search problem. We start with an initial state representing that no province has been assigned a color. From that state we can take three possible actions: grab WA and assign it red, green, or blue. From each of these points, we can assign NT red, green, or blue, and so on. You can see this is a search tree like the ones we have studied before, and at the very bottom of the search tree, each leaf is a complete assignment to all the variables.
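To make "consistent" concrete before we label the leaves, here is a sketch of the weight of a complete assignment for the Australia map. The adjacency list follows the map, and the particular coloring shown is just one possible solution.

```python
# Pairs of neighboring provinces on the map of Australia.
NEIGHBORS = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA'), ('NT', 'Q'),
             ('SA', 'Q'), ('SA', 'NSW'), ('SA', 'V'), ('Q', 'NSW'), ('NSW', 'V')]

def weight(assignment):
    """Product of binary not-equal factors: 1 if consistent, 0 otherwise."""
    for a, b in NEIGHBORS:
        if assignment[a] == assignment[b]:
            return 0
    return 1

solution = {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
            'NSW': 'green', 'V': 'red', 'T': 'green'}
print(weight(solution))  # 1: a consistent coloring
```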
Each complete assignment is going to be labeled with a 0 if it is inconsistent, in other words, if it doesn't solve the problem. Here the problem is that NT and SA are assigned the same color; that's bad. Here's another complete assignment that's also bad, because WA and NT share the same color. And here is an assignment that is good: you can verify that all neighboring provinces have different colors, and it is denoted with a weight of 1. In general, each state here represents a partial assignment of colors to variables, and at the end of the day we can simply return any leaf that is consistent, for example this one. So this is a perfectly fine way of solving the problem, and it goes to show how powerful state-based models can be. Just to recap: the state here is a partial assignment of colors to provinces, and an action assigns the next uncolored province a compatible color.

So what's missing? Why are we talking about this when we already know how to solve it using a state-based model? Well, the question is: can we do better? And the answer is going to be yes, because there is more problem structure. Let me say what I mean by that. First, notice that in this problem there's just a bunch of provinces that need to get assigned colors, and it doesn't matter in which order I assign them: the variable ordering doesn't affect correctness. That means we don't have to stick with a fixed ordering; we can optimize the ordering, and that's something the inference algorithm can do for us. Second, the variables here are interdependent in only a local way, and we can decompose the problem. For example, Tasmania is completely separated from the rest of Australia, which means we can effectively solve two independent subproblems separately and just combine the solutions. As we'll see later, this is great because it allows us to really speed up search.

Variable-based models allow us to capture these two additional pieces of structure. Variable-based models are an umbrella term that includes constraint satisfaction problems, Markov networks, and Bayesian networks, all of which we're going to get through over the next few weeks. The key idea behind variable-based models is to think in terms of variables: a solution to a problem is simply an assignment to the variables. So when you're modeling using variable-based models, you want to set up a set of variables so that the solution is an assignment to them, and the decisions about how to choose the ordering of the variables, and which variables to set first, are made by the inference algorithm. The key idea here is that you can think of variable-based models as a higher-level modeling language than state-based models. Here's an imperfect analogy from programming languages: if you're trying to solve a problem directly, in an ad hoc way, that's writing assembly; you just go at it. If you're using C or C++, that's kind of like using state-based models: it gives you a higher level of abstraction, which is powerful and saves you a lot of headaches.
But variable-based models are an even higher-level language, like, say, Python, which allows you to think purely in terms of the variables and the modeling and to let the inference algorithm do more of the work. That's always good, because then you can spend more time doing the fun stuff, which is modeling.

So I'm going to talk first about constraint satisfaction problems. Constraint satisfaction problems appear in a number of applications, most of which revolve around large-scale logistics, scheduling, and supply chain management. Companies such as Amazon have to figure out how to put packages on vehicles and deliver them to customers, while at the same time minimizing costs and meeting all the promised delivery times. Here the variables might be the assignment of packages to vehicles, and the factors would include travel times and various costs. Ridesharing services such as Uber and Lyft also have to figure out how to best assign drivers to riders. All of these are extensions of the classical vehicle routing problem.

Here's another example, from sports scheduling. Every year the NFL has to schedule which teams play which other teams and when these games are held. The schedule should minimize travel times between teams, fit the TV broadcast schedule, be fair across teams, and so on. Other scheduling problems like these also involve assigning courses to slots: the registrar's office has a number of courses to offer every quarter, and has to figure out which classrooms and time slots these courses get, again trading off constraints like preferences and availability.

A final application of constraint satisfaction problems is a little different, and this is called the formal verification of circuits and programs. Say you have a computer program, maybe one that sorts numbers, and you want to prove that the program is correct. Normally you would test the program: design a bunch of test cases, run it, and see what happens. But how do you know for sure that it works on all inputs? This is where verification comes in: you want to actually check that it works for all inputs. The way you set this up is that you define a set of variables corresponding to the unknown inputs of the program, and the factors encode the program itself: how execution proceeds line to line. Then you ask whether there exists a program input that produces an error or an incorrect result. So unlike the other applications of CSPs, where you're trying to find a satisfying assignment, in formal verification you're trying to prove that no satisfying assignment exists, because one would mean an error in your program.

So here is a roadmap for the rest of the modules on CSPs. First, we're going to talk about the definition of a constraint satisfaction problem and factor graphs, more formally. Then we're going to give a few examples of CSPs. Then we're going to move over to inference, starting with backtracking search, which in the worst case is, unfortunately, exponential time. But there are a number of ways to speed up search.
Taking full advantage of the fact that we can assign variables in any order, we'll look at dynamic ordering, which uses heuristics to figure out which variables to assign first. Then we'll look at a pruning strategy based on arc consistency, which allows us to prune away values of each variable that are not promising to explore, so that dynamic ordering can be much more effective. And in case you're impatient, don't want to wait an exponential amount of time, and are satisfied with an approximate solution, you can also do approximate search. Here there are two algorithms: beam search, which is an extension of the greedy search algorithm, but a little smarter, exploring only a small fraction of the exponentially sized search tree; and local search, which takes an initial assignment to all the variables and tries to improve it by changing one variable at a time. All right, that's it for this overview module.
Artificial Intelligence and Machine Learning 5: Group DRO. Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2021.
Hello. In this module, I'm going to first show you how minimizing the average error on your training examples can lead to disparities in performance between groups, and then show you a simple approach, called group distributionally robust optimization, that can mitigate some of these disparities.

Let me begin with a very famous example of disparities, or inequalities, in machine learning: the Gender Shades project. In this project, the authors collected a dataset of images of faces of different genders and different skin tones, and then evaluated gender classifiers from Microsoft, Face++, and IBM. What they found was rather striking. For a group of lighter-skinned males, the classifiers were almost perfect. But if you look at their performance on darker-skinned females, you'll see that the accuracies are much, much worse. This is a general problem in machine learning: inequalities between different groups arise because machine learning generally minimizes the average loss.

These inequalities can have real-world consequences. In one vivid case, a Black man was wrongly arrested due to an incorrect match with another Black man captured in a surveillance video, a mistake made by a facial recognition system. Given what we just saw in the Gender Shades project, lower accuracies for some groups might lead to more false arrests, adding to already problematic inequalities that exist in our society today. So in this module, I'm going to focus on this issue of performance disparities between groups and how we might mitigate them. But I also want to highlight that even if we didn't have any disparities between groups, there's a question of whether facial recognition technology should be used in law enforcement, in surveillance, or at all. These are big, thorny ethical questions, which unfortunately we won't be able to spend much time on in this module. I just want to highlight that it's important to remember that sometimes the issue is not with the solution but with the framing of the problem itself.

Gender Shades was an example of classification, but to make things simpler, let us consider our friend linear regression. Recall that in linear regression we start with a training set consisting of examples, each with an input x and an output y; in our case, we'll assume each example is also annotated with a group g. Let's plot this. Group A gives the points (1, 4) here and (2, 8) up here, and these examples down here come from group B. So we have two groups, A and B. The goal of linear regression is to produce a predictor that takes new inputs, such as 3, and produces an output, such as 3.27. We assume the predictor has the form of a weight vector dotted with a feature vector, phi(x). In this simple example we restrict ourselves to the case where the feature vector is the identity map, just x, which gives a hypothesis class consisting of all lines through the origin (you can think of sweeping lines through the origin), and the weight vector is just a single number w. So already you can see some tension here.
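To play along at home, here is a sketch of the setup. Group A is (1, 4) and (2, 8) as in the lecture; the group-B points are my assumption (the lecture only shows them on a plot), chosen to lie on the line y = x so that the numbers below, 7.5 for the average loss at w = 1 and per-group losses of 22.5 and 0, match the lecture. The exact loss minimizers later will differ from the lecture's plot because of this.

```python
# (x, y, group) triples; the group-B points are hypothetical, on the line y = x.
data = [(1, 4, 'A'), (2, 8, 'A'),
        (1, 1, 'B'), (2, 2, 'B'), (3, 3, 'B'), (4, 4, 'B')]

def loss(x, y, w):
    return (w * x - y) ** 2          # squared residual, with phi(x) = x

def train_loss(w):
    return sum(loss(x, y, w) for x, y, _ in data) / len(data)

print(train_loss(1.0))  # 7.5, matching the lecture's number at w = 1
```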
Which weight vector would you choose: one closer to the points in group B, or in group A? This tension means we have to compromise somehow, and exactly how we compromise is going to have implications. Notice also that the predictor doesn't use group information; it just takes an input x as before. What's going to use group information is the learning algorithm, and we'll get to that a little later.

Just as a review: for linear regression, we define the loss on an input (x, y) with a particular weight vector w to be the squared difference between the predicted value of the regressor f_w(x) and the target value y: loss(x, y, w) = (f_w(x) - y)^2. And remember we defined the training loss of a particular weight vector as the average over the training examples of the per-example loss: 1 over the number of training examples times the sum over them. Visually, for each value of w (remember, w is a scalar in this particular example) we get a loss value, and this traces out the training-loss curve here. Let's practice evaluating this training loss at a particular value of w, say 1: averaging over the dataset returns 7.5. So the average loss at w = 1 is 7.5, which seems okay.

But now let's peer a little closer at how the loss is spread across groups. We define a notion of per-group loss. Looking at our training set: what is the loss on group A, and what is the loss on group B? Formally, we define the per-group loss, written TrainLoss_g for group g (g can be either A or B), to be the average over only those examples in that group, where the notation D_train(g) denotes the set of examples in group g. Plotting these two losses gives two curves: TrainLoss_A looks like this, and TrainLoss_B looks like that. Let's practice evaluating them at our example weight vector 1. TrainLoss_A averages only over the examples in group A and gives 22.5; it looks like about 22.5 here. And B actually gets a loss of 0, which you can see at this point. So a single weight vector, w = 1, gets very different losses on the two groups: A is doing a lot worse at 22.5, and B is doing much better at 0, the minimum loss you can hope for. If we were to choose weight vector 1, there would be a huge disparity between the performance on these two groups.

So we can look at the losses of the different groups, but it's helpful to summarize them as a single number, and we capture that with a quantity called the maximum group loss. As you might guess from the name, the maximum group loss, written TrainLossMax, is simply the maximum over all groups of the per-group loss. Visually, with the loss of group A in orange and the loss of group B in blue, the maximum group loss is a function of w, like the other functions: the pointwise maximum.
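Continuing the sketch from above, the per-group loss and the maximum group loss look like this:

```python
def train_loss_group(w, g):
    """Average loss over only the examples in group g."""
    pts = [(x, y) for x, y, grp in data if grp == g]
    return sum(loss(x, y, w) for x, y in pts) / len(pts)

def train_loss_max(w):
    """Pointwise maximum of the per-group losses."""
    return max(train_loss_group(w, g) for g in ('A', 'B'))

print(train_loss_group(1.0, 'A'))  # 22.5
print(train_loss_group(1.0, 'B'))  # 0.0
print(train_loss_max(1.0))         # 22.5
```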
At every point, the maximum group loss takes whichever of the loss of A or the loss of B is larger, so it traces out an upper envelope: over here the loss of A is higher, so it tracks that, and over here the loss of B is higher, so it hugs B from there on. Okay, let's evaluate at w = 1. Remember from the previous slide that the two group losses are 22.5 and 0; taking the max of these gives 22.5. So 22.5 is a single number summarizing how badly off the worst group is. And if you compare the maximum group loss, 22.5, with the average loss, 7.5, you see that the maximum group loss is larger; in fact, it is always at least as large as the average loss.

Now let's compare these two loss functions, the average loss and the maximum group loss. We can plot both of them here, along with our data points, and see pictorially what's going on; the functions are definitely very different. So what happens when we try to minimize the average loss versus the maximum group loss? Let's start with minimizing the average loss. This is standard learning, the status quo. You find the minimum of the average loss, which is at w = 1.09, with a loss of 7.29, so it looks like we're doing pretty well. But if you look at the worst group's loss at that weight vector, you'll see that it's above 20, which is not great. What you can do instead is what we call group distributionally robust optimization, or group DRO, which simply minimizes the maximum group loss; it minimizes this purple plot here. Doing that gives w = 1.58, with a maximum group loss of 15.69, which is better than the 20-plus you would have gotten. Now of course, the average loss is worsened, because at that point the average loss on the red curve is a little higher. So there's a tradeoff. And you can see this tension play out on the data plot over here: if you minimize the average loss, you get a regressor very close to the points in group B, because there are four of them; the majority group dominates. Whereas if you minimize the maximum group loss, you get this purple line, which balances the two groups no matter how many points are in each. You can think of the purple line as more fair, because it treats the groups more equally.

So how do we minimize the maximum group loss? As before, we're going to use gradient descent and follow our nose. The objective function, TrainLossMax, is the maximum over all groups of the per-group training loss. How do you take the gradient of a max? The gradient of a max is the gradient of the function achieving the max: it equals the gradient of TrainLoss_{g*}, where g* is the argmax over groups of the per-group training loss. Look at this picture: we want to differentiate the purple curve, and if you're over here, the gradient of the purple curve is exactly the gradient of the loss on group A.
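And here is a sketch of the group DRO update loop just described: at each step, find the worst-off group and take a gradient step on its examples only. The step size and iteration count are arbitrary choices of mine, and this uses the hypothetical data from the earlier sketch, so the final value differs from the lecture's w = 1.58.

```python
def dro_gradient_descent(eta=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        # find the group currently hurting the most
        g_star = max(('A', 'B'), key=lambda g: train_loss_group(w, g))
        pts = [(x, y) for x, y, grp in data if grp == g_star]
        # gradient of the per-group average squared loss: d/dw (wx - y)^2 = 2(wx - y)x
        grad = sum(2 * (w * x - y) * x for x, y in pts) / len(pts)
        w -= eta * grad
    return w

# Hovers near the point where the two group losses balance (about 2.1 for
# this hypothetical data; the lecture's own dataset gives w = 1.58).
print(dro_gradient_descent())
```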
Similarly, if you're over on the other side, the gradient of the maximum group loss is exactly the gradient of the loss on group B. That corresponds exactly to g* being A over here, where group A is worse, and B over there, where group B is worse. So computing the gradient is actually very simple. You first evaluate the losses of the different groups at your current weight vector, you look at the group that is hurting the most, the one with the highest loss, and you update only on those examples. It's a very intuitive process: find which group needs the most help and update your parameters based on that group alone.

One note: it's important that we're talking about gradient descent, not stochastic gradient descent. Stochastic gradient descent relies on the objective function being a sum over terms, but this is a maximum over a sum, so it won't work directly. How exactly to get stochastic methods to work properly here is beyond the scope of this module, but you can read the notes for pointers.

So let me summarize. We've introduced a setting where examples are associated with groups. We've done it for regression with groups, but this generalizes to classification and other, more general machine learning problems. We saw the average loss and the maximum group loss, and that these are different: what is good on average is not going to be good for all groups, and there's always a tension between the groups if they're pulling you in different directions. And we saw that group distributionally robust optimization, or group DRO, is a very simple algorithm that minimizes the maximum group loss, the purple curve over here.

Finally, I want to remark that this module has kept things simple, but there are many, many more nuances. Intersectionality is the principle that a group such as white women is actually defined by multiple attributes, and such groups might behave differently from the coarser groups, the set of women or the set of white people, so we have to take finer-grained groups into account. There are also cases where we might not know what the groups are: maybe we don't collect demographic information and have to infer them. There's also the issue of overfitting: we've talked only about the training loss here, just for simplicity, but of course what we care about in machine learning is doing well on a held-out test set, which we haven't discussed here. There are many more references in the notes. I hope this has piqued your interest and convinced you that inequalities should be considered a first-class citizen when we think about machine learning methods. So that's it. Thank you.
Constraint Satisfaction Problems (CSPs) 4: Dynamic Ordering. Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2021.
Hi. In the previous module, we looked at modeling CSPs. In this module I'm going to start talking about inference, in particular introducing backtracking search with dynamic ordering.

Just a quick refresher. Remember, a CSP is defined by a factor graph, which has a set of variables, each taking values in some domain, and a set of factors f1 through fm, where each factor fj is a function that takes a subset of the variables and returns a non-negative quantity. The assignment weight is defined as follows: each assignment to all the variables has a weight, which is the product of all of the factors evaluated on it. The goal in solving a CSP is to compute the maximum-weight assignment.

Let's start with backtracking search, which we've already talked about a little. Backtracking search is going to be the blueprint for the algorithm we'll discuss now. We start with the empty assignment, where no variable has a value. We choose one of the variables, assign it a particular value, red in this case, and then we recurse: pick another variable, assign a value, and keep recursing. Maybe we reach a leaf, and then we back up, backtrack, and try green; backtrack and try blue; and then we backtrack up here. Now we try setting WA to green up here, explore that subtree, and come back up; explore NT green, NT blue, and so on. At the bottom of this tree we have the leaves, and each leaf is a complete assignment with a weight we can compute. Once we've searched through all the assignments, we simply take the assignment with the maximum weight. This is the most straightforward way of taking a CSP and solving it using backtracking search.

The first thing to note is that we can actually compute weights of partial assignments as we go, rather than waiting until the very end to compute the weight of an entire assignment. Here is how it proceeds. Start with the empty assignment and assign WA red; so far we can't evaluate any of the factors. But once we assign NT, we can evaluate this factor and test whether WA differs from NT; the other factors we can't evaluate yet, because we haven't assigned values to their variables, but we can move on. Now we recurse and assign SA a value, and we can evaluate these factors: WA not equal to SA and NT not equal to SA. Then we assign Q and can evaluate these two factors; assign NSW and evaluate these two; and assign V and evaluate these two, and those are all the factors. So at any point in time, for example at NSW, we have this partial assignment, and we define the weight of that partial assignment to be the product of all the factors we can evaluate, where a factor is evaluable if all the variables in its scope have been assigned.

More formally, suppose we have a partial assignment x. We define the set of dependent factors as follows: D(x, Xi) is the set of factors depending on Xi and on x but not on the unassigned variables. For example, D of this partial assignment up here and this variable SA is simply these two factors here. These are the factors that are going to be multiplied in when Xi is set.
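As a sketch of this bookkeeping, suppose each factor is represented as a (scope, function) pair; the names here are mine, not the lecture's.

```python
def dependent_factors(factors, assigned_vars, Xi):
    """Factors whose scope is fully covered once Xi joins the assigned variables."""
    return [(scope, f) for scope, f in factors
            if Xi in scope and set(scope) <= assigned_vars | {Xi}]

def extend_weight(w, x, Xi, v, factors):
    """Multiply in exactly the factors that become evaluable when Xi = v."""
    x2 = dict(x)
    x2[Xi] = v
    delta = 1
    for scope, f in dependent_factors(factors, set(x), Xi):
        delta *= f(*(x2[V] for V in scope))
    # delta == 0 means this partial assignment can never lead to a
    # positive-weight solution, which is what the algorithm below prunes on.
    return w * delta, x2
```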
OK, so now we're ready to present our main backtracking search algorithm. This is going to be a general blueprint for many of the bells and whistles we're going to talk about soon. Backtrack takes a partial assignment x; the weight w of that partial assignment, which is the product of all the factors we can evaluate so far; and Domains, which specifies the valid possible values for each variable in the CSP (more on that in a bit). If x is a complete assignment, then we have reached a leaf, so we look at its weight, update our current best, and return. If not, we choose an unassigned variable Xi, look at the values in Domain_i of Xi, and order them somehow. We go through each value v in that order and compute a weight update: we take the assignment x extended with Xi set to v, look at all the factors in the dependent set D(x, Xi), and multiply those factors evaluated at the extended assignment. That number we call delta, the update on w. If delta equals 0, then we stop there and don't recurse further, because any factor that evaluates to 0 is enough to zero out the weight of the entire assignment. Otherwise, we continue: we do this thing called lookahead, which takes the domains and tries to reduce them, pruning values based on the new assignment Xi = v. If any of the domains becomes empty, we can again prune and stop recursing. Otherwise, we recurse: we call backtrack on the extended assignment, with the updated weight w times delta, and the new domains computed via lookahead.

So this recipe has three choice points: how to choose the unassigned variable, how to order the values of that unassigned variable, and finally the lookahead, which is how we prune. We're going to talk about each of these in turn, starting with the lookahead. We introduce a simple form of lookahead called forward checking. First, we visualize the domain of each variable as the set of valid colors above the respective variable; in the empty assignment, all the values are allowed. Now suppose we set WA equals red. At this point, two things happen. First, we wipe out all the other values from WA's domain, since we're committing to WA = red. But in addition, we do a one-step lookahead, forward checking: we eliminate inconsistent values from the domains of Xi's neighbors. In this case, we look at the neighbors of WA, which are NT and SA, and remove red from their domains. Why? Because this factor says that if WA is red, then NT can't be red, so red is gone. Backtracking search now recurses and, say, sets NT to green. Again, we do a one-step lookahead: look at the neighbors of NT and wipe out green from their domains. Suppose we recurse again and set Q to blue. One-step lookahead again wipes out blue from Q's neighbors. Now look at what happens: SA has an empty domain, which means there are no possible values we can set SA to that make the assignment consistent.
So in this case, if any domain becomes empty, we simply return. And this is important: all of these other variables have not been set yet, and rather than recursing and trying to set them in all sorts of different ways, we already know at this point that SA is not settable, so we just stop there. Forward checking thus allows us to use the domains to prune.

Forward checking is also going to help us choose the next unassigned variable and order the values of a variable, as follows. Suppose we're in this situation: WA and NT have been assigned, and we've applied forward checking to propagate the constraints to all the neighbors. Now the question is, which variable do we assign next? There's a heuristic called most constrained variable, MCV, which simply chooses the variable that has the smallest domain. So what are the domain sizes here? Q has two elements, NSW and V have three, and SA has only one, so SA is the variable with the smallest domain. The intuition is that we want to restrict the branching factor, choosing variables with small branching as determined by the number of elements in the domain.

The second choice point is, once we've selected a variable, how do we order its values? Consider the following: we're trying to assign a value to Q. Do we first try red, or blue? The idea behind the heuristic called least constrained value, LCV, is to order the values of the selected variable Xi by decreasing number of consistent values of neighboring variables. What does that mean on this example? We look at Q and tentatively set it to red, and we propagate via forward checking to its neighbors, wiping out red there. Then we look at the neighbors and ask, how many consistent values remain? There are 2 plus 2 plus 2, so 6 values. What about if we set Q to blue? We eliminate blue from the neighbors, and the number of consistent values is 1 plus 1 plus 2, which is 4. Since 6 is larger than 4, we try red first. Intuitively, why does this make sense? We want to choose a value that gives as much freedom as possible to the neighbors, so that we don't run into trouble and make things inconsistent. You can see that with red here, we have more options for the neighbors NT and SA, whereas with blue we can only do green here, and if you look even one step ahead, you'll notice that this is already going to cause trouble. So least constrained value orders the values so as to free up the neighbors as much as possible.

This might seem a little strange: most constrained variable and least constrained value seem superficially at odds with each other. But there's a rationale, which is that variables and values are very different. In a CSP, every variable must be assigned; we can't leave a variable alone and hope the problem disappears later. So we choose the most constrained variables first: if we're going to fail, we might as well try the hardest variables first and fail early, which leads to more pruning. Values, on the other hand, we just need to choose some value for each variable. So what we try to do is choose the value that is most likely to lead to a solution.
It doesn't matter if some other value would have caused trouble, because if we choose a value that happens to work, then we're happy.

So when do these heuristics help, more formally? Most constrained variable is useful when some of the factors are constraints. It's OK if some factors are not constraints, but it's important that at least one factor is a constraint, meaning something that can return 0. If all the factors return non-zero values, none of these heuristics are going to be helpful; you have to explore everything anyway. Least constrained value is useful when all the factors are constraints; in other words, when the assignment weights are 1 or 0. The rationale is that we don't have to examine all the assignments in this case: if we find an assignment with weight 1, we know we're done, because 1 is the maximum weight possible, and we can return immediately and stop early. But if there are factors with varying weights of different magnitudes, we can't necessarily stop when we find a weight of 2; maybe there's another assignment with weight 4 or 8 or 17 and so on, so we can't really stop early. And notice that we need forward checking to make both of these heuristics work, because they rely on counting the number of elements in domains, and we need to prune those domains for the counts to be informative.

OK, so let's conclude. We presented backtracking search, which has three choice points. First, we choose an unassigned variable Xi; this is done using most constrained variable, MCV. Once we've found a variable to assign, we order its values using the least constrained value heuristic, LCV. Then we compute the updated weight delta, as discussed before, and update the domains via one-step lookahead, a.k.a. forward checking. If the number of elements in any domain becomes 0, we stop there and don't recurse; otherwise we recurse. A sketch of the full procedure follows below. Note that none of these heuristics is guaranteed to speed up backtracking search; there's no theory here, but in practice they can often make a big difference. Next time we'll look further at lookahead and see how we can improve upon forward checking. So that's it.
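Here is a minimal Python sketch of backtracking search with forward checking, MCV, and LCV on the Australia map-coloring CSP. Restricting to binary not-equal constraints (so weights are 0 or 1) and the NEIGHBORS data structure are my own simplifications for illustration; the lecture's general algorithm also tracks weights for arbitrary factors.

```python
# Backtracking search with forward checking, MCV, and LCV, sketched for the
# all-constraints (weight 0/1) map-coloring case.

NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
COLORS = ['red', 'green', 'blue']

def forward_check(domains, var, value):
    """One-step lookahead: remove `value` from the domains of var's neighbors."""
    new_domains = {v: list(d) for v, d in domains.items()}
    new_domains[var] = [value]
    for n in NEIGHBORS[var]:
        if value in new_domains[n]:
            new_domains[n].remove(value)
    return new_domains

def backtrack(assignment, domains):
    if len(assignment) == len(NEIGHBORS):
        return assignment  # complete, consistent assignment (weight 1)
    # MCV: choose the unassigned variable with the smallest domain.
    var = min((v for v in NEIGHBORS if v not in assignment),
              key=lambda v: len(domains[v]))
    # LCV: order values by how many consistent values they leave for neighbors.
    def freedom(value):
        d = forward_check(domains, var, value)
        return sum(len(d[n]) for n in NEIGHBORS[var] if n not in assignment)
    for value in sorted(domains[var], key=freedom, reverse=True):
        if any(assignment.get(n) == value for n in NEIGHBORS[var]):
            continue  # delta = 0: a not-equal constraint is violated
        new_domains = forward_check(domains, var, value)
        if any(len(d) == 0 for d in new_domains.values()):
            continue  # some domain became empty: prune this branch
        result = backtrack({**assignment, var: value}, new_domains)
        if result is not None:
            return result
    return None  # no value worked: backtrack

print(backtrack({}, {v: list(COLORS) for v in NEIGHBORS}))
```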
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_10_Recap_Stanford_CS221_Artificial_Intelligence_Autumn_2021.txt
OK, so this is the last module of the last set of lectures this quarter, so let's do a recap of logic. What have we talked about? We talked about the three main ingredients of logic. First, syntax, which defines the set of formulas and allows us to talk symbolically about things that exist in the world. For example, I might say "rain and wet" without knowing what rain means, or wet means, or what this "and" symbol means; that's the syntax land, where I just have symbols and can manipulate those symbols. Then I can assign meanings to the syntax using semantics. The idea of semantics is that for every formula f, you specify a set of models, M(f), which is a set of assignments or configurations of the world that gives meaning to the syntactic formula f. In the case of "rain and wet", for example, rain can take values 0 and 1, and wet can take values 0 and 1, and the darker area corresponds to the meaning of both rain and wet being true. So in general, to define a logic we need both syntax and semantics: syntax as a way of writing out the formulas, and semantics as a way of giving meaning to those formulas.

In addition to syntax and semantics, we talked about inference rules. We spent quite a bit of time on modus ponens and resolution, for both propositional logic and first-order logic, as ways of doing inference on our knowledge base. We have a knowledge base, which is a set of formulas, and the question is, what new formulas can we derive from it? For example, from "rain and wet" I can derive "rain"; I can actually prove that it's raining. So how do we think about inference rules, how can we infer new formulas, and what can we say about the formulas we infer?

So how do inference algorithms work? If we have a knowledge base and an inference rule like modus ponens or resolution, we repeatedly apply that inference rule to derive new formulas. As we derive new formulas, we're expanding our knowledge base, but we're shrinking the space of models, because we're adding more constraints: adding a new formula, in general, shrinks the space. If the formula is merely derived from the knowledge base, though, it doesn't actually change the set of models, because it was already entailed.

Here's an example. Say I have wet, weekday, and "wet and weekday implies traffic". From these three formulas as my premises, what can I conclude? We talked about modus ponens as an inference rule that lets us infer traffic here. More generally, what does modus ponens do? Modus ponens says: if we have propositional symbols p1 through pk, and a formula "p1 and ... and pk implies q", then we can derive q. We then talked about soundness and completeness of inference rules, with modus ponens as an example. So what does soundness mean?
Soundness means that if you're deriving new formulas, those formulas need to actually be true: they live in the space of things that are entailed. Remembering our example with the glass of water, soundness means anything we derive should be inside the glass, because everything in the glass is entailed and true. Completeness, on the other hand, means we derive the whole truth: we should be able to derive everything inside the glass, or even more. If we have both soundness and completeness, then derivation and entailment are the same thing. Remember, entailment is about meaning, about semantics: whether f is actually entailed by the knowledge base or contradicted by it. Derivation is just symbol manipulation. It's difficult to work with entailment directly in semantics land, because you have to think about meanings. But derivation lives in the world of syntax: you're just moving formulas around, and by mindlessly applying inference rules you can derive new formulas, which gives you a compact way of reasoning. So if derivation is the same as entailment, that's pretty nice: if you have a virtual assistant and you want to ask it a question or tell it some information, that corresponds to an entailment question, which might be difficult to answer directly. But if you have a sound and complete inference rule for your logic, you can just check derivation, and derivation alone gives you the answer. That's why we spent time on derivation.

We discussed modus ponens for propositional logic and the fact that it is sound for propositional logic, but not complete: it's not able to derive all the formulas that are true. To fix that, we had two options. One was: maybe propositional logic is too large, and we should reduce its scope. The other was: maybe modus ponens is not strong enough, and we should use a stronger inference rule. So let's talk about those two ideas. The first idea: propositional logic allows any legal combination of symbols, which is pretty expressive, maybe too expressive. So we can restrict to propositional logic with only Horn clauses. This is a restricted set of formulas, and it allows us to get both soundness and completeness with modus ponens. So what is a Horn clause? A Horn clause is a clause that has at most one positive literal; if you write it in conjunctive normal form, CNF, you want at most one positive literal per clause.
Another way of writing it is as a conjunction of propositional symbols, p1 and ... and pk, implying some q. There are no ors, no branching, here; that is why we could show completeness with Horn clauses. So that was propositional logic with Horn clauses using modus ponens, which gives us completeness; general propositional logic does not give us completeness with modus ponens. The other option we discussed is a fancier inference rule, specifically resolution. Resolution gives us both soundness and completeness. The issue is that it can take exponential time, as opposed to linear time: with modus ponens over Horn clauses, we keep adding one formula at a time and end up with at most n formulas, but with resolution we might have an exponential-time algorithm. Still, we get both soundness and completeness, which is nice.

OK, so that was all about propositional logic. At some point we started talking about first-order logic. We expanded our logic to be more expressive, to talk about variables and quantifiers, and to represent things that are much harder to represent in propositional logic. And we went over the same ingredients for first-order logic: syntax, semantics, and inference rules. Comparing the two: in propositional logic, we have the option of doing model checking when we think about models and semantics. In first-order logic, we don't really have an analog of that, but we have this other thing called propositionalization: for a subset of first-order logic formulas, we can propositionalize, which takes us back to propositional logic land, where we can use the same tools. Thinking about inference rules, we discussed modus ponens with Horn clauses and the fact that it's sound and complete; the same story holds in first-order logic. We can apply modus ponens with Horn clauses in first-order logic, and it's sound and complete, with a plus-plus: we had to change modus ponens a little bit. We discussed unification and substitution, because there are variables now, so we have to apply unification and substitution to make modus ponens work in first-order logic. Similarly, we discussed resolution and showed that it is general, sound, and complete in propositional logic; for first-order logic, we briefly discussed it in an optional module. Again, this is resolution plus-plus, because we apply unification and substitution there too, and it's sound and complete even for first-order logic, which is kind of nice.

All right, so that summarizes our logic lectures. I just want to leave you with one thought about logic. What is it about logic that is useful, again? We talked about all its limitations: it can't handle uncertainty, it's not really probabilistic, it's pretty brittle, and it's not able to learn from data.
As you get more data, it's hard to update its rules, because they're deterministic rules built on top of each other. But it does have one big benefit: it gives us a very compact and concise way of representing knowledge. Remember, the whole point of inference rules was that a logical formula is a very compact way of capturing knowledge that is actually pretty difficult to represent in semantics land. And once I have this concise formula, I can manipulate it, move it around, apply all sorts of inference rules to it, derive new formulas, and prove new formulas. That's pretty interesting, and it's much harder to do in semantics land. So the real strength logic gives us is this compact representation that helps us think about formulas and manipulate them. And I think one thing that would be very interesting to think about is how we could use these ideas, maybe not exactly logic, but ideas from logic, in some of the more modern AI systems, the more modern machine learning based systems. That's a pretty interesting view of logic to take away from this class.
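As a concrete illustration of modus ponens over Horn clauses, here is a minimal forward-chaining sketch in Python. The rule representation ((p1, ..., pk), q) is my own assumption for illustration; it encodes the definite clause "p1 and ... and pk implies q", the setting in which this procedure is sound and complete.

```python
# Forward chaining with modus ponens over definite (Horn) clauses.

def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new symbols can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)  # modus ponens: all premises hold, derive q
                changed = True
    return derived

# The lecture's example: wet, weekday, and (wet and weekday -> traffic).
facts = {'wet', 'weekday'}
rules = [(('wet', 'weekday'), 'traffic')]
print(forward_chain(facts, rules))  # {'wet', 'weekday', 'traffic'}
```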
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Machine_Learning_13_Kmeans_Stanford_CS221_AI_Autumn_2021.txt
All right, in this module I'm going to talk about k-means, a simple algorithm for clustering, one form of unsupervised learning. I want to start with a classic example of clustering from the NLP literature, Brown clustering. This was the unsupervised learning method of choice before word vectors or contextualized word embeddings and so on. The input to the algorithm was simply raw text, lots and lots of words from news articles, and the output was a clustering of those words. The algorithm was able to pick out a cluster 1, which is Friday, Monday, Thursday, and generally days of the week; cluster 2 had months; cluster 3 had natural resources; and so on down the list. Each cluster had fairly coherent structure. The one thing that's quite interesting to note is that no one told this algorithm what days of the week are, or what months are, or what family relations are. It was able to figure all this out just by looking at the data. On a personal note, Brown clustering was actually my first experience that got me to pursue research in NLP in the first place; seeing the results of unsupervised learning, when it worked, was really magical. And, of course, today we're seeing even stronger evidence of the sheer potential of unsupervised learning with language models.

So I want to contrast unsupervised learning with supervised learning. In supervised learning, we looked at classification: you're given a training set which is labeled, so each input x is labeled with an output y. This goes into a learning algorithm, you get a classifier, and then you can predict on new points. The main challenge is that labeled data is expensive to obtain: it has to be annotated, often by domain experts. In contrast, unsupervised learning, of which clustering is one form, uses unlabeled data, which is very cheap to obtain. As a concrete example, suppose you have some unlabeled points, and we want a learning algorithm that produces, not a predictor, but an assignment of each point to a cluster. Let's assign the first four points to the blue cluster and the second set of points to the orange cluster. Intuitively, we want to assign nearby points to the same cluster, and you can see that these points are close to each other, those points are close to each other, and the two groups land in separate clusters.

More formally, the task of clustering is: you're given some training points, D_train, a list of points x1 through xn, and the output is an assignment of each point to a cluster. Formally, we have an assignment vector z1 through zn, where each zi is a number between 1 and K; assuming we have K clusters, each point is assigned to one of them. So what makes a cluster? The key assumption behind k-means is that each cluster k is represented by a centroid, denoted mu_k, and we concatenate all the centroids together to form mu. This diagram illustrates what a centroid is trying to capture: the centroid is, in some sense, the point which is closest to all the other points in that cluster, so it represents the cluster by a concrete point in space.
So the intuition in terms of centroids is that we want each point to be close to its assigned centroid, mu_{zi}; that's a bit of notation we'll use throughout. Look at this point over here: we want it to be close to the centroid of its assigned cluster, and this other point close to its centroid. Now we can define the k-means objective function; here's a picture, which I'll talk through. The k-means loss is a function of the cluster assignments z1 through zn and the cluster centroids mu_1 through mu_K, and it equals the sum over all n points of the squared distance between the i-th point and its assigned centroid. Here zi is a number between 1 and K specifying which cluster point i is assigned to, and I access its centroid, take the difference between the point and that centroid, and square it. Pictorially, for each point we look at its assigned centroid and take the squared length of the dashed line between them; the sum of all the squared dashed lines is exactly the k-means loss, which we want to be as small as possible. So we minimize this objective with respect to both the cluster assignments and the centroids.

To get some intuition, let's consider a simpler example in one dimension. We have four points at 0, 2, 10, and 12. First, consider the optimistic case where we know what the centroids are, because that makes our life easier. If we know the centroids are at 1 and 11, this becomes a pretty trivial problem, because to assign a point we just assign it to the closest centroid. The point at 0 is closest to 1, so we assign it to cluster 1; the point at 2 is also closest to 1, so cluster 1; the point at 10 is closest to the centroid of cluster 2, and same with 12. All we're doing is looking at all the centroids and computing which one is closest to the point we're trying to assign.

Now consider the case where we don't know the centroids but we have the assignments. If we have the assignments, we can compute the centroids optimally as well. For the first cluster, we look at all the points assigned to that cluster; we want the centroid that is as close as possible to all of them on average, which is a minimum over a sum of squared distances, and recall that this is optimized in closed form by the mean of the points assigned to that cluster. For mu_2, points 10 and 12 are assigned to that cluster, so the mean is 11. So given either the cluster assignments or the centroids, we can optimally recover the other. But this is a chicken-and-egg problem, because we have neither the centroids nor the assignments to begin with. So what can we do? Let's just take a gamble and initialize randomly: we initialize the centroids at random, usually at some of the existing data points. Clearly this is not optimal, but let's try to iterate from it.
First iteration: we fix these centroids and optimize the cluster assignments. We look at each point and assign it to one of the clusters. The point 0 is closest to the first centroid, so we assign it to cluster 1. The point 2 is closest to the second centroid because it's right on top of it, so that's cluster 2, and the points 10 and 12 are also closest to the second centroid, so we assign them to cluster 2. Then we use these new assignments to re-estimate the centroids. The first cluster contains only the point 0, so its centroid goes right there, at 0. The second cluster now has three points, and we place the centroid to minimize the squared distance to all of them, which is the average: 2 plus 10 plus 12, divided by 3, which is 8. So now we have updated centroids at 0 and 8, and this is looking a bit better.

Second iteration: we reassign the points based on these new centroids. The point 0 is assigned to cluster 1. The point 2 is also assigned to cluster 1, because 2 is closer to 0 than to 8. The point 10 is closest to the second centroid, and same with the point 12. Now we have new cluster assignments, so we go back and re-estimate the centroids. We're back in the familiar problem from the previous slide: for the first cluster, the centroid is the mean of the two points, which is 1, and for the second cluster, the mean of 10 and 12 is 11. And now we've actually converged: if you repeat the process, nothing changes, so we're done. In this case, it happens to recover the optimal clustering for these four points, even though we started from a random initialization.

So here is the k-means algorithm stated more formally. First, we initialize all the centroids randomly. Then we iterate T times, or until convergence, alternating between step one, setting the assignments given the centroids, and step two, setting the centroids given the assignments. In step one, we go through each point i and set zi to the cluster whose centroid is closest: the argmin over k in 1 through K of the squared distance between the point and mu_k. In step two, we loop over the clusters, and we set the centroid mu_k of cluster k to the average of the points assigned to that cluster: the sum of those points divided by the number of points with zi equal to k. Great, so that is the k-means algorithm.

One word about whether it works: the k-means algorithm is guaranteed to converge to a local minimum of the k-means objective, but it is not guaranteed to find the global minimum. In the cartoon picture of the objective function, it can converge to a local minimum but not the global minimum. If you click here, I have a demo which shows how k-means works; you can construct your own set of training examples.
And if you step through the k-means algorithm, it initializes and then alternates between moving the centroids around and reassigning the points. In this happy case, we actually get a pretty good clustering: blue points over here, red points over here, and green. The training error, the k-means objective, is 44.7. But if I initialize in a slightly different way, let's see what happens: it converges to something with much worse error. And you can see visually that this is a suboptimal clustering, because this cluster has only one point, whereas this other cluster has so many points, which are spread out.

So what do you do about this? There are a couple of things. One is that you can run it multiple times from different random initializations and take the best one. Another is to use a smarter initialization heuristic. I didn't say very much about initialization, but there's a cool method called k-means++, where you initialize the centroids one at a time, choosing data points which are as far away as possible from all the previous ones. This makes sure the centroids are spread out, so they can move over time to capture all the points in your data set.

OK, to wrap things up: we've talked about k-means, an algorithm for clustering, and clustering is a useful task that allows us to discover structure in unlabeled data, in particular by grouping points together. It's useful to distinguish the two meanings of k-means. The first is the k-means objective: the objective function that says, find assignments and centroids that minimize the sum of squared distances between each point and the centroid of its assigned cluster. Then there's the k-means algorithm, which performs alternating minimization on the k-means objective: setting the assignments given the centroids, and setting the centroids given the assignments, a chicken-and-egg problem. This is not guaranteed to globally minimize the k-means objective, although it usually gets pretty good results robustly. A short implementation sketch follows below.

Stepping back a little, k-means is one clustering method; there are other clustering methods out there, and clustering is a form of unsupervised learning. Generally, unsupervised learning has a few use cases. One is data exploration and discovery: you get a pile of data you might not have had a chance to annotate or label, and you can run clustering or other unsupervised learning to group points, discover structure, and get insight. A second use case is that when you perform clustering, or some other sort of representation learning, you can get useful representations or features that you can feed into downstream supervised learning problems when you do get labeled data, and this generally helps supervised learning work better. OK, so that's the end of this module.
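Here is a minimal numpy sketch of the k-means algorithm from this module, run on the lecture's 1-D example. Initializing the centroids at random data points and the convergence check are my own choices for illustration; a real implementation would also handle empty clusters and multiple restarts.

```python
# Alternating minimization for the k-means objective.
import numpy as np

def kmeans(X, K, T=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]  # init at data points
    for _ in range(T):
        # Step 1: assign each point to its closest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        z = dists.argmin(axis=1)
        # Step 2: set each centroid to the mean of its assigned points.
        new_centroids = np.array([X[z == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centroids, centroids):
            break  # converged: neither step changes anything anymore
        centroids = new_centroids
    loss = ((X - centroids[z]) ** 2).sum()  # the k-means objective
    return z, centroids, loss

# The 1-D example from the lecture: points 0, 2, 10, 12 with K = 2.
X = np.array([[0.0], [2.0], [10.0], [12.0]])
z, centroids, loss = kmeans(X, K=2)
print(z, centroids.ravel(), loss)  # optimal centroids are 1 and 11
```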
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_2_Definition_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to present the formal definition of Bayesian networks, give a few examples, and then talk about a really interesting property called explaining away. Before we begin, I want to review some basic probability. Suppose we have some random variables: one called S, representing whether there is sunshine, and another called R, representing whether there is rain. You should think of a setting of values to S and R as capturing some state of the world. We don't know which state of the world we're in, so we capture this uncertainty using a joint distribution. The joint distribution over S and R, written P(S, R), is a table which specifies, for every possible assignment to S and R, a probability. For example, the probability of no sun and no rain is 0.20, no sun and rain is 0.08, and so on. Notice that I'm using lowercase letters to denote values and uppercase letters to denote random variables. Also notice that an entry like P(S = s, R = r) is a number, a probability, whereas P(S, R) is a table. The joint distribution captures everything you really want to know; you can think of it as a probabilistic database that captures how the world works.

Now we can use the joint distribution to answer all sorts of interesting questions. We can compute what is called a marginal distribution. Suppose I'm interested only in whether there's sunshine and I don't care about rain. We compute P(S), a table which specifies, for each possible value of S (0 or 1), its marginal probability. How do we compute this? We simply aggregate rows. For s = 0, we look at the joint distribution, match all the rows where s is 0, these two, and add them up, which gives 0.28. For s = 1, the matching rows are the last two, which gives 0.72. That's the marginal distribution over S.

There's also another concept called a conditional distribution. Suppose I knew it was raining, so I condition on R = 1, and I want to know the probability of sunshine. This is a table specifying, for each possible value s, the conditional probability P(S = s | R = 1). The way to compute it is as follows: the condition R = 1 means we effectively remove all the rows where R does not equal 1, leaving these two, and then we simply normalize. We have 0.08 and 0.02, whose sum is 0.1, so we divide by that sum, giving 0.8 and 0.2 for the values of S given R = 1. So all we did was select the rows matching the condition and normalize. A quick note: the normalization constant, the sum of those two entries, is exactly the marginal probability of R = 1, and you can check that the conditional, by definition, equals the joint divided by the marginal of the conditioning event.

Now let's expand our example a little. Suppose our variables are sunshine, rain, traffic, and autumn, with a joint distribution over all four. Marginalization and conditioning are not mutually exclusive.
We can actually define queries that involve both. Here's an example: suppose I know there's traffic and that we're in the autumn quarter, and I'm interested in a particular query variable, in this case R. This question is written as the probability of the query variable R conditioned on the evidence T = 1, A = 1. The variables not mentioned here are said to be marginalized out; S is not mentioned, so we marginalize out S. In general, there are three sets of variables: the query variables, the conditioning variables, and the marginalized-out variables, which form a partition of all the variables in your system.

Now let's turn to a classic puzzle, which we will solve using Bayesian networks. Suppose that in the world there are unfortunate things such as earthquakes and burglaries, and suppose they're independent events and, hopefully, rare: each happens with probability epsilon, where epsilon is some small number. You've installed an alarm, which goes off whenever there's either an earthquake or a burglary; you got some special deal on a two-in-one kind of alarm. Suppose you're away on vacation, and you get a notification that your alarm went off. Now: does hearing that there was an earthquake, on the radio or in your news feed, increase, decrease, or keep constant the probability of burglary? In other words, given that you already know the alarm went off, how does additionally knowing there was an earthquake change your beliefs about a burglary?

We could try to intuit the answer, and I'd encourage you to do that and see if you're right. But sometimes this can be very slippery, because the right answer can be counterintuitive. You might think: earthquakes and burglaries are independent, so why should knowing there's an earthquake change the probability of burglary? That's one way to think about it, but it turns out to be wrong, and I'll show you why.

Let's tackle this problem using Bayesian networks. We're going to define a joint distribution over earthquake, burglary, and alarm; I'll do that on the next slide. But first, let's convert this word problem into mathematical notation. The two things I want to compare are: the probability of a burglary given only that I heard the alarm, versus the probability of a burglary given that I heard the alarm and I heard there was an earthquake. Is the second smaller, the same, or larger? That's what I want to know.

So now let's define the Bayesian network completely. There are four steps. First, figure out what the variables are: whether there's a burglary, B; whether there's an earthquake, E; and whether the alarm went off, A. Second, model the dependencies between these variables using directed arrows. You can think of them as capturing causality, although that's not necessarily the case; they're meant to capture qualitative relationships. Here, the alarm is triggered by either a burglary or an earthquake, so that seems sensible. Third, to make these qualitative relationships quantitative, we define a local conditional distribution for each variable conditioned on its parents. Let's go through these examples.
Start with B. B has no parents, so its local conditional distribution just specifies, for each possible value of B, its probability: the probability of burglary is epsilon, which means the probability of no burglary is 1 minus epsilon. Then E, which also has no parents: the probability of earthquake is epsilon, and of no earthquake, 1 minus epsilon. I can write these conditional distributions as follows: p(b) equals epsilon times the indicator [b = 1], plus (1 minus epsilon) times the indicator [b = 0]. If I plug in 1, I get epsilon, and if I plug in 0, I get 1 minus epsilon; and the same for p(e). Now, what is the probability of a given its parents? It's easiest to write it mathematically: p(a | b, e) is the indicator of whether a equals b or e. This is a deterministic relationship, but I've lifted it into probabilistic notation via the indicator function. I can also write it out as a table, where for every possible configuration of the parents and of a itself, I specify its probability. If b and e are 0, does a = 0 equal 0 or 0? Yes, so that probability is 1. Does a = 1 equal 0 or 0? No, so that's 0. Does a = 0 equal 0 or 1? No. Does a = 1 equal 0 or 1? Yes, so that's 1. The rest of the table fills in analogously.

So now we've defined a local conditional distribution for each variable given its parents, and the final step is to multiply all of these together; that product is defined to be the joint distribution over all the random variables. Notice that I'm deliberately using two types of P here: lowercase p specifies the local conditional probabilities, and blackboard uppercase P is reserved for the joint distribution and the derived marginal and conditional distributions. Again: the local conditional distributions are just defined, whereas the joint distribution is derived from them. Working out the product gives this table over all possible assignments to B, E, and A, with their probabilities.

Now I can work on the questions I was asking; this is my probabilistic database, so let's query it. Let's warm up with something relatively simple: what is the marginal probability P(B = 1)? Remember how to compute a marginal: select the rows where B = 1, down here, and add up their probabilities. That's epsilon times (1 minus epsilon), plus epsilon squared, which equals epsilon.

What about the probability of burglary conditioned on the alarm? For conditional distributions, we wipe out all the rows where A is not 1, leaving the rows consistent with the evidence A = 1. Then we look at the rows where B = 1, which gives epsilon times (1 minus epsilon) plus epsilon squared in the numerator, and we divide by the sum of all three remaining rows, which is the numerator plus an additional (1 minus epsilon) times epsilon.
If you do the math, you get 1 over (2 minus epsilon). This intuitively makes sense: the prior probability of a burglary is small, but if I hear the alarm, it goes up to a little over 50%. Now the final question: what is the probability of burglary, given that I heard the alarm and also that there was an earthquake? I condition on A = 1 and E = 1, so I also wipe out the rows where E is 0, and I'm left asking for the probability that B = 1. That's epsilon squared divided by the sum of the two remaining probabilities, epsilon squared plus (1 minus epsilon) times epsilon, and if you do the math, it gives epsilon. So this answers our question. When I hear the alarm, the probability of a burglary rightly goes up; but if I then hear there was an earthquake, the probability goes back down to epsilon. So observing the earthquake does cause the probability of burglary to go down.

Let me convince you of this via the demo. Remember from before that we can define arbitrary factor graphs, including Bayesian networks, using this tool. We have three variables, B, E, and A, and we set epsilon to 0.05. I define the local conditional distributions: probability of B, probability of E, probability of A given B and E. Now I ask for the probability of B. Stepping through the algorithm, the probability of B is 0.05, which is epsilon. What happens when I condition on A? I find that the probability of B conditioned on A = 1 is 0.51; remember, this is 1 over (2 minus epsilon). Finally, I condition on the earthquake as well, and the probability of burglary goes back down to 0.05, which is epsilon.

So what have we learned? You could write a flashy headline saying, "earthquakes decrease burglaries." Of course, this is a little tongue in cheek, because it is not a causal statement. You have to be careful: if you went in and caused some earthquakes (I don't know how you would do that, but suppose you did), it's not like all the burglaries would disappear. Here, "decrease" does not mean causally affect; it just means that, given this evidence, the probabilities of various other variables change. The punch line is that dealing with these probabilities and reasoning under uncertainty is quite slippery, so we need a sound mathematical framework, such as Bayesian networks, to deliver the right answers.

This type of phenomenon is so important to Bayesian networks that it has a special name: explaining away. In general, explaining away is when two or more causes positively influence an effect; conditioned on the effect, further conditioning on one cause actually reduces the probability of the other cause. Mathematically, the probability of one cause, given the effect and the other cause, is less than the probability of that cause given just the effect alone. And this is true even when the causes are independent, which might be somewhat counterintuitive. This is a hallmark of Bayesian networks, and the graph pattern is called a V-structure; it looks like a V.
If you want some intuition, you can rationalize it as follows. You observe the effect, A = 1, and you seek an explanation for what caused it: is it B or E? Conditioned on A alone, it could be either one, so it's kind of 50-50. But if I told you that one of the causes was actually active, that intuitively lessens the responsibility of the other: you don't really need the other cause to explain A, so its probability goes down. That's very hand-wavy, of course, but you can rest assured that the rigorous mathematical calculation we just did backs it up.

Let's look at another example, a toy medical diagnosis problem. Suppose you are coughing and have itchy eyes: do you have a cold, or something else? Remember, there are four steps, so let's go through them in turn. The first step is to write down the random variables of interest: cold C, allergies A, cough H, and itchy eyes I. The second step is to draw arrows between them using prior knowledge. Using really crude medical knowledge, I'll declare that a cough could be due to either a cold or allergies, whereas itchy eyes are generally due to allergies alone, not a cold. Step three is to make this quantitative by defining local conditional distributions, one per variable given its parents: p(c), since C has no parents; p(a), since A has no parents; p(h | c, a), since H's parents are C and A; and p(i | a), since I's parent is A. I won't bother writing the actual probabilities on the slide. Step four is to multiply all of these together to form the joint distribution over all the random variables; again, lowercase p is a local conditional distribution and blackboard P is the joint distribution.

So now I have a probabilistic database and can ask questions of it. Let's warm up with a slightly different question: what is the probability I have a cold, given just that I'm coughing? Here is the Bayesian network for medical diagnosis in the demo, with C, A, H, and I defined. I condition on H = 1 and ask for the probability of C, marginalizing out A. The demo runs the variable elimination algorithm, which you shouldn't worry about for now, and it produces a probability of C conditioned on H = 1 of 0.28. Now, what is the probability when I condition on both H = 1 and I = 1? Running it again, this is 0.13.

Again, you can rest assured that these calculations follow the laws of probability. One thing I want to point out is that this is another case of explaining away, slightly disguised. Here's how to think about it. I condition on I = 1: I observe that I have itchy eyes. Itchy eyes is only connected to A, so that observation boosts support for A; even though I don't condition on A directly, I'm getting more support for A.
And having more support for A now explains away the cough: A can explain the cough, which lessens the need for the cold. That's why the probability of a cold actually decreases compared to if I didn't have itchy eyes. You should be impressed by this kind of reasoning; it's quite subtle, even for this very small four-node Bayesian network, and even qualitatively it can be hard to predict what will happen to C. Just imagine a huge Bayesian network where you want precise answers: you should be glad we have Bayesian networks that can answer these questions based on the laws of probability.

So now let's define Bayesian networks formally. A Bayesian network is specified by a set of random variables, generically X1 through Xn; a directed acyclic graph over these variables, which specifies the dependencies qualitatively; and a local conditional distribution for each variable Xi given the parents of Xi. When you multiply all the local conditional distributions together, you get the joint distribution over all the random variables. Again, lowercase p denotes local conditional distributions and blackboard P denotes the joint distribution.

Now we can state probabilistic inference more formally as well. In probabilistic inference, you're given as input a Bayesian network specifying a joint distribution; this is your probabilistic database. You get some evidence, where a subset E of the variables has been observed to take on particular values little e, and you're interested in a set of query variables Q. Probabilistic inference produces the probability of Q conditioned on the evidence, and to be very precise, this means P(Q = q | E = e) for each value little q. For example, "if I'm coughing and I have itchy eyes, do I have a cold?" is expressed as the probabilistic inference question: what is the probability of a cold, conditioned on coughing and itchy eyes? The bad news is that computing this will turn out to be computationally intractable in general, but we'll soon see algorithms that tackle it approximately, or handle special cases of it.

In summary, we've introduced Bayesian networks. The basis is a set of random variables, which capture the state of the world, with directed edges between them representing directional dependencies. Quantitatively, we define a local conditional distribution for each variable conditioned on its parents, and we multiply all of those together to produce a joint distribution. This joint distribution is a probabilistic database where we can ask questions about the world; that's the process of probabilistic inference. Hopefully, through the alarm and medical diagnosis examples, you can appreciate the framework of Bayesian networks: it captures certain reasoning patterns, such as explaining away, which might be intuitive or counterintuitive, but you can rest well at night, because it's all based on the laws of probability. OK, so good night.
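Here is a minimal sketch of inference by enumeration for the burglary/earthquake alarm network from this module: the joint distribution is the product of the local conditional distributions, and queries are answered by summing the matching rows and normalizing. Note that exact enumeration like this is exponential in the number of variables, which is part of why inference is intractable in general.

```python
# Inference by enumeration on the alarm Bayesian network.
from itertools import product

EPS = 0.05

def joint(b, e, a):
    p_b = EPS if b == 1 else 1 - EPS     # p(b)
    p_e = EPS if e == 1 else 1 - EPS     # p(e)
    p_a = 1.0 if a == (b or e) else 0.0  # p(a | b, e) = [a = (b or e)]
    return p_b * p_e * p_a               # joint = product of local distributions

def query_b(evidence):
    """P(B = 1 | evidence), where evidence maps variable names to values."""
    num, denom = 0.0, 0.0
    for b, e, a in product([0, 1], repeat=3):
        row = {'B': b, 'E': e, 'A': a}
        if any(row[var] != val for var, val in evidence.items()):
            continue  # wipe out rows inconsistent with the evidence
        p = joint(b, e, a)
        denom += p
        if b == 1:
            num += p
    return num / denom  # normalize over the surviving rows

print(query_b({}))                # prior: epsilon = 0.05
print(query_b({'A': 1}))          # 1 / (2 - eps), about 0.513
print(query_b({'A': 1, 'E': 1}))  # explaining away: back down to 0.05
```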
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_11_Generalization_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to be talking about the generalization of machine learning algorithms. Recall that a machine learning framework has three design decisions. The first is the hypothesis class, which could be linear predictors or neural networks. The second is the loss function: in regression, this could be the squared loss; in classification, the hinge or logistic loss. If you average the losses over the training set, you get the training loss, which is the training objective we've been optimizing so far. And finally, we have the optimization algorithm, either gradient descent or stochastic gradient descent. All good so far. Now let's take a step back and be a little more critical: is the training loss actually a good objective to be optimizing?

Here's a little cartoon example that does really well on training loss; it's called rote learning. The rote learning algorithm just stores all the training examples and returns the following predictor: given an input x, search for x in the training set; if you can find it, return the corresponding y; otherwise, give up and segfault or crash. This learning algorithm minimizes the objective perfectly, getting zero training loss, but you can tell it's a bad idea because it doesn't get anything else right. This was an example of extreme overfitting.

Here are some less extreme examples of overfitting, in pictures. In classification, the green decision boundary tries really hard to separate the blue and red points, and does so successfully, getting zero training error. But you can intuitively sense that it's overfitting, and that perhaps the black decision boundary would be better. In regression, the red curve gets zero training loss by going through all the training points, but you can see it's overfitting; instead, maybe you should capture the broader trend with a simple line. In general, if you overly optimize the training loss, you risk overfitting.

So what is the true objective, if it isn't the training loss? To answer that, let's step back and ask what we're trying to do. Machine learning is just a means to an end: the end is a predictor that you launch into the world to make predictions on real inputs, and it just happens to be trained by a learning algorithm. How good is this predictor in the world? The answer is that it depends on how well it predicts on unseen future examples. So the true learning objective should be to minimize the error on unseen future examples. Sounds great; the one small problem is that we don't have access to the future, and if we can't see the examples, how can we do anything about them? So we often settle for the next best thing: a test set. The test set is a set of examples that you didn't use for training, so it serves as a surrogate for the unseen future examples. I make this distinction to stress that when you deploy a predictor into the world, it might encounter all sorts of crazy things; in the lab, all you have is a test set.
Here are some examples of less extreme overfitting, in pictures. Here's an example from classification. You can see that the green decision boundary tries really hard to separate the blue and the red points, and does so successfully, getting zero training error. But you can intuitively sense that it's overfitting, and that perhaps this black decision boundary would be better. In the case of regression, this red curve gets zero training loss by going through all the training points, but you can see that it's overfitting, and that maybe you should instead be capturing the broader trend using a simple line. So in general, if you try to overly optimize the training loss, then you risk overfitting. So then, what is the true objective, if it isn't the training loss? Well, to answer that question, let's take a step back and ask what we are really trying to do. Machine learning is just a means to an end. The end is a predictor that you're going to launch into the world to make predictions on real inputs; it just happens to be produced by a learning algorithm. So how good is this predictor in the world? The answer depends on how well it's able to predict on unseen future examples. So the true learning objective is to minimize the error on unseen future examples. Sounds great. The one small problem is that we don't have access to the future, and if we never see those examples, how can we do anything about them? So we often settle for the next best thing, which is a test set. The test set is just a set of examples that you didn't use for training, and it serves as a surrogate for the unseen future examples. I make this distinction because I want to stress that when you deploy a predictor into the world, it might encounter all sorts of crazy things, while in the lab, all you have is a test set. So what you're trying to do is make the test set as close to, and as representative of, what you will actually encounter in the real world as possible. So now we have an intuitive feeling for what overfitting is. Can we make it a little more precise? In particular, when does a learning algorithm generalize from the training set to the test set? Because that's what we've settled for. There is a way to make this mathematically rigorous, but here I just want to give you the framing for how to think about generalization. The starting point is f star. This is the ideal predictor: it predicts everything as correctly as you could hope for, and it lives in the family of all predictors. Of course, we can't get to f star. So what do we do? We do two things. We first define a hypothesis class, script F, and then we have a learning algorithm that finds a particular predictor, f hat, within this hypothesis class. Another predictor I'm going to talk about is g. This is also something you can't get hold of: it's the best predictor in the hypothesis class. So now we're interested in the difference between the error of the thing you have and the error of the thing you wish you had. Mathematically, that's the error of the learned predictor f hat minus the error of f star, and it can be decomposed into two parts. The first part is the approximation error, the difference between g and f star: the error of g minus the error of f star. This measures how good your hypothesis class is. The second part is the estimation error, the gap between f hat and g: the error of f hat minus the error of g. This measures how good the learned predictor is relative to the potential of the hypothesis class. You can verify the identity because we're just subtracting the error of g and adding it back, so the right-hand side equals the left-hand side. This somewhat trivial identity highlights these two quantities, approximation error and estimation error, and gives us a language for talking about the trade-offs in generalization. So let's get some more intuition about how approximation and estimation error behave as you increase the size of the hypothesis class. When the hypothesis class grows, the approximation error decreases. This is because the approximation error measures how good g is, and g is the best thing in the class; if you add more things, the best thing can only get better. In other words, you're taking a min over a larger set. The second thing that happens is that the estimation error increases when the hypothesis class grows, because it's harder to estimate something more complex: there are more functions among which the learning algorithm has to find the correct one, given limited data. There are ways to make all of this precise using tools from statistical learning theory, but I'll leave it as intuition for now.
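For reference, here is the decomposition just described, written out, with Err denoting the error of a predictor:

```latex
\underbrace{\mathrm{Err}(\hat{f}) - \mathrm{Err}(f^*)}_{\text{total gap}}
  \;=\; \underbrace{\mathrm{Err}(\hat{f}) - \mathrm{Err}(g)}_{\text{estimation error}}
  \;+\; \underbrace{\mathrm{Err}(g) - \mathrm{Err}(f^*)}_{\text{approximation error}}
```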
So, given these trade-offs, what are the ways we can control the size of the hypothesis class? We're going to focus our attention on linear predictors, and remember, each linear predictor corresponds to a particular weight vector, so effectively the size of the set of allowed weight vectors determines the size of the hypothesis class. One thing you can do is reduce the dimensionality of the set of possible weight vectors. Pictorially, imagine you had three features, so the set of weight vectors is a three-dimensional ball. If we remove one feature, we end up with a two-dimensional ball. Equivalently, this says that one of the features must have zero weight, which you can think of as a restriction on the set of values w can take. So how do you control the dimensionality in practice? The process is called feature selection, or feature template selection. You can do it manually by adding feature templates, seeing if they help, and removing them if they don't; you're trying to figure out by hand the smallest set of features that still gets you good accuracy. There are also ways to do this more automatically, such as forward selection, boosting, or L1 regularization. These are beyond the scope of the class, but they make the process less manual. One thing I want to stress is that the key quantity is the dimensionality, the number of features -- not the number of feature templates, and not the complexity of each individual feature. Imagine you write 1,000 lines of code to compute a single feature. As far as generalization is concerned, it's still a very simple hypothesis class, because it's just one feature. The second strategy is controlling the norm, or length, of the weight vector. Visually, if the set of weight vectors is bounded in length, you can shrink that length, and that results in a smaller ball, which is plainly a smaller set of weight vectors. This is probably the most common way to control the size of the hypothesis class, and there are two ways to do it. The first is regularization. Remember, the objective we didn't like was minimizing the training loss of w, because that can lead to overfitting. To regularize, you add a penalty term, lambda over 2 times the norm of w squared, where lambda is a positive number controlling the strength of the penalty. What the penalty does is say: let's try to minimize the training loss, but let's also keep the norm small, because we're minimizing the sum of the two. If we look at what gradient descent does on this objective, we can interpret it as follows. Gradient descent, remember, initializes the weights, iterates for a number of epochs, and performs an update: w minus eta, the step size, times the gradient of the training loss. The gradient of the penalty term is just lambda times w, and remember, we're subtracting eta times it. So if w is, let's say, (10, 10), then we subtract a multiple of that vector, moving the weights closer to 0 by an amount that depends on eta and lambda. The other way to control the norm is early stopping, which is really easy to explain. You run gradient descent: initialize w, repeat for a number of epochs, and perform the update. The only change is that you reduce the number of epochs you run for. That's it. This seems like a hack, and while you can develop some theory about it, the basic intuition is that when you start the weights at 0, that's the smallest possible norm.
And when you update the weights over a number of iterations, the norm of w is actually going to grow. It's not obvious that this always happens, but empirically it is true, generally. So by stopping gradient descent early, you're saying, don't let the norm of w get too big. So the lesson here is you're trying to minimize the training error. But you're not trying too hard because you're just going to call it quits after a while. OK, so let's summarize now. So we started by saying the training loss is not the true objective. The real objective is minimizing the loss on unseen future examples. Unfortunately, we don't have access to that. So we're going to settle for the loss on some test data which serves as a surrogate to the unseen examples. Then we studied approximation and estimation error as a way to understand generalization. And it's always just going to be a balancing act between fitting the training error and not letting your hypothesis class grow too big. And the mantra to end with is, perhaps, just keep it simple. So right now we've introduced a bunch of knobs for varying the size of the hypothesis class. Next, we'll see how to actually turn the knobs.
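To make the two knobs concrete, here is a minimal sketch with made-up data (the names train, lam, eta, and epochs are illustrative): gradient descent with an L2 penalty, where early stopping is simply lowering epochs:

```python
import numpy as np

def train(X, y, lam=0.0, eta=0.1, epochs=100):
    w = np.zeros(X.shape[1])                # start at the smallest-norm point, w = 0
    for _ in range(epochs):                 # early stopping: just make `epochs` smaller
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the average squared loss
        w -= eta * (grad + lam * w)         # the penalty contributes the lam * w term
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, -1.0]) + rng.normal(size=50)

for lam, epochs in [(0.0, 100), (1.0, 100), (0.0, 5)]:
    w = train(X, y, lam=lam, epochs=epochs)
    print(lam, epochs, round(float(np.linalg.norm(w)), 2))
# More regularization (bigger lam) or fewer epochs both yield a smaller norm of w.
```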
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_6_Non_Linear_Features_Stanford_CS221_AIAutumn_2021.txt
Hi. In this module, I'm going to show you how you can use the machinery of linear predictors that we've developed so far to get some non-linear predictors. So we're going to first focus on regression, and then later talk about classification. So remembering regression. We're given some training data. We have a learning algorithm that produces a predictor. And the first key question or design decision is which predictors is a learning algorithm allowed to choose from? That's the question of the hypothesis class. So for linear predictors, remember that the hypothesis class is defined to be the set of all predictors, f of x, equals some weight vector dot feature vector phi of x. And we allow the weight vector to range freely over all d-dimensional real vectors. OK. So if we take phi of x equals 1, x, like we did before, then we can get some lines. So if we set the weight vector to be 1, 0.57, then we get this line with intercept at 1 and a slope of 0.57. And here's a purple one with the intercept of 2 and a slope of 0.2. All is good. But what happens if we get data that looks like this? If you try to fit a line to it you won't be very happy with this. You really want to fit some sort of nonlinear predictor, something that can curve around to fit the data. So your first reaction might be to reach for something like neural networks or decision trees, something that's more complex. But let's see how far we can get it with just using linear predictors. So the key thing is that the feature vector can be arbitrary. So let's take the feature vector to be 1, x as before, but let's just add on an x squared term, just for fun. And so, for example, if we feed x equals 3, then we get the feature vector 1, 3, 9. Let's define some weights-- 2, 1, 0.2. And let's plot what that function looks like. And we get a nice curve. So that's a non-linear predictor. So it has an intercept of 2, a slope of 1, at the origin, and a curvature of negative 0.2. Here's another one-- 4, minus 1, 0.1. Here's an intercept of 4, a slope of minus 1, and a curvature of 0.1. And here's another one-- 1, 1, 0. So what does this one look like? This one just looks like a line because we've used a 0 weight on this x squared term, so it just reduces to a linear predictor. In general, we can define a family of all quadratic predictors that looks like this, by ranging the weight vector, really, over all three-dimensional vectors. So here is our first example of getting a non-linear predictor, in particular quadratic predictors just by changing phi. So one small note here is that in one dimension x squared is just a single feature. But if x were d-dimensional to begin with, then to get the full range of quadratic predictors, we would need d squared features, one for every xi, xj pair. That would be a lot. So that's one slight disadvantage of using the machinery of linear predictors to get non-linear predictors. Let's move on. So quadratic predictors are great, but they can only kind of vary smoothly. What happens if you want a function that looks like this? So here's an example of a piecewise constant predictor. And we can get this predictor, also, by just re-imagining what a feature vector is. So here is-- I'm going to define phi of x equals-- and the first-- I'm going to carve up the input space into a bunch of regions and define a feature to be whether x lies in that region or not. The first feature is test whether x is between 0 and 1, and the indicator function will return 1 if that's true and 0 otherwise. 
The second one tests whether x is between 1 and 2, and so on. So here's an example: if you punch in 2.3, you get 0 on all the features/regions except for this one. OK, so if I set the weight vector to 1, 2, 4, 4, 3, then I get this function, and notice that each weight is just specifying the function value on its region. So between 0 and 1 the function is at 1, then it's 2, then 4 and 4, and then 3, OK? Here's another one: 4, and then 3, 3, 2, 1.5. And again, in general, the set of predictors is w dot phi of x, where w can range freely. So this is the general technique of piecewise constant functions, which can give you expressive non-linear predictors by partitioning the input space. Again, a caveat is that everything looks nice in one dimension, but if x were d-dimensional and each dimension were carved up into B regions, then you would have B to the d different features, which is an exponential number of features -- a kind of no-go. So you can probably get the idea now, but let's do one more example. Suppose you're trying to predict a function with some periodic structure, like traffic patterns or sales across a year. So imagine you want a function that looks like this, OK? Let's see if we can hack together a feature vector that does that. So phi of x equals 1, x, and x squared, keeping the quadratic part, and now let's add a cosine of 3x, where the 3 is kind of arbitrary. So here's an example: if you punch 2 into x, you get this feature vector. If you define the weights one way, you get that red curve; define them another way, and you get the purple curve, and so on. So the key idea here is that you can really go wild: you can throw in any sort of features you want and get all sorts of wacky-looking predictors, all using the machinery of a linear predictor. So you might say, wait a minute, how were we able to do this? We have all this expressive non-linear capability, yet we haven't really changed the learning algorithm, and it's still supposed to be a linear predictor, right? Well, that's because the word linear is a little bit ambiguous here. Remember, the prediction is w dot phi of x; that's the score. And the question is, linear in what? Is the score linear in w? Yes, because the score is just some constant times w. Is the score linear in phi of x? Yes, because it's something times phi of x. And is it linear in x? The answer is no, because phi of x can be arbitrary; it doesn't have to be linear in x. The key idea behind non-linearity is that there are two ways of viewing it. From the point of view of gaining expressive non-linear predictors, this is great, because you can define phi of x however you like and get arbitrary non-linear functions of x. But from the point of view of having to learn such a model, it's also great, because the score is a linear function of w, and when you're learning, you take the gradient with respect to w, so life is great. In fact, the learning algorithm doesn't even care what phi is. It only looks at the data through the lens of phi of x; it doesn't know whether you gave it x and then applied phi, or gave it phi of x directly.
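Before moving on, here is a minimal sketch of the quadratic and piecewise-constant feature maps from above, using the weights from the lecture figures (the function names are illustrative):

```python
import numpy as np

def phi_quadratic(x):
    return np.array([1.0, x, x ** 2])

def phi_piecewise(x):
    # one indicator feature per region [i, i+1) of the input space [0, 5)
    return np.array([1.0 if i <= x < i + 1 else 0.0 for i in range(5)])

w_quad = np.array([2.0, 1.0, -0.2])         # intercept 2, slope 1, curvature -0.2
print(phi_quadratic(3.0))                   # [1, 3, 9], as in the lecture
print(w_quad @ phi_quadratic(3.0))          # 2 + 3 - 1.8 = 3.2

w_pc = np.array([1.0, 2.0, 4.0, 4.0, 3.0])  # each weight = value on its region
print(phi_piecewise(2.3))                   # only the [2, 3) indicator fires
print(w_pc @ phi_piecewise(2.3))            # 4.0
```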
OK, so now let's turn from regression to classification. The story is pretty much the same: you can define arbitrary features and get non-linear classifiers. But just as a quick review, remember that for linear classification in two dimensions, we defined the feature vector to be x1, x2, and the predictor takes the sign of the score. That sign gives you a decision boundary, which separates the region of the space labeled plus from the region labeled minus, OK? So now, what does non-linear mean? Well, if you look at f of x, because of the sign function it's already a non-linear function of x, so that notion doesn't really make sense here. Instead, non-linearity for classification means whether the decision boundary is linear or not. In particular, if we define the feature vector as x1, x2, then the decision boundary is just a line. So now, let's try to do something a little more interesting. Let's see if we can define a quadratic classifier. Suppose we wanted a classifier that looks like this: the decision boundary is a circle, where inside the circle we label plus, and outside we label minus. OK, so how are we going to do that? Well, let's start with the feature vector x1, x2, as we had before, and just tack on a quadratic term, x1 squared plus x2 squared, OK? Now, if you define the corresponding weight vector to be 2, 2, minus 1, then I claim this gives you exactly this decision boundary, which is a circle. There's some algebra here that I'm going to skip over, but you can rewrite the expression as follows: the same f of x equals 1 exactly when a certain quadratic form is less than or equal to 2. And what is that quadratic form? You might remember from your algebra or trigonometry days that it's the squared distance from the point (x1, x2) to the point (1, 1). So constraining that squared distance to be less than or equal to 2 gives the region of points within radius square root of 2 of the center (1, 1), which is exactly this disk, and everything else is classified as minus 1. So we successfully got the decision boundary to be a circle. OK, let me take one more step to reconcile this tension between linear in phi of x and non-linear in x. Remember, in the input space x, the decision boundary is a circle, but in feature space, the decision boundary is linear. Here is a cool animation that I found on YouTube which I think illustrates this really nicely; it comes from a slightly different setting, but the feature map is the same. Here we have points inside the circle and points outside the circle. In the ambient x space, they're not linearly separable. But we're going to apply the feature map, which, remember, adds a third dimension, x1 squared plus x2 squared. Now we're in feature space, which is 3D, and in 3D we can actually slide in a linear predictor, a plane, that separates the red and the blue points, and that separation induces a circle in the original 2D space. OK, to summarize: linear is ambiguous. We have a predictor, which in the case of regression is w dot phi of x; it's linear in w and in phi of x, but non-linear in x, and this is what allows us to get non-linear predictors using the machinery of a linear predictor. For regression, non-linearity refers to the predictor directly; for classification, it refers to the decision boundary. We also saw many types of non-linear features: quadratic features, piecewise constant features, and periodic features.
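And here is the circle example in the same style, a minimal sketch with the weight vector 2, 2, minus 1 from above:

```python
import numpy as np

# phi(x) = [x1, x2, x1^2 + x2^2] with w = [2, 2, -1] gives a circular
# decision boundary centered at (1, 1) with radius sqrt(2).

def phi(x1, x2):
    return np.array([x1, x2, x1 ** 2 + x2 ** 2])

w = np.array([2.0, 2.0, -1.0])

def f(x1, x2):
    return 1 if w @ phi(x1, x2) >= 0 else -1

print(f(1, 1))   # +1: the center of the circle
print(f(3, 3))   # -1: well outside the circle
```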
And again, you can kind of make up your own features for the application you have in mind. So next time someone on the street asks you about linear predictors, you first have to clarify, linear in what? OK, that's the end.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_8_Smoothing_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to talk about Laplace smoothing for guarding against overfitting. So let's review maximum likelihood estimation in Bayesian networks. Remember, last time we had an example of a two-variable network, the genre of a movie and the rating of a movie, where the joint distribution is given by the probability of the genre times the probability of the rating given the genre. We don't know these parameters, but we want to estimate them from data. Suppose we gather the five data points shown here. The way maximum likelihood estimation works is by counting and normalizing. One set of parameters is the probability of g: for that, I count the number of times each value of g shows up, and normalize. For the probability of r given g, I look at the number of times each configuration shows up, and then I normalize, conditioned on the value of g. So if you look at these estimates, you might notice that something funny is going on. The probability these parameters assign to a rating of 2, given that the movie is a comedy, is 0: that configuration doesn't show up in this row of the table, so it gets probability 0. Do we really believe this? Just because we didn't see an example of a comedy being rated a 2, are we licensed to give that outcome probability 0? That would be very closed-minded. This is a case where maximum likelihood has overfit to the training data. There's a very simple way to fix this, called Laplace smoothing, and the idea is that we're just going to add a lambda, some positive value, let's say 1, to each count. So let's do maximum likelihood with Laplace smoothing. The training data is the same as before. For each of these local distributions, we preload a 1 (lambda, more generally) into each position, and then go through the training data and count as usual. So I had 3 dramas and 2 comedies, and I normalize over these combined counts. Same with the probability of r given g, for each configuration: now I actually have to instantiate all possible configurations, load a 1 into each count, and then look at my training data and add to the counts. There are two d4's, one d5, one c1, and one c5. Given these counts, I normalize to get my probability estimates: I look at all the d rows, add them up, and normalize, and do the same for the c rows. So now, if we revisit our estimate of the probability of r equals 2 given g equals c, it was 0 before, but now it's 1 over 7, which is greater than 0. Because we smoothed the estimates, we place a little bit of probability even on the outcomes we never saw during training. So the key idea behind maximum likelihood with Laplace smoothing is as follows. For each local distribution, and for each joint assignment to a node and its parents, we simply add lambda to the count. Then we do maximum likelihood estimation as usual: go through the training data, increment the counts based on what we saw, and then normalize. And that's it. The interpretation we can place on Laplace smoothing is that it's as if we hallucinated lambda occurrences of each local assignment. For that reason, these lambda counts are sometimes called pseudocounts: they're not based on the data; they're made-up, virtual counts. You can think of it as pretending you saw some examples before seeing any data, and then doing maximum likelihood estimation.
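Here is what count-and-normalize with Laplace smoothing looks like as a minimal sketch, replaying the five lecture examples (d for drama, c for comedy) and reproducing the 1/7 estimate:

```python
from collections import Counter

data = [("d", 4), ("d", 4), ("d", 5), ("c", 1), ("c", 5)]  # the five examples
genres, ratings = ["d", "c"], [1, 2, 3, 4, 5]
lam = 1

# p(g): preload lambda into every count, then add the observed counts.
g_counts = Counter({g: lam for g in genres})
g_counts.update(g for g, _ in data)
p_g = {g: g_counts[g] / sum(g_counts.values()) for g in genres}
print(p_g)   # d: 4/7, c: 3/7

# p(r | g): one smoothed count per (g, r) configuration, normalized per g.
gr_counts = Counter({(g, r): lam for g in genres for r in ratings})
gr_counts.update((g, r) for g, r in data)
p_r_given_g = {}
for g in genres:
    z = sum(gr_counts[(g, r)] for r in ratings)  # 5 * lam + #examples of genre g
    for r in ratings:
        p_r_given_g[(g, r)] = gr_counts[(g, r)] / z

print(p_r_given_g[("c", 2)])   # 1/7, no longer zero
```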
So how large should lambda be? How much smoothing should we do, and how does smoothing interact with the data? There are two observations I want to make. The first is that the more you smooth, meaning the bigger lambda is, the closer you push the probability estimates toward the uniform distribution. For example, if I smooth with lambda equals 1/2 and I observe only a single d, then the probability estimates are 3/4 and 1/4, whereas if I smooth with lambda equals 1, the probabilities are 2/3 and 1/3, which is closer to a half each. The second observation is that no matter what you set lambda to, the data wins out in the end. So suppose we only ever see examples of dramas. If we smooth with lambda equals 1 and we see a single example of g equals d, then, again, the probability estimates are 2/3 and 1/3. But suppose we keep seeing dramas over and over again, so that we've seen 998 of them. Now, if we count and normalize, the probability estimate of drama is 0.999, which much more closely reflects that we've only ever seen dramas. So to summarize, we looked at Laplace smoothing for avoiding overfitting when estimating Bayesian networks. The key idea is that we preload the counts with a lambda, then go through the training data, add counts based on the data, and normalize. The smoothing pulls us away from zeros, toward the uniform distribution, but in the end, the smoothing gets washed out by more data. That's the end.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Markov_Decision_Processes_2_Reinforcement_Learning_Stanford_CS221_AI_Autumn_2019.txt
So this lecture is going to be on reinforcement learning. Um, I will, in the interest of time, skip the, the quiz. So, so the way to think about how reinforcement learning fits into what we've done so far is, you remember this class has this picture, right? So we talk about different models and we talk about different algorithms, inference algorithms to be able to predict using these models and answer queries, and then we have learning which is, how do you actually learn these models, right? So every type of model we go through, we have to kind of check the boxes for each of these [NOISE] pieces. So last lecture, we talked about Markov decision processes. This is a kind of a modeling framework, allows you to define models. For example, for crossing volcanoes or playing dice games or tram, taking trams. Um, what about inference? So what do we have here last time? We had value iteration and which allows you to compute the optimal policy and policy, uh, evaluation which eva- allows you to estimate the value of, uh, a particular policy. So these are algorithms that, um, will operate on MDP, right? And we sort of looked at these algorithms last time. So this lecture is gonna be about learning. Uh, I'll just put RL for now. RL is not an algorithm, it's a kind of, uh, refers to the family of algorithms that fits in, uh, this week. Um, but that's the way you should think about it. RL allows you to, um, either explicitly or implicitly estimate MDPs. And then once you have that, you can do all these, um, uh, inference algorithms to, uh, figure, uh, what the optimal policy is. Okay? [NOISE] So just to review. Um, so what is the MDP? Um, the clearest way- remember to think about it is- it's, um, in terms of a graph. So you have a set of states. So in this dice game, we have in and end. So we have a set of states. From every state, you have a set of actions coming out. So in this case, uh, stay and quit. Um, the actions take you to chance nodes, uh, where the- uh, you don't get to control what happens, but nature does and there's randomness. So out of these chance nodes are transitions. Each transition takes you into a state, it has some probability associated with it. So two-thirds in this case. It also has some reward associated with it which you pick up along the way. So naturally, this has to be one-third, four and remember last time, this was probability 1:10. Okay. So, um, and then there is, you know, uh, the discount factor which Gamma, which is a number between 0 and 1 tells you how much you value the future. Uh, for default, you can think about it as 1, uh, for simplicity. Okay. So this is a Markov decision process. Um, and what do you do with one of these things? [NOISE] We, um, have a notion of a policy and a policy, um, [NOISE] see, I'll write it over here. So a policy denoted Pi. Uh, let me use green. Um, so a policy, Pi, uh, is a mapping from states to action. It tells you a policy when you apply it, it says, "When I land here, where should I go? Should I do stay or quit?" If I land, well, I mean this is kind of a simple MDP. Otherwise, there'd usually be more states and for every state, blue circle will tell you where to go. Um, and when you run a policy, uh, what happens? Uh, you get a path, um, which I'm going to call an episode. So what do you do? You start in state S_0, that's- that will be in. In this particular example, um, you take an action a_1, let's say stay. Uh, you get some reward, in this case it will be 4. You end up in a new state, um, oops, S_1. 
And suppose you go back to end and, uh, then you take another action, maybe it's stay, reward is 4 again and, and so on, right? So this sequence is a path or in RL speak, it's, uh, an episode. Um, let's see. So let me- let me erase this comment. Uh, so this is an episode. Um, and until you hit the end state. Um, and, uh, what happens out of the episode, you can look at a utility. We're gonna denote U which is the discounted sum of rewards along the way, right? So if you, um, you know, stayed three times and then went there, you would have, uh, a utility of 4 plus 4 plus 4 plus 4, so that'll be 16. Okay? So the last lecture, we didn't really work with, um, the episodes into utility, um, because we were able to define a set of recurrences that, uh, computed the expected utility. So, uh, remember that we want to- you know, we don't know what's going to happen. So, uh, there's a distribution, and in order to optimize something, we have to turn it to a number, that's what expectation does. Um, so there's two, uh, concepts that we had from last time. One is the value function of a particular policy. So V_Pi of S is the expected utility if you follow Pi from S. What does that mean? That means, if you take a particular S, let's take, uh, n, and I put you there, and you run the policy, so stay and you traverse this graph, um, you will have different utilities coming out and the average of those is going to be V_Pi of S. Similarly, there's a Q value, um, expect the utility, if you first take an action from a state S and then follow Pi. So what does that mean? That means if I put you on one of these, uh, red chance nodes and you basically play out the game, um, and average the resulting utilities that you get, what number do you get? Okay? [NOISE] Um, and we saw recurrences that related these two. So V_Pi of S is, um, you, recurrence, the name of the game is to kind of delegate to some kind of simpler problems. So you first, uh, look up what you're supposed to do in s, that's Pi S [NOISE] and that takes you to a chance node which is s, Pi S of S, and then you say, "Hey, how much, um, utility am I going to get from that node?" And similarly from the, the chance nodes, you have to look at all the possible successors, the probability of going into that successor, um, of the immediate reward that you get along the edge plus the discounted, um, reward of the kind of a future when you end up in, um, S-prime. Okay. So any questions about this? This is kind of review of, uh, Markov decision processes from, um, last time. Okay. So now we're about to do something different. Okay. So, um, if you say goodbye to the transition and rewards, that's called reinforcement learning. So remember Markov decision processes. I give you everything here and you just have to find the optimal policy. And now, I'm gonna make life difficult by not even telling you, um, what rewards and what are transitions you have to get. Okay. So just to get a, kind of flavor of what that's like. Um, let's play a game. So, um, I'm going to need a volunteer. I'll, I'll give you the game, but this volunteer, you have to have a lot of, uh, grit and, uh, persistence, because this is not gonna be [NOISE] an easy game. You have to be one of those people that even though you're losing a lot, uh, you're still gonna not give up. Okay. So here's how the game works. Um, so for each round, r equals, uh, 1, 2, 3, 4, 5, 6, and so on. You're just going to choose A or B, um, red pill or blue pill, I guess. Um, and you, you move to a new state. 
So the state is here and you get some rewards which I'm gonna show here. Okay. And the state is 5, 0, that's the initial state. Okay. So everything clear about the rules of the game? [LAUGHTER] That's reinforcement learning, right? [LAUGHTER] We don't know anything about how. Okay. So any volunteers. Um, how about you in the front? Okay. Okay. Okay. Let me, let me fix that. A. A, A, [LAUGHTER] [NOISE] [LAUGHTER] B, B, A, [LAUGHTER] A. It's a MDP, so, uh, in that case that helps. B, B, B, B, B, just infinitely click B with an A, I guess. [LAUGHTER] It's like I'm losing a point every time. I warned you. [LAUGHTER] Okay. A, A, A, A, B, A, A, A, A, A, A. [LAUGHTER] Okay. [APPLAUSE] I'm glad this worked because last time it took a lot longer [LAUGHTER]. Um, but, you know, so what did you have to do? I mean you don't know what to try so you try A and B. And then hopefully you're building an MDP in your head, right? Yeah, right? [LAUGHTER] Okay. Just smile and nod. Um, and you have to figure out how the game works, right? So maybe you noticed that hey, A is, you know, decrementing and B isn't going up but then there's this other bit that gets flipped. So, um, okay you figure this out, and in the process you're also trying to maximize reward which, uh, apparently I guess wasn't - doesn't come until the very end because, um, it's a cruel game. [LAUGHTER]. Okay. So how do we get an algorithm to kind of do this and how do we think about, uh, us doing this? So just to kind of make the contrast between MDPs and reinforcement learning sharper, so Markov decision process is a offline thing, right? So you already have a mental model of how the work- world works. That's the MDP, that's all the rewards and the transitions and the states and actions. And you have to find a policy to collect maximum rewards. You have it all in your head, so you just kind of think really hard about, you know, what is the best thing. It's like "Oh, if I do this action then I'll go here" and, you know, look at the probabilities, take the max of whatever. So reinforcement learning is very different. You don't know how the world works. So you can't just sit there and think because thinking isn't going to help you figure out how the world works. Um, so you have to just go out and perform actions in the world, right? And in doing so you - hopefully you'll learn something but also you'll, um, you'll get some rewards. Okay so-so to maybe formalize the, um, the paradigm of RL. So you can think about it as an agent. That's, uh, that's you. Uh, and do you have the environment, which is everything else that's not an agent. The agent takes actions. So that sends action to the environment and the environment just send you back rewards and a new state. And you keep on doing this. Um, so what you have to do is figure out first of all how to - am I going to act. If I'm in a particular state S_t minus 1, what actions should I choose, okay? So that's one, um, one question. And then you're gonna get this reward and observe a new state. How -what, what should I do to update my mental model of the world, okay? So these are the main two questions. I'm going to talk first about how to update the parameters and then later in the lecture I'm going to come back to how do you actually go and, you know, explore it. Okay. So I'm not going to say much here but, you know, in the context of volcano crossing, um, just to kind of think through things, every time you play the game, right? You're gonna get some utility. 
So you take -so this is the episode over here. So a r s, you're gonna -sometimes you fall into a pit. Sometimes you go to a hut. Um, and based on these experiences, um, if I didn't -hadn't told you what any of the actions do and what's a slip probability or anything, how would you kind of go about, um, kinda solving this problem? That's a -that's a question. Okay so there's a bunch of algorithms. I think there's gonna be 1, 2, 3, 4. At least four algorithms that we're going to talk about with different characteristics. But they're all going to kind of build onto each other in some way. So first class of algorithms is Monte Carlo methods, right? So, um, okay. So whenever you're doing RL or any sort of learning, uh, the first thing you get is you just have data. Let's, let's suppose that you run even a random policy, you're just gonna -because in the beginning you don't know any better, so you're just going to try random actions and, uh, but in the process you're gonna see "Hey, I tried this action and it led to this reward and so on". So in a concrete example just to make, uh, things a little bit more crisp, it's gonna look something like in, uh, and then you take, uh, you know you did, um, let's see. Let me try to color coordinate this a little bit. Um, so you're in n, you do, um, stay. And then you get a reward of 4 and then you're back in n, you do a stay, and then you get 4 and then maybe you're done, you're out. Okay. So this is an example episode just to make things concrete. So this is s_0, a_1, r_1, s_2, s_1. I keep on incrementing too quickly. Um, a_2, r_2, s_3, okay? Okay so what should you do here? Alright so, um, any ideas? Model-based Monte Carlo. So if you have MDP you would be done. But we don't have MDP, we have data. So what can we do? [NOISE] Yeah. [inaudible]. Yeah. Let's try to build a MDP from that data. Okay. So, um, the key idea is estimate the MDP. Um, so intuitively, we just need to figure out what the transitions and rewards are and then we're done, right? Um, so how do you do the transitions? Um, so the transition says if I'm in state S and I take action A, what will happen? I don't know what will happen, but let's see in the data what will happen. So I can look at the number of times I went into a particular S prime and then divide it over the number of times I attempted any- this action from that state at all and just take the ratio, okay? And for the rewards, um, this is actually fairly, you know, easy, when I - because when I observe a reward, um, from S, A and S prime. I just write it down and say that's the reward, okay? Okay. So on the concrete example what does this look like? So remember now, here's the MDP graph. I don't know what the -the, uh, transition distribution or the rewards are. Um, so let's suppose I get this trajectory. What should I do? So I get stay, stay, stay, stay, and I'm out, okay? So first I, I can write down the rewards of 4 here, and then I can, um, estimate the probability of, you know, transitioning. So three out of four times I went back to in. One out of four times I went to end. So I'm gonna estimate as three-fourths, one-fourths. Okay. But then suppose I get a new data point. So I have stay, stay, end. So what do I do? I can add to these counts, um. So everything is kind of cumulative. So two more times, I'm sorry one more time I went into in and another time I went to end, so this becomes four out of six, three out of six. 
And suppose I see another time when I just go into end, so I'm just going to increment, uh, this counter and now it's three out of seven and four out of seven, okay? So pretty, um, pretty simple. Okay so for reasons I'm not going to get into, this process actually, you know, converges to the -if you do this kind of, uh, you know, a million times, you'll get pretty, um, accurate. Yeah, question? Yes, the question is, you don't know the rewards or the transitions, uh, but yes you do know the set of, ah, states and the actions. Set of states, I guess, you don't have to know them all in advance, but you just observe them as they come. The actions, you need to know because you- you are an agent and you need to play the game. Yeah, good question. Okay. So, yeah. Does this work with variable costs? Like, there is a probabilit- or variable reward around it. There's a probability you get some rewards for probability [inaudible]. Yeah. So the question is, does this work with variable, uh, rewards. Um, and if the reward is not a function of, um, sas prime, you would just take the average of the rewards that you see. Yeah. Okay. So- so what do you do with this? So after you estimate the MDP, so all you need is the transitions and rewards. Um, then now we have MDP. It might- it may not be the exact right MDP because this is estimated from data so it's not gonna match it exactly, um, but nonetheless, we already have these tools from last time. You can do value iteration to compute, um, the optimal policy on it and then you just, you know, you're done, you run it. On- in practice, you would probably kind of interleave the learning and the- the optimization but, uh, for simplicity we can think about it as a two-stage where you gather a bunch of data, you estimate the MDP and then you are off. Okay. There's one problem here. Does anyone know what the problem might be? You can actually see it by looking on the slide. Yeah. Well, with your based policy of all this thing, you'll never explore the quick branch of the world. Yeah, yeah. You didn't explore this at all, so you actually don't know how much reward is here. Maybe it's like, uh, you know, 100, right? So- so this is this problem, this kind of actually a pretty big problem that unless you have a policy that, uh, actually goes and covers all the- the states, you just won't know, right? And this is kind of natural because there can always be, you know, a lot of reward hiding under a kind of one state but unless you see it you- you don't- you just don't know. Um, okay. So this is a kind of key idea, key challenge I would say, in reinforcement learning is exploration. So you need to be able to explore, um, the state space. This is different from normal machine learning where data just comes in passively and you learn on your nice function and then you're- you're done. Here, you actually have to figure out how to get the data, and that's- that's kind of one of the, the key challenges of RL. So we're gonna go back to this- this problem, and I'm not really gonna, uh, try to solve it now. Um, for now you can just think about Pi as a random policy because a random policy eventually will just, you know, hit everything for, you know, finite, uh, small, uh, state spaces. Okay. So, um, okay. So that's basically end of the first algorithm. Let me just write this over here. So algorithms, we have model-based, um, Monte Carlo. And the model-based is referring to the fact that we're estimating a model the- in particular the MDP. 
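As a minimal sketch (variable names are hypothetical), the count-and-normalize estimation of the transitions might look like this, replaying the first episode:

```python
from collections import defaultdict

counts = defaultdict(int)     # counts[(s, a, s_next)]
totals = defaultdict(int)     # totals[(s, a)]
rewards = {}                  # rewards[(s, a, s_next)]: just write down what we saw

def observe(s, a, r, s_next):
    counts[(s, a, s_next)] += 1
    totals[(s, a)] += 1
    rewards[(s, a, s_next)] = r

def t_hat(s, a, s_next):      # estimated transition probability
    return counts[(s, a, s_next)] / totals[(s, a)]

# The first episode from the lecture: stay four times, ending up back
# in "in" three times and in "end" once, with reward 4 each time.
for s_next in ["in", "in", "in", "end"]:
    observe("in", "stay", 4, s_next)
print(t_hat("in", "stay", "in"), t_hat("in", "stay", "end"))  # 0.75 0.25
```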
The Monte Carlo part is just referring to the fact that we're using samples, uh, to estimate, um, a model or you're basically applying a policy multiple times and then estimating, uh, the model based on averages. Okay. So- so now, I'm going to present a- a different algorithm and it's called, uh, model-free Monte Carlo. And you might from the name guess what we might want to do is maybe we don't have to estimate this model, okay? And why- why is that? Well, what do we do with this model? Um, what we did was we, you know, uh, presumably use value iteration to, um, you know, compute the optimal policy. And the- remember this, uh, recurrence, um, for computing Q_opt, um, it's in terms of T and reward, but at the end of the day all you need is Q_opt. If I told you, um, Q_opt (s, a) which is, um, what is Q_opt (s, a)? It's the, um, the maximum possible utility I could get if I'm in, chance node sa and I follow the optimal policy. So clearly if I knew that, then I would just produce the optimal policy and I'd be done, I don't even need to know- understand the- the rewards and transitions. Okay. So with that, uh, insight is model-free learning, which is that we're just going to try to estimate Q_opt, um, you know, directly. Um, sometimes it can be a little bit confusing what is meant by model-free. So Q_opt itself you can think about as a- as a model, but in the context of MDPs in reinforcement learning, generally people when they say model-free refers to the fact that there's no MDP model, not that there is no, um, model in general. Okay. So, um, so we're not gonna get to Q_opt, uh, yet. Um, that will come later in the lecture. So let's warm up a little bit. Um, so here's our data staring at us. Um, remember- let's, let's look at a related quantity, so Q Pi. Remember what Q Pi is. Q Pi (s, a) is an expected utility if we start at s and you first take action a and then follow policy Pi, right? So in, um, in- I guess another way to write this is, um, if you are at a particular, uh, time step t, you can define u_t as the- the discounted sum of the rewards from that point on, which is, you know, the reward immediately that you will get plus the discounted part in the non- next time step plus, you know, a square discounted and then, uh, two time steps in the future and so on. And, um, what you can do is you can try to estimate Q Pi from this utility. Right? So this is the utility, uh, that you get out to predict your time steps. So suppose you do the following. So suppose you average the utilities that you get only on the time steps where I was in a particular state s and I took an action a. Okay. So you have a- let's suppose you have a bunch of episodes, right? So, um, here pictorially, um, uh, let's see. [NOISE] Here's another way to think about it. So I get a bunch of episodes. I'm gonna do- do some abstract, um, drawing here. Um, so every time you have you know, s, a shows up here, maybe it shows up here, maybe it shows up here, maybe it shows up here, you're going to look at how much reward do I get from that point on? How much reward do I get from here on? How much reward do I get from here on? And, um, average them, right? So there's a kind of, a technicality which is that if s, a appears here and it also appears, uh, after it then I'm not going to count that because I'm kind of- if I do both I'm kind of double counting. Um, in fact it works both ways, but just, conceptually it's easier to think about just taking of, uh, an s, a, uh, of the same you don't kind of go back to the same position. 
Okay, so let's do that on a concrete example. So Q-pi, let's just write it. Q-pi s, a is a thing where we're trying to estimate and this is, uh, a value associated with every chance node s, a. So in particular, I've drawn it here. I need a value here and, uh, a value here. Okay? So suppose I get some data, I stay and then I got- go to the end. Uh, so what's my utility here? It's not a trick question. 4. 4, yes. Um, sum of 4 is 4. Okay, so now I can say, "Okay it's 4." And that's my best guess so far. I mean, I haven't seen anything else, maybe it's 4. Um, so what happens if I play the game again and I get 4, 4? So what's the utility here? 8. 8? So then I update this to the average of 4 and 8, do it again, I get 16 then I average, uh, in the 16. Okay? And, um, and again, you know, I'm using stays so I don't learn anything about this, in practice you would actually go explore this and figure out how much utility you're seeing there. So in particular, notice I'm not updating the rewards nor the transitions because I'm model-free, I just care about the Q values that I get which are the values that sit at the nodes not on the edges. Okay, so one caveat is that we are estimating Q-pi not Q-opt. We'll revisit this, um, later. Um, and another, uh, thing to kind of note is the difference between what is called On-policy and Off-policy. Okay? So in reinforcement learning, you're always following some policy to get around the world right? Um, and that's generally called the exploration-policy or the control policy um, and then there's usually some other thing that you're trying to estimate, usually the- the value of a particular policy and that policy could be the same or it could be different. So On-policy means that, uh, we're estimating the value of the policy that we're following, the data-generating policy. Off-policy means that we're not. Okay? So um, so in particular is, uh, model-free Monte Carlo, um, On-policy or Off-policy? It's On-policy because I'm estimating Q-pi not Q-opt. Okay? That's On-policy. Um, and Off-policy , uh, what about model-based Monte Carlo? [NOISE] I mean it's a little bit of a slightly weird question, but in model-based Monte Carlo, we're following some policy, maybe even a random policy, but we're estimating the transition then rewards, and from that we can compute the- the optimal policy. So you can- you can think about is, um, Off-policy but, you know, that's maybe not, uh, completely standard. Okay. So any questions about what model-free Monte Carlo is doing? So let me just actually write. So what is model-based Monte Carlo is doing, it's trying to estimate the, uh, the transition and rewards and model-free Monte Carlo is trying to estimate, uh, the, um, Q-pi. Um, okay? And just as- as a note, I put Hats on, uh, any letter that is supposed to be a quantity that is estimated from data and that's what, you know, I guess statisticians do, um, to differentiate them between whenever I Q-pi, that's the true, uh, value of that, you know, policy which, you know, I don't have. Okay, any questions about model-free Monte Carlo? Both of these algorithms are pretty simple, right? You just, you know, you look at the data and you take averages. Yeah. So model free is not trying to optimize [inaudible] policy. So the question is is model-free, uh, making changes to a policy or is it a fixed policy? So- so this version I've given you is only for a fixed policy. The general idea of model-free as we'll see later, uh, you can also optimize the policy. Okay. 
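Before moving on, here is that utility-averaging as a minimal sketch, replaying the 4, 8, 16 example:

```python
from collections import defaultdict

sums = defaultdict(float)     # sum of observed utilities per (s, a)
visits = defaultdict(int)     # number of observations per (s, a)

def update(s, a, u):
    """u: the discounted sum of rewards observed from (s, a) onward."""
    sums[(s, a)] += u
    visits[(s, a)] += 1

def q_hat(s, a):
    return sums[(s, a)] / visits[(s, a)]

for u in [4, 8, 16]:          # utilities from the three episodes above
    update("in", "stay", u)
print(q_hat("in", "stay"))    # (4 + 8 + 16) / 3 = 9.33...
```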
So- so now what we're gonna do is we're gonna, uh, do theme and variations on, uh, model-free Monte Carlo. Actually where it's going to be the same algorithm but I just wanted to interpret it in kind of slightly different ways that'll help us, um, generalize it in the future. Yeah. Are there certain problems where model-free does better than model base? Are there certain problems where model-free is better than model base? So this is actually a really interesting question, right? So, um, you can show that if your model is correct, if your model of the world is correct, model-based is kind of the way to go because there'll be more sample efficient, meaning that you need fewer, uh, data points. But it's really hard to get the model correct in the real world. So recently, especially with, you know, deep reinforcement learning, people have gone a lot of mileage by just going model-free because then, um, jumping ahead a little bit, you can model this as a kind of a deep neural network and that gives you extraordinary flexibility and power without having to solve the hard problem of, you know, constructing the MDP. Okay. So- so there's kind of three ways you can think about this. So the first, we already talked about it, is, you know, this average idea. So we're just looking at the utilities that you see whenever you encounter an s and a, and you just average them. Okay. So here is an equivalent formulation. Um, and the way it works is that for every, um, s, a, u that you see, so every time you see a particular s, a, u, s, a, u, s, a, u and so on, I am going to perform the following update on. So I'm gonna take my existing value and I'm going to do a- what- what we call a convex combination. So, you know, 1 minus eta and eta sum to 1. So it's, you know, a kind of balancing between two things. Balancing between the old value that I had and the- the new utility that I saw. Okay? And the eta is set to be 1 over 1 plus the number of updates. Okay? So let me do a concrete example. I think you'll make this very clear what's- what's going on. So suppose my data looks like this. So I get, uh, 4, um, and then a 1 and a 1. Um, so these are the utilities, right? That's- that's a U here. I'm ignoring the s and a, I'm just assume that there are some- something. Okay, so first, uh, let's assume that Q-pi is 0, okay? So the first time I do, um, uh, let's see, number of updates, I haven't done anything so it's 1, um, 1 minus 0. So 0 times 0 plus 1 times 4 which is the first view that comes in. Um, okay, so this is 4, okay? So then what about the next data point that comes in? So I'm gonna to take, um, one-half now times 4 plus one-half times 1, which is the new value that comes in. And that is, I'm gonna to write it as 4 plus 1 over 2, okay? So now- okay just to keep track of things, this results in this, this results in this, and then now, um, I'm running out of space but hopefully we can- so now on the third one, I do, um, uh, two-thirds, so I have 4 plus 1 over 2 times two-thirds plus, um, actually I- I guess I should do two-thirds to be consistent. Two-thirds times 4 plus 1 over 2 which is the previous value that's sitting in Q-pi plus one-third times 1, which is a new value, and that gives me, um, 4 plus 1 plus 1 over 3, right? 
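In code, that bookkeeping is a one-liner per update; this minimal sketch replays the 4, 1, 1 example:

```python
q, n = 0.0, 0
for u in [4, 1, 1]:           # the utilities from the example above
    n += 1
    eta = 1.0 / n             # eta = 1 / (1 + number of previous updates)
    q = (1 - eta) * q + eta * u
    print(q)                  # 4.0, then 2.5, then 2.0 = (4 + 1 + 1) / 3
```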
So you can see what's going on here is that, you know, each, uh, each time I have this, you know, sum over all the tools I've seen over the number of times it occurs and this eta is set so that next time I kind of cancel out the old uh, count and I add the new count to the denominator and it kind of all works out so that at every time-step what actually is in Q-pi is just a plain average over all of the numbers I've seen before. All right, this is just kind of an algebraic trick to, um, get this original formulation, which is a notion of average, into this formulation which is a notion of, um, kind of you're trying to, um, take a little bit of the old thing and add a little bit of a new thing. Okay. So [NOISE], um, I guess I'm going to call this, uh, I guess, um, combination I guess. So the- that's the second interpretation. There's a third interpretation here which, uh, you can think about is, uh, in terms of stochastic gradient descent. So this is actually a kind of a, uh, simple algebraic manipulation. So if you look at this expression, what is this? So you have 1 times Q Pi, so I'm gonna pull it out and put it down here and then I'm gonna have minus eta times Q Pi, that's this thing and then I also have a eta, a u, so I'm going to put kind of minus a- u here and this is, uh, inside this parenthesis. So if you just, you know, do the algebra you can see that these two, you know, are equivalent. Uh, so what's the point of this? Right, so, um, where have you kind of seen this, uh, before, something like, maybe not, not this exact expression but something like that [NOISE]. Any ideas? Yeah, when you look down at a stochastic gradient descent in the context of, uh, the square loss for linear regression. Right, so remember, uh, we had these updates that all looked like kind of prediction minus target which was, you know, the residual and that was used to kind of update. So one way to interpret this is, uh, this is kind of implicitly trying to do stochastic gradient descent on the objective which is a squared, uh, loss on, uh, the, the Q Pi value that you, you, you're trying to set and, uh, u which is the new piece of data that you got. So think about in regression this is the y, this is, uh, y, you know, the- what the output is and you- this is the model that's trying to predict it and you want those to be close to each other. Okay? So, so those are kind of three views on basically, uh, this idea of averaging or incremental updates. Okay. So it'll become clear why, you know, I, I did this isn't just to, you know, have fun. Uh, okay. So now let's, uh, see an example of model- free Monte Carlo in action on this, ah, the volcano games. So remember here we have this, uh, you know, volcanic example and, uh, I'm going to, uh, set the number of episodes to let's say 1,000, let's see what happens. Uh, so here, okay. So what does this kind of, uh, uh, grid-like structure, a grid of triangles denote? So this remember is a state, this is 2, 1. So what I am doing here is dividing into four pieces which correspond to the four different action, so this triangle is 2, 1 north, this triangle is 2, 1 east and so on. Okay. And a number here is the Q Pi or value that I'm estimating along the way. Okay, so the, the policy I'm using, uh, is a complete random, uh, just move randomly, uh, and I run this 1,000 times and we see that the average utility is, uh you know, minus 18 which is, uh, obviously not great. Okay. Uh, but this is an estimate of how well the random policy is doing. 
So, you know, as advertised, you know, random policy you would expect to fall into a volcano quite often. Uh, okay. Uh, and you can run this and sometimes you get slightly different results but, you know, it's pretty much stable around minus 19, minus 18. Okay. Any questions about this before we move on to, uh, different algorithms? Okay. So model-based Monte Carlo we're estimating the MDP, model-free Monte Carlo we're just estimating the Q values of a particular policy for now. Okay. So, so let's revisit what model-free Monte Carlo is doing. So if you use the policy Pi equals stay for the dice game, um, you know, you might get a bunch of different, uh, trajectories that come out. These are possible episodes and in each episode you have a utility, you know, associated with it. Uh, and what model free Monte Carlo is doing is it's using these utilities, uh, to kind of update, uh, towards, uh, update u Q Pi. Right, so in particular like for example this you're saying, okay, I'm in, I'm in, uh, the in-state and I, you know, take an action and stay, when you're- what will happen? Well, in this case I got, you know, 16 and, uh, this case I've got 12. And notice that there's quite a bit of variance. So on average, this actually does the right thing. Right? So, um, just by definition, this is our unbiased, you know, estimate, if you do this a million times and average you're just going to get the right value which is, uh, 12 in this case. But the variance is here, so if you, for example if you only do this a few times, you're not going to get 12, you might get something, you know, sort of related. Uh, so how can we kind of counteract, uh, this, this variance? So the key idea, uh, behind what we're going to call bootstrapping is, is that, you know, we actually have, you know, some more information here. So we have this Q Pi that we're estimating along the way. Right? So, so this view is saying, okay, we're trying to estimate Q Pi, um, and then we're going to try to basically regress it against, you know, this data that we're seeing but, you know, can we actually use Q Pi itself to, uh, help, you know, reduce the variance? So, so the idea here is, uh, um, I'm going to look at all the cases where, you know, I started in and I take stay, I get a 4. Okay? So I'm going to say, I get a 4 but then after that point I'm actually just going to substitute this 11 in. Okay? This is kind of weird, right, because normally I would just see, okay, what would happen? But what happens is kind of random. On average it's going to be right but, you know, on any given case, I'm gonna get, like, you know, 24 or something. And the, the hope here is that by using my current estimate which isn't going to be right because if I were, if it were right I would be done but hopefully it's kind of somewhat right and that will, you know, be, you know, better than using the, the kind of the raw, rollout value. Yeah, question. You, you would update your current estimate at the end of each episode, correct? Uh, yeah. So the question is, would you update the current estimate, um, after each episode? Yeah. So all of these algorithms, I haven't been explicit about it, is that you've seen an episode, you update, uh, after you see it and then you get a new episode and so on. Yeah. Sometimes you would even update before you're done with the episode, uh. [NOISE] Okay. So, uh, let me show this, uh, what, um, this algorithm. So this is a new algorithm, it's called SARSA. Does anyone know why it's called SARSA? [inaudible]. Oh, yeah, right. 
So if you look at this, it's spelled S-A-R-S-A, and that's literally the reason why it's called SARSA. So what does this algorithm say? You're in a state s, you took action a, you got a reward r, you ended up in state s prime, and then you took another action a prime. So for every such quintuple that you see, you're going to perform this update. Okay, so what is this update doing? This is the convex combination that we saw from before, where you take part of the old value and merge it with the new value; in symbols, Q Pi hat of (s, a) gets (1 minus eta) times Q Pi hat of (s, a), plus eta times [r plus gamma times Q Pi hat of (s prime, a prime)]. So what is the new value here? It's looking at just the immediate reward, not the full utility — just the immediate reward, which is this 4 here — and you're adding the discount, which is 1 for now, times your estimate. And remember, what is the estimate trying to do? The estimate is trying to be the expectation of the rewards that you will get in the future. So if this were actually Q Pi and not Q Pi hat, then this would actually just be strictly better, because that would purely reduce the variance. But of course this is not exactly right; there's bias, so it's 11, not 12. The hope is that it's not biased by too much. Okay? So these would be the values that you're updating against, rather than these raw values here. Okay. Any questions about what SARSA is doing before we move on? So maybe I'll write something to try to be helpful here. So model-free Monte Carlo estimates Q Pi hat based on u, and SARSA still estimates Q Pi hat, but based on reward plus, essentially, Q Pi hat. I mean, this is not a valid expression, but hopefully it's some symbols that will evoke the right memories, okay? So let's discuss the differences. Whenever people say bootstrapping in the context of reinforcement learning, this is kind of what they mean: instead of using u as the prediction target, you're using r plus Q Pi hat. And this is like pulling yourself up by your bootstraps, because you're trying to estimate Q Pi, you don't know Q Pi, but you're using your estimate of Q Pi to estimate it. Okay. So u is based on one path; in SARSA, you're basing the update on the estimate, which is based on all your previous experience. Which means that model-free Monte Carlo is unbiased, but SARSA is biased. On the other hand, Monte Carlo has large variance, and SARSA has smaller variance. And one consequence of the way the algorithms are set up is that for model-free Monte Carlo, you have to roll out the entire game — basically play the game, the MDP, until you reach the terminal state, and only then do you have your u to update with — whereas SARSA, or any sort of bootstrapping algorithm, can update immediately, because all you need to see is a very local window of s, a, r, s prime, a prime, and then you can just update, and that can happen anywhere. You don't have to wait until the very end to get the value. Okay. So just as a quick sanity check: which of the following algorithms allows you to estimate Q opt — model-based Monte Carlo, model-free Monte Carlo, or SARSA? Okay. I'll give you maybe ten seconds to ponder this. [NOISE] Okay? How many of you need more time? Okay. Let's get a report. I think I didn't reset it from last year, so this includes last year's participants.
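While the poll is going, here is a minimal sketch of the SARSA update just described. This is my own illustration, not the course codebase; the dictionary representation and the default step size are assumptions.

from collections import defaultdict

Q = defaultdict(float)  # Q[(s, a)] = current estimate Q_pi_hat(s, a)

def sarsa_update(s, a, r, s_next, a_next, eta=0.5, gamma=1.0):
    # Bootstrapped target: immediate reward plus discounted estimate at (s', a').
    target = r + gamma * Q[(s_next, a_next)]
    # Convex combination of the old estimate and the new target.
    Q[(s, a)] = (1 - eta) * Q[(s, a)] + eta * target

# For example, in the dice game: in state 'in', stayed, got reward 4, stayed again.
sarsa_update('in', 'stay', 4, 'in', 'stay')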
Um, so model-based Monte Carlo, uh, allows you to get Q opt, right? Because once you have the MDP, you can get whatever you want. You can get Q opt. Model-free Monte Carlo, um, estimates Q Pi; it doesn't estimate Q opt and, um, SARSA also estimates Q Pi, but it doesn't estimate Q opt, okay? All right. So, so that's, uh, kind of a problem. I mean, these algorithms are fine for, uh, estimating the value of a policy, um, but you really want the optimal policy, right? In fact, these can be used to improve the policy as well because you can, um, do something called policy improvement, which I didn't talk about. Once you have the Q values, you can define a new policy based on the Q values. Um, but there's actually a kind of a more direct way to do this, okay? So, so here's the kind of the way mental framework you should have in your head. So there's two values: Q Pi and Q opt. So in MDPs, we saw that policy evaluation allows you to get Q Pi; value iteration get- allows you to get Q opt. And now, we're doing reinforcement learning, and we saw model-free Monte Carlo and SARSA allow you to get Q Pi. And now we need, I'm going to show you a new algorithm called Q-learning, that allows you to get Q opt. So this gives you Q opt, and it's based on reward, uh, plus, uh, Q opt, kind of. Okay. So this is going to be very similar to SARSA, and it's only going to differ by, essentially, as you might guess, the same difference between policy evaluation and value iteration. Okay. So it's helpful to go back to kind of the MDP recurrences. So even though MDP recurrences can only apply when you know the MDP. For deriving reinforcement learning algorithms, um, it's- they can kind of give you inspiration for the actual algorithm. Okay. So remember Q opt, what is a Q opt? Q opt is considering all possible successors of probability immediate reward plus, uh, future, um, returns. Okay. So the Q-learning is, it's actually a really kind of clever idea, um, and it's- it could also be called SARS, SARS, I guess, um, but maybe you don't want to call it that, and what it does is as follows. So this has the same form, the convex combination of the old, uh, value, uh, and the new value, right? So what is the new value? Um, so if you look at Q opt, Q opt is looking at different successors reward plus V opt. What we're gonna do is, well, we don't have all- we're not gonna be able to sum over all our successors because we're in our reinforcement learning setting, and we only saw one particular successor. So let's just use that as a successor. So on that successor, we're going to get the reward. So R is a stand-in for the actual reward of, I mean, is the stand-in for the reward, the reward function, and then you have Gamma times. And then V opt, I am going to replace it with, uh, the, our estimate of what V opt is, and what should the estimate of V opt be? So what relates V opt to Q opt? Yeah? I think the a that maximizes Q opt but [inaudible] V opt. Yeah. Exactly. So if you, define V opt to be the max over all possible actions of Q opt of s in that particular action, then this is V opt, right? So Q is saying, I'm at a chance node, um, how much, what is the optimal utility I can get provided I took an action? Clearly, the best thing to do if you're at a state is just choose the action that gives you the maximum of Q value that you get into, okay? So that's just Q-learning, so let's put it side-by-side with SARSA. Okay. So SARSA, these two are very similar, right? So SARSA, remember updates against r plus Q Pi? 
And now we're updating against r plus this max over Q opt, okay? And you can see that SARSA requires knowing what action I'm going to take next — a kind of one-step lookahead, the a prime that plugs in here — whereas for Q-learning, it doesn't matter what a prime you took, because I'm just going to take the one that maximizes, right? So you can see why SARSA is estimating the value of a policy: the a prime that shows up here is a function of a policy. And here, I'm insulated from that, because I'm just taking the maximum over all actions. This is the same intuition as for value iteration versus policy evaluation, okay? I'll pause here. Any questions? Q-learning versus SARSA. So is Q-learning on-policy or off-policy? It's off-policy, because I'm following whatever policy I'm following, and I get to estimate the value of the optimal policy, which is probably not the one I'm following, at least in the beginning. Okay. So let's look at the example here. So here's SARSA, run for 1,000 iterations. And like model-free Monte Carlo, I'm estimating that the average utility I'm getting is minus 20, and in particular, the values I'm getting are all very negative, because this is Q Pi, and the policy I'm following is the random policy. If I replace this with Q-learning, what happens? First, notice that the average utility is still minus 19, because I actually haven't changed my exploration policy; I'm still doing random exploration. But notice that the Q opt values are all around 20, right? And this is because the optimal policy — the slip probability is 0 here — is just to go down here and get your 20, okay? And I guess it's kind of interesting that with Q-learning, I'm just blindly following the random policy, running off the cliff into the volcano all the time, but I'm learning something: I'm learning how to behave optimally, even though I'm not behaving optimally. And that's the hallmark of off-policy learning. Okay. So, any questions about these four algorithms? Model-based Monte Carlo: estimate the MDP. Model-free Monte Carlo: estimate the Q-value of a policy based on the actual returns that you get, the actual sum of rewards. SARSA is bootstrapping, estimating the same thing but with a one-step lookahead. And Q-learning is like SARSA, except I'm estimating the optimal policy instead of a fixed policy Pi. Yeah. Is SARSA on-policy or off-policy? SARSA is on-policy, because I'm estimating Q Pi. All right. Okay, so now let's talk about encountering the unknown. So these are the algorithms. At this point, if I just hand you some data — if I told you, here's a fixed policy, here's some data — you can actually estimate all these quantities. But now there's the question of exploration, which we saw was really important, because if you don't even see all the states, how can you possibly act optimally? So which exploration policy should you use? Here are two extremes. The first extreme is to just set the exploration policy greedily. So imagine we're doing Q-learning now. You have this Q_opt estimate — not the true Q_opt, but an estimate of it. The naive thing to do is just use that estimate, figure out which action is best, and always do that action. Okay.
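Before getting into exploration, here is a minimal sketch of the Q-learning update we just compared against SARSA — again my own illustration, with the same hypothetical dictionary representation as the SARSA sketch above:

from collections import defaultdict

def q_learning_update(Q, actions, s, a, r, s_next, eta=0.5, gamma=1.0):
    # V_opt(s') is a max over actions, so no a' is needed -- this is what
    # makes Q-learning off-policy.
    target = r + gamma * max(Q[(s_next, ap)] for ap in actions)
    Q[(s, a)] = (1 - eta) * Q[(s, a)] + eta * target
    # SARSA would instead use the action actually taken next:
    #   target = r + gamma * Q[(s_next, a_next)]

Q = defaultdict(float)
q_learning_update(Q, ['stay', 'quit'], 'in', 'stay', 4, 'in')

The paragraph above ended by asking what happens if we always act greedily with respect to this Q_opt estimate; that's where the lecture picks up next.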
So what happens when you always act greedily like this is that you don't do very well. Why not? Because initially you explore randomly, and soon you find the 2. And once you've found that 2, you say, "Ah, well, 2 is better than 0, 0, 0, so I'm just going to keep on going down to the 2" — which is all exploitation, no exploration. Right? You don't realize that there's all this other stuff over here. So in the other direction, we have no exploitation, all exploration. Here you have the opposite setup: I'm running Q-learning, and as we saw before, I'm actually able to estimate the Q_opt values, so I learn a lot. But the average utility — the actual utility I'm getting by playing this game — is pretty bad. In particular, it's the utility you get from just moving randomly. So what you really want to do is balance exploration and exploitation. Just as an aside, or a commentary: I really feel reinforcement learning captures life pretty well. Because in life, you don't know what's going on, you want to get rewards, you want to do well, but at the same time you have to learn about how the world works so that you can improve your policy. So if you think about trying out restaurants, or finding a better way to get to school or to work, or in research, when you're trying to figure out a problem to work on: do you work on the thing that you know how to do and that will definitely work, or do you try something new, in hopes of learning something, even though it maybe won't get you as high a reward? So hopefully reinforcement learning is, in a sense, a metaphor for life. Okay, so back to concrete stuff. Here's one way you can balance exploration and exploitation. It's called the epsilon-greedy policy, and it assumes you're doing something like Q-learning, so you have these Q_opt values. The idea is that with probability 1 minus epsilon — where epsilon is, say, 0.1 — you're going to exploit: just take the action that looks best. And then once in a while, you're also going to do something random. Okay. This is actually not a bad policy for acting in life, either: once in a while, maybe you should just do something random and see what happens. So if you do this, what do you get? Okay, so what I've done here is I've set epsilon to start at 1 — so that's all exploration — and then a third of the way in, I change it to 0.5, and two-thirds of the way in, I change it to 0. Okay. If I do this, then I actually estimate the values really well, and I also get a utility which is pretty good, you know, 32. Okay. And this is also something that happens as you get older: you tend to [NOISE] explore less and exploit more. It just happens. Okay. All right. So that was exploration. Let's put some stuff on the board here. Do I need this anymore? Maybe [NOISE]. Okay. So, encountering the unknown: we talked about exploration and epsilon-greedy. And there are other ways to do this.
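As a sketch (mine, with an illustrative schedule matching the demo's 1, then 0.5, then 0), epsilon-greedy might look like this in code, assuming the same defaultdict-style Q from the earlier sketches:

import random

def epsilon_greedy(Q, actions, s, epsilon):
    # With probability epsilon, explore: pick a uniformly random action.
    if random.random() < epsilon:
        return random.choice(actions)
    # Otherwise exploit: pick the action with the highest estimated Q-value.
    return max(actions, key=lambda a: Q[(s, a)])

def epsilon_schedule(t, total):
    # Decay like in the demo: all exploration, then 0.5, then all exploitation.
    if t < total / 3:
        return 1.0
    elif t < 2 * total / 3:
        return 0.5
    return 0.0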
Um, Epsilon-greedy is just kind of the simplest thing that actually, you know, works remarkably, you know, well, um, even in the stabilized systems. So the other problem now I'm gonna talk about is, you know, generalization. Uh, so remember when we say exploration. Well, if you don't see a particular state, then you don't know what to do with this. I mean you think about it for a moment, that's kind of unreasonable because, you know, in life you're never gonna be in the exact same, you know, situation. And yet we are [NOISE] we need to be able to act properly right. So general problem is that a state-space that you, you might deal with in a kind of a real, ah, world situation is enormous. And there's no way you're going to go and track down every possible state. Okay. So this state space is actually not that enormous, um, but this is the biggest state space I could draw on the- on the screen. Um, and you can see that this, you know, the average utility is, you know, pretty bad here. Okay. So what can we do about this? So, um, I guess let's talk about a large state space. So this is the problem. So now this is where the second- the third interpretation of model-free Monte Carlo will come in handy. So let's take a look at Q learning. Okay. So in the context of, ah, SGD, looks like this. Right. So it's a kind of a gradient step where you take the old value and you minus eta and something that kind of looks like, ah, it could be a gradient, which is the residual here. Um, so one thing to note is that under the, the kind of formulations of Q learning that I've talked about so far, this is what we call a kind of rote learning. Right. Um, which if we were, you know, two weeks ago, we already said this is, you know, kind of ridiculous because it's, uh, not really learning or generalizing at all. Um, right now it's basically for every single state and action I have a value. If I have a different state and action, completely different value. I don't- I don't- there's no kind of, ah, sharing of information. And naturally, if I do that, I can't generalize between states and actions. Um, okay. So here's the key idea that will allow us to, um, actually overcome this. So it's called function approximation in the context of reinforcement learning. Uh, in normal machine learning, it's just called normal machine learning. Um, so the way it works is this, uh, so we're going to define this Q_opt s, a. It's not going to be a lookup table, it's going to depend on some parameters here w. And I'm gonna define this function to be w dot Phi s, a. Okay. So I'm gonna define this feature vector very similar to how we did it in kind of machine- in the machine learning section except for instead of s, a we had x. And now the weights are going to be kind of, you know, the same. Okay. So what kind of features might you have? Ah, you might have for example, um, features on, you know, actions. So these are indicator features that say, "Hey, maybe it's better to go east then to go west or maybe it's better to be in the fifth, ah, row or as it's good to be in a six column and, you know, things like that." So, um, you have a smaller set of features and you try to use that to kind of generalize across all the different states that you might see. 
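Here is a minimal sketch of that idea; the particular indicator features are made up for illustration, and the weight update at the end is the one the next paragraph walks through:

import numpy as np

def features(s, a):
    # Hypothetical indicator features; assumes the state is a (row, col) pair.
    row, col = s
    return np.array([1.0 if a == 'east' else 0.0,
                     1.0 if row == 5 else 0.0,
                     1.0 if col == 6 else 0.0])

w = np.zeros(3)  # one weight vector shared across all states and actions

def q_opt_hat(s, a):
    return w.dot(features(s, a))  # Q_opt(s, a; w) = w . phi(s, a)

def q_learning_with_fa(s, a, r, s_next, actions, eta=0.1, gamma=1.0):
    global w
    target = r + gamma * max(q_opt_hat(s_next, ap) for ap in actions)
    # Gradient-style step: residual times feature vector.
    w = w - eta * (q_opt_hat(s, a) - target) * features(s, a)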
So with the features, what this looks like is actually the same as before, except now we have something that really looks like the machine learning lectures: you take your weight vector and you do an update of the residual times the feature vector. Okay. How many of you find this familiar from linear regression? Okay. All right. So just to contrast: before, we were just updating the Q_opt values, and the residual was exactly the same, but there was nothing over here. Now we're updating not the Q-values but the weights; the residual is the same, and the thing that connects the Q-values and the residual with the weights is the feature vector. Okay. As a sanity check on dimensions: the update to w is a vector, the residual is a scalar, and the feature vector phi(s, a) is a vector with the same dimensionality as w. Okay. And if you want to derive this, you can think about the implied objective function as simply linear regression: you have a model that's trying to predict a value from an input (s, a). So (s, a) is like x, and this target is like the y that you're trying to predict, and you're just trying to make the prediction close to the target. Yeah, question. Is the eta, you said that [inaudible] [NOISE] Yeah, good question. So what is this eta now? Is it the same as before? When we first started talking about these algorithms, eta was supposed to be one over the number of updates and so on. But once you get into the SGD form like this, it just behaves as a step size, and you can tune it to your heart's content. All right. So that's all I will say about these two challenges. One is, how do you do exploration? You can use epsilon-greedy, which allows you to balance exploration with exploitation. And the second is that for large state spaces, epsilon-greedy isn't going to cut it, because you're not going to see all the states even if you try really hard, and you need something like function approximation to tell you about new states that you fundamentally haven't seen before. Okay. So, summary so far: online learning. We're in an online setting — this is the game of reinforcement learning; you have to learn and take actions in the real world. One of the key challenges is the exploration-exploitation trade-off. We saw four algorithms, and there are two key ideas here. One is Monte Carlo, which is that from data alone, you can basically use averages to estimate quantities that you care about — for example, transitions, rewards, and Q-values. And the second key idea is bootstrapping, which shows up in SARSA and Q-learning: you're updating towards a target that depends on your estimate of what you're trying to predict, not just the raw data that you see. Okay. So now I'm going to step back a little bit and talk about reinforcement learning in the context of some other things. There are two things that happened when we went from binary classification, which was two weeks ago, to reinforcement learning now, and it's worth decoupling them. One is state and one is feedback. So the idea of partial feedback is that you can only learn about actions you take. This is kind of obvious in reinforcement learning: if you never quit in this game, you never know how much money quitting would get you.
And the other idea is the notion of state, which is that new rewards depend on your previous actions. So if you're going through a volcano, you're in a different situation depending on where you are in the map. So you can draw a two-by-two grid, where you go from supervised learning, which is stateless and full feedback. There is no state — every iteration you just get a new example, and the prediction doesn't depend on the previous examples. And it's full feedback because in supervised learning, you're told what the correct label is; even if there might be 1,000 labels, for example in image classification, you're just told which one is the correct label. And now in reinforcement learning, both of those are made harder. There are two other interesting points in the grid. What is called multi-armed bandits you can think of as a warm-up to reinforcement learning, where there's partial feedback but no state, which makes it easier. And you can also have full feedback but with state: that's structured prediction. For example, in machine translation you're told what the translation output should be, but clearly actions depend on previous actions, because you can't just translate words in isolation, essentially. Okay. So one of the things I'll just mention very briefly is that deep reinforcement learning has been very popular in recent years. In reinforcement learning, there was a lot of interest in the '90s, when a lot of the algorithms and theory were developed. Then there was a period where not as much happened, and since I guess 2013, there has been a revival of reinforcement learning research. A lot of it is due to DeepMind, where they published a paper showing how to use reinforcement learning to play Atari from raw pixels. This will be talked about more in a section this Friday. But the basic idea of deep reinforcement learning, just to demystify things, is that you are using a neural network for Q_opt. Essentially, that's what it is. And there are also a lot of tricks to make this work, which are necessary when you're dealing with enormous state spaces. So one of the things that's different about deep reinforcement learning is that people are much more ambitious about handling problems where the state spaces are enormous. Here, the state is just the pixels, so there's a huge number of states, whereas before people were in what is known as the tabular case, in which the number of states is something you can enumerate. So there are a lot of details here to care about. One general comment is that reinforcement learning is really hard, because of the statefulness and also the delayed feedback. So when you're thinking about final projects — I mean, it's a really cool area, but don't underestimate how much work and compute you need. Some other things I won't have time to talk about: so far we've talked about methods that are trying to estimate the Q-function. There's also a way to do without the Q-function and just try to estimate the policy directly — methods like policy gradient. And there are methods like actor-critic that try to combine these value-based methods and policy-based methods.
These are used in DeepMind's AlphaGo and AlphaZero programs for crushing humans at Go. This will actually be deferred to next week's section, because it's in the context of games. There's a bunch of other applications: you can fly helicopters, play backgammon — TD-Gammon in the early '90s was actually one of the early success stories of using reinforcement learning, in particular self-play. For non-games, reinforcement learning can be used to do elevator scheduling, manage data centers, and so on. Okay. So that concludes this section on Markov decision processes, where the idea is that we are playing against nature. Nature is random, but kind of neutral. Next time, we're going to play against an opponent that's out to get us. So we'll see about that.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
General_Intro_Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021.txt
OK, hello everyone. I'm Dorsa Sadigh. And I am one of the co-instructors of CS221. And today, I'm here with Percy Liang and our group of CAs here to teach the first lecture of 221. So with that, I'd like to first, before getting started in the details of the class, just introduce the team. So I am Dorsa Sadigh. I am Assistant Professor in Computer Science. And I-- this is the fifth time I'm teaching CS221, the second time I'm teaching it online virtually. I think it is the third time I'm co-teaching with Percy Liang. So really excited to start the quarter with you guys here. And my research, just a little bit about my research. My work is on in robotics and in AI. And, in general, I'm very interested in the interaction between robotics and AI agents with humans and with other agents, other intelligent agents. So if these topics are of interest, come to office hours. We'd love to chat about that and talk about it offline in general, too. My co-instructor today here is Percy Liang. I think I saw Percy somewhere. Yeah, I'm here. Hello, everyone. I'm Percy. I'm the co-instructor. And I think this is my ninth or tenth year teaching 221. It's really been interesting how AI has evolved since when I first started talking about it. My research interests are in machine learning and natural language processing, thinking about how to make systems more robust and trustworthy. Recently, I've been really fascinated by what we've been calling foundation models, models such as GPT-3, and BERT, and DALL-E, and happy to discuss that more later in the class. All right, thank you Percy. All right. So what are we going to be talking about today? So our plan for today is to talk a little bit about some of the course logistics and then some of the course contents, like what are we going to actually cover as part of this class. Then we'll have some icebreakers. So you'll have a five-minute break at home. We'll discuss things about AI. And then toward the end of the class, I'm going to talk a little bit about the history of AI, a brief history of AI, and then what AI is today and what are some risks and benefits of AI and how we should think about it moving forward. So that is our plan for today, OK? So before I start, also if there are any questions, feel free to put questions on Zoom Chat or raise your hand and the CAs can try to kind of like answer the questions or ask the questions throughout as I give the talk. All right, so let's talk about course logistics. So we are going to have a set of activities as part of the class. And last year, when we had to go virtually because of COVID, we started experimenting with a few different ways of changing and reformatting the class. And some of them worked really well and some of them actually didn't work so well. So based on the experience that we have from last year, we've decided to switch up the activities a little bit more. And also, like some of these changes really make sense to do also throughout like normal quarter even if you are not virtual. So one of these changes is going from traditional lectures to something that we are calling modules. So these are basically pre-recorded modules and lectures that are broken into small bite-sized chunks. So on every topic that we are going to cover in this class, we're going to have a lecture of 10 to 20 minutes in a module really, that goes over that topic. And these are pre-recorded. 
And we're going to release the modules for that week on the Monday of the week so that you have time to watch these, kind of based on your own schedule and when it makes sense for you to watch these lectures. So it's a little bit easier to manage these bite-sized chunks. That's one reason that we are going towards these modules. Also, these are pre-recorded. And you are probably going to use the same recordings as last year so we have more time to spend with you guys doing our lecture times, kind of in a flipped format, OK? So that's the modules. Then, in addition to that, during our normal lecture time, you're going to have two types of activities. So on Mondays, you are going to have faculty chats. So these are going to be on Zoom. And they're basically small group discussions with faculty on AI-related topics. So they're going to be basically six, like 25-minute sessions, so every Monday from 1:30 to 3:00 PM. Percy and I, each of us, are going to have a Zoom room, like 30 minutes each, for each session. And you're going to be assigned for one of these faculty chats. So this is actually mandatory, to attend at least like one of these faculty chats. And the reason we are doing this is traditionally when you have a large AI course, like we have 300 something enrollment. And it's really hard to get to know you guys and actually talk to you. And sometimes it becomes really difficult to know the faculty when you're in some of these larger classes. And this is really like-- what we were trying to do here is really to get to know you through some of these faculty chats and discuss some AI-related topics, some of the more recent material or research material around newer topics, like foundation models that Percy is going to talk about, or some of the topics around robotics, autonomous driving, and so on. So I'll talk about this a little bit later in the talk, the exact format of it. And there is a little bit of homework to do beforehand, before coming to these sessions. But the idea is we'll have these in-person-- not in-person, sorry-- virtual faculty chats on Monday's lecture time. And one other point that I want to mention is that if you have conflicts during lecture times, you want to make sure that you wouldn't have conflict for the time that you're assigned. So it does actually-- because that is mandatory. So make sure that you actually don't have conflicts during lecture times. So the second bit is problem session. So this is going to be on Wednesdays. So on Wednesdays, again during class time, we are going to have these problem sessions. And they're kind of like traditional sections, except for we have changed them a little bit based on feedback that we got last year. And during these problem sessions, we are going to have the CAs work out practice problem. So this could be previous year's quizzes, or previous year's exam questions, or basically just problems that can help you get started on your homework, or basically get ready for some of the exam questions later on. So I do recommend going through these problem sessions. They're incredibly useful to get your hands dirty on some of the topics that you're learning next week. And, again, this is on Zoom, on Wednesday's class time. All right, so what else? We are going to also have homework parties. So homework parties used to be very popular when they were in-person. And last year, I think it was a little bit more difficult to make it happen. 
But eventually, people realize that homework parties matter a lot because that's a very good place to show up and work with other people on your homework problems, get started on some of the more challenging problems together, study together. And the CAs will be there to answer questions. So these homework parties are going to happen on Nooks, which is a platform that we started using last year. Again, all the information about Zoom, Nooks, like all these links is on the CS221 website. So the details of everything I say today is also on the website, OK? All right, so beyond homework parties, we also have office hours. So the CAs have office hours. And there are two types of office hours. So there are a set of in-person office hours, that I was talking about earlier. These are limited. But they're going to be in the basement of Huang. And they are group-based. There is no sign-up required. But basically, there is a group of students and the CA in the basement of Huang. In addition to that, we also-- the majority of office hours are actually going to be virtual. And these are by appointment. So we used Calendly last year. And part of it was just these queues were becoming too long to handle. So now you can make an appointment for CA office hours. It's going to be a one-on-one office hour. And these office hours will happen on Nooks, OK? In addition, our CA office hours are two categories. So we have separated general office hours from homework office hours. So if you have homework questions, you should just go through the dedicated homework office hours. But if you have more general questions about the course, if you're thinking about your project, if you're thinking about general AI questions, you should go to the general office hours or the faculty office hours, basically. All right, and then the final thing that we have is faculty office hours. So Percy and I, both of us, will have 15-minute office hours weekly. And the schedule for this is also on the website. So you can take a look at that. Again, it's one-on-one. It's is going to be virtual. And you can sign up for them beforehand and then come and chat with us. And, again, all details is on the website. All right, so let me just like see if there are any questions here. Anyone have any questions? I should look at Chat. OK, let me just quickly look at Chat. So when do we know which faculty chat to attend? So we will be in touch about this soon. So actually on that, there is-- I'll talk about this in a little bit. But, basically, there was a survey that you need to fill out by-- I should know this-- by Wednesday. Correct me if I'm wrong. By Wednesday? Yeah, that's right, by Wednesday. OK, by Wednesday. Yeah. So that is basically to get your preferences on what faculty chats you would like to go. And then we will also assign you after that to specific faculty chats, OK? All right, do we sign up for CA and faculty office hours through Nooks? So no, we will use Calendly for signing up. But we will use Nooks for answering questions, all right, OK. So if there are no more questions, let me go to the next slide. So now let's talk a little bit about prereqs. So this is a question that oftentimes comes up. So what are the prereqs for the class? So you need to have some programming background. It would be good if it is in Python. So these are some courses that are prereqs for this course. In addition to that, it's a good idea to have some math background. So discrete math, CS 103, is a prereq. 
And in addition to that, it would be a good idea to have some background in probability and linear algebra, so 109 and math 51. OK? But, in general, we want you guys to have general familiarity with these and have some mathematical rigor and general familiarity with probability, linear algebra, discrete math, these types of topics. We are not really expecting a very specific type of knowledge. For example, like in linear algebra, you'll learn about eigenvectors. But we don't really require the knowledge of eigenvectors in this class. So they're not specific topics that we are looking for. But, generally, you want to know math, you want to know programming, and coming to class with that knowledge. And the reason is that this course is also fairly fast-paced. So you don't want to spend your time like learning Python or learning math. Through this class, your Python programming is going to improve. Your math knowledge is going to improve. But you don't want to spend time learning these backgrounds. And you want to really spend all of your time learning AI. So if there are gaps, some people do catch up. It is possible. But, again, you want to spend your time learning AI. So we kind of leave it to you guys to decide and move forward. And you might ask, OK, how do I decide? So we have a couple of things online that you can take a look at. So we have a set of modules that we recorded actually last year. And these are prereq modules. So these kind of provide refreshers of some of these topics. So definitely take a look at some of these prereq modules. And that gives you kind of like a good sense of what is required to know to come into this class. In addition to that, the first homework is based on foundations. And the first homework really gives you a good idea of what to expect as part of this class in terms of, again, programming and math knowledge coming in. So take a look at these before deciding if you want to skip a prereq or not. But, in general, again, I do think it's a good idea to have these backgrounds coming into this class. All right, so let's then talk about grading a little bit. So grading is fairly straightforward. So we are going to have a set of homeworks. And that's 55% of the grade. And we are going to have two exams. So that is 40% of the grade. The faculty chats, we actually count participation as part of the grade. So that is 5%. And then projects, we are going to make that optional this year. So it's going to count towards extra credit. And then if you contribute to Ed-- so we're going to use Ed this quarter as opposed to Piazza. If you contribute to Ed, that is also going to give you some level of extra credit, OK? And, in general, you can take the class for letter grades or pass/no pass. That is also your choice, basically. So now let's talk about each one of these components a little bit in more detail. So in terms of homeworks, we have eight homeworks. And these eight homeworks are a mix of programming questions and written questions. And the programming problems are mainly focused on a specific application. So like for example, we might be looking at blackjack as a game or we might be looking at-- or Pac-Man as a game of Pac-Man, or various types of topics like car tracking. So there's a particular application that is used as part of the programming component of the homeworks. And these programming components, they're autograded. And then there are a set of basically public and private tests. So you should definitely try out these public tests first. 
Make sure that you test this thoroughly because the grading is very strict. It's based on autograding. And you don't see all the tests. So that's kind of like the point that I was trying to make here. And then in addition to that, you have seven total late days. And you can use maximum of two per homework. The reason for that is we want to release the homework solutions. So you can't use more than two late days for homeworks. OK, so that is homeworks. So that's our plan for homeworks, the usual. We'll go with that. One other point that I wanted to add on homeworks is we are adding an extra addition to every single homework, which is an ethics component. So an ethics component is going to be added to all of our homeworks. It's a new addition that we are having this quarter. And we're also going to significantly change some of these homeworks to incorporate an ethics question into them. So we're trying to incorporate that throughout the class, throughout these homeworks. So that would be also in addition to consider this quarter. All right, so moving forward with exams. So this quarter-- last year, we decided to do a set of quizzes. This year we're not going to do the quizzes. That didn't really-- like students don't really like it every week. So instead, we were going to have two exams. And the point of the exams is really to test the ability of your knowledge and working in new problems. It's not really to know facts that we are teaching. It's more about your knowledge of AI and if you can actually apply that to new problems. And all these problems are going to be written, so no coding. And you should take a look at past exams to get a sense of how these problems look like and what is the format of them. So each one of the exams is going to be a hundred minutes. And then these exams are going to be open book. So we actually have the dates for these exams already. They are going to be released in a 24-hour window. So they're going to be released on-- the first one is going to be released on October 29, at 3:15 PM. And then it's going to be to due the next day, at 3:15 PM Pacific time. And similarly, we have exam 2 on December 8. 3:15 PM Pacific time is the time, OK? So we have these dates. If you have major conflicts about any of these dates, you should let us know by October 8, which is week 3 of the class, OK? In addition, we will not have any late dates for these exams, again because we need really solutions. We need to make sure it works for everyone. So no late dates gets applied to the exams. And, of course, no collaboration on the exam. So please do not talk about the exams on Ed. So like if you have done it, you're done with it, but there is still like time left within that 24-hour window, do not post anything about the exam on Ed, OK? All right, so that was exams. And the last component, that is mandatory as part of the class, is the faculty chat participation. So as I was mentioning earlier, the goal of this is really discussing topics around AI, with related, and topics around AI. So fill out this initial survey, that I was talking about, by Wednesday. So that way we can start scheduling these. You're going to be assigned in session. Again, six sessions run in parallel on Mondays. And this is during class time on Mondays. So make sure that you can actually make that time. And then you should prepare before these sessions. And these sessions are going to be on different topics. 
So if they are on specific research topics, like robotics, autonomous driving, ethics, robustness, foundation models, we often have some related material that we released beforehand. Sometimes these are-- we had a set of fireside chats last year. It could be like that fireside chat to watch and-- or talks last year. It could be basically that talk to watch beforehand. So come to the session a little bit prepared. And we can talk about these topics. We also have another set of topics that are really about more thinking about academia versus industry, graduate school, thinking about how you read a research paper. So some of these other components that are maybe not necessarily a particular research area. And you'll have some material for these. Reading material to have beforehand so you come in, again, prepared So the way we are looking at participation as part of these faculty chats is as you come in, you should introduce yourself. And you should also share a little bit about your thoughts or your goals for that session. So you should actively participate in that 25-minute session. And you kind of expect that when you're thinking about participation and grading participation during these faculty chats, OK? You will not be tested on the material that you are discussing on the faculty chats also. I just wanted to mention. All right, do we need to attend one faculty chat session to get-- yes. So you will be assigned to one faculty chat. If there is room, you can actually attend more faculty chats. We are potentially going to have more room based on the number of students who are enrolled. But we will be in touch on what are the availabilities and if you can attend more than one faculty chat. But, yeah, you will be assigned one. OK, so let me talk about the project also real quick. So the project this quarter is going to be optional. This is what we did last year too, because, of course, it's virtual. And we thought it could be-- it might be a little bit more difficult to find a team and work together. But regardless, like a lot of students did the project last year. And there are a lot of interesting ideas and projects that came out of that. And it was really exciting to see like so many cool projects, like during that quarter too. So I do recommend that you guys look into this closely, even though it is optional. So the idea is you want to choose a task where you can actually apply some of the ideas that you have learned as part of this class and use those techniques for that particular task. It's a little bit open-ended. You need to decide what that task is. But that's also the beauty of it. Like, you can pick anything and apply it to all the AI techniques that we are learning for that. The idea is that you can work in groups of up to four people. And then you also have a set of [AUDIO OUT].. Like, you need to fill out a project interest form. There's a proposal, a progress report. And there's a video on final reports that you need to do. So if you decide to do the project, and actually get the extra credit, you should do all these different-- you should actually pass through all these milestones and finish the project. Again, the task is completely open. But there are a set of well-defined steps that we expect you guys to have throughout the course, throughout the course for this project. So this includes things of the form of defining the task, or implementing your baselines, and oracles, and things of those form. 
Or having a literature review, thinking about what your evaluation metrics are. And you will have a CA assigned to you. If you decide to do a project, you will have a CA assigned to your group. And your CA can also walk you through some of these different components that you want to have as part of your project. And in addition to that, one other thing that we have added is a mandatory check-in meeting with your CA. So this is a 15-minute mandatory check-in meeting with your CA. We think this is really useful to make sure that you are-- you keep up with the project if you decide to do it. And, in general, if you want to think about ideas for what to do for your project or if you have some idea and you want to discuss it, definitely come to office hours. You can come to Percy or my office hour or like the CAs' office hours and discuss some of these questions, OK? All right, and the last point that I want to mention on logistics is the honor code. So I want to spend a little bit of time talking about this because this is really important. You guys don't want to deal with it. We don't want to deal with it. So let's just talk about it and get it out of the way. So especially this quarter, given that things are online, we do want you guys to collaborate. We do want you guys to discuss together, learn together, like think about problems together. But the write-up and the code needs to be independently. So you need to write your code. You need to write up your solutions, like independently, based on your own thoughts and your own ideas. So please do not share code. Please do not share your write-ups with others. And don't look at anyone else's write-up or code even if it is on internet and you found it. Do not look at these things. And then-- yeah, do not post it online. Like, if you're proud of your code, you shouldn't post it on GitHub. Do not do that. And, in general, when you are debugging, try to look at like input-output behavior. You could be like going to homework parties and debugging your solutions with other people. And really just look at input-output behavior. Don't look at each other's code. And that way you'll be safe. But I do want to emphasize that we do run MOSS periodically. And this will automatically detect if there is matching between codes. And please do not do that. MOSS is really good. And every year we have a number of cases. And sometimes like we run these things mid-quarter. So I want to also emphasize that. And then you don't want to [AUDIO OUT] things. Like, yeah, you don't want to go through these things, like mid-quarter. And it's, again, something that we don't want to deal with. You don't want to deal with it. Let's just not do it. We are also changing a number of homework questions. And we're adding-- like we are adapting things to make this a little bit easier on everyone. All right, oh, and the last point I want to make is on communication. So we're going to use Ed this quarter. So, in general, if you have any questions, the best idea is to put a public Ed post. That way, of course, staff, students, everyone can see it. And you have a broader group of people who can answer that question. And you can help-- like, probably other people are thinking about that question too. So that's kind of a best way of communicating with us. If there is a private question, make a private Ed post. And that way, the course staff can see it. And, for example, if you have a question, that can give away answers, it's a good idea to post that as a private post. 
And, in general, if there are sensitive matters that you want to discuss or even accommodations, you should email this particular email address. This goes to only four people. This goes to Percy and I, and Shiori, and Fei, so our student liaison and our head CA. So if you have any sensitive matter, you should just send an email to this email list. That goes to the four of us. In addition to that, you are going to have periodic surveys. You also have a welcome survey, already on Canvas. So please take a look at that and fill that up. And that way, we can start getting some feedback. And, again, as the course is virtual, we would love to get more feedback, periodic feedback, throughout the course. So it will be great to give us feedback, tell us what works, what doesn't work. So we can adapt. And, again, all these details, everything I've said so far, is on the course website. And with that, I can take any questions about logistics. I know I covered quite a bit on logistics. Anyone who wants to just ask a question, that's probably easier too because then I-- So for an exam, if we're looking for a clarification, should we post that privately to Ed, not at all, or should we email far it? If we're assuming that it's not something that was supposed to give anything away, it's just supposed to be a clarification of what's intended by the question? You should post a private post on Ed. That only goes to the course staff and CAs. And also, my related question, so the in and out-- input/ output that was a good guideline. But then as far as the decoding of Python, what about the use-- in terms of basic routines, obviously not trying to copy code wholesale, but as far as using things like Stack Overflow, and others, as far as for-- obviously, there's all the various things that can be used as sort of like virtual tutorials and various things you want to accomplish with the Python that you're writing. I assume that as long as it's just little routines, that it's not a problem. The problem is when it's starting to be that you're taking somebody's idea wholesale. Yeah. And, in general-- yeah. So try to write things like yourself basically. Like, when it comes to writing the code part of it, try to get ideas. You can discuss the idea of it with other people. Or you can look at online forms for ideas. But when it comes to writing the code, try to just write it yourself. If there are specific things that you are not sure of, you should go to the CA office hours or go to our office hours, basically ask us that specific instance. And we can talk about it then, yeah, OK. All right, so let's go forward. So now, let's talk a little bit about the course content, OK? So what are we discussing, what is AI, what are we going to be covering in this class? So, in general, we are in AI, right? We are interested in solving realistic, complex problems, that have a lot of messiness and uncertainty. And if you think about the complex problem-- let's say routing cars in a city, with a lot of like complex settings that is happening in that city, how do you go about solving that question, let's say the question is just routing the vehicles, right? Like, you're not going to start just writing out code for it, right? Like, you don't from start from scratch and from not really having a formalism to just directly code. That seems pretty difficult. And, in general, there's a gap between the code, or the software, or the hardware, that in general we develop as AI scientists, as engineers. And what is happening in reality? 
Like, what is the real world. It's messy, with all the messy and complex-- messiness and complexities that exist. And really what AI and what this course is trying to do is to bridge that gap, to figure out how we can take some of these real world problems and make it simpler in a way that is manageable, so we can develop algorithms and code for it. And for that, we have a paradigm in this class that we like to follow. And this paradigm has three different core components, three pillars. And these three pillars are the modeling, inference, and learning pillars. And I'm going to talk a little bit about these. So the idea is we take a very difficult problem, we model it, and then we develop inference algorithms for it. And then throughout this process, there could also be-- the model could have a set of unknowns and use learning throughout to actually make our models better. So let me try to make this a little bit more clear, moving forward. So let's go back to this real-world problem that we are talking about, routing vehicles like in a city, OK? So this is a big problem. And, in general, I would like to have a formalism. So what modeling does is it takes that complex problem and it tries to come up with a formalism, a the mathematical way of thinking about that problem. And modeling, just by definition, is lossy, right? I'm not going to get all of that complexity that exists in the real world, right? All models are wrong, but some are useful, right? So under that idea, of course, you're going to lose some of this complexity. But we are still going to come up with something that is somewhat useful for the goal that we have. Maybe I would like to find the shortest way of getting from one road to another road. And if that is my goal, I can basically maybe model this real-world problem as a graph problem, where I have a bunch of edges and vertices. And my vertices here are maybe my locations, like in the world. And then the edges are maybe the roads that connect them, OK? So this would be a graph model that represents that real-world problem. So you're going to spend quite a bit of time in this class talking about modeling. And then once I have a model, then I can start asking questions about that model, right? I can ask, well, what is the shortest path of getting from one node to another node or what is the most scenic path of getting from one region to another region? Or I might have different objectives, that I would like to be able to optimize. And inference is really a way of trying to solve that problem and give us an answer to some of these questions that we have here, OK? So how do we make predictions? How do we figure out what is the right path to take in this problem, is kind of like the thing that inference lets us get. And then, finally, the last pillar is learning. And the way I want you to think of learning is that if you think about that model, right, we are-- oftentimes, you are not going to be able to write everything in that model, with all the complexities. But all we can do is we can write a skeleton for what we were trying to do, maybe a graph. But that graph, you might not know what are the weights on the edges. Like, we might not be given the edge values here because that would be like too complicated to write. Or we might just not have it, like periods at the beginning. So we often have a model without parameters. And the goal of learning is to look at data. And from data, complete this model and add these parameters that were unknown at the beginning. 
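As a toy illustration of the three pillars (the place names and numbers here are invented for the example): the model is a weighted graph, inference answers a shortest-path question about it, and learning would fill in edge weights that weren't given up front.

import heapq

# Model: locations as nodes, roads as edges, travel times as weights.
graph = {'home': {'gates': 5, 'oval': 2},
         'oval': {'gates': 2},
         'gates': {}}

# Inference: answer a question about the model, e.g., shortest travel time.
def shortest_time(start, goal):
    frontier = [(0, start)]
    best = {}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in best:
            continue
        best[node] = cost
        for nbr, weight in graph[node].items():
            heapq.heappush(frontier, (cost + weight, nbr))
    return best[goal]

print(shortest_time('home', 'gates'))  # 4, going via 'oval'
# Learning: if the travel times were unknown, we would estimate them from
# observed trips (e.g., by averaging), completing the model before inference.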
So what learning is really doing is taking the complexity that we have and writing the specification, writing the model. And it takes that away and puts that into data. And the fact that there is data, I can take that data. And based on how good that data is or based on what I can learn from that data, I would be able to complete my model and have a better model, that it can actually do inference over. So we're going to have also learning throughout this class as a pillar, in every section that you'll talk about in this class, OK? All right, so, modeling, inference, and learning are the three pillars that keep appearing throughout every week of this class. But what is our course plan? So our course plan is really to talk about different types of models, starting from low-level intelligence all the way to high-level intelligence. And we are going to basically go over a variety of these models. But before we start talking about these, we're going to actually spend two weeks talking about machine learning. So this is just to get some of the basics of machine learning out of the way. Also, machine learning in general is a very powerful tool, that has been quite impactful in the field of AI. So it's a good idea to learn some of these ideas in machine learning at the beginning, so then we can actually also use it throughout the class when we are thinking about learning, modeling, and inference throughout the class based on these different types of models that we will discuss throughout, OK? So next week, and the week after, is basically going to be modules on machine learning, OK? And just spending a little bit of time on what is machine learning, so, again, the role of machine learning is to take data, right? And from that data, try to generate these models that were at the beginning incomplete, but now we can actually use them and then we can actually incorporate the data, the information that's in that data, in the model. And the idea of it is really moving from code to data. So, again, adding-- or moving the complexity that exists in the code to complexity that exists in data. And one other point about machine learning is it kind of requires faith, right? So we have some data. Based on that data, we build the model. There is no reason on the surface that model could work in a new scenario, that it could generalize to new settings. And then we'll talk about the idea of generalization quite a bit, like when is it that the model could generalize to new settings? Like, if I trained it on some set of data, let's say like house prices, how can I make sure that this model could actually work in a new setting for a new house? And that kind of goes back to this question of generalization? And we'll spend time on that. All right, so that was machine learning. So as we are talking about machine learning in the first two weeks of the class, we're also going to spend a little bit of time talking about reflex-based models. So these are kind of the lowest level of intelligence in terms of the modeling paradigms that we'll be talking about throughout the course. An example-- and here's an example of a reflex-based model. So I'm going to ask you guys, what is this animal, OK? Maybe you can put in the Chat what is this animal, or just like chat, what was it? It was a zebra, right. And very quickly, you were able to quickly figure out that you just saw a zebra here, right? And this is really based on your reflexes. This is really like an example of what a reflex-based model could do. 
Other examples of reflex-based models are things like linear classifiers or deep neural networks. The reason I'm calling these low-level intelligence is that you're not doing a lot of reasoning here. We basically have a feedforward model, and you're not doing much computation before responding and saying, well, that was a zebra. We were just able to quickly say that that was a zebra. And these reflex-based models are the most common form of models in machine learning. They're often fully feedforward, with no backtracking and no reasoning about what was going on; you're just evaluating your model. Deep neural networks are an example of this. Linear classifiers are an example of this. And that's the reason that, as we discuss machine learning, we're also going to spend a little bit of time thinking about reflex-based models, where inference is extremely simple: we just call the model. So moving on, one level higher than reflex-based models, we are going to talk about state-based models. For state-based models, we're going to talk about three types: search problems, MDPs, and adversarial games. So what are state-based models? Here is an example. Let's say you want to play a game of chess, and you want to figure out what the next move of white should be. This is not the same as detecting whether that animal was a zebra. This is actually a lot more difficult than that. You actually need to sit down and do a little bit of reasoning: figure out what state of the world you are in, and figure out how the world is going to evolve. So there is this notion of sequences of actions and sequences of states that come after each other, A leading to B, and so on. And this brings us to the idea of state-based models. It has many applications, including in games. If you think about games like chess, Go, Pac-Man, StarCraft, these are all examples where state-based models are a good way of modeling them. They show up in robotics all the time. You can think about motion planning, getting a robot arm to move from one location to another; we oftentimes use state-based models as a way of formalizing that. They also show up in natural language generation, machine translation, image captioning. They're basically all throughout AI. And they're a really good way of thinking about what the sufficient information is that you need to know at the current time, how that should evolve at the next time step, and then adding an ordering of going from this state to the next state. So we'll talk about three types of state-based models. We'll talk about search problems, where you can actually control everything: you have a state, and based on the action that you take, you end up in a new state. Then we'll talk about Markov decision processes, which make these search problems a little bit more difficult by adding uncertainty that comes from the world. Basically, Markov decision processes are state-based models where you're playing against nature. Nature gives you some probabilities; you can think of it as coin tosses, and based on that you proceed. So there is this notion of uncertainty. And then we'll spend some time talking about adversarial games, where you're not playing against nature, which is probabilistic.
Instead, you're playing against another opponent, which is also very intelligent and is making decisions against you. We'll go over these different types of state-based models a little bit. OK, and as part of the homework for state-based models, we are going to play around with the game of Pac-Man. I just want to show a quick demo of this game here. Yeah. So you're going to play around with the game of Pac-Man and basically come up with algorithms for Pac-Man so it can avoid ghosts and eat these food pellets. It will be kind of fun to play around with. And let me go back to my slides. Yeah. As you are thinking about Pac-Man and state-based models in general, the things to think about are: what is the notion of a state, how do you transition from one state to another, and how can you come up with a strategy, a policy, that can get you from one point to another, so you avoid the ghosts, eat your food pellets, and so on? These are some of the questions that we're going to talk about when we discuss state-based models. So moving forward, we're then going to move to the next level of intelligence. And that is variable-based models. An example of a variable-based model is something like a game of Sudoku. If you think about state-based models, there's a notion of a sequential ordering of states: you have to go through A to get to B, right? If you think about moving through a graph to solve shortest path, you actually need to see city one, and then after that see city two. But there is a set of problems that don't really require that type of strict ordering. Think of the game of Sudoku. In Sudoku, you have a bunch of numbers, and you want to make sure that you can fit the digits 1 through 9 in every row and column. And the order in which you put in these numbers doesn't really matter: you can put the 9s first or you can put the 1s first, and that really doesn't matter. [AUDIO OUT] So this brings us to the idea of variable-based models, where you don't have this strict ordering. And because of that, we can do something a little bit more intelligent that helps us come up with better algorithms in these settings. We will talk about two types of variable-based models. We'll talk about constraint satisfaction problems. These are settings where we have hard constraints. Sudoku was an example, right? You have a hard constraint: you have to actually fit 1 through 9 into your board. Or scheduling-type problems: a person cannot be at two places at the same time. So there are very strict relations between the different variables that exist. But in addition to that, we also have Bayesian networks, which take those hard constraints and make them soft. You have soft dependencies when you think about Bayesian networks, unlike, let's say, Sudoku or scheduling. So an example: let's say you want to track an airplane, or you want to track a car. If you're tracking your car, you might have a set of sensors on that car, and those sensors are noisy. They're not going to give you the ground truth of where the car is. You also know that your car cannot teleport, right? So where it was at the previous time step and where it is at the next time step are related to each other.
And based on these different types of relations between where the car is and where the car is going to be, and the fact that you have these noisy sensor readings, you can have soft dependencies between these variables. And that allows you to estimate where the car is. That's a topic we'll discuss through Bayesian networks. We'll have homework on this that will actually be about tracking cars. That will be exciting. All right, and then finally, the last component that we are going to discuss is logic. This brings us to the highest level of intelligence. And as an example that uses logic, we can think of a virtual assistant. What do you want from a virtual assistant? You oftentimes want to tell it some information, give it some information. And you also want to be able to ask it some questions and expect it to respond. And maybe you would want to use natural language as a way of communicating with this virtual assistant. So we actually go through a virtual assistant example as part of the homework on logic. I want to show you a quick demo of that here. Let me see if I can bring this to the right window; let me see if I can bring up the terminal. There you go. So this is a tool that we are going to play around with during the logic homework. Basically, it's a virtual assistant. You can give it information, and you can ask it for information. So let me try some examples. Let me actually give it some information. I'm going to say: Alice is a student, OK? Hey, Dorsa, could you zoom in a little bit? Oh, yeah. How's that? Better. Let me see if we can zoom in a little bit more. OK. All right, there you go. So I told it Alice is a student, and it just learned something. I can ask it now: is Alice a student? What should it say? It says yes, right, because I just told it Alice is a student. I'm going to ask: is Bob a student? What should it respond? It should probably say I don't know, right, because how would it know? I don't know. Let me give it some facts. I can say: students are people, OK? Then I can say: Alice is not a person. Let's see what it says in response to that. OK, it says: I don't buy that. So it understands contradiction. I told it students are people, a generalization, and this new statement contradicts that, and it understands that. I can say: Alice is a person. Let's see what it says. It confirms it already knew that, OK? Let's give it some more information. Alice is from Phoenix, maybe, let's do that. Alice is from Phoenix. I learned something. We can say: Phoenix is a hot city. I learned something. I can say: cities are places. I learned something. Let me make this a little bit smaller so you can see this, OK? If it is snowing, then it is cold. So I'm going to teach it this kind of if-then statement. I learned something. OK, I'm going to ask it if it's snowing. What should it say? Well, it doesn't know, so it says I don't know. So I'm going to give it more information. If a person is from a hot place, and it is cold, then she's not happy. So I'm giving it this more complicated if-then statement. I learned something. I'm going to ask: is it snowing? What would it say? Well, it doesn't know. I don't know. I can say: Alice is happy. Now, I'm going to ask: is it snowing? What should it say? It says it's not snowing.
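Under the hood, the homework system does full logical inference, including the contradiction detection and the contrapositive reasoning you just saw. Just to give a flavor of the tell-and-ask loop, here is a toy sketch with propositional rules; the facts, the symbol names, and the simple forward-chaining inference are all made up for illustration and are much weaker than the real system.

# A toy knowledge base of definite clauses: (set of premises, conclusion).
rules = [
    ({"alice_is_student"}, "alice_is_person"),   # students are people
    ({"it_is_snowing"}, "it_is_cold"),           # if it is snowing, it is cold
]
facts = {"alice_is_student"}  # what we have told the system so far

def forward_chain(facts, rules):
    # Inference: keep applying rules until no new facts can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def ask(query):
    return "Yes" if query in forward_chain(facts, rules) else "I don't know"

print(ask("alice_is_person"))  # Yes
print(ask("it_is_cold"))       # I don't know (nothing says it's snowing)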
So this was just an example going over this virtual assistant. You'll play around with it in the logic module, and you'll be thinking about this idea of giving information, asking for information, and the logical relationships between them. This will be something that we will work on; I just wanted to quickly show this demo. One thing to notice here is that we were giving it heterogeneous information. This was very different from giving it millions of pictures of cats and training a neural network. I was giving it very heterogeneous information, and the system was able to reason about that information in a very deep way. It was making these deep connections, and I could ask questions of it. And that's very exciting, being able to have these types of deep interactions between the symbols that you are providing it. All right, so that brings us to the end of this module, where we were thinking about different types of models. As a quick recap: in this class, we are going to talk about models from low-level intelligence all the way to high-level intelligence: reflex-based models, state-based models, variable-based models, and logic. And in each one of these modules, we are going to follow the usual paradigm that we have. We'll talk about modeling in each one of these settings. Then we'll talk about inference: what are the different inference algorithms that we can use? And in addition to that, we'll talk about learning: how can we use data to learn and improve our models for each one of these components? That paradigm keeps showing up throughout the class, basically every week. All right, so now let's spend five minutes and have an icebreaker. I'm going to put us in groups of four, in breakout rooms. During those five minutes, let's just try to introduce ourselves to each other. And just to set the stage, let's have a question to discuss. The question is: what is the biggest benefit of AI, and what is the biggest risk of AI? When you come back, try to put that in the chat, and we will discuss and go from there, OK? So let's spend five minutes in breakout rooms. So it was good talking to some of you during the breakout rooms. Maybe you can put some of your responses, some of the things that you discussed, in the chat, as a way of discussing them. Also, a quick thing: don't direct message me on the chat. I'm barely looking at it. If there are any questions, please ping the CAs or email me later with questions on Ed. OK, all right, yeah, any biggest benefits? Biggest risks? Anyone have thoughts? OK. Thank you. We're starting to get some. Improving people's lives; tangible applications; mutual assistance; biggest risks, ML fairness, ethics. Yeah, these are all great points. We can talk about them a little bit later. But now, let's continue with the next segment, where I want to give a little bit of a history of AI. And this is going to be brief. I don't want to go into too much detail, and it's not going to be a complete history. But I think it's a good idea to talk about this because it gives a little bit of insight into why we are where we are today and how things took shape over time. So if you want to do the history of AI, you can really go back to 1950.
In 1950, Alan Turing put out his landmark paper on computing machinery and intelligence. In this paper, Turing asks the question: can machines think, OK? And he came up with his answer, which was the imitation game, which you might know as the Turing test. The idea of the Turing test is that a machine passes if it is able to fool a person into thinking that it's actually a human, OK? And this paper was really foundational, in the sense that it allowed us to start thinking about intelligence a lot more carefully and actually try to formalize it in a better way. It was one of the first works, one of the foundational works, that started formalizing this idea of intelligence. Now, we might argue about whether the Turing test is a good test or not in terms of measuring intelligence, and we might have various opinions on that. But that part is not really the part that matters. The part that matters is thinking about intelligence and being able to formalize it. One other thing that Turing provided in this paper was this idea of separating the question you're trying to ask, the what, from the how: how are we going to answer this question? Turing came up with the imitation game, and basically what this was giving us was the idea of objective specification. The test specifies what we are trying to get at, but not how we do it or how the machine really does it; he didn't specify that. And this modularity, specifying what we are trying to get and separately how we are going to get it, is really a foundational idea that we have been using in a lot of the algorithms we'll see throughout this class. Separating the objective from the algorithm, from how we go about it, is actually quite important and a very good foundational idea. One interesting thing is that at the end of this paper, Turing also provided some ideas about how we might go about it. He talks about two different approaches. One was a very abstract way of going about this problem, a top-down view, kind of like how we would go about solving chess. And this is really related to the idea of symbolic AI that I'm going to talk about. He also provided another potential way of going about this, which is having machines that have sense organs, a.k.a. sensors, and teaching them like a child. And this idea of teaching a machine, with sensors on it that send it data, and from that data trying to learn, is very related to the idea of neural AI. So since this point, 1950, when Turing put out this paper, there have been three different flavors of AI: there is symbolic AI, there's neural AI, and there's statistical AI. And I want to give a brief history of each one of these. So let's start in 1956, with the story of symbolic AI. This is the first flavor of AI that I want to talk about. The term AI really goes back to 1956, when John McCarthy organized a workshop at Dartmouth College. McCarthy was later a faculty member at Stanford; he actually created the Stanford AI Lab.
He organized this workshop at Dartmouth College that summer, and he invited a lot of other big names: Marvin Minsky, Allen Newell, Herbert Simon. He invited all of these people, and the goal of the workshop was to think about intelligence. They had a very ambitious goal. They wanted to think about every aspect of learning and every feature of intelligence, and they wanted to model it so precisely that they could have a machine simulate it. This is a very ambitious goal, right? They were really after generality. They wanted to figure out the general principles of intelligence and learning, so they could build an artificial intelligence, an intelligent agent that could simulate that. And that was really exciting. These people went their separate ways and were producing really cool systems around this time. This was really the birth of AI, and you started seeing a lot of early successes. In 1952, Arthur Samuel had put out one of the first checkers programs, which was able to play checkers at the level of an amateur, which was really exciting. In addition to that, in 1955, Newell and Simon came up with the Logic Theorist, kind of like one of the first theorem provers. They had a system that could prove theorems and, more generally, solve problems. It came up with a proof for a theorem that was actually more elegant than what people had before. They tried to publish a paper on this proof, but the paper got rejected because the reviewers thought the theorem already existed. Still, it was really exciting to have systems that could prove theorems, play checkers, and generally solve problems. And there was a lot of optimism. All these really famous people in the field had a lot of optimism about what was possible with AI. Herbert Simon said, "Machines will be capable, within twenty years, of doing any work a man can do." Marvin Minsky said, "Within ten years the problems of artificial intelligence will be substantially solved." Claude Shannon said, "I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines." These are not random people on the street. These are famous people, founding fathers of AI. And that was some of the overwhelming optimism that people had around that time about what we could actually do with these AI systems. Unfortunately, we started seeing very underwhelming results. Around this time, the government really cared about the problem of machine translation, and there was a lot of funding around it. And then we started seeing results that were kind of underwhelming. Here is a made-up example, but the results were things of this form: you might have a text that says "the spirit is willing but the flesh is weak." You might translate that to Russian and translate it back to English, and you would get a text that says "the vodka is good but the meat is rotten," which is not very good. As we started seeing these results, governments started putting out reports about how these results were not so great, and they started cutting off funding for AI research. This is around the time you started seeing the first winter of AI. A lot of optimism that wasn't really going anywhere.
And so we had this first winter of AI, and that wasn't so great. So if you think about this first early era of AI, what were some of the problems? Well, first of all, we had very limited computation. A lot of these problems were written as logical problems and usually solved as search problems, where the search space was growing exponentially. And with the limited hardware available, it was just not possible to solve these very difficult problems. But even if we had had infinite compute at that time, which we didn't, there was another problem: we had limited information. If you think about solving some of these very complex AI problems that people were thinking about, they needed to write out these problems, and the knowledge that exists around them, using words and objects, writing out the concepts. It was very difficult to actually provide all this information, and we had really limited information about some of these concepts. Regardless, we saw a lot of interesting contributions come out of this era, even though it ended in a winter of AI. There were a lot of interesting ideas that came out around this time: we got the Lisp programming language; we got ideas like garbage collection and time sharing. A lot of these ideas are actually associated with John McCarthy. It was exciting to see a lot of advances, even though the core problems were still there and we couldn't really solve the big problem. And then this brings us to the era of the '70s and '80s. In the '70s and '80s, people really started thinking about this idea of knowledge and building knowledge-based systems. The core idea was: knowledge is really the key. If we can encode knowledge, if we can bring in domain knowledge from experts and incorporate that into the system, then we can actually solve interesting AI questions. This was the rise of expert systems, where you elicit domain-specific knowledge from an expert and encode it into if-then type statements, into rules that the system can apply to solve various types of problems. There was also another shift around this time. The first era of AI, the era of the John McCarthy, Dartmouth College workshop, was all about understanding intelligence, being able to say, well, what is human intelligence, and can we simulate it? And that didn't really work out. In this new era, people started changing paradigms and thinking about applications a lot more. Sure, we are not going to be able to simulate intelligence. But I can build systems that can be used in chemistry, or in medical diagnosis, or in business. This was the first era in which people started building AI systems that maybe weren't so much about simulating intelligence, but were about solving interesting, useful problems that could be used in industry. So, lots of advances during this time. These knowledge-based systems really helped us close both the information gap and the computation gap. They allowed us to incorporate knowledge and information, and by doing so, they allowed us to prune the search space, which meant we needed less compute to solve some of these problems. So that was exciting.
And this was the first time that we were seeing some real applications that actually impacted industry, and that was also very exciting. But there were still some problems in this era. One of the problems was that these rules were very deterministic, and they couldn't really handle the uncertainty that existed in the real world. We had all these deterministic connections and rules coming together, and they weren't really capturing the complexity that exists in the world. In addition to that, the rules were becoming very complex, very quickly. There's a quote from Terry Winograd, who was later a faculty member in the CS department, in HCI, at Stanford; at the time, he was actually a faculty member working in AI at MIT. Here is roughly what he said about these knowledge-based systems: these systems are dead ends; they have very complex interactions that are difficult to handle; and there are really no easy footholds. And this brings us to the second winter of AI. Lots of excitement, right? We were seeing real applications, but there was still quite a bit of difficulty in extending these systems. This was the end of this era of symbolic AI, which had really dominated AI for many decades. We're now around 1987, but I want to go back in time and tell you a little bit of the history of neural AI, where it started and how it progressed. And that takes us to 1943. So, going back in time, let's think about artificial neural networks and how they started. In 1943, McCulloch and Pitts came up with the first artificial neural network, where they had a single neuron. They modeled that single neuron, and they thought about very simple relations, like ANDs and ORs. They weren't thinking about learning rules or anything of that form at that point. So that is 1943, the very first version of artificial neural networks. In 1949, Hebb came up with a learning rule. This learning rule was very simple: "cells that fire together, wire together." The rule didn't really work; it was very unstable. But it was one of the first learning rules that was put in place. And finally, in 1958, you started seeing some advances in artificial neural networks. This is when Rosenblatt came up with the perceptron algorithm for a single-layer neural network, which is basically a linear classifier. The perceptron algorithm was being used until very recently. It showed a lot of success; it was actually very powerful, and there was a lot of excitement around it. In 1959, we started seeing the analog of linear regression, ADALINE, and a multilayer extension of it, MADALINE. This was actually used for removing echoes on phone lines. And this was, again, one of the very first times that people used artificial neural networks for a real application, removing echoes from a phone line. And 1969 was an important year for artificial neural networks. That year, Minsky and Papert wrote a book on artificial neural networks called Perceptrons, in which they analyzed mathematical properties of linear models.
What they showed was actually very simple: a single-layer neural network is a linear classifier, and it's not going to be able to represent the XOR function. This book is really associated with shutting down research on artificial neural networks. That was a time when people started thinking that maybe these types of artificial neural networks are not very powerful, and maybe we should stop doing research on them, even though the book wasn't really saying anything about deep neural networks. Regardless, we started seeing a revival of neural networks around the 1980s. This came with the rise of convolutional neural networks, under the umbrella of connectionism, and we started seeing the very first convolutional neural networks. The training of them was very ad hoc, but 1986 was around the time that we started seeing better training and more principled ways of training these systems. Around this time, Rumelhart, Hinton, and Williams popularized, or kind of reinvented, the idea of backpropagation, which put a lot more principle into how we should train these systems. And in 1989, we again saw one of the first times these systems were used in practice: Yann LeCun applied convolutional neural networks to recognizing handwritten digits, and he actually deployed this with USPS for reading the digits of zip codes, which was really exciting. But still, artificial neural networks were a niche; it wasn't a thing that everyone was working on. And that lasted until the era of deep learning, in the 2000s and 2010s. Part of the reason was that it was actually really difficult to train these models. But in 2006, Hinton et al. developed an unsupervised layer-wise pre-training method that helped pre-train some of these neural networks and reduced the effort that goes into training these models. The breakthrough really happened around 2012, when we started seeing systems like AlexNet that were giving us huge gains in object recognition. These systems basically transformed the field of computer vision overnight. The computer vision course that I took in 2012 was actually pre-neural networks, and it was very different from what is being taught today. This basically changed the field overnight, with the rise of convolutional neural networks, training these systems, and being able to do object recognition. And finally, in 2016, we saw another breakthrough with AlphaGo. AlphaGo used deep reinforcement learning to defeat a world-champion Go player. Go was a game that people thought was a lot more difficult, and it was really exciting to see that deep learning and deep reinforcement learning were able to solve some of these problems. So let me try to wrap up, because I know it's almost 3:00 PM; we'll release modules on the rest of the lecture later today. But let me just give you some food for thought. I've talked about symbolic AI, and I have talked about neural AI. Symbolic AI is really a top-down view whose roots go back to logic. It had these very big goals, like building a virtual assistant.
Neural AI, on the other hand, is more bottom-up; it's trying to solve these perceptual tasks. The two might seem to have deep philosophical differences, and they might seem contradictory. But they're actually not so contradictory; there are a lot of deeper connections between them. And today, people are actually thinking about integrating them in ways that we weren't able to do before. Even if you go back to the history of it, what McCulloch and Pitts were doing with the first neural network was actually analyzing properties of a logical system. Or take AlphaGo: Go is a very logical game, where you can write the rules of the game in logic, and AlphaGo uses neural networks to solve that game. So there are deeper connections between these two views of AI, and they really come together. All right, so sorry for going over a little bit. What is left of this lecture is really talking a little bit about statistical AI. We'll release modules on that, thinking about where statistical AI comes into play and wrapping up the history of AI. The other part that is left is talking about AI and some of its risks and benefits, which is something you talked a little bit about during the breakout rooms; we'll also cover that in the released lecture modules. So if you want to watch those later, that would be cool. And with that, if there are any questions, I can take them. Otherwise, I'll see you guys next week.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Stanford_Fireside_Talks_Robustness_in_Machine_Learning_I_Robust_Machine_Learning.txt
So, today, we're pleased to have Tatsu Hashimoto here with us. Tatsu did his PhD at MIT, did a post-doc at Stanford, and spent one year as a researcher at Microsoft Semantic Machines. And he's joining Stanford as of last month as a fresh assistant professor. So welcome back to Stanford. He'll actually be teaching 221 in the winter, so if you like his talk, you should go tell all your friends to take 221 in the winter. Tatsu has worked in a number of areas, from computational biology to text generation and NLP. But he's probably best known for his work on robustness in machine learning. And I think throughout this course, we've emphasized that machine learning is something that's really being deployed in the real world all over right now and having real impact in the world; just last week, we heard from Tino Cuellar about this. So I think robustness of machine learning systems is a really, really important area, and Tatsu is an expert in this. So I'm really happy to have him tell us what robustness in machine learning is all about and where things are at the moment. So take it away, Tatsu. OK, great. So I want to start by emphasizing what Percy already said, which is that there's been this enormous and rapid progress in machine learning over the last decade or so, especially in tasks like image recognition. Ten years ago, errors were at the level of 20%, 30%, and human-level performance was sub-7%. There was this huge gap in performance, and everyone said it would take a long time to reach human-level performance. But, nowadays, human-level performance is being achieved on all sorts of tasks: image recognition as of, say, 2015, but also tasks like natural language processing and much more challenging reasoning-based tasks. These systems are now getting really close to, if not exceeding, human performance. So machine learning has really achieved these sort of great successes, and these systems are being deployed. And we can ask: what has machine learning been good at, and what is it good at? It's really good at extracting patterns from training data and applying them on a test distribution to do some prediction. You can think of classic digit prediction tasks: you have some images of digits, and you need to return the numbers that are associated with them. As long as the source and the target distributions look the same, modern machine learning systems based on large amounts of data and neural nets are going to do exceedingly well on these tasks. But, really, the challenge is: what if the training data doesn't look very much like the test data? In these cases, we're going to have a lot of problems. So in the image that I put here, in the source domain, we have these black and white images in desaturated settings, and now at test time, you have these yellow cabs in New York, and your predictions might not work so well once you have what's called distribution shift. And so once we start to think about going beyond data that looks like the training data, we see a lot of problems on the horizon. And we've discovered a lot of these problems beyond test-set accuracy. At the beginning of this talk, I'm going to cover three classes of problems that, hopefully, you'll think about as you continue your journey in AI and machine learning. The first one is discrimination and performance on minorities. Another one is vulnerability to adversaries in high-stakes, secure applications.
And the last one, which is a little bit more abstract, but which I will get into in more detail, is that models don't really display an understanding of the tasks that they're actually performing. This is going to be a little bit abstract, but because it's an AI-focused class, I think it's an important thing to discuss and go through. And there's a unifying theme here: these seem like very different problems that machine learning systems have today, but really they're all connected by a single underlying theme, in that many of these problems can be cast as problems of robustness. When the training distribution and the test distribution are different, these models break down because they're brittle. To start with, let's talk about discrimination and fairness for minority groups. A really typical thing that happens in a lot of machine learning systems today is that there is a majority group, let's say Western cultures, English text, or, in many cases, males. On this majority group that dominates the training data, you get extremely good, superhuman performance from these systems. But often you are going to be deploying this to a wide variety of users, so you will have minorities using your system. And in these cases, you end up with horrible, near-random performance. You can immediately see how this is a discrimination issue and an equity issue. I'm going to go over a lot of these examples in turn, but these show up in all sorts of places that you might not initially think about when you think about fairness problems, like, say, dependency parsing or video captioning. Face recognition is a very common one that people probably already know about. But in these common, widely deployed ML systems, you start to see gaps between how the systems perform on majority groups versus minority groups. The first one, where it's probably surprising to many people that there are these kinds of gaps, is a task called dependency parsing. The input is just sentences, tokenized and split up. An example here is "bills on ports and immigration were submitted by Senator Brownback, Republican of Kansas." The output is that you're supposed to analyze the syntactic structure of this sentence and create dependencies between what are called headwords and their dependents. So you end up with what looks like a tree: the sentence above, the bills on ports and so on, can be parsed into this tree-shaped structure here on the bottom. This is called dependency parsing because there are these explicit dependencies between tokens that show up in your data. In classical NLP pipelines, say, if you want to extract relations between people or entities (who was the person that submitted the bill in the sentence, for example), you might use something like a dependency parser to look at dependencies in your sentence and extract relations. So this is a first step toward these kinds of more sophisticated analyses in classical pipelines. Nowadays, many things are end-to-end and neural, but that's beside the point here. And what's surprising, or maybe not surprising if you've thought about these kinds of problems, is that these parsers do much, much worse on data that's not commonly used to train them.
This is a study from Su Lin Blodgett in 2016, where they took a bunch of different dependency parsers and applied them to text from standard American English as well as African-American Vernacular; that's the column labeled AA LAS. The performance here is measured by what's called labeled attachment score, which is how well you reconstruct the tree. You might not really know how to internalize these numbers, but you see big gaps: on standard American English, you get a 57 F1-score-type accuracy, and 43 on the African-American text, a 14-point gap. And at the state of the art for this task, you're competing over a one-point difference. So these are enormous gaps once you go from standard American English to African-American Vernacular. These kinds of things can have huge downstream impact if the parsers are used in things like relation extraction or QA systems, because text from African-Americans is just systematically not going to get extracted into, say, relations or entities when you build knowledge bases and things like that. So you can see how this begins to affect these kinds of minority groups through these robustness problems. Another example is video captioning. Many of you have already interacted with systems like this through YouTube's video captioning system, where the input is a video with some spoken audio, and the output is text captions that are automatically added to the video. These things are increasingly important: I know that in medical domains, if you have Medicaid-funded videos that you need to put up on the internet, you need to have captions. So in these cases, you either run these systems, or you have people transcribe the videos. And what's been found is that these kinds of systems work a lot worse for women. This is a study by Rachael Tatman in 2017, where she basically showed that if you took male versus female speakers and ran them through YouTube's video captioning system, you got systematically higher error rates for women. You see that the median error rate for women is essentially the upper-quartile error rate for men; that's actually a pretty substantial difference in the word error rate between these two groups. You also see expected differences between dialects: Scottish speakers get substantially worse captioning accuracy, whereas speakers from California get really good word error rates. And you can see how this manifests, right? YouTube, being based in California, was obviously dogfooded by people with Californian accents and, when tested out of distribution on Scottish speakers, suddenly performs a lot worse. And so this is the kind of robustness problem that you initially don't think about, because you think about, well, is our model performing well on really complex inputs? And so you might put in some really complex inputs as a California speaker, but, really, you haven't tested out of distribution on Scottish accents. And then we come to another example, which many of you hopefully already know, in facial recognition. This has been really widely discussed, even in the media. Just to go over what the task is: the input is images, possibly containing a face or not, depending on the task. You can do many sorts of things with these images, and there are many outputs that are associated with facial recognition or identification tasks.
You might ask: is there a face in this image? That's face recognition. You might need to match a given face to a database of faces; that would be identification. Or you might need to predict attributes: is this face a female face or a male face, happy or sad? There are many attribute prediction tasks that can be built on top of faces. And this is one of the original studies, I think, in terms of highlighting how bad these kinds of systems can be in widespread ways. There's a study from the MIT Media Lab, Gender Shades, by Joy Buolamwini in 2018, where she took a whole bunch of portraits of legislators from different countries, African and, I think, northern European, and ran them through different face-attribute prediction systems for whether they were male or female. What you can see on the top right is that dark-skinned females get much worse gender predictions compared to light-skinned males, where you basically have perfect prediction. And these kinds of things are pretty problematic. If you've been testing your systems on light-skinned people, you think your system is near perfect, and so you might be using it for really high-stakes tasks where you need 100% performance. But then, when applied to darker-skinned demographic groups, you end up with substantially worse performance. So you don't even realize the kinds of harms that you're causing by using these systems. And what's problematic, as you can see, is that they reflect a lot of the benchmark data that's been constructed for this task. On the bottom right here, you see the distribution of skin color and gender for benchmark data sets in this kind of gender-identification-from-face-images task. What you see is a systematic underrepresentation of both females and darker-skinned demographics. And you might say this really just reflects the underlying data distribution, and so maybe all we need is unbiased data. You hear this term a lot from, I think, people who haven't thought too deeply about problems of robustness. But the issue is that there's really no such thing as truly unbiased data, in the sense that there will always be an underrepresented group if you slice your data finely enough. So we really need to go beyond thinking about balancing the data set, and we need to think about how we can make our models work well even for really small demographic groups and even individuals. Another task that has these kinds of issues is language identification. As an input, you might be working at Twitter, and you need to identify the language of a tweet so that you can run a machine translation system and automatically translate the tweet into someone's own language. But to do this, you first need to identify what language the tweet is written in. And you might have a lot of different kinds of inputs; this figure shows the challenge in this task. You might have dialectical text: the top one is Nigerian English, the second one is Irish tweets, and in the last one you have code switching, a mix of both Indonesian and English. In language identification, when you're given these kinds of tweets, you need to identify the source language that they were written in. So the output of the task is the language.
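One concrete way to surface the kinds of gaps these studies report is to slice evaluation by group rather than looking only at aggregate accuracy. Here is a minimal sketch; the groups, labels, and predictions are made up for illustration.

from collections import defaultdict

# Made-up evaluation records: (group, true label, predicted label).
records = [
    ("light_male", "male", "male"),
    ("light_male", "male", "male"),
    ("dark_female", "female", "male"),
    ("dark_female", "female", "female"),
]

def error_rates_by_group(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for group, truth, prediction in records:
        counts[group][0] += truth != prediction
        counts[group][1] += 1
    return {group: errors / total for group, (errors, total) in counts.items()}

print(error_rates_by_group(records))
# {'light_male': 0.0, 'dark_female': 0.5}: a per-group gap that the
# aggregate accuracy (75% here) would completely hide.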
What's been identified is that there are systematic biases once again in language identification. One that's immediately a little bit troubling is that African-American English often gets identified as not English. So there's an implicit normative judgment being made here that African-American Vernacular is not English. And you see the error rate here, with AAE having almost double the error rate of language identification compared to a more standard American English data set. You also see this across languages. This is a study by Jurgens et al. in 2017: if you sort the languages by the human development index of the countries, you see this decreasing recall, decreasing accuracy, as the countries get less and less developed. And that's because these countries often have under-resourced data sets, so there isn't as much data with which to train these language identification systems. So you see systematic biases in terms of how well-developed and how internet-connected these countries are. And this leads to representational harms, right? If you're an African-American English speaker and the system tells you that you're not speaking English, that's kind of harmful. And there are also utility harms, right? If your text doesn't automatically get translated to English, your tweets won't reach as wide an audience. So you can think of these as having pretty serious implications for fairness as machine learning becomes more widespread, more useful, and more impactful. And so there are these problems of serious active discrimination. There was a story in The New York Times where a face recognition system identified a person as being a criminal; this was faulty, and it was essentially the only reason for arresting this Michigan man. So if you have a system that's much more error-prone on African-Americans, you're basically going to have a much higher error rate when deploying these kinds of algorithms. You have these active discriminations and harms that are being done. On the other side, as people studying machine learning, we think that these kinds of technologies are broadly beneficial and useful and increase efficiency. There's a study by Erik Brynjolfsson which says the application of machine translation systems increased exports on eBay by 17.5%, because it's really easy to translate text, and so people from other countries can buy your product. But if, for example, language identifiers can't identify your language, and so you can't use machine translation systems, then you don't get these benefits. So you get unequal access to the fruits of these kinds of AI systems. This can lead to harms in both directions: you don't get access to the benefits, and you get these kinds of active harms from the errors that these systems make. I'm going to stop here, because I think fairness is a topic that many people have feelings and comments about, and I'd be happy to just sit around and discuss for the next couple of minutes if anyone has questions about how fairness and these robustness questions interact with each other. Yeah, Tatsu, there's a bunch of questions in the chat. Oh. You mind taking a look? Yes, sorry, I have it full screen, so give me a moment to pull up the chat. Yes, OK. OK, I see it now.
OK, so I'll start with the first question, which is about the point that having balanced, unbiased data is not enough. And this is a very subtle point. There are data sets you can construct that will make you robust to certain kinds of groups, right? So let's go back to this slide. If we look at this distribution of data, it's clear that, at least for the top one, Adience, we're probably going to have some sort of bias in terms of light versus dark skin, because dark skin is so underrepresented. But if we balance this data out, we might still have unbalanced demographics along certain other dimensions. Maybe it's not dark versus light skin, but geographic region. It might be income. These kinds of problems are innumerable. So what you really need is not necessarily the search for this unreachable, perfectly balanced data set, but a model that can do well on small groups or small amounts of data. You want a model that can take in this kind of imbalanced data and do well on both the dark-skinned and the light-skinned groups. And the important thing in this task is that there's no real trade-off, right? There's no real reason that you can't do well on both the light- and dark-skinned groups. I think that's the crucial structure here: if you can do well on both groups, then it's not really about the amount of data or the distribution of data. It's more about the model and how you're learning it. The second question: "Is there a way to audit models without having access to the model?" That's an interesting question. I'm not sure if you meant access to the model's outputs or something else. If you have access to the model's outputs, you can perform a study like Gender Shades, where you run the model on certain challenge examples, and you look at what the error rate is. And you say, well, clearly, we are doing much worse on dark-skinned females than light-skinned males, so there is some sort of bias. So you can audit models that way. It becomes much harder to audit models if you can't execute the model on your own data. Then you'll have to do something a little bit tricky, and it requires very specialized conditions, I think, to be able to audit those kinds of models. Also, feel free to ask follow-up questions if I didn't answer any of these. "Similar to the issue with the person in Michigan, there have been efforts in applying AI to model future human behavior." Oh, this is a comment. Yes. And that's highly problematic. I think in one of the earlier talks, there was a discussion about how amplification of feedback effects is really insidious. And, yeah, predicting the future, and acting on predictions of future behavior, is even more problematic than the tasks I described here, because acting on the real world will change the outcome, right? If you predict that crime will happen in a certain area, you assign more police, and you find more crime. That's going to lead to a pretty vicious feedback loop. So you need to really think about the whole sociotechnical system, rather than just the classification system narrowly, when you're in those settings. The last one: "It seems that we can always slice our data into more sub-populations to test for fairness. Are there industry standards for what we should usually start with?" That's a great question and also a really important academic one.
There is a sort of easy answer, which is that a lot of research and a lot of industry work has focused on legally protected groups. That's a well-defined set of attributes that you can't discriminate on, so you can group by those, you can group by intersections of those, and you can say those are the groups I shouldn't discriminate on. But, academically, this seems unsatisfying, because why should those be the only things we care about? And there's a lot of work on individualized fairness: making sure that you do well on individual people, making sure you treat similar people similarly, and things like that. That's a whole active area of research, and not really something where there's an obvious and clear answer yet. OK, any other final questions before I move on? OK, so now I'm going to move on to the second point that I mentioned before: machine learning systems aren't really secure and can't really be used in many high-stakes situations. I'm going to start with one of the most well-known examples of this, called adversarial examples. On the left, we have an image. This is a panda, and a classification system gets this mostly right: it's a panda with 57% confidence. That's great. Now what we're going to do is add a very specially designed and visually imperceptible perturbation. This middle panel looks like complete noise; we scale it down so that it's essentially invisible, and then we add it to the panda image, and we get the image on the right. Now we run our image classifier, and what we get out is that it's almost certainly a gibbon, which is completely wrong. So what this tells us is that we can find visually imperceptible perturbations that lead to very confident misclassifications. And I'm not going to show you all the results of this adversarial examples work, but you can do this to almost any system, and you can completely and catastrophically destroy the accuracy of all of these systems. This also happens in NLP systems and so on. So this is a really hard-to-avoid and almost universal behavior. And I want to show you how robust this kind of behavior is. It doesn't have to be images on a computer screen. It can happen by putting little black and white patches on a stop sign, so the left system is going to classify that as a yield sign instead of a stop sign. The middle one is a fun, 3D-printed toy, where if you run an object recognizer, it will say "gun" from almost any angle. And the right one is an adversarial sticker, where if you stick it anywhere and take an image, the system is going to say it's a toaster instead of a banana, which is what it should be. So these come in very many different formats, but you have this same, kind of disturbing phenomenon, where it's obvious to us that something shouldn't be tricking us. Black and white patches that small, or a weird texture on a turtle, shouldn't really fool us into changing our predictions. But it really fools these image classifiers. And when you first see this, you think there must be a really simple fix, right? Maybe you run it through a JPEG compressor; maybe you add a little bit of extra noise to every image. And so this has led to an enormous number of papers, over a hundred or so over the last five or six years, in which people have tried a lot of different things to defend against these what are called adversarial perturbations.
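Just to make the attack side concrete: perturbations like the panda one are typically found by gradient-based optimization, and the classic recipe is the fast gradient sign method (FGSM) of Goodfellow et al. Here is a minimal sketch; grad_loss is a hypothetical helper standing in for whatever framework computes the gradient of the model's loss with respect to the input pixels.

import numpy as np

def fgsm_attack(image, label, grad_loss, epsilon=0.007):
    # FGSM: take one step of size epsilon in the direction that
    # increases the model's loss the most at this input.
    # grad_loss(image, label) is assumed (hypothetically) to return
    # d(loss)/d(pixels) for the model under attack.
    perturbation = epsilon * np.sign(grad_loss(image, label))
    adversarial = np.clip(image + perturbation, 0.0, 1.0)  # keep pixels valid
    return adversarial

The sign and the tiny epsilon are exactly why the perturbation in the middle panel looks like scaled-down noise.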
But the problem is that every time someone comes up with a defense, soon after, someone breaks it by finding a better attack or, even more disturbingly, just by running the old attack for longer. And so it seems like this is a really persistent and serious phenomenon. And I think the recent view of a lot of these adversarial example problems is not that there's some really degenerate artifact in the way we train models or the way we optimize things. It's really just the fact that there are a lot of ways to have a high-performance prediction system, and many of the ways in which we can predict accurately rely on what we're going to call non-robust features. So when we try to, say, classify a dog or a cat, we as humans rely on what we're going to term robust features, right? We try to identify eyes and snouts and parts like that. These kinds of things are pretty robust to pixel-level perturbations. But, actually, low-level textures and really small image patches are also very predictive of classes like dogs versus cats. And who are we to say that that's an incorrect way to make predictions? Because when we train the model, all we're saying is: classify these dogs and cats well. So you can think of this as saying our problem is underspecified. There are many ways to get a good classifier, and some of them really rely on the use of these non-robust features. And this has serious security implications, right? If you're trying to make a self-driving car system, the stop sign being classified as a yield sign is pretty bad. You might run over a pedestrian. And this really prevents the application of machine learning systems in things like self-driving cars. Or at least we should be very hesitant if we believe that these kinds of problems are inherent, because the world is designed so that signs are easily perceptible to humans, and not necessarily designed so that small perturbations, say from stickers, can't change stop signs into yield signs. And in other cases, vision systems are increasingly being used in high-stakes applications. We might reasonably imagine, say, at a TSA checkpoint, there's a camera that's running, and it tries to identify whether or not you have a gun, right? And if you can make adversarial examples that, say, make a gun a non-gun, or a turtle a gun, that seems very problematic. We can't use vision systems for those kinds of high-stakes applications that we might want to use them for. And so both of these really pose challenges for the use of machine learning in these high-stakes, life or death settings. And I'm going to stop here to take questions about adversarial examples for the next couple of minutes. "Do I know why the first example was classified as a yield sign?" That's a good question. With all of these adversarial examples, the reason why they're being classified the way they are is pretty confusing. For example, why is this turtle being classified as a gun? I'm really not sure. It doesn't look anything like a gun, and the textures don't look like a gun. The way these things are constructed is by an optimization process. You're basically searching for perturbations on, say, a normal turtle texture that lead it to be classified as a gun.
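To make that optimization concrete, here is a minimal sketch of the simplest such attack, the fast gradient sign method (FGSM). The `model`, `x`, and `y` here are hypothetical stand-ins for a differentiable classifier and a labeled input, and this is one illustrative attack rather than the exact procedure behind the turtle or stop sign examples.

```python
# A minimal FGSM sketch, assuming a differentiable PyTorch classifier `model`,
# an input batch `x` with pixels in [0, 1], and true labels `y` (all hypothetical).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Return x plus a small perturbation that increases the loss on y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded in
    # L-infinity norm by epsilon so the change stays visually imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```

Stronger attacks iterate this step many times; the point is just that the perturbation comes out of gradient-based optimization, not from anything a human would recognize.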
And so there's no real interpretable reason why, say, this looks like a yield sign, or why this is being classified as a gun from every angle. "Can I train on these examples to correct classification?" Oh, sorry, I skipped a question. "How do we define the non-robust features versus--" yes, OK. So this is an ad hoc definition. The split here is just whether or not a feature can be flipped by changing the image slightly in pixel space. That's really the working definition of robust versus non-robust. And if we were being more precise, I think this should really be split as saying visually imperceptible is non-robust and visually perceptible is possibly robust. I think that's a pretty reasonable split: anything that humans really cannot visually tell apart should not be used as an input feature to a reliable prediction system. "Can I train on these examples to correct classification?" I'm going to try to interpret this question, because I'm not 100% sure. I think what you're describing is an idea called adversarial training. The idea is, basically, instead of training on just the input image-- let's go back a little bit-- instead of training on the panda, we try to train our system to classify the adversarial image as being a panda, right? And you might think, OK, this is good, we can now make this a panda. But now we need to prevent some other adversarially designed noise from changing the prediction, because there are probably many, many different attacks that will change this into something that is not a panda. So the idea you're describing is basically called adversarial training, and that's one of the earliest defense approaches. It's empirically reasonably effective, but you can still attack it with more sophisticated methods. You can still find visually imperceptible attacks after adversarial training that break the system. So this is not really a foolproof way of making models more robust. It's better than nothing. "Are you saying yet-unfound defenses are needed for ML self-driving cars to be secure from nefarious attacks?" I think this is a great question, and I'm being a little bit too aggressive in terms of the things that I'm saying, right? It's an open question whether or not these kinds of attacks are really feasible in the real world, or whether or not they are things we should worry about. In the real world, I can easily cut down a stop sign using a saw. That's an adversarial human attack, but we're not too worried about that attack. And so maybe we shouldn't be worried about adversarial attacks on self-driving car systems. But I think there are two things that this highlights. One is that we should be a little bit careful when we deploy these self-driving car systems, right? We should have fail-safes that, for example, don't rely just on vision. That seems pretty important. We might want to use RADAR or LiDAR-- RADAR doesn't work well on soft targets like people, but let's say LiDAR-- to make sure we're not going to run over people when we mis-detect a stop sign. Having lots of orthogonal checks becomes increasingly important once you realize that there are ways to fool these vision systems. And people are working on provably robust machine learning systems. Maybe in settings like military applications, those become truly important. And so there is progress on that.
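As a footnote to the adversarial training idea raised in that question, here is a minimal sketch of one training step, reusing the hypothetical `fgsm_attack` from the earlier sketch; real implementations typically use stronger multi-step attacks such as PGD.

```python
# A minimal adversarial training step, assuming hypothetical `model`,
# `optimizer`, and a batch `(x, y)`; `fgsm_attack` is the sketch from above.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    # Attack the current model to get perturbed versions of this batch...
    x_adv = fgsm_attack(model, x, y, epsilon)
    # ...then train the model to classify the perturbed inputs correctly.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Certified, provably robust training goes beyond this and tries to guarantee robustness rather than just train against one particular attack.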
But it's just that provably robust systems achieve much worse average accuracy than non-robust systems. There's this big gap right now that we don't really know how to close. OK-- "In my opinion, is research now shifting towards reformulating models to rely on robust features instead of finding ad hoc defenses?" That's a good question. I think there is still a big gap in terms of provably robust defenses versus what you might call ad hoc defenses that work well for, say, one or two targeted attack types. But there are things like randomized smoothing and procedures like [INAUDIBLE] that get the best of both worlds in some sense. They're provably correct in some framework, and they're getting increasingly better. And so I think for high-stakes applications, we'll end up in a place where we'll lose some average-case accuracy, but not catastrophically so, and we'll still have adversarially robust models. That's where I imagine the field will go. It does seem like ad hoc defenses keep getting broken, so that's not really a path towards truly robust systems, even though they might make for more useful systems overall if, for example, adversarial training leads to more interpretable latent features. "Does producing adversarial attacks require access to the model? If so, isn't this just an issue of info security equivalent of--" I can't parse that second sentence, but, yes, I agree with the general sentiment, right? If you need access to the internals of the model, then, really, at that point, you've rooted the system you're attacking, say, a medical imaging system. Or you have access to someone's car-- and if you're the Mossad, you can probably just mess with their brakes. So it's true that those attack models are pretty obscure and weird. But, for one, there are what are called black-box attacks, which only require you to query the model. And for two, a lot of systems are shared, right? So you only need to learn to attack them once. If you're trying to attack Tesla's auto driving system, you get a Tesla, you figure out the adversarial sticker that's going to make the system go haywire, and then you paste that everywhere. That doesn't require a particularly sophisticated threat model to execute. So I think there are settings in which these attacks are real and problematic, even though there are a lot of questions and debate about whether we should really care, about the cost-benefit trade-offs of robust versus non-robust systems, and so on. But it's an important thing to keep in mind, right? OK, any other questions for the adversarial examples part? OK, so now I'm going to get to the last part, which, given that this is an AI class, is maybe the most important of the three failures of machine learning in terms of robustness, and I think it's one of understanding. People throw this word around a lot, that models don't really understand. And it's hard to pin down what understanding is, but it's very easy to show when models don't understand. So we can go through some examples here. We'll go through them again in more detail, but this is from an overview of what people call shortcuts, in the citation on the bottom. For example, let's say we're trying to caption this image here, and so we need to describe in text what's happening.
But, really, sometimes these systems might just use the background-- the hills and the sky-- instead of actually recognizing the sheep. Adversarial examples may arise because we're recognizing textures and not actually recognizing the shape of things like teapots. And if we're doing medical image diagnostics, we might be looking at markings on X-rays, or at what hospital the X-ray came from, instead of actually performing the prediction. So in all of these cases, we're making use of pieces of information that shouldn't really be central to the task. They're not the core prediction task that we care about, and somehow the model has picked them up and learned to do really well. I'm going to group this broadly under the label of shortcut learning. And the way to think about this is that when we train models in machine learning, we're training them to do well directly on the training set. And these days, we expect them to do well on the test set. But, really, what we would ideally like them to do, as systems that reason and understand, is perform well on challenge sets: really difficult examples that we've constructed to break the model. And so you can think about it like this: there are a lot of possible rules that work well on the training set, there are fewer that work well on test sets, and there are very, very few that work well on these challenge sets-- the intended, true reasoning that we would like our models to extract. So we can think about machine learning today as having gone from this tan-colored circle, where we were before, to this blue-colored circle, where we are now, where we do well on the test set. But what we really want is to go still further. We want to make sure we learn the right mechanism. And you may have heard this classic AI story of tank identification. This is a really old Cold War story, and I'm going to read it out loud. The Army trained a program to differentiate American tanks from Russian tanks. It got 100% accuracy on a test set. But, later, people realized that the American tanks had been photographed on a sunny day and the Russian tanks on a cloudy day. The computer had learned to detect brightness, not to detect tanks. And so this is exactly the kind of problem that I'm talking about, where we have extremely high test accuracy and we're super happy, but then we realize we haven't learned anything about the underlying task. This story has been attributed to a lot of different people. It was, I think, originally in written form in Dreyfus's book. But it turns out it's not an actual real example-- it can't really be attributed to any actual experiment run by the Army. The citation here is Gwern, who has a website where he has gone through all the possible attributions of this urban myth. But this urban myth is so popular, I think, at least in the AI and machine learning community, because there is a kernel of truth to the story. And we're going to go through several examples of tasks today with these kinds of failures. So one of them that's kind of fun is this vision task where people have apparently tried to predict gender from iris patterns. There was apparently some belief that this was a test you could perform, because you can actually get a reasonably high test accuracy if you train CNNs on cropped images of irises and try to predict gender.
But there's this paper that identified that this is not actually because of the iris. It's because female eyes often have mascara, and that systematically shifts the brightness of the images. And this histogram is worth a thousand words. On the top, you see the distribution for males, where the x-axis is the average brightness of the image. The distributions look pretty similar: males and females have a similar brightness distribution when the females have no cosmetics on. But if you restrict yourself to females with cosmetics, this red distribution shifts to the left, and the images become darker. So there's this very strong confounding effect of female eyes having mascara, and therefore being darker, and therefore these systems predicting quite well based on average darkness, even though, apparently, they weren't learning anything at all about the actual prediction task. Another one, from the Gwern website and the investigation into this tank phenomenon, which is interesting, is the Kaggle fisheries competition. The task is: you're given images of fish being caught on a fishing boat, and you have to identify whether or not these boats are catching fish illegally. So you're supposed to identify whether or not these fish are part of a protected category of fish you're not supposed to catch. And it turns out that on the training set, you can do extremely well on this task using a very simple heuristic. These images come from a relatively small number of boats. So you first identify each boat, and then you identify, for each boat, whether or not it has been catching illegal fish. This approach does really well, because it turns out only a few boats catch these illegal types of fish. And so by first identifying the boat and then identifying the fish, you can get extremely high accuracy even though you have learned nothing about actually performing the fish identification task. Another one that seems more high-stakes and problematic is in medical prediction. There is a lot of talk about tumor identification, or chest X-ray malignancy prediction. And in these cases, it's pretty important to ask why we're doing well, because these are high-stakes situations in which you would like to make sure that you're not being fooled by some feature that makes the task easier than it actually should be. There are often claims now of these systems performing just as well as human doctors in terms of their diagnostic accuracy. And one really interesting, and maybe a little problematic, example: when you have skin lesions that you're trying to classify as cancerous or not, doctors will often put down surgical markers to highlight tumors that they think are more serious than others, just so that when someone else is looking at them, they can immediately identify the more problematic ones. And the training set for these systems apparently contained a lot of these markings. So there was an examination of these tumor classification systems, where people artificially added markings to these images, as well as cropped out markings from already marked images, and they showed that they could basically flip the classification of these systems. And so, in some ways, the high accuracy of these kinds of classification systems is not because they're identifying tumors.
It's because they're piggybacking on humans who have already, in many cases, classified the tumors as being malignant or not. An early problem identified in this line of work-- in the Esteva et al. study-- is that when people are trying to determine whether or not tumors are malignant, in serious cases the images would include rulers to show how big the tumor is. And so the existence of a ruler would serve as a spurious correlation, or as a confounder, for whether or not a tumor was malignant. And, finally, one that I think people are now aware of, but initially were not, is that hospital ID often serves as a really reliable indicator of both the base risk level and the type of procedures being performed at a hospital. You can think of this as analogous to the boat example in the fishing problem: if you identify hospitals that, say, have a lot of smokers, you're much more likely to find cancer in chest X-rays from those hospitals. And so it's really important to try to remove the effect of identifying the hospital and then identifying the base risk. A really interesting one in image classification that I wasn't aware of until yesterday or so is in Pascal VOC, a pretty common object recognition data set. A bias that's been identified is that the horse class, I guess, was taken by a single horse photographer who put watermarks at the bottom left of the image. So around 20% of the horse images have a watermark, and classification systems just learned to pick up on the watermark. So you can make cars be classified as horses as long as you add the watermark at the bottom left. And this is something where, unless you really carefully and adversarially examined the data set, you probably wouldn't even realize that this kind of bias exists. Finally, I've mostly talked about vision examples thus far, but these shortcuts and this lack of understanding are problems common to every area. And I'm going to give a probably very well-known example in natural language processing. The task here is entailment prediction. You're given a pair of sentences: one is called the premise, and the other is called the hypothesis. So the first one is a sentence like, "the economy could be better," and the second one is a sentence like, "the economy has never been better." And the goal is to say: does the hypothesis logically follow from the statement made in the premise? If it follows, you say it's entailed. If it's contradicted, you say it's a contradiction. And if it's neither, you say it's neutral. So it's a three-class classification problem. And the way these examples are constructed is through crowdsourcing, where you extract a premise sentence from some large internet text-- or newswire text, I guess. And you have a label that you randomly pick. So you say, I have a premise, and it's going to be a contradiction, and then you ask crowd workers to write down a contradiction. So they write something like, "the economy has never been better," right? And what happens here is that crowd workers, because they're writing the hypothesis text after seeing the label, have systematic biases.
And one bias that's really strong is negation: when something is not entailed, crowd workers tend to use negation. And so a model will often learn to associate negation, or the lack of it, with the outcome label. So instead of actually doing the entailment task, models will often pick up these negation biases. Even more problematically, there is what's called a hypothesis-only baseline, where you don't even look at the premise, and it can do extremely well, right? And there should be no way to do well on this task while looking just at the hypothesis, because how can you know that the hypothesis is entailed by the premise while looking at only one of them? So this shows the really strong bias that these crowd workers put into this data set. And this has serious implications for the project of pushing machine learning towards understanding and general AI, because, thus far, all of machine learning has been predicated on benchmark progress. That's the way in which the field has really grown and done well. ImageNet and MNLI and these well-known tasks: you get everyone together, and we push on these numbers, and we hope that improvements in these benchmark performances lead to understanding. But it's clear that because of these biases, that may not necessarily be the case, right? So we need a different paradigm to link machine learning performance to understanding. And the other problem that I hope, by going over so many examples, I was able to impress upon you, is that there are so many shortcuts, right? With this negation bias from crowd workers, you wouldn't know about it unless you looked at the data set carefully after being told that there was a bias. Same with the watermarks for horses-- I don't even know how they found that, given how minor it is. So it becomes really hard to say, we're just going to construct a data set free of shortcuts. When you're told about these shortcuts afterwards, they seem really obvious. But how can you construct a shortcut-free data set? That's the real challenge. And if we think that we can't get data sets free of these shortcuts and these biases and these minority group issues, we need a new way of trying to make sure that our models really learn the right thing. And I'm going to stop here for a moment to talk about shortcuts and understanding. And, hopefully, people have lots of questions, because I think this is a fun one in terms of thinking about how machine learning relates to AI. I have a question. Sure. Just thinking about committee modeling, and going back to that stop sign with the patches that's seen as a yield sign: does it make more sense to use one big model trained on every piece of data you can find? Or does it make sense to train a bunch of models on partitions of the data that might overlap in some way, and then combine the results, in order to make it less easily fooled in that robotic way? Yeah. I think that's a generally good thing to do. So I guess there are two answers, and the more general one is to think about the trade-off between model capacity and your ability to fit these minority groups or these shortcuts, right? So the idea you're describing-- let's say we have 10 or 100 different models, and we fit them to different parts of the data.
Then we might have a model that's dedicated to shortcuts, but we might also have a model that really learns the right thing, right? And so the more flexible our model, the bigger our model class, the more we can say that part of the model might be dedicated to shortcuts, but that's OK, because the rest of our model will still learn the right thing. But that's still a hope. There's no real guarantee that this will happen, and if the shortcuts are strong enough, that's what the model will learn. So it seems really important to have bigger-capacity models; that's a given. But how can we learn big models well without overfitting? How can we make sure that they still learn the right thing? If one part of the model fits the shortcuts, how can we make sure the rest of it learns to do the right sort of prediction without shortcuts? That's the open question, I think, in this area. There was a question-- "For image segmentation, do we have a way to know which part of the image contributes to the prediction? We could call it prediction traceability." Yes. So I, of course, glossed over quite a bit, but this paper, Lapuschkin 2019, here, is exactly about that. It's about trying to attribute predictions to parts of the image using interpretability methods. And that's how they found, I think, this horse problem: they attributed predictions to locations, and they found that for horses, the predictions were always localized to the bottom left, and it was because of this watermark. And so I think a big, important use case of interpretability methods is exactly this: identifying these kinds of shortcuts by attributing predictions to locations in the image or to subgroups in the data set. Going along with the above comment-- "Are there methods of finding what parts of the image or data example have high weights associated with them?" Yes. There are, of course, many different methods, but Shapley values are a pretty general framework. The analogy is like this: think about each pixel, or each region or sub-part of the image, as participating in the prediction. And you can ask, when I remove that part of the image, how much does the prediction accuracy go down, right? And you can do that after randomly removing other parts of the image. So you randomly remove other parts of the image, like dropping random pixels, and then you drop the part you're interested in, and you ask, on average, how much more accuracy does the part I'm interested in give me? That's the estimate called the Shapley value; there's a rough sketch of it below. And Pang Wei, who I think is here, has a paper on approximations to that based on influence functions. So there are all sorts of methods in the interpretability community for doing what's called feature attribution.
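Here is the rough Monte Carlo sketch of that Shapley-style estimate promised above. The `model_score` function (say, the model's probability for the correct class on a masked image) and the `mask_except` helper are hypothetical stand-ins, since actual attribution pipelines vary.

```python
# A minimal Monte Carlo Shapley estimate for one image region, assuming a
# hypothetical model_score(image) and a hypothetical mask_except(image, regions)
# that blanks out everything except the given regions.
import numpy as np

def shapley_estimate(image, region, other_regions, model_score, mask_except,
                     num_samples=100, seed=0):
    """Average marginal contribution of `region` over random coalitions."""
    rng = np.random.default_rng(seed)
    contributions = []
    for _ in range(num_samples):
        # Randomly keep each other region with probability 1/2 (a random coalition).
        keep = [r for r in other_regions if rng.random() < 0.5]
        without = model_score(mask_except(image, keep))
        with_it = model_score(mask_except(image, keep + [region]))
        # Marginal contribution: how much does adding `region` help the score?
        contributions.append(with_it - without)
    return float(np.mean(contributions))
```

Exact Shapley values average over all coalitions, which is exponential in the number of regions; this random-subset version is the standard cheap approximation.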
Another question-- "For a problem with discrimination, would a reasonable approach be to adopt active learning by default, where the model can train with more emphasis on wrongly categorized examples? Then the hope is that the model could steer itself away from the initial biases over time. Or is it not as simple?" Yeah, so that's actually one way around it, right? When you can collect your own data and you have the ability to know what you're getting wrong, then collecting data in the places where you're wrong serves as a negative feedback loop: you get more data where you're wrong. Your model gets the training signal it needs to correct, and so, eventually, you'll learn the right thing. It's just that active learning at the scale that you need is very, very challenging. Can you actively collect ImageNet-sized data? Very challenging. It's also very challenging to know what parts of the data you're doing badly on. You need to know well enough to say, oh, the part of the data I'm doing badly on is horses without the watermark, right? That's a hard thing to be able to say. So you need to know what you don't know, which is almost as challenging as robustness itself. Oh, wow, there are a lot of questions now. "Physicians are experimenting on end-of-life care based on AI to nudge conversations. Do you have any suggestions for patients and doctors?" Yeah, that's very challenging. I do think one important thing about these high-stakes settings is to think about the alternative, and the whole decision systems that they're part of. So, say, a medical diagnosis-- or I'll give another example I'm more familiar with, which is predicting whether or not someone will commit a crime again and so should be released on parole. These are both really high-stakes prediction tasks. And the way they're performed is that there's a machine learning system and a human, and they jointly make a decision. So you need to think about not only the machine learning system, which is what I've talked about here, but also the human part, and how they integrate, and how their decisions get combined. I think that's actually the important part: how do humans override the machines? How do they incorporate the suggestions of the machines-- even more so than the predictions themselves, which I think always need to be taken carefully? Next one-- "How can we combine models' objectives to gain greater understanding of the world and combine them to create intelligent behavior?" Yeah, I think this is basically the open question, the unfortunate thing that we don't know how to do. And I think that's what we're struggling with in the robustness and generalization literature. What is the right approach? There's not really even a consensus. Is it more data collection? Is it smarter ways of training the model? Or is it better models? It's not clear yet what the right way is. Unfortunately, I don't have a great answer beyond that. "For shortcuts, I think even people use shortcuts to identify something. Do we have some way to understand shortcuts based on training data?" Yeah, this is a good point. Humans also use shortcuts. But I think the important thing here is that these model shortcuts are a lot more crude than the human ones we use. At some level, they don't even pass basic sanity checks, right? For entailment, we know that the output should depend on both inputs, but in reality, it depends only on the hypothesis, which is very problematic. And what people do today is build these challenge sets-- like examining the performance of a model based on just the hypothesis. These are close to unit tests, and they help catch these kinds of shortcuts in many cases. So one way to detect them is with assertions like that: our model shouldn't be sensitive to certain perturbations, or it should be sensitive to certain perturbations, and you go off of those kinds of assertions. "Is there currently a correlation between model capacity and the number of shortcuts employed?
Are larger models more likely to land on the correct correlations? Or are smaller models more likely to use shortcuts?" That's a good question. I think the general sense I get from reading the literature is that smaller models are more likely to use shortcuts in many ways. For example, in this paper about watermark-based shortcuts, linear models did a lot worse: they would really pick up heavily on the watermarks, whereas CNNs did so with less weight and less frequency. And I think, generally, it's true that large-capacity models trained with a lot of data can use some of their capacity just to model the shortcuts, and they'll still do well on the data without shortcuts, as long as such data exist. But, really, the key thing is that you need to see at least some data without the shortcut pattern. "How much are you--" Oh, I already answered. OK, great. Yeah, so I want to get into the breakout now, actually, especially because someone asked the question, what's the solution? And, are humans robust? So, Woody, if you could drop us into our breakout sessions for, say, five or six minutes, it would be great to talk about these two questions. The first one is: are these brittleness issues inevitable? What do you think are the solutions-- what are the right approaches? The second question is: are humans robust? And if you think so, what's the key ingredient that makes humans robust, or more robust than machines? Awesome. Yes, I'll create the breakout rooms. If everyone wants to take a quick screenshot or try to remember these, I'll post the questions in the chat as well. But they won't be in the breakout session. Oh, OK. Great. I think I'm muted. Let's see. Yes, you're muted. OK, great. All right, excellent. All right, so I'll go through the second part a little bit quicker. I'm glad that we got so many good questions on the first part, which is the more important of the two parts of this talk. The second part is thinking a little bit about how we can do learning-- how can we fix these problems?-- and the kinds of research that Percy and I and others at Stanford have been doing. The key problem that you should keep in mind through all of this is that the training distribution is very different from the test distribution. This is the root of all evil for these robustness problems. And so we need to think about: is the limitation that we can't generalize from train to test inevitable? Or can we come up with some clever data collection schemes or model training mechanisms that allow us to generalize? To do this, we need to think a little bit about how distributions can shift, so I'll give you a few definitions. The first one is covariate shift, and this is what you usually think of when people say the distributions are different. Let's say you're making a face recognizer. You have these really nice, well-lit portraits in the training data, and at test time, you're using it with CCTVs, so all sorts of different environments. But the underlying task is the same, and there should be a single predictor that does well both on portraits and on images cropped from CCTVs. Another example is label shift, where the input features look similar, but now the output label distribution has shifted. So, for example, you're making a face recognizer, and at training time you need really precise matching, so you're only going to call detections when the images look exactly the same.
But at test time, you're making a product for your camera, and so it can be a little bit looser. You might deal with blurry images and so on. And the litmus test for this is that you have the same predictor, but you're changing your threshold-- you're just saying, it's OK if I'm a little bit less confident, I'm still going to make the call. That's an instance of label shift. And the final one, which is basically intractable in all cases, is concept drift. Here, you might have a prediction task where initially you're trying to recognize faces of the same people, but at test time, you want to match people across time: young pictures and old pictures. The task is fundamentally different depending on whether you're matching the same person or a person shifted in time, and so no one predictor is going to do really well on both of these tasks. There's a fundamental change in the task definition. And I'm going to go over two sorts of ways to deal with all of these problems. The first one is that we're just going to collect more data-- someone brought this up as a question earlier, and this is the key thing: if we get more data, we can do a lot more things. The second one is a little bit more ambitious, and it says, let's try to deal with only the data we have. So the first idea: let's say we're trying to generalize to a test set, and we're just going to collect more data. A classic example of this kind of task is recognizing digits. So you have images of digits. The left one is MNIST, which is really old. USPS is even older, actually. And SVHN is a more modern one: digits from house numbers, taken from Google Street View. In all of these cases, you need to output the number, right? This one is a 2, this one is a 7, and so on. When we're collecting new data, let's say we train a model on MNIST, and we want to do well on SVHN at test time. So this is a distribution shift, but we might be able to collect more data, right? It's maybe unrealistic to say we only have this, and we have to predict that. So we might be able to collect some unlabeled data from SVHN-- we can't afford to have someone label it by hand, but we might be able to get the images. This is called unsupervised domain adaptation. And if we can collect labels, that's called supervised domain adaptation, which is even better. So we can ask, when can we do learning here? The general thing to keep in mind is that if we're in a covariate shift setting and we have unlabeled data from our target distribution, then we can actually sometimes generalize to our target task, even though we don't have target labels. And this is the setting we're going to be talking about for the rest of the couple of minutes of this talk: we have a covariate shift problem where the prediction task is fundamentally the same, and we ask, how can we generalize? So the easiest, most classic thing to do is reweighting. Let's say we have training data that's 90% frontal images and 10% images taken from the side, and we want to generalize to test data that's 50-50, front and side, right? So how can we do this? Well, let's just reweight the data set, so each frontal-facing image counts for less, and each side-facing image counts for more. We've artificially rebalanced the data. And this gives us an assumption-free way of estimating how well our model will perform on this 50-50 test set. And this applies to everything I've talked about before: if your data is imbalanced and has a minority group, or maybe has shortcuts, maybe we can rebalance it to get rid of all of these problems, right?
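To make the reweighting arithmetic concrete under the 90/10-to-50/50 assumption above: each example's weight is the test probability of its group divided by the training probability. This is a minimal sketch, with the group labels and `loss_fn` as hypothetical stand-ins.

```python
# A minimal importance-weighting sketch for the frontal/side example above.
train_probs = {"frontal": 0.9, "side": 0.1}
test_probs = {"frontal": 0.5, "side": 0.5}

# Weight = p_test(group) / p_train(group): frontal ~0.56, side 5.0.
weights = {g: test_probs[g] / train_probs[g] for g in train_probs}

def reweighted_loss(examples, loss_fn):
    """Unbiased estimate of the 50-50 test loss from the 90-10 training data.
    Each example is assumed to carry a hypothetical .group attribute."""
    total = sum(weights[ex.group] * loss_fn(ex) for ex in examples)
    return total / len(examples)
```

Notice how the side-image weight of 5.0 grows as the training proportion shrinks; as it goes to zero, the weight blows up, which is exactly the failure mode discussed next.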
So are we done? No. Because suppose instead that the training data is 100% men and the test data is 100% women. Then there's no overlap, and if we try to reweight this data, the weights blow up, because we would need to infinitely upweight the women that we don't have in our training data. And this is the fundamental problem with all of these approaches: when you don't have any overlap, your estimates all blow up and go to infinity. And in the real world, everything is non-overlapping, so, usually, these kinds of estimates don't work. But, intuitively, we might think these kinds of tasks are possible. And the reason why you and I and many others think this is possible is the following intuition. Let's say we have training data that's blue images and test data that's orange images. Clearly, there is no overlap between any of these images, right? They're on different color channels. But if we desaturate the images, we can perform prediction on the desaturated images, and we'll get really good performance. And so the intuition is that if we can't distinguish the two domains, because we've desaturated the images, we might do really well. This is the idea behind most of modern domain adaptation: you learn to represent your data in a space that doesn't change when you go from the training to the test distribution. So you measure how much your data changes in this higher-level representation, and if your data is close, then you're going to do really well. And the thing to keep in mind is that your test performance is going to be your training performance plus some sort of distance that measures how different the training and test distributions are. If you can keep that distance small, you're going to do really well. So you can think about this very simply: the test error of a model is the source performance-- how well we do on the training data-- plus the gap between train error and test error. And the idea is that we're going to look at a domain distance such that, no matter what model we pick, we're going to do well, because the distributions look so similar. If the images are desaturated and they look identical, it doesn't matter what the model is-- it performs identically on both. And so if we do well in the source domain and the domain distance is low, we might be able to generalize. And this is really interesting and optimistic, right? These all seem like things that we can measure and think about, and they give us conditions under which we might be able to do well on a test distribution. And there's been a lot of work, over almost two decades now, on these kinds of domain distances and bounds and on how you can learn from unlabeled data. They give great intuition; they let you think carefully about these kinds of problems. But, unfortunately, if you try to actually compute these bounds-- what's my guarantee on test error?-- they're usually vacuous.
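For reference, the bound being described has roughly this shape, in the style of the Ben-David et al. domain adaptation theory (the exact distance term varies from paper to paper, so treat this as an illustrative form rather than the speaker's slide):

```latex
\epsilon_{\text{target}}(h)\;\le\;
\underbrace{\epsilon_{\text{source}}(h)}_{\text{training-domain error}}
\;+\;
\underbrace{d\big(P_{\text{source}},\,P_{\text{target}}\big)}_{\text{domain distance}}
\;+\;\lambda
```

Here $\lambda$ is the error of the best predictor that does well on both domains at once. The trouble is that the computable versions of the distance term tend to be so large that the right-hand side exceeds 1.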
So you'll say things like: the accuracy will be greater than 0, and the error will be less than 1. Which is not super helpful. And just to go over how these things often work in practice: domain distances, the thing I just described, are the basis for a lot of these modern domain adaptation methods. And the key idea is that neural nets are everywhere because neural nets work, and you use them as a way to measure the domain distance. The idea is that you have one part of your model maximizing performance on the training data, and another part of the model making sure that you can't distinguish between the training and test distributions in what's called a bottleneck feature space-- at this level, your training and test data should be indistinguishable and yet still useful for actually doing the task. So we have, on the one hand, this reweighting approach, which has no model dependence, and on the other, these model-based domain distances, which require us to carefully construct a neural network-- but which are the only way we can get these things to work in the real world, since with reweighting, the weights are often infinite. So there's no free lunch: we either need model assumptions or assumptions about overlap between the domains. But if we have one of those, plus unlabeled data from the test domain, we can actually sometimes do well. The other approach, which I'll go over a little quickly because I'm running low on time, I can describe at a very high level with this idea. As I said before, the main issue is that our training distribution and our test distribution are different. If they were the same, we'd be done. But they're not. But what if I told you, here is a list of 100 possible test distributions, and your true test distribution is going to be one amongst these 100? Then we can train a model to do well on all of them, right? We just go through each one of these test sets, and we say our model has to do well on the worst one. And if we can get a model that works on all of them, we know that our model is going to do well on the true test set. So this is thinking about a potential set of test distributions and considering the worst case, and this is what's called a minimax optimization problem. We're going to find the best model-- that's the min part-- that works well over the worst possible potential test set-- that's the max part. And this idea is going to work whenever the true test distribution is contained in this uncertainty set, Q: we're taking the worst case over this big Q, which is the set of potential test distributions. And this fails whenever Q is too small or too big. If Q doesn't contain the real test distribution, you've got no guarantees. If Q is so big that it contains everything, then your model is going to be so pessimistic-- because it has to be prepared for any possible distribution-- that it's just going to predict 50-50, or something vacuous, for all of your inputs.
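In symbols, the objective is the minimum over models of the maximum over distributions in Q of the expected loss. When Q is a finite list of groups, one training step might look like the following minimal sketch; `model`, `optimizer`, `loss_fn`, and the per-group batches are hypothetical, and this is the simple take-the-worst-group variant rather than any particular paper's algorithm.

```python
# A minimal sketch of a minimax (worst-case group) training step, assuming
# hypothetical `model`, `optimizer`, `loss_fn`, and one (x, y) batch per group.
import torch

def worst_group_step(model, optimizer, loss_fn, group_batches):
    """Update the model on whichever group it currently does worst on."""
    group_losses = [loss_fn(model(x), y) for x, y in group_batches]
    worst = torch.stack(group_losses).max()  # the max part of minimax
    optimizer.zero_grad()
    worst.backward()                         # the min part: descend on it
    optimizer.step()
    return worst.item()
```

The choice of groups here is exactly the choice of the uncertainty set Q, which is why picking Q too small or too big matters so much.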
I'm going to skip over a bit of the examples and go to this slide, which says that these kinds of ideas can be applied in each one of the settings I described before-- minority groups in fairness, adversarial examples, and understanding-- by carefully choosing the kinds of worst-case groups. In the case of minority groups, we basically explicitly list out all of the possible minorities that we care about, and we consider the worst-case performance over all of those. In adversarial examples, we know that the images before and after perturbation are close, so we consider all distributions that are nearby each other in pixel space, and then we take the worst case over those. And for shortcuts, what we can do is explicitly construct groups that don't contain some of these shortcuts, enumerate all such groups, and then make sure that these worst-case groups work well. So, for example, if we have a model that relies too much on backgrounds, we construct subgroups of the data that have mismatching backgrounds and objects. So, to wrap up here: the limits of this kind of approach are that if we pick too small a worst-case set, we get no robustness, and if we pick too broad a worst-case set, we get vacuous models. And there's no simple or general principle for designing these losses, even though this approach gives us really nice ways of thinking about and optimizing models for the worst case and getting guarantees. OK, I'm going to wrap up there. If anyone has questions, I would be happy to answer them. I can stay for a little bit longer in the chat if people have questions. All right, maybe we should just thank Tatsu for his really insightful and interesting talk, and then people can run off if they have to go. So thanks, Tatsu. Thank you. Really, a lot of interesting food for thought. I hope that everyone has had their eyes opened with respect to all the different problems that we're seeing, and is hopefully motivated to help solve some of them, because I think there are a lot of interesting open research questions here.
Hi, in this module, I'm going to present the particle filtering algorithm for performing approximate inference in hidden Markov models, which is really useful when the size of the domain of the variables is very large. So let's start with our familiar object tracking HMM. In this case, for every time step, we have a position Hi of an object, which we don't see; instead, we see some noisy sensor readings. The probabilistic story of object tracking is as follows. H1, the first object position, is generated uniformly across all of the values-- 0, 1, 2. Subsequently, H2 is generated conditioned on H1 via the transition distribution, which goes up with probability one-quarter, stays the same with probability one-half, and goes down with probability one-quarter; and similarly for H3. Now, at each time step, we get a sensor reading-- E1, E2, E3-- conditioned on the respective locations, and that's governed by the emission distribution, which takes Hi and goes up with probability one-quarter, stays the same with probability one-half, and goes down with probability one-quarter. So now you multiply all of these local conditional distributions together, and you get one glorious joint distribution over all the object locations as well as the sensor readings. That's our HMM. So now, given this HMM, we can ask all sorts of questions on it. In particular, we looked at filtering and smoothing questions, but particle filtering, as the name might suggest, does filtering, so let's focus on filtering. In filtering, we're asking for the position of the object at a particular time step conditioned on the past. At time step 1, I'm looking only at the evidence at time step 1 and asking, where is this object? At time step 2, I now have two observations, and I ask where the object is at time step 2. At time step 3, I have three observations, and now I'm asking where the object is at time step 3. Now, I could apply the forward-backward algorithm to this scenario, and that would work. But the problem is that if you have a setting where there are many, many location values for Hi-- in our simple example, there are only three, but in practice there might be 100,000-- forward-backward is going to be really, really slow, because its running time scales quadratically with the number of values, and 100,000 squared is not a nice number. The goal of particle filtering is to realize that, well, you might have 100,000 possible values, but only a very small fraction of them are really likely given the data you have. So to start introducing particle filtering, let us revisit beam search, because structurally, particle filtering and beam search are analogous. In beam search, remember, the idea was to keep track of at most K partial assignments, which we're now going to call particles, in the particle filtering lingo. Beam search starts with a candidate set of only one assignment, the empty assignment, and it goes through each variable in turn-- 1 to n. For each of the partial assignments to H1 through Hi minus 1, I take that assignment and consider all possible values I can assign to Hi, extending the assignment. So now we get a bunch of assignments over H1 through Hi. And now I compute the weight of each of these candidate particles, and I just take the K highest-weight particles. So let's recall beam search on this example here. Here we have our object tracking HMM.
We have the variables and all the local conditional distributions, and I'm observing 0, 0, 2. Beam search starts out by extending to variable H1, and it produces 0, 1, and 2 as the possible particles, with weights given by these probabilities here. And we prune, which does nothing, because K is 3. Next, I'm going to extend to H2. So each of these particles multiplies into three particles, and now the weight of each particle is going to include the new factors, which are the transition into H2 and the emission of the second observation given H2. Now I'm going to prune down to 3. Then I extend to H3, and I prune, and at the end, I get a set of particles. So here I have 0, 1, 2. I have 0, 1, 1. I also have 1, 2, 2. And each particle has some weight. Normally, beam search is presented in the context of finding maximum weight assignments, so in this case, you would just return 1, 2, 2. But in particle filtering, we're interested in answering filtering queries. So what we're going to do instead is normalize the weights over all the particles to get an approximate distribution over assignments. And now we're going to pretend this is the joint distribution over H1 to Hn given the evidence, and I can just sum probabilities to get any approximate smoothing or filtering query I like. So this is fine, but it has two problems. One is that the extend step is slow, because it requires considering every possible value of Hi. Sometimes you can be clever: you don't have to enumerate all the values in the domain, only the values that are going to have positive weight, but even that could be a lot. The second problem is that we are greedily taking the K particles with the highest weight, and this, as we'll see later, doesn't provide enough diversity. So particle filtering is going to solve both of these issues, and it's going to contain three steps to replace the two-step extend-and-prune procedure of beam search: propose, weight, and resample. We're going to go through them in turn. The first step is propose. In general, you should think about the set of particles as approximating a certain distribution: the filtering distribution, which is the probability of the variables you are considering conditioned on the evidence so far. Suppose we have just two particles here, 0, 1 and 1, 2. The propose step takes each of these particles, and I just sample a value for H3, the next variable, given the transition distribution. Remember, it goes up or down with probability one-quarter each and stays the same with probability one-half. That produces these extended particles, conditioned on the same evidence. So, for example, I take 0, 1, and it produces this particle, extended with 1, with probability one-half, because I'm just keeping the value the same here. And I take this particle, and I extend it to 2, also with probability one-half. Now, this is a randomized algorithm, so I could have sampled other values here, but let's just go with 1 and 2. So you should think about these extended particles as a guess as to what H3 is going to be. But we need to fact-check this guess with evidence.
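In code, the propose step is just one draw from the transition distribution per particle. Here is a minimal sketch with the up/stay/down probabilities of this HMM hard-coded; the boundary behavior at the edges of the domain is glossed over as my own simplification.

```python
# A minimal sketch of the propose step: extend each particle by sampling the
# next hidden value from the transition distribution (boundaries ignored).
import numpy as np

rng = np.random.default_rng(0)

def propose(h_prev):
    """Sample H_i given H_{i-1}: down with 1/4, stay with 1/2, up with 1/4."""
    return h_prev + rng.choice([-1, 0, 1], p=[0.25, 0.5, 0.25])

particles = [1, 2]                            # current guesses for H2
particles = [propose(h) for h in particles]   # proposed guesses for H3
```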
And so the weighting step is going to assign a weight to each particle, and that weight is the probability of the new evidence I got, conditioned on H3. This produces a set of new weighted particles representing the distribution over H1, H2, H3 conditioned on all the evidence so far. So let's work out this example. For the first particle, I have h3 equals 1, and h3 equals 1 generates the evidence E3 equals 2 with probability one-quarter. So I have a weight of 1/4 attached to the first particle. The second particle has h3 equals 2 (there should be a 2 here on the slide, actually). Looking up the table, that has weight 1/2, because the probability of generating a 2 given a 2 is 1/2. So I have a weight of 1/2 on this particle. Now, at this point, I have a set of particles that represent the advanced filtering distribution. But notice that the weights are not all the same: some weights are small, some weights are big. And in particular, the particles with small weight are kind of wasting space. Think about the K particles as a limited resource for representing this distribution. If you have a particle with weight 0.0001, or maybe even 0, then certainly we shouldn't be wasting one of the valuable K slots on that value. So what we're going to do is reallocate our resources via resampling. In the resampling step, we normalize these weights and draw K samples. Normalizing these weights produces the distribution 1/3, 2/3. And now I draw K samples from that distribution to redistribute the particles. The resulting particles are still going to represent the same distribution, just in a slightly different way, without weights. So I sample: maybe I get 1, 2, 2-- I get that with probability 2/3. I sample again, and maybe I get the same particle again, with probability 2/3. Of course, this is a randomized algorithm, so I could have gotten the first one and the second one, or the second one and the first one, or the first one twice. Now, you might wonder, why are we resampling? Why leave the result of the algorithm up to chance? To see why, consider the following setting. We have a distribution over a bunch of possible locations, and suppose that distribution were very close to uniform. Maybe you can see that there's slightly higher probability in the middle, but it's pretty flat. Now, if you did beam search, which takes the K positions with the highest weight, you would end up with this: all the particles clustering around the middle, which is really not representative of the distribution, because you have all of these positions out here with non-negligible probability mass that get no support. It's kind of like putting all your eggs in the same basket-- or the same K baskets, I guess. Instead, if you sample from this distribution K times, you get something more like this, which I would argue is more representative of the distribution. So in cases where most of the weight is on a few locations, sampling versus taking the top K is not really a big deal. But when there's high uncertainty, as in this example, sampling is really important, because it allows you to maintain some diversity. So now we're ready to present our final particle filtering algorithm, which is, again, structured very similarly to beam search.
So like beam search, we initialize with the empty assignment. For each time step, we're going to propose, weight, and resample. In the propose step, we take each assignment to H1 through Hi minus 1, look at the transition distribution, generate one possible assignment to Hi, and just take that. So in beam search, I consider all of them, which can result in a blowup, but in particle filtering, I only look at one. Then I'm going to weight the particles based on the evidence, which is this emission distribution. And finally, I'm going to redistribute my resources by normalizing this weight distribution and drawing K particles independently from it, OK? So let's see a demo here. Here I have my object tracking HMM, and I'm going to run particle filtering with 100 particles instead of beam search. I start by extending to just the first variable. Now I have 100 particles: 38 of them are 0, 33 of them are 1, and 29 of them are 2, and these are the weights. I resample, and now I redistribute probability to 0 and 1 with these particle counts. Now I'm going to extend, and notice that before, I had 73 particles at 0; now 51 of them go to H2 equals 0, and 22 of them go to H2 equals 1. Then I'm going to resample, which redistributes the particles again. Now I propose and reweight, and the particles are all over the place, and I redistribute mass so that the particles are used more effectively. OK, so at the end, I have 100 particles covering all of these different assignments. I can simply count the fraction of them that satisfy various values of H3, and I get my approximate filtering distribution over H3 conditioned on the evidence. OK, so there are two ways to make particle filtering more efficient. We've cast particle filtering in terms of generating a distribution over complete assignments to all of the variables. But if you're only interested in filtering queries, which look at the last variable, then instead of storing whole assignments, we only need to keep the value of the last Hi. So I'm going to only look at H3, because this is sufficient to continue the algorithm forward. And furthermore, if you have multiple particles with the same value, you can just store the counts, as we saw in the demo: one occurs twice, and two occurs three times.
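Putting the three steps together, here is a sketch of the full particle filter that keeps only each particle's latest hidden value and summarizes with counts, as just described. It again uses the toy noise model, which is an assumption; a robust implementation would also guard against all weights being zero.

```python
import random
from collections import Counter

def particle_filter(evidence, domain, K, start, transition, emission):
    particles = [random.choices(domain, weights=[start(h) for h in domain])[0]
                 for _ in range(K)]
    for t, e in enumerate(evidence):
        if t > 0:
            # Propose: sample H_t given each particle's value of H_{t-1}.
            particles = [random.choices(domain,
                         weights=[transition(h, v) for v in domain])[0]
                         for h in particles]
        # Weight by the emission probability, then resample K particles.
        weights = [emission(h, e) for h in particles]
        particles = random.choices(particles, weights=weights, k=K)
    return Counter(particles)   # counts approximate the filtering distribution

noise = lambda a, b: {0: 0.5, 1: 0.25}.get(abs(a - b), 0.0)
print(particle_filter([0, 2, 2], domain=[0, 1, 2], K=100,
                      start=lambda h: 1 / 3, transition=noise, emission=noise))
```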
So now, let's visualize particle filtering in a more realistic, interactive object tracking setting. Here we have a grid, and we have an object that's moving in this grid, and we're trying to determine its location. The HMM is going to have a transition distribution that places a uniform distribution over moving north, south, east, west, or staying put. And the emission distribution is going to put a uniform distribution over locations that are within three steps, either vertically or horizontally. You can see the definition of this emission distribution, which only depends on the x distance and the y distance, and it places a uniform distribution over basically a box. So if I hit Control+Enter here, we can see the observations. They're very noisy, and we're trying to guess where the object is — I don't know, it's somewhere. So what we're going to do is run particle filtering. Let's say we have 10,000 particles. We hit Control+Enter again. And now what we're going to see is a red blob, which represents where the particles are, with the intensity representing the number of particles at that particular location. So this is our best guess of where the object is, OK? You can see how well we're doing by showing the true position. So let's see where this object actually is, and we'll see that we're tracking it rather well. Sometimes, you'll notice, it might mess up, but on the whole, it's pretty good. Also notice that the red blob, where it thinks the object is, is not fooled by where the observation is, because there's enough noise here. What the particle filter is essentially doing is smoothing out the noise: the noise is jumping around a lot, but the filter keeps tracking, and it knows that the object can't be teleporting and is moving by at most one step each time step. You can play with this demo a bit more. We've also implemented, instead of this box noise, Gaussian noise, which looks similar but is a spherical blob. You can also play with this really weird-looking noise, which places uniform distributions over all positions on this lattice that have a certain parity. OK, so in summary, we've presented the particle filtering algorithm, which allows us to answer filtering questions of the form: where is this object at a particular time step, given the evidence so far? The key idea is using particles to represent an approximate distribution. Remember, particle filtering has three steps, which are used to advance the set of particles. First, we propose, where we take each particle and transition it to the next time step; this is a guess of where the object is going to be. Then we fact-check our guess by reweighting the particles based on the emission distribution and what we actually saw. And then we reallocate our resources by resampling, which allows the particles to occupy the regions with higher weight. Unlike the forward-backward algorithm, particle filtering allows us to scale up to cases where there are a large number of locations. And unlike beam search, it allows us to maintain better particle diversity, especially in situations where the distribution is close to uniform. Particle filtering is also called sequential Monte Carlo, and there are many, many more sophisticated extensions that I haven't covered. In particular, particle filtering works for general factor graphs, not just hidden Markov models. I encourage you to read up and learn more about it. That's all.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Machine_Learning_10_Differentiable_Programming_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to briefly introduce the idea of differentiable programming. Differentiable programming runs off with the ideas of computation graphs and backpropagation that we developed for simple neural networks. There's really enough to say here to fill up an entire course, at least, so I'm going to try to keep things pretty high level, but I will try to highlight the power of composition. Differentiable programming is closely related to deep learning. I've adopted the former term as an attempt to be more precise in highlighting the mechanics of writing models as you would code. If you look around at deep learning today, there are some pretty complex models, which have many layers, attention mechanisms, and residual connections, to name a few. This can be quite overwhelming at first glance, but when you look closer, you notice that these complex models are actually composed of functions, and these functions themselves are composed of smaller functions. This is the programming part of differentiable programming, which allows you to build up an increasingly sophisticated model without losing track of what's going on. So let's begin with our familiar example, the three-layer neural network. Remember that in a three-layer neural network, we start with our feature vector — in this case, a six-dimensional vector — and we left-multiply by a matrix. I've drawn some lines here to help us interpret this matrix as a set of rows, where each row corresponds to a hidden unit. I take the dot product of each row with the input vector to produce a hidden vector of dimension 4, I add a bias term, and then I apply an activation function element-wise — for example, the ReLU or logistic. Now I have a vector, and I can do the same thing again: apply a matrix, add a bias term, apply an activation function. Then I apply a matrix which happens to be a vector, so I get a scalar, and I add a scalar bias term. I get a score, which I can use to drive regression, or take the sign of to drive classification. So what I want to do now is factor out this complex-looking expression into a reusable component, which I'm going to call FeedForward. We're going to see a lot of these box diagrams, which represent functions that we can reuse and that have a nice interpretation. The FeedForward function takes in an input vector x and produces an output vector, which could be of a different dimensionality. The way to interpret what FeedForward is doing is that it performs one step of processing: it takes the input vector, multiplies it by a matrix, adds a bias term, and applies an activation function, OK? So this is a function, or a program, but unlike normal programming, it's underspecified, because the red numbers here are parameters, private to this function, which are going to be set and tuned later via backpropagation. Now we can write our three-layer neural network using FeedForward: the score is equal to — you take phi of x and apply FeedForward, FeedForward, FeedForward. You can write this as FeedForward cubed to be more compact. So this is a very compact way of writing something that would otherwise be quite complicated.
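To make the FeedForward building block concrete, here is a minimal sketch in numpy. The shapes, the random initialization, and the use of the ReLU are illustrative assumptions; in practice W and b are the private parameters tuned by backpropagation.

```python
import numpy as np

class FeedForward:
    def __init__(self, in_dim, out_dim, activation=lambda z: np.maximum(z, 0)):
        self.W = np.random.randn(out_dim, in_dim) * 0.1   # weight matrix
        self.b = np.zeros(out_dim)                        # bias term
        self.activation = activation                      # e.g. ReLU

    def __call__(self, x):
        # One step of processing: activation(W x + b).
        return self.activation(self.W @ x + self.b)

# The three-layer network: FeedForward applied three times to phi(x).
layers = [FeedForward(6, 4), FeedForward(4, 4),
          FeedForward(4, 1, activation=lambda z: z)]   # last layer: raw score
h = np.random.randn(6)   # stand-in for the feature vector phi(x)
for layer in layers:
    h = layer(h)
print(h)   # a scalar score (as a length-1 array)
```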
Now let's suppose you want to do image classification. We need some way of representing images. The FeedForward function that we just introduced takes a vector as input, and we can represent an image as a long vector by, for example, concatenating all the rows. But then we would need a huge matrix to transform this vector, resulting in a lot of parameters, which may make life difficult. The problem here is that we're not really using the spatial structure of images. For example, if I just permuted all the elements of this vector and retrained, I would basically get an identical model — it's not paying attention to which pixels are close by. To fix this problem, we introduce convolutional neural networks, which are a refinement of the fully connected neural network. Here is an example of a ConvNet in action. Here's a car, and you can see that it goes through a number of layers, and over time it computes increasingly abstract representations of the image. At the end, you get a vector representing the probabilities of the different object categories. If you want to play with ConvNets, you can click here for Andrej Karpathy's excellent demo, where you can create and train ConvNets in your browser. Another comment is that we're going to introduce ConvNets for 2D images, but they can also be applied to text or sequences, which are 1D, or videos, which are 3D. ConvNets have two basic building blocks. We're not going to go through the details — you can take CS231 if you want to learn all about ConvNets — but instead I'm going to focus on the interface and show how these modules compose. The first is Conv. Conv takes an image, and the image is represented as a volume, which is a collection of matrices, one for each channel: red, green, blue. Each matrix has the same dimensionality as the image, height by width. What Conv is going to do is compute another volume of a slightly different size: usually the height and width of this volume are equal to or slightly smaller than the input volume, and the number of channels is somewhat different. The way Conv computes this volume is via a sequence of filters, and intuitively, what it's trying to do is detect local patterns in the image. Here is one filter, and how it works is that I'm going to slide this filter across the image. If I put the filter here, aligned with the first pixels of the image, I compute the dot product between the 12 numbers here and the 12 numbers here, and I get a single number, which I write into this entry. I slide the filter over a little bit and write into the second entry, and so on. The second filter I'm going to use to fill up the second output channel — so the number of filters is the number of output channels. OK, so that's all I'm going to say about Conv. The second operation is MaxPool, which again takes an input volume and produces a smaller output volume with the same number of channels. For every slice through the volume, it slides a little max operation over every 2x2 or 3x3 region; the max over these four numbers is used to fill this entry, and so on. That's all I'm going to say about MaxPool. If you want to go into the details, you can check out this demo, or you can learn more in 231. But again, I want to highlight that there are these two modules: one for detecting patterns, and one for aggregating, to reduce the dimensionality.
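Here is a sketch of the two building blocks on a single channel. Real Conv layers handle multiple input and output channels, strides, and padding; this stripped-down version is only meant to show the sliding dot product and the sliding max.

```python
import numpy as np

def conv2d(image, filt):
    H, W = image.shape
    h, w = filt.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product between the filter and the patch under it.
            out[i, j] = np.sum(image[i:i + h, j:j + w] * filt)
    return out

def max_pool(image, size=2):
    H, W = image.shape
    out = np.zeros((H // size, W // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Max over each size-by-size region.
            out[i, j] = image[i * size:(i + 1) * size,
                              j * size:(j + 1) * size].max()
    return out

image = np.random.randn(8, 8)
print(max_pool(conv2d(image, np.ones((3, 3)))).shape)   # (3, 3)
```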
With these two functions, along with FeedForward, we can now define AlexNet, which was the seminal CNN from 2012 that won the ImageNet competition and really transformed computer vision. How this works is: I start with my input image, apply a convolutional layer, apply MaxPool, apply another convolutional layer, apply MaxPool, apply three more convolutional layers, apply MaxPool, and then apply three layers of FeedForward. So in one line I have AlexNet. Now, of course, I've underspecified a couple of things here. One is that I haven't specified the parameters — those are to be learned, and each of these functions holds a private set of parameters. The second thing is that I also haven't specified the hyperparameters, which are the number of channels, the filter sizes, and so on, which are actually pretty important for getting good performance. But I just wanted to highlight the overarching structure and the idea that you can compose in a fairly effortless way. So now let's turn our attention to natural language processing. Here is a motivating example. Suppose we want to build a question answering system. We have a paragraph — it's from Wikipedia — we have a question, and we want to select the answer from that paragraph. This happens to be from the SQuAD question answering benchmark. Let's just read this: in meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The question is, what causes precipitation to fall, and the answer is gravity. To do question answering, you have to do a fair amount of processing. You somehow have to relate the question with the paragraph, but it's not an exact match. Some of the words match, like precipitation, but some of the relations are more subtle, like "causes" being somehow related to "product". There's also the fact that some words are ambiguous: "product" can mean multiplication or an output. So there's a lot of processing that needs to happen, and it's hard to specify it in advance. First things first: words are discrete objects, and neural networks speak vectors. So whenever you're doing NLP with neural nets, you first have to embed words or, more generally, tokens. We're going to define an EmbedToken function that takes a word, or a token, x and maps it to a vector. All this function does is look up the vector in a dictionary that has a static set of vectors associated with particular tokens. This is fine: if you have a sequence of words, you can just embed each word into a vector to get a sequence of vectors. There's one problem, which is that the meaning of words and tokens depends on context, so this representation of the sentence is not going to be a particularly sophisticated one. What we're going to do is define an abstract function. Borrowing terminology from programming, an abstract function is something that has an interface but not an implementation. A SequenceModel is going to be something that takes a sequence of input vectors and produces a corresponding sequence of output vectors, where each vector in the output sequence is processed with respect to the other elements. In other words, I want to contextualize these vectors using the sequence model. I'm going to talk about two implementations of sequence models: one is recurrent neural networks, and one is transformers.
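Before moving on to those two implementations, here is a quick sketch of the EmbedToken lookup described above. The vocabulary, the dimension, and the random vectors are placeholders; real embeddings are learned parameters.

```python
import numpy as np

dim = 4
embeddings = {w: np.random.randn(dim)
              for w in ["what", "causes", "precipitation", "to", "fall"]}

def embed_token(token):
    # A static dictionary lookup from token to vector.
    return embeddings[token]

def embed_sequence(tokens):
    # A sequence of words becomes a sequence of vectors.
    return [embed_token(t) for t in tokens]

print(len(embed_sequence("what causes precipitation to fall".split())))  # 5
```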
Historically, recurrent neural networks have been around since the early '90s, and since 2011 or so, they became the dominant paradigm for doing deep learning in NLP. Transformers came out in 2017 and have really transformed the landscape of deep learning and NLP. An RNN, or recurrent neural network, can be thought of as reading a sentence left to right — that's the intuitive way to think about it. We have a word, which gets mapped to a vector, and that produces some hidden state. Then I read a second input vector, and I update this hidden state, along with the thing I just read, into a new hidden state. Then I read another input vector, update the state, and repeat again and again, OK? At the end of the day, I have a sequence model, because it maps an input sequence into an output sequence. Notice that each vector here now depends not just on the corresponding input vector but on all the previous ones: if you look at h3, h3 depends on x3, x2, and x1, following this computation graph. So the intuition, again, is reading left to right, updating the hidden state as you go along — it's kind of like a memory. One thing I haven't specified is the function that takes an old hidden state and an input and produces the new hidden state, so I'm going to do that next. There are two types of implementations I'm going to talk about. One is a simple RNN. The contract here is: I have an old hidden state and an input, and I want to generate a new hidden state of the same dimensionality. The way a simple RNN works is: I take the hidden state, multiply it by a matrix; take the input, multiply it by a matrix; add these two; and apply an activation function. So it's fairly simple. One other way to think about this is that this is really the FeedForward function applied to the concatenation of h and x. OK, so one problem with a simple RNN is that it suffers from the vanishing gradient problem: if you have long sequences, then the gradients start vanishing. LSTMs, or long short-term memory networks, were developed to solve this problem. The interface is the same, and the implementation is rather involved, so I'm not going to explain it. But intuitively, you should black-box this and think about LSTMs as just a way to update the hidden state given a new input, but without forgetting the past. Remember, up here, for a simple RNN, we can think of it as FeedForward on x and h, which are treated equally. LSTMs privilege h and make sure that h doesn't get forgotten as it's threaded through the updates. OK, so now we have one sequence model, an RNN, which produces a sequence of vectors, and the number of vectors depends on how long the input sequence is. Suppose we want to do classification; we need to somehow collapse that into a single vector. So I'm going to define this function Collapse, which takes a sequence of vectors and returns a single vector. You can intuitively think about this as summarizing the collection of vectors as one. There are three common things you can do: you can simply take the first vector, you can take the last vector, or you can take the average of all the vectors. If you're doing text classification, you probably want to pick the average, to not privilege any individual word. But as we'll see later, if you're trying to do language modeling, you want to take the last.
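Here is a sketch of the simple RNN update and the Collapse function just described. The shapes, the random initialization, and the tanh activation are assumptions for illustration.

```python
import numpy as np

class SimpleRNN:
    def __init__(self, hidden_dim, input_dim):
        self.V = np.random.randn(hidden_dim, hidden_dim) * 0.1
        self.W = np.random.randn(hidden_dim, input_dim) * 0.1

    def __call__(self, xs):
        h = np.zeros(self.V.shape[0])
        hs = []
        for x in xs:                                # read left to right
            h = np.tanh(self.V @ h + self.W @ x)    # update the "memory"
            hs.append(h)
        return hs                                   # one output per input

def collapse(hs, mode="average"):
    # Summarize a sequence of vectors as one: first, last, or average.
    if mode == "first":
        return hs[0]
    if mode == "last":
        return hs[-1]
    return np.mean(hs, axis=0)

rnn = SimpleRNN(hidden_dim=4, input_dim=4)
print(collapse(rnn([np.random.randn(4) for _ in range(5)])).shape)   # (4,)
```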
So here is an example text classification model that we can develop. The score for, let's say, binary classification is going to be equal to: take the input sequence of tokens, embed all the tokens into a sequence of vectors, and apply a sequence model, for example a simple RNN. You can do this three times, which gives you depth, just like we talked about with FeedForward networks. Then you collapse that into a single vector and take a dot product to get a number out. These types of functions, where the input and output have the same type signature, are really handy, because you can compose them with each other and get multiple steps of computation. So recurrent neural networks generally work fairly well, but they suffer from one problem, which is that they're fairly local. That's the problem we're going to try to address with transformers. Introducing transformers is fairly involved, so I'm going to step through a few things before actually defining it. The core part of a transformer is the attention mechanism. The attention mechanism takes in a collection of input vectors and a query vector, and it outputs a single vector. Intuitively, what attention is doing is processing y by comparing it to each of these x's. OK, so mathematically, what this is doing is: you start with the query vector, and I multiply by a matrix to reduce its dimensionality, in this case from 6 to 3. I also take X transpose, where each row here is one of the input vectors: x1, x2, x3, x4. I reduce its dimensionality to 3 as well. Now I can take the dot products between these projected x's and the projected y. That gives me a four-dimensional vector of dot products, intuitively measuring the similarity between the x's and the y. Now I can take those scores and turn them into probabilities by taking a softmax. A softmax exponentiates the scores and normalizes them into a probability distribution. So now I have a distribution over the input vectors x1, x2, x3, x4 — it's a four-dimensional vector. I can use those probabilities, those weights, when I multiply by X, to take a weighted combination of the columns of X here. For intuition, if one of the inputs has a very high probability — let's say the distribution is 0, 0, 1, 0 — then I'm just going to pick out the third input vector. In general, this is a distribution, so this is softly picking out which input vector is similar to y. Finally, I reduce the dimensionality to some four-dimensional output. So, similarity can be a multifaceted thing, and one thing the transformer does is allow us to use multiple attention heads. I'm going to repeat this process: take the query vector, take the input vectors, compare them to get a distribution over the input vectors, and use that distribution to reweight the input vectors. So I'm softly selecting out an input vector, and I multiply by a matrix to reduce the dimensionality. I've done this twice, but in general you can do it any number of times — 4 or 16. Now I concatenate these vectors: I have a four-dimensional vector from this computation and a four-dimensional vector from this computation, and I can concatenate them into an eight-dimensional vector. And now I can reduce the dimensionality back to the original dimensionality of the inputs. OK.
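Here is a sketch of a single attention head: compare a query y against the inputs X and take a probability-weighted combination of them. The projection matrices would be learned; here they're random placeholders, and the output projection is kept at dimension d so the result can later be used with residual connections.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention(X, y, Wq, Wk, Wv):
    q = Wq @ y                 # project the query down (6 -> 3 in the text)
    keys = X @ Wk.T            # project each input down, one row per input
    probs = softmax(keys @ q)  # distribution over the n inputs
    return Wv @ (probs @ X)    # weighted combination, then a final projection

n, d, k = 4, 6, 3
X = np.random.randn(n, d)                  # input vectors as rows
y = np.random.randn(d)                     # query vector
Wq, Wk = np.random.randn(k, d), np.random.randn(k, d)
Wv = np.random.randn(d, d)                 # keep output dim = d for stacking
print(attention(X, y, Wq, Wk, Wv).shape)   # (6,)
```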
So that was a very involved process, but at the end of the day, you can think about this as taking y, comparing it with the x's, and selecting out the one that's most similar, doing some dimensionality reduction in the process. OK, so that's attention. The transformer uses something called self-attention, which means that the query vectors are actually going to be the input vectors themselves. Self-attention takes a sequence of input vectors, and it outputs a corresponding sequence of output vectors, where for the first output vector, I stick x1 in as the query vector y and compute the attention, and then x2 and x3 and x4. Each of these output vectors is comparing a particular input vector with the rest of the input vectors and doing some processing. In other words, I've generated a sequence of vectors where all n squared pairs of positions are allowed to communicate with each other directly. In contrast, with an RNN, you have representations that have to proceed step by step, and the number of steps is the length of the sequence, which causes these long chains that prevent fast propagation of information; attention solves this problem. One side comment is that I'm speaking very vaguely and intuitively about these things, trying to provide as much intuition as possible. And you can't really be more precise, because I'm, again, not specifying the actual computation — I'm only specifying the scope of possible computations that can be done once the parameters are learned from data. OK, so that's the attention mechanism. You can think about self-attention as a sequence model that takes an input sequence and contextualizes the input vectors into output vectors. There are two other pieces I need to talk about before I can fully define the transformer: layer normalization and residual connections. These are really technical devices to make the final neural network easier to train. I'm going to package them up into something called AddNorm, and it also has the type signature of a sequence model: I take an input sequence of vectors and spit out the corresponding contextualized vectors. The intuition behind this is: I'm going to apply f to x safely. Let me explain what that means. AddNorm of f and x is — first, I take x and apply f to it. So why is that not good enough? Well, remember that these functions are underspecified, so at the beginning of training, they're basically not doing anything — they're basically junk. And if this is junk, then anything I build on top of it is also going to be pretty junky. So what I want to do is add a residual connection. A residual connection is a kind of escape hatch that allows x to be propagated through verbatim. That means if f is junk, at least I have x. Then I'm going to add a LayerNorm function on top of this. Layer normalization is just a way to make sure that this vector is not too big or too small, because big vectors and small vectors result in exploding or vanishing gradients, which stall training or make training diverge. Specifically, what LayerNorm does on a single vector is treat its entries as a set of elements, subtract the mean of those elements, and divide by the standard deviation, to standardize the magnitude of the vector. OK, so in summary, AddNorm with a particular function is just applying f to x safely. OK.
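Here is a sketch of LayerNorm and AddNorm. The learned gain and bias of real layer normalization are omitted for simplicity.

```python
import numpy as np

def layer_norm(v, eps=1e-5):
    # Standardize the elements of a single vector.
    return (v - v.mean()) / (v.std() + eps)

def add_norm(f, x):
    # Residual connection (x passes through verbatim), then normalize:
    # "apply f to x safely".
    return layer_norm(x + f(x))

x = np.random.randn(6)
print(add_norm(lambda v: np.zeros_like(v), x))   # even if f is "junk", x survives
```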
So now I'm finally ready to define the TransformerBlock. This is, again, a sequence model that takes a sequence of input vectors and spits out a contextualized sequence of output vectors, intuitively processing each xi in context. There's only one line here — we've done most of the hard work already. The TransformerBlock, on a sequence of vectors x, is: you apply attention, which allows all the vectors to talk to each other, and then you normalize, to do this safely. Finally, you apply FeedForward to each individual resulting vector independently, and then you also normalize, to do this safely. So that's it for a TransformerBlock. Now we have enough that we can actually build up to BERT, which was this complicated thing that I mentioned at the beginning. BERT is a large unsupervised pretrained model, which came out in 2018 and has really transformed NLP. Before, there were a lot of specialized architectures for different tasks, but BERT was a single model architecture that worked well across many tasks. This is the way it works for question answering: you take the question, you concatenate it with the paragraph, and that gives you a sequence of tokens. What BERT does on a sequence of tokens is embed the tokens and then apply the TransformerBlock 24 times. Again, the nice thing about having a TransformerBlock where the input and output have the same dimensionality and type is that you can just layer it on and get much deeper networks. OK. So at the end of the day, BERT gives you a sequence of vectors which are highly contextualized and nuanced and contain a lot of rich information about the sentence. From there, you can either use it to drive, let's say, binary classification directly by collapsing the vectors into one vector, or you can use it to select out an answer to the question — and I'm not going to go into the details of how that works. So far, we've talked about how to design functions that can process a sentence — a sequence of tokens or vectors — but we can also generate new sequences. The basic building block for generation is what I'm going to call GenerateToken: you take a vector x and you generate a token y. This is kind of the reverse of EmbedToken, which takes a token and produces a vector. The way GenerateToken works is that it actually uses EmbedToken as a subroutine: it looks at all the possible candidate words one could generate, embeds those too, and takes the dot product with x to get some sort of similarity between the vector and each potential candidate. Now we have some scores; we apply the softmax to get a distribution over possible words, and then we can sample from that probability distribution. So, building on top of GenerateToken, we can do language modeling, where the input is a sequence of words and the output is the next word. This is actually fairly simple, since we already have essentially all the tools. The language model on x is: you take x, you embed the tokens, and — the crucial step — you stick them through a sequence model. Remember, a sequence model does the fancy stuff and turns this sequence of primitive vectors into contextualized vectors, which contain more information. Then it collapses them, and this time, you generally want to use the last vector, the one closest to the word you want to generate next. That gives you just one vector, which you can use to generate a token. OK. Finally, we can take language models and build on top of them to create what is known as a sequence-to-sequence model. This is perhaps one of my favorite interfaces, because it's so versatile. The basic idea is that you have an input, which is a sequence, and you're trying to generate another sequence, which is the output. Sequences are very general: you can use them to encode basically any sort of discrete output. The way we're going to do this is just by using a language model. Remember, a language model takes a sequence and predicts the next token. So I can start with x and query the language model to generate the next token. Then I attach this token to the history, query the language model again to generate the next token, and so on and so forth until I'm done. This is by and large how a lot of the state-of-the-art methods work for, for example, machine translation — generating a translated sentence given the input sentence — or document summarization or semantic parsing. Each of these can be framed as a sequence-to-sequence task, based, usually these days, on Transformers and BERT-like models.
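Here is a sketch of the TransformerBlock, reusing the attention, layer_norm, and FeedForward sketches above. It assumes the attention output dimension matches the input dimension d, so the residual connections line up.

```python
import numpy as np

def transformer_block(X, Wq, Wk, Wv, ff):
    # Self-attention: each row of X serves as the query in turn.
    A = np.stack([attention(X, x, Wq, Wk, Wv) for x in X])
    X = np.stack([layer_norm(x + a) for x, a in zip(X, A)])   # AddNorm
    # Per-position FeedForward, again wrapped in AddNorm.
    return np.stack([layer_norm(x + ff(x)) for x in X])

n, d, k = 5, 6, 3
X = np.random.randn(n, d)
Wq, Wk = np.random.randn(k, d), np.random.randn(k, d)
Wv = np.random.randn(d, d)
ff = FeedForward(d, d)
print(transformer_block(X, Wq, Wk, Wv, ff).shape)   # (5, 6): same type in
# and out, so blocks can simply be stacked, as in BERT's 24 layers.
```

And here is a sketch of the sequence-to-sequence generation loop just described, where `language_model` is an assumed function mapping a token sequence to the next token; the toy stand-in at the bottom is purely for illustration.

```python
def generate(x_tokens, language_model, stop_token="</s>", max_len=20):
    history = list(x_tokens)
    output = []
    while len(output) < max_len:
        y = language_model(history)   # predict the next token
        if y == stop_token:
            break
        output.append(y)              # emit it...
        history.append(y)             # ...and feed it back into the history
    return output

lm = lambda hist: "ok" if len(hist) < 5 else "</s>"   # toy "language model"
print(generate(["le", "chat"], lm))                   # ['ok', 'ok', 'ok']
```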
OK, so that was a really quick, high-level whirlwind tour of different types of differentiable programs from deep learning. We started with FeedForward networks, which in hindsight seem very simple. Then we looked at images and convolutional neural networks, which were built on Conv layers, MaxPool layers, and also FeedForward. The nice thing about packaging this into a module is that FeedForward is actually used in transformers and other places as well. For text and sequences, we first have to embed them into a sequence of vectors, and then we have two choices: we can either use recurrent neural networks, or we can use transformers, which are based on attention. We can use sequence models and collapse their output into a vector to drive classification decisions, or we can use them to generate new sequences. There are many details that were glossed over; in particular, some of the architectures have been simplified, so I encourage you to consult the original sources if you want the full gory details. Another thing I haven't talked about is learning any of these models. It's going to be some variant of stochastic gradient descent, but there are often various tricks needed to get it to work. But maybe the final thing I'll leave you with is the idea of differentiable programming itself: all of these complex models are built out of modules. Even if I didn't explain all the details, I think it's really important to pay attention to the type signatures of these functions, as well as an intuitive idea of what each of them is doing. OK, so that ends this module. Thanks for listening.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_12_Best_Practices_Stanford_CS221_AI_Autumn_2021.txt
So we've spent a lot of time talking about the formal principles of machine learning. In this module, I'm going to talk more about the empirical aspects of machine learning practice. Recall the three design decisions for a machine learning algorithm: first you set up the hypothesis class, then the objective, and then the optimization algorithm. Each of these design decisions itself involves a bunch of choices. For the hypothesis class, you have to specify the feature extractor phi — linear features, quadratic features — and you also have to specify the architecture: do you use a linear predictor, a one-layer neural network, or a two-layer neural network, and how many hidden units do you have if you use a neural network? For the training objective, there's the question of what the loss function should be — the hinge loss or the logistic loss — and what about regularization? If you use regularization, what should its strength be? For the optimization algorithm, even vanilla stochastic gradient descent has two hyperparameters: one is the number of epochs, and the other is the step size. Here it's a constant, but maybe you want it to be decreasing, or you want to use a fancier adaptive step size rule like AdaGrad or Adam. If you're training deep neural networks, there are more things to think about. There's initialization. How much noise do you add in training? What batch size do you use for stochastic gradient descent — 1, 4, 16? What about using a dropout rate to guard against overfitting? So quickly you see that the design space becomes quite big, and it's really a choose-your-own-adventure. Some of these design decisions can be made based on principles: for example, if you believe that your data has some sort of periodic structure, you can add periodic features. But many, if not most, of the design decisions are really unclear, and you sometimes just want an automatic way for these decisions to be made. Each of these design decisions is called a hyperparameter. Hyperparameters are the design decisions that need to be made before running the learning algorithm. So how do you choose them? How about we choose the hyperparameters to minimize the training error? This is a really bad idea, because the optimum would be to just include all the features, use no regularization, train forever, and really drive the training loss down, down, down. But remember, the training loss is not the quantity we really care about. OK, so how about we choose the hyperparameters to minimize the test error? This might actually produce good hyperparameters, but it's also bad, because now you're looking at the test set, which makes it an unreliable estimate of the actual error. So what do we do then? The solution is to use a held-out validation set — also known as a holdout set or development set. This set is taken out of the training set and is used to optimize hyperparameters and for analysis. Here's the picture: you leave the test set alone — it's isolated from what you're doing here — and you take the training set and divide it into a validation set, which is usually a small fraction but large enough to get reliable estimates, and the rest of the training set. Now, for each setting of the hyperparameters, you can train on the training set minus the validation set and then evaluate on the validation set. And then you choose the hyperparameters to be the ones that minimize the error on the validation set.
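Here is a sketch of carving a validation set out of the training set; the 20% fraction and the fixed seed are arbitrary choices for illustration.

```python
import random

def split_train_val(examples, val_fraction=0.2, seed=0):
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_val = int(len(examples) * val_fraction)
    return examples[n_val:], examples[:n_val]   # (train, validation)

# For each hyperparameter setting: train on `train`, score on `val`,
# and keep the setting with the lowest validation error.
train, val = split_train_val(range(100))
print(len(train), len(val))   # 80 20
```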
So now I'm going to talk about a model development strategy. We've talked a lot about the formal machinery, and I'm just going to walk you through a typical development cycle. You start out by splitting the data: you get some data, and you split it into train, validation, and test. You lock away the test set. Then you look at the data — not the test data, the train and validation data — to get intuition. You want to understand what kind of properties the problem you're trying to solve has. And then you repeat the following: you implement a model architecture or a feature extractor, or you adjust some hyperparameters; then you run the learning algorithm and train a model; then you sanity-check the train and validation errors along the way — make sure the training error is going down and the validation error more or less goes down (if it goes up, that means you're overfitting). You also want to look at the weights, at least for linear classifiers, if they're interpretable — again, to sanity-check and get some intuition. And you want to look at some prediction errors: if the model is not doing as well as you'd like, you want to understand how it's screwing up. You repeat this until you're satisfied, and then finally, you unlock the test set and evaluate on it to get the final error rates that you put in your report. So let's walk through an example of how this works. I'm going to take the simple example of named-entity recognition. Here, the input is a string which contains a name, along with a word to the left and a word to the right to offer some context, and the output is whether x, excluding the initial and final word, is a person or not — in this case, Gavin Newsom, plus 1, because he's a person. OK, so now I'm going to code this up. We have ner.py — this is the file we're going to use. This file depends on submission.py and util.py from your sentiment homework, so if you have that, you can plug in your code and see it in action for yourself. Let me just walk through this. First, we read the training examples and the validation examples, and then we learn a predictor, which returns a set of weights. We output the weights to a file and the error analysis to another file, which I'll show you in a second. And then this part is commented out, because we don't want to run evaluation on the test set just yet. OK, so the first thing we want to do is open up this training file, just to get some intuition for what the data looks like. Each line here is a training example. This is y — a minus, which means it's not a person — and this is x: "took Mauritius into". Mauritius is not a person, US is not a person, Malaysia is not a person, Sarah Pitkowski is a person — plus 1 — Moscow is not a person, and so on. You see all these training examples; we have around 7,000 of them. OK, so now let's begin by implementing a feature extractor. I'm going to implement this function, which takes x, and I'll put a comment showing what x looks like: for example, x is the string here. Then I'm going to define the feature vector to be a defaultdict of floats — a sparse representation of the feature vector. OK, so that is the simplest feature extractor: it happens to be the empty vector, with no features, but let's just see what happens. Start simple — we're starting really simple here. OK, so let's run ner.py.
We see that, across a number of iterations, the test error is really high: 72% error. This is not surprising, because we don't have any features. OK, so let's add some features. A kind of obvious feature to add is the identity of the entity. So we're going to process x a little bit: I'm going to split it into a bunch of tokens — a list containing "took", "Mauritius", and "into" — and then I'm going to split that into the left context, the entity, and the right context. The left context is token zero, the entity is tokens one through everything except the last token, and the right context is the last token. OK, so I'm just dividing x into these three parts. Now I can define a feature template: let's define the feature to be "entity is" plus the entity — the entity is now a list, so I'm going to join it — and set that equal to 1. So this is a binary feature. This one line represents one feature template, but it instantiates into a whole bunch of different features, one for every possible entity. I'm naming the feature in a way that makes it really interpretable — we'll see how this is quite useful in a second. OK, so let's run this and see what happens. Now the error is 19%. That's progress. The training error is really low, which means that we're really fitting the training data. So now let's go and inspect what happened. We look at the weights here, sorted from positive to negative — here we have the feature name and the weight. Up here, you can see "entity is" features, and these generally happen to be names of people. And if you look at the bottom, we see things which are not names. OK, so this is a good sanity check that suggests the learning is working. Let's look at the error analysis. This shows you, on the validation set, the predictions that the model makes. Here is the first input: Eduardo Romero. The true label is plus 1, a person, but we predicted minus 1, which is wrong. And here I'm showing the features and their particular weights: "entity is Eduardo Romero" has a feature value of 1, and its weight is 0. A weight of 0 generally means the model never saw this feature at training time, so the score is 0 — we have no idea what to do on this example. Here is another example, "the Senate": "entity is Senate" has a weight of negative 1, so we have a score of negative 1, and we make the prediction. So let me just look through these incorrect predictions: Margaret McCullough, blamed, was, midfielder. And you can see that it's unreasonable to expect that every entity has been seen before. So why don't we try to use the context to figure out whether the entity is a person or not? Let's go over here. I'm going to define two feature templates: "left is" plus the left context, and "right is" plus the right context. So this is a feature template — "left is blank", "right is blank" — and I'm instantiating it for this particular x, taking the actual value of the left context here, OK? So I added two feature templates. Let's run this. Now we can see that the error rate has gone down to 11% — great. Notice that the training error doesn't actually go down as fast, because with more features, sometimes it's harder to optimize. But that's OK, because we don't care about the training error; we only care about the test error going down. One note is that this says test error.
But I'm actually passing the validation set into learnPredictor here; the function prints "test error" because it doesn't have any idea which set it's being given. OK, so let's look at the weights. At the top, there are still features that look at the entity — Clinton, Yeltsin. And here are some context features: "left is Minister" — if you have "Minister" before someone, that someone is probably a person; "President" someone is a person. If you look down here, you see that if the left context is "the", the weight is negative, which means the entity is probably not a person. So this all makes sense — it's a good sanity check. Let's look at the error analysis. Now we're getting Eduardo Romero correct. Let's see what we're getting wrong: "Sinitis blamed"; Felix Mantilla — I guess we haven't seen that person before; Kurdistan Workers Party — it's never seen this. And now you can brainstorm and think: well, maybe we aren't going to see an exact string match of an entity, but we can break it down into pieces. So what I'm going to do is, for each word in the entity, add the feature "entity contains" plus that word. It's pretty easy to write feature templates, and this is very intuitive: this feature template just asks whether the entity contains a particular word. OK, so let's run this. Now the error rate has gone down to 6%, so we're making good progress. Let's look at the weights and sanity-check this. The "entity contains" feature will fire both for Clinton as well as for Bill Clinton, so these "contains" features seem more general, and they're given high weight. At the bottom, again, if it contains "New", it's probably New York or something, and that's probably not going to be a person. "Contains Newsroom" — I don't know too many bloggers. Error analysis — let's see what's wrong here. We're still getting this one wrong; sometimes it's just hard to know what's going on. We're still getting the Kurdistan Workers Party one wrong. "Chairman" — and sometimes it's kind of hard to know what to do. So let's just try something else. Going in the spirit of decomposing the entity into words, we can go further and have patterns that match on prefixes and suffixes. We can add a feature saying the entity contains the prefix of a word — I'll just arbitrarily choose the first four characters — and a feature saying it contains the suffix of a word, the last four characters. So: first four characters, last four characters, OK? All right, let's see how this does. Now we can see that the error rate has gone down to 4%, so we made a little more progress there. I'm going to call it quits for now, just in the interest of time. We've made a lot of progress, from 72% error to only 4% error. But remember, this is only on the validation set. So now comes the final trial: see how well this does on the test set. I'm going to read in the test set, evaluate the predictor on it, and print the result. Let's run this and hope that we didn't overfit. And here, we actually did even better on the test set than on the validation set, which sometimes happens — there's always some randomness here. So we ended up with a 4% error rate, which is pretty good for 10 minutes of work. In practice, things are probably not going to go as smoothly as this — this was just an illustrative example to show the kind of process.
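Here is a sketch recapping the final feature extractor built up over the last few steps: entity identity, left/right context, contained words, and four-character prefixes and suffixes. The exact feature name strings are illustrative; the templates follow the text.

```python
from collections import defaultdict

def extract_features(x):
    # Example: x = "took Mauritius into"
    tokens = x.split()
    left, entity, right = tokens[0], tokens[1:-1], tokens[-1]
    phi = defaultdict(float)   # sparse feature vector
    phi["entity is " + " ".join(entity)] = 1
    phi["left is " + left] = 1
    phi["right is " + right] = 1
    for word in entity:
        phi["entity contains " + word] = 1
        phi["entity contains prefix " + word[:4]] = 1
        phi["entity contains suffix " + word[-4:]] = 1
    return phi

print(dict(extract_features("took Mauritius into")))
```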
OK, so there's much more to be said about the practice of machine learning, and I'm just going to give you some general advice. Many of these tips are related to good software engineering practices. The first thing I want to talk about is starting simple. The wrong thing to do is code up a really complicated learning algorithm, run it on a million examples, watch it crash and burn, and wonder what happened. Simplify, both in terms of running on small subsets of your data — maybe even synthetic data — and in terms of starting with a simple baseline model. We started with a linear classifier that had zero features, and then one feature template, just so we could see and understand what it was doing. This is important because it allows you to work in a regime where things are understandable and, importantly, where things run quickly. You want fast iteration time, like what you just saw: you can quickly try something, get a result, try something, get a result. If you have to wait 10 hours to get a result, then you're just not going to make as much progress, because you won't get as many iterations in. One sanity check I would recommend is to train on very few examples — like five examples — and see if you can overfit, that is, drive the training error to 0. Of course, doing so is not going to give you a useful model, but it will tell you whether the machinery is working. If you're unable to fit five examples, then something is wrong: it could mean that your data is too noisy, or you lack certain features, or your model is not expressive enough, or your learning algorithm isn't working — maybe you tried to optimize the zero-one loss or something, I don't know. But anyway, it's a good sanity check. The second thing is: log everything. Print out metrics. Track the training loss and the validation loss over time, and make sure they're going down as intended. Record the hyperparameters you're using to train, so you can keep track of what you actually did to get your result. Print out statistics of the data set — how many features, how many examples — of the model — how many weights there are, the norm of the weights — and the predictions, as you saw: it was really useful to have that file showing exactly how the model made each prediction. It just gives you a lot more insight. Finally, spend some time figuring out how to organize your experiments. I like to have each run I make go into a separate folder, which you can save; then later, you can go back and check all the models, the predictions, and a record of all the hyperparameters you used, so that you have an idea of what you did. A note about the reporting of results: it's important to run your experiments multiple times, particularly with different random seeds, to make sure your results are stable and reliable; then you can report the mean and the standard deviation over these random seeds. And finally, in machine learning we often tend to be guilty of distilling everything down into one number, the test error. But in practice, we might be interested in multiple metrics. In particular, if you get 5% error, it's important to understand what those errors are. Sometimes it's useful to report the error rates on different minority groups or subpopulations, if you have access to that information, and generally to be cognizant of the biases in the model. OK, so to summarize: we've talked about the practice of machine learning. First, make sure you have good data hygiene: separate your test set, leave it alone, and divide your training set into a validation set and the rest.
Don't look at the test set, but do look at the training and validation sets to understand the shape of your data, so that you have intuition for deciding how to model it. Start simple. And finally, there are a lot of design decisions, which can be overwhelming at first; the most important thing is to practice experimenting, so that you start developing an intuition for which hyperparameters matter and what kind of effect they have, and, eventually, develop a set of best practices for yourself. OK, that's all.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Markov_Networks_1_Overview_Stanford_CS221_Artificial_Intelligence_Autumn_2021.txt
Hi. In this module, I'm going to be talking about Markov networks. So far, we've introduced constraint satisfaction problems, the first of our variable-based models. Now we're going to talk about Markov networks, the second type of variable-based model, which will connect factor graphs with probability; this will be a stepping stone along the way to Bayesian networks. Recall that variable-based models are all based on factor graphs, and Markov networks are no different. A factor graph consists of a set of variables, x1 through xn, and a set of factors, f1 through fm, where each factor takes a subset of the variables and returns a non-negative number. If you multiply all of these numbers together, you get the weight of a particular assignment. Let's look at an example of object tracking. Here, remember, the goal is: over time, we record noisy sensor readings of an object's position — 0, 2, and 2 — and we want to figure out the actual trajectory of the object. We modeled this as a factor graph as follows, where we have a number of factors representing the affinity for x1 to be close to 0, x2 to be close to 2, x3 to be close to 2, and also for adjacent positions to be close to each other. Before, we treated this factor graph as a constraint satisfaction problem, where the goal is to find the maximum weight assignment. In this particular example, we look at all the possible assignments, each assignment has a weight, and you can find that the maximum weight assignment is 1, 2, 2. But just returning a single maximum weight assignment doesn't really give us the full picture. In particular, it doesn't represent how certain we are of this assignment — and what about all the other possibilities? The goal of Markov networks is to capture this uncertainty over assignments using the language of probability. We've actually done most of the hard work already by setting up factor graphs; the only remaining part is to connect factor graphs with probability. Formally, a Markov network, or a Markov random field as it's sometimes called, is a factor graph which defines a joint distribution over a set of random variables X1 through Xn. Before, these were just variables; now they're random variables, because we're talking about probabilities. Remember, the factor graph gives us a weight for each possible assignment x, and to convert this weight into a probability, we just need to normalize it. What I mean by that is: I'm going to look at all possible assignments and their weights, and I'm going to define Z as the sum of all the weights. That's called the normalization constant, or sometimes the partition function. Then I just divide by Z. This produces something that sums to 1, and I define that as the joint probability that big X equals little x. OK, so let's do this example here. We have x1, x2, x3 and the weight of each x. We have six possible nonzero-weight assignments, with particular weights. We add all these weights up; that gives us the partition function Z, which is 26 here. And then we divide each of these weights by 26 to produce the joint probability. So now this probability distribution represents the uncertainty in the problem.
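Here is a sketch of turning factor-graph weights into a joint distribution: enumerate all assignments, multiply the factors, and normalize by Z. Representing each factor as a function of the full assignment is an illustrative choice.

```python
import itertools

def joint_distribution(domain, n_vars, factors):
    weights = {}
    for x in itertools.product(domain, repeat=n_vars):
        w = 1.0
        for f in factors:
            w *= f(x)          # multiply all the factors together
        weights[x] = w
    Z = sum(weights.values())  # normalization constant (partition function)
    return {x: w / Z for x, w in weights.items() if w > 0}
```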
And notice that while 1, 2, 2 was the maximum weight assignment — and it still is — this probability gives us a more nuanced picture, which is that we're only 31% sure that this is actually the true trajectory of the object. That could be useful information: there's a big difference between 31% and 90%. But wait, we can do more than that. The language of probability allows us to answer other questions besides the probabilities of full assignments. For example, suppose we want to know where the object was at time step 2 — that is, what is the value of random variable X2? — and we don't care about X1 and X3. This query is captured by a quantity called the marginal probability. The marginal probability of a particular random variable Xi equaling a particular value v, written P(Xi = v), is given by summing the joint distribution, which we defined on the previous slide, over all full assignments such that xi equals v — all assignments consistent with this condition. Now let's look at the object tracking example again. We have the joint probability table that we computed on the previous slide, and now let's compute some marginal probabilities. First, what is the probability that X2 equals 1? To answer that, we look over here at all the rows where x2 equals 1 — that's the first four — and we just add up their probabilities: 0.15, 0.15, 0.15, 0.15, which gives us about 0.6. Now we can issue another marginal probability query: what's the probability that X2 equals 2? We look at all the rows where x2 is 2 — these last two — and add up their probabilities, which gives us about 0.4. There are some rounding errors here, which is why things don't add up exactly. OK, so that's how we answer marginal probability queries. One thing you might notice is that the answer here is actually different from what you get if you just look at the max weight assignment. In particular, the maximum weight assignment is 1, 2, 2, and its value for x2 is 2. But 2 is not the most likely value of X2 under the marginal probability: the most likely value is 1, and it has a 62% chance of being a 1. The intuition here is that while this trajectory does indeed have the largest weight, there is a lot of decentralized evidence for x2 equals 1 from these other assignments, which individually have less weight. It's kind of strength in numbers: if you add up all these weights, they outnumber the evidence for x2 equals 2. So this is an important lesson: what answer you get really depends on the type of question you're asking, and if you're really interested in the object at time step 2, then marginal probability is the right query.
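Here is a sketch of a marginal probability query on top of the joint_distribution sketch above, applied to the object tracking example. The affinity weights — 2 for equal values, 1 for values one apart — are assumed from the original factor graph; with them, this reproduces the Z of 26 and the 62% marginal from the text.

```python
def marginal(joint, i, v):
    # Sum the joint probability over all assignments with x_i = v.
    return sum(p for x, p in joint.items() if x[i] == v)

obs = [0, 2, 2]
closeness = lambda a, b: {0: 2.0, 1: 1.0}.get(abs(a - b), 0.0)
factors = [lambda x, i=i: closeness(x[i], obs[i]) for i in range(3)]        # observations
factors += [lambda x, i=i: closeness(x[i], x[i + 1]) for i in range(2)]     # transitions
joint = joint_distribution([0, 1, 2], 3, factors)          # Z works out to 26
print(marginal(joint, 1, 1), marginal(joint, 1, 2))        # about 0.62 and 0.38
```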
It's going to depend on the spin of xi and the spin of xj. And that's going to be equal to exp of beta times xi times xj. OK, so the intuition is that we want neighboring sites to have the same spin. So, multiplying the spins together: if both of them have the same sign, then the product xi xj is going to be 1. If they have opposite signs, it's going to be negative 1. And beta here is a scaling that says how strong the affinity is. If beta is 0, that means this factor is just exp of 0, which is 1. So that means there's no connection between neighboring sites. And as beta increases, the affinity becomes stronger. The difference between them agreeing and not agreeing becomes heightened. So one thing Ising models are useful for is to study phase transitions in [INAUDIBLE] systems. So here is an example of what happens as beta increases. So if beta is close to 0, then you're basically going to get unstructured systems where each site is just behaving independently. In fact, if beta is 0, then all assignments are equally likely. And as beta increases, you'll see that more and more coherence happens, where neighboring sites really like to be close to each other. But of course, there are going to be some kind of sharp ridges where two neighbors have to disagree. So how we're going to sample from this model is going to be a topic for another module. So here is another canonical application of Markov networks, from computer vision. So this used to be very popular before deep learning. So the idea is that you take a noisy image, and you want to denoise it into a clean image. So we're going to present a very stylized, simple example of this. So here is our 3 by 5 image. Each site is a pixel, and xi is either 0 or 1, the pixel value, which is unknown, for modeling the clean image. And we assume that only a subset of the pixels are observed. So maybe we observe this one, this one, this one, this one, and this one. And the goal is to fill in the rest of the pixels. So we can capture this observation by an observation potential oi of xi, which is 1 if xi agrees with the observation and 0 if it doesn't. So this is a hard constraint that says where I observed a value, xi must take on that value. So this one has to be 0, this one has to be 1, and so on. And finally, we have transition factors that say neighboring pixels are more likely to be the same than different. So again, the same intuition as the Ising model, and we're going to denote this as tij. And tij equals 2 if two neighboring pixels agree and 1 if they disagree. So let me summarize. Markov networks: you can think of them simply as taking factor graphs and marrying them with probability. So again, factor graphs have already done a lot of the work. They already allow you to specify a non-negative weight for every assignment. And all we have to do is normalize that to get a probability distribution. And once we have the probability distribution, we can answer all sorts of queries. Now we're computing marginal probabilities, which allows us to pinpoint individual variables and ask questions about them. So it is useful to compare Markov networks with CSPs. In CSPs, we talked about variables. In Markov networks, we call them random variables. They behave like variables, but they're random variables because we're endowing them with a probabilistic interpretation. In CSPs, we talked about weights. In Markov networks, we talk about probabilities, which are the normalized weights.
And the main difference is that in CSPs, we were trying to find the maximum weight assignment, while in Markov networks, we're looking at the distribution over assignments holistically and answering questions about marginal probabilities, which gives us a more nuanced picture of the set of possible assignments. OK, that's it for this module.
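As a closing sketch for this module, here are the three factor families described above: the Ising coupling and the two denoising potentials. The function names are my own; these are schematic stand-ins, not the course's code, though the factor values themselves come straight from the definitions in the lecture.

import math

def ising_factor(x_i, x_j, beta):
    # f_ij(x_i, x_j) = exp(beta * x_i * x_j), with spins in {-1, +1}
    return math.exp(beta * x_i * x_j)

# beta = 0 gives exp(0) = 1 either way: no connection between neighboring sites
assert ising_factor(+1, -1, 0.0) == ising_factor(+1, +1, 0.0) == 1.0

def observation_potential(x_i, observed):
    # hard constraint: 1 if x_i matches the observed pixel value, else 0
    return 1 if x_i == observed else 0

def transition_potential(x_i, x_j):
    # soft smoothness: 2 if neighboring pixels agree, 1 if they disagree
    return 2 if x_i == x_j else 1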
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_2_Propositional_Logic_Syntax_Stanford_CS221_AI_Autumn_2021.txt
All right, so in this module, we are going to be talking about the syntax of propositional logic. So if you remember this diagram, what we are going to be talking about in this lecture is the syntax of logic, the semantics of logic, meaning the meaning of logic, and in addition to that, inference rules: how can we manipulate logic? And one point I want to mention is, you might have seen logic in other classes. You might have seen logical formulas, and been able to manipulate them and move things around. And that's not really the point here. The point here is to have this general framework, kind of this more principled way of looking at logic, where we can think about algorithms that can manipulate logical formulas and apply inference rules, just more generally, from an algorithmic perspective. So the point is not for you to be able to do logic and move things around. The point is to have an algorithm, and that algorithm can do logic, because the goal of this class is to have an artificial intelligence that can do similar things to how humans would do them. So the point is not for you to do logic. The point is for the AI to be able to do logic. And an analogy to that is the Bayesian network setting. Last week, we talked about Bayesian networks. And in that setting, you might be able to compute conditional and marginal probabilities perfectly fine. You might be able to manipulate things perfectly fine, but that was not the point. The point was not for you to do that. The point was to have an algorithm, maybe like Gibbs sampling, that is more general: it can be applied to any Bayesian network, not a single example. So we're basically trying to do a similar thing in this space of logic here, OK? So let's talk about syntax. So what is syntax? The syntax of propositional logic consists of a few different things. It consists of propositional symbols. These could be A, or B, or C. These take Boolean values. And then based on these propositional symbols, we can build formulas on top of them. The propositional symbols are also commonly known as atomic formulas. And then you can make more complicated formulas based on these atomic formulas using a set of logical connectives. These are negation, and, or, implication, and bidirectional implication, OK? So let me actually write that here. So here's where we are going to start with syntax. So what does syntax have? So syntax has propositional symbols. These are A, B, C, and so on. And then we can have formulas defined on top of them. And let me use f for formula. And how do I define formulas? I use these logical connectives to create formulas, the connectives that we just talked about. So here are a couple of examples of how we go about it. So we can build these formulas recursively. Let's call f and g formulas here. And if f and g are formulas, then I can build even more formulas on top of them. So I can have negation of f as a new formula, or f and g as a new formula, or f or g as a new formula, f implies g, or f bidirectional implication g. So f is equivalent to g; you can think of it like that, OK? Here are a few examples. So if A is a propositional symbol, A is a formula by itself. Negation of A is a formula. Negation of B implying C is a formula. So I've just used a bunch of connectives and created a more complicated formula here. I can have this one as a formula, right? So negation of A is a formula. Negation of B is a formula.
Negation of B implying C is a formula. Negation of B or D is a formula. And then the "or" of these, and the "and" of these, is also going to be a formula, OK? Negation of negation of A is a formula. Well, why is that? Because A is a formula, negation of A is a formula, and the negation of that is also a formula. And this one, A followed by negation of B, is not a formula. So why is that the case? Well, negation of B is a formula, and A is a formula, but A and negation of B are not connected with each other using any logical connective. So this is just basically putting two logical formulas right next to each other without any connectives, and that's not a formula. A plus B is not a formula, but why is that? Because plus doesn't have any place here. I never defined plus; its syntax is not defined, and it doesn't make sense in this language, OK? And one other point I want to mention here, and we'll talk about semantics soon, is that syntax, you can think of it just as symbols. Syntax doesn't have any meanings, right? Syntax is just the symbols that we are using here, with no meanings assigned to them. And the job of semantics is to assign meanings: what does negation actually mean, or what does implication mean, what would be the meaning of it. But when we're talking about syntax, I could use any other symbol. I can use this symbol, and I can just define that in my logic. And that would be the syntax of my logic. So don't assign any meanings just yet when we are talking about syntax. It's just symbol manipulation when we are talking about syntax, right? In the next module, we are going to talk about semantics and give these formulas meanings.
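One way to make this recursive definition concrete is the following Python sketch. The encoding is my own, not the course's code: atomic formulas are symbols, and each connective is a small class that holds its subformulas.

from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:  # atomic formula: A, B, C, ...
    name: str

@dataclass(frozen=True)
class Not:
    f: object

@dataclass(frozen=True)
class And:
    f: object
    g: object

@dataclass(frozen=True)
class Or:
    f: object
    g: object

@dataclass(frozen=True)
class Implies:
    f: object
    g: object

@dataclass(frozen=True)
class Iff:
    f: object
    g: object

A, B, C = Symbol("A"), Symbol("B"), Symbol("C")
example = Implies(Not(B), C)  # "negation of B implying C" from the examples above

Notice that ill-formed strings like "A negation B" or "A plus B" simply cannot be built here, because every formula has to come from a symbol or one of the five connectives.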
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_3_Propositional_Logic_Semantics_Stanford_CS221_AI_Autumn_2021.txt
All right, so in this module, we are going to be talking about semantics. So we started by talking about syntax in propositional logic. And we defined propositional formulas, which basically take propositional symbols and logical connectives, put them together, and symbolically create something that we call a formula. And that was kind of a syntactic view of things, where we didn't assign any meanings. There were no meanings for anything; it was just symbols. And what we would like to do in this module is assign meanings to those syntactic formulas that we defined. And that corresponds to semantics. So in this module, we are going to be talking about semantics and giving meanings to those formulas. And in general in this lecture, we are going to have a good number of definitions. So I'm going to write out those definitions on a separate whiteboard, so we can keep track of them. But a good number of definitions are coming up, especially in this module. And let's start with some of them. So the first definition is the definition of a model. And this is a very poor choice of words; we have been using the word model throughout the lectures of this class as a different thing, like when we talked about modeling, inference, and learning. But in the logic lecture, we're going to assign a different meaning to the word model. And that has historical reasons, because historically, model has been used in logic in this particular way, to refer to assignments, really. So for this lecture and for the logic lectures, let's refer to a model in propositional logic as an assignment of truth values to propositional symbols. I'm going to use the letter w for a model, w for world; that's why it's called w. So a model w in propositional logic is just an assignment of truth values. OK, so what does that mean? So let's look at an example. For example, let's say we have three propositional symbols, A, B, and C, OK? How many models do we have? Well, we can have eight possible models, right? Two to the three possible models, or worlds, that we can live in. And a particular model w is going to be a particular assignment. So for example, I can pick A equal to 0, B equal to 1, and C equal to 0, and that's a model. That is one w, and that corresponds to an assignment of truth values to propositional symbols. OK, so let me write that on our whiteboard here. So going back here, I'm going to start under semantics, and we have the word model, and I'm going to use w for it, OK? All right, let's go back. All right, so now we are ready to define this thing called the interpretation function. And the interpretation function is the thing that actually gives us semantics and gives us meaning. So what is an interpretation function? Let f be a formula; that's what we defined in syntax. And let w be a model, OK, so an assignment. An interpretation function I takes a formula and a model, and it outputs true or false. It basically tells us whether w satisfies f or w doesn't satisfy f. OK, so the interpretation function is really the thing that binds formulas and models: formulas living in the syntactic land, and models living in the semantic land. The interpretation function is trying to connect them and tell us whether the formula is true in the model. So let me go back here and write out the interpretation function.
So it's a function I that takes f, the formula, and w, the model, and it gives us true or false. So let me go here and show this. So let's say that we have a formula f. I'm going to draw our formulas using these rectangles. So in the syntactic land, you might have a number of formulas. Let's say I have one formula f, and then in the world of semantics, we might have different models, right? These are different models, or worlds, that we can live in. And I can pick a specific one; let me call that w. OK, and what I can do is connect the formula to this w using the interpretation function. So I'm going to have an interpretation function of f and w, and that gives me true or false, whether w satisfies f or not, OK? I can have other w's in this space of models, in this semantic land. So how do we define an interpretation function? The way we define an interpretation function is recursively, in a similar way to how we defined our syntax. So we're going to start with the propositional symbols. So we have these propositional symbols A, B, and C, right? These take Boolean values. And the interpretation function of one of these propositional symbols p and a model w is just going to return w applied to that propositional symbol, OK? Remember, w is an assignment to these. So going back here, let me give you an example. So my w is going to be an assignment that maybe says A takes the value 0, OK? And my propositional symbol is just A, right? So if I look at the interpretation function of p and w, it's basically interpreting A in the model where A is assigned 0, and that returns the value 0, OK? So that is just the base case, OK? So then, when we think about more general formulas, how are they defined? They're defined based on these logical connectives applied over propositional symbols. And based on that, we can recursively define the interpretation function. So I can have a formula f and a formula g, and I can create kind of a truth table. The interpretation function of f and w could take the value 0 or 1. The interpretation function of g and w could take the value 0 or 1. And then for any of these other logical connectives, I can recursively define them. So what would be the interpretation function of negation of f and w? It would basically negate this column, right? So it would be 1, 1, 0, 0. Or if I'm thinking about the interpretation of f and g in the model w, what would that be? That would be the "and" of the interpretation function of f and w with the interpretation of g and w. So basically, anding these two columns, and that gives us these values. And so on; similarly, we can define the interpretation function of f or g, or f implies g, or f bidirectional implication g, and so on. And then we can assign meanings to these more general formulas, OK? All right, so let's look at an example of how we do this recursively. So let's say we have a formula f, and that formula is negation of A, and B, bidirectional implication C; so the "and" of negation of A with B, equivalent to C. OK, so that's my formula. I have an assignment, I have a model, and that model is a truth assignment to my propositional symbols A, B, and C. So let's say A is 1, B is 1, and C is equal to 0. And now I can call the interpretation function on f and w, and see what the value of that would be. How do we do that? Well, let's start at the leaf nodes. So at this node, I can call the interpretation function on the symbol A and w.
Well, what is that equal to? Well, that is equal to just 1, because I just read it off my table of models, right? I already have this; it's just equal to 1. The negation of A is going to be equal to 0. What is the interpretation function of B and w? Well, again, I have an assignment, I have a model. That tells me B takes value 1, so that's 1. And then if I take the interpretation function of negation of A, and B, then that is the "and" of these two, so 1 and 0 gives me 0. Similarly, I can look at the interpretation function of C and w. Reading that off my model, that is equal to 0. And then when I'm looking at the equivalence of this C with negation of A and B, both of these are 0, so they are equivalent, and that's just going to be equal to 1, OK? So this is just showing, recursively, how we run the interpretation function. So there's no learning here. This is defined by the logic. Right, you could define your own logic; that would be fun. But this is defined by this propositional logic that we have defined using our formulas and our connectives, and so on. And I'm just computing this. So I'm not doing anything fancy here, OK? All right, so each formula and model, you can interpret using this interpretation function, and that gives us a value 0 or 1. OK, so now I'm going to define this thing called models of f. And basically, it's a set of w's; it's the set of models where the interpretation function is equal to 1. So going back here, looking at this, there could be one w, and I can check the interpretation function of f and w. I can also be looking at a set of models, right? And I can call this models of f. And what is models of f? Models of f is the set of w's such that the interpretation function of f and w is equal to 1. So let me write that here. Models of f: what is that equal to? It's the set of w's such that the interpretation function of f and w is equal to 1. So let me write that in my set of definitions. So we talked about the interpretation function; we were talking about a single model. Now we have models M of f, OK? And what is that? That is the set of w's such that the interpretation function of f and w is equal to 1, OK? All right, so now we have our models. Let's go back here, OK? All right, so basically, intuitively, you can think of models of f as all the worlds, all the assignments, where f holds. And anything outside of this is still some world, some possibility, but a setting where this particular f doesn't necessarily hold, OK? So let's look at an example. Let's say our formula f is rain or wet. OK, so then if I think about models, all possible models are when you think about rain taking the value 0 or 1 and wet taking the value 0 or 1. So I can show that by this 2 by 2 grid. That's all possible models. But what is models of f? Models of f is where rain or wet holds, and that is the shaded area, right? So the shaded area here is showing kind of the meaning of this formula f. Which, again, written symbolically, doesn't have a meaning, but models of f is assigning a meaning to it. So it's saying, hey, this grid is showing what the meaning of rain or wet is, OK? And the key idea here, in logic in general, is that there's this formula which, although it is written syntactically as a symbolic representation, is a very compact representation of a giant set of models, right?
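Here is a sketch of that recursive interpretation function, reusing the hypothetical formula classes from the syntax module (my encoding, not the course's code). A model w is just a dictionary from symbol names to booleans.

from itertools import product

def interpret(f, w):
    if isinstance(f, Symbol):
        return w[f.name]  # base case: read the symbol's value off the model
    if isinstance(f, Not):
        return not interpret(f.f, w)
    if isinstance(f, And):
        return interpret(f.f, w) and interpret(f.g, w)
    if isinstance(f, Or):
        return interpret(f.f, w) or interpret(f.g, w)
    if isinstance(f, Implies):
        return (not interpret(f.f, w)) or interpret(f.g, w)
    if isinstance(f, Iff):
        return interpret(f.f, w) == interpret(f.g, w)

def models(f, names):
    # M(f): all assignments w with interpretation 1
    return [dict(zip(names, vals))
            for vals in product([False, True], repeat=len(names))
            if interpret(f, dict(zip(names, vals)))]

# The worked example: f = (not A and B) iff C, with A = 1, B = 1, C = 0
f = Iff(And(Not(A), B), C)
print(interpret(f, {"A": True, "B": True, "C": False}))  # True, i.e., 1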
So in general, the nice thing about logic is we can use formulas to compactly represent very large meanings. A lot of the time, exponentially large meanings can be represented by formulas that are pretty compact and nice. And that is kind of the power of logic. You can write things compactly, and you can do operations on them, you can do inference on them, and so on. And that is really nice. OK, all right, so that was formulas and models, and interpretation functions binding formulas to models. And what we want to do now is think about how we can do operations here, and what new formulas add, in terms of meaning, to the knowledge that we already have. So for that, let's define something called a knowledge base. A knowledge base is a set of formulas that I already know, OK? So if I have a system, a virtual assistant system that I want to add logic to, or that I want to speak to using language or logic, that system has a knowledge base, which is a set of formulas that are already represented. It's a conjunction of a bunch of things that it already knows, OK? So let me go back here and write out knowledge base. So, knowledge base: KB. I'm going to use KB for this. And this is a set of formulas that you already know, OK? So we might already know a formula that says rain or snow, or we might already know there's traffic, OK? So this is our knowledge base, OK? So then what happens is that someone might come and give me a new formula, and what we're interested in looking at is how that affects our knowledge base. So before getting there: the knowledge base is a set of formulas, so it is in the syntax land. What would be the analog over in the semantics land? It would be models of KB. And what is models of KB? Models of KB is going to be the intersection of models of f over all the formulas f in the knowledge base. OK, so maybe let's go back here and look at an example. So let's say I have a formula F1, and F1 says it's raining and snowing. OK, and maybe I have F2, and F2 says there is traffic. OK, let me separate these, OK? And I have a knowledge base, and my knowledge base has F1 and F2 in it, OK? So someone already told me that it's raining and snowing, and that there's traffic. So what would be models of KB? Models of KB is going to be the intersection of models of F1 with models of F2. And why is that? Because if you think about it, F1 is a formula, and F1 has a set of models corresponding to it, models of F1. And F2 is another formula that I'm adding to my knowledge base, and that has a bunch of other models corresponding to it. And my knowledge base is now going to be the intersection of these two. Because as we add more formulas, as we add more knowledge to our knowledge base, our set of models is going to become smaller and smaller, because we are adding more constraints. Which is pretty interesting, right? So in general, let me maybe write this in a different color: if I have a knowledge base, and I add a new formula to that knowledge base, a union with a new f, what would be the effect of that on models of KB? The effect on models of KB is going to be what I had for models of KB, intersected with models of f. So adding new formulas is constraining our models, right? Constraining the meaning more and more.
Because if you have raining and snowing, and we have traffic as a whole other set of models, the intersection of the two is going to give me this set of models, OK. So also, let me connect these: this formula corresponds to these models, and this one corresponds to these models. All right, so let's go back here. So that's how we define the models of a knowledge base. Let's look at another example here. So let's say you're looking at rain as one formula. Models of rain is going to be this shaded area, where rain is equal to 1. And then we have another formula, rain implies wet. And what is models of rain implies wet? It is basically negation of rain, or wet, so it is this shaded area. If I'm looking at a knowledge base that has both of these in it, then what would be the models of that knowledge base? It is just going to be the intersection of these two shaded areas, which is basically this square, OK, where we have both rain and rain implies wet holding. All right, sounds good. And this is what I've basically already mentioned: we have a knowledge base, and if we add a formula to it, it increases the size of our knowledge base and shrinks the set of models, because we are constraining things more and more. So we are constraining the meaning. All right, so now let's talk about this idea of what happens if I have a knowledge base and I add a new formula. So I have a knowledge base, I'm trying to add a new formula, and we'll see what happens. And there are three things that can happen. So one option is entailment. What entailment says is, if I have KB as my knowledge base, and you come and tell me a new formula f, and that formula is not adding anything to my knowledge base, then we say we have entailment. OK, so this is a scenario where f is just not adding any information or any new constraints. This is basically telling me things I already knew. OK, so we say KB entails f, and that is written using this double-line kind of symbol. So we say KB entails f if and only if models of KB is a subset of models of f. Let's look at an example here. So let's go back. Maybe I'll start a new board. So we have three options. One is called entailment, OK. So let's start with a knowledge base, and my knowledge base is maybe rain and snow. So I have a formula in my knowledge base that says rain and snow, and that has models corresponding to it. So this is models of KB. OK, and you might come and tell me a new formula, and that new formula is rain. And if you tell me rain, and I already have rain and snow in my knowledge base, that doesn't add any knowledge for me, right? Like, I already knew it was raining. So then models of f is going to be a superset of models of KB. OK, so we say KB entails f if and only if models of KB is a subset of models of f. So you didn't tell me anything new; I already knew that. And that is entailment. So let me go back here and maybe add this to our definitions. So now we have entailment defined. All right, so let's go back here. So rain and snow entails snow. OK, so that was one option. Another option is contradiction. So what is contradiction? Contradiction is a scenario where I already have a knowledge base KB, you tell me a new formula f, and that formula contradicts my knowledge base. OK, so in the models land, what happens is that models of KB doesn't have any intersection with models of f.
OK, so f contradicts what we know, our knowledge base, if and only if models of KB intersection models of f is the empty set. All right, so let's look at an example. Let's maybe go back here. So our second option is contradiction. So let's write that here: contradiction. Contradiction is the scenario where I have some knowledge base, and my knowledge base is maybe rain and snow again. So I think it's raining and snowing. And then you come and tell me a new formula, and that new formula is maybe negation of snow. OK, and that contradicts my knowledge base, right? So if that contradicts my knowledge base, what happens is that there is a models of KB, and there is a models of f, and they don't have any intersection. So contradiction is the scenario where models of KB intersection models of f is empty. OK, one other interesting thing to notice here is that if you think about contradiction, contradiction is very related to entailment. Contradiction is the same thing as entailing negation of f. And why is that the case? Because if you look at models of f, models of negation of f is anything outside of it, right? So if this is models of negation of f, then what is happening? The thing that's happening is that models of KB is a subset of models of negation of f. And if you remember our definition of entailment, that is the same thing as KB entailing negation of f, OK? So that's pretty interesting, because contradiction is the same thing as entailing negation of f. OK, all right, so those were the two cases so far, right? You either tell me a new formula and I already knew it, so that is entailment, or you tell me a new formula and it contradicts the knowledge base that I've had, so that is contradiction. OK, and let's add that here. So we talked about entailment, and now we've talked about contradiction. And we wrote entailment as KB entailing a formula, and contradiction as KB entailing the negation of the formula. OK, all right, so there is a third case. Let's talk about that third case. So we talked about contradiction being very related to entailment: KB contradicting f is the same thing as KB entailing negation of f. All right, so the third case here is what we are calling contingency. And that is when you are telling me a formula, and that formula is actually telling me something I didn't know. It's telling me some non-trivial information. OK, so that is when models of KB has some non-trivial intersection with models of f, OK? So that is when we write: models of KB intersection models of f is going to be a subset of models of KB, but a strict subset of models of KB. If it were equal, we would get entailment, so we don't include equality here. All right, let's look at an example. Maybe let's go back here. So our third case is contingency. And that is when I have my knowledge base, and maybe my knowledge base is just rain this time. And you come and tell me a new formula, and that new formula is snow, OK? So my knowledge base says it is raining, so I have the models of my knowledge base corresponding to raining here. And then you come and tell me, hey, by the way, it is also snowing. And models of snow is here. And there is some non-trivial intersection going on here. OK, so contingency is when models of KB intersection models of f is going to be a strict subset of models of KB, not equal to it.
And similarly, the empty set is going to be a subset of this intersection, but not equal to it; the intersection has to be non-empty. So you get some non-trivial information, something that you didn't know, and that gets added in, OK? So that is contingency. And going back here, let me add contingency as the third option. And contingency is when you have these non-trivial intersections. I'm not going to write it out. All right, let's go back here. OK, so we have these three possibilities. You give me a new formula, and I'm either entailing it, or contradicting it, or I have contingency. So now let's talk about how we would use these ideas if we want to implement a virtual assistant. Remember, we started this lecture thinking about having a virtual assistant that we can talk to in logic or language. And with that virtual assistant, you can tell it some information, or you can ask it questions. So maybe we want to implement this tell operation. If I want to implement a tell operation in this virtual assistant, the virtual assistant is going to have some knowledge base at the moment, some KB, and I tell it a new formula f. So what can happen? If I tell it a new formula, maybe I tell it it is raining, so I do tell rain, then three things can happen, right? The knowledge base can entail f; my knowledge base might already have raining in it. In that case, the response to the tell operation, telling it that it is raining, is going to be: I already knew that. OK, so if my virtual assistant already entails rain, it should respond with "I already knew that." If the knowledge base contradicts f, it should say "I don't believe that," because its knowledge base basically says it's not raining, and now you're telling it it is raining. Because of that, it would respond with "I don't believe that," since its knowledge base tells it the opposite. Or, if you're telling it something new, you're telling it it is raining and it didn't know that, or it doesn't have any information about it, then it should say "I learned something new," and based on that thing that you're telling it, it should update its knowledge base, its KB, with this new formula that it is raining, OK? So now we can implement a tell operation based on these three ideas of entailment, contingency, and contradiction. In a very similar fashion, we can also implement an ask operation. If you ask it, is it raining? Then based on that, it can go ahead and answer. It should answer a definite yes if KB already entails f, so if you have entailment. It should answer no if you have a contradiction, if KB contradicts f, or equivalently, if KB entails negation of f; so it should give us a definite no, because there is a contradiction. Or it should tell us "I don't know" if there is contingency. So if you ask it, is it raining, and it doesn't know, it should just say "I don't know," OK? So going back to the things we are defining here, let me write tell and ask in. So we talked about this tell and ask operation, and you can basically implement tell and ask based on these three ideas of entailment, contradiction, and contingency, OK? All right, so I want to do a quick side note here. I don't want to go into this in too much detail, but there is a connection between the things we are talking about here and some of the topics we discussed two weeks ago in Bayesian networks. So we've been talking about this idea of models, and a model is the same thing as an assignment, right?
And you can basically think of a Bayesian network as a distribution over these assignments, over these models. I can have A equal to 0, B equal to 0, C equal to 0, and I can have a probability assigned to that; the probability of that could be 0.3. And I can have another assignment, or model, and another probability assigned to it. So from a Bayesian network perspective, from a probabilistic perspective, one can think about logic in a probabilistic way, and think about the probability of a formula given a knowledge base. So when you have a knowledge base, you have some knowledge, and you're asking about a formula. Instead of thinking about just these three different outcomes, entailment, contradiction, and contingency, one can think of a probability, an actual value, right? The probability of the formula given a knowledge base. So what is that going to be equal to? That is going to be equal to the probability of the models w that are in the intersection of models of KB and models of f, over the probability of all possible models of the knowledge base. So in the denominator, I sum the probabilities of all possible models of my knowledge base. And in the numerator, I'm going to focus on the models that are in the intersection of models of KB and models of f. And models of KB union f is equal to models of KB intersection models of f; that's why there is a union here. Remember, if you add an f to your KB, you're shrinking your set of models. That's why the numerator is smaller, right? You're shrinking the set of models by adding this f to the knowledge base, OK. And if you think about this fraction, this is a number between 0 and 1. And now we have probabilities; we actually have a probability for f being satisfied or not, given a knowledge base. In general, this was just a quick digression talking about a probabilistic view of this. There's quite a bit of work, actually, on logic and probabilistic versions of it, thinking about probabilistic model checking, and, instead of giving just 0-1 values, what a probabilistic view of it would be. We're not going to go into the details of those in this class. And basically, you can think of these probabilities in terms of the three different cases we have been talking about. If this probability is equal to 0, then we basically have an answer of no; we have contradiction. Like, f is not satisfied. If this probability is equal to 1, if the numerator and denominator are equal to each other, then you're answering yes; f is not adding any information, so we have entailment. And if you get any other value between 0 and 1, then we are in a contingency situation, and we basically say we don't know, OK? All right, so that was just a quick connection to a probabilistic generalization of some of the things we were talking about in Bayesian networks. But now let's just go back to the same problem we were talking about. So we've talked about these three different things: entailment, contingency, and contradiction. We've talked about how we can have tell and ask operators based on them. Now we're going to talk about this idea of satisfiability. So what is satisfiability?
So a knowledge base KB is satisfiable if models of KB is not empty. OK, very simple: if models of KB is not empty, we have satisfiability. OK, so why is satisfiability useful? Why am I phrasing this as satisfiability? Because satisfiability is a well-known problem. We have really good solvers for it, SAT solvers, and we're going to talk about that in one slide really quickly. So it's nice to think about these three different things, entailment, contingency, and contradiction, in terms of the problem of satisfiability. OK, so we have these three things. Satisfiability gives me a yes or no answer. So how can I use satisfiability to figure out which of these situations I'm in? The way we use satisfiability is we make two calls to it. So in general, if I want to implement my ask or tell operator, and I want to reduce it to satisfiability, I can do two calls to satisfiability. I can first ask whether KB union negation of f is satisfiable or not. OK, so what does the answer to that give me? If I get no for that, if KB union negation of f is not satisfiable, I have entailment, right? So I get my answer for entailment here. And if I get yes for that, that doesn't answer everything, right? If I just get yes for this, I don't know if I'm in a contingency situation or a contradiction situation. So what do I need to do? I need to make another call to satisfiability. And the second call to satisfiability asks whether KB union f is satisfiable. And what does that check? Well, if I get no for that, then I have contradiction. Remember, contradiction is the same thing as entailing negation of f; that is why the answer to this gives me contradiction. And if I get yes for that, I get contingency. So what I've just done is, in general, if I want to know whether I'm in the entailment, contradiction, or contingency situation, I can figure that out with two calls to satisfiability. And why do I want to know which of these situations I'm in? Because that helps me implement my ask and tell operations, OK? So going back here: we talked about ask and tell, and we talked about how they relate to entailment, contradiction, and contingency. And now we have talked about satisfiability as a way of determining which scenario we are in. OK, and how do we answer satisfiability? That's a good question to ask. So what is satisfiability? Checking satisfiability, the SAT problem, in propositional logic is basically just a special case of solving a constraint satisfaction problem, a CSP. And we have already learned about CSPs and solving CSPs. So what that means is we can check satisfiability by solving the corresponding CSP, with the algorithms that we already have access to, OK? And this idea of checking satisfiability is called model checking. You're checking if a model exists or not; you're checking if a satisfying assignment exists or not. OK, so the mapping of the SAT problem to CSPs is as follows. Propositional symbols are basically what we used to call variables. Formulas are basically constraints. And then if you have variables and constraints, you can come up with an assignment, and that assignment is basically a model. So you're checking whether a satisfying assignment, a model, exists or not, OK? Let's look at an example.
So let's say our knowledge base has these two formulas in it: we have A or B, and we have B bidirectional implication negation of C, OK? All right, so we have three symbols, A, B, and C. These symbols are the same things as CSP variables, so we can have three nodes for these three variables. And then we have two formulas. These formulas create constraints in our CSP. So we have A or B, and then we have B equivalent to negation of C. And then what do we do? We have a CSP, and we can solve it, right? We can find a consistent assignment for it, which is the same thing as a satisfying model. And if we find an assignment, this problem is satisfiable; model checking comes up with a model for it. And if it is not satisfiable, it's going to return UNSAT; it doesn't come up with any assignment. So that's kind of nice. Going back here: this problem that we've been talking about, this tell and ask operation, reduces to entailment, contradiction, and contingency. I can use two calls to satisfiability to answer that. And how do I do that? Well, I use model checkers to do that. So that's called model checking: checking satisfiability, which is basically solving a CSP. All right, so going back here. OK, so what does model checking do? Model checking takes as input a knowledge base, and what does it output? It outputs whether there exists a satisfying model or not, and if one does, it returns that model. So it checks whether models of KB is non-empty. And there are a good number of algorithms out there that do model checking. One of the older ones is DPLL, which is a well-known algorithm for satisfiability and model checking. It uses backtracking search, and quite a bit of pruning and quite a few heuristics go into it to make sure that it can solve this problem as fast as possible. Some more recent algorithms are things like WalkSAT, which is pretty similar to Gibbs sampling and does a randomized local search. There are a good number of satisfiability solvers out there; Z3 is a famous solver that you can look into if you're interested in solving SAT problems. And with that, we now have a good idea of syntax, and we have a good idea of semantics. And next, we would like to talk about what formulas get us. Why do we live in the formula land? Why do we even want to look at syntax? It turns out that we can do inference on formulas, and that buys us quite a bit. So in the next module, we are going to be talking about what formulas buy us, and how to apply inference rules.
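Here is a sketch of the two satisfiability calls described in this module, with brute-force model checking standing in for a real solver like DPLL or WalkSAT. It reuses the hypothetical interpret helper and formula classes from the earlier sketches (my naming, not the course's code).

from itertools import product

def satisfiable(formulas, names):
    # model checking by enumeration: is there a w satisfying every formula?
    for vals in product([False, True], repeat=len(names)):
        w = dict(zip(names, vals))
        if all(interpret(f, w) for f in formulas):
            return True
    return False

def ask(KB, f, names):
    if not satisfiable(KB + [Not(f)], names):
        return "yes"  # entailment: KB union {not f} is unsatisfiable
    if not satisfiable(KB + [f], names):
        return "no"  # contradiction: KB union {f} is unsatisfiable
    return "I don't know"  # contingency

KB = [Or(A, B), Iff(B, Not(C))]
print(ask(KB, A, ["A", "B", "C"]))  # contingency, so: I don't know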
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_4_Inference_Rules_Stanford_CS221_AI_Autumn_2021.txt
All right. So in this module, we will be talking about inference rules. So if you remember, so far we have been talking about syntax and semantics, and now we would like to talk about how we can play around with formulas, manipulate them, and apply inference rules to them. But why don't we talk about this diagram a little bit more for a second before jumping into inference rules? So let me go back to my whiteboard here. So basically, what I've been drawing here is: we live in the syntax land, and formulas live in the syntax land. So I'm going to draw them like this. So maybe I have formula f1, formula f2, through, maybe, formula fn, OK? And then in the semantics land, I give meanings to these formulas, right? So each formula has a corresponding set of models. So each one of these formulas will correspond to something that I'm calling models of f1. f2 might have another set of models, models of f2, and so on. And you might have a bunch of other ones. So let's say that I only have three of these. Actually, let me make this f3 to make it simpler. So let's say I have f3, and f3 has another set of models corresponding to it, models of f3, OK? And this defines our knowledge base, right? So we talked about a knowledge base being a set of formulas. And the shaded area corresponds to the models of the knowledge base. So this is models of our knowledge base. So this is what we have talked about so far. So now we want to talk about what inference rules really do. If you have a set of formulas in your knowledge base, the idea of inference rules is: could you apply a set of syntactic rules on them, and based on those rules (I'm going to call them inference rules) infer something, a new formula? So from the formulas f1 through f3 that you have, could you infer a new formula, a new g, that is just based on the formulas that you have, based on symbolically manipulating them? And the question is, could you make sure that the g that you're inferring actually has models, so this is models of g, that are a superset of models of KB? Because ideally, you want to infer something that directly follows from the formulas that you have. So ideally, you would want to be in a situation where models of g is going to be a superset of models of KB. And what does that mean? That means KB entails g, right? So could you have a set of inference rules that end up giving you a g such that models of g is a superset of models of KB? Could we come up with those sets of inference rules and those g's? That is the idea of inference rules, and that's what we're going to be talking about today in this lecture, OK? All right, and that's basically what this diagram shows. We have a set of formulas, and each of them corresponds to a set of models. At the end of the day, I want to apply a set of inference rules on these formulas, and I want the formulas I come up with to have models that are a superset of the models of my knowledge base. Would I be able to do that? All right, so let's talk about that. So let me give you an example of what I mean by that. Let's say that I know it is raining, so in my knowledge base, I have that it is raining. And then I have that if it is raining, then it is wet. So rain implies wet. If I tell you that it is raining, and that raining implies wet, what can you tell me, just from that knowledge?
From that knowledge, you should be able to infer that, well, therefore, it is wet, right? It's raining, and raining implies wet, so it's got to be wet, OK? So that is the idea of an inference rule. Could we have a rule that basically infers wet here, just from the formulas? So in general, in inference rules, we have a set of premises, a set of formulas, like rain, and rain implies wet. And based on that, we want to come up with a conclusion. In this case, for example, that conclusion is that it is wet. And that defines, in general, inference rules. There's a specific type of inference rule that we are going to talk about. It's a pretty simple one, and it's called modus ponens, OK? So modus ponens is a very simple inference rule. What it says is that for any propositional symbols p and q, if, in my premises, in my knowledge base, I have p, and p implies q, that allows me to conclude q, kind of like this example that we just saw. And you should think of inference rules as very syntactic, symbolic views of the world. I basically just look at my knowledge base; if I find anything that matches the form p, and anything that matches the form p implies q, from that, I should be able to infer q, OK? Let me put this here in my set of definitions. So now we are at inference rules, and we are going to be talking about modus ponens. Modus ponens is an inference rule that tells us if we have p, and p implies q, we can infer q from that, OK? All right, so in general, we can write inference rules in this way: we have a set of formulas, f1 through fk, and following an inference rule allows us to conclude g from them. And what g we get depends on what inference rule we are using; modus ponens is an example that we have just seen. And again, these rules are applied directly on syntax, and they do not care about semantics. They don't care about what raining means, or what the relationship between "wet" and "raining" actually means, right? They're just applied on the syntax, on the formulas. And that's kind of the power of logic, right? We talked about these formulas as a compact way of representing much, much larger, often exponentially larger, meanings. And now, on these very compact formulas, we can apply syntactic rules, we can apply inference rules, and based on them, we can infer new formulas that have new meanings, basically, OK? So if I want to think about what an inference algorithm does, kind of a meta-algorithm, an inference algorithm does something of this form. We have an input, and that input is a set of inference rules; we've talked about modus ponens as an example, but in general, I could have other inference rules. And what I'd like to do is repeat this loop until there are no more changes applied to my knowledge base. And what do I do? I choose a subset of formulas from my knowledge base, and if I can match my inference rule and infer a new formula g, I add g back to the knowledge base. And I keep doing this until there are no more g's, no more new formulas, to be added to my knowledge base. So that is what an inference algorithm does. And one other definition here is this idea of derivation and proving. What we say is that a knowledge base proves or derives a formula f if and only if f eventually gets added to the knowledge base, OK? So going back to our definitions: now we have a definition of derivation, or proving.
And we're going to say a knowledge base derives f, and we're going to represent that by this one-line symbol. So going back here: if I have f1 through f3 in my knowledge base, and I apply inference rules and get a new g, I would say my knowledge base derives, or proves, g. OK, so in the semantics land, we have this idea of entailment, which might be different from what we have in the syntax land, which is this idea of inferring, or proving, or deriving. All right, so we'll talk about the relationship between these two in a few slides, but let me just go back to talking about derivation a little bit more, OK? So that is derivation; that is proving. Let's look at an example. Let's say that I have a knowledge base, and in my knowledge base, I have that it is raining, I have that raining implies being wet, and I have that wet implies slippery, OK? So the question is, can I apply the inference algorithm on this, using just modus ponens, and infer new formulas? How does that work? Let's actually try this out using the system that we looked at in the overview lecture. So let's say it is raining, OK? So it says, I learned something. I can look at the knowledge base. So let's look at what is in the knowledge base. So raining is in the knowledge base, OK? I can say, if it is raining, then it is wet, OK? So it says, I learned something. Here is what's in the knowledge base. It has it is raining, and it has raining implies wet; that is what this means, because rain implies wet is equivalent to not rain or wet, right? That's what logical implication means. And then, based on these two things, it actually derives wet. It applies modus ponens. Remember, I have rain, and rain implies wet. What does modus ponens give me? Modus ponens gives me wet, so I can derive wet. Let's add "if it is wet, it is slippery," and let's see what that gives us. So it says, I learned something. Let's look at the knowledge base. OK, so we have a bunch of things, right? These are the things I added: I added rain, I added rain implies wet, I added wet implies slippery. From the first modus ponens that we applied, we got wet. We can apply modus ponens on wet and wet implies slippery, and we can get slippery. So this slippery gets added. You also get another formula here: rain implies slippery. And modus ponens actually doesn't get us that; this comes from using other types of inference rules, not just modus ponens. And if you apply other inference rules, you might actually get rain implies slippery here, OK? All right, so let's look at this exact example here. All right, so rain, and rain implies wet, will get us wet, right? Then we apply modus ponens again on wet, and wet implies slippery, and that gets us slippery. And if the only inference rule that we have here is modus ponens, then we have converged; the knowledge base is not changing anymore. We have derived wet and slippery, we have derived new formulas, but basically, we have converged at this point, and we can't derive anything more, OK? OK, so there is a set of other things we can't derive here, right? I mean, we haven't derived not wet. It's probably a good thing that we haven't derived not wet, because not wet is actually contradictory to our knowledge base. It's actually not true, right? So we shouldn't be able to derive not wet; that's a good thing. In addition to that, we weren't able to derive rain implies slippery, which is actually true, right?
If you think about entailment and what is the truth, rain implies slippery is entailed here, but we weren't able to get it by just applying modus ponens. And we will talk about that, in general, in a few slides: why is it that we can't get rain implies slippery, and what can we do to make sure that we get everything that is entailed? And that is the same question as we see here: what is the relationship between entailment and inferring and deriving? So derivation and entailment: how are they related? Are they the same thing, or are they doing different things, and does it depend on the inference rule, OK? All right, so here is the desiderata so far. We have semantics. Semantics is really about truth, right? It's about entailment, about the meaning, what is actually true. When we say a knowledge base entails f, what that means is that models of the knowledge base is a subset of models of f, and in terms of meaning, that is actually what the truth is. On the other hand, we've talked about syntax. In syntax, we just do symbol manipulation using inference rules. We've looked at modus ponens as an inference rule, and we have looked at things like derivation: knowledge base derives f, OK? So how are these two related? Let's talk about that. And that brings us to the idea of soundness and completeness, OK? So we're going to talk about soundness and completeness. Let's look at this as an example. Imagine that you have a glass, OK, and the things that go inside of the glass are formulas, OK? And imagine that anything that is inside of the glass is the truth. So what does that mean? That means that the knowledge base entails those formulas. So every formula that is true is going to be inside of the glass, OK? So the idea of soundness is that if I'm applying inference rules, if I'm running a bunch of inference rules, the formulas that are derived by those inference rules should also be inside of the glass. I want to make sure that they're also true, OK? So the idea of soundness is that a set of inference rules is sound if the set of formulas that can be derived following those rules is a subset of the truth, which is the set of formulas that are entailed by the knowledge base. So they're going to be inside of the glass. They're going to be true. Maybe they don't fill the glass; that's fine. But what this is telling me is that anything that I'm deriving is still going to be true. I'm not going to derive something that's absolutely false. And that's a very important property that you want to have. In general, you would want to have soundness. You want inference rules that are sound, because otherwise, we would be deriving things that are absolutely false, and such an inference rule is not useful, right? We want to derive things that are at least true. OK, so that is this idea of soundness. On the other hand, there is the other side of the story, which is about completeness. Completeness is about making sure that you're deriving everything that is true. Again, remember, everything that is inside of the glass is true. And the idea of completeness is that you've got to make sure that the formulas that are entailed, the formulas that are inside of the glass, are a subset of the formulas that can be derived. So what that means is that your derivation rules make sure that you are getting all the formulas that are true, or possibly even more than that, right?
If you talk about completeness without worrying about soundness, you might even be deriving things that are outside of this glass. But you want to make sure that you are deriving everything that is inside of the glass, too-- so everything that is entailed. And that's the idea of completeness, OK? So if you put soundness and completeness together, you get a just filled-up glass, right? You get everything that is inside of the glass, and just everything that's inside of the glass, which is everything that is true, everything that is entailed, OK? So soundness and completeness is about the truth, the whole truth, and nothing but the truth, OK? So soundness gets you nothing but the truth-- everything that's inside of the glass, and nothing outside of the glass, because that would be bad. You don't want to get something false. That's what soundness gets you. Completeness gets you the whole truth-- makes sure that you get everything that is inside of the glass, and nothing is left out. You're deriving all the formulas that are inside of the glass, and that is what completeness gets you. In general, you want both soundness and completeness. It would be awesome to get both soundness and completeness. And if you get both of them, then entailment and derivation are equivalent, right? Whatever you derive is exactly what is entailed. In practice, soundness is more important, right, because you don't want to derive something that is false. And maybe you don't get all of the truth, but maybe that is OK. So in practice, soundness-- we prefer to get that first, and then push towards completeness, OK? So going back to here-- soundness and completeness are the things that connect these two together. They make sure that entailment and derivation are equivalent. I should have brought that here, too. So we talked about soundness and completeness as things that relate entailment and derivation, OK? All right. And these are properties of inference rules. So the question is, is modus ponens sound, or is modus ponens complete? What can we say about modus ponens, because that's the only inference rule we have seen so far, OK? So remember modus ponens. We have rain, and rain implies wet, and modus ponens gets us wet. But is that sound? So how do we check soundness? To check soundness, right, soundness is about the meaning. It's about checking if it is actually inside of the glass-- whether the thing we are getting is actually entailed. So how are we checking that? We look at models of rain. Models of rain is this shaded area. We look at models of rain implies wet-- that is this shaded area. We take the intersection of them, right, because the set of models of these two formulas is the intersection of these two model sets-- that is the darker area. And the thing we are going to check is whether this darker area is going to be a subset of models of wet-- is it going to be entailed, right? You're checking entailment because that is about the truth, right, that is the thing that checks the truth. So models of wet is here. And then you have-- this darker area is actually a subset of models of wet. So it turns out that modus ponens is actually sound. We are inferring formulas that are actually true, OK, so it is sound. Let's look at a different inference rule. So I have a made-up inference rule that says, if you get wet, and if you get rain implies wet, can you infer rain from that? So you've got wet, and raining implies wet-- is it raining?
That's the thing you're checking. So this inference rule-- similarly, I can look at models of wet, I can look at models of rain implies wet. This shaded area is going to be the intersection. And that is not a subset of models of rain. As you can see here, that's not a subset of models of rain. So what that means is we don't have entailment here, right? So because of that, this particular inference rule is actually not sound. So the nice thing about modus ponens is it's actually sound. But the next question to ask is, is modus ponens complete? And I want you guys to remember this example that you looked at, right? We got a formula-- we got rain implies slippery. And that wasn't from modus ponens, right? Modus ponens wasn't able to get that. So this gives us a hint that modus ponens is not complete. It's not going to get everything that is actually entailed and is actually true. But yeah, let's look at an example. I'm not going to do justice in proving that modus ponens is not complete. I'm mainly just going to look at a few examples. So, yeah, let's look at another example here. So let's say our knowledge base is rain, and if it is raining or snowing, it will be wet. OK, so the question is if we can infer wet using our modus ponens rules. So the first question is, is it actually true that it would be wet? Just think about it intuitively. Think about it logically. If you just think about it intuitively, you know if it is raining or snowing, then it's going to be wet. So then it's got to be wet, right? It's raining, so it's got to be wet. So if you just think about it intuitively, you kind of realize that wet's got to be entailed here, right? From a meaning perspective, right, wet should be included, or we should be able to get wet and incorporate it in the knowledge base. But modus ponens is not able to infer that. So why is it not able to infer that? Because in modus ponens, we have this very specific syntactic form of f, and f implies g. And then this formula doesn't match that. It does have this "or," and modus ponens doesn't really have "ors" in it-- it doesn't really have any branchings in it. And because of that, I can't really apply modus ponens here. So knowledge base here actually entails f. f is entailed-- it is going to be wet. But syntactically, using just modus ponens, I'm not going to be able to derive f. And then based on this example, you can see that modus ponens is not complete. We're not able to derive everything, OK? So one other thing I want to note here is-- modus ponens is kind of interesting. It's just looking at positive examples, right? You have a bunch of positive clauses-- sorry, positive formulas, and based on those formulas, you're able to infer something positive, and again, infer something positive, and infer something positive, right? It doesn't really have these "ors" or negations. And then that is why it is not able to infer this particular property because we have an "or" here, because it's not able to capture that. And again, it's applying things syntactically, so it doesn't care about meaning. So how can we fix this? So the question is going back here-- right, we just saw that modus ponens-- sure, it's sound, that is great-- but it was incomplete, OK? And ideally, I want to be able to get both soundness and completeness because ideally, I what I'm deriving would be equivalent to what I'm entailing. I want both of them. So the question that we're asking now is, how can we fix that? How can we fix the fact that modus ponens is not complete, OK? 
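Since entailment is just "every model of the knowledge base is also a model of f," both observations here-- that modus ponens is sound and that wet is entailed but underivable from {rain, (rain or snow) implies wet}-- can be checked by brute force over truth assignments. This is a sketch under an assumed encoding of formulas as Python functions on assignments, not the course's codebase.

```python
# A brute-force entailment check: KB entails f iff every model of the KB
# is also a model of f. Formulas are encoded as Python functions from an
# assignment (symbol -> bool) to bool; this encoding is illustrative.

from itertools import product

def entails(kb, f, symbols):
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(g(model) for g in kb) and not f(model):
            return False  # found a model of the KB where f is false
    return True

symbols = ["rain", "snow", "wet"]
kb = [
    lambda m: m["rain"],                                  # rain
    lambda m: not (m["rain"] or m["snow"]) or m["wet"],   # (rain or snow) implies wet
]
print(entails(kb, lambda m: m["wet"], symbols))  # True: wet is entailed
# Yet modus ponens never fires here: the rule's premise is the compound
# formula "rain or snow", and that exact formula is never in the KB.
```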
And that's the topic of the next few modules. So we have two options to fix this completeness. First option is maybe we should restrict propositional formula. So propositional logic-- maybe it is too large. If we restrict it, maybe we can restrict it to a specific set of propositional logic that only has these things that are called Horn clauses. And under that scenario, if you're looking at propositional logic with only Horn clauses, it turns out that modus ponens is both sound and complete. The other option is-- maybe I don't want to change my propositional logic. I want to keep all the propositional logic, but maybe I should be looking at more powerful inference rules. So modus ponens seems pretty simple. Maybe there are more powerful inference rules that I can use. And specifically, resolution is an inference rule that we are going to be talking about, which is both sound and complete. So next module, we'll be talking about Horn clauses, propositional logic with Horn clauses, and the fact that modus ponens is sound and complete. And then the module after that, we'll be talking about resolution and how we can use resolution, and the fact that it is sound and complete.
Let's start guys. Okay, so, uh, we're gonna continue talking about games today. Uh, just a quick announcement, the project proposals are due today. I think you all know that. Um, all right, let's co- Tomorrow. Tomorrow. You're right [LAUGHTER]. Tomorrow [LAUGHTER] Just checking. [LAUGHTER] Yeah. Today is not Thursday. Yeah. [LAUGHTER] Tomorrow. For a second, I thought it's Thursday. Um, all right, so let's talk about games. Uh, so we started talking about games last time. Uh, we formalized them. Uh, we talked about, uh, non- we talked about zero-sum two-player games that were turn-taking, right? And we talked about a bunch of different strategies to solve them, like the minimax strategy or the expectimax strategy. Uh, and today we wanna talk a little [NOISE] bit about learning in the setting of games. So what does learning mean? How do we learn those evaluation functions that we talked about? And then, er, towards the end of the lecture, we wanna talk a little [NOISE] bit about variations of the games we have talked about. So, uh, how about the cases where we have simultaneous games or non-zero-sum games? So that's the, that's the plan for today. So I'm gonna start with a question that we're actually going to talk about towards the end of the lecture, but it's a good motivation. So, uh, think [NOISE] about a setting where we have a simultaneous two-player zero-sum game. So it's a two-player zero-sum game similar to the games we talked about last time, but it is simultaneous. So you're not ta- ta- taking turns, you're playing at the same time. And an example of that is rock, paper, scissors. So can you still be optimal if you reveal your strategy? So let's say you're playing with someone. If you tell them what your strategy is, can you still be optimal? That's the question. Yes. [inaudible] It's a small [NOISE] enough game space-- if they know exactly [NOISE] what you're going to play, [NOISE] you won't be successful. But for a zero-sum real-time simultaneous game at a larger scale, I think you could still be successful if your approach is, like, superior to the other approach taken. [NOISE] So it's not- so, so, so the answer was about the size of the game. So rock, paper, scissors being small versus, versus not being small. So, so the question is more of a motivating thing. We'll talk about this in a lot of detail towards the end of the class. It's actually not the size that matters. It's the type of strategy that you play that matters, so just to give you an idea. But, like, the reason that we have put this I guess at, at the beginning of the lecture is intuitively when you think about this, you might say, "No. I'm not gonna tell you what my strategy is, right? Because if I say, I'm gonna play it, like, scissors, you'll know what to play." But th- this has an unintuitive answer that we are gonna talk about towards the end of the lecture. So just more of a motivating example. Don't think about it too hard. All right. So, so let's do a quick review of games. So, um, so last time we talked about having an agent and opponent playing against each other. So, uh, and we were playing for the agent, uh, and the agent was trying to maximize their utility. So they were trying to get this utility. The example we looked at was, uh, the agent is going to pick bucket A, bucket B, or bucket C. And then the opponent is going to pick a number from these buckets. They can either pick minus 50 or 50, 1 or 3, or minus 5 or 15.
And then if you want to maximize your, your utility as an agent, then you can potentially think that your opponent [NOISE] is trying to, trying to minimize your utility, and you can have this minimax game, kind of, playing against each other and, and, and based on that, uh, decide what to do. So we had this minimax tree and based on that, the utilities that are gonna pop up are minus 50, 1 and minus 5. So if your goal is to maximize your utility, you're gonna pick bucket B, the second bucket, because that's the best thing you can do, assuming your opponent is a minimizer. So, so that was kind of the setup that we started looking at. And the way we thought about, uh, solving this game was by writing a recurrence. So, so we had this value. This is V, which was the value of minimax, uh, at state s. And if you're, er, at an end state, we are gonna get utility of s, right? Like if you get to the end state, we get the utility, because we get the utility only at the, at the very end of the game. And if the agent is playing, the recurrence is maximize V of the successor states. And if the opponent is playing, you wanna minimize the value of the successor states. And so that was the recurrence we started with, and, and we looked at games that were kind of large, like the game of chess. And if you think about the game of chess, the branching factor is huge. The depth is really large. It's not practical to u- to do the recurrence. So we, we started talking about ways for speeding things up, and, and one way to speed things up was this idea of using an evaluation function. So do the recurrence but only do it until some depth. So don't go over the full tree. Just do it until some depth, and then after that, just call an evaluation function. And hopefully your evaluation function, which is kind of this weak estimate of your value, is going to work well and give you an idea of what to do next. Okay. So, so instead of the usual recurrence, what we did was we decided to add this D here, um, this D right here, which is the depth until which we are exploring. And then we decrease the value of depth, uh, after an agent and opponent plays. And then when depth is equal to 0, we just call an evaluation function. So intuitively, if you're playing chess, for example, you might think a few steps ahead, and when you think a few steps ahead, you might think about what the board looks like and kind of evaluate that based on the features that, that, that board has, and based on that, you might, you might decide to take various actions. So similar type of idea. And then the question was, well, how are we gonna come up with this evaluation function? Like where is this evaluation function coming from? Uh, and, and then one idea that, that we talked about last time was it can be handcrafted. The designer can come in and sit down and figure out what is a good evaluation function. So in the che- chess example, you have this evaluation function that can depend on the number of pieces you have, the mobility of your pieces. Maybe the safety of your king, central control, all these various things that you might care about. So the difference between the number of queens that you have and your opponent's number of queens-- these are things, these are features that you care about. And, and potentially, a designer can come in and say, "Well, I care about queens nine times more than I care about how many pawns I have."
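Here is a minimal sketch of that depth-limited recurrence, run on the bucket example from the review. The tree encoding and the trivial evaluate function are illustrative assumptions, not the course's code.

```python
# Depth-limited minimax: recurse until an end state or depth 0, then fall
# back to the evaluation function. Leaves are utilities; internal nodes
# are lists of children. This encoding is illustrative.

def minimax(node, depth, is_agent, evaluate):
    if isinstance(node, (int, float)):          # end state: return the utility
        return node
    if depth == 0:
        return evaluate(node)                   # weak estimate of the value
    if is_agent:
        return max(minimax(c, depth, False, evaluate) for c in node)
    # depth decreases only after both the agent and the opponent have moved
    return min(minimax(c, depth - 1, True, evaluate) for c in node)

tree = [[-50, 50], [1, 3], [-5, 15]]            # buckets A, B, C
print(minimax(tree, depth=2, is_agent=True, evaluate=lambda n: 0))  # 1: bucket B
```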
So, so the hand- like you can actually hand-design these things and, and write down these weights about how much you care about these features. Okay. So I'm using terminology from the learning lecture, right? I'm saying we have weights here and we have features here, and someone can come and just handcraft that. Okay. Well, one other thing we can do is, instead of handcrafting it, we could actually try to learn this evaluation function. So, so we can still handcraft the features, right? We can still say, "Well, I care about the number of kings and queens and these sorts of things that I have, but I don't know how much I care about them. And I actually wanna learn that evaluation function. Like what the weights should be." Okay. So to do that, I can write my evaluation function, eval of s, as, as this V as a function of state, parameterized by, by weights w. And, and my goal is to figure out what these w's, what these weights are. And ideally I wanna learn that from some data. Okay. So, so we're gonna talk about how learning is applied to these game settings. And specifically, the way we are using learning for these game settings is to just get a better sense of what this evaluation function should be from some data. Okay. So, so the questions you might have right now are, well, what does V look like? Where does my data come from? Because if you know where your data comes from and what your V is, then all you need to do is to come up with a learning algorithm that takes your data and tries to figure out what your V is. So, so we're gonna talk about that in the first part of the lecture. Okay. And, and that kind of introduces us to this, this, um, temporal difference learning, which we're gonna discuss in a second. It's very similar to Q-learning. Uh, and then towards the end of the class, we will talk about simultaneous games and non-zero-sum games. Okay. All right. So, so let's start with this V function. I just said, well, this V function could be parameterized by a set of weights, a set of w's, and the simplest form of this V function is to just write it as a linear classifier-- as a linear function of a set of features, w's times Phi's. And these Phi's are the features that are hand-coded and someone writes them. And then- and then I just want to figure out what w is. So this is the simplest form. But in general, this, this V function doesn't need to be a linear classifier. It can actually be any supervised learning model that we have discussed in the first few lectures. It can be a neural network. It can be anything even more complicated than a neural network that just does regression. So, so we can- basically, any model you could use in supervised learning could be placed here as, as, as this V function. So all I'm doing is I'm writing this V function as a function of state and a bunch of parameters. Those parameters in the case of linear classifiers are just w's, and in the case of the neural network, there are w's and these v's-- in the case of a one-layer neural network. Okay. Or multilayer, actually. Yeah, one layer here. All right. So let's look at an example. So let's think about an example, and I'm going to focus on the linear classifier way of looking at this just for simplicity. So, um, okay, let's pick a game. So we're going to look at backgammon. So this is a very old game. Uh, it's a two-player game. The way it works is you have the red player and you have the white player, and each one of them has these pieces.
And what they wanna do is they want to move all their pieces from one side of the board to the other side of the board. It's a game of chance. You can actually, like, roll two dice, and based on the outcome of your dice, you move your pieces various, various amounts to, to various columns. Uh, there are a bunch of rules. So your goal is to get all your pieces off the board. But if you have only, like, one piece and your opponent, like, gets on top of you, they can push you to the bar and you have to, like, start again. Um, there are a bunch of rules about it. Read it, read about it on Wikipedia if you're interested. But we are going to look at a simplified version of it. So in this simplified version, I have player O and player X, and I only have four columns. I have columns 0, 1, 2, and 3. And in this case, I have four of each one of these players' pieces, and, and the idea is, we want to come up with features that we would care about in this game of backgammon. So, so what are some features that you think might be useful? Remember the learning lecture. How did we come up with, like, feature templates? Yes. Currently, still bound with the [inaudible]. So maybe like the location of the X's and O's. The number of them. Yeah. Yeah. So one idea is, you have all this knowledge about the board, so maybe we should, like, care about the location of the X's. Maybe we should care about, like, where the O's are, how many pieces are on the board, how many pieces are off the board. So it's a similar type of way that we- we've come up with features in the first few lectures. We would basically do the same thing. So a feature template- a set of feature templates could look like this: like, number of X's or O's in column-- whatever column-- being equal to some value, or, uh, number of X's or O's on the bar, maybe fraction of X's or O's that are removed, whose turn it is. So these are all, like, potential features that we could use. So for this particular board, here is what those features would look like. So if you look at number of O's in column 0 equal to 1, that's equal to 1. Remember we were using these indicator functions to be more general. So, so like here, again, we are using these indicator functions. You might ask number of O's on the bar-- that's equal to 1-- fraction of O's that are removed. So I have four pieces. Two of them are already removed. So that's one-half. Number of X's in column 1 equal to 1, that's 1. Number of X's in column 3 equal to 3, that's 1. It's O's turn, so that's equal to 1. Okay. So, so we have a bunch of features. These features, kind of, explain what the board looks like or how good this board is. And what we wanna do is, we wanna figure out what, what are the weights that we should put on each one of these features, and how much we should care about, uh, each one of these features. So, so that is the goal of learning here. Okay. All right. So okay. So, so that was my model. All right. So far, I've talked about this V of s and w. I'm- I've defined it as a linear classifier- as a linear predictor: w's times features. And now, the question is, where do I get data? Like, where? Because if I'm doing learning, I've got to get data from somewhere. So, so one idea that we can use here is, we can try to generate data based on our current policy pi agent or pi opponent, which is based on our current estimate of what V is. Right. So currently, I might have some idea of what this V function is. It might be a very bad idea of what V is, but that's okay.
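To pin down the indicator features and the linear evaluation just listed, here is one way they might look in code. The board encoding and feature names are my own illustration of the slide, not the course's codebase.

```python
# Indicator-style feature templates for the simplified backgammon board,
# plus a linear evaluation V(s; w) = w . phi(s). Encodings are illustrative.

def features(board, turn):
    phi = {}
    for col, count in board["O"].items():
        phi[f"# O in column {col} is {count}"] = 1
    for col, count in board["X"].items():
        phi[f"# X in column {col} is {count}"] = 1
    phi[f"# O on bar is {board['O_bar']}"] = 1
    phi["fraction of O removed"] = board["O_removed"] / 4.0
    phi[f"it is {turn}'s turn"] = 1
    return phi

def evaluate(phi, w):
    # Linear evaluation: sum of weight times feature value over named features.
    return sum(w.get(name, 0.0) * value for name, value in phi.items())

board = {"O": {0: 1}, "X": {1: 1, 3: 3}, "O_bar": 1, "O_removed": 2}
phi = features(board, turn="O")
print(evaluate(phi, {"fraction of O removed": 1.0}))  # 0.5
```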
I can just start with that, and starting with, with that V function that I currently have, what I can do is I can, I can call arg max of V over successors of s and a to get a policy for my agent. Remember this was how we were getting a policy in a minimax setting. The policy for the opponent is just the arg min of that V function, and then when I call these policies, I get a bunch of actions. I get a sequence of, like, states based on, based on how we are following these policies, and that is some data that I can actually go over and try to make my V better and better. So, so that's kind of how we do it. We call these policies. We get a bunch of episodes. We go over them to make things better and better. So, so that's, kind of, the key idea. Um, one question you might have at this point is, um, is this deterministic or not, like, do I need to do something like Epsilon-Greedy. So in general, you would need to do something like Epsilon-Greedy. But in this particular case, you don't really need to do that, because we have these dice that, that you're actually rolling. And by rolling the dice, you are getting different random paths that, that we might take-- so that might take us to different states. So we, kind of, already have this, this element of randomness here that does some of the exploration for us. And you just mean, like, an exploration probability? Yes. So by Epsilon-Greedy, what I mean here is, do I need to do extra exploration? Am I gonna get stuck, like, in a particular set of states if I don't do exploration? And in this particular case, because we have this randomness, we don't really need to do that. But in general, you might imagine having some sort of Epsilon-Greedy to take us to explore a little bit more. Okay. So then we generate episodes, and then from these episodes, we want to learn. Okay. These episodes look like state, action, reward, state, and then they keep going until we get a full episode. One thing to notice here is, is the reward is going to be 0 throughout the episode until the very end of- end of the game. Right. Until we end the episode, and we might get some reward at that point or we might not. Uh, but, but the reward throughout is going to be equal to 0, because we are playing a game. Right. Like we are not getting any rewards at the beginning. And if you think about each one of these small pieces of experience; s, a, r, s prime, we can try to learn something from each one of these pieces of experience. Okay. So, so let me actually go on the board maybe. What you have here is you have a piece of experience. Let's call it s, a. You get some reward. Maybe it is 0. That's fine if it is 0. And you go to some s prime through that. So s, take an action, you get a reward. Maybe you get a reward. You go to some s prime from that, and you have some prediction. Right. Your prediction is your current, like, your current, um, V function. So your prediction is going to be this V function at state s parameterized with w. And this is what you already, like, you, kind of, know right now. This, this is your current estimate of what V is. And this is your prediction. I'm writing the prediction as a function of w. Right. Because it depends on w. And then we have a target that you're trying to get to. And my target, which is kind- kind of acts as a label, is going to be equal to my reward, the reward that I'm getting.
So it's kind of, the reward- so if you look at this V of s and w, well, it's kind of close-ish to reward plus, I'm gonna write discount factor, Gamma V of s prime, w. All right. So, so my target, the thing that I'm trying to, like, get to, is the reward plus Gamma V of s prime, w, okay? So we're playing games, and in games Gamma is usually 1. I'm gonna keep it here for now, but I'm gonna drop it at some point, so you don't need to really worry about Gamma. And then one other thing to notice here is, I'm not writing target as a function of w, because target acts kind of like my label, right? If I'm, if I'm trying to do regression here, target is my label, it's kind of the ground truth thing that I'm trying to get to. So I'm gonna treat my target as just like a value; I'm not writing it as a function of w, okay? All right. So, so what do we try to do usually, like when you are trying to do learning? We have a prediction, we have a target, what do I do? Minimize the- your error. So what is the error? So I can write my error as potentially a squared error. So I'm gonna write one-half of prediction of w, minus target, squared-- this is my squared error. I want to minimize that with respect to w, okay? How do I do that? I can take the gradient. What is the gradient equal to? This is simple, right? The 2 and the one-half get canceled. The gradient is just this guy, prediction of w, minus target, times the gradient of this inner expression. The gradient of this inner expression with respect to w is the gradient of prediction with respect to w, minus 0, because target is-- I'm treating it as a number, okay? Let me move this up. So now I have the gradient. What algorithm should I use? I can use gradient descent. All right. So I'm going to update my w. How do we update it? I'm gonna move in the negative direction of my gradient using some learning rate Eta, uh, times my gradient. My gradient is prediction of w minus target, times the gradient of prediction of w with respect to w. All right. So that's actually what's on this slide. So the objective function is prediction minus target squared. Gradient, we just took that, it's prediction minus target times gradient of prediction. And then the update is just this, this particular update where we move in the negative direction of the gradient. This is, this is what you guys have seen already, okay. All right. So so far so good. Um, so this is the TD learning algorithm. This is all it does. So temporal difference learning, what it does is it picks, like, these pieces of experience; s, a, r, s prime, and then based on that piece of experience, it just updates w based on this gradient descent update: difference between prediction and target, times the gradient of V, okay? So what, what happens if I have, if I have this, this linear function-- maybe let me write- let me write this in the case that I have a linear, linear function. So what if my V of s, w is just equal to w dot phi of s, yeah, phi of s. So what happens to my update? Minus Eta. What is prediction? w dot phi of s, right? w dot phi of s. What is target? We defined it up there, it's the reward you're getting-- the immediate reward you're getting plus Gamma times V of s prime, w, which is w dot phi of s prime-- times the gradient of your prediction, which is what? Phi of s, okay? So I just, I just wrote this out in the case of a linear predictor. Yes. With Q learning, what are the differences between the two? Yeah, so this is very similar to Q learning.
There are very minor differences that we'll talk about actually at the end of this section, comparing it to Q-learning. All right. So, so I wanna go over an example. It's kind of like a tedious example, but I think it helps going over it and kind of seeing why it works, especially in the case that the reward is just equal to 0, like, throughout an episode. So it kinda feels funny to use this algorithm and make it work, but it works. So I want to just go over, like, one example of this. So I'm gonna show you one episode starting from S1 to some other state. And, and I have an episode-- I start from some state, I get some features of that state. Again, these features are by just evaluating those han- hand-coded features. And I'm just going to start-- what w should we start with? 0. Let me just initialize w to be equal to 0, okay, right? How do I update my w? Me- let me, let me just write it in-- not a simpler form, but just another form. So w, the way we're updating it, is the previous w minus Eta times prediction minus target-- I'm gonna use p and t for prediction and target-- times phi of s. Okay, this is the update we're doing, okay? Uh, yeah, that's right. Okay. So, so what is my prediction? What is my prediction? w dot phi of s? 0. What is my target? So for my target I need to know what state I'm ending up at. I'm gonna end up at 1, 0 in this episode, and I'm gonna get a reward of 0. So what is my target? My target is reward, which is 0, plus w times phi of s prime-- that is 0 because w is equal to 0. So my target is equal to 0. My p minus t is equal to 0. So p minus t is equal to 0, this whole thing is 0, w stays the same. So in the next kind of step, w is just 0, okay? I'm gonna move forward. Um, so what is prediction here? 0 times 0-- prediction is 0. What is target? The reward is 0 because I haven't gotten any- anything, any reward yet. Where do I end up? I end up at 1, 2. So yeah, so target is going to be the reward, which is 0, plus 0 times whatever phi of s prime that I'm at, so that's equal to 0. p minus t is equal to 0-- it's kind of boring [LAUGHTER]. So at this point, w hasn't changed, w is equal to 0. What is my prediction? Prediction is equal to 0, that's great. What is target equal to? So I'm gonna end up in an end state where I get 1, 0 and I get a reward of 1. So this is the first time I'm getting a reward. What should my target be? My target is reward 1, plus 0 times 1, 0, which is 0, so my target is 1. So what this tells me is, I'm predicting 0, but my target is 1, so I need to push my w's a little bit up to actually address the fact that this is, this is, this is equal to 1. So p minus t is equal to minus 1. So I need to do an update. Maybe I, I'll do that update here. So how am I updating it? So I'm doing, starting from 0, 0, minus, uh, my Eta is 0.5-- that's what I defined it to be-- my prediction minus target is minus 1. What is phi of s? Phi of s is 1, 2, right? So what should my new w be? What is that equal to? 0.5 and then 1. All right, so I'm just doing arithmetic here. So my due- new w is going to become 0.5 and 1 at the end of this one episode. So I just did one episode, one full episode, where w was 0 throughout, and then at the very end, when I got a reward, then I updated my w because I realized that my prediction and target were not the same thing, okay?
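The whole episode can be replayed in a few lines with the update just derived. The start state's feature vector isn't given in the lecture, so [1, 1] below is an arbitrary assumption; since w is 0 for the first two steps, those updates are zero regardless.

```python
# TD learning with a linear value function: w <- w - eta*(p - t)*phi(s),
# where p = w . phi(s) and t = r + gamma * w . phi(s'). Eta = 0.5 and
# gamma = 1, matching the worked example. Names are illustrative.

def td_update(w, phi_s, r, phi_sp, eta=0.5, gamma=1.0):
    p = sum(wi * fi for wi, fi in zip(w, phi_s))               # prediction
    t = r + gamma * sum(wi * fi for wi, fi in zip(w, phi_sp))  # target (held fixed)
    return [wi - eta * (p - t) * fi for wi, fi in zip(w, phi_s)]

w = [0.0, 0.0]
episode = [
    ([1, 1], 0, [1, 0]),   # p = 0, t = 0: no change (start features assumed)
    ([1, 0], 0, [1, 2]),   # p = 0, t = 0: no change
    ([1, 2], 1, [1, 0]),   # p = 0, t = 1: the update finally fires
]
for phi_s, r, phi_sp in episode:
    w = td_update(w, phi_s, r, phi_sp)
print(w)  # [0.5, 1.0], matching the end-of-episode weights above
```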
So now I'm gonna, I'm gonna start a new episode, and the new episode I'm starting is going to start with this particular w. And in the new episode, even though the rewards are going to be 0 throughout, we are actually going to update our w's. Yes, question? Uh, two questions. If you, like, initialize the weights to not be zeros, would you update throughout instead of just at the end? Yeah. Okay, and question two-- S4 and S9 have the same feature vectors, but you said S4 isn't S9 [OVERLAPPING]. Uh, this is a made-up example, [LAUGHTER] so don't think about this example too much though. Well, is it possible to have an end state and a not-end state with the same feature vector, or no? It, it is possible, yeah, for most of the states to have the same features, right. You could have, like I said up here-- it depends on what sort of features. You could use really non-representative features. Like if you really want S4 and S9 to, to differentiate between them, you should pick features that differentiate between them. But if they were kind of the same and have the same sort of characteristics, it's fine to have features that give the same value. Like, like if we have one entry that's always [inaudible]-- like instead of 1, 2, we have 1, 0-- then the weight corresponding to that entry is going to- [OVERLAPPING] Yeah. It will never change. And that kind of tells you that that entry in your feature vector-- you don't care about that, or it's always, like, it's always staying the same. If it is always 0, it doesn't matter what the weight of that entry is. So in general, you wanna have features that are differentiating and, and that you're using in some way. So for the second row, I'm not gonna write it up 'cause that takes time. [LAUGHTER] So, uh, so okay, so let's start with a new episode. We start at S1 again, but now I'm starting with this new w that I have. So I can compute the prediction-- the prediction is 1. I can compute my target-- it's 0.5. And what we realize here is we overshoot it. So before, our prediction was 0, target was 1, we were undershooting. We fixed our w's, but now we're overshooting. So we need to fix that. Yes. Uh, a little clarification on the relationship between the features and the weights. Uh, do they always have to be the same dimension, and what should we be thinking about that would make a good feature for updating the weights specifically, like- So, uh, okay, so first off, yes, they need to always be the same dimension, 'cause you are doing this, um, dot product between them. Um, the feature selection, um, you don't necessarily think of it as, like, how am I updating the weights. You think of the feature selection as, is it representative of how good my board is-- for example, in the case of backgammon-- or is it representative of, uh, how good I am navigating. So, so it should be a representation of how good your state is, and then it's-- yeah, it's usually, like, hand-designed, right. So, so it's not necessarily-- you shouldn't think of it as how is it helping my weights. You should think of it as how is it representing how good my state is.
How is that also, like, thinking of the blackjack example-- if you have a threshold of 21 and then you have a threshold of 10, uh, if you're using the same feature extraction for both, how does that affect the generalizability of the model, the agent? Yeah, so, so you might choose two, two different features, and one of them might be more-- like, so, so there is kind of a trade-off, right? You might get a feature that actually differentiates between different states very well, but then that, that makes learning longer, that makes it not as generalizable. And then, on the other hand, you might get a feature that's pretty generalizable, but, but then it might not do these specific things that you would wanna do or have these differentiating factors about it. So, so picking features, it's, it's an art, right, so. [LAUGHTER] All right. So lemme, lemme move forward 'cause we have a bunch of things coming up. Okay, so I'll go over this real quick then. So we have the W's, right. So, so we now update the W based on this new value, um, and kind of similar thing: you have a prediction, you have a target, you're still overshooting, so, so you still need to update it. And then once you update it to 0.25 and 0.75, then it kind of stays there, and you are happy. Okay. All right so, so this was just an example of TD learning, but this is the update that you have kind of already seen, right? And then a lot of you have pointed out that this is, this is similar to Q-learning already, right? This is actually a pretty similar update, um, it's, it's very similar-- like we have these gradients, the same way that we have in Q-learning. And, and we are looking at the difference between prediction and target, the same way that we are looking at it in Q-learning. But there are some minor differences. So, so the first difference here is that Q-learning operates on the Q function. A Q function is a function over states and actions. Here, we are operating on a value function, right? On V. And V is only a function of state, right? And, and part of that is, is actually because in the setting of- in the setting of a game, you already know the rules of the game. So we kind of already know the actions. You don't need to worry about it as much, the same way that you are worrying about it in Q-learning. The second difference is, Q-learning is an off-policy algorithm. So, so the value is based on this estimate of the optimal policy, which is this Q opt, right? It's based on this optimal policy. But in the case of TD learning, it's on-policy-- the value is based on this exploration policy, which is based on a fixed Pi, and sure, you're updating the Pi, but you're going with whatever Pi you have and, and, and kind of running with that and keep updating it. Okay, so that's another difference. And then, finally, like in Q-learning, you don't need to know the MDP transitions. So you don't need to know this transition function, as transitions from s, a to s prime. But in the case of TD learning, um, you need to know the rules of the game. So you need to know how the successor function of s and a works. Okay. So, so those are some kind of minor differences, but from, like, a perspective of, like, how the update works, it is pretty similar to what Q-learning is, okay? All right. So, so that was kind of this idea of, I have this evaluation function, I wanna learn it from data, I'm going to generate data, and from that generated data I'm going to update my W's. So, so that's what we've been talking about so far.
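For contrast, here is a sketch of the Q-learning update under the same linear function approximation; the names and data format are illustrative. The difference sits entirely in the target, which takes a max over next actions-- that is what makes it off-policy.

```python
# Q-learning step with linear features over (state, action) pairs.
# Target: r + gamma * max over a' of Q(s', a'; w). Illustrative sketch.

def q_learning_step(w, phi_sa, r, next_action_features, eta=0.5, gamma=1.0):
    prediction = sum(wi * fi for wi, fi in zip(w, phi_sa))
    best_next = max((sum(wi * fi for wi, fi in zip(w, phi))
                     for phi in next_action_features), default=0.0)
    target = r + gamma * best_next
    return [wi - eta * (prediction - target) * fi for wi, fi in zip(w, phi_sa)]

w = q_learning_step([0.0, 0.0], phi_sa=[1, 2], r=1,
                    next_action_features=[[1, 0], [0, 1]])
print(w)  # [0.5, 1.0]
```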
And the idea of learning- using learning to play games is, is not a new idea actually. So, um, so in '50s, um, Samuel looked at a checkers game program. So where he wa- he was using ideas from self-play and ideas from like similar type of things we have talked about, using really smart features, using linear evaluation functions to try to solve the checkers program. So a bunch of other things that he did included adding intermediate rewards. So, so kind of throughout, like the to, to get to the endpoint, he added some intermediate rewards, used alpha-beta pruning and some search heuristics. And then, he was kind of impressive, like what he did in '50s, like he ended up having this game that was playing, like it was reaching, like human ama- amateur level of play and he only used like 9K of memory which is like really impressive [LAUGHTER] if you're thinking about it. So, so this idea of learning in games is old. People have been using it. In the case of Backgammon, um, this was around '90s when Tesauro came up with, with an algorithm to solve the game of Backgammon. So he specifically used, uh, this TD lambda algorithm, which is similar to the TD learning that we have talked about. It, it has this lambda temperature parameter that that kinda tells us how good states are, like as they get far from the reward. Uh, he didn't have any, any intermediate rewards, he used really dumb features, but then he used neural networks which was, uh, kind of cool. And he was able to reach human expert play, um, and kind of gave us- and this kind of ga- gave us some insight into how to play games and how to solve, like these really difficult problems. And then more recently we have been looking at the game of Go. So in 2016, we had AlphaGo, uh, which was using a lot of expert knowledge in addition to, um, ideas from a Monte Carlo tree search and then, in 2017, we had AlphaGo Zero, which wasn't using even expert knowledge, it was all, like, based on self-play. Uh, it was using dumb features, neural networks, um, and then, basically the main idea was using Monte Carlo tree search to try to solve this really challenging difficult problem. So, um, I think in this section we're gonna talk a little bit about AlphaGo Zero too. So if you're attending section I think that will be part of that story. All right so the summary so far is, we have been talking about parameterizing these evaluation functions using, using features. Um, and the idea of TD Learning is, is to look at this error between our prediction and our target and try to minimize that error and, and find better W's as we go through. So, um, all right so that was learning and, and games. Uh, so now I wanna spend a little bit of time talking about, uh, other variations of games. So, so the setting where we take our games to simultaneous games from turn-based. And then, the setting where we go from zero-sum to non-zero-sum, okay? All right. Okay simultaneous games. So, um, all right so, so far we have talked about turn-based games like chess where you play and then next player plays, and you play, and next player plays. And Minimax sca- strategy seemed to be pretty okay when it comes to solving these turn-based games. But not all games are turn-based, right? Like an example of it is rock-paper-scissors. You're all playing at the same time, everyone is playing simultaneously. The question is, how do we go about solving simultaneously, okay? So let's start with, um, a game that is a simplified version of rock-paper-scissors. 
This is called a two-finger Morra game. So the way it works is, we have two players, player A and player B. And each player is going to show either one finger or two fingers, and, and you're playing at the same time. And, and the way it works is, is if both of the players show 1 at the same time, then player B gives two dollars to player A. If both of you show 2 at the same time, player B gives player A four dollars. And then, if, if you show different numbers, like 1 and 2, or 2 and 1, then player A has to give three dollars to, to player B. Okay? Does that make sense? So can you guys talk to your neighbors and play this game real quick? [BACKGROUND] All right, so, so what was the outcome? [LAUGHTER] How many of you are in the case where A chose 1, then- and B chose 1? Oh, yeah, one. Okay, one pair here. Uh, A chose 1, B chose 2? One pair there-- is it like four people that played? So A chose 2, B chose 1. We have, okay, two pairs. And then 2 and 2? Okay. All right. So, so you can kind of see, like, a whole mix of strategies here happening. And this is a game that we're gonna play with and talk about a bit, and think about what would be a good strategy to use when you are solving this, this simultaneous game. Okay. All right so, um. All right, so let's formalize this. We have player A and player B. We have these possible actions of showing 1 or 2. And then, we're gonna use this, this payoff matrix, which, which represents A's utility if A chooses action a and B chooses action b. So, so before we had this, this value function, right? Before, we had this value function, uh, over, um, over our state here. Now, we have this value function that is-- I'll just use V here. That is again from the perspective of agent A. So remember, like before, when we were thinking about value function, we were looking at it from the perspective of the first player, the maximizer player, the agent. Now, I'm looking at all of these games from the perspective of a player. So, so I'm trying to, like, get good things for A. Yes. In this case it's not at the end [inaudible] ? Uh, yeah. And then this is like a one-step game too, right? So, so like you're just playing and then you see what you get. So, so we're not talking about repeated games here. So, so you're playing, you see what happens, okay? So, so we have this V, which is V of a and b. And, and this basically represents A's utility if agent A plays a and if agent B plays b. Okay? And you can represent this with a matrix, and that's why it's called a payoff matrix. I'm going to write that payoff matrix here. So payoff matrix. I'm gonna write A here, B here. Agent A can show 1 or can show 2. Agent B can show 1 or can show 2, right? If both of us show 1 at the same time, agent A gets $2. If both of us show 2 at the same time, agent A gets $4. Otherwise agent A has to pay, so agent A gets minus $3. And again, the reason I only, like, talk about one side is we are still in the setting of zero-sum games. So whatever agent A gets, agent B gets the negative of that, right? So, so if agent A gets $4, agent B is, is paying minus $4. So I am just writing the value from the perspective of agent A. And this is called the payoff matrix, okay? All right. So, uh, so now we need to talk about what a solution means in this setting. So, so what is a policy in this setting? And, and then the way we refer to them in this case is as strategies. So we have pure strategy, which is almost like the same thing as, uh, as deterministic policies.
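That payoff matrix fits in a small dict, indexed by the pair of actions and read from A's perspective; the encoding is an illustration.

```python
# Two-finger Morra payoffs from A's perspective; zero-sum, so B's payoff
# is the negation. Keys are (a, b) action pairs.

payoff = {
    (1, 1): 2,    # both show 1: B pays A $2
    (2, 2): 4,    # both show 2: B pays A $4
    (1, 2): -3,   # different numbers: A pays B $3
    (2, 1): -3,
}
print(payoff[(2, 2)], -payoff[(2, 2)])  # A gets 4, B gets -4
```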
So a pure strategy is just a single action that you decide to take. So, so you have things like pure strategies, uh, pure strategies. The difference between pure strategies and, and deterministic policies, if you remember-- a deterministic policy again is a function of state, right? So, so it's a policy as a function of state. It gives you an action. Here we have like a one-move game, right? So it's just that one action, and we call it a pure strategy. [NOISE] We also have this other thing that's called a mixed strategy, which is equivalent to, to stochastic policies. And what a mixed strategy is, is, is a probability distribution that tells you what's the probability of you choosing action a. So, so pure strategies are just actions a. And then you can have things that are called mixed strategies, and they are probabilities of, of choosing action a, okay? All right. So here is an example. So if, if you say, well, I'm gonna show you 1, I'm gonna always show you 1, then you can write that strategy as a pure strategy that says I'm gonna always, with probability 1, show you 1, and with probability 0 show you 2. So, so let's say the first column is for showing 1, the second column is for showing 2. So, so this is a pure strategy that says I'm always going to show you 1. If I tell you, well, I'm always gonna show you 2, then I can write that strategy like this, right? With probability 1, I'm always showing you 2. I could also come up with a mixed strategy. A mixed strategy would be, I'm going to flip a coin, and if I get heads, I'm gonna show you one, if I get tails, I'm gonna show you two. And then you can write that as this, and this is going to be a mixed strategy. And you can do this even though you're in a simultaneous game-- you just bring chance in and say, half the time I'm gonna show you one, half the time I'm gonna show you two, based on chance, okay? Everyone happy with mixed strategies and pure strategies? All right. So, so how do we evaluate the value of the game? So, so remember in, uh, the previous lecture, and like in the MDP lecture even, we were talking about evaluating. If someone gives me the policy, how do I evaluate how good that is? So the way we are evaluating that is again by this value function V. And, and we are gonna write this value function as a function of Pi A and Pi B. Maybe I'll just write that up here. Or I'm gonna erase this 'cause this is repetitive. So I'm gonna say the value of agent A following Pi A and agent B following Pi B-- what is that equal to? Well, that is going to be the probability that Pi A chooses action a, times the probability that Pi B chooses action b, times the value of choices a and b, summed over all possible a's and b's. Okay. So, so let's look at an actual example for this. So, so for this particular case of the two-finger Morra game, let's say someone comes in and says, I'm gonna tell you what Pi A is. The policy of agent A is just to always show one. And the policy of agent B is this, this mixed strategy, which is half the time show one, half the time show, show two. And then the question is, what is the value of, of these two policies? How do we compute that? [NOISE] Well, I'm gonna use my payoff matrix, right? So, so 1 times 1 over 2 times the value that we get at 1, 1, which is equal to 2. So it's 1 times 1 over 2 times 2, plus 0 times 1 over 2 times 4, plus 1 times 1 over 2 times minus 3-- the value that I get is minus 3-- plus, ah, 0 times 1 over 2 times minus 3. Okay? And, well, what is that equal to?
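Here is a quick numerical check of that sum, V(pi_A, pi_B) = sum over a and b of pi_A(a) times pi_B(b) times V(a, b); the code is an illustrative sketch.

```python
# Expected value of the game for fixed strategies pi_A and pi_B.

payoff = {(1, 1): 2, (2, 2): 4, (1, 2): -3, (2, 1): -3}

def game_value(pi_a, pi_b):
    return sum(pi_a[a] * pi_b[b] * payoff[(a, b)]
               for a in (1, 2) for b in (1, 2))

pi_a = {1: 1.0, 2: 0.0}        # pure: always show 1
pi_b = {1: 0.5, 2: 0.5}        # mixed: coin flip
print(game_value(pi_a, pi_b))  # -0.5
```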
What is that equal to? There are two 0s here, so that's minus 1 over 2. Okay? So I just computed that the value of these two policies is going to be minus 1 over 2. And again, this is from the perspective of, of, um, agent A, and it kinda makes sense, right? If agent A tells you, I'm gonna always show you 1, and, and agent B is following this mixed strategy, agent A is probably losing, and agent A is losing minus 1 over 2 based on- based on this strategy, okay? Okay. So I guess, since we only have this one state-- we only take one action in this environment; we have one state, take one action, and that would be the end state-- if we had more than one state, would we have that for every single one? So that opens up a whole set of new questions that we're not discussing in this class. So that introduces repeated games. Ah, so you might be interested in looking at what happens in repeated games. In this class right now we're just talking about this one-step, one-play setting. We're playing, like, a zero-sum game, um, but we're playing, like, we'll say, rock-paper-scissors, and you just play once. Well, you might say, well, what happens if you play like ten times? Then you're building some relationship, and weird things can happen, and so, so that introduces a whole new class of games that we're not talking about here. All right. So, so the value is equal to minus 1 over 2. Okay? All right. So, so that was a game value. So, so we just evaluated it, right? If someone tells me pi A and pi B, I can evaluate it. I can know how good pi A and pi B are, from the perspective of agent A. Okay? So what do we wanna do, like, when we solve- when we want to try to solve games? All we wanna do is, from agent A's perspective, we wanna maximize this value. I want to get as much money as possible, and the value is from my agent A perspective. So I should be trying to maximize this, and agent B should be trying to minimize this. Right? Like, like, think minimax. So agent B should be min- minimizing this, agent A should be maximizing this. That's, that's what we wanna do. But the challenge here is we are playing simultaneously, so we can't really use the minimax tree. Like, if you remember the minimax tree, like, in, in that setting we have sequential plays, and, and you could, like, wait for agent A to play and then play after that, and that will give us a lot of information. Here we're playing simultaneously. So what should we do? Okay, so what should we do? So I'm going to assume we can play sequentially. So that's what I wanna do for now. So, so I'm going to limit myself to pure strategies. So maybe I'll, um, I'll come over here. So right now I'm going to focus only on pure strategies. I will just consider a setting- a very limited setting and see what happens. And I'm going to assume, oh, what if, what if we were to play sequentially? What would happen? How bad would it be if we were to play sequentially? So, um, we have the setting where player A plays, goes first. What do you think? Do you think, like, if player A goes first, is that better for player A or is that worse for player A? Worse. Worse for player A. Okay. So, so that's probably what's gonna happen. Let's try that. [LAUGHTER] Okay. So player A was trying to maximize, right? This V. Player B was trying to minimize, right? And then each of them has actions of either showing 1 or showing 2. This is player A, this is A, this is agent B. They can show 1, show 1 or 2, right? If we do one- if we show 1, 1, player A gets what? $2?
Is that right? It's 2, right? I can't see the board. Um, otherwise player A gets minus $3, and if you have 2, 2, player A gets $4. Right? So okay. So, so now if, if we have this sequential setting, if you're playing minimax, then player B is going second. Player B is going to take the minimizer here. So Player B is gonna pick this one, and in this case player B is going to pick this one. What should player A do? Well, in both cases player A is getting minus $3. It doesn't actually matter-- player A could do any of them, and player A at the end of the day is going to get minus $3. Right? And this is the case where player A goes first. What if player A goes second? Okay? So, so then player B is going first, player B is minimizing, and then player A is maximizing, [NOISE] and we have the same values here. Okay? So this is, this is player A going second. Player A going second tries to maximize, so we'd like to pick these ones. Player B is, is here. Player B wants to minimize. So Player B is going to be like, okay, if you're going second, I'd rather, I'd rather show you 1, because by showing you 1 I'm losing less. If I show you 2, I'm losing even more. All right. So, so and then in that setting, player A is going to get $2. Okay? All right. So that was kind of intuitive-- if we have pure strategies, it looks like if you're going second, that should be better. Okay. So, ah, so going second is no worse. It's the same or better. And that basically can be represented by this minimax relationship, right? In the second case, Player B goes first and minimizes over b, and agent A maximizes second over a-- so that's the min over b of the max over a of V of a and b. And this is going to be greater than or equal to the case where Player A is going, uh, first-- the max over a of the min over b of V of a and b. So I'm gonna just write these things that you're learning throughout on the side of the board, maybe up here. So what did we just learn? We learned, if we have pure strategies, if we have pure strategies, all right, going second is better. That sounds intuitive and right. [NOISE] Okay. So far so good. Okay? So the question that I wanna think about right now is, what if we have mixed strategies? What's going to happen if we have mixed strategies? Are we gonna get the same thing? Like, if you have mixed strategies, is going second better, or is it worse, or is it the same? So, so that's the question we're trying to answer. Okay? So, so let's say Player A comes in, and Player A says, "Well, I'm gonna reveal my strategy to you. What I'm gonna do is I'm going to flip a coin, and depending on what it comes up, I'm either going to show you 1, or I'm gonna show you 2. That's what I'm gonna tell you-- tell you that's what I'm gonna do." Okay. So, so what would be the value of the game under that setting? So the value of the game, uh, would be-- maybe I'll write it here. So the value of Pi A and Pi B. Pi A is already this mixed strategy of one-half, one-half, right? All right. So what is that going to be equal to? It's going to be Pi B choosing 1, times one-half-- the probability Agent A is picking 1-- and if it is 1, 1, we're gonna get 2, right? Plus Pi B choosing 1, times one-half-- Pi A choosing 2-- and then we're gonna get minus $3. Plus Pi B choosing 2, times one-half-- Pi A choosing 2.
we get $4. Plus pi B choosing 2, times the probability one-half that agent A chooses 1, and that's minus $3. So I just iterated over all four outcomes we can get here, under pi B choosing 1 or 2, with pi A always at one-half, because A is following this mixed strategy. So what is this equal to? It's equal to minus 1 over 2 times pi B of 1, plus 1 over 2 times pi B of 2. Okay. So that's the value. Okay? So again, the setting is: agent A came in and told me, "I'm following this mixed strategy; this is the thing I'm going to do." What should I do as agent B? What should I do as agent B? You always want to pick 1. Okay, so that was quick. You always [LAUGHTER] do 1. But why is that? Well, if agent A comes and tells me, "This is the thing I'm going to do," I should try to minimize agent A's value, right? So what I'm really trying to do as agent B is minimize this, because I don't want agent A to get anything. If I'm minimizing this, I'm trying to come up with a policy that minimizes it. The pi's are probabilities, so they're nonnegative numbers, and I have a negative part and a positive part here. The way to minimize this is to put as much weight as possible on the negative side and as little as possible on the positive side. So that tells me: never show 2, and always show 1. Does everyone see that? So the best thing I can do as agent B is to follow a pure strategy that always shows 1 and never shows 2. Okay. So this was kind of interesting, right? If someone comes in and tells me, "This is the mixed strategy I'm going to follow," I have a solution in response to that, and that solution is actually always going to be a pure strategy. So that's kind of cool. All right. This is actually what happens in the more general case. I'm going to make a lot of generalizations in this lecture: I'll show you one example and generalize it, but if you're interested in the details, we can talk about them offline. So the setting is: for any fixed mixed strategy pi A, where agent A has told me what their mixed strategy is, what I should do as agent B is minimize that value. I should pick the pi B that minimizes that value, and that minimum can be attained by a pure strategy. So the second thing we've learned here is: if player A plays a mixed strategy, player B has an optimal pure strategy. And that's kind of interesting. [NOISE] Right.
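As a quick illustration of that claim, here is a minimal Python sketch (my own, not from the lecture; the payoff table encodes the two-finger Morra values on the board) that evaluates the game value for agent A's revealed 50/50 strategy and finds agent B's best pure response:

```python
# Minimal sketch (illustrative, not lecture code): two-finger Morra payoffs
# to agent A, and B's best response to A's revealed 50/50 mixed strategy.
payoff = {(1, 1): 2, (1, 2): -3, (2, 1): -3, (2, 2): 4}

def game_value(pi_a, pi_b):
    # Expected payoff to A: sum over joint actions of probability * payoff.
    return sum(pi_a[a] * pi_b[b] * payoff[(a, b)]
               for a in (1, 2) for b in (1, 2))

pi_a = {1: 0.5, 2: 0.5}  # agent A's revealed mixed strategy

# Compare B's two pure strategies and take the minimizer.
values = {b: game_value(pi_a, {b: 1.0, 3 - b: 0.0}) for b in (1, 2)}
print(values)                       # {1: -0.5, 2: 0.5}
print(min(values, key=values.get))  # 1, i.e. B should always show 1
```

Running it reproduces the minus 1 over 2 value and confirms that B's optimal response is the pure strategy of always showing 1.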
Okay. So in this case, we still haven't decided what the policies should be, right? We've been talking about the setting where agent A comes in and tells us what their policy is, and we know how to respond to it: with a pure strategy. Okay? So now we want to figure out what this mixed strategy should actually be. I want to think about it more generally, so I want to go back to those two diagrams and modify them in a way where we talk about things a little more generally. Maybe, yeah, I'll just modify these. Okay. So let's consider both settings. Let's say again that player A decides to go first and to follow a mixed strategy. That's all we know; we don't know which mixed strategy. This is player A; player A is maximizing. The way I'm writing that mixed strategy more generally is: player A shows 1 with probability p and shows 2 with probability 1 minus p, for some value p. Okay? And then it's player B's turn. We have just seen that the best thing player B can do is play a pure strategy, so player B either picks 1 with probability 100% or picks 2 with probability 100%. Yes? If those [inaudible] terms were the same, then player B following a mixed strategy would be just as good; it would be the same as any pure strategy, does that make sense? For those blue terms on the board right there. Yeah: if those blue terms were the same, then player B could follow any kind of strategy, right? But the thing is, strategies are probabilities, values from 0 to 1, and you always end up with this negative term that you're trying to make as negative as possible and this positive term that you're trying to keep as small as possible. That's intuitively why you end up with a pure strategy. And by a pure strategy, what I mean is that you end up putting all of your probability on the negative term and none on the positive term, because you're trying to minimize this. One-half and one-half? So you wouldn't get one-half and one-half; that's what I mean. If you had one-half and one-half, that would be a mixed strategy, not a pure strategy. And I'm saying you wouldn't get a mixed strategy, because to minimize this you end up pushing all your probability onto the negative term, okay. All right, so let me go back to this. We have the setting where player A goes first, following a mixed strategy with p and 1 minus p, and player B follows a pure strategy, either 1 or 2; I don't know which one yet. So what happens is: on the branch where B shows 1, with probability p, A shows 1 and that gives me value 2, so it's 2 times p; and with probability 1 minus p, A shows 2 while B shows 1, so we get minus 3, that's minus 3 times 1 minus p. Okay? And on the other branch, where B shows 2: with probability 1 minus p, A shows 2, and then I get 4, so that's 4 times 1 minus p; and with probability p, A shows 1 while B shows 2, so that's minus 3p. Okay. All right. So what are these equal to? This one equals 5p minus 3, and that one equals minus 7p plus 4. Okay? So I'm talking about this more general case. Player A comes in, plays first, and follows a mixed strategy, but doesn't know what p to choose; they're choosing some p and 1 minus p here. And then player B has to follow a pure strategy; that's what we decided. Under that, we either get 5p minus 3 or minus 7p plus 4, okay? What should player B do here? This is player B, and this is a min node. What should player B do? Should player B pick 1 or 2?
Player B should pick whichever minimizes between these two. All right? So player B is going to take the minimum of 5p minus 3 and minus 7p plus 4, okay? What should player A do? What should player A do? Think minimax, right? When you think about minimax, player A is maximizing the value, so player A is going to maximize the value that comes up here. And player A also needs to decide what p they're picking, so they pick the p that maximizes that. Is this clear? [inaudible] Like these computations? Yeah, so these are the four different entries in my payoff matrix. I'm saying: with probability p, A shows me 1, and we go down the branch where B also chooses 1; if both of us show 1, I get $2. That's where the $2 comes from, times probability p. With probability 1 minus p, A shows me 2 while I show 1; that's minus $3, times probability 1 minus p. So for this particular branch, the payoff is 5p minus 3. Does that make sense? And then for this side, again: with probability 1 minus p, A shows me 2; if both of us show 2, I get $4, so that's 4 times probability 1 minus p. With probability p, A shows me 1 while I show 2, so I lose $3; that's minus 3 times probability p. So that's minus 7p plus 4. Okay. And then the second player minimizes between these two values and picks 1 or 2. They're deciding, "Should I pick 1 or should I pick 2?", and the way they decide is by picking whichever one minimizes these two values. But I'm writing it using this variable p that isn't decided yet, and this variable p is the thing player A needs to decide. So what p should player A pick? Player A should pick the p that maximizes this. So I'm writing, literally, a minimax relationship here. Okay? All right. So the interesting thing here: 5p minus 3 is a line with positive slope, and minus 7p plus 4 is another line, with negative slope. What is the minimum of these two lines, and where does it happen? Where they meet each other, right? That crossing point is going to be the minimum of the two. Okay? So the p that I'm going to pick is actually the value of p where these two are equal to each other, and setting 5p minus 3 equal to minus 7p plus 4 gives 12p equals 7, so it happens at p equals 7 over 12. And the value there is minus 1 over 12. Right?
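To make that concrete, here is a short sketch of my own (assuming the branch values 5p minus 3 and minus 7p plus 4 derived above) that finds the p maximizing the inner minimum:

```python
# Sketch: A reveals p, B best-responds with a pure strategy, so A's value
# is min(5p - 3, -7p + 4); A picks the p maximizing that minimum.
def inner_min(p):
    return min(5 * p - 3, -7 * p + 4)

# Closed form: the lines cross where 5p - 3 = -7p + 4, i.e. 12p = 7.
p_star = 7 / 12
print(p_star, inner_min(p_star))  # 0.5833..., -0.0833... = -1/12

# Numeric check over a grid of candidate p values:
best_p = max((i / 1000 for i in range(1001)), key=inner_min)
print(best_p, inner_min(best_p))  # close to 7/12 and -1/12
```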
So okay, so let's recap. What did I do? I'm talking about the simultaneous game, but I'm relaxing it and making it sequential. A plays first, B plays second. A, playing first, decides to choose a mixed strategy. Maybe A says one-half, one-half, but maybe A doesn't want one-half, one-half and wants to come up with some other probabilities. So the thing A is deciding is: "Should I pick 1 with probability p and 2 with probability 1 minus p, and what should that p be? With what probability should I pick 1?" That's what A is trying to decide here. Okay? So whatever A decides with p and 1 minus p ends up in two different results, and based on them, B tries to minimize. When B minimizes, B is minimizing between these two linear functions. These two linear functions meet at one point; that is the point where the minimum is largest, and it corresponds to the p-value A picks when A tries to maximize this. This requires a little bit of thinking, but: any clarification questions? I see a lot of lost faces, so... [LAUGHTER] By having, um, [inaudible]. Yeah, and the interesting point there is exactly right. And by the way, A is still losing. Even in this case, where A comes up with the best mixed strategy it can, which is to show 1 with probability 7 over 12 and show 2 with probability 5 over 12 (this comes from here), even under that scenario, A is losing, and A is losing 1 over 12. Okay? All right. Okay. But I haven't solved the simultaneous game yet, right? I've talked about the setting where A plays first. So what if B plays first? I'm going to swap this. What if B plays first? So A goes second, and B plays first; I'm going to modify this one now. Okay, B goes first, A goes second. B starts by revealing their strategy, and the strategy B reveals is, again: with probability p I show you 1, and with probability 1 minus p I show you 2. Then A plays. A is trying to maximize, and A will play a pure strategy, because of what we saw: the best thing A can do is a pure strategy. So A is always going to show either 1 or 2, and A is deciding which one, but doesn't know yet. And the values here are exactly the same as over there: 5p minus 3 and minus 7p plus 4. Okay? All right. So what's happening here? In this case, A plays second. What A would like to do is maximize between 5p minus 3 and minus 7p plus 4. B goes first, so B has to minimize that and pick a p that minimizes it. Okay? So these are exactly the same two lines, but now I'm taking the maximum of them. The maximum of these two lines ends up at exactly the same point as before, exactly the same p as before, and gives you exactly the same value as before. So this is also equal to minus 1 over 12. What this is telling me is: if you're playing a mixed strategy, even if you reveal your best mixed strategy at the beginning, it doesn't matter. It actually doesn't matter whether you go first or second. So in the Morra game, if you were playing a mixed strategy and told your opponent, "This is the mixed strategy I'm going to use," and it was the optimal one, it wouldn't matter whether they knew it or not; you'd still get the same value. So again, you get 5p minus 3 and minus 7p plus 4, and now you're minimizing the maximum of these two lines; the maximum ends up at the same point, you pick the p that optimizes it, and you get the same value. So this is called von Neumann's theorem.
So von Neumann: this whole thing we just did over one example, there is a theorem about it. It says that for every simultaneous two-player zero-sum game with a finite number of actions, the order of players doesn't matter. Whether B plays second or B plays first, the values are going to be the same; the minimum of the maximum and the maximum of the minimum of that value are the same thing. Okay? So this is the third thing we've just learned, von Neumann's theorem, and I'm writing a simpler, shorter version of it: if playing mixed strategies, the order of play doesn't matter. And remember, if you play a mixed strategy, your opponent is going to play a pure strategy, because that's the first point we had before. If you play a mixed strategy, your opponent is going to follow a pure strategy, either 1 or 2 with probability 1. [NOISE] But with probability p, if we're doing this ordering, one of the two answers will come out, [inaudible] it'll be either 1 or 2, and then in that case the second [inaudible]. So in this case, yeah, the thing is that these two end up being equal. It doesn't matter, because the way for you to maximize this is to land at the point where the two branches end up equal. If you actually plug in p equal to 7 over 12 here, these two values end up being equal. Equal, right? [inaudible]. [OVERLAPPING] They end up equal, and the reason they end up equal is that you are trying to minimize the thing that the other player is trying to maximize. So you try to pick the p that makes them equal, and then no matter what your opponent does, you get the best thing you can. So think of it like this. Okay. I'm player A, and I still have a choice; my choice is to pick a p. I want to pick a p with which I won't lose as much. What p should I pick? I should pick a p that makes these choices the same. Because if I pick a p that makes this one higher than that one, of course the second player is going to make me lose and go down the route that's better for the second player. So the best thing I can do here is make these two as equal as possible; then whatever the second player chooses, 1 or 2, it's going to be the same thing. Does that make sense? So in expectation, you're multiplying by p and 1 minus p, as you were saying, like if the [inaudible]. [OVERLAPPING] In expectation: you're saying when you are choosing p? Yes. I'm treating p as a variable that I'm deciding, right? p is the thing I've got to decide. So I'm player A, and I've got to decide on a p that's not going to be too bad for me. Say I picked a p that didn't make these things equal; say it made this one 10 and that one 5. The second player is of course going to make me lose and pick the thing that's worst for me. So the best thing I can do is make both of them, I don't know, 7, so it's not as bad. That's kind of the idea.
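Since the claim is easy to test numerically for this game, here is a small sanity-check sketch of my own (a grid search, not a proof) that the max-min and min-max values coincide:

```python
# Sketch: check order-invariance for the two branch values derived above.
lines = [lambda p: 5 * p - 3, lambda p: -7 * p + 4]
grid = [i / 10000 for i in range(10001)]

a_first = max(min(f(p) for f in lines) for p in grid)  # A reveals p, B responds
b_first = min(max(f(p) for f in lines) for p in grid)  # B reveals p, A responds
print(a_first, b_first)  # both approach -1/12 = -0.0833... as the grid refines
```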
All right, so let me move forward, because there are still a bunch of things happening. Okay. So the key idea here is that revealing your optimal mixed strategy does not hurt you, which is kind of a cool idea. The proof of that is interesting; if you're interested, take a look at the notes, because you can use linear programming here. The intuition behind it is: if you're playing a mixed strategy, the next person has to play a pure strategy, and they have n possible options for that pure strategy. That creates n constraints that you put into your optimization, so you end up with a single optimization problem with n constraints, and you can use linear programming duality to actually solve it. So you can compute this using linear programming; that's the idea there. So let's summarize what we've talked about so far. We've talked about these simultaneous games. With pure strategies, we saw that going second is better; going second is better when your opponent just tells you the pure strategy they're using, right? That was the first point up there. And with mixed strategies, it turns out it doesn't matter whether you go first or second: you tell them your best mixed strategy, they respond to it, and the value is the same. That's von Neumann's minimax theorem. Okay? All right. For the next 10 minutes, I want to spend a little time talking about non-zero-sum games. So far we've talked about zero-sum games, where it's minimax: I get some reward and you get the negative of that, or vice versa. There are also these other things called collaborative games, where we are both just maximizing the same thing, so we both get money out of it, and that's like a single maximization; you can think of it as plain search. In real life, you're usually somewhere in between, and I want to motivate that with an example, the idea of the Prisoner's dilemma. How many of you have heard of the Prisoner's dilemma? Okay. Good. So the idea of the Prisoner's dilemma is that a prosecutor asks A and B individually whether they will testify against each other, okay? If both of them testify, then both are sentenced to five years in jail. If both refuse, then both are sentenced to one year in jail. If one testifies, then he or she gets out free, and the other one gets a 10-year sentence. Play with your partner real quick. [NOISE] All right. [LAUGHTER] Okay. Okay, so let's look at the payoff matrix; I think you now have an idea of how the game works. Is that A or B? So we have two players, A and B. Each of you has an option: you can either testify or refuse to testify. So B can testify while A refuses to testify, and so on, and I'm going to build this payoff matrix. This payoff matrix now has two entries in each of these cells. And why is that? Because we have a non-zero-sum game. Before, our payoff matrix had only one entry, because it was for player A, and player B would just get the negative of that. But now players A and B get different values. So if both of us testify, then both of us get five years of jail: A gets five years, B gets five years. Right? If both of us refuse, A gets one year of jail and B gets one year of jail.
And if one of us testifies and the other refuses, one of us gets 0 and the other gets 10 years of jail. So if A refuses to testify while B testifies, A gets 10 years of jail right away and B gets 0; and in the other case, A gets 0 and B gets 10. Okay? So now we're going to have a payoff matrix for every player. We have this value function which is a function of the player: for policy pi A and policy pi B, it's the utility for one particular player, because you might be looking at the game from the perspective of different players. Okay? So von Neumann's minimax theorem doesn't really apply here, because we don't have a zero-sum game. But you do get something a little bit weaker, and that's the idea of a Nash equilibrium. A Nash equilibrium is a set of policies pi star A and pi star B such that no player has an incentive to change their strategy. So what does that mean? It means that the value function from player A's perspective at the Nash equilibrium, at pi star A and pi star B, is greater than or equal to its value at any other policy pi A, with pi B held fixed. And at the same time, the same thing is true for B: agent B's value at the Nash equilibrium is greater than or equal to its value at any other pi B, with pi A held fixed. Okay? So what does that mean in this setting? Do we have a Nash equilibrium here? Let's say I start from here: A equal to minus 10, B equal to 0. Can we improve on this? Or did I flip them? [NOISE] Okay, flipped, right: 0, minus 10, er, minus 10, 0. Okay. So let's say I start from here. Can we make this better? Starting from this cell, A gets 0 years of jail; that's pretty good. B gets 10 years of jail; that's not that great. So B has an incentive to change that, right? B has an incentive to move in this direction and get 5 years of jail instead of 10 years. Similar thing here. What if we start here? A has 1 year of jail, B has 1 year of jail. Now A has an incentive to change and get 0 years of jail, and B has an incentive to change and get 0 years of jail. And we end up in this cell, where neither of us has any incentive to change our strategy. So we have one Nash equilibrium here, and that Nash equilibrium is both of us testifying and both of us getting 5 years of jail. That's kind of interesting, because there is a socially better choice here, right? If both of us would refuse, we would each get 1 year of jail, but that's not a Nash equilibrium. Okay? All right. So there's a theorem, Nash's existence theorem, which basically says that in any finite-player game with a finite number of actions, there exists at least one Nash equilibrium, and in particular at least one mixed-strategy Nash equilibrium. In this case, it's actually a pure-strategy Nash equilibrium, but in general, there is at least one Nash equilibrium in any game of this form. Okay?
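To see the "no incentive to deviate" condition concretely, here is a brute-force sketch of my own over the pure strategy profiles of the payoff matrix on the board (utilities are jail years written as negatives):

```python
# Sketch: enumerate pure strategy profiles and test the Nash condition.
actions = ["testify", "refuse"]
payoff = {  # (a, b) -> (A's utility, B's utility)
    ("testify", "testify"): (-5, -5),
    ("refuse",  "refuse"):  (-1, -1),
    ("testify", "refuse"):  (0, -10),
    ("refuse",  "testify"): (-10, 0),
}

def is_nash(a, b):
    u_a, u_b = payoff[(a, b)]
    # No unilateral deviation should improve the deviating player's utility.
    return (all(payoff[(a2, b)][0] <= u_a for a2 in actions)
            and all(payoff[(a, b2)][1] <= u_b for b2 in actions))

print([(a, b) for a in actions for b in actions if is_nash(a, b)])
# [('testify', 'testify')], the equilibrium we found on the board
```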
All right. So let's look at a few other examples. Two-finger Morra: what would be the Nash equilibrium for that? We actually just solved it using von Neumann's minimax theorem, right? It's both players playing the mixed strategy of 7 over 12 and 5 over 12. Now, you might modify your Two-finger Morra game and make it collaborative. In a collaborative setting, what that means is: we both get $2, or we both get $4, or we both lose $3. So a collaborative Two-finger Morra game is not a zero-sum game anymore, and you have two Nash equilibria: a setting where A and B both play 1 and the value is 2, or A and B both play 2 and the value is 4. Okay? And then Prisoner's dilemma is the case where both of them testify; we just saw that on the board. All right. Okay. So the summary so far: we have talked about simultaneous zero-sum games, and we talked about von Neumann's minimax theorem, where there can be multiple minimax strategies but a single game value, right? We had a single game value because the game was zero-sum. In the case of non-zero-sum games, we have something slightly weaker, Nash's existence theorem: we could have multiple Nash equilibria, and we also have multiple game values, depending on whose perspective you are looking from. So this was just a brief, short introduction to game theory and econ. There's a huge literature around different types of games in game theory and economics; if you're interested in that, take classes. And there are other types of games, like security games or resource allocation games, that have some characteristics similar to the things we've talked about. If you're interested in any of them, take a look; they could be useful for projects. And with that, I'll see you guys next time.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_9_First_Order_Resolution_Stanford_CS221_AI_Autumn_2021.txt
OK, so in this module, we are going to be talking about resolution for first-order logic. This is an optional module, but I think it's interesting to think about how we could apply resolution when we have this more complicated logic, first-order logic. So far, we have talked about syntax and semantics, and we have talked about modus ponens when we have Horn clauses in first-order logic. Now we want to extend the idea of applying inference rules to settings where we don't necessarily have Horn clauses. If you think about first-order logic, it's not really limited to settings where we have Horn clauses; we sometimes have non-Horn clauses. Here is an example: for all x, x is a student implies x knows some y. OK? So this "some y," this there-exists-y here, is going to create a non-Horn clause. And why is that? Because an existential quantifier is really a glorified "or"; it's a glorified disjunction. So what this is basically giving us is Knows(x, y1) or Knows(x, y2), and so on, and that creates an "or" on the right-hand side of the implication. And that makes this particular statement a non-Horn clause, OK? So what does that mean? That means that I can't just apply modus ponens to it, OK? So what can we do here? The high-level strategy here is that we have this first-order logic formula, and first we need to convert it to CNF, conjunctive normal form. This is similar to before: even in propositional logic, when we had something that wasn't a Horn clause, we started by writing it in CNF form. OK? And then, after that, we repeatedly apply the resolution rule to it. Our resolution rule here is going to be slightly different from the resolution rule that we had in propositional logic, because, similar to modus ponens, we need to do unification and substitution. So we change our resolution rule to have that element of unification and substitution in it, OK? Converting to CNF is also not exactly like converting to CNF in propositional logic. There are going to be a few new things, and I'm going to attempt to give you some ideas around them. But in general, I'm just giving a high-level strategy for how you apply resolution to first-order logic; this is not a complete explanation of it, and in general it gets a little bit messy when you think about applying resolution to first-order logic. So think of this as a big-picture, high-level strategy and overview for applying resolution here, OK? All right, so let's start with a formula. Let's say this is our formula: for all x, (for all y, y is an animal implies x loves y) implies there exists a y such that y loves x. OK? So this is some statement, some formula, and what we would like to do is convert it to CNF form. So what does CNF form look like in first-order logic? At the end of the day, the output is going to look something like this: it's going to be an "and" of a bunch of clauses. These are clauses because they have "or"s between their literals. And in addition to that, we have these new functions, this capitalized Y or capitalized Z. These are called Skolem functions, and I'm going to talk about what they are in a few slides, OK? So there are a few things that are new when we think about the CNF form.
The first thing is that all the variables I have in this form are actually universally quantified: there is a for-all-x here that I have just dropped, OK? So in reality there is a for-all-x at the front. And then there are these Skolem functions, which stand in for the things that were existentially quantified; they basically represent existential quantifiers here, and they're functions of this x that has the for-all-x on it. OK? So those are the two new things that happen in order to get a CNF form of this first-order logic formula. Let's actually go through an example. Let's start with this statement that says: anyone who likes all animals is liked by someone. OK? One can write this as an input that says: for all x, (for all y, y is an animal implies x loves y) implies there exists a y such that y loves x. OK? All right, so the first thing to do, similar to before if you want to follow the steps of converting to CNF, is to eliminate implications. I'm going to eliminate this outside implication. How do I eliminate it? I take the negation of what comes before it, so: negation of everything up until here, or the rest of the statement. I'm also going to replace the inner implication by the negation of its first part or its second part. And we get this statement, OK? Now I'm going to push negations inward and eliminate double negations. This is exactly what we have done before. So let me push the negations inside, all the way down to a negation of Loves, and now we have ended up with this formula, where we have these quantifiers, the for-alls and the exists and so on, and everything else is an atomic formula. Remember, before, when we were converting things to CNF, we would end up with propositional symbols that could appear as positive or negative literals. But here we have atomic formulas, so we end up with these atomic formulas, or negations of these atomic formulas, OK? So now, one thing that is new is standardizing the variables here. We have a y here and a y here, but there is a separate existential quantifier on each of them, and these y's are treated as local variables. So, in order to avoid confusion, we're going to define a new variable for each of them: I'm going to define a z here and keep this one as y. And again, the reason I'm doing this is that, at the end of the day, I'm removing this for-all-x, and I want to make sure that this y is a function of x and this z is a function of x, and that these are two different local variables, OK? All right, so standardizing variables is a new step that is done here, OK? Now that we are left with this formula, what we're going to do is replace all these existentially quantified variables with something that's called a Skolem function, OK? Before, we had "there exists a y," and this there-exists-y depends on x, too, right? For all x, there exists a y: so this y is really a function of x. That's the Skolem function, the Y function of x, and likewise Z is a function of x.
So I'm going to write these Skolem functions as functions of the variable that is universally quantified, and then I'm going to just drop that quantifier; I'm going to drop this for-all-x later on, which makes my life easier. And then, finally, I need to distribute "or" over "and," so I can end up with clauses in conjunctive normal form. This is a step similar to what we had before in propositional logic. Then I remove the universal quantifiers, and this is what I end up with: the formula in CNF form in first-order logic. OK? So, just to recap what is new in it: we have Skolem functions, which represent existential quantifiers, and variables that are universally quantified; I've also dropped the universal quantifier on all my variables here, OK? So those are the core differences here, OK? So now we are ready to talk about resolution. Now that we can write our first-order logic formulas in CNF form, we can write the resolution rule as follows. We have these atomic formulas, F1 or ... or Fn or P, and then we have another thing in our set of premises, not Q or G1 or ... or Gm. And notice that P and Q could be different things, because they might just look different from each other. So what we do is unify P and Q. When we unify P and Q, we get a substitution theta. And then what we can infer, what we can derive here by resolution, is the substitution theta applied to the disjunction F1 or ... or Fn or G1 or ... or Gm. We are basically canceling out P and Q with each other, but the reason we can do that is that we have unified P and Q with the substitution theta; so in this new formula, we apply the substitution theta. This is similar to the substitution and unification that we did for modus ponens; we are just doing it now in resolution, on these CNF clauses that we have just created. OK? Let's give an example here. Let's say that I have two CNF clauses: Animal(Y(x)) or Loves(Z(x), x), and not Loves(u, v) or Feeds(u, v). OK? So Loves and the negation of Loves are the things that I would like to be able to unify. If I unify these two, then I come up with a substitution that says: substitute the function Z(x) for the variable u, and substitute the variable x for the variable v, OK? And at the end of the day, the thing that I am inferring, that I am deriving here, basically cancels out these two literals, and it gives Animal(Y(x)) or Feeds(u, v); but I'm not going to leave u and v in there anymore. Why is that? Because I'm applying the substitution theta: I'm substituting Z(x) for u and x for v. So the thing that I end up deriving is Animal(Y(x)) or Feeds(Z(x), x). OK, so there's quite a bit of symbol manipulation going on here, but you kind of get the gist of it. It's very similar to the resolution that we have seen so far, combined with unification and substitution over these new clauses, these new CNF clauses, that we have talked about. And that summarizes how we do inference using resolution in first-order logic.
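To give a flavor of the unification step, here is a small Python sketch of my own (a simplification, not the module's code): variables are lowercase strings, function applications like Z(x) are tuples, and this handles just enough cases for the Loves example; a full unifier would also need an occurs check.

```python
# Sketch of unification for atomic formulas represented as nested tuples.
def unify(t1, t2, theta):
    # Return a substitution extending theta that unifies t1 and t2, or None.
    if theta is None:
        return None
    if t1 == t2:
        return theta
    if isinstance(t2, str) and t2.islower():   # t2 is a variable
        return unify_var(t2, t1, theta)
    if isinstance(t1, str) and t1.islower():   # t1 is a variable
        return unify_var(t1, t2, theta)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):               # unify name and arguments
            theta = unify(a, b, theta)
        return theta
    return None

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    return {**theta, var: t}

# Unify Loves(Z(x), x) with Loves(u, v):
print(unify(("Loves", ("Z", "x"), "x"), ("Loves", "u", "v"), {}))
# {'u': ('Z', 'x'), 'v': 'x'}, matching the substitution in the example
```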
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Stanford_Talk_Inequality_in_Healthcare_AI_Data_Science_to_Reduce_Inequality_Improve_Healthcare.txt
All right. Let's get started. So welcome, everyone. We're really pleased to have Emma Pierson with us today. Emma actually comes from Stanford; well, not originally, but she spent her undergrad and grad school at Stanford. She actually took one of my classes when I first started at Stanford. Emma has done a lot of great work in machine learning, in particular addressing fairness. My group has done some work in fairness, and we always go and ask Emma, because she's kind of our go-to resident expert on the topic. Since graduating, she's been spending a year at MSR New England before starting as an assistant professor at Cornell next year. So I'm sure she'll have a lot of interesting and important things to say, along with the general theme that we've been going for in these classes, which is how AI really matters and affects people's lives. So please take it away, Emma. Thank you. Thank you for this invitation. It's a pleasure to be here, to be back at Stanford, if only virtually, and actually to be back specifically in CS 221, which is actually the first computer science class I ever took at Stanford. So it brings back fond memories. And I'm not just saying that to suck up to the professors. OK. So today I'm going to be giving basically a two-part talk. In the first part of the talk, I'm going to give an overview of some of the recent projects that I've worked on, discussing the social implications of AI and trying to use it to improve people's lives. And then I'm going to tell a little bit of the story of how I got here, just in case it's useful to you as you're trying to unravel your own professional choices. So at a high level, as Percy said, I use AI and data science for very practical applications, and the specific applications I focus on are reducing inequality and improving healthcare. Today, I'm going to be talking about using AI to study inequality in three areas. First, I'm going to tell you a story about policing and how we can use AI to study inequality in policing. Then I'll talk about using AI to study inequality in pain. And then finally, I'll talk about using it to study inequality in COVID-19. So let's jump right into it. Let's talk about policing. This is joint work with a number of excellent coauthors whose names I will now attempt to rattle off: Camelia, Jan, Sam, Dan, Amy, Vignesh, Cheryl, Phoebe, sorry, Ravi, and Sharad. So it was quite a large project and the effort of a ton of people. So why is policing something we care about? I think this year, that point doesn't really need to be explained, right? It's obvious that policing has a tremendous impact on communities across the United States, and in fact, it's one of the leading causes of death for young men, particularly for young African-American men. And today I'm going to be talking to you about police traffic stops. Why do we care about police traffic stops? Well, they're one of the most common ways we interact with police; tens of millions of Americans are stopped every year. And there's concern that traffic stops may be racially discriminatory. To be clear about what I mean by racial discrimination, and I'll make this more precise in a couple of slides, this is when someone is treated more negatively because of their race: someone is stopped by police because they're Black, say, and wouldn't have been stopped had they been driving the same way in the same car but been white. Now, this is obviously very bad if it's happening.
But it's hard to statistically test for. Let's talk about why. The first challenge we confronted when we embarked on this project is that there was no unified dataset tracking every stop made by the police. Rather, the way the data is stored is that each department keeps data in its own little system, in its own idiosyncratic format. So we set about creating this dataset, and we did so in two stages. In the first stage, our journalist collaborators submitted data requests to more than 150 police departments over the course of five years. This was a colossal amount of work for them; journalists are amazing collaborators. Now, of course, the data comes pouring in and you have this nightmarish data standardization task, where every single dataset is in a different format. So we put in thousands of hours to clean up the data and put it into a standard format. Now, the good news for you is that we've made all this data available. So if you're looking for interesting datasets on inequality or on policing, this is a publicly available resource which is easy to download. The full dataset tracks some 227 million stops made across 56 city agencies (that's something like the San Francisco Police Department) and 33 state agencies (that would be something like the California Highway Patrol). And in the main analysis I'll be talking to you about today, we're going to be analyzing 95 million stops. The reason that number is somewhat smaller is that, for example, we have to filter for departments that have enough data to do this analysis at all: if a department doesn't track the race of stopped drivers, it's very hard to analyze racial discrimination. So in our analysis, we look at three questions. We look at whether the police discriminate in whom they stop in the first place; we look at whether they discriminate in whom they search after stopping them; and then we look at how policy changes affect these things. Today, I'm only going to be talking to you about the second question, both because it's particularly interesting from a data science and AI methods standpoint, and because the methods I'll be describing are applicable to studying bias in many other human decisions, as I'll describe. So: are police searches discriminatory? A little bit of context on police searches. After the police stop a driver, they're allowed to conduct a search in order to find contraband. Contraband here means things you're not supposed to be carrying, like illegal drugs, weapons, et cetera. The purpose of a search is to find contraband; they're not supposed to search you just because they're curious or because they're trying to harass you. So because the purpose of a search is to find contraband, we're going to test whether minorities are searched at a lower threshold of evidence, that is, even when they are less likely to be carrying contraband. If police are searching white drivers, for example, only when they are 40% likely to carry contraband, but they're searching Black drivers when they're only 20% likely to carry contraband, those different thresholds would be discrimination under our definition of discrimination. Importantly, this is only one way the police can discriminate. There are a lot of other problematic things the police can do, as we've seen this year, of course. We're testing for a very specific type of police discrimination; this is not comprehensive.
So the first simple test of whether the police discriminate in whom they search is to look at search rates, in other words, how likely someone is to be searched after a stop. The results of this analysis for our data are shown in the graph at right: state patrol stops on the left, city stops on the right, with the average search rate across locations on the y-axis. And you can see that there are very big gaps in this plot, with Black and Hispanic drivers much more likely to be searched after a stop than white drivers. But this by itself does not prove that the police are being discriminatory, i.e., applying different thresholds on the basis of race. It's possible that some groups are more likely to carry contraband, drugs, weapons, whatever. The purpose of a search is to find contraband, so if some groups are more likely to carry it, police may be more likely to search them even in the absence of applying different thresholds on the basis of race. So a second simple test that's been proposed to get around this problem is to look not at the rates of searches but at the outcomes of those searches. This is called an outcome test. The idea is that you look at how likely a search is to find contraband; we call that the hit rate. This was proposed by Becker and other economists; it's decades old, and it's a very common test in the economics literature. The intuition behind this test is: look, if searches of white drivers are finding contraband 90% of the time, but searches of Black drivers are finding contraband only 10% of the time, it suggests the police are searching white drivers only when they're very likely to carry contraband, but searching Black drivers on the basis of relatively little evidence, which is indicative of discrimination. So if there are differences in the hit rates by race, that's discrimination under the outcome test. And when you do this analysis on our data, you do indeed see that hit rates are lower for Black and Hispanic drivers, in both state stops and city stops, than they are for white drivers, suggesting discrimination against minority groups. But it turns out that there's a flaw in the outcome test as well, and this is called infra-marginality. I'm going to illustrate it with a simple hypothetical example; it's totally hypothetical, and these numbers are made up. Imagine there are two races, Black drivers and white drivers. And imagine among each race there are two groups: those who are very likely to carry contraband and those who are quite unlikely. And these groups are easy to tell apart; maybe one of them is wearing blue hats. Among the likely group, 50% of Black drivers carry contraband and 75% of white drivers carry contraband. Among the unlikely group, 5% carry contraband regardless of their race. And importantly, imagine in this hypothetical example that the police are not being discriminatory: they search everyone who is more than 10% likely to carry contraband, applying the same threshold irrespective of driver race. What are the hit rates for white and Black drivers going to be in this hypothetical example? Well, the police are going to search all the likely drivers, and they're going to end up with a hit rate of 50% for Black drivers and 75% for white drivers. So from that difference in hit rates we would conclude that there's discrimination in this hypothetical example. But that's a misleading conclusion, because by assumption we're applying the same threshold to both groups.
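The hypothetical is easy to reproduce in a few lines. This sketch is my own (the equal subgroup weights are an assumption the example leaves unspecified); it shows one shared threshold producing different hit rates:

```python
# Sketch of the infra-marginality example: same threshold, unequal hit rates.
groups = {
    # race -> list of (subgroup weight, P(carrying contraband))
    "black": [(0.5, 0.50), (0.5, 0.05)],
    "white": [(0.5, 0.75), (0.5, 0.05)],
}
threshold = 0.10  # identical for both races: no discrimination by design

for race, subgroups in groups.items():
    searched = [(w, p) for w, p in subgroups if p > threshold]
    search_rate = sum(w for w, _ in searched)
    hit_rate = sum(w * p for w, p in searched) / search_rate
    print(race, search_rate, hit_rate)
# black 0.5 0.5
# white 0.5 0.75  -> the outcome test wrongly suggests discrimination
```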
So why is this happening? Why are we getting this misleading result? Well, it's happening because the statistic we're looking at, the probability of carrying contraband conditional on being above the threshold, is not the same as what we actually care about, which is the threshold itself. These are simply different quantities. The threshold itself is hard to infer; it's not directly measurable from the data the way the hit rate is. So the solution that's been proposed is to use a Bayesian latent variable model to try to infer this threshold. So I'll tell you about that now. Before I do, though, are there any pressing questions, and also, am I talking at an appropriate volume? Cool. So the threshold test proposes a stylized model of a police stop. And when I say stylized, what I mean is: you can never capture all aspects of the real world in math, right? Your hope is that you capture enough relevant aspects of the real world to enable you to measure the quantities of interest. In this case, the thing we want to measure is the threshold at which searches are conducted. So the goal of this model is to estimate the search thresholds which are consistent with the observed data, namely, the search rates and the hit rates. And discrimination, just as before, is if lower search thresholds are being applied in searches of minority drivers. So here's how the threshold test models a police stop. We imagine that when the officer stops someone, they estimate the probability p that that person carries contraband. p captures contextual factors like the age and gender of the driver, how nervous they're acting, et cetera, and it's drawn from a risk distribution, which is shown graphically at right. The risk distribution is a probability distribution on the unit interval, so it ranges from 0 to 1. For example, if the police pull over a bus driver, p is probably quite low, right, because he's driving kids around; hopefully he's not also carrying weapons or drugs. On the other hand, if they pull over a driver who's acting woozy and drinking out of a bottle, that's pretty sketchy, so p is probably higher. Now, in order to fit this model at all, you have to make some assumption about what the risk distributions look like. You can't fit arbitrary probability distributions, because then you would have infinite degrees of freedom. So the parametric assumption that the model makes is that the risk distributions are beta distributions, which is a very standard family of distributions on the unit interval. Now, if p is greater than some threshold, the officer searches the person, and if they search the person, they find contraband with probability p. So in the case of the bus driver, he'd be below the threshold, so the officer wouldn't search him and wouldn't find contraband. In the case of the woozy-acting driver, he would be above the threshold, so the officer would search him and would find contraband with, say, a 75% probability. The model allows the thresholds and the risk distributions to vary by race and location, and discrimination, as before, is if lower thresholds are being applied in searches of minority drivers.
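As an illustration of this generative story, here is a small simulation sketch (my own; the beta parameters and threshold are made-up numbers): draw a risk p from a beta distribution, search whenever p exceeds the threshold, and find contraband with probability p.

```python
# Sketch: simulate the stylized stop model for one race/location.
import random

def simulate(a, b, threshold, n=200_000, seed=0):
    rng = random.Random(seed)
    searches = hits = 0
    for _ in range(n):
        p = rng.betavariate(a, b)     # officer's estimated risk for this stop
        if p > threshold:             # search only above the threshold
            searches += 1
            hits += rng.random() < p  # contraband found with probability p
    return searches / n, hits / searches

print(simulate(2, 8, 0.3))  # (empirical search rate, empirical hit rate)
```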
Now, this being a Bayesian model, in order to fit it at all you have to specify how you go from the unobserved objects to the observed data. So what are the unobserved objects and the observed data here? The unobserved objects are the thresholds, which are the main thing we care about, and the risk distributions. Graphically, that's the dotted line and the blue line in the figure at right. The observed data are the search rates and the hit rates for each race and location. For example, the search rate for Black drivers in Alameda County is 30% and the hit rate is 40%. So how do we go from unobserved to observed? Well, I've shown this graphically at right. The search rate is the amount of the risk distribution that lies above the threshold; graphically, it's the amount of gray mass, and you can also express it as 1 minus the CDF of the risk distribution evaluated at the threshold. This is intuitive, right? It's how much of the risk distribution lies above the threshold. The hit rate is the expected value of the risk distribution conditional on drawing from the gray mass: conditional on drawing from the portion of the risk distribution which lies above the threshold, what's your expected value? So that's how we go from these unobserved objects to the observed data; that's the likelihood portion of the Bayesian model. To complete the Bayesian model specification, you also need a prior: you need to place priors on your parameters. I'm not going to describe that in detail, but basically, to complete the specification, you place priors on the thresholds and the risk distribution parameters. Now, by combining those two things, the likelihood and the prior, you can use standard Bayesian inference to infer the posterior over the parameters, and the specific thing we care about is our best estimate of what those thresholds are, given our observed data. Now, unfortunately, it turns out the story I told you is a little too simple, and fitting this model on a dataset of our size is much, much, much too slow. And the reason goes back to the fact that the risk distributions are beta distributions. In order to compute the search rate and the hit rate, you have to compute the CDF and the conditional mean of the beta distribution, and it turns out that is very slow, especially when you have to compute their gradients [INAUDIBLE], which you have to do to use the [INAUDIBLE]. I'm not going to get into the exact mathematical details of why, but the TLDR is that fitting the entire national dataset is impossible. And perhaps more importantly, the test can't be used by the people who really need it: journalists, police departments, anyone who doesn't have a ton of compute and a ton of grad students. So what we had to do was replace the beta distributions with a new family of probability distributions called discriminant distributions. Describing those distributions in detail is beyond the scope of this talk, although I'm happy to chat with people afterwards if they're specifically interested in probability distributions. But it turns out that this new family of probability distributions makes the test run two orders of magnitude faster, and that makes it feasible to run on a dataset of our size. I guess a high-level takeaway here is that probability distributions are not just something you learn in CS 109 so you can pass CS 109; they're actually quite practically important, and it's worth paying attention to them and thinking about their drawbacks.
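For the beta case specifically, both observables have closed forms via the identity E[p | p > t] = mean * (1 - CDF(t; a+1, b)) / (1 - CDF(t; a, b)). This sketch is mine, not the paper's code, and can be checked against the simulation above:

```python
# Sketch: search rate and hit rate implied by a Beta(a, b) risk distribution.
from scipy.stats import beta

def search_and_hit_rate(a, b, t):
    search_rate = 1 - beta.cdf(t, a, b)   # P(p > t), the gray mass
    mean = a / (a + b)
    hit_rate = mean * (1 - beta.cdf(t, a + 1, b)) / search_rate  # E[p | p > t]
    return search_rate, hit_rate

print(search_and_hit_rate(2, 8, 0.3))  # should match the simulation above
```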
For now, though, I'm just going to show you the results: now we can actually take this fast threshold test and apply it to our national dataset. So here what I'm showing you is the output of this model, the average estimated threshold, again averaging across locations. And you can see that the average threshold is lower for Black and Hispanic drivers than it is for white drivers, suggesting that they're being searched on the basis of less evidence. So, to summarize what I've shown you from this search analysis, I've shown you three results: search rates are higher for minorities, hit rates are lower, and thresholds are lower. This is a characteristic pattern for discriminatory searches; you'll see the same pattern, for example, if you look at stop-and-frisk data in New York City, which is a very, very obviously discriminatory policy. All three tests here suggest discrimination against minorities, but the threshold test does so in a way which is robust to the statistical flaws of simpler tests, like infra-marginality. I mentioned that the same methods can be applied in other settings where you have a binary decision and a binary outcome, so I just want to give you a quick example of this. We can apply it in the medical domain, to COVID testing, for example, where the binary decision is: does someone get tested for COVID? And the binary outcome is: do they test positive for COVID? If you see, for example, that minorities who get tested for COVID are much more likely to test positive, then it's a worrisome sign, because it suggests that they're only getting tested at higher thresholds of evidence; they may be being under-tested for COVID. And in fact, we do see some evidence that that is the case. So this is a more broadly applicable methodology. Finally, to close on the public policy impact of this work: I mentioned that one benefit of using this different probability distribution is that your test runs 100 times faster, and this makes it easier for journalists to use. And in fact, that was exactly what we saw. The Los Angeles Times was able to take our faster test, with some assistance from our team, and use it to show that Black and Hispanic drivers in Los Angeles were being searched on the basis of less evidence. And in response to that, within about a week, the LAPD announced that they were going to cut back on police searches, in response to these concerns over racial bias. This is why working with journalists and other real-world actors is nice: they help you translate your research findings into real-world impact. OK. So before I go on to the second story, are there any questions I should answer? Yeah, so we have one question from a student asking: in India, police harass the poor based on how someone is dressed, for two-seater drivers for example. So can the model you've been describing be applied based on economic status instead of race? Now that's... I've given this talk like 50 times and no one has ever asked that question. That's super interesting; I would be curious to hear more. There is nothing in principle which precludes applying it on the basis of economic status. OK. Should I go on? That's the only question for now, yes. OK, cool. All right. So let's move to our second story, which is about using AI to study inequality in pain. This is joint work with David, Jure, Sendhil, and Ziad. Jure is a professor here, and he also prefers black-and-white photos, it would appear. OK. Oh, he's also my academic advisor, which I guess is a relevant point. OK. So a general fact about pain is that disadvantaged groups experience more of it.
You see this for socioeconomic disadvantage across a variety of types of pain, across multiple continents, across multiple samples; it's quite a robust finding. And you see it for racially disadvantaged groups as well. And this is also true in the condition we'll be talking about today, knee osteoarthritis, which is one of the most common causes of disabling pain in older adults. So mechanically, what's happening is that with the wear and tear of time, the padding between your knee bones erodes and the bones grind together. And this causes a lot of pain, and it's very common; odds are good that multiple people listening to this talk will develop it. So in osteoarthritis, as in other conditions, disadvantaged groups experience worse pain. A natural explanation is, oh, maybe they just have worse osteoarthritis. But here's the interesting thing. Here's the fact that we're going to try and explain. It turns out these groups have worse pain even when we control for how severe the doctor thinks their disease is. So I want to explain to you what I mean by that. But in order for that to make sense, I have to explain how we measure severity and pain. So how do we measure severity? Basically, a doctor looks at an x-ray of the knee, grades it on a bunch of factors, and gives a summary score. So specifically, they'll look at an x-ray of the knee and say stuff like, oh, you definitely have an osteophyte, a bone spur, and you have these other features, like the joint space between your knee bones has narrowed. And so I'm going to give it a score called the Kellgren-Lawrence grade, or KLG. That ranges from 0 to 4, and it's a categorical summary measure where higher scores indicate more severe disease. How do we measure pain? Well, you ask the patient a bunch of questions like, how much pain do you feel when you're bending your knee? And then we take the answers to those questions and aggregate them into a single score called the KOOS pain score. So it's the result of a survey. The data we're going to be using comes from the Osteoarthritis Initiative; it's publicly available data. All the results I'm going to be presenting are on about 1,300 people. And we're going to be comparing pain by three binary groupings. We're going to be comparing Black to non-Black patients, and almost all the non-Black patients in the data set are white. And we're going to be comparing lower and higher income patients, and lower and higher education patients. So what do I mean when I say disadvantaged patients have more pain? So here what I'm showing you is a vertical histogram with pain on the y-axis. So lower scores indicate worse pain. And I'm showing you the histograms for Black versus non-Black patients. And you can see that there's a big visual difference in the histogram, where Black patients have worse pain. If you want to summarize it in a single measure, you can just take the difference in means for the two groups, and it's about 10.6 points on the KOOS scale, which is about 2/3 of a standard deviation. So it's a big gap. And the results for income and education are somewhat smaller, but still substantively large and statistically significant. The things I'm showing in parentheses are the confidence intervals. So what happens when we control for severity? Does the pain gap go away? It turns out that it doesn't. So now the graph I'm showing you here at right has severity on the x-axis, that KLG score I was telling you about before. And pain is on the y-axis as before.
And the important point from this graph is that the orange and blue lines are not on top of each other. Even conditional on severity, there's a gap in pain between Black and non-Black patients. And if we want to summarize the size of that gap in a single number, the standard way to do so is with a linear regression. Specifically, we do a regression of pain on race and KLG. And that tells us basically the size of the pain gap when we control for that severity score, KLG. And I've shown those numerical results in the second numerical column. You can see that for race, for example, the pain gap shrinks from 10.6 points when we don't control for anything to 9.7 points when we do control for KLG. The important point being, it really doesn't get all that much smaller, right? 9.7 is nearly as big as 10.6; the gap only gets 9% smaller, and the results for income and education are similar. So the high level takeaway is that controlling for severity doesn't do very much to narrow the pain gap. This isn't our unique finding, by the way; other studies find this as well. The goal of our paper is to explain why. Why is there a pain gap even conditional on severity? Specifically, we're going to try and differentiate between two theories. The first theory we call the outside their knees theory, namely, that there are non-knee-related factors which are causing disadvantaged patients to report higher pain even when their knee disease is no more severe. And this isn't just some crazy theory we plucked out of thin air; a bunch of prior work points to some factors that might cause higher pain in disadvantaged groups. Maybe higher life stress, differences in access to pain medication, differences in how different groups report pain; there are a whole bunch of possibilities. The commonality here, though, is that whatever the factor is, it isn't anything that can be seen in a knee X-ray; it's something outside the knee. But there's a second possibility, right, and we call this the in their knees theory. Namely, that there are pain-related ailments in the knee X-ray which KLG isn't capturing. And if we could capture these physical features, we would be able to explain more of the pain gap. So under the first theory, there's nothing to be seen in the knee X-ray that would explain this gap. And under the second theory, there is something to be seen that KLG isn't picking up on. So why is the second hypothesis plausible? Here are two reasons. The first is that we don't understand pain all that well. This is true generally. It's also true in osteoarthritis specifically: KLG just doesn't explain all that much of the variation in pain. And a possible reason for this is that KLG was developed decades ago in heavily white British populations. And so it's plausible that it's not capturing all the environmental or occupational features that may be relevant to pain in modern and more diverse populations that may live and work very differently. So we're going to try and test whether there are overlooked physical features in the knee which would explain the higher pain levels in disadvantaged groups. This isn't just an academically interesting question; it's also a question with concrete clinical implications. And the reason is that whether you get knee surgery depends on whether the source of your pain is in your knee. If you go to the doctor in a lot of pain and she looks at your knee and she says, I'm sorry, I can't see what's wrong with it, she's unlikely to give you knee surgery for an apparently healthy knee.
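As a quick aside, here is a sketch of the pain-gap regression just described, on a synthetic stand-in for the data, with columns koos_pain (0 to 100, lower is worse), black (0/1), and klg (0 to 4); the real analysis uses the Osteoarthritis Initiative data, and the exact specification may differ.

```python
# Sketch of the raw and severity-adjusted pain gaps on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data; the coefficients here are invented.
rng = np.random.default_rng(0)
n = 1300
df = pd.DataFrame({"black": rng.integers(0, 2, n), "klg": rng.integers(0, 5, n)})
df["koos_pain"] = 80 - 10 * df["black"] - 4 * df["klg"] + rng.normal(0, 15, n)

# Raw gap: difference in mean KOOS pain between groups (~10.6 in the paper).
raw_gap = (df.loc[df["black"] == 0, "koos_pain"].mean()
           - df.loc[df["black"] == 1, "koos_pain"].mean())

# Adjusted gap: the race coefficient after controlling for the clinical
# severity grade KLG (entered as categorical, since it's a 0-4 grade).
fit = smf.ols("koos_pain ~ black + C(klg)", data=df).fit()
adjusted_gap = -fit.params["black"]   # negative coefficient = worse pain

print(raw_gap, adjusted_gap)   # the finding: these two numbers stay close
```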
In that case, she's more likely to prescribe non-specific therapies like opioids or other painkillers. In contrast, if you go to the doctor in a lot of pain and she says, aha, I know exactly what's wrong with you, you have very severe radiographic arthritis, you're a 4 on the Kellgren-Lawrence scale, then it's much more likely under clinical guidelines that you'll get some surgical intervention. Consequently, if KLG is missing true sources of pain within the knee in disadvantaged groups, these groups may be under-referred for surgery. OK. So we're going to try and test this. And methodologically, what we're going to do is train a convolutional neural network-- this is how you know this is sophisticated, because we're using deep learning-- to search for additional signal in the knee x-ray which would explain the higher pain levels in disadvantaged groups. So what does that actually mean? How are you going to search for additional signal in the knee x-ray? Well, the standard approach to searching for signal in a medical image is to train a model to replicate the doctor's clinical judgment, to train it to predict the KLG. The problem, though, is that if KLG doesn't capture all the pain-relevant features, we don't want to just replicate it. We don't want to set a ceiling of clinical knowledge when, by hypothesis, that clinical knowledge might be biased or incomplete. So instead, what we're going to do is train the model to learn from the patient by predicting the KOOS pain score. So to be very clear, the input to the model is an x-ray of the knee, and the output is a knee-specific pain prediction called ALG-P, our algorithmic severity measure. And if controlling for this algorithmic severity measure ALG-P narrows the pain gap more than controlling for the clinical severity measure KLG does, it implies that the clinical severity score is overlooking knee features which might explain disadvantaged patients' higher pain levels. Before I go to the results, any questions about the setup? No. Nice. I've got one. OK. In terms of comparing the pain gaps between different factors like income and race, do we have to consider overlap between the groups? Yeah. That's a great question. There is overlap between the groups; there's correlation between all three of these binary variables. Each of the individual pain gaps remains statistically significant even when you control for all three at once. You could probably do an analysis where you controlled for all three at once, and that might be an interesting thing to do. Here, to keep the exposition as clear as possible, we looked at each group separately. But yeah, it's a good point, they're definitely correlated. Great. I think that's it for now. OK. So our first result is that the algorithm does in fact find additional signal for pain in the knee x-ray. The algorithmic severity score ALG-P predicts pain better than the clinician severity score KLG: the R squared is higher, and the difference is statistically significant. And you see similar results for other predictive measures. But those R squareds are really not that high, right? R squared ranges from 0 to 1; if we're at 0.16, that's not all that high. And it's not the central question of our analysis anyway, which is: does controlling for the algorithmic severity score reduce the pain gap? And it turns out that the answer to that second and more important question is also yes. So here, the first column is just what I showed you before.
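For concreteness, here is a minimal sketch of that training setup, with a generic torchvision backbone standing in for whatever architecture the paper actually uses; the data loader, preprocessing, and hyperparameters below are all assumptions (newer torchvision versions use the weights= argument shown here).

```python
# Sketch: train a CNN to predict the patient's KOOS pain score (not KLG)
# from a knee x-ray. Synthetic stand-in batches replace the real data.
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical loader: (x-ray batch, KOOS pain labels) pairs.
train_loader = [(torch.randn(8, 3, 224, 224), torch.rand(8) * 100) for _ in range(2)]

model = models.resnet18(weights=None)            # placeholder backbone
model.fc = nn.Linear(model.fc.in_features, 1)    # single regression output

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                           # learn from the patient

for xray, koos_pain in train_loader:
    opt.zero_grad()
    alg_p = model(xray).squeeze(1)               # knee-specific pain prediction
    loss = loss_fn(alg_p, koos_pain)
    loss.backward()
    opt.step()
```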
The first column says: when you control for KLG, the pain gap doesn't get that much smaller. But the second column is new. It says: when you control for the algorithmic severity score ALG-P, how much smaller does the pain gap get? The final column gives the ratio of the two columns. So for race, for example, you can see that the algorithm explains 43% of the pain gap while KLG explains only 9%; the ratio of those two numbers is 4.7. The overall implication is that yes, there is overlooked signal in the knee X-ray which helps explain disadvantaged patients' higher pain. So this supports the in their knees hypothesis. Now, you should never fit a neural net without doing a lot of robustness checks, whatever current computer science practice may be. And so we do a lot of them. I'm not going to talk about them now, but I'm happy to talk about them more later if people have specific questions. I do, though, just want to talk about two accessory results. The first is that a diverse data set improves performance. Specifically, we compare training the model on a non-diverse train set, from which we've removed all Black patients, to a diverse train set, from which we've removed the same number of non-Black patients. And so the size of the train set remains the same; we've just altered the racial diversity of it. And what we find is that while both models beat KLG, using a diverse train set further boosts performance. You get a better R squared, you get a bigger reduction in the pain gap. You see similar results for income and education as well. So to put this within the broader context of AI in medicine, there's been a lot of concern that training data sets may not be sufficiently diverse, and this is actually true more broadly than AI in medicine; it's true in medicine full stop. And this testifies to the importance of collecting diverse data. And then finally, to speak about the clinical implications. As I said, one of the clinical implications of having good severity scores is that they influence the way surgery is allocated. So we decided to test: how would using algorithmic pain scores affect the way surgery is allocated? Now, to test the way surgery is allocated, we replicate a previous study, and we assume knee surgery is given to patients with high pain and severe disease. So you have to satisfy two criteria. And we try measuring severity in two different ways: using KLG, the clinician severity score, and using ALG-P, the algorithmic severity score. And we find that because ALG-P gives disadvantaged patients higher severity scores, it's in turn more likely to recommend them for surgery. For example, among Black patients, roughly twice as many knees were eligible for surgery when using the algorithmic severity measure as opposed to KLG. So to summarize, we trained a deep learning algorithm to predict pain from knee x-rays. Our algorithm finds overlooked signal in the knee x-ray which helps explain disadvantaged patients' higher pain. And a clinical implication is that these disadvantaged groups may be under-referred for surgery. To put this within the broader context of AI in medicine and AI fairness, there's been a lot of previous and very important work on how machine learning methods can potentially increase disparities in medicine and in other high stakes domains, and that's super important.
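Here is a toy version of that surgery-eligibility comparison, assuming eligibility requires both high pain and severe radiographic disease; the cutoffs and arrays below are placeholders, not the clinical guideline values or the paper's exact replication.

```python
# Toy eligibility rule: high pain AND severe disease. All data is invented.
import numpy as np

rng = np.random.default_rng(0)
koos_pain = rng.uniform(30, 100, 200)   # hypothetical pain scores (lower = worse)
klg = rng.integers(0, 5, 200)           # hypothetical clinician grades
alg_p = np.minimum(klg + rng.integers(0, 2, 200), 4)   # toy "algorithmic" grade

def eligible(pain, severity, pain_cutoff=65.0, severity_cutoff=3):
    # Surgery requires both criteria: bad pain and a severe-enough grade.
    return (pain <= pain_cutoff) & (severity >= severity_cutoff)

# >1 means the algorithmic score recommends more knees for surgery.
print(eligible(koos_pain, alg_p).mean() / eligible(koos_pain, klg).mean())
```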
But coming back to that point: we should also keep the more optimistic flip side in mind, that machine learning and AI give us predictive superpowers, and they shouldn't inherently be a bad thing if we're wise enough to apply them properly. Specifically, here we show how machine learning methods can also reduce disparities by detecting signals that humans miss. Key to our results here, key to reducing rather than increasing disparities, is, first, the choice of the prediction task-- we didn't just try and replicate clinical knowledge-- and second, that we trained the model on a diverse data set, and we show that that contributes to our results. Any questions about this before I go to the third and final story? Yeah. So we have a question from the first section of your slides. Please. So can the Bayesian threshold test be applied with the observed data as the output of an algorithm? The observed? I mean, you would have to give me more details, but I'm intrigued. I mean, there's nothing-- it's designed to assess bias in decision making. So whether the decision maker is human or algorithmic, you could apply it to both. I would say, in the case of an algorithm, it's likely that you know-- at least in principle, someone knows the threshold, right? So it might be easier to just figure out the actual source code or procedure behind the algorithm rather than attempting to infer it. But there still might be some algorithmic settings where you don't know that threshold, for example, at some third party company, and they won't tell you what they're doing, and then in principle, you might want to apply it there. Yeah. And then on the line of determining whether or not something is discriminatory or biased, what metric would you suggest for testing if something like COMPAS is discriminatory? So how do you know if an algorithm is-- That's a big question. I would say it is highly context dependent. If you observe large disparities in things like-- in the case of COMPAS, you see these big disparities in FPR and TPR, false positive rate, true positive rate-- that should certainly be a red flag that you want to dig deeper on, but then you want to try and understand why these things are arising and how you can ameliorate the situation. I would not say, in all cases use AUC and that is your golden answer. No, I don't think so. Should I go? Yeah. Thank you. Good to go. OK. Cool. So now I'm going to move to our final story on inequality. This is joint work with Serina and with Pang Wei. So I'm a little nervous, because Pang Wei will actually know if the details are wrong here. Serina is a computer science PhD student in Jure's lab. And then we also worked with Jaline, who's an epidemiologist at Northwestern, and with Beth and David, who are sociologists, and with Jure, who is a computer scientist. So it's very interdisciplinary work, because we're studying inequality and COVID-19, which intuitively draws on people in a bunch of different domains. OK. So as you know, viruses like COVID-19 spread through human contact. That's why I'm giving this talk remotely rather than in person. Which is to say, there is an underlying contact network which modulates the spread of the virus. So under a simple epidemiological model, an infected person can infect anyone she comes into contact with, with some probability. Those people then infect their contacts, and then you get this incredible spread of the disease across the network.
So because this network is so important to the spread of the disease, current models often attempt to estimate it in some way so they can simulate the spread of the virus. But they often have to use simplistic estimates of the underlying contact networks, because intuitively, it's very hard to know who everyone comes into contact with unless you're living in some kind of surveillance state. So people do this in various ways. They might assume, for example, that anyone can infect anyone, so the network is fully connected. Or you might use some kind of network which captures trends at a very macro level, for example, an airline network which connects city to city but doesn't tell you anything about the network within a city. Or you might use historical data and say, I'm just going to assume that what mobility patterns looked like in 2016 is what they look like now. Intuitively, though, having really crude estimates of the contact network is not enough, for a couple of reasons. The first is that we're undergoing probably the most dramatic change in human mobility in any of our lifetimes, and hopefully in any of our future lifetimes as well. We have these stay-at-home orders and reopening policies; everything is crazy. And the second is that we often want to ask very fine-grained questions that depend on mobility in a very fine-grained way. For example, we might want to know the impact of fine-grained reopening policies, like what happens if I open restaurants from 3:00 to 4:00 PM on Saturdays but not on Wednesdays, or something like this. We also might want to understand inequality in infections by race or by socioeconomic status due to mobility patterns. And intuitively, if we want to do that, we need to understand mobility at a fine-grained level. Simply understanding how New York is connected to LA won't be very useful to helping me understand disparities in infection rates within New York, for example, between rich and poor New York neighborhoods. So because we have to understand this mobility network in a fine-grained way, our approach is a two-step approach. In the first step, we're going to estimate the human contact mobility network. And then we're going to build a model to capture transmission on this network. So let's talk about each of these steps in turn. So how do we estimate this network? Well, we're going to use cell phone mobility data from a company called SafeGraph. Specifically, that data is going to tell us how many hourly visits there are from a neighborhood to a place. What do I mean by neighborhood? This is a census block group, which you can think of as a fairly fine-grained census area with a couple of hundred to a couple of thousand people. A place, which I'll refer to as a POI throughout the talk, is a point of interest, like a restaurant or a cafe or a religious establishment. You can think of them broadly as places people go when they're not at home. So our cell phone mobility data set basically gives us some sense of the number of hourly visits from a neighborhood to a place. So mathematically, what we're going to try and estimate is a network that links CBGs, neighborhoods, to POIs, places. So you can think of this in various ways. You could think of it as a list of matrices, or a list of networks, where each network represents traffic at one hour, or you could think of it as a three dimensional cube where the dimensions are neighborhoods, places, and time slices. But that's the object we're going to try to estimate.
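To make that object concrete, here is a toy sketch of the estimation target with invented dimensions (the real networks have tens of thousands of CBGs and POIs); it also previews the observation structure described next.

```python
# Toy sketch: the hourly CBG-to-POI visit network we want, versus what we see.
import numpy as np

rng = np.random.default_rng(0)
n_hours, n_cbgs, n_pois = 24, 5, 8

# Unobserved target: W[t, i, j] = visits from CBG i to POI j in hour t.
W_true = rng.poisson(3.0, size=(n_hours, n_cbgs, n_pois)).astype(float)

# Observed per hour: how many people leave each CBG and arrive at each POI...
cbg_out = W_true.sum(axis=2)    # shape (n_hours, n_cbgs)
poi_in = W_true.sum(axis=1)     # shape (n_hours, n_pois)
# ...plus only a coarse, noisy aggregate of the CBG-to-POI matrix itself.
noisy_agg = W_true.sum(axis=0) * rng.uniform(0.5, 1.5, (n_cbgs, n_pois))
```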
The problem we run into, though, is that the cell phone data that SafeGraph provides doesn't actually give us an exact estimate of that hourly network. The data they give us for the number of visits from CBGs to POIs is only at a weekly or monthly level, because of the way they aggregate their data. And they also censor it for privacy reasons. So in terms of the actual data that we have: we have the number of people going to each POI each hour, the number of people leaving each CBG each hour, and then we have a noisy estimate of the networks connecting POIs to CBGs. So you can think of it as the number of people going out, the number of people coming in, and then a noisy estimate of the matrix linking going out to coming in. Now, it turns out, luckily, that there's a machine learning algorithm which is designed exactly for this scenario, and which you will learn about if you're lucky enough to work with Pang Wei and the other people in Percy's lab. This is very much Pang Wei's work, and it's very fundamental to this project. It's called Iterative Proportional Fitting, and basically, it's designed for exactly this setting. It says, let's imagine that you're trying to estimate some matrix, and you know the row sums of that matrix, and you know the column sums of that matrix, and then you have a noisy estimate of the matrix itself. IPF is an algorithm that will give you back a matrix which is consistent with those row sums and column sums, and, subject to that constraint, is as similar as possible, in terms of KL divergence, to the initial noisy matrix. And that's exactly the setting that we're operating in here. So we use IPF to estimate the true mobility networks from the noisy SafeGraph data. So that's a little mathy, a little abstract; let me give you a picture. So here what we're showing you is an example from the Chicago MSA, and we're showing you two time slices. The first time slice, on the left, comes from early March, and the second comes from early April, after social distancing measures have started to take effect. And the gray lines here represent the number of hourly visits from a CBG to a POI. So you can see two things from this visualization. First is that the density of the gray lines decreases, indicating that total mobility has decreased from March to April. And the second is that most of the lines are vertical, indicating that people mostly hang around their own homes, and that makes sense. OK. So now we've got our network. Honestly, if you didn't understand any of the math, that's fine; the main point is we have a network linking POIs to CBGs at an hourly level. Now we have to put a disease transmission model on top of this network, and this relies on a pretty simple epidemiological model. So I'm going to give you a 30 second crash course in epidemiology, and then you'll know about as much as I do about epidemiology. So let's describe the model now. A very standard model in epidemiology is called an SEIR model, and probably some of you have heard of this if you've been reading the news. And the basic idea is that people move through four states, in that order, S, E, I, R; you can't go in any other order, and you can't go back in loops. So how does this work? Before the disease enters the population, you start in the susceptible state. Which is to say, you don't have the disease, you've never had the disease, but you're susceptible to it.
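Before continuing through the model's states, here is a minimal sketch of IPF for a single hour's matrix, with toy numbers; real runs operate on much larger matrices, and the exact initialization and stopping rule used in the paper are not shown here.

```python
# Minimal IPF sketch: rescale a noisy CBG-to-POI matrix so its row sums
# (people leaving each CBG) and column sums (people arriving at each POI)
# match the observed hourly marginals.
import numpy as np

def ipf(noisy, row_sums, col_sums, n_iters=100):
    W = noisy.astype(float).copy()
    for _ in range(n_iters):
        W *= (row_sums / W.sum(axis=1))[:, None]   # match CBG totals
        W *= (col_sums / W.sum(axis=0))[None, :]   # match POI totals
    return W

rng = np.random.default_rng(0)
true_W = rng.poisson(5.0, size=(4, 6)) + 1.0            # unobserved hourly visits
noisy = true_W * rng.uniform(0.5, 1.5, true_W.shape)    # coarse, noisy estimate
W_hat = ipf(noisy, true_W.sum(axis=1), true_W.sum(axis=0))
print(np.allclose(W_hat.sum(axis=1), true_W.sum(axis=1)))   # True after fitting
```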
Continuing through the states: if you come into contact with someone who's infectious, you can move to the exposed state, which is to say, you now have the virus, but you're not infectious yourself yet. It's in your body, but at low levels. Now, after some period of time, you move from exposed to infectious, meaning you have it and you can infect other people. And then after some further period of time, you move to the removed state, which is to say, you no longer have the disease and you can't catch the disease; maybe you've recovered, maybe you've died, but in any case, you can't catch it again. So what we're going to do is say, at each hour of our simulation, for each neighborhood, each CBG in our simulation, we're going to model the fraction of people in each of these four states. So we might say, in neighborhood five at hour 4, 90% of people are in the susceptible state, 7% are in the exposed state, 1% are in the infectious state, and 2% are in the removed state. And then we're going to update that hour by hour. So we have to model transitions between these four states. The last two transitions are pretty straightforward and boring and don't depend on mobility. We just say, at each time step, you have some constant chance of transitioning to the next state. But intuitively, the first transition, that S to E transition, is going to depend a lot on mobility, because whether or not you get sick depends on whom you come into contact with. So how do we model this critical S to E transition? We assume that infections can occur in two ways, at CBGs and at POIs. You can think of CBG infections as: you're just hanging around your house, but unfortunately, someone in your house is sick, and so now you're sick. You can think of POI infections as: you went out to a bar, there was someone in the bar who was sick, and now you yourself are sick. So we assume that the CBG infection rate is just proportional to the fraction of the CBG which is infected. Intuitively, if more people in your neighborhood are sick, it's more likely that you yourself will get sick. The POI infection rate is a little bit more complicated. We assume that the probability of getting infected at a POI is proportional to the fraction of the POI which is infected, times a POI-specific factor which captures specific features of the POI, like how big it is and how long people stay there. And so intuitively, places that are smaller and more crowded are more dangerous, and that's what this part of the simulation is capturing. A nice thing about this model is that it's relatively simple. For each city, we're only going to have three free parameters, which remain fixed over time in spite of the dramatic changes in human mobility. Two of those free parameters are going to scale those two types of infections, infections at CBGs and infections at POIs. And then the third is a parameter which scales the initial conditions of the model, what fraction of people started out infected. The rest of the parameters we're just going to take from the prior literature. We're not going to estimate them at all, and this is important because it minimizes concerns about overfitting. OK. How are we actually going to choose those three free parameters? We're going to do what's called grid search. We're going to look over all possible parameter combinations of those three free parameters for each city. How are we going to choose which one is best?
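Here is a toy one-hour update of that metapopulation SEIR model, with made-up sizes and rates; the paper's exact functional forms, and in particular the POI-specific transmission factor, are simplified here, so treat this as a sketch of the structure rather than the actual model.

```python
# Toy hourly SEIR update on a CBG-to-POI visit network; all numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n_cbgs, n_pois = 5, 8
S = np.full(n_cbgs, 0.95); E = np.full(n_cbgs, 0.03)
I = np.full(n_cbgs, 0.02); R = np.full(n_cbgs, 0.00)

W = rng.uniform(0, 20, (n_cbgs, n_pois))   # this hour's visits, CBG -> POI
pop = rng.integers(500, 2000, n_cbgs)      # CBG populations
beta_home, beta_poi = 0.02, 0.05           # the two free transmission scales
sigma, gamma = 1 / 96, 1 / 84              # assumed hourly E->I and I->R rates

# Fraction infectious at each POI, weighted by who visits it this hour.
poi_inf = (W * I[:, None]).sum(axis=0) / W.sum(axis=0)
danger = rng.uniform(0.5, 2.0, n_pois)     # stand-in for area/dwell-time factor

# S -> E rate: home/neighborhood mixing plus per-capita visits to risky POIs.
rate = beta_home * I + beta_poi * ((W / pop[:, None]) * danger * poi_inf).sum(axis=1)
new_E, new_I, new_R = S * rate, sigma * E, gamma * I
S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
```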
To pick the best one, we're going to take real COVID case data, the number of COVID cases every day, from the New York Times. And we're going to keep the parameter combination which gives us the best fit to real cases in terms of RMSE. Now, in order to capture uncertainty in the parameters, we're actually not just going to show results from that best-fit set of parameters. We're also going to use all parameter settings which yield an RMSE within 20% of that best-fit RMSE. And that captures the idea that, look, our parameters are somewhat uncertain here, and we want to capture that uncertainty. Some of you might be thinking, oh, I think Bayesian inference or something might be a more principled way to do this. Totally agree. Please figure it out and write to us; I think that would be awesome. And in terms of the time period we're going to model-- Oh, but why didn't you do that? Because it's computationally difficult as it is to fit this model, so that would have been a further computational difficulty, but I think it's an interesting direction for future work. Bayesian inference is actually taught next week, so. Oh, nice. Awesome. Hopefully they understand the police stuff and whatever. OK, well, you'll understand it even better next week, that'll be great. Anyway, yeah, next week you can figure this out for us. That sounds good. OK. Cool. Anyway, we modeled early March to early May, and the reason we chose that time period is that's what was available when we were doing this analysis. Cool. OK. So to make things a little more concrete, I want to show you a video of how this model looks over time. Is this actually going to work? Praying. OK. So what's going on here? I'll just talk you through the three graphs in turn. The graph at right is showing you mobility over time. This is not from our model; it's from the raw data. The y-axis is the number of visits to POIs, so you can think of it as a measure of overall mobility. And you can see that it drops pretty dramatically about three weeks into the simulation, so three weeks into March, as social distancing takes effect. The middle graph is model output. It's showing you the fraction of people the model thinks are in each of the four states, and it's a logarithmic graph. And you can see that the fraction of people in the E, I, and R states, i.e., those who have had the disease, rises over time. And you can also see how mobility is feeding into the model. For example, if you look at that E state, you can see a very high frequency wiggle, just like there is a very high frequency wiggle in the mobility patterns. Those are daily changes in mobility, over the course of the day. And that's basically telling you, look, people are more likely to get sick when they're going out in the middle of the day than in the middle of the night. And finally, the graph at the left is showing you, spatially, geographically, where the model thinks people are most likely to get sick. Redder indicates that a larger fraction of the population is in one of the infected states. And you can see that especially red segment in the middle of the city; I'll return to that point in a bit. So OK, anyone can make pretty graphs. Does this actually fit the data? Yes, it turns out it does fit observed case count data reasonably well. Here the orange X's are reported cases and the blue is the model prediction. And you can see that it fits the observed data reasonably well.
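As a sketch, the calibration loop might look like the following; the simulator and case counts here are stand-ins, and the parameter grids are hypothetical ranges, not the ones used in the paper.

```python
# Grid search over the three free parameters; keep the best-RMSE setting
# plus every setting within 20% of it, to reflect parameter uncertainty.
import itertools
import numpy as np

rng = np.random.default_rng(0)
daily_cases = rng.poisson(100, size=60).astype(float)   # stand-in for NYT counts

def simulate(beta_home, beta_poi, p0):
    # Stand-in for the real simulator: returns predicted daily case counts.
    return p0 * 1e7 * np.exp(5 * (beta_home + beta_poi) * np.arange(60) / 60)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

grid = itertools.product(
    np.linspace(0.005, 0.05, 5),   # beta_home (hypothetical range)
    np.linspace(0.01, 0.10, 5),    # beta_poi (hypothetical range)
    np.linspace(1e-6, 1e-5, 5),    # initial infected fraction (hypothetical)
)
scores = {params: rmse(simulate(*params), daily_cases) for params in grid}
best = min(scores.values())
kept = [p for p, s in scores.items() if s <= 1.2 * best]   # the 20% band
print(len(kept), "parameter settings retained")
```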
Even if, as in the left plot, you fit the model only on data prior to April and then see how well it performs on data from April to May, it continues to fit the data reasonably well. This isn't just true in Chicago; that's not a cherry-picked example. It fits the data pretty well across cities. And it turns out that it also fits the data better than two baselines that we tried comparing to. But I think the high level point here is not that we have some super duper predictive model. The high level point is: look, we have this model that fits the data reasonably well but also enables you to ask very fine-grained questions. So let's talk about some of those fine-grained questions. What are some of the questions you can ask with this model? So this model can ask what would have happened if we had done something differently. What if we had started distancing a week later? What if we had distanced only 50% as much as we actually did? It can help you ask stuff like, what are the riskiest locations, the riskiest POIs? Are there POIs which are likely to be super-spreader locations because they have a ton of people? It can help you answer questions like, what's the impact of different reopening strategies? What happens if you reopen POIs only halfway, for example, only to half of their maximum capacity? What do infection rates look like under that scenario? And finally, it can help you understand why socioeconomic and racial disparities arise. And today, I'm actually only going to talk about that fourth question. The answers to the other questions you can find in our paper, and I think there are probably other interesting questions you can ask as well. The basic point, though, is that because the model captures mobility from neighborhoods to individual places in such a fine-grained way, you can ask a lot of questions that naturally flow from that fine-grained mobility network. OK. So let's talk briefly about disparities. So we know that socially disadvantaged racial and socioeconomic groups were hit harder by COVID-19. Higher case rates, higher death rates; the disparities are very dramatic. That's not our work, that's prior work; it's very, very clear, very striking. So there are a bunch of reasons for this, right. It's not all mobility. It's stuff like pre-existing conditions, differences in access to care, or worse care when they do get into the hospital, et cetera. But mobility is probably part of it too. We know, for example, that if you are of lower socioeconomic status, it's harder for you to work from home, more likely that you're an essential worker, more likely that you have to go out and do these dangerous jobs and expose yourself to risk of infection. So it's interesting to ask, first, does our model learn that disparities flow in part from mobility? Can the model naturally predict the emergence of these disparities? And second, if it does, can it expose the mechanisms via which these disparities arise? In order to study this, we don't actually have data on individual people. So what we do is compare neighborhoods. We compare higher and lower income neighborhoods, for example, and we look at how infection rates vary. So the first result is yes, the model does predict the emergence of these disparities based on mobility patterns alone. Here, the left graph is showing you disparities by income and the right graph is showing you disparities by race. On the x-axis, what we're plotting is how much likelier people are to get infected.
So for the left graph, that's if you're in a lower income CBG, and for the right graph, if you're from a less white CBG. And you can see basically that all the blue boxes are to the right of 1, indicating that people are likelier to get infected under the simulation if they're from a lower income or a less white CBG. So the model is predicting these socioeconomic and racial disparities on the basis of mobility patterns alone. And because the disparities by socioeconomic status are particularly dramatic, I'll focus on those for the rest of the talk, but you can see all the results for both in the paper. So why is this happening? Well, we showed two mechanisms via which it arises. The first, you probably already guessed: people from lower income and less white CBGs weren't able to reduce their mobility as much; they had to go out more, and this is probably in part because of stuff like differences in occupation-- they're more likely to be essential workers. But the second mechanism is a little subtler, and it's this. When they do go out, they go to places which are smaller and more crowded, and therefore more dangerous. And this is true even within the same type of POI. So even conditional on, I went out, I went to a restaurant, the people coming from lower income CBGs tend to go to restaurants that are smaller and more crowded and more dangerous. And that's the second thing contributing to these infection rate disparities. So I want to show an example of this for Philadelphia, which is the place where we see the most striking disparities. Let's see if I can get this to play. Yeah, OK. So this graph on the left is showing you Philadelphia, and it's showing you results over time. And what you can see over time is that this big red spot emerges in the middle of Philadelphia. And where is that? Well, it turns out to be the place with the highest population density; that's the top right plot. And it's also the place with the lowest income. So this very high density, low income area has higher predicted infection rates under our model, and that's happening because the POIs that people there are going out to are smaller and more crowded and more dangerous. A final implication is that the model can look at the predicted impact of reopening plans for people in lower income deciles, as opposed to the population as a whole. And basically, what we show is that reopening plans often have larger predicted impacts for people in lower income deciles, so for people in poorer neighborhoods, than for the population as a whole. So when you do consider a reopening plan, it's important not just to consider the overall impact, but also the impact on poorer neighborhoods. And in fact, California is starting to consider doing stuff like this: you have to look at racial disparities in reopening and racial disparities in impact; you can't just look at the impact on the population as a whole. This is also a good practice, by the way, when you're evaluating the impact of an algorithm. You shouldn't just look at how it performs on the population as a whole; you need to also look at how it performs on different subgroups. So, takeaways. This approach showcases the power of fine-grained mobility networks. We showed that even a simple model leads to accurate fits in 10 different American cities, metropolitan statistical areas. We showed that it can scale even to large networks with lots of places and lots of people.
We showed that because you can capture these very micro trends, down to neighborhoods and locations by the hour, you can perform detailed analyses that can potentially inform more equitable responses to COVID-19. I think a general question that I would have for people, and I don't know if we want to talk about this now or at the end or not at all, is: what are other questions you might want to answer with this model? Because I think there are a lot of other things you can potentially ask beyond what we have asked, and I'm curious as to your thoughts. Should we return to that point at the end, though? I don't know how we're doing on time. Yeah. We have 20 minutes left. So maybe we can take some questions now and then we'll move on. Sounds great. Cool. So actually, if you guys have working mics, do you want to read out your own questions? Great, thank you. So my question was whether you were able to take into account the percentage of people who wear masks? We are not. That's a great question. You're reviewer 2, 3, 1, I don't know; many reviewers had that question. We do not attempt to take into account the fraction of people wearing masks, and I think that's an interesting direction for future work. Thank you. Yeah. Hi. I suppose this sort of goes along with the question that you put at the end of the slide, which is what other questions you might answer with this model. But I was wondering, could this model of mobility be used to analyze other mobility issues that don't revolve around health or epidemiology, such as how different types of zoning codes, or access to or use of public transportation in different CBGs, affect mobility in those neighborhoods? Yeah, absolutely. I mean, SafeGraph data is very broadly relevant to social science and other questions of mobility, and we're using it for other projects as well. It's definitely a gold mine for other questions, yes. I was wondering, do you think it's possible that we can make connections between physical mobility between CBGs and POIs and whether that somehow correlates to the degree of socioeconomic mobility within the CBGs? Exactly. Yeah. And you might look at-- yeah, you might look at Susan Athey and Gentzkow; Athey and Gentzkow would be the names to look up. They look at racial segregation using SafeGraph data, but then they correlate it with other measures of economic opportunity, and work from Raj Chetty, I think, in their paper. So that's absolutely something you can do. I mean, causal claims are hard, but it's still interesting. And I think that's it. Cool. All right. Let's go on then, great. OK. Cool. So I was asked to speak briefly about how I ended up on this path and doing this work, just in case it was helpful to people, so I attempted to write this down. So I liked math and physics and other similarly nerdy stuff ever since I was a little kid. This is a picture of me dressing up as a chessboard for Halloween, so you can tell I was super cool and definitely had a ton of friends. And I took my first AI class in high school, but I was the only girl in the class and I had a lot less experience than the boys. And some of them made fun of my lack of experience and told me I was the worst in the class. So by the time I got to Stanford, I had actually decided I was not particularly good at computer science. Then I came to Stanford as a physics major, and I did not even take any CS classes my first year at Stanford.
But in my second year at Stanford, I decided I should give computer science another try. And so I actually enrolled in this class, which at that point was not taught by Percy. I don't know who the teachers were-- Andrew? No. Subhashish, I think, and some others; I don't even remember who taught the class. I do remember the class was awesome. And honestly, this is not propaganda; it was actually true. I thought that computer science was super cool. And that summer, I started doing computer science research in a physics lab. I was developing algorithms to identify certain types of galaxies, but I realized something was missing: I thought AI was amazing, but I didn't want to use it to study galaxies that were millions of light-years away. There were too many problems that were closer to home. And this was really driven home for me a few months later, when I got a genetic test which told me I carried a mutation that gave me a very high risk of getting cancer. It's called a BRCA mutation; some of you might have heard of it. And as you can imagine, this was pretty difficult news to receive as a 20-year-old. And I spent the next few months being pretty upset about it. And during this period, when I was trying to come to terms with the news, I came across this paper out of Daphne Koller's lab. She was an AI professor at Stanford at the time. I'm sure many of you have heard of her; these days she's in industry. And in her paper, they take images of cells from cancer patients and they apply a computer vision model to try to predict whether the patients will survive. And these days you would probably do this using deep learning; back then they were using old school computer vision. I thought it was the most amazing thing. And more importantly, it gave me hope. I thought, fine, I have this cancer problem, I'll work on this problem with AI. I knew that I wasn't going to cure cancer, but I thought that working on it and learning about it would make me feel better. Understanding the things that frighten you often does. And so I wrote to Daphne, and she wrote back on New Year's Day, and she offered me a spot doing research in her lab, and I took it. There's a lesson I take from this kind of tough period in my life, which is that crying is underrated, or, as Gandalf puts it in Lord of the Rings, not all tears are an evil. I found tough times like this one to be very useful in crystallizing what matters to me. From that point on, I became much more intensely focused on what I wanted to do in life. So then I graduated Stanford with a bachelor's in physics and a master's in computer science. And I decided to take a job at 23andMe, which was a genetics company that offered very cheap testing for the BRCA mutation which I carried. Their test was so cheap that my little sister, who was then only a teenager, could afford to get tested and learn that she did not carry the same mutation that I did. And I thought that was amazing, expanding access to genetic testing in that way. So I accepted a job at 23andMe. But about a month after I arrived at 23andMe, the US government sent them a letter ordering them to stop selling their health-related tests, because they hadn't gotten the proper regulatory approval. So they could no longer sell their BRCA test, which was the whole reason I had gone to the company in the first place. And for the entire year I was there, I basically didn't do any BRCA research at all. Which I think is another important lesson.
Even if you start out on a path with the best of intentions, it's very easy to get derailed, at least for me, and it's very hard to predict what projects will pan out. Then I went back to school to start my graduate research. I was still very motivated to do cancer research. And over the next couple of years, I wrote a half dozen papers developing AI methods for computational biology. But I began to feel my work was unsatisfying, because I was still too far away from real people's lives. Early in my graduate research, my grandpa, who carried the same genetic mutation that I do, died of brain cancer. We were very close; that's us playing chess up there when I was little. And I wrote a fair bit of my master's thesis next to his hospital bed. The thesis develops new dimensionality reduction methods, like a fancy PCA or factor analysis, basically, if you've heard of those things, for a certain type of biological data which is important in cancer and many other settings. And work like that, work like what I was doing, just felt very far away, decades away, from helping people like my grandpa. Which is not to say that no one should do it. I think it's super important that you have people doing that fundamental research, even if it doesn't touch real people's lives for a long time. But I began to feel that it wasn't for me, that I wanted something that was going to help people in the short term if I was going to be happy with the sort of research that I was doing. So based on that understanding, I started working on data sets where each row was not a cell or a gene or something very abstract, but a person. I kept working on health care problems, and I also started working more on inequality. That took me forward to some of the problems that I've told you about today, studying things like inequality in pain and policing and COVID, which feel very concrete to me. Looking back at my research, I see a lot of failures and wrong turns. I went to 23andMe to research BRCA, and I failed to do that. I went to grad school to study cancer, and I mostly failed to do that. I've spent more than 10,000 hours of my life getting a PhD, and I think it's fair to say that many of those hours have not made anyone's lives better. There's a lot of time running down blind alleys. And even when you do have a good idea, there's a lot of time polishing it and polishing it to get it published. And even when you do publish it, often very few people read it. And even when people do read it, does it actually change their minds? A few months ago, a man contacted me because he was writing a New York Times piece about our work on policing that I was just telling you about. And I went back and forth with him, meticulously trying to make sure he stated our conclusions accurately. I'm sure he was very sick of me. And when the piece finally came out, I read the New York Times comment section, and it was obvious that none of the commenters had actually read our research. They were just spouting what they already believed. And that is probably the project I've gotten to work on which has been most impactful. But even though I've spent so much of my life failing to do good, I still think it's important to try, and that's the final topic I want to discuss. I outlined this part of the talk the night Ruth Bader Ginsburg died. I heard the news and I knew I wasn't going to be able to write any more code that day. So I decided I'd just walk until I felt better. But unfortunately, it got very dark and cold before that happened.
So you'll forgive me if this comes across as a little moralistic or maudlin, but it wasn't the best evening. Before I tell you why I think you should try to do good rather than just making a lot of money, I want to acknowledge that there are students watching this talk who really do need to make a lot of money when they graduate. You have families to support. You have huge student loans. These are frightening economic times. And if that describes you, I'm not going to lecture you, and you should please feel free to ignore this bit. Still, I can almost promise that for some of you listening to this talk, there will come a point, not too long from now, where you will have a choice between multiple jobs which are both fun and both interesting and both pay you more money than you could possibly need as a young person. That's what the Stanford computer science salary survey shows for the last six years I have data. And when that moment comes, I'm asking you to choose a job that makes the world better, and not just in some trivial way, and not just for the very richest people. I'm not asking you to donate a kidney or storm the beaches at Normandy or risk your lives treating COVID patients. I'm asking you to choose to make a large amount of money as opposed to an obscene amount of money. It's just not that big a sacrifice in a world with such desperate problems, where we've gotten so lucky. And I also think you'll find that you'll get more enjoyment out of whatever money you do make if you feel like you earned it doing something meaningful. The other reason I think we're compelled to fight for good is that there are a lot of people doing the opposite. I don't want to get political about this, but I think we've all seen just how catastrophic the consequences of that can be. So if those of us who were given the most power to push the world in the right direction take morally neutral or morally harmful careers instead, the world will slide in the wrong direction. I am only here giving this lecture, and many of you are only here listening to it, because people like Ruth Bader Ginsburg woke up every day for decades vowing to push the world in the right direction, to expand the circle of people allowed to be in classes like this one. She could have gone into corporate law instead. Apparently, she had a taste for Armani suits; she could have bought a lot more of them. I think, ultimately, a lot of us take high paying, socially neutral, or socially harmful jobs not because we really need all that money, but because we've internalized the implicit and insidious claim that if we make a lot of money, we're good engineers, we've made it in life, we're worthy of respect. We need to break that link. We need to redefine what it means to be a good engineer. A few weeks ago, I got an email from a recruiter from some big finance firm. And I responded the way I typically do: I told him I don't work for finance firms. And he asked me why I didn't want to work with the best engineers in the world. And I thought, the best engineers in the world think about the social implications of their work. The biggest factor determining your impact will not be whether you understand all the variants of gradient-based optimization, although you should put some effort into learning those, both because they are very useful and so Pang Wei won't kill me. The biggest factor determining your impact will be the problems you choose to work on. That's what makes a great engineer.
I'll close with a quote from Ta-Nehisi Coates' Between the World and Me, which is a letter he writes to his son, who is about your age. He writes, "I would have you be a conscious citizen of this terrible and beautiful world." This is what I would wish for my child and for my students and for myself, and for you as well. Thanks very much for listening, and I'm happy to take any further questions. [INAUDIBLE] Well, thank you very much; that was impressive, listening to you. Well, I was thinking, while you were talking about this medical impact in countries, targeted especially with regards to the virus and underprivileged populations: there are quite a lot of cognitive biases that doctors can't control while taking important medical decisions. Are we able to study some [INAUDIBLE] and how this happens, just to help them avoid and eliminate or reduce this kind of risk? Yeah, totally. I think this sort of behavioral economics approach-- let's understand doctors' biases and put them in terms of these common heuristics that people use, these common biases that people have-- is a broad and promising line of research. I'm not a behavioral economist. For one example of this type of work, I would point you to a recent paper; it's called something like, who gets tested for heart attack and who should be. And it studies how these cognitive heuristics that people use play into decisions like this. The authors are-- you should look for [INAUDIBLE] and Obermeyer, and there may be some other authors as well. But the broad answer to your question is yes. You can absolutely study doctors' biases in terms of things we know cognitively about people and how they make decisions. Thank you. You're going to read out your question. Oh, sure, yeah. This is about the difference between races in [INAUDIBLE] most recent study actually a [INAUDIBLE] only difference between IQ habits is what actually caused the difference or there's a difference as [INAUDIBLE] is a correlation or consequence. Is there any study looking deeper on this to understand the difference? I mean, there are a lot of studies looking at differences by race and ethnicity. This is a fraught topic, and some of the studies have not been good studies. So, in particular, studies of racial differences in IQ, I think, are a very fraught topic. And then there's stuff which is not at all fraught: let's look at racial differences in, I don't know, incidence of breast cancer or deaths from breast cancer. So yes, there's a lot of research in this area, of varying quality, but a lot of it is super important. It is extremely difficult to figure out causality here; most studies that claim to are really pushing a particular political agenda and should be treated with a lot of skepticism. OK. Thank you. Since we are about at time, let's see, shall we wrap up? Yeah, sounds good. So if there are no other questions-- if you can unmute and clap, that would be great. I'll count to three so we can all give Emma a really big round of applause after an amazing talk. 1, 2, 3. [APPLAUSE] Thanks so much for speaking. Bye. Thank you. It's a pleasure. Thank you for the great questions.
Hi, in this module, I'm going to talk about neural networks, a way to construct nonlinear predictors via problem decomposition. So when we started, we talked about linear predictors, and they were linear in two ways. First, the feature vector was a linear function of x, and second, the way that the feature vector interacted with the weights to produce the prediction was also linear. This gave rise to lines. Next, we talked about nonlinear predictors, keeping the same linear machinery but just playing around with the feature vector; by adding terms like x squared, you could get quadratic predictors and so on. So now what we're going to do is define neural networks, where we just leave phi of x, the feature vector, alone, and play with the way that the feature vector results in the prediction. And that will allow us to get all sorts of fancy stuff. So let me begin with a motivating example. Suppose you're trying to predict whether two cars are going to collide or not. So the input is the positions of the two cars: x1 is the position of car 1, and x2 is the position of car 2. And what you'd like to output is whether y equals 1, meaning it's safe, or y equals minus 1, meaning they'll collide. And the ground truth here, which the learner doesn't know, is that cars are safe if they're sufficiently far apart: if the distance between them is at least 1, then we're safe. We can visualize the true predictor as follows. So here is x1 and x2, and what's going to happen is we draw these two lines here. Any point that is over here, and anything that is over here, is going to be labeled as plus, which is safe. And anything that's in between is going to be labeled as minus, meaning they'll collide. OK, so let's do some examples here. Suppose we have the point 0, 2, which is this point here. This is safe, so y equals 1. 2, 0 is also safe. And 0, 0 is here, which is not safe. And 2, 2 is minus 1, which is also not safe. So as an aside, this configuration of points is what was historically known as the XOR problem, and it was shown that pure linear classifiers could not solve it. You couldn't draw a line to separate the blue and the orange points. But nonetheless, we're going to show how neural networks can be used to solve this. OK, so the key intuition is the idea of problem decomposition. Instead of solving the problem all at once, we're going to decompose it into two subproblems. The first is to test whether car 1 is to the far right of car 2; in the picture here, that corresponds to this region over here, which we're going to call h1. So h1 is whether x1 minus x2 is greater than or equal to 1. And then we're going to define another subproblem, testing whether car 2 is to the far right of car 1, which is called h2; that corresponds to this region over here. And then we're going to predict safe if at least one of them is true. So we just add the two, each of which is either 1 or 0, and if at least one of them is 1, then we're going to return plus 1. And by convention, we're going to assume that the sign of 0 is minus 1. OK, so here are some examples. Suppose we have 0, 2 again. For this point, h1 says, nope, that's not on my side; h2 says, yep, that's on my side; and at least one is enough to make the prediction plus 1. If you take 2, 0, that's this point: h1 says yep, h2 says nope, and the prediction is 1, because all it takes is one.
OK, so so far, we've just defined the true function f. Of course, we don't know f, so what we're going to do is move gradually toward defining a hypothesis class. The next step is to rewrite f using vector notation. So here are the two intermediate subproblems, and the predictor is f of x equals the sign of h1 of x plus h2 of x. What we're going to do is write h1 in terms of a dot product between a weight vector and a feature vector. So here's a feature vector, 1, x1, x2. And then we're going to define a weight vector, which is minus 1, 1, minus 1, and if you look at the dot product, it's minus 1, plus x1, minus x2. And if that quantity is greater than or equal to 0, then we return 1; otherwise, return 0. And you can verify that this is exactly just a rewrite of this expression. Similarly, if you reverse the roles of x1 and x2, you can rewrite h2 in vector notation as well. Now we're going to combine h1 and h2 by stacking them. We define this matrix, which is just the two weight vectors stacked up, so we have two rows here. And we're going to multiply this matrix by the feature vector. Remember, left multiplication by a matrix is just taking the dot product with each of the rows of that matrix. This produces a two dimensional vector, and we test whether each component is greater than or equal to 0. So in the end, h of x is going to be a two dimensional vector, OK? And now, given that, we can rewrite the predictor as simply the sign of the dot product between 1, 1 and h of x, which is simply the sum of the two components. So now we've written f of x, the true function, in terms of a bunch of matrix and vector multiplications. Now, everything in red here is just numbers, and so far we've specified what they are, but in general we're not going to know them, and we're going to have to learn them from data. But before we do that, we're going to preemptively see one problem that's going to come up, a problem we saw before when we tried to optimize the zero-one loss. So let's look at the gradient of h1 of x with respect to v1. We can plot this as follows. Here is the score z, which is the dot product, and this is h1, which is just a step function. The step function, or threshold function, just tests whether z is greater than or equal to 0: it's 1 over here and 0 over here. OK, so now, if you tried to do gradient descent on this, you would just get stuck, because the gradients are 0 basically everywhere. So the solution is to replace this threshold function with a more general activation function, sigma, which has friendlier gradients. Classically, and by classic I mean in the '80s and '90s, people used the logistic function as the activation function, which looks like this. It's a smoothed out version of the threshold function, and in particular, its gradient is nonzero everywhere, so gradient descent can always make progress. There is a caveat here, which is that if you look out here, this function is pretty flat, which means the gradient is approaching 0, which means that if you're out here, you can get stuck, or at least make very slow progress. So around 2012, the ReLU activation was popularized, which just takes the max of z and 0, so it looks like this. If the input to the ReLU is less than 0, I'm going to clip it to 0; otherwise, I'm going to leave it alone. So this function has nice gradients over here: for positive inputs the gradient is 1, so it never vanishes, although over here, for negative inputs, it is 0. It turns out empirically that the ReLU activation function works really well, and it's simpler in a lot of ways, so it's become the activation function of choice. So the solution here is to replace the threshold step function with an activation function. Choose your favorite; I would choose the ReLU. And now you have something with non-vanishing gradients.
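As a quick illustration of why the choice of activation matters, here is a small sketch of the three functions just discussed and their gradients (the function names are mine; the logistic gradient formula sigma(z) times 1 minus sigma(z) is the standard one):

```python
import numpy as np

def threshold(z):
    return (z > 0).astype(float)        # gradient: 0 almost everywhere

def logistic(z):
    return 1 / (1 + np.exp(-z))         # gradient: logistic(z) * (1 - logistic(z))

def relu(z):
    return np.maximum(z, 0)             # gradient: 1 for z > 0, 0 for z < 0

z = np.array([-5.0, -0.5, 0.5, 5.0])
print(threshold(z))                     # [0, 0, 1, 1]; no slope to follow
print(logistic(z) * (1 - logistic(z)))  # nonzero everywhere, but tiny in the tails
print(np.where(z > 0, 1.0, 0.0))        # ReLU gradient: exactly 1 on the positive side
```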
So let's now define two-layer neural networks using the machinery that we've seen so far. We're going to define some intermediate subproblems. We start with a feature vector, phi of x. Now, I'm going to represent vectors and matrices using these dots. So this is a six dimensional feature vector, but in general it's d dimensional. I'm going to multiply it by this weight matrix, which is 3 by 6 here, but in general a k by d matrix. That generates a three dimensional, or in general k dimensional, vector. I send it through the nonlinear activation function, like the ReLU or the logistic, and I get a vector which I'm going to call h of x. OK, so now, given this h of x, I can do prediction by taking h of x and simply dot producting it with a weight vector, w. And if I take the sign, that gives me the prediction of the neural network. One thing that's interesting here is that if you look at this equation, it pretty much looks like the equation for a linear classifier. The only difference is that now we have h of x instead of phi of x. So one way to interpret what neural networks are doing is that instead of using the original feature vector, we've learned a smarter representation, and at the end of the day, we're still doing linear classification on top of that feature representation. So often people think about neural networks as doing feature learning, for precisely this reason. And finally, we can define the hypothesis class F to be the set of all predictors parameterized by a weight matrix, V, and a weight vector, w, defined up here. We let the weight matrix be any arbitrary k by d matrix, and we let w be any k dimensional vector. OK, we have now defined a hypothesis class that corresponds to two-layer neural networks for classification.
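Here is a minimal sketch of the two-layer forward pass just described, assuming a ReLU activation and the sign convention sign(0) = minus 1 from earlier. The weights here are random placeholders, not learned:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def predict_two_layer(phi_x, V, w):
    # Two-layer neural network classifier: sign(w . relu(V phi(x))).
    # V is k x d, w is k-dimensional.
    h = relu(V @ phi_x)            # learned intermediate representation h(x)
    score = w @ h                  # a linear classifier on top of h(x)
    return 1 if score > 0 else -1  # sign convention: sign(0) = -1

d, k = 6, 3
rng = np.random.default_rng(0)
V = rng.normal(size=(k, d))        # in practice V and w are learned, not random
w = rng.normal(size=k)
print(predict_two_layer(rng.normal(size=d), V, w))
```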
Now, we can push this farther and talk about deep neural networks. Remember, going back to single-layer neural networks, a.k.a. linear predictors: we take the feature vector, take the dot product with a weight vector, and we get the score, which can be used directly as the prediction in regression, or we take the sign to get classification predictions. For two-layer neural networks, we take phi of x, multiply by the first layer's weight matrix, take an element-wise activation function, and then take the dot product with a weight vector to get the score. And now, the key thing is this piece: apply V and then apply sigma, and you can just iterate that over and over again. So here's a three-layer neural network: take phi of x, which is the feature vector, multiply by some matrix V1, take a nonlinearity, multiply by another matrix, take a nonlinearity, and then, finally, you get some vector that you take the dot product with w, and you get the score, which can be used to power your predictions. One small note is that I've left out all the bias terms for notational simplicity; in practice, you would have bias terms. And you can imagine just iterating this over and over again. But what is this doing? It kind of looks like a bit of abstract nonsense: you're just multiplying by matrices and sending them through a nonlinearity, and you hope something good happens. And that's not completely false, but there are some intuitions we can derive. One intuition is thinking about layers as representing multiple levels of abstraction. In computer vision, let's say the input is an image. You can think about the first layer as computing some notion of edges. At the second layer, when you multiply by a matrix and take a nonlinearity, you compute some notion of object parts. And at the third layer, you multiply by a matrix and apply some nonlinearity, and you get some notion of objects. Now, this is kind of just a story, and we haven't talked at all about learning, so this is definitely not true for all neural networks. But it turns out that when you actually fit a network to data and visualize what the weights are, you do get some interpretable results, which is kind of interesting and somewhat surprising. So now there's the question of depth: the fact that you take a feature vector and apply some sort of transformation again, and again, and again to get a score. Why do we do this? One intuition we've talked about already is that this represents different levels of abstraction, from low-level pixels to high-level object parts and objects. Another way to think about this is that it performs multiple steps of computation. Just like in a classic program, more steps of computation give you more expressive power; you can do more things. You can think about each of these operations as simply doing some compute. Now, it's maybe a kind of foreign type of compute, because you're multiplying by a crazy unknown matrix, but one way to think about it is that you set up the computation, and the learning algorithm figures out what kind of computation makes sense for making the best prediction. Another piece of intuition is that empirically it just happens to work really well, which is not to be understated. If you're looking for a more theoretical reason, the jury's still out on this. You can have intuitions about how deeper logical circuits can capture more than shallower ones, but the relationship between circuits and neural networks requires a little bit of massaging, so this is still a pretty active area of research.
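To emphasize that depth is just iterating the same operation, here is a sketch of a multi-layer forward pass written as a loop, with biases omitted as in the lecture (the names and shapes are my own choices):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def deep_score(phi_x, layers, w):
    # L-layer forward pass: repeatedly multiply by a weight matrix and
    # apply the activation, then finish with a dot product.
    h = phi_x
    for V in layers:       # each layer: h <- relu(V h)
        h = relu(V @ h)
    return w @ h           # the final score

rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 6)), rng.normal(size=(3, 4))]  # a three-layer network
w = rng.normal(size=3)
print(deep_score(rng.normal(size=6), layers, w))
```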
To summarize, we started out with a very toy problem, the XOR problem, testing whether two cars are going to collide or not. And we used it to motivate problem decomposition and, eventually, defining neural networks. We saw that intuitively, neural networks allow you to define nonlinear predictors, but in a particular way: the way is to decompose the original problem into intermediate subproblems, testing if the car is to the far right or the far left, and then combining them. And you can take this idea further and iterate on this decomposition multiple times, giving rise to multiple levels of abstraction and multiple steps of computation. The hypothesis class is now larger: it contains all predictors where the weights of all the layers can vary freely. And next up, we're going to show you how to actually learn the weights of a neural network. That is the end.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Bayesian_Networks_5_Forwardbackward_Algorithm_Stanford_CS221_AI_Autumn_2021.txt
Hi. In this module, I'm going to introduce the forward-backward algorithm for performing exact and efficient inference in Hidden Markov Models, which are an important special case of Bayesian networks. So let's revisit our object tracking example, now through the lens of a Hidden Markov Model. Recall that at each time i, there's an object at a particular position, Hi; the object might have followed this trajectory. Now, at each position, there's also a noisy observation: 0, 2, and 2. So let's formally define a probabilistic story for how these data might occur. We start at H1, the position of the object at time step 1, and we're going to generate this position uniformly at random: probability 1/3, 1/3, 1/3 for each of the possible positions 0, 1, or 2. Then I'm going to transition into the second time step. In general, I'm going to look at Hi minus 1 and generate Hi, which goes up with probability 1/4, stays the same with probability 1/2, and goes down with probability 1/4. Mathematically, it looks like this: hi can be hi minus 1 minus 1, hi minus 1, or hi minus 1 plus 1, with those probabilities. This transition distribution is also used to generate H3 given H2. Now, at each time step, I have an emission: E1, E2, and E3. In general, I look at the actual position at time step i, and I generate Ei according to essentially the same process: up with probability 1/4, the same with probability 1/2, and down with probability 1/4. And this is the local conditional distribution formally stated. So now I multiply all the local conditional distributions together: we have the probability of the start, H1; we have the probability of hi given hi minus 1 for each subsequent time step; times the probability of the noisy sensor reading, ei, given the actual position, hi, for all the time steps. And this gives us the joint distribution over all of the actual positions and sensor readings. So now let's ask questions about our Hidden Markov Model. There are two types of questions which are common: one is called filtering and the other is called smoothing. The filtering question is something like this: I'm interested in a particular object location at a particular time step, say H2, given some evidence, namely all of the sensor readings that I've seen up to that point. Smoothing is similar, except that in addition we condition on the future; so I might observe E3 equals 2 as well. Notice that filtering is actually a special case of smoothing if we marginalize unobserved leaves. To show this, suppose we have just this Bayesian network, or HMM, and I didn't observe E3. If I didn't observe E3, then E3 is just an unobserved leaf, and I can marginalize it out by removing it. Now H3 is an unobserved leaf, and I can remove that as well. So this filtering query is actually a smoothing query where there is no future, because I don't observe the future. So now let's just focus on smoothing queries, without loss of generality.
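Before moving on, here is a minimal Python sketch of the probabilistic story above, with the start, transition, and emission distributions written out and multiplied into a joint probability. The function names are mine, and boundary effects at positions 0 and 2 are glossed over, as in the lecture:

```python
positions = [0, 1, 2]

def p_start(h1):
    return 1 / 3                               # uniform over the three positions

def p_trans(h_prev, h):
    # up with probability 1/4, same with 1/2, down with 1/4
    return {1: 0.25, 0: 0.5, -1: 0.25}.get(h - h_prev, 0.0)

def p_emit(h, e):
    # the noisy sensor follows the same up/same/down pattern
    return {1: 0.25, 0: 0.5, -1: 0.25}.get(e - h, 0.0)

def joint(hs, es):
    # P(H = hs, E = es) = start * transitions * emissions
    p = p_start(hs[0]) * p_emit(hs[0], es[0])
    for i in range(1, len(hs)):
        p *= p_trans(hs[i - 1], hs[i]) * p_emit(hs[i], es[i])
    return p

print(joint([0, 1, 2], [0, 2, 2]))   # one term of the joint distribution
```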
So the forward-backward algorithm is based on dynamic programming, and the key idea is to represent the set of all assignments using a lattice. This lattice is a directed acyclic graph, not to be confused with the actual HMM or Bayesian network. There's a start state and an end state; each column corresponds to a particular variable, and each row corresponds to a particular value. And each path through this lattice corresponds to an assignment of values to all the variables. For example, if I go through this path, I set H1 equals 0, H2 equals 2, and H3 equals 1. So this is just a very compact way of representing exponentially many assignments in a polynomial-sized object. Now I'm going to attach weights to the edges. An edge from start to any of the initial states has weight equal to the start probability times the first emission probability. For example, this edge here has weight probability of h1 equals 0 times probability of e1 equals 0 given h1 equals 0, because e1, remember, was observed to be 0, so I've plugged in that evidence. The subsequent edges go between some Hi minus 1 and some Hi, and such an edge has weight equal to the transition probability times the emission probability of the destination state. For example, this edge here has weight probability of h2 equals 0 given h1 equals 0, that's the transition, times probability of e2 equals 2, which is what we observed as evidence, conditioned on h2 equals 0. And this one is the probability of h3 equals 0 given h2 equals 0 times the corresponding emission probability. And this final edge doesn't have anything on it, so we take its weight to be 1. And now each path, as we've stated before, from start to end, is an assignment of all the variables. But in particular, it has a weight equal to the product of the edge weights. So this path here has a weight which is simply the product of all these purple numbers. And that weight is actually the joint probability of this particular assignment and the evidence. OK, so now the key point is that smoothing queries, such as the probability of Hi equals hi given E equals e, are simply the weighted fraction of paths through Hi equals hi. For example, if I'm interested in the probability of H2 equals 1 conditioned on the evidence, what I'm really asking, in the context of this lattice, is: what is the weighted fraction of paths that pass through this node, compared to all paths? Stated differently, I look at all the paths through this node, add up all their weights, and divide by the sum of the weights over all paths. So this gives us a really nice graphical interpretation of smoothing queries. Now we can compute these quantities using a recurrence. I'm going to define two types of objects, forward messages and backward messages. Here's our lattice. The forward message, for each node here, is Fi of hi, and this is the sum of the weights of paths from the start to a particular Hi equals hi. For example, F2 of 1 is the sum of the weights of all paths from start to H2 equals 1. I can compute this recursively as follows: all paths that go from start to here have to pass through a previous position, which is one of these. So I sum over all possible values hi minus 1 of the previous variable, and recurse on Fi minus 1 of hi minus 1, the sum of all the weights to each of these previous locations, times the weight along the edge from that particular hi minus 1 to hi. The backward message is analogous: Bi of hi is the sum of the weights of all paths from a particular Hi equals hi to the end. So B2 of 1 is the sum over all paths from this node to the end here. And this, again, is recursively defined by looking at all next nodes hi plus 1, and recursing on Bi plus 1 of hi plus 1 times the weight of the edge between hi and hi plus 1.
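Continuing the sketch above (this reuses positions, p_start, p_trans, and p_emit from the previous block), here are the edge weights and the two recurrences for the observed evidence 0, 2, 2:

```python
es = [0, 2, 2]           # the observed sensor readings from the example
n = len(es)

def start_weight(h):                # edge: start -> H1 = h
    return p_start(h) * p_emit(h, es[0])

def edge_weight(i, h_prev, h):      # edge: H_{i-1} = h_prev -> H_i = h (0-indexed i)
    return p_trans(h_prev, h) * p_emit(h, es[i])

# Forward messages: F[i][h] = sum of weights of paths from start to H_i = h.
F = [{h: start_weight(h) for h in positions}]
for i in range(1, n):
    F.append({h: sum(F[i - 1][hp] * edge_weight(i, hp, h) for hp in positions)
              for h in positions})

# Backward messages: B[i][h] = sum of weights of paths from H_i = h to end.
B = [None] * n
B[n - 1] = {h: 1.0 for h in positions}   # the final edge into end has weight 1
for i in range(n - 2, -1, -1):
    B[i] = {h: sum(edge_weight(i + 1, h, hn) * B[i + 1][hn] for hn in positions)
            for h in positions}
```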
OK, so now, having defined forward and backward messages, I can multiply them together to form Si. And my claim is that the sum of the weights of all paths from start to end that go through a particular node Hi equals hi is exactly Si of hi. OK? For example, if I'm looking at this node again, Fi accounts for all the ways to go from start to this node, and Bi accounts for all the ways to go from this node to the end. If I multiply those two quantities together, I get all the paths from start through this node to the end. So now we're almost done. We take these Si's and normalize them over all possible values that hi could take, and that gives us exactly the probability of Hi equals hi given the evidence. And this is exactly the smoothing quantity we're looking for: what is the probability of H2 equals 1, conditioned on the evidence? Now, putting things together, the forward-backward algorithm simply computes all the forward messages, proceeding from 1 to 2 to 3, all the way up to n, where Fi depends on Fi minus 1, so I'm going forward. Then it computes all the backward messages, going from n down to 1, because Bi depends on Bi plus 1. Then I multiply Fi and Bi together to compute Si, and normalize, and that gives me the answer to the smoothing query. The runtime of this algorithm: we have n time steps, and for each time step, I have a number of domain elements to consider, which is the number of nodes in this lattice, and there's another multiplicative factor of the domain size to compute the recurrence. So the total is on the order of n times the domain size squared, which is exactly the number of edges in this lattice. One other note: the forward-backward algorithm actually computes all the smoothing queries, for each i, and the time complexity for computing all of them is exactly the same as for computing any individual one. That's because there's a lot of shared computation: the forward message that's computed here is used down here and here, and same with the backward messages in the other direction. So let's look at a quick demo of this in action. Here is the object tracking HMM. We have H1, H2, H3, and we have the various local probabilities: h1, e1 given h1, h2 given h1, e2 given h2, h3 given h2, and so on. And now I'm interested in the probability of H2. Notice that here I'm actually not going to run forward-backward, but a more general algorithm called sumVariableElimination. The details are a little bit different, so don't worry about it too much; I just want to give you a flavor of how it works. The first thing it does is compute a factor, which is actually the forward message, where I've summed out the previous time step, H1. Then it computes another factor, which is the backward message, summing out H3. And then it multiplies them together, and I get the probability distribution over H2: 0.61 and 0.3 and so on for the different values.
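Continuing the same sketch, forming Si and normalizing answers every smoothing query at once:

```python
# S_i(h) = F_i(h) * B_i(h): total weight of start-to-end paths through H_i = h.
for i in range(n):
    S = {h: F[i][h] * B[i][h] for h in positions}
    Z = sum(S.values())                       # total weight over all paths
    print(f"P(H{i + 1} = h | E = {es}):",
          {h: round(S[h] / Z, 3) for h in positions})
```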
All right. So to summarize, we've presented the forward-backward algorithm for probabilistic inference in HMMs, in particular for answering smoothing queries. The key idea behind the forward-backward algorithm is the lattice representation, which allows us to compactly represent paths as assignments, where the weight of each assignment is the product of the edge weights. That allows us to define a dynamic program which computes the forward and backward messages in an efficient way. If you multiply the forward and backward messages and normalize, you can compute all the smoothing queries you want in the same amount of time as computing any one of them, because there's a lot of shared computation. All right, that's the end.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Artificial_Intelligence_Machine_Learning_7_Feature_Templates_Stanford_CS221_AI_Autumn_2021.txt
Hi, in this module, I'm going to talk about how to use feature templates to organize and construct your features in a very flexible way. So recall that a hypothesis class is the set of all predictors that a learning algorithm is going to consider. In the case of linear predictors, we've looked at predictors of the form f of x equals, in the case of regression, w dot phi of x, or in the case of classification, the sign of that quantity. And we allow the weight vector to vary freely, OK? We can visualize the hypothesis class as follows. Imagine the space of all possible predictors, all possible functions mapping x to y. When you define a feature extractor phi, what you're doing is committing to a particular subset of all possible predictors, and usually you do this using prior knowledge. The second part is the learning algorithm, where you're given script F, the hypothesis class, and you ask the learning algorithm to choose a particular predictor from that set based on data. So intuitively, we want the hypothesis class script F to contain the good predictors, of course. It can also contain some bad ones, because they will be filtered out on the basis of data. But we don't want it to be too big, or the learning algorithm will have trouble identifying the good predictors among the bad ones. So let's look at an example task, and I want to give you an idea of how to choose the feature extractor. Suppose you're given a string, such as abc@gmail.com, and you're asked to predict whether this is a valid email address or not, using a linear classifier. In this case, what we have to do is design the feature extractor phi. When you're designing a feature extractor, the main question you ask yourself is: what properties of the input x might be relevant for predicting y? Of course, you don't necessarily want to commit to a particular aspect being important, because you don't know; you want to learn that from data. But you should give the learning algorithm some guidance. So what we're going to do is define the feature extractor as: given x, produce a set of feature name, feature value pairs. In this particular example, the feature extractor is going to produce the following feature vector. We might look at the length: is it greater than 10? In this case it's 1, because length has something to do with whether it's a valid address. The fraction of alphanumeric characters: 0.85 in this case. Does it contain an @ sign? That's 1, because it does contain an @ sign. Does it end with dot com? That's 1 here. And does it end with dot org? That's 0 here. So this is a feature vector that we might construct for this particular application. Now we go to prediction. Remember that we previously defined the feature vector to be just a real vector, a list of numbers. So what we've done right now is just annotate, or comment, each component of that feature vector with a name that describes what that component is about. We can do the same thing with the corresponding weight vector. Here is a weight vector, just a list of numbers, and we can annotate each component with the name of the corresponding feature. And recall that the score is just the dot product, w dot phi of x. Writing out the dot product, it's a sum over all the features, or components, of wj, the weight of that feature, times the feature value, OK? So here's an example: the weight of length greater than 10 is minus 1.2, the feature value is 1, so we have that product here, and likewise for all the other features.
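To make this concrete, here is a sketch of the email feature extractor and of the score as a sum over named features. Only the length-greater-than-10 weight of minus 1.2 comes from the lecture; the remaining weights are made-up placeholders for illustration:

```python
def phi(x):
    # Feature extractor for the email example; feature names follow the lecture.
    return {
        "length>10":          1 if len(x) > 10 else 0,
        "fracOfAlphanumeric": sum(c.isalnum() for c in x) / len(x),
        "contains_@":         1 if "@" in x else 0,
        "endsWith_.com":      1 if x.endswith(".com") else 0,
        "endsWith_.org":      1 if x.endswith(".org") else 0,
    }

# Placeholder weights, except length>10 = -1.2 from the lecture.
w = {"length>10": -1.2, "fracOfAlphanumeric": 0.6, "contains_@": 3.0,
     "endsWith_.com": 2.2, "endsWith_.org": 1.4}

def score(w, features):
    # The score is the sum over features of weight times feature value.
    return sum(w[name] * value for name, value in features.items())

print(phi("abc@gmail.com"))            # fracOfAlphanumeric comes out to about 0.85
print(score(w, phi("abc@gmail.com")))  # the sign of this gives the classification
```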
A little piece of intuition here: think about the score. Remember, in classification, positive scores result in positive classification and negative scores result in negative classification. You can think about each feature as providing a vote. If, say, phi of x sub j is 1 and wj is positive, that means feature j is voting in favor of positive classification; if wj is negative, it's voting in favor of negative classification. And the magnitude of wj determines the strength of the vote. So that's another way to interpret the dot product; previously we saw that we can interpret it as the cosine of the angle, which is a more geometric interpretation. So far, we've seen that we can take inputs, define arbitrary feature extractors, and get out feature vectors to feed into the linear predictor. But how do we choose these features? I just kind of made up the @, com, and org features. Which ones do we include? So far we've used some prior knowledge, but in this manner it's very easy to miss some. What about other suffixes, like dot us, for example? We need a more systematic way of doing this, and this is where feature templates come in. A feature template is simply a group of features, all computed in a similar way. Here's an example. For the input abc@gmail.com, we write the feature template as simply an English description with a blank, and that blank is meant to be filled in with an arbitrary value: last three characters equals something. By instantiating that blank with all sorts of different values, we realize the features that are actually defined by this feature template. The important part here is that we no longer have to say which suffixes are important; we don't have to say which particular patterns to look at. We just have to know that there exists some suffix that might be important, define this feature template, and let the learning algorithm sort out which of these many features are actually relevant. Continuing this example with the input abc@gmail.com: we define this feature template, which can be instantiated by substituting something like dot com. We can also define this other feature template, length greater than blank, and we can plug 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and so on into that. Some feature templates don't have a blank, and that's OK: that just corresponds to specifying a single feature, which has a particular value. Here's another example. Suppose the input is an aerial image, along with some metadata about the location, so you can figure out where this actually is. One feature template here might look at the pixel intensity of the image at a particular row and a particular column; it's a color image, so there are three channels, RGB, and we identify a particular channel we're looking at. So this might be instantiated as the pixel intensity of the image at row 10, column 93, red channel, and that would have a particular value. Another feature template might look at the metadata, the location, and be a feature on whether the latitude is in a particular range and the longitude is in a particular range. So this feature template might be instantiated with particular values that denote ranges. If you remember piecewise constant features, this is an example of them: it carves up the world into a bunch of regions and has a feature fire if the lat-long is in a particular region or not.
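Here is a small sketch of how a feature template unrolls into concrete features (the function and feature names are mine); note that for the last-three-characters template, only the one feature that actually fires needs to be generated:

```python
def last_three_chars_features(x):
    # Instantiates "last three characters equals ___"; only the single
    # feature that fires is generated.
    return {"lastThreeChars=" + x[-3:]: 1}

def length_features(x, thresholds=range(1, 11)):
    # Instantiates "length greater than ___" for thresholds 1 through 10.
    return {f"length>{t}": 1 if len(x) > t else 0 for t in thresholds}

features = {}
features.update(last_three_chars_features("abc@gmail.com"))
features.update(length_features("abc@gmail.com"))
print(features)   # e.g. {'lastThreeChars=com': 1, 'length>1': 1, ..., 'length>10': 1}
```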
One thing you might note is that feature templates are pretty flexible, but sometimes they can give rise to a lot of features: last character equals blank already gives 26 features if you only include the lowercase letters. Furthermore, most of these feature values are 0. This is what we mean when we say a feature vector is sparse, and you can represent sparse feature vectors more compactly as a dictionary mapping feature names to feature values. In general, there are two ways you can represent feature vectors: using arrays or using dictionaries. If your feature vector looks like this, dense rather than sparse, meaning the feature values are mostly nonzero, then you might want to represent it as an array: order the features somehow and just list out the numbers. But in cases where your feature vector looks like this, with lots of 0's, it will be more efficient to represent it as a dictionary, where you specify feature name, colon, feature value for only the nonzero elements. By convention, anything not mentioned has a value of 0. One interesting advantage of sparse features is that you don't have to instantiate all the features a priori; as data comes in, you can lazily build up the features over time, whereas with a dense representation, you would have to predefine the fixed set of features you're going to work with. In recent years with deep learning, dense features and arrays have become much more ubiquitous, partly because you can take advantage of fast matrix computations on the GPU. So to summarize: we want to define hypothesis classes, and in this case, we define the hypothesis class via the feature extractor. To define the feature extractor, we use feature templates, which are a convenient shorthand for unrolling a single feature template into a bunch of different features. We also saw that in some cases the feature vectors are sparse, and therefore you can use a dictionary implementation to be more efficient. OK, so that's the end of this module, thanks.
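As a closing illustration of the two representations discussed above, here is a sketch of a dense array, a sparse dictionary, and a dot product that touches only the nonzero entries (the feature names and weights are mine):

```python
import numpy as np

# Dense representation: an array listing every component, mostly nonzero.
dense = np.array([0.5, 1.0, -0.3, 2.5])

# Sparse representation: a dictionary of only the nonzero feature values;
# anything not mentioned is 0 by convention.
sparse = {"endsWith_.com": 1, "length>10": 1}

def sparse_dot(w, features):
    # Dot product that touches only the nonzero entries.
    return sum(w.get(name, 0.0) * value for name, value in features.items())

w = {"endsWith_.com": 2.2, "length>10": -1.2}
print(sparse_dot(w, sparse))   # 2.2 * 1 + (-1.2) * 1 = 1.0
```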